My colleagues and I are currently working on a POC to assess the practicality of using Semantic Kernel in our enterprise applications. Our primary objectives are as follows:
Implementing a straightforward natural language search system to query information from our database (Postgres), effectively enabling NL2SQL capabilities.
Developing a natural language search system for constructing 3rd-party API request payloads and retrieving data from external sources to populate dashboards.
For the first use case, I've made significant progress following an example from the kernel-memory repository, specifically the nl2sql example.
However, I've encountered a couple of challenges:
Token usage is exceptionally high: our database schema spans many normalized RDBMS tables, and the full schema is passed into the Semantic Kernel skills that build the prompt, which drives up token consumption. (One mitigation I'm experimenting with is sketched after this list.)
The generated SQL statements often do not align perfectly with our database schema.
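For the token problem, one idea I'm experimenting with is to stop sending the whole schema and instead retrieve only the tables that look relevant to the user's question, then hand just those DDL snippets to the skill. The sketch below is not Semantic Kernel specific; it uses sentence-transformers purely as a stand-in for whatever embedding service we end up using, and the table catalogue and descriptions are hypothetical:

```python
# Sketch: prune the schema to the top-k tables most relevant to the question,
# so only a small slice of DDL ends up in the NL2SQL prompt.
# Assumes table DDL plus short descriptions are available; sentence-transformers
# here is only a placeholder for the actual embedding model/service.
from sentence_transformers import SentenceTransformer
import numpy as np

# Hypothetical schema catalogue: table name -> (description, DDL)
SCHEMA = {
    "orders": ("Customer orders with totals and status",
               "CREATE TABLE orders (id INT, customer_id INT, total NUMERIC, status TEXT);"),
    "customers": ("Customer master data",
                  "CREATE TABLE customers (id INT, name TEXT, region TEXT);"),
    "audit_log": ("Low-level audit events",
                  "CREATE TABLE audit_log (id INT, entity TEXT, payload JSONB);"),
}

model = SentenceTransformer("all-MiniLM-L6-v2")

def relevant_ddl(question: str, top_k: int = 2) -> str:
    """Return DDL for the top_k tables whose descriptions best match the question."""
    names = list(SCHEMA)
    descriptions = [f"{name}: {SCHEMA[name][0]}" for name in names]
    q_vec = model.encode([question], normalize_embeddings=True)[0]
    d_vecs = model.encode(descriptions, normalize_embeddings=True)
    scores = d_vecs @ q_vec                      # cosine similarity (vectors are normalized)
    best = np.argsort(scores)[::-1][:top_k]
    return "\n".join(SCHEMA[names[i]][1] for i in best)

if __name__ == "__main__":
    # Only the matching tables' DDL would be injected into the NL2SQL prompt.
    print(relevant_ddl("total order value per customer region last month"))
```

The pruned DDL would then replace the full schema inside the existing kernel skill, so prompt size scales with top_k rather than with the whole database.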
For the second use case, I aim to prepare an API request payload based on the search query. The constructed payload will be used to make a 3rd-party API call, and the response data will be used to create dashboards. The complexity here lies in crafting a request payload that adheres to the valid parameters defined by the DTOs.
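For the payload problem, the direction I'm currently leaning toward is to describe each DTO as a Pydantic model, put its JSON schema into the prompt, and validate whatever the model returns before calling the third-party API, retrying on validation errors. A minimal sketch, where `SalesReportRequest` and `ask_llm` are hypothetical placeholders for the real DTOs and the actual kernel invocation:

```python
# Sketch: constrain LLM-built request payloads to a DTO using Pydantic.
# `SalesReportRequest` and `ask_llm` are hypothetical placeholders; the real
# DTOs and the Semantic Kernel call would be substituted in.
import json
from datetime import date
from typing import Literal
from pydantic import BaseModel, Field, ValidationError

class SalesReportRequest(BaseModel):
    """Hypothetical DTO mirroring the 3rd-party API's request contract."""
    region: Literal["EMEA", "APAC", "AMER"]
    start_date: date
    end_date: date
    granularity: Literal["day", "week", "month"] = "month"
    top_n: int = Field(default=10, ge=1, le=100)

def ask_llm(prompt: str) -> str:
    """Placeholder for the actual LLM / Semantic Kernel function invocation."""
    raise NotImplementedError

def build_payload(question: str, max_retries: int = 2) -> SalesReportRequest:
    schema = json.dumps(SalesReportRequest.model_json_schema(), indent=2)
    prompt = (
        "Fill this JSON schema from the user's request. Return only a JSON object.\n"
        f"Schema:\n{schema}\n\nRequest: {question}"
    )
    last_error = None
    for _ in range(max_retries + 1):
        raw = ask_llm(prompt)
        try:
            # Validation rejects unknown enum values, bad dates, out-of-range numbers, etc.
            return SalesReportRequest.model_validate_json(raw)
        except ValidationError as err:
            last_error = err
            # Feed the validation errors back so the next attempt can self-correct.
            prompt += f"\n\nPrevious attempt was invalid:\n{err}\nTry again."
    raise last_error
```

The validated model can then be serialized with `model_dump()` or `model_dump_json()` as the actual request body, so anything sent to the API has already passed the DTO's constraints.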
I have several questions regarding these challenges:
How can I minimize token usage, regardless of the size of the database schema?
What strategies can I employ to ensure that the system fine-tunes its outputs based on our specific database schema?
What is the most effective and scalable approach for constructing request payloads for any API?