"Activity Sakhi" is a powerful service designed to enhance the educational experience for both parents and teachers. The service revolves around a curated collection of documents focused on children's activities and curriculum frameworks. With simplicity at its core, it empowers parents and teachers to effortlessly discover relevant content and find answers to context-specific questions.
Explore a predefined set of documents tailored to children's activities and curriculum frameworks, ensuring a wealth of valuable information at your fingertips.
Targeted specifically at parents and teachers, "Activity Sakhi" caters to their unique needs, providing insights and resources tailored to enhance the learning journey.

Discover and Learn: Seamlessly discover engaging content and obtain answers to your specific questions, making the educational process more accessible and enjoyable.
Whether you're a parent looking for creative activities or a teacher seeking curriculum support, Activity Sakhi is your go-to solution. Unlock the potential of educational resources and make learning a delightful experience for children.
Integrate "Activity Sakhi" effortlessly into your applications to revolutionize the way parents and teachers engage with educational content. Check out our documentation to get started and embark on a journey of enriched learning experiences.
- Python 3.7 or higher
- Latest Docker
- Redis
- To get the Marqo image, use the following command:

  ```
  docker pull marqoai/marqo:1.5.1
  ```
- To create the Marqo instance, run the following command:

  ```
  docker rm -f marqo; docker run --name marqo -it --privileged -p 8882:8882 --add-host host.docker.internal:host-gateway -e "MARQO_MAX_INDEX_FIELDS=400" -e "MARQO_MAX_DOC_BYTES=200000" -e "MARQO_MAX_RETRIEVABLE_DOCS=600" -e "MARQO_MAX_NUMBER_OF_REPLICAS=2" -e "MARQO_MODELS_TO_PRELOAD=[\"flax-sentence-embeddings/all_datasets_v4_mpnet-base\"]" marqoai/marqo:1.5.1
  ```
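Before ingesting documents, you can verify that the Marqo container is reachable. A minimal sketch in Python, assuming Marqo's HTTP API is exposed on localhost:8882 as in the command above and that the `requests` package is installed:

```python
# check_marqo.py - quick sanity check that the Marqo container is up.
# Assumes Marqo's HTTP API is reachable on localhost:8882 (see docker run above).
import requests

resp = requests.get("http://localhost:8882/indexes", timeout=5)
resp.raise_for_status()
print("Marqo is up. Existing indexes:", resp.json())
```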
To use the code, you need to follow these steps:
- Clone the repository from GitHub:

  ```
  git clone https://github.com/DJP-Digital-Jaaduii-Pitara/sakhi-api-service.git
  cd sakhi-api-service
  ```
- The code requires Python 3.7 or higher and some additional Python packages. To install these packages, run the following command in your terminal:

  ```
  pip install -r requirements-dev.txt
  ```
- To ingest data into Marqo, run:

  ```
  python3 index_documents.py --folder_path=<PATH_TO_INPUT_FILE_DIRECTORY> --fresh_index
  ```

  `--fresh_index`: a flag that creates a new index or overwrites an existing one. Fresh indexing starts from scratch without reusing existing data. `PATH_TO_INPUT_FILE_DIRECTORY` should contain only PDF, audio, video, and txt files.

  e.g.

  ```
  python3 index_documents.py --folder_path=parent_pdfs --fresh_index
  python3 index_documents.py --folder_path=teacher_pdfs --fresh_index
  ```
  Create the indices using the above commands. After creating them, add the index names to the `config.ini` file:

  ```
  indices = { "parent": "<PARENT_INDEX_NAME>", "teacher": "<TEACHER_INDEX_NAME>" }
  ```
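  As a sanity check, the mapping can be parsed with Python's built-in `configparser`. This is an illustrative sketch only; the `database` section name is an assumption based on the `database.indices` variable in the configuration table below:

  ```python
  # read_indices.py - illustrative only: parse the index mapping from config.ini.
  # The "database" section name is an assumption based on the database.indices
  # variable documented below; adjust it to match the actual config.ini layout.
  import configparser
  import json

  config = configparser.ConfigParser()
  config.read("config.ini")

  indices = json.loads(config["database"]["indices"])
  print(indices["parent"], indices["teacher"])
  ```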
- You will need a cloud storage account to store the audio files for responses. (Supported: OCI, GCP, AWS)
- Copy the `.env.example` file, paste it into the same location, rename it to `.env`, and update the values in that file.
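One common way to make the `.env` values available to a Python process is the `python-dotenv` package. This is an illustrative sketch, not necessarily how the service loads them, and the variable name shown is hypothetical:

```python
# load_env.py - illustrative sketch: read key=value pairs from the .env file.
# Assumes the python-dotenv package (pip install python-dotenv).
import os

from dotenv import load_dotenv

load_dotenv()  # populates os.environ from .env in the current directory
print(os.getenv("SOME_REQUIRED_KEY"))  # hypothetical variable name
```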
Once the above installation steps are completed, run the following command from the root directory of the repository:

```
uvicorn main:app
```

Open your browser at http://127.0.0.1:8000/docs to access the application.
The command `uvicorn main:app` refers to:

- `main`: the file `main.py` (the Python "module").
- `app`: the object created inside `main.py` with the line `app = FastAPI()`.
- `--reload`: makes the server restart after code changes. Only use this for development: `uvicorn main:app --reload`
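For reference, the smallest `main.py` that this command pattern can serve looks like the following. This is a generic FastAPI sketch, not the service's actual code:

```python
# main.py - a minimal FastAPI module matching the "uvicorn main:app" pattern.
# Generic sketch for illustration, not the actual sakhi-api-service code.
from fastapi import FastAPI

app = FastAPI()  # the "app" object that uvicorn looks up in this module


@app.get("/")
def read_root():
    return {"status": "ok"}
```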
This API is used to generate an activity/story based on the user's query and to translate text/audio from one language to another in text/audio format. To achieve this, Language Services has been integrated. Cloud object storage is used to store translated audio files when audio is chosen as the target output format.
```
curl -X 'POST' \
  'http://127.0.0.1:8000/v1/query' \
  -H 'accept: application/json' \
  -H 'Content-Type: application/json' \
  -d '{
  "input": {
    "language": "en",
    "text": "string",
    "audio": "string",
    "context": "teacher"
  },
  "output": {
    "format": "text"
  }
}'
```
| Request Input | Value |
|---|---|
| `input.language` | en, bn, gu, hi, kn, ml, mr, or, pa, ta, te |
| `input.text` | User-entered question (in any of the above languages) |
| `input.audio` | Public file URL or Base64-encoded audio |
| `input.context` | `parent`, `teacher` (defaults to `parent` if not passed) |
| `output.format` | `text` or `audio` |
The required inputs are `text`/`audio` and `language`: either `text` (string) or `audio` (string) must be present. If both values are given, `text` takes precedence. The `language` must match the language used in the text/audio (i.e., if you pass English as the `language`, your text/audio should contain queries in English). The `audio` value should be either a publicly downloadable URL of an MP3 file or the Base64-encoded content of the MP3.
If the output format is given as `text`, the response will contain text only. If the output format is given as `audio`, the response will contain both `text` and `audio`.
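These rules are simple to encode. A sketch in Python (a hypothetical helper, not the service's actual validation code):

```python
# validate_request.py - illustrative encoding of the input rules above.
# Hypothetical helper, not the service's actual validation code.
SUPPORTED_LANGUAGES = {"en", "bn", "gu", "hi", "kn", "ml", "mr", "or", "pa", "ta", "te"}
SUPPORTED_FORMATS = {"text", "audio"}


def validate_query(payload: dict) -> str:
    """Return which query source to use ("text" or "audio") or raise ValueError."""
    inp, out = payload["input"], payload["output"]
    if inp.get("language") not in SUPPORTED_LANGUAGES:
        raise ValueError("unsupported language")
    if out.get("format") not in SUPPORTED_FORMATS:
        raise ValueError("unsupported output format")
    if inp.get("text"):
        return "text"  # text takes precedence when both are given
    if inp.get("audio"):
        return "audio"  # public MP3 URL or Base64-encoded MP3
    raise ValueError("either text or audio must be present")
```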
Sample request body:

```json
{
  "input": {
    "text": "How to Teach Kids to Play Games",
    "language": "en"
  },
  "output": {
    "format": "text"
  }
}
```
Sample response body:

```json
{
  "output": {
    "text": "string",
    "audio": "string",
    "language": "en",
    "format": "text|audio"
  }
}
```
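The same request as the `curl` example above, issued from Python (assuming the service is running locally and the `requests` package is installed):

```python
# query_client.py - call the /v1/query endpoint from Python.
# Assumes the service is running locally on port 8000.
import requests

payload = {
    "input": {"text": "How to Teach Kids to Play Games", "language": "en"},
    "output": {"format": "text"},
}
resp = requests.post("http://127.0.0.1:8000/v1/query", json=payload, timeout=60)
resp.raise_for_status()
print(resp.json()["output"]["text"])
```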
Once the API receives a request with valid parameters, it checks for the presence of query text.

If query text is present, it is translated based on the input language. The translated query text is then passed to the LangChain model, which produces a paraphrased answer. That answer is translated back to the input language. If the output format is audio, the translated paraphrased answer is also converted to an MP3 file, uploaded to the supported cloud storage folder, and made public.

If the query text is absent and an audio URL is present, the audio is downloaded from the URL and converted into text based on the input language. Once speech-to-text conversion in the input language is finished, the same process described above takes place. The one difference is that, by default, the paraphrased answer is converted to voice regardless of the output format, since the input format is voice.
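Put together, the flow looks roughly like the following sketch. Every helper here is a hypothetical stub standing in for the real translation, LLM, speech, and cloud storage integrations (the sketch also assumes English as the model's working language):

```python
# query_flow.py - illustrative outline of the /v1/query pipeline described above.
# All helper names are hypothetical stubs, not the service's actual functions.

def speech_to_text(audio_url, language):
    return "transcribed query"  # stub: download the MP3 and transcribe it

def translate(text, source, target):
    return text  # stub: language translation service

def run_llm(query):
    return f"answer to: {query}"  # stub: LangChain model call

def text_to_speech(answer, language):
    return b"mp3-bytes"  # stub: TTS conversion

def upload_to_cloud_storage(mp3_bytes):
    return "https://storage.example/answer.mp3"  # stub: upload, return public URL

def handle_query(text, audio_url, language, output_format):
    if not text and audio_url:
        text = speech_to_text(audio_url, language)
        output_format = "audio"  # voice input implies voice output by default
    query = translate(text, source=language, target="en")
    answer = translate(run_llm(query), source="en", target=language)
    audio_out = None
    if output_format == "audio":
        audio_out = upload_to_cloud_storage(text_to_speech(answer, language))
    return {"text": answer, "audio": audio_out, "language": language, "format": output_format}

print(handle_query("How to Teach Kids to Play Games", None, "en", "text"))
```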
This API is used for conversational interaction: it generates an activity/story based on the user's query, taking the conversation history into account, and translates text/audio from one language to another in text/audio format. To achieve this, Language Services has been integrated. Cloud object storage is used to store translated audio files when audio is chosen as the target output format.
```
curl -X 'POST' \
  'http://127.0.0.1:8000/v1/chat' \
  -H 'accept: application/json' \
  -H 'Content-Type: application/json' \
  -d '{
  "input": {
    "language": "en",
    "text": "string",
    "audio": "string",
    "context": "parent"
  },
  "output": {
    "format": "text"
  }
}'
```
| Request Input | Value |
|---|---|
| `input.language` | en, bn, gu, hi, kn, ml, mr, or, pa, ta, te |
| `input.text` | User-entered question (in any of the above languages) |
| `input.audio` | Public file URL or Base64-encoded audio |
| `input.context` | `parent`, `teacher` (defaults to `parent` if not passed) |
| `output.format` | `text` or `audio` |
The required inputs are `text`/`audio` and `language`: either `text` (string) or `audio` (string) must be present. If both values are given, `text` takes precedence. The `language` must match the language used in the text/audio (i.e., if you pass English as the `language`, your text/audio should contain queries in English). The `audio` value should be either a publicly downloadable URL of an MP3 file or the Base64-encoded content of the MP3.
If the output format is given as `text`, the response will contain text only. If the output format is given as `audio`, the response will contain both `text` and `audio`.
Sample request body:

```json
{
  "input": {
    "text": "How to Teach Kids to Play Games",
    "language": "en"
  },
  "output": {
    "format": "text"
  }
}
```
Sample response body:

```json
{
  "output": {
    "text": "string",
    "audio": "string",
    "language": "en",
    "format": "text|audio"
  }
}
```
Once the API receives a request with valid parameters, it checks for the presence of query text.

When a query is provided as text, the system first checks the language. If it is not English, the query is translated. The translated query and the user's past interactions are then sent to a large language model (LLM) to generate a refined standalone query for retrieving relevant documents from the vector store. The retrieved documents are filtered based on a configured minimum score. The filtered documents and the refined query are then sent to the LLM to generate the final answer. Finally, the answer is translated back to the original language if necessary. If the output format is voice, the translated answer is converted to an MP3 file and uploaded to a public cloud storage folder.

If the query text is absent and an audio URL is present, the audio is downloaded from the URL and converted into text based on the input language. Once speech-to-text conversion in the input language is finished, the same process described above takes place. The one difference is that, by default, the paraphrased answer is converted to voice regardless of the output format, since the input format is voice.
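An illustrative sketch of the /v1/chat flow, with hypothetical stub helpers; the `docs_min_score` and `max_messages` defaults mirror the `database.docs_min_score` and `llm.max_messages` settings documented below:

```python
# chat_flow.py - illustrative outline of the /v1/chat pipeline described above.
# All helper names are hypothetical stubs, not the service's actual functions.
from dataclasses import dataclass

@dataclass
class Doc:
    text: str
    score: float

def translate(text, source, target):
    return text  # stub: language translation service

def llm_refine_query(query, history):
    return query  # stub: LLM builds a standalone query from history + query

def vector_store_search(query):
    return [Doc("activity idea", 0.9), Doc("off-topic passage", 0.2)]  # stub: Marqo retrieval

def llm_answer(query, docs):
    return f"answer based on {len(docs)} documents"  # stub: final answer generation

def handle_chat(text, language, history, docs_min_score=0.4, max_messages=4):
    query = text if language == "en" else translate(text, language, "en")
    refined = llm_refine_query(query, history[-max_messages:])  # history cached in Redis
    docs = [d for d in vector_store_search(refined) if d.score >= docs_min_score]
    answer = llm_answer(refined, docs)
    return answer if language == "en" else translate(answer, "en", language)

print(handle_chat("How to Teach Kids to Play Games", "en", []))
```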
This repository comes with a Dockerfile, which you can use to deploy your version of this application to Cloud Run.

Make the necessary changes to the Dockerfile with respect to your new changes. (Note: the given Dockerfile will deploy the base code without any error, provided you add the required environment variables (mentioned in the `.env` file) to either the Dockerfile or the Cloud Run revision.)
| Variable | Description | Default Value |
|---|---|---|
| database.indices | Index or collection name to be referred to from the vector database based on input context | |
| database.top_docs_to_fetch | Number of filtered documents retrieved from the vector database to be passed to Gen AI as contexts | 5 |
| database.docs_min_score | Minimum score of the documents based on which filtration happens on retrieved documents | 0.4 |
| redis.ttl | Redis cache expiration time for a key, in seconds (only applicable to the /v1/chat API) | 43200 |
| request.supported_lang_codes | Languages supported by the service | en,bn,gu,hi,kn,ml,mr,or,pa,ta,te |
| request.supported_response_format | Supported response formats | text,audio |
| request.supported_context | Index name to be referred to from the vector database based on context type | teacher, parent (default) |
| llm.max_messages | Maximum number of messages to include in conversation history | 4 |
| llm.enable_bot_intent | Flag to enable or disable verification of whether the user's query refers to the bot | false |
| llm.intent_prompt | System prompt to Gen AI to verify whether the user's query refers to the bot | |
| llm.bot_prompt | System prompt to Gen AI to generate responses for user queries about the bot | |
| llm.activity_prompt | System prompt to Gen AI to generate responses based on the user's query and input contexts | |
| llm.chat_intent_prompt | System prompt to Gen AI to generate a standalone query based on the user's previous history and input contexts | |
| telemetry.telemetry_log_enabled | Flag to enable or disable telemetry event logging to the Sunbird Telemetry service | true |
| telemetry.environment | Service environment from which telemetry is generated | dev |
| telemetry.service_id | Service identifier to be passed to the Sunbird telemetry service | |
| telemetry.service_ver | Service version to be passed to the Sunbird telemetry service | |
| telemetry.actor_id | Service actor id to be passed to the Sunbird telemetry service | |
| telemetry.channel | Channel value to be passed to the Sunbird telemetry service | |
| telemetry.pdata_id | pdata_id value to be passed to the Sunbird telemetry service | |
| telemetry.events_threshold | Telemetry events batch size upon which events are passed to the Sunbird telemetry service | 5 |
- We are currently in the alpha stage and hence need all the input, feedback, and contributions we can get.
- Kindly visit our project board to see what we are prioritizing.