An Encouraging Therapy Journal - Our submission to the Junction 2023 Outokumpu challenge "Sustainable Generative AI Assistant for Insights".
Introduction (Pitch Deck)
Documenting your journey towards a better mental place is tough. With a therapy journal powered by a large language model (LLM), we want to encourage patients to document their process and experiences, and to improve the exchange of information with healthcare experts. On one hand, we give users feedback that helps them process their experiences through the journal. On the other hand, we help the experts by providing insights into the patients' writing.
Screenshots: Landing Page, Patient View, Therapist View
We make journaling more engaging and mining for insights more efficient. This way, we encourage patient commitment to their care and empower healthcare professionals to focus on what they do best: providing diagnoses and treatment through humane patient interaction.
Our model relies on LLMs (in practice, OpenAI gpt-3.5-turbo) to provide feedback and insights on the journal. With in-context learning and well-defined output schemata, we ground the model both in expert documents (rubrics for feedback, symptoms and their descriptions for insights) and in the journal itself at the word level, by requiring exact excerpts from the journal entries.
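For illustration, here is a minimal sketch of such a grounded, schema-constrained call using the OpenAI Python client; the prompt and JSON schema below are simplified stand-ins for our actual rubric- and symptom-grounded prompts:

```python
# Minimal sketch of the LLM-based analysis call; the prompt and output schema
# are simplified illustrations, not the exact production prompt.
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = """You are an assistant for a therapy journal.
Ground every claim in the provided rubric and symptom descriptions,
and quote the journal verbatim in the `excerpt` fields.
Respond only with JSON matching this schema:
{"feedback": [{"criterion": str, "comment": str, "excerpt": str}],
 "insights": [{"symptom": str, "description": str, "excerpt": str}]}"""

def analyse_entry(journal_entry: str, rubric: str, symptom_descriptions: str) -> dict:
    """Ask gpt-3.5-turbo for schema-constrained feedback and insights."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        temperature=0,
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": (
                f"Rubric:\n{rubric}\n\n"
                f"Symptom descriptions:\n{symptom_descriptions}\n\n"
                f"Journal entry:\n{journal_entry}"
            )},
        ],
    )
    return json.loads(response.choices[0].message.content)
```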
The approach above shows what is possible right now with the state of the art. To align our solution with the sustainability criterion of the challenge, we implemented another version relying on Instructor Embeddings. The model embeds both the user journal (on the sentence level) and the rubrics and symptom descriptions, and we then cross-reference these with cosine similarity to provide the feedback and insights. This approach cannot generate personalised feedback or summaries, but it still manages to find related symptoms and rubric criteria.
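A minimal sketch of this embedding-and-matching step, assuming the InstructorEmbedding package with the hkunlp/instructor-large model; the instruction strings and similarity threshold are illustrative, not our exact configuration:

```python
# Minimal sketch of the embedding-based variant; instructions and threshold
# are illustrative choices.
import numpy as np
from InstructorEmbedding import INSTRUCTOR

model = INSTRUCTOR("hkunlp/instructor-large")

def related_criteria(journal_sentences, criteria, threshold=0.8):
    """Match journal sentences to rubric criteria / symptom descriptions by cosine similarity."""
    sent_emb = model.encode(
        [["Represent the therapy journal sentence for retrieval:", s] for s in journal_sentences]
    )
    crit_emb = model.encode(
        [["Represent the symptom description for retrieval:", c] for c in criteria]
    )
    # Normalise so that the inner product equals cosine similarity.
    sent_emb = sent_emb / np.linalg.norm(sent_emb, axis=1, keepdims=True)
    crit_emb = crit_emb / np.linalg.norm(crit_emb, axis=1, keepdims=True)
    sims = sent_emb @ crit_emb.T  # shape: (n_sentences, n_criteria)
    return [
        (journal_sentences[i], criteria[j], float(sims[i, j]))
        for i, j in zip(*np.where(sims >= threshold))
    ]
```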
As text-embedding models are much smaller than capable LLMs (millions of parameters rather than billions) and retrieval only requires an inner product, this approach is more sustainable. Unfortunately, we witnessed degraded performance compared to the LLM approach, on top of losing the ability to generate personalised feedback.
An alternative would have been to look into smaller open-source LLMs. Unfortunately, we decided not to explore this direction due to a few major restrictions:
- Deploying these models for inference is not trivial.
- We are not aware of open-source LLMs fine-tuned for function calling or structured output.
- Fine-tuning our own model was out of scope for this project, but it would definitely be a possibility with more time and resources.
We deploy our model as an API endpoint; see here for details and here for an example response.
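As a rough illustration only, such a deployment could look like a thin FastAPI wrapper around the pipeline sketched above; the route, request model, and module name below are hypothetical placeholders, not the actual deployed interface:

```python
# Hypothetical sketch of the API wrapper; route, request model, and the
# `journal_llm` module name are illustrative assumptions.
from fastapi import FastAPI
from pydantic import BaseModel

from journal_llm import analyse_entry  # the LLM pipeline sketched above (hypothetical module)

app = FastAPI()

# Expert documents loaded once at startup (placeholder contents).
RUBRIC = "..."
SYMPTOM_DESCRIPTIONS = "..."

class JournalEntry(BaseModel):
    patient_id: str
    text: str

@app.post("/insights")
def insights(entry: JournalEntry) -> dict:
    """Return schema-constrained feedback and insights for one journal entry."""
    return analyse_entry(entry.text, RUBRIC, SYMPTOM_DESCRIPTIONS)
```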
The journey is just beginning, and the future holds exciting possibilities:
- Actionable Recommendations. Lower the threshold for patients to take action towards improved well-being by providing actionable recommendations based on their data and preferences.
- Integration with Wearables. Stay better informed of the patient's journey by integrating with popular health wearables, allowing users to seamlessly incorporate data from devices like fitness trackers and smartwatches into their records.
- Population-Level Insights. Help professionals research therapeutic journals by processing the structured data gathered by our intelligent platform. Evaluate the effectiveness of treatment plans through insights parsed from thousands of patient journals.