It's Friday and I can't stand another hour in the office, so bear with me on this.
I'm now listening to The Mercy of Gods and, my god, I can't keep up with all the characters, their feelings, and the nth race of space bugs. What I'd love is a button on the player UI that brings up a simple chat with an AI companion.
Such a companion would have the book's text files loaded as context and a (configurable) persona, so that it could answer questions about the book/series.
A persona would be something like "you're a book shop clerk, you will answer questions about this book, avoiding spoilers for anything beyond {your_current_position}". Personas could be configurable per book/series; I'd love to ask Skippy about that time Joe did that stupid thing. (You can do this in Home Assistant, with a persona for each assistant you create.)
{your_current_position} would be something ABS adds to the query automatically, each time you send a question, based on your current position in the book: the page number for text books, or the timestamp for audiobooks.
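To make that concrete, here's a minimal sketch of how ABS could substitute the position into the persona template at query time. The types and label formatting are my assumptions, not existing ABS code:

```ts
// Hypothetical sketch: build the system prompt for one query.
// PlaybackPosition and the label formatting are illustrative assumptions.
interface PlaybackPosition {
  kind: 'page' | 'timestamp';
  value: string; // e.g. "142" or "07:31:05"
}

function buildSystemPrompt(personaTemplate: string, position: PlaybackPosition): string {
  const label = position.kind === 'page'
    ? `page ${position.value}`
    : `the ${position.value} mark`;
  return personaTemplate.replaceAll('{your_current_position}', label);
}

const persona =
  "You're a book shop clerk. You will answer questions about this book, " +
  'avoiding spoilers for anything beyond {your_current_position}.';

console.log(buildSystemPrompt(persona, { kind: 'timestamp', value: '07:31:05' }));
```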
The backend would be a local installation of Ollama or another LLM (even a cloud-based one, though I'd guess the costs with whole books as context would be pretty high), which would accept the query and the text files.
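For the Ollama case specifically, a query could be a single POST to its chat endpoint, with the book text stuffed into the system message. A sketch; the model name and the naive whole-book-in-one-message context handling are assumptions, while Ollama's `/api/chat` request shape is real:

```ts
// Sketch: ask a question against a local Ollama instance.
// Model name and "whole book in one message" context handling are assumptions.
async function askCompanion(systemPrompt: string, bookText: string, question: string): Promise<string> {
  const res = await fetch('http://localhost:11434/api/chat', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      model: 'llama3.1',
      stream: false,
      messages: [
        { role: 'system', content: `${systemPrompt}\n\nBook text so far:\n${bookText}` },
        { role: 'user', content: question },
      ],
    }),
  });
  const data = await res.json();
  return data.message.content;
}
```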
A simple settings page with a provider selection, a URL, and auth (username+password or API key, depending on the provider). Something like the "Connect" pages in the *arr software.
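Internally that could boil down to a small per-provider config type, roughly like this (field names are purely illustrative, nothing here exists in ABS today):

```ts
// Illustrative settings shape; none of these field names are from ABS.
type LlmProviderSettings =
  | { provider: 'ollama'; url: string } // e.g. http://localhost:11434, no auth
  | { provider: 'openai-compatible'; url: string; apiKey: string }
  | { provider: 'custom'; url: string; username: string; password: string };
```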
For audiobooks, I guess the first step would be to generate a transcript with timestamps. Following the same pattern as the LLM provider, we could point ABS to a local installation of Whisper to generate them. (This is already being discussed here.)
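A timestamped transcript would also give ABS a clean way to enforce the no-spoilers rule: trim the context to what the listener has actually heard before sending it to the model. A sketch, assuming Whisper-style segments with start/end in seconds:

```ts
// Whisper-style transcript segment; start/end are seconds into the audio.
interface Segment {
  start: number;
  end: number;
  text: string;
}

// Keep only the text the listener has already heard, so the model can't spoil ahead.
function contextUpToPosition(segments: Segment[], positionSeconds: number): string {
  return segments
    .filter((s) => s.end <= positionSeconds)
    .map((s) => s.text)
    .join(' ');
}
```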
For book series, the whole series should be loaded as context (I guess LLM context-window limits may apply; I'm not familiar with them).
Once you have the system up, you could have more "quick" AI actions (see the prompt sketch after this list), like:
"Previously on": generate a short summary of the last part of the book, to have an idea of where you left off 3 weeks ago
"What is this book about?": a summary of the book, using the whole book and not just your progress (which would be 0, if you're deciding to read it)
"Cast summary": a list of known characters in the book, with a 2 sentence summary of who the hell is this guy and what is he doing now (now = your current position in the book)
"Current scene": a single sentence describing the current scene