templates[patch]: Add cohere librarian template (#14601)
Adds the example I built for the Cohere hackathon.

It can:

Use a vector database to recommend books

<img width="840" alt="image"
src="https://github.com/langchain-ai/langchain/assets/144115527/96543a18-217b-4445-ab4b-950c7cced915">

Use a prompt template to provide information about the library

<img width="834" alt="image"
src="https://github.com/langchain-ai/langchain/assets/144115527/996c8e0f-cab0-4213-bcc9-9baf84f1494b">

Use Cohere RAG to provide grounded results

<img width="822" alt="image"
src="https://github.com/langchain-ai/langchain/assets/144115527/7bb4a883-5316-41a9-9d2e-19fd49a43dcb">

---------

Co-authored-by: Erick Friis <[email protected]>
billytrend-cohere and efriis authored Dec 13, 2023
1 parent 4745195 commit 7e4dbb2
Showing 12 changed files with 3,518 additions and 0 deletions.
71 changes: 71 additions & 0 deletions templates/cohere-librarian/README.md
@@ -0,0 +1,71 @@

# cohere-librarian

This template turns Cohere into a librarian.

It demonstrates the use of a router to switch between chains that handle different kinds of requests: a vector database with Cohere embeddings that recommends books; a chatbot whose prompt contains information about the library; and finally a RAG chatbot that has access to the internet.

For a fuller demo of the book recommendation, consider replacing `books_with_blurbs.csv` with a larger sample from the following dataset: https://www.kaggle.com/datasets/jdobrow/57000-books-with-metadata-and-blurbs/

## Environment Setup

Set the `COHERE_API_KEY` environment variable to access the Cohere models.

## Usage

To use this package, you should first have the LangChain CLI installed:

```shell
pip install -U langchain-cli
```

To create a new LangChain project and install this as the only package, you can do:

```shell
langchain app new my-app --package cohere-librarian
```

If you want to add this to an existing project, you can just run:

```shell
langchain app add cohere-librarian
```

And add the following code to your `server.py` file:
```python
from cohere_librarian.chain import chain as cohere_librarian_chain

add_routes(app, cohere_librarian_chain, path="/cohere-librarian")
```

(Optional) Let's now configure LangSmith.
LangSmith will help us trace, monitor, and debug LangChain applications.
LangSmith is currently in private beta; you can sign up [here](https://smith.langchain.com/).
If you don't have access, you can skip this section.


```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```

If you are inside this directory, then you can spin up a LangServe instance directly by running:

```shell
langchain serve
```

This will start the FastAPI app with a server running locally at
[http://localhost:8000](http://localhost:8000).

We can see all templates at [http://localhost:8000/docs](http://localhost:8000/docs).
We can access the playground at [http://localhost:8000/cohere-librarian/playground](http://localhost:8000/cohere-librarian/playground).

We can access the template from code with:

```python
from langserve.client import RemoteRunnable

runnable = RemoteRunnable("http://localhost:8000/cohere-librarian")
```
3 changes: 3 additions & 0 deletions templates/cohere-librarian/cohere_librarian/__init__.py
@@ -0,0 +1,3 @@
from .chain import chain

__all__ = ["chain"]
49 changes: 49 additions & 0 deletions templates/cohere-librarian/cohere_librarian/blurb_matcher.py
@@ -0,0 +1,49 @@
import csv

from langchain.chains.question_answering import load_qa_chain
from langchain.embeddings import CohereEmbeddings
from langchain.prompts import PromptTemplate
from langchain.vectorstores import Chroma

from .chat import chat

# Use a context manager so the file handle is closed after reading.
with open("data/books_with_blurbs.csv", "r") as csv_file:
    csv_data = list(csv.reader(csv_file))

parsed_data = [
    {
        "id": x[0],
        "title": x[1],
        "author": x[2],
        "year": x[3],
        "publisher": x[4],
        "blurb": x[5],
    }
    for x in csv_data
]

embeddings = CohereEmbeddings()

docsearch = Chroma.from_texts(
    [x["title"] for x in parsed_data], embeddings, metadatas=parsed_data
).as_retriever()


prompt_template = """
{context}
Use the book recommendations to suggest books for the user to read.
Only use the titles of the books, do not make up titles. Format the response as
a bulleted list prefixed by a relevant message.
User: {message}"""

PROMPT = PromptTemplate(
    template=prompt_template, input_variables=["context", "message"]
)

book_rec_chain = {
    "input_documents": lambda x: docsearch.get_relevant_documents(x["message"]),
    "message": lambda x: x["message"],
} | load_qa_chain(chat, chain_type="stuff", prompt=PROMPT)
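The positional row parsing above can be sketched with the standard library alone. A minimal, self-contained illustration; the sample rows and the `fields` list are hypothetical, assuming the `id, title, author, year, publisher, blurb` column order used by the template:

```python
import csv
import io

# Hypothetical sample mirroring the column order assumed by the template.
sample = io.StringIO(
    "1,Dune,Frank Herbert,1965,Chilton,Desert planet epic\n"
    "2,Emma,Jane Austen,1815,John Murray,Comedy of manners\n"
)

fields = ["id", "title", "author", "year", "publisher", "blurb"]

# zip each positional row against the field names to build dicts,
# the same shape the template feeds to Chroma as metadata.
parsed_data = [dict(zip(fields, row)) for row in csv.reader(sample)]

print(parsed_data[0]["title"])  # -> Dune
```

The same result could be had with `csv.DictReader` if the CSV carried a header row; the dataset here appears to be headerless, hence the explicit field list.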
10 changes: 10 additions & 0 deletions templates/cohere-librarian/cohere_librarian/chain.py
@@ -0,0 +1,10 @@
from langchain.pydantic_v1 import BaseModel

from .router import branched_chain


class ChainInput(BaseModel):
    message: str


chain = branched_chain.with_types(input_type=ChainInput)
3 changes: 3 additions & 0 deletions templates/cohere-librarian/cohere_librarian/chat.py
@@ -0,0 +1,3 @@
from langchain.llms import Cohere

chat = Cohere()
27 changes: 27 additions & 0 deletions templates/cohere-librarian/cohere_librarian/library_info.py
@@ -0,0 +1,27 @@
from langchain.prompts import (
    ChatPromptTemplate,
    HumanMessagePromptTemplate,
    SystemMessagePromptTemplate,
)

from .chat import chat

librarian_prompt = ChatPromptTemplate.from_messages(
    [
        SystemMessagePromptTemplate.from_template(
            """
You are a librarian at cohere community library. Your job is to
help recommend books to people based on their interests and
preferences. You also give information about the library.
The library opens at 8am and closes at 9pm daily. It is closed on
Sundays.
Please answer the following message:
"""
        ),
        HumanMessagePromptTemplate.from_template("{message}"),
    ]
)

library_info = librarian_prompt | chat
16 changes: 16 additions & 0 deletions templates/cohere-librarian/cohere_librarian/rag.py
@@ -0,0 +1,16 @@
from langchain.chat_models import ChatCohere
from langchain.retrievers import CohereRagRetriever

rag = CohereRagRetriever(llm=ChatCohere())


def get_docs_message(message):
    docs = rag.get_relevant_documents(message)
    message_doc = next(
        (x for x in docs if x.metadata.get("type") == "model_response"), None
    )
    # Guard against the case where no model_response document is returned,
    # which would otherwise raise an AttributeError on None.
    if message_doc is None:
        return "No grounded answer was returned."
    return message_doc.page_content


def librarian_rag(x):
    return get_docs_message(x["message"])
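The selection logic in `get_docs_message` (picking out the document whose metadata marks it as the model's own response, as opposed to retrieved source documents) can be exercised in isolation. A minimal sketch, with plain dicts standing in for LangChain `Document` objects:

```python
def pick_model_response(docs):
    # Return the first doc tagged as the model's own answer, or None.
    # Mirrors the next(...) filter used in get_docs_message above.
    return next(
        (d for d in docs if d["metadata"].get("type") == "model_response"),
        None,
    )


# Hypothetical retriever output: source snippets plus one grounded answer.
docs = [
    {"page_content": "Source snippet", "metadata": {"type": "web_search"}},
    {"page_content": "Grounded answer", "metadata": {"type": "model_response"}},
]

answer = pick_model_response(docs)
print(answer["page_content"])  # -> Grounded answer
```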
43 changes: 43 additions & 0 deletions templates/cohere-librarian/cohere_librarian/router.py
@@ -0,0 +1,43 @@
from langchain.prompts import ChatPromptTemplate
from langchain.schema.output_parser import StrOutputParser
from langchain.schema.runnable import RunnableBranch

from .blurb_matcher import book_rec_chain
from .chat import chat
from .library_info import library_info
from .rag import librarian_rag

chain = (
    ChatPromptTemplate.from_template(
        """Given the user message below,
classify it as either being about `recommendation`, `library`, or `other`.
'{message}'
Respond with just one word.
For example, if the message is about a book recommendation, respond with
`recommendation`.
"""
    )
    | chat
    | StrOutputParser()
)


def extract_op_field(x):
    return x["output_text"]


branch = RunnableBranch(
    (
        lambda x: "recommendation" in x["topic"].lower(),
        book_rec_chain | extract_op_field,
    ),
    (
        lambda x: "library" in x["topic"].lower(),
        {"message": lambda x: x["message"]} | library_info,
    ),
    librarian_rag,
)

branched_chain = {"topic": chain, "message": lambda x: x["message"]} | branch
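The dispatch performed by `RunnableBranch` above can be sketched with plain functions: the classifier's one-word topic decides which handler receives the message, falling through to the RAG chain when nothing matches. A hypothetical stand-in (the returned strings merely label which chain would have run):

```python
def route(inputs, handlers, default):
    # First matching keyword wins; otherwise fall through to the default,
    # mirroring RunnableBranch's (condition, runnable) pairs.
    topic = inputs["topic"].lower()
    for keyword, handler in handlers:
        if keyword in topic:
            return handler(inputs["message"])
    return default(inputs["message"])


# Hypothetical handlers labeling the chains from the template.
handlers = [
    ("recommendation", lambda m: f"book_rec_chain({m})"),
    ("library", lambda m: f"library_info({m})"),
]

result = route(
    {"topic": "Recommendation", "message": "sci-fi please"},
    handlers,
    lambda m: f"librarian_rag({m})",
)
print(result)  # -> book_rec_chain(sci-fi please)
```

Note the `.lower()` before matching: the LLM classifier may capitalize its one-word answer, so the real branch conditions normalize case the same way.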