diff --git a/README.md b/README.md
index 2cad99f..6f57b71 100644
--- a/README.md
+++ b/README.md
@@ -51,7 +51,7 @@ Using incremental vector search, only the most relevant context is automatically
 
 ![Automated real-time knowledge mining and alerting](examples/pipelines/drive_alert/drive_alert_demo.gif)
 
-For the code, see the [`drive_alert`](#examples) app example. You can find more details in a [blog post on alerting with LLM-App](https://pathway.com/developers/showcases/llm-alert-pathway).
+For the code, see the [`drive_alert`](#examples) app example. You can find more details in a [blog post on alerting with LLM-App](https://pathway.com/developers/templates/llm-alert-pathway).
 
 ## How it works
 
@@ -99,7 +99,7 @@ Pick one that is closest to your needs.
 | [`local`](examples/pipelines/local/) | This example runs the application using Huggingface Transformers, which eliminates the need for the data to leave the machine. It provides a convenient way to use state-of-the-art NLP models locally. |
 | [`unstructured-to-sql`](examples/pipelines/unstructured_to_sql_on_the_fly/) | This example extracts the data from unstructured files and stores it into a PostgreSQL table. It also transforms the user query into an SQL query which is then executed on the PostgreSQL table. |
 | [`alert`](examples/pipelines/alert/) | Ask questions, get alerted whenever response changes. Pathway is always listening for changes, whenever new relevant information is added to the stream (local files in this example), LLM decides if there is a substantial difference in response and notifies the user with a Slack message. |
-| [`drive-alert`](examples/pipelines/drive_alert/) | The [`alert`](examples/pipelines/alert/) example on steroids. Whenever relevant information on Google Docs is modified or added, get real-time alerts via Slack. See the [`tutorial`](https://pathway.com/developers/showcases/llm-alert-pathway). |
+| [`drive-alert`](examples/pipelines/drive_alert/) | The [`alert`](examples/pipelines/alert/) example on steroids. Whenever relevant information on Google Docs is modified or added, get real-time alerts via Slack. See the [`tutorial`](https://pathway.com/developers/templates/llm-alert-pathway). |
 | [`contextful-geometric`](examples/pipelines/contextful_geometric/) | The [`contextful`](examples/pipelines/contextful/) example, which optimises use of tokens in queries. It asks the same questions with increasing number of documents given as a context in the question, until ChatGPT finds an answer. |
@@ -132,7 +132,7 @@ Each [example](examples/pipelines/) contains a README.md with instructions on ho
 
 ### Bonus: Build your own Pathway-powered LLM App
 
-Want to learn more about building your own app? See step-by-step guide [Building a llm-app tutorial](https://pathway.com/developers/showcases/llm-app-pathway)
+Want to learn more about building your own app? See step-by-step guide [Building a llm-app tutorial](https://pathway.com/developers/templates/llm-app-pathway)
 
 Or,
 
diff --git a/examples/pipelines/adaptive-rag/README.md b/examples/pipelines/adaptive-rag/README.md
index 500f356..c9758d6 100644
--- a/examples/pipelines/adaptive-rag/README.md
+++ b/examples/pipelines/adaptive-rag/README.md
@@ -10,7 +10,7 @@
 
 ## End to end Adaptive RAG with Pathway
 
-This is the accompanying code for deploying the `adaptive RAG` technique with Pathway. To understand the technique and learn how it can save tokens without sacrificing accuracy, read [our showcase](https://pathway.com/developers/showcases/adaptive-rag).
+This is the accompanying code for deploying the `adaptive RAG` technique with Pathway. To understand the technique and learn how it can save tokens without sacrificing accuracy, read [our showcase](https://pathway.com/developers/templates/adaptive-rag).
 
 To learn more about building & deploying RAG applications with Pathway, including containerization, refer to [demo question answering](../demo-question-answering/README.md).
 
@@ -49,7 +49,7 @@ If you are interested in building this app in a fully private & local setup, che
 
 You can modify any of the used components by checking the options from: `from pathway.xpacks.llm import embedders, llms, parsers, splitters`. It is also possible to easily create new components by extending the [`pw.UDF`](https://pathway.com/developers/user-guide/data-transformation/user-defined-functions) class and implementing the `__wrapped__` function.
 
-To see the setup used in our work, check [the showcase](https://pathway.com/developers/showcases/private-rag-ollama-mistral).
+To see the setup used in our work, check [the showcase](https://pathway.com/developers/templates/private-rag-ollama-mistral).
 
 ## Running the app
 To run the app you need to set your OpenAI API key, by setting the environmental variable `OPENAI_API_KEY` or creating an `.env` file in this directory with line `OPENAI_API_KEY=sk-...`. If you modify the code to use another LLM provider, you may need to set a relevant API key.
diff --git a/examples/pipelines/contextful_geometric/README.md b/examples/pipelines/contextful_geometric/README.md
index 831e7f4..bf9364f 100644
--- a/examples/pipelines/contextful_geometric/README.md
+++ b/examples/pipelines/contextful_geometric/README.md
@@ -9,7 +9,7 @@
 
 # RAG pipeline with up-to-date knowledge: get answers based on increasing number of documents
 
-This example implements a pipeline that answers questions based on documents in a given folder. To get the answer it sends increasingly more documents to the LLM chat until it can find an answer. You can read more about the reasoning behind this approach [here](https://pathway.com/developers/showcases/adaptive-rag).
+This example implements a pipeline that answers questions based on documents in a given folder. To get the answer it sends increasingly more documents to the LLM chat until it can find an answer. You can read more about the reasoning behind this approach [here](https://pathway.com/developers/templates/adaptive-rag).
 
 Each query text is first turned into a vector using OpenAI embedding service, then
 relevant documentation pages are found using a Nearest Neighbor index computed
diff --git a/examples/pipelines/unstructured_to_sql_on_the_fly/README.md b/examples/pipelines/unstructured_to_sql_on_the_fly/README.md
index 0c0372e..09bdabd 100644
--- a/examples/pipelines/unstructured_to_sql_on_the_fly/README.md
+++ b/examples/pipelines/unstructured_to_sql_on_the_fly/README.md
@@ -20,7 +20,7 @@ Pipeline 2 then starts a REST API endpoint serving queries about programming in
 Each query text is converted into a SQL query using the OpenAI API.
 
 Architecture diagram and description are at
-https://pathway.com/developers/showcases/unstructured-to-structured
+https://pathway.com/developers/templates/unstructured-to-structured
 
 ⚠️ This project requires a running PostgreSQL instance.
diff --git a/examples/pipelines/unstructured_to_sql_on_the_fly/app.py b/examples/pipelines/unstructured_to_sql_on_the_fly/app.py
index 587770a..99c83f4 100644
--- a/examples/pipelines/unstructured_to_sql_on_the_fly/app.py
+++ b/examples/pipelines/unstructured_to_sql_on_the_fly/app.py
@@ -21,7 +21,7 @@
 Each query text is converted into a SQL query using the OpenAI API.
 
 Architecture diagram and description are at
-https://pathway.com/developers/showcases/unstructured-to-structured
+https://pathway.com/developers/templates/unstructured-to-structured
 
 ⚠️ This project requires a running PostgreSQL instance.