diff --git a/README.md b/README.md
index 584281b..9ae7e62 100644
--- a/README.md
+++ b/README.md
@@ -64,6 +64,10 @@ Ideal for using this repo to build a SUQL-powered conversational interface to yo
 Check out [conv_agent.md](https://github.com/stanford-oval/suql/blob/main/docs/conv_agent.md) for more information on best practices for using SUQL to power your conversational agent.
 
+# Release notes
+
+Check [release_notes.md](https://github.com/stanford-oval/suql/blob/main/docs/release_notes.md) for the latest release notes.
+
 # Bugs / Contribution
 
 If you encounter a problem, first check [known_issues.md](https://github.com/stanford-oval/suql/blob/main/docs/known_issues.md). If it is not listed there, we welcome Issues and/or PRs!
diff --git a/docs/install_pip.md b/docs/install_pip.md
index 4eb79c2..b8d62f0 100644
--- a/docs/install_pip.md
+++ b/docs/install_pip.md
@@ -72,6 +72,7 @@ embedding_store.start_embedding_server(host = host, port = port)
 - Make sure to modify the keyword arguments `select_username` and `select_userpswd` if you changed this user in Step 2 above;
 - You can add more columns as needed using `embedding_store.add()`;
 - This will be set up on port 8501, which matches the default keyword argument `embedding_server_address` in `suql_execute`. Make sure both addresses match if you modify it.
+- Check the [API documentation](https://stanford-oval.github.io/suql/suql/faiss_embedding.html#suql.faiss_embedding.MultipleEmbeddingStore.add) for more details, including options to disable caching.
 
 5. Set up the backend server for the `answer`, `summary` functions. In a separate terminal, first set up your LLM API key environment variable following [the litellm provider doc](https://docs.litellm.ai/docs/providers) (e.g., for OpenAI, run `export OPENAI_API_KEY=[your OpenAI API key here]`). Write the following content into a Python script and execute in that terminal:
 ```python
diff --git a/docs/install_source.md b/docs/install_source.md
index c60f89e..94ef19c 100644
--- a/docs/install_source.md
+++ b/docs/install_source.md
@@ -76,6 +76,8 @@ embedding_store.add(
 under `if __name__ == "__main__":` to match your database with its column names. Then, run `python suql/faiss_embedding.py` under the `src` folder.
    - For instance, this line instructs the SUQL compiler to set up an embedding server for the `restaurants` database, which has the `_id` column as the unique row identifier, for the `popular_dishes` column (such columns need to be of type `TEXT` or `TEXT[]`, or other fixed-length strings/lists of strings) under table `restaurants`. This is executed with user privilege `user="select_user"` and `password="select_user"`;
    - By default, this will be set up on port 8501, which is then called by `src/suql/execute_free_text_sql.py`. In case you need to use another port, please change both addresses.
+   - Check the [API documentation](https://stanford-oval.github.io/suql/suql/faiss_embedding.html#suql.faiss_embedding.MultipleEmbeddingStore.add) for more details, including options to disable caching.
+
 5. Set up the backend server for the `answer`, `summary` functions. In a separate terminal, first set up your LLM API key environment variable following [the litellm provider doc](https://docs.litellm.ai/docs/providers) (e.g., for OpenAI, run `export OPENAI_API_KEY=[your OpenAI API key here]`). Then, run `python suql/free_text_fcns_server.py` under the `src` folder.
    - As you probably noticed, the code in `custom_functions.sql` is just making queries to this server, which handles the LLM API calls. If you changed the address in `custom_functions.sql`, then also update the address under `if __name__ == "__main__":`.
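For context on the API documentation links this patch adds, below is a minimal sketch of the embedding-store setup that both install guides describe. Only `select_username`, `select_userpswd`, the port 8501 default, and the `restaurants`/`_id`/`popular_dishes` names come from the guides themselves; the remaining keyword arguments, including the `cache_embedding` flag for the caching option mentioned above, are assumptions to be verified against the linked `MultipleEmbeddingStore.add` documentation.

```python
# Minimal sketch of the embedding-store setup described in the install guides.
# Keyword arguments other than select_username / select_userpswd are assumed
# names -- verify them against the linked MultipleEmbeddingStore.add docs.
from suql.faiss_embedding import MultipleEmbeddingStore

embedding_store = MultipleEmbeddingStore()

embedding_store.add(
    database="restaurants",                 # assumed kwarg: PostgreSQL database name
    table_name="restaurants",               # assumed kwarg: table with the free-text column
    primary_key_field_name="_id",           # assumed kwarg: unique row identifier
    free_text_field_name="popular_dishes",  # assumed kwarg: TEXT or TEXT[] column to embed
    select_username="select_user",          # from the guides; update if you changed this user
    select_userpswd="select_user",
    cache_embedding=False,                  # hypothetical flag for the caching option above
)

# Serve embeddings on port 8501, the default `embedding_server_address`
# expected by `suql_execute`; both addresses must match if changed.
embedding_store.start_embedding_server(host="127.0.0.1", port=8501)
```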