
Merge pull request #90 from g-linville/docs-repo-fix
fix: update repo name in docs
sanjay920 authored Feb 21, 2024
2 parents 2d0cb19 + 01ab1a1 commit ead6dd5
Showing 7 changed files with 11 additions and 11 deletions.
2 changes: 1 addition & 1 deletion README.md
@@ -53,7 +53,7 @@ We welcome contributions from the developer community! Whether it's adding new f

## Support

-If you encounter any issues or have questions, please file an issue on GitHub. For more detailed guidance and discussions, join our community on [Discord](https://discord.gg/swvAH2DXZH) or [Slack](https://slack.acorn.io) or start a [Github discussion](https://github.com/acorn-io/rubra/discussions).
+If you encounter any issues or have questions, please file an issue on GitHub. For more detailed guidance and discussions, join our community on [Discord](https://discord.gg/swvAH2DXZH) or [Slack](https://slack.acorn.io) or start a [Github discussion](https://github.com/rubra-ai/rubra/discussions).

---

2 changes: 1 addition & 1 deletion docs/docs/faq.md
@@ -28,7 +28,7 @@ Rubra ships with an optimized LLM and includes built-in tools to get you started
Rubra runs on your machine. Additionally, your chat history and the files you use for knowledge retrieval (RAG) never leave your machine.

#### Does Rubra support other models?
-Yes, Rubra supports OpenAI and Anthropic models in addition to the Rubra local model or one that [you configure yourself](https://github.com/acorn-io/rubra/tree/main/deploy_local_llm). We are working on introducing larger, more capable local models in the near future.
+Yes, Rubra supports OpenAI and Anthropic models in addition to the Rubra local model or one that [you configure yourself](https://github.com/rubra-ai/rubra/tree/main/deploy_local_llm). We are working on introducing larger, more capable local models in the near future.

#### Why isn't knowledge retrieval working?
Our RAG pipeline uses an embedding model. If you're running in quickstart mode, the model is running on CPU and could be very slow, depending on machine. If you just created the assistant, it may take a few minutes to index the knowledge base. If you're still having issues, please check the logs for any errors.
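
A quick way to check, assuming the Docker Compose deployment described in the installation guide, is to tail the service logs for embedding or indexing errors. A minimal sketch, not an official troubleshooting command:

```shell
# Tail recent logs from all Rubra services and follow new output
# (assumes the Docker Compose deployment from the installation docs)
docker compose logs --tail=100 -f
```
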
2 changes: 1 addition & 1 deletion docs/docs/installation/deploy_local_llm.md
@@ -7,7 +7,7 @@ sidebar_position: 2
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';

-To create assistants that run entirely on your machine, you must run a model locally. We recommend the [OpenHermes-NeuralChat merged model](https://huggingface.co/Weyaxi/OpenHermes-2.5-neural-chat-v3-3-Slerp) that is 7 billion parameters and ~6GB. We have tested Rubra with this model, but you can use any model you want at your own risk. Let us know if you'd like support for other models by [opening up a Github issue](https://github.com/acorn-io/rubra/issues/new)!
+To create assistants that run entirely on your machine, you must run a model locally. We recommend the [OpenHermes-NeuralChat merged model](https://huggingface.co/Weyaxi/OpenHermes-2.5-neural-chat-v3-3-Slerp) that is 7 billion parameters and ~6GB. We have tested Rubra with this model, but you can use any model you want at your own risk. Let us know if you'd like support for other models by [opening up a Github issue](https://github.com/rubra-ai/rubra/issues/new)!

We leverage [llamafile](https://github.com/Mozilla-Ocho/llamafile) to distribute and run local LLMs.

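For reference, llamafile packages a model and an inference server into one self-contained executable. A minimal sketch of the usual workflow (the URL and file name below are hypothetical placeholders, not the exact Rubra artifact):

```shell
# Download a llamafile, mark it executable, and run it; llamafile's built-in
# server exposes an OpenAI-compatible endpoint on http://localhost:8080 by default.
# The URL and file name are hypothetical placeholders.
curl -LO https://example.com/openhermes.llamafile
chmod +x openhermes.llamafile
./openhermes.llamafile
```
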
6 changes: 3 additions & 3 deletions docs/docs/installation/installation.md
@@ -13,15 +13,15 @@ Follow these steps to install Rubra:

1. Clone the Rubra GitHub repository by executing the following command in your terminal:
```shell
-git clone https://github.com/acorn-io/rubra.git
+git clone https://github.com/rubra-ai/rubra.git
```

2. Navigate into the `rubra` directory:
```shell
cd rubra
```

-3. (Optional) Define the models you want Rubra to access by modifying [`llm-config.yaml`](https://github.com/acorn-io/rubra/blob/main/llm-config.yaml). Refer to [LLM Configuration instructions](/installation/llm-config) for more details.
+3. (Optional) Define the models you want Rubra to access by modifying [`llm-config.yaml`](https://github.com/rubra-ai/rubra/blob/main/llm-config.yaml). Refer to [LLM Configuration instructions](/installation/llm-config) for more details.
Additionally, you can add or remove LLMs in the Rubra UI after installation.

4. Pull the necessary images and start Rubra:
@@ -39,4 +39,4 @@ Develop with Rubra backend by setting the OpenAI base URL to Rubra backend's URL
from openai import OpenAI
client = OpenAI(base_url="http://localhost:8000/", api_key="abc")
-```
+```
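
As a quick smoke test of the snippet above, the same endpoint can be hit with curl. A sketch, assuming the backend mirrors the OpenAI chat completions route implied by the client configuration (the model name is an illustrative placeholder):

```shell
# POST a chat completion to the local Rubra backend
# (route inferred from the OpenAI client usage above; model name is a placeholder)
curl -s http://localhost:8000/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "custom", "messages": [{"role": "user", "content": "Hello"}]}'
```
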
2 changes: 1 addition & 1 deletion docs/docs/installation/llm-config.md
@@ -5,7 +5,7 @@ sidebar_label: LLM Configuration File (Optional)
sidebar_position: 3
---

-Before you start with Rubra, configure the models you want Rubra to access by editing the [`llm-config.yaml`](https://github.com/acorn-io/rubra/blob/main/llm-config.yaml) file.
+Before you start with Rubra, configure the models you want Rubra to access by editing the [`llm-config.yaml`](https://github.com/rubra-ai/rubra/blob/main/llm-config.yaml) file.

The models currently supported:
* OpenAI
4 changes: 2 additions & 2 deletions docs/docusaurus.config.js
@@ -34,7 +34,7 @@ const config = {
sidebarPath: require.resolve("./sidebars.js"),
routeBasePath: "/",
editUrl:
-'https://github.com/acorn-io/rubra/tree/main/docs',
+'https://github.com/rubra-ai/rubra/tree/main/docs',
},
theme: {
customCss: require.resolve("./src/css/custom.css"),
@@ -72,7 +72,7 @@ const config = {
target: '_self',
},
{
-to: 'https://github.com/acorn-io/rubra',
+to: 'https://github.com/rubra-ai/rubra',
label: 'GitHub',
position: 'right',
}
4 changes: 2 additions & 2 deletions quickstart.sh
@@ -174,14 +174,14 @@ check_rubra_llamafile_ready() {

# --- download docker-compose.yml ---
download_docker_compose_yml() {
-DOCKER_COMPOSE_URL="https://raw.githubusercontent.com/acorn-io/rubra/main/docker-compose.yml"
+DOCKER_COMPOSE_URL="https://raw.githubusercontent.com/rubra-ai/rubra/main/docker-compose.yml"
info "Downloading docker-compose.yml from $DOCKER_COMPOSE_URL"
curl -sSL "$DOCKER_COMPOSE_URL" -o docker-compose.yml || fatal "Failed to download docker-compose.yml"
}

# --- download llm-config.yaml ---
download_llm_config_yaml() {
-LLM_CONFIG_URL="https://raw.githubusercontent.com/acorn-io/rubra/main/llm-config.yaml"
+LLM_CONFIG_URL="https://raw.githubusercontent.com/rubra-ai/rubra/main/llm-config.yaml"
info "Downloading llm-config.yaml from $LLM_CONFIG_URL"
curl -sSL "$LLM_CONFIG_URL" -o llm-config.yaml || fatal "Failed to download llm-config.yaml"
}
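
Invoked in sequence, these helpers fetch both files into the working directory before the stack is started. A sketch of how the rest of quickstart.sh presumably calls them (info and fatal are logging helpers used above, defined elsewhere in the script):

```shell
# Fetch the compose file and model config before starting Rubra
download_docker_compose_yml
download_llm_config_yaml
```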
