
Commit

Merge branch 'main' into 19-featdocument-search-add-evaluation-pipeline-for-retrieval-accuracy

micpst committed Oct 16, 2024
2 parents 4bc0066 + d61725f commit 296e169
Showing 13 changed files with 477 additions and 36 deletions.
28 changes: 28 additions & 0 deletions CONTRIBUTING.md
@@ -0,0 +1,28 @@
# Installation

## Build from source

To build and run Ragbits from the source code:

1. Requirements: [**uv**](https://docs.astral.sh/uv/getting-started/installation/) & [**python**](https://docs.astral.sh/uv/guides/install-python/) 3.10 or higher
2. Install the dependencies and activate the virtual environment (packages are installed in editable mode):

```bash
$ source ./setup_dev_env.sh
```

## Install pre-commit

To ensure code quality, we use pre-commit hooks with several checks. Set them up by running:

```bash
pre-commit install
```

All updated files will be reformatted and linted before the commit.

To reformat and lint all files in the project, use:

`pre-commit run --all-files`

The used linters are configured in `.pre-commit-config.yaml`. You can use `pre-commit autoupdate` to bump tools to the latest versions.
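As an illustration only (this is not necessarily this repository's actual configuration), a minimal `.pre-commit-config.yaml` wiring up a linter hook looks like:

```yaml
repos:
  - repo: https://github.com/astral-sh/ruff-pre-commit
    rev: v0.6.9
    hooks:
      - id: ruff
        args: [--fix]
      - id: ruff-format
```

`pre-commit autoupdate` bumps the `rev` pins to the latest tagged releases of each hook repository.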
90 changes: 74 additions & 16 deletions README.md
@@ -1,32 +1,90 @@
# Ragbits
<div align="center">

Repository for internal experiment with our upcoming LLM framework.
<h1>Ragbits</h1>

# Installation
*Building blocks for rapid development of GenAI applications*

## Build from source
[Documentation](https://ragbits.deepsense.ai) | [Contact](https://deepsense.ai/contact/)

To build and run Ragbits from the source code:
[![PyPI - License](https://img.shields.io/pypi/l/ragbits)](https://pypi.org/project/ragbits)
[![PyPI - Version](https://img.shields.io/pypi/v/ragbits)](https://pypi.org/project/ragbits)
[![PyPI - Python Version](https://img.shields.io/pypi/pyversions/ragbits)](https://pypi.org/project/ragbits)

1. Requirements: [**uv**](https://docs.astral.sh/uv/getting-started/installation/) & [**python**](https://docs.astral.sh/uv/guides/install-python/) 3.10 or higher
2. Install dependencies and run venv in editable mode:
</div>

```bash
$ source ./setup_dev_env.sh
---

## What's Included?

- [X] **[Core](packages/ragbits-core)** - Fundamental tools for working with prompts and LLMs.
- [X] **[Document Search](packages/ragbits-document-search)** - Handles vector search to retrieve relevant documents.
- [X] **[CLI](packages/ragbits-cli)** - The `ragbits` shell command, enabling tools such as GUI prompt management.
- [ ] **Flow Controls** - Manages multi-stage chat flows for performing advanced actions *(coming soon)*.
- [ ] **Structured Querying** - Queries structured data sources in a predictable manner *(coming soon)*.
- [ ] **Caching** - Adds a caching layer to reduce costs and response times *(coming soon)*.
- [ ] **Observability & Audit** - Tracks user queries and events for easier troubleshooting *(coming soon)*.
- [ ] **Guardrails** - Ensures response safety and relevance *(coming soon)*.

## Installation

To use the complete Ragbits stack, install the `ragbits` package:

```sh
pip install ragbits
```

Alternatively, you can use individual components of the stack by installing their respective packages: `ragbits-core`, `ragbits-document-search`, `ragbits-cli`.

## Quickstart

First, create a prompt and a model for the data used in the prompt:

```python
from pydantic import BaseModel
from ragbits.core.prompt import Prompt

class Dog(BaseModel):
    breed: str
    age: int
    temperament: str

class DogNamePrompt(Prompt[Dog, str]):
    system_prompt = """
    You are a dog name generator. You come up with funny names for dogs given the dog details.
    """

    user_prompt = """
    The dog is a {breed} breed, {age} years old, and has a {temperament} temperament.
    """
```
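The `{breed}`-style placeholders in `user_prompt` are filled from the fields of the input model. As a toy illustration of that idea (a plain dataclass and `str.format` standing in for ragbits' actual rendering code):

```python
from dataclasses import dataclass, asdict

@dataclass
class Dog:
    breed: str
    age: int
    temperament: str

USER_PROMPT = "The dog is a {breed} breed, {age} years old, and has a {temperament} temperament."

# Each model field maps onto the placeholder of the same name.
dog = Dog(breed="Golden Retriever", age=3, temperament="friendly")
print(USER_PROMPT.format(**asdict(dog)))
# The dog is a Golden Retriever breed, 3 years old, and has a friendly temperament.
```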

## Install pre-commit
Next, create an instance of the LLM and the prompt:

To ensure code quality we use pre-commit hook with several checks. Setup it by:
```python
from ragbits.core.llms.litellm import LiteLLM

llm = LiteLLM("gpt-4o")
example_dog = Dog(breed="Golden Retriever", age=3, temperament="friendly")
prompt = DogNamePrompt(example_dog)
```
pre-commit install

Finally, generate a response from the LLM using the prompt:

```python
response = await llm.generate(prompt)
print(f"Generated dog name: {response}")
```
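Note that `generate` is a coroutine, so outside an async context (such as a notebook) you would drive it with `asyncio.run`. A stdlib-only sketch of that pattern, where the hypothetical `fake_generate` stands in for the real `llm.generate` call:

```python
import asyncio

async def fake_generate(prompt: str) -> str:
    # Stand-in for llm.generate(prompt); a real call would await the model API.
    await asyncio.sleep(0)
    return f"Fluffy ({prompt})"

async def main() -> None:
    response = await fake_generate("Golden Retriever, 3, friendly")
    print(f"Generated dog name: {response}")

asyncio.run(main())
```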

All updated files will be reformatted and linted before the commit.
<!--
TODO:
Add links to quickstart guides for individual packages, demonstrating how to extend this with their functionality.
Add a link to the full tutorial.
-->

## License

To reformat and lint all files in the project, use:
Ragbits is licensed under the [MIT License](LICENSE).

`pre-commit run --all-files`
## Contributing

The used linters are configured in `.pre-commit-config.yaml`. You can use `pre-commit autoupdate` to bump tools to the latest versions.
We welcome contributions! Please read [CONTRIBUTING.md](CONTRIBUTING.md) for more information.
3 changes: 3 additions & 0 deletions packages/ragbits-core/src/ragbits/core/config.py
@@ -11,5 +11,8 @@ class CoreConfig(BaseModel):
    # Pattern used to search for prompt files
    prompt_path_pattern: str = "**/prompt_*.py"

    # Path to a function that returns an LLM object, e.g. "my_project.llms.get_llm"
    default_llm_factory: str | None = None


core_config = get_config_instance(CoreConfig, subproject="core")
60 changes: 60 additions & 0 deletions packages/ragbits-core/src/ragbits/core/llms/factory.py
@@ -0,0 +1,60 @@
import importlib

from ragbits.core.config import core_config
from ragbits.core.llms.base import LLM
from ragbits.core.llms.litellm import LiteLLM


def get_llm_from_factory(factory_path: str) -> LLM:
    """
    Get an instance of an LLM using a factory function specified by the user.

    Args:
        factory_path (str): The path to the factory function.

    Returns:
        LLM: An instance of the LLM.
    """
    module_name, function_name = factory_path.rsplit(".", 1)
    module = importlib.import_module(module_name)
    function = getattr(module, function_name)
    return function()


def has_default_llm() -> bool:
    """
    Check if the default LLM factory is set in the configuration.

    Returns:
        bool: Whether the default LLM factory is set.
    """
    return core_config.default_llm_factory is not None


def get_default_llm() -> LLM:
    """
    Get an instance of the default LLM using the factory function
    specified in the configuration.

    Returns:
        LLM: An instance of the default LLM.

    Raises:
        ValueError: If the default LLM factory is not set.
    """
    factory = core_config.default_llm_factory
    if factory is None:
        raise ValueError("Default LLM factory is not set")

    return get_llm_from_factory(factory)


def simple_litellm_factory() -> LLM:
    """
    A basic LLM factory that creates a LiteLLM instance with the default model,
    default options, and assumes that the API key is set in the environment.

    Returns:
        LLM: An instance of the LiteLLM.
    """
    return LiteLLM()
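The dotted-path resolution in `get_llm_from_factory` is a general stdlib pattern: split off the attribute name, import the module, then fetch the attribute. A self-contained sketch of the same mechanism, using `json.loads` as the target:

```python
import importlib
from typing import Any, Callable

def resolve_dotted_path(path: str) -> Callable[..., Any]:
    """Resolve a dotted path such as 'json.loads' to the named attribute."""
    # rsplit with maxsplit=1 separates "json.loads" into "json" and "loads".
    module_name, attr_name = path.rsplit(".", 1)
    module = importlib.import_module(module_name)
    return getattr(module, attr_name)

loads = resolve_dotted_path("json.loads")
print(loads('{"model": "mock_model"}'))
# {'model': 'mock_model'}
```

One caveat of this approach (shared by the factory above): a path without a dot raises `ValueError` during the `rsplit` unpacking.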
35 changes: 18 additions & 17 deletions packages/ragbits-core/src/ragbits/core/prompt/lab/app.py
@@ -14,8 +14,8 @@
from rich.console import Console

from ragbits.core.config import core_config
from ragbits.core.llms import LiteLLM
from ragbits.core.llms.clients import LiteLLMOptions
from ragbits.core.llms import LLM
from ragbits.core.llms.factory import get_llm_from_factory
from ragbits.core.prompt import Prompt
from ragbits.core.prompt.discovery import PromptDiscovery

@@ -30,14 +30,12 @@ class PromptState:
    Attributes:
        prompts (list): A list containing discovered prompts.
        rendered_prompt (Prompt): The most recently rendered Prompt instance.
        llm_model_name (str): The name of the selected LLM model.
        llm_api_key (str | None): The API key for the chosen LLM model.
        llm (LLM): The LLM instance to be used for generating responses.
    """

    prompts: list = field(default_factory=list)
    rendered_prompt: Prompt | None = None
    llm_model_name: str | None = None
    llm_api_key: str | None = None
    llm: LLM | None = None


def render_prompt(index: int, system_prompt: str, user_prompt: str, state: PromptState, *args: Any) -> PromptState:
@@ -99,15 +97,17 @@ def send_prompt_to_llm(state: PromptState) -> str:
    Returns:
        str: The response generated by the LLM.
    """
    assert state.llm_model_name is not None, "LLM model name is not set."
    llm_client = LiteLLM(model_name=state.llm_model_name, api_key=state.llm_api_key)

    Raises:
        ValueError: If the LLM model is not configured.
    """
    assert state.rendered_prompt is not None, "Prompt has not been rendered yet."

    if state.llm is None:
        raise ValueError("LLM model is not configured.")

    try:
        response = asyncio.run(
            llm_client.client.call(conversation=state.rendered_prompt.chat, options=LiteLLMOptions())
        )
        response = asyncio.run(state.llm.generate_raw(prompt=state.rendered_prompt))
    except Exception as e:  # pylint: disable=broad-except
        response = str(e)

@@ -136,7 +136,8 @@ def get_input_type_fields(obj: BaseModel | None) -> list[dict]:


def lab_app(  # pylint: disable=missing-param-doc
    file_pattern: str = core_config.prompt_path_pattern, llm_model: str | None = None, llm_api_key: str | None = None
    file_pattern: str = core_config.prompt_path_pattern,
    llm_factory: str | None = core_config.default_llm_factory,
) -> None:
    """
    Launches the interactive application for listing, rendering, and testing prompts
@@ -163,9 +164,8 @@ def lab_app(  # pylint: disable=missing-param-doc
    with gr.Blocks() as gr_app:
        prompts_state = gr.State(
            PromptState(
                llm_model_name=llm_model,
                llm_api_key=llm_api_key,
                prompts=list(prompts),
                llm=get_llm_from_factory(llm_factory) if llm_factory else None,
            )
        )

@@ -220,14 +220,15 @@ def show_split(index: int, state: gr.State) -> None:
        )
        gr.Textbox(label="Rendered User Prompt", value=rendered_user_prompt, interactive=False)

        llm_enabled = state.llm_model_name is not None
        llm_enabled = state.llm is not None
        prompt_ready = state.rendered_prompt is not None
        llm_request_button = gr.Button(
            value="Send to LLM",
            interactive=llm_enabled and prompt_ready,
        )
        gr.Markdown(
            "To enable this button, select an LLM model when starting the app in CLI.", visible=not llm_enabled
            "To enable this button, set an LLM factory function in the CLI options or your pyproject.toml.",
            visible=not llm_enabled,
        )
        gr.Markdown("To enable this button, render a prompt first.", visible=llm_enabled and not prompt_ready)
        llm_prompt_response = gr.Textbox(lines=10, label="LLM response")
5 changes: 5 additions & 0 deletions packages/ragbits-core/tests/unit/llms/factory/__init__.py
@@ -0,0 +1,5 @@
import sys
from pathlib import Path

# Add "llms" to sys.path
sys.path.append(str(Path(__file__).parent.parent))
@@ -0,0 +1,15 @@
from ragbits.core.config import core_config
from ragbits.core.llms.factory import get_default_llm
from ragbits.core.llms.litellm import LiteLLM


def test_get_default_llm(monkeypatch):
    """
    Test the get_default_llm function.
    """

    monkeypatch.setattr(core_config, "default_llm_factory", "factory.test_get_llm_from_factory.mock_llm_factory")

    llm = get_default_llm()
    assert isinstance(llm, LiteLLM)
    assert llm.model_name == "mock_model"
@@ -0,0 +1,22 @@
from ragbits.core.llms.factory import get_llm_from_factory
from ragbits.core.llms.litellm import LiteLLM


def mock_llm_factory() -> LiteLLM:
    """
    A mock LLM factory that creates a LiteLLM instance with a mock model name.

    Returns:
        LiteLLM: An instance of the LiteLLM.
    """
    return LiteLLM(model_name="mock_model")


def test_get_llm_from_factory():
    """
    Test the get_llm_from_factory function.
    """
    llm = get_llm_from_factory("factory.test_get_llm_from_factory.mock_llm_factory")

    assert isinstance(llm, LiteLLM)
    assert llm.model_name == "mock_model"
@@ -0,0 +1,20 @@
from ragbits.core.config import core_config
from ragbits.core.llms.factory import has_default_llm


def test_has_default_llm(monkeypatch):
    """
    Test the has_default_llm function when the default LLM factory is not set.
    """
    monkeypatch.setattr(core_config, "default_llm_factory", None)

    assert has_default_llm() is False


def test_has_default_llm_true(monkeypatch):
    """
    Test the has_default_llm function when the default LLM factory is set.
    """
    monkeypatch.setattr(core_config, "default_llm_factory", "my_project.llms.get_llm")

    assert has_default_llm() is True
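The `monkeypatch.setattr` calls above temporarily override a module-level config attribute and restore it afterwards. The same pattern can be sketched with the stdlib alone, using `unittest.mock.patch.object` on a stand-in config object (the names here are illustrative, not ragbits'):

```python
from types import SimpleNamespace
from unittest.mock import patch

# Stand-in for ragbits' core_config object.
config = SimpleNamespace(default_llm_factory=None)

def has_default_llm() -> bool:
    return config.default_llm_factory is not None

assert has_default_llm() is False
with patch.object(config, "default_llm_factory", "my_project.llms.get_llm"):
    assert has_default_llm() is True   # override visible inside the block
assert has_default_llm() is False      # original value restored on exit
```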