Rename public classes #12

Closed
wants to merge 1 commit
8 changes: 4 additions & 4 deletions README.md
@@ -4,7 +4,7 @@ This repository provides LangChain components to connect your LangChain applicat

## Features

- **🤖 LLMs**: The `ChatDatabricks` component allows you to access chat endpoints hosted on [Databricks Model Serving](https://www.databricks.com/product/model-serving), including state-of-the-art models such as Llama3, Mixtral, and DBRX, as well as your own fine-tuned models.
- **🤖 LLMs**: The `DatabricksChatModel` component allows you to access chat endpoints hosted on [Databricks Model Serving](https://www.databricks.com/product/model-serving), including state-of-the-art models such as Llama3, Mixtral, and DBRX, as well as your own fine-tuned models.
- **📐 Vector Store**: [Databricks Vector Search](https://www.databricks.com/product/machine-learning/vector-search) is a serverless similarity search engine that allows you to store a vector representation of your data, including metadata, in a vector database. With Vector Search, you can create auto-updating vector search indexes from Delta tables managed by Unity Catalog and query them with a simple API to return the most similar vectors.
- **🔢 Embeddings**: Provides components for working with embedding models hosted on [Databricks Model Serving](https://www.databricks.com/product/model-serving).

@@ -23,9 +23,9 @@ pip install langchain-databricks
Here's a simple example of how to use the `langchain-databricks` package.

```python
from langchain_databricks import ChatDatabricks
from langchain_databricks import DatabricksChatModel

chat_model = ChatDatabricks(endpoint="databricks-meta-llama-3-70b-instruct")
chat_model = DatabricksChatModel(endpoint="databricks-meta-llama-3-70b-instruct")

response = chat_model.invoke("What is MLflow?")
print(response)
```
@@ -37,7 +37,7 @@ For more detailed usage examples and documentation, please refer to the [LangCha

We welcome contributions to this project! Please follow the guidance below to set up the project for development and start contributing.

### Folk and clone the repository
### Fork and clone the repository

To contribute to this project, please follow the ["fork and pull request"](https://docs.github.com/en/get-started/exploring-projects-on-github/contributing-to-a-project) workflow. Please do not try to push directly to this repo unless you are a maintainer.

6 changes: 3 additions & 3 deletions libs/databricks/README.md
@@ -14,11 +14,11 @@ And you should configure credentials by setting the following environment variab

## Chat Models

`ChatDatabricks` class exposes chat models from Databricks.
`DatabricksChatModel` class exposes chat models from Databricks.

```python
from langchain_databricks import ChatDatabricks
from langchain_databricks import DatabricksChatModel

llm = ChatDatabricks()
llm = DatabricksChatModel()
llm.invoke("Sing a ballad of LangChain.")
```
8 changes: 4 additions & 4 deletions libs/databricks/langchain_databricks/__init__.py
@@ -1,7 +1,7 @@
from importlib import metadata

from langchain_databricks.chat_models import ChatDatabricks
from langchain_databricks.embeddings import DatabricksEmbeddings
from langchain_databricks.chat_models import DatabricksChatModel
from langchain_databricks.embeddings import DatabricksEmbeddingModel
from langchain_databricks.vectorstores import DatabricksVectorSearch

try:
@@ -12,8 +12,8 @@
del metadata # optional, avoids polluting the results of dir(__package__)

__all__ = [
"ChatDatabricks",
"DatabricksEmbeddings",
"DatabricksChatModel",
"DatabricksEmbeddingModel",
"DatabricksVectorSearch",
"__version__",
]
6 changes: 3 additions & 3 deletions libs/databricks/langchain_databricks/chat_models.py
@@ -54,7 +54,7 @@
logger = logging.getLogger(__name__)


class ChatDatabricks(BaseChatModel):
class DatabricksChatModel(BaseChatModel):
"""Databricks chat model integration.

Setup:
@@ -90,8 +90,8 @@ class ChatDatabricks(BaseChatModel):
Instantiate:
.. code-block:: python

from langchain_databricks import ChatDatabricks
llm = ChatDatabricks(
from langchain_databricks import DatabricksChatModel
llm = DatabricksChatModel(
endpoint="databricks-meta-llama-3-1-405b-instruct",
temperature=0,
max_tokens=500,
6 changes: 3 additions & 3 deletions libs/databricks/langchain_databricks/embeddings.py
@@ -6,7 +6,7 @@
from langchain_databricks.utils import get_deployment_client


class DatabricksEmbeddings(Embeddings, BaseModel):
class DatabricksEmbeddingModel(Embeddings, BaseModel):
"""Databricks embedding model integration.

Setup:
@@ -36,8 +36,8 @@ class DatabricksEmbeddings(Embeddings, BaseModel):

Instantiate:
.. code-block:: python
from langchain_databricks import DatabricksEmbeddings
embed = DatabricksEmbeddings(
from langchain_databricks import DatabricksEmbeddingModel
embed = DatabricksEmbeddingModel(
endpoint="databricks-bge-large-en",
)

14 changes: 7 additions & 7 deletions libs/databricks/tests/unit_tests/test_chat_models.py
@@ -23,7 +23,7 @@
from langchain_core.pydantic_v1 import BaseModel, Field

from langchain_databricks.chat_models import (
ChatDatabricks,
DatabricksChatModel,
_convert_dict_to_message,
_convert_dict_to_message_chunk,
_convert_message_to_dict,
@@ -153,20 +153,20 @@ def mock_client() -> Generator:


@pytest.fixture
def llm() -> ChatDatabricks:
return ChatDatabricks(
def llm() -> DatabricksChatModel:
return DatabricksChatModel(
endpoint="databricks-meta-llama-3-70b-instruct", target_uri="databricks"
)


def test_dict(llm: ChatDatabricks) -> None:
def test_dict(llm: DatabricksChatModel) -> None:
d = llm.dict()
assert d["_type"] == "chat-databricks"
assert d["endpoint"] == "databricks-meta-llama-3-70b-instruct"
assert d["target_uri"] == "databricks"


def test_chat_model_predict(llm: ChatDatabricks) -> None:
def test_chat_model_predict(llm: DatabricksChatModel) -> None:
res = llm.invoke(
[
{"role": "system", "content": "You are a helpful assistant."},
@@ -176,7 +176,7 @@ def test_chat_model_predict(llm: ChatDatabricks) -> None:
assert res.content == _MOCK_CHAT_RESPONSE["choices"][0]["message"]["content"] # type: ignore[index]


def test_chat_model_stream(llm: ChatDatabricks) -> None:
def test_chat_model_stream(llm: DatabricksChatModel) -> None:
res = llm.stream(
[
{"role": "system", "content": "You are a helpful assistant."},
Expand All @@ -187,7 +187,7 @@ def test_chat_model_stream(llm: ChatDatabricks) -> None:
assert chunk.content == expected["choices"][0]["delta"]["content"] # type: ignore[index]


def test_chat_model_bind_tools(llm: ChatDatabricks) -> None:
def test_chat_model_bind_tools(llm: DatabricksChatModel) -> None:
class GetWeather(BaseModel):
"""Get the current weather in a given location"""

10 changes: 5 additions & 5 deletions libs/databricks/tests/unit_tests/test_embeddings.py
@@ -6,7 +6,7 @@
import pytest
from mlflow.deployments import BaseDeploymentClient # type: ignore[import-untyped]

from langchain_databricks import DatabricksEmbeddings
from langchain_databricks import DatabricksEmbeddingModel


def _mock_embeddings(endpoint: str, inputs: Dict[str, Any]) -> Dict[str, Any]:
@@ -34,16 +34,16 @@ def mock_client() -> Generator:


@pytest.fixture
def embeddings() -> DatabricksEmbeddings:
return DatabricksEmbeddings(
def embeddings() -> DatabricksEmbeddingModel:
return DatabricksEmbeddingModel(
endpoint="text-embedding-3-small",
documents_params={"fruit": "apple"},
query_params={"fruit": "banana"},
)


def test_embed_documents(
mock_client: BaseDeploymentClient, embeddings: DatabricksEmbeddings
mock_client: BaseDeploymentClient, embeddings: DatabricksEmbeddingModel
) -> None:
documents = ["foo"] * 30
output = embeddings.embed_documents(documents)
@@ -57,7 +57,7 @@ def test_embed_documents(


def test_embed_query(
mock_client: BaseDeploymentClient, embeddings: DatabricksEmbeddings
mock_client: BaseDeploymentClient, embeddings: DatabricksEmbeddingModel
) -> None:
query = "foo bar"
output = embeddings.embed_query(query)
4 changes: 2 additions & 2 deletions libs/databricks/tests/unit_tests/test_imports.py
@@ -1,8 +1,8 @@
from langchain_databricks import __all__

EXPECTED_ALL = [
"ChatDatabricks",
"DatabricksEmbeddings",
"DatabricksChatModel",
"DatabricksEmbeddingModel",
"DatabricksVectorSearch",
"__version__",
]
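Taken together, the renames in this PR change the package's public import surface. A downstream project that needs to work both before and after the rename could use a try/except import shim like the sketch below. This is a hedged sketch, not part of the PR: the stand-in module is fabricated with `types.ModuleType` so the snippet runs without `langchain-databricks` installed, and the shim idiom itself is a common convention rather than anything this diff prescribes.

```python
import sys
import types

# Stand-in for an installed langchain_databricks that still exports the OLD
# name (ChatDatabricks). Fabricated here only so the sketch is runnable;
# real downstream code would import the published package directly.
fake = types.ModuleType("langchain_databricks")

class ChatDatabricks:  # old public name, pre-rename
    def __init__(self, endpoint: str):
        self.endpoint = endpoint

fake.ChatDatabricks = ChatDatabricks
sys.modules["langchain_databricks"] = fake

# Downstream compatibility shim: prefer the new name introduced by this PR,
# fall back to the old one on releases that predate the rename.
try:
    from langchain_databricks import DatabricksChatModel
except ImportError:
    from langchain_databricks import ChatDatabricks as DatabricksChatModel

llm = DatabricksChatModel(endpoint="databricks-meta-llama-3-70b-instruct")
print(llm.endpoint)  # -> databricks-meta-llama-3-70b-instruct
```

Because the stand-in module only exports the old name, the `from … import DatabricksChatModel` line raises `ImportError` and the fallback alias is used, so callers see one name either way.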