Bagatur/community #14257

Closed

baskaryan wants to merge 62 commits into master from bagatur/community

Commits (62)
88d3970  scripts (baskaryan, Dec 4, 2023)
2d18c65  wip (baskaryan, Dec 5, 2023)
4965f9a  rm (baskaryan, Dec 5, 2023)
bf7b59e  ci (baskaryan, Dec 5, 2023)
ee1478b  update (baskaryan, Dec 5, 2023)
5535c78  deps (baskaryan, Dec 5, 2023)
f5ed74d  wip (baskaryan, Dec 6, 2023)
31c9081  merge (baskaryan, Dec 6, 2023)
98c4f2a  merge (baskaryan, Dec 6, 2023)
f56c035  latest (baskaryan, Dec 6, 2023)
c4f32cb  tests (baskaryan, Dec 6, 2023)
960faa6  more (baskaryan, Dec 6, 2023)
a212971  more (baskaryan, Dec 6, 2023)
e08017f  merge (baskaryan, Dec 6, 2023)
c6c9d93  ci (baskaryan, Dec 6, 2023)
f922f9f  more (baskaryan, Dec 6, 2023)
f62331a  script (baskaryan, Dec 6, 2023)
8ccc18f  more (baskaryan, Dec 6, 2023)
75ade61  fmt (baskaryan, Dec 7, 2023)
ce884f8  more (baskaryan, Dec 7, 2023)
11fc0a5  poetry (baskaryan, Dec 7, 2023)
d6403c6  poetry (baskaryan, Dec 7, 2023)
fd2fe4a  poetry (baskaryan, Dec 7, 2023)
6b1af2d  more (baskaryan, Dec 7, 2023)
85eae95  anyio (baskaryan, Dec 7, 2023)
12039a0  more (baskaryan, Dec 7, 2023)
280aec4  more (baskaryan, Dec 7, 2023)
8226b81  more (baskaryan, Dec 7, 2023)
a66df25  more (baskaryan, Dec 7, 2023)
5631e7e  more (baskaryan, Dec 7, 2023)
cea3d61  more (baskaryan, Dec 7, 2023)
de690b0  stubs (baskaryan, Dec 7, 2023)
6a0a7a7  poetry (baskaryan, Dec 7, 2023)
bbc795b  override llm config (efriis, Dec 7, 2023)
f0304a8  Merge branch 'bagatur/community' of github.com:langchain-ai/langchain… (efriis, Dec 7, 2023)
d7793a0  merge (baskaryan, Dec 7, 2023)
deab168  poetry (baskaryan, Dec 7, 2023)
9a5a52d  nit (baskaryan, Dec 7, 2023)
3249916  fmt (baskaryan, Dec 7, 2023)
243465d  poetry (baskaryan, Dec 7, 2023)
c3232a8  make (baskaryan, Dec 7, 2023)
60fc995  test (baskaryan, Dec 7, 2023)
af2ad3b  conftest (baskaryan, Dec 7, 2023)
91dd47b  ignore (baskaryan, Dec 7, 2023)
a7cbdcc  namespaces (baskaryan, Dec 8, 2023)
dee5fdf  extra (baskaryan, Dec 8, 2023)
89f74bf  rm mypy test cache (baskaryan, Dec 8, 2023)
f541e02  poetry (baskaryan, Dec 8, 2023)
a2c4479  nit (baskaryan, Dec 8, 2023)
279155f  switch cache (baskaryan, Dec 8, 2023)
34709ec  examples (baskaryan, Dec 8, 2023)
3e0c5a3  mypy cache (baskaryan, Dec 8, 2023)
0de6e09  examples (baskaryan, Dec 8, 2023)
9e745c8  examples (baskaryan, Dec 8, 2023)
d1a0140  lint (baskaryan, Dec 8, 2023)
76fb19f  telegrame (baskaryan, Dec 8, 2023)
7051e83  mv more (baskaryan, Dec 8, 2023)
247950e  hash (baskaryan, Dec 8, 2023)
b46802a  key (baskaryan, Dec 8, 2023)
0de98e7  Merge branch 'master' into bagatur/community (baskaryan, Dec 8, 2023)
251fd93  adapters (baskaryan, Dec 8, 2023)
b56bfd5  document script (baskaryan, Dec 8, 2023)
The table of contents is too big for display.
2 changes: 2 additions & 0 deletions .github/scripts/check_diff.py
@@ -5,6 +5,8 @@
"libs/core",
"libs/langchain",
"libs/experimental",
"libs/community",
"libs/partners/openai",
}

if __name__ == "__main__":
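The rest of check_diff.py is collapsed in this view, so only the extended directory set is visible. As a rough, hypothetical sketch (variable names and output format are assumptions, not taken from the actual script), a set like this is typically used to decide which libraries' CI jobs need to run for a given list of changed files:

import json
import sys

# Assumed name for the set extended in the hunk above.
LIB_DIRS = {
    "libs/core",
    "libs/langchain",
    "libs/experimental",
    "libs/community",
    "libs/partners/openai",
}

if __name__ == "__main__":
    changed_files = sys.argv[1:]  # file paths passed in by the workflow
    dirs_to_run = sorted(
        lib for lib in LIB_DIRS if any(f.startswith(lib) for f in changed_files)
    )
    # Emit JSON so a GitHub Actions job matrix can consume the result.
    print(json.dumps({"dirs-to-run": dirs_to_run}))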
7 changes: 4 additions & 3 deletions .github/workflows/_lint.yml
@@ -85,7 +85,8 @@ jobs:
with:
path: |
${{ env.WORKDIR }}/.mypy_cache
key: mypy-${{ runner.os }}-${{ runner.arch }}-py${{ matrix.python-version }}-${{ inputs.working-directory }}-${{ hashFiles(format('{0}/poetry.lock', env.WORKDIR)) }}
key: mypy-lint-${{ runner.os }}-${{ runner.arch }}-py${{ matrix.python-version }}-${{ inputs.working-directory }}-${{ hashFiles(format('{0}/poetry.lock', env.WORKDIR)) }}


- name: Analysing the code with our lint
working-directory: ${{ inputs.working-directory }}
@@ -105,13 +106,13 @@
run: |
poetry install --with test

- name: Get .mypy_cache to speed up mypy
- name: Get .mypy_cache_test to speed up mypy
uses: actions/cache@v3
env:
SEGMENT_DOWNLOAD_TIMEOUT_MIN: "2"
with:
path: |
${{ env.WORKDIR }}/.mypy_cache
${{ env.WORKDIR }}/.mypy_cache_test
key: mypy-test-${{ runner.os }}-${{ runner.arch }}-py${{ matrix.python-version }}-${{ inputs.working-directory }}-${{ hashFiles(format('{0}/poetry.lock', env.WORKDIR)) }}

- name: Analysing the code with our lint
13 changes: 13 additions & 0 deletions .github/workflows/langchain_community_release.yml
@@ -0,0 +1,13 @@
---
name: libs/community Release

on:
  workflow_dispatch:  # Allows the workflow to be triggered manually from the GitHub UI

jobs:
release:
uses:
./.github/workflows/_release.yml
with:
working-directory: libs/community
secrets: inherit
13 changes: 13 additions & 0 deletions .github/workflows/langchain_openai_release.yml
@@ -0,0 +1,13 @@
---
name: libs/partners/openai Release

on:
  workflow_dispatch:  # Allows the workflow to be triggered manually from the GitHub UI

jobs:
release:
uses:
./.github/workflows/_release.yml
with:
      working-directory: libs/partners/openai
secrets: inherit
9 changes: 9 additions & 0 deletions libs/community/langchain_community/__init__.py
@@ -0,0 +1,9 @@
"""Main entrypoint into package."""
from importlib import metadata

try:
__version__ = metadata.version(__package__)
except metadata.PackageNotFoundError:
# Case where package metadata is not available.
__version__ = ""
del metadata # optional, avoids polluting the results of dir(__package__)
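A quick sanity check of the version lookup above (usage sketch, not part of the diff; assumes langchain-community is installed in the current environment):

import langchain_community

# Prints the installed version, or "" when package metadata is unavailable
# (for example, when running from a source tree that was never installed).
print(langchain_community.__version__)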
123 changes: 123 additions & 0 deletions libs/community/langchain_community/agent_toolkits/__init__.py
@@ -0,0 +1,123 @@
"""Agent toolkits contain integrations with various resources and services.

LangChain has a large ecosystem of integrations with various external resources
like local and remote file systems, APIs and databases.

These integrations allow developers to create versatile applications that combine the
power of LLMs with the ability to access, interact with and manipulate external
resources.

When developing an application, developers should inspect the capabilities and
permissions of the tools that underlie the given agent toolkit, and determine
whether permissions of the given toolkit are appropriate for the application.

See [Security](https://python.langchain.com/docs/security) for more information.
"""
from pathlib import Path
from typing import Any

from langchain_core._api.path import as_import_path

from langchain_community.agent_toolkits.ainetwork.toolkit import AINetworkToolkit
from langchain_community.agent_toolkits.amadeus.toolkit import AmadeusToolkit
from langchain_community.agent_toolkits.azure_cognitive_services import (
AzureCognitiveServicesToolkit,
)
from langchain_community.agent_toolkits.conversational_retrieval.openai_functions import ( # noqa: E501
create_conversational_retrieval_agent,
)
from langchain_community.agent_toolkits.file_management.toolkit import (
FileManagementToolkit,
)
from langchain_community.agent_toolkits.gmail.toolkit import GmailToolkit
from langchain_community.agent_toolkits.jira.toolkit import JiraToolkit
from langchain_community.agent_toolkits.json.base import create_json_agent
from langchain_community.agent_toolkits.json.toolkit import JsonToolkit
from langchain_community.agent_toolkits.multion.toolkit import MultionToolkit
from langchain_community.agent_toolkits.nasa.toolkit import NasaToolkit
from langchain_community.agent_toolkits.nla.toolkit import NLAToolkit
from langchain_community.agent_toolkits.office365.toolkit import O365Toolkit
from langchain_community.agent_toolkits.openapi.base import create_openapi_agent
from langchain_community.agent_toolkits.openapi.toolkit import OpenAPIToolkit
from langchain_community.agent_toolkits.playwright.toolkit import (
PlayWrightBrowserToolkit,
)
from langchain_community.agent_toolkits.powerbi.base import create_pbi_agent
from langchain_community.agent_toolkits.powerbi.chat_base import create_pbi_chat_agent
from langchain_community.agent_toolkits.powerbi.toolkit import PowerBIToolkit
from langchain_community.agent_toolkits.slack.toolkit import SlackToolkit
from langchain_community.agent_toolkits.spark_sql.base import create_spark_sql_agent
from langchain_community.agent_toolkits.spark_sql.toolkit import SparkSQLToolkit
from langchain_community.agent_toolkits.sql.base import create_sql_agent
from langchain_community.agent_toolkits.sql.toolkit import SQLDatabaseToolkit
from langchain_community.agent_toolkits.steam.toolkit import SteamToolkit
from langchain_community.agent_toolkits.vectorstore.base import (
create_vectorstore_agent,
create_vectorstore_router_agent,
)
from langchain_community.agent_toolkits.vectorstore.toolkit import (
VectorStoreInfo,
VectorStoreRouterToolkit,
VectorStoreToolkit,
)
from langchain_community.agent_toolkits.zapier.toolkit import ZapierToolkit
from langchain_community.tools.retriever import create_retriever_tool

DEPRECATED_AGENTS = [
"create_csv_agent",
"create_pandas_dataframe_agent",
"create_xorbits_agent",
"create_python_agent",
"create_spark_dataframe_agent",
]


def __getattr__(name: str) -> Any:
"""Get attr name."""
if name in DEPRECATED_AGENTS:
relative_path = as_import_path(Path(__file__).parent, suffix=name)
old_path = "langchain." + relative_path
new_path = "langchain_experimental." + relative_path
raise ImportError(
f"{name} has been moved to langchain experimental. "
"See https://github.com/langchain-ai/langchain/discussions/11680"
"for more information.\n"
f"Please update your import statement from: `{old_path}` to `{new_path}`."
)
raise AttributeError(f"{name} does not exist")


__all__ = [
"AINetworkToolkit",
"AmadeusToolkit",
"AzureCognitiveServicesToolkit",
"FileManagementToolkit",
"GmailToolkit",
"JiraToolkit",
"JsonToolkit",
"MultionToolkit",
"NasaToolkit",
"NLAToolkit",
"O365Toolkit",
"OpenAPIToolkit",
"PlayWrightBrowserToolkit",
"PowerBIToolkit",
"SlackToolkit",
"SteamToolkit",
"SQLDatabaseToolkit",
"SparkSQLToolkit",
"VectorStoreInfo",
"VectorStoreRouterToolkit",
"VectorStoreToolkit",
"ZapierToolkit",
"create_json_agent",
"create_openapi_agent",
"create_pbi_agent",
"create_pbi_chat_agent",
"create_spark_sql_agent",
"create_sql_agent",
"create_vectorstore_agent",
"create_vectorstore_router_agent",
"create_conversational_retrieval_agent",
"create_retriever_tool",
]
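The module-level __getattr__ above turns imports of agents that moved to langchain_experimental into an explicit ImportError instead of a silent failure. A small illustration of both paths (hypothetical usage, not part of the diff):

from langchain_community.agent_toolkits import GmailToolkit  # re-exported name, resolves normally

try:
    # Deprecated: the pandas agent now lives in langchain_experimental.
    from langchain_community.agent_toolkits import create_pandas_dataframe_agent
except ImportError as err:
    print(err)  # message points at the new langchain_experimental import path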
88 changes: 88 additions & 0 deletions libs/community/langchain_community/agent_toolkits/conversational_retrieval/openai_functions.py
@@ -0,0 +1,88 @@
from __future__ import annotations

from typing import Any, List, Optional, TYPE_CHECKING

from langchain_core.language_models import BaseLanguageModel
from langchain_core.memory import BaseMemory
from langchain_core.messages import SystemMessage
from langchain_core.prompts.chat import MessagesPlaceholder
from langchain_core.tools import BaseTool

if TYPE_CHECKING:
from langchain.agents.agent import AgentExecutor


def _get_default_system_message() -> SystemMessage:
return SystemMessage(
content=(
"Do your best to answer the questions. "
"Feel free to use any tools available to look up "
"relevant information, only if necessary"
)
)

def create_conversational_retrieval_agent(
llm: BaseLanguageModel,
tools: List[BaseTool],
remember_intermediate_steps: bool = True,
memory_key: str = "chat_history",
system_message: Optional[SystemMessage] = None,
verbose: bool = False,
max_token_limit: int = 2000,
**kwargs: Any,
) -> AgentExecutor:
"""A convenience method for creating a conversational retrieval agent.

Args:
        llm: The language model to use; should be a ChatOpenAI instance.
tools: A list of tools the agent has access to
remember_intermediate_steps: Whether the agent should remember intermediate
steps or not. Intermediate steps refer to prior action/observation
pairs from previous questions. The benefit of remembering these is if
there is relevant information in there, the agent can use it to answer
follow up questions. The downside is it will take up more tokens.
memory_key: The name of the memory key in the prompt.
system_message: The system message to use. By default, a basic one will
be used.
        verbose: Whether the final AgentExecutor should be verbose.
            Defaults to False.
max_token_limit: The max number of tokens to keep around in memory.
Defaults to 2000.

Returns:
An agent executor initialized appropriately
"""
from langchain.agents.agent import AgentExecutor
from langchain.agents.openai_functions_agent.agent_token_buffer_memory import (
AgentTokenBufferMemory,
)
from langchain.agents.openai_functions_agent.base import OpenAIFunctionsAgent
from langchain.memory.token_buffer import ConversationTokenBufferMemory

if remember_intermediate_steps:
memory: BaseMemory = AgentTokenBufferMemory(
memory_key=memory_key, llm=llm, max_token_limit=max_token_limit
)
else:
memory = ConversationTokenBufferMemory(
memory_key=memory_key,
return_messages=True,
output_key="output",
llm=llm,
max_token_limit=max_token_limit,
)

_system_message = system_message or _get_default_system_message()
prompt = OpenAIFunctionsAgent.create_prompt(
system_message=_system_message,
extra_prompt_messages=[MessagesPlaceholder(variable_name=memory_key)],
)
agent = OpenAIFunctionsAgent(llm=llm, tools=tools, prompt=prompt)
return AgentExecutor(
agent=agent,
tools=tools,
memory=memory,
verbose=verbose,
return_intermediate_steps=remember_intermediate_steps,
**kwargs,
)
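A hedged usage sketch for the helper above (not part of the diff; assumes langchain, langchain-community, the openai and wikipedia packages, and an OPENAI_API_KEY are available, and that ChatOpenAI and WikipediaRetriever keep their langchain_community import paths):

from langchain_community.agent_toolkits.conversational_retrieval.openai_functions import (
    create_conversational_retrieval_agent,
)
from langchain_community.chat_models import ChatOpenAI
from langchain_community.retrievers import WikipediaRetriever
from langchain_community.tools.retriever import create_retriever_tool

llm = ChatOpenAI(temperature=0)
retriever = WikipediaRetriever(top_k_results=2)
tool = create_retriever_tool(
    retriever,
    name="wikipedia_search",
    description="Look up background information on Wikipedia.",
)

agent_executor = create_conversational_retrieval_agent(llm, [tool], verbose=True)
result = agent_executor.invoke({"input": "Who created the Python programming language?"})
print(result["output"])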
53 changes: 53 additions & 0 deletions libs/community/langchain_community/agent_toolkits/json/base.py
@@ -0,0 +1,53 @@
"""Json agent."""
from __future__ import annotations
from typing import Any, Dict, List, Optional, TYPE_CHECKING

from langchain_core.callbacks import BaseCallbackManager
from langchain_core.language_models import BaseLanguageModel

from langchain_community.agent_toolkits.json.prompt import JSON_PREFIX, JSON_SUFFIX
from langchain_community.agent_toolkits.json.toolkit import JsonToolkit

if TYPE_CHECKING:
from langchain.agents.agent import AgentExecutor


def create_json_agent(
llm: BaseLanguageModel,
toolkit: JsonToolkit,
callback_manager: Optional[BaseCallbackManager] = None,
prefix: str = JSON_PREFIX,
suffix: str = JSON_SUFFIX,
format_instructions: Optional[str] = None,
input_variables: Optional[List[str]] = None,
verbose: bool = False,
agent_executor_kwargs: Optional[Dict[str, Any]] = None,
**kwargs: Any,
) -> AgentExecutor:
"""Construct a json agent from an LLM and tools."""
from langchain.agents.agent import AgentExecutor
from langchain.agents.mrkl.base import ZeroShotAgent
    from langchain.chains.llm import LLMChain

    tools = toolkit.get_tools()
    prompt_params = (
        {"format_instructions": format_instructions}
        if format_instructions is not None
        else {}
    )
prompt = ZeroShotAgent.create_prompt(
tools,
prefix=prefix,
suffix=suffix,
input_variables=input_variables,
**prompt_params,
)
llm_chain = LLMChain(
llm=llm,
prompt=prompt,
callback_manager=callback_manager,
)
tool_names = [tool.name for tool in tools]
agent = ZeroShotAgent(llm_chain=llm_chain, allowed_tools=tool_names, **kwargs)
return AgentExecutor.from_agent_and_tools(
agent=agent,
tools=tools,
callback_manager=callback_manager,
verbose=verbose,
**(agent_executor_kwargs or {}),
)
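A hedged usage sketch for create_json_agent (not part of the diff; the JsonSpec and JsonToolkit import paths are assumptions that mirror the langchain_community layout used above):

from langchain_community.agent_toolkits.json.base import create_json_agent
from langchain_community.agent_toolkits.json.toolkit import JsonToolkit
from langchain_community.chat_models import ChatOpenAI
from langchain_community.tools.json.tool import JsonSpec

# Any nested dict works; a tiny OpenAPI-like document keeps the example self-contained.
spec = JsonSpec(dict_={"openapi": "3.0.0", "info": {"title": "Pets", "version": "1.0"}})
toolkit = JsonToolkit(spec=spec)

agent_executor = create_json_agent(
    llm=ChatOpenAI(temperature=0),
    toolkit=toolkit,
    verbose=True,
)
agent_executor.invoke({"input": "What is the title of this API spec?"})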
57 changes: 57 additions & 0 deletions libs/community/langchain_community/agent_toolkits/nla/tool.py
@@ -0,0 +1,57 @@
"""Tool for interacting with a single API with natural language definition."""

from __future__ import annotations
from typing import Any, Optional, TYPE_CHECKING

from langchain_core.language_models import BaseLanguageModel
from langchain_core.tools import Tool

from langchain_community.tools.openapi.utils.api_models import APIOperation
from langchain_community.tools.openapi.utils.openapi_utils import OpenAPISpec
from langchain_community.utilities.requests import Requests

if TYPE_CHECKING:
from langchain.chains.api.openapi.chain import OpenAPIEndpointChain


class NLATool(Tool):
"""Natural Language API Tool."""

@classmethod
def from_open_api_endpoint_chain(
cls, chain: OpenAPIEndpointChain, api_title: str
) -> "NLATool":
"""Convert an endpoint chain to an API endpoint tool."""
expanded_name = (
f'{api_title.replace(" ", "_")}.{chain.api_operation.operation_id}'
)
description = (
f"I'm an AI from {api_title}. Instruct what you want,"
" and I'll assist via an API with description:"
f" {chain.api_operation.description}"
)
return cls(name=expanded_name, func=chain.run, description=description)

@classmethod
def from_llm_and_method(
cls,
llm: BaseLanguageModel,
path: str,
method: str,
spec: OpenAPISpec,
requests: Optional[Requests] = None,
verbose: bool = False,
return_intermediate_steps: bool = False,
**kwargs: Any,
) -> "NLATool":
"""Instantiate the tool from the specified path and method."""
api_operation = APIOperation.from_openapi_spec(spec, path, method)
chain = OpenAPIEndpointChain.from_api_operation(
api_operation,
llm,
requests=requests,
verbose=verbose,
return_intermediate_steps=return_intermediate_steps,
**kwargs,
)
return cls.from_open_api_endpoint_chain(chain, spec.info.title)
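A hedged usage sketch for NLATool.from_llm_and_method (not part of the diff; the NLATool module path, spec URL, and endpoint path are placeholders/assumptions):

from langchain_community.agent_toolkits.nla.tool import NLATool  # assumed module path
from langchain_community.chat_models import ChatOpenAI
from langchain_community.tools.openapi.utils.openapi_utils import OpenAPISpec

spec = OpenAPISpec.from_url("https://example.com/openapi.yaml")  # placeholder spec URL
tool = NLATool.from_llm_and_method(
    llm=ChatOpenAI(temperature=0),
    path="/pets",  # hypothetical path that exists in the spec
    method="get",
    spec=spec,
)
print(tool.name, "-", tool.description)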