connectchain is an enterprise-grade Generative AI framework and set of utilities for AI-enabled applications. It is designed to bridge the gap between enterprise needs and what is available in existing frameworks.
Primary objectives include:
- A login utility for API-based LLM services that integrates with Enterprise Auth Service (EAS), simplifying generation of the JWT token that is then passed to the model service provider.
- Configuration-based outbound proxy support at the model level to allow integration with enterprise-level security requirements.
- A set of tools to provide greater control over generated prompts. This is done by adding hooks to the existing langchain packages.
pip install connectchain
Connectchain works with a combination of environment variables and a configuration .yml file. Environment variables are defined in the config.yml and their corresponding values are set in the .env file. The path to the config.yml is defined as a variable in the .env file. You MUST create both a config.yml and a .env file to use the module. The example config file can be found at ./connectchain/config/example.config.yml. See the example env file for more details. You can copy and rename both files, replacing the required values with your IDs and secrets and adding any additional supported options as needed.
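As a rough, hypothetical sketch of how the two files relate (the real keys and structure are in example.config.yml and the example env file; the variable names below are placeholders), the config.yml names the environment variables whose values live in the .env file. For example, the .env file might set the config path and EAS credentials under names such as MY_EAS_ID and MY_EAS_SECRET, and the config would then reference those names:

eas:
  id_key: MY_EAS_ID          # name of the env variable holding your id; its value is set in .env
  secret_key: MY_EAS_SECRET  # name of the env variable holding your secret; its value is set in .env
# ... models, proxy, cert and other supported sections follow (see example.config.yml)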
Building on the LangChain Expression Language (LCEL), connectchain adds a model() method. You can execute a chain following the usual LCEL syntax with a minor tweak: when you add the () to the model, it gets instantiated on the fly:
from connectchain.lcel import model
from langchain.prompts import PromptTemplate
...
prompt = PromptTemplate(
    input_variables=["music_genre"],
    template="Tell me about {music_genre} music."
)
# using langchain directly this would be:
# chain = prompt | model
chain = prompt | model()
out = chain.invoke({"music_genre": "classical"})
print(out)
You can have multiple model configurations defined in the config.yml. These are accessed via connectchain's LCEL support by passing the model configuration index (the key under which the model's configuration is defined in the config.yml) to the model method of connectchain. The following is an example using 2 as the model configuration index, assuming it is defined in the config.yml > models section:
chain = prompt | model('2')
Optionally, the eas, proxy and cert sections of the config.yml can be overridden by model definitions. To do this, simply define those sections in a model config (again in the config.yml) and re-define any values you want to override. For example, if you want to override all three options for a model, you can define it as follows:
models:
  foo:
    eas:
      id_key: ... # Env key for id
      secret_key: ... # Env key for secret
      scope: [
        # ...
      ]
    cert:
      cert_path: /path/to/cert
      cert_name: model_specific_cert.crt
      cert_size: 2048
    proxy:
      host: proxy.foo.com
      port: 8080
    # ... continue the model configuration
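Following the same pattern as above, the overriding model configuration is then selected by its key, e.g.:

chain = prompt | model('foo')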
Add logging or auditing to the chain:
from connectchain.lcel import Logger
...
class PrintLogger(Logger):
    def print(self, payload):
        print(payload)
...
logger = PrintLogger()
chain = prompt | logger.log() | model() | logger.log()
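As a sketch of invoking the logged chain (reusing the prompt from the LCEL example above), each logger.log() stage presumably passes the payload flowing through the chain to the PrintLogger's print method:

out = chain.invoke({"music_genre": "jazz"})
print(out)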
connectchain also provides a portable solution for "regular" prompt template-based requests: portable in the sense that there is no need to directly import a model provider package (e.g. openai). Additionally, prompts can be validated before being sent to the LLM.
from connectchain.orchestrators import PortableOrchestrator
orchestrator = PortableOrchestrator.from_prompt_template(
    prompt_template="Tell me about the climate in {area_of_interest}.", input_variables=["area_of_interest"])
output = orchestrator.run('Peru')
Again, you can have multiple models defined in the config.yml. For example, a second model could be defined under the key '2' in the config, configuring a different model, a different API and even a different EAS; it would be used like this:
orchestrator = PortableOrchestrator.from_prompt_template(
    prompt_template="Tell me about the climate in {area_of_interest}.", input_variables=["area_of_interest"], index='2')
You can also use the login utility directly to generate the auth token and pass it to the model provider yourself:
from connectchain.utils import get_token_from_env
...
auth_token = get_token_from_env()
...
openai.api_key = auth_token
The same token can be used in lieu of the OPENAI_API_KEY:
from langchain.llms import AzureOpenAI
from langchain.chains import LLMChain

my_api_base = "<insert_your_api_base_here>"
llm = AzureOpenAI(
    engine='gpt-35',
    model_name='gpt-35-turbo',
    openai_api_key=auth_token,
    openai_api_base=my_api_base)
chain = LLMChain(llm=llm, prompt=prompt)
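As a usage sketch, assuming prompt is the PromptTemplate from the earlier example (with the single music_genre input variable), the chain can then be run as usual:

output = chain.run('classical')
print(output)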
connectchain.prompts: A package to provide greater control over generated prompts before they are passed to the LLM by providing an entrypoint for sanitizer implementations.
import re

from connectchain.prompts import ValidPromptTemplate
from connectchain.utils.exceptions import OperationNotPermittedException

def my_sanitizer(query: str) -> str:
    """IMPORTANT: This is a simplified example designed to showcase concepts and should not be used
    as a reference for production code. The features are experimental and may not be suitable for
    use in sensitive environments or without additional safeguards and testing.
    Any use of this code is at your own risk."""
    pattern = r'BADWORD'
    if re.search(pattern, query):
        print("BADWORD found!")
        raise OperationNotPermittedException("Illegal execution detected: {}".format(query))
    else:
        return query
...
prompt_template = "Tell me about {food_type} production."
prompt = ValidPromptTemplate(
    output_sanitizer=my_sanitizer,
    input_variables=["food_type"],
    template=prompt_template
)
chain = LLMChain(llm=llm, prompt=prompt)
# the following will throw an exception
output = chain.run('BADWORD')
print(output)
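For input that passes the sanitizer, the chain runs as usual; a quick sketch:

output = chain.run('wheat')
print(output)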
connectchain.chains: We add hooks to improve control over what is executed by providing an entrypoint for sanitizer implementations.
import re

from connectchain.chains import ValidLLMChain
from connectchain.utils.exceptions import OperationNotPermittedException

def my_sanitizer(query: str) -> str:
    """IMPORTANT: This is a simplified example designed to showcase concepts and should not be used
    as a reference for production code. The features are experimental and may not be suitable for
    use in sensitive environments or without additional safeguards and testing.
    Any use of this code is at your own risk."""
    # Define your own logic here. For example, you could call an API to verify the content.
    # This placeholder simply rejects BADWORD and passes everything else through unchanged.
    if re.search(r'BADWORD', query):
        raise OperationNotPermittedException("Illegal execution detected: {}".format(query))
    return query
chain = ValidLLMChain(llm=llm, prompt=prompt, output_sanitizer=my_sanitizer)
output = chain.run('drought resistant wheat')
print(output)
try:
    output = chain.run('BADWORD')
except OperationNotPermittedException as e:
    print(e)
connectchain.tools: We add hooks to improve control over the code that is executed by providing an entrypoint for sanitizer implementations.
from connectchain.tools import ValidPythonREPLTool

def my_sanitizer(query: str) -> str:
    """IMPORTANT: This is a simplified example designed to showcase concepts and should not be used
    as a reference for production code. The features are experimental and may not be suitable for
    use in sensitive environments or without additional safeguards and testing.
    Any use of this code is at your own risk."""
    # Define your own logic here. For example, you could call an API to verify
    # the content of the code. This placeholder simply passes the code through unchanged.
    return query
# Import paths may vary by langchain version; in newer versions
# create_python_agent lives in langchain_experimental.
from langchain.agents.agent_types import AgentType
from langchain_experimental.agents.agent_toolkits import create_python_agent

agent_executor = create_python_agent(
    llm=llm,
    tool=ValidPythonREPLTool(my_sanitizer),  # normally, you would use PythonREPLTool
    verbose=True,
    agent_type=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
    prompt="Just output the result, no other text or comments.",
)
output = agent_executor.run("what is 10 to the power of 10?")
print(output)
We welcome Your interest in the American Express Open Source Community on GitHub. Any Contributor to any Open Source Project managed by the American Express Open Source Community must accept and sign an Agreement indicating agreement to the terms below. Except for the rights granted in this Agreement to American Express and to recipients of software distributed by American Express, You reserve all right, title, and interest, if any, in and to Your Contributions. Please fill out the Agreement.
Any contributions made under this project will be governed by the Apache License 2.0.
This project adheres to the American Express Community Guidelines. By participating, you are expected to honor these guidelines.