
LangChain support for Orchestration Client #176

Open
kay-schmitteckert opened this issue Sep 25, 2024 · 4 comments
Labels
feature request New feature or request

Comments


kay-schmitteckert commented Sep 25, 2024

Describe the Problem

The Orchestration Client already simplifies developing and kickstarting GenAI projects as well as communicating with foundation models. LangChain support is now available for OpenAI; instead of writing a separate wrapper for each vendor, the idea is to provide a LangChain wrapper for the Orchestration Client, which would cover all vendors at once.

Propose a Solution

A LangChain wrapper for the Orchestration Client, e.g.:

import { LLM, type BaseLLMParams } from "@langchain/core/language_models/llms";
import type { CallbackManagerForLLMRun } from "@langchain/core/callbacks/manager";

import { OrchestrationClient, type OrchestrationModuleConfig } from "@sap-ai-sdk/orchestration";

export interface CustomLLMInput extends BaseLLMParams {
    deploymentId: string;
    resourceGroup?: string;
    modelName: string;
    modelParams?: Record<string, unknown>;
    modelVersion?: string;
}

export class GenerativeAIHubCompletion extends LLM {
    deploymentId: string;
    resourceGroup: string;
    modelName: string;
    modelParams: Record<string, unknown>;
    modelVersion: string;

    constructor(fields: CustomLLMInput) {
        super(fields);
        this.deploymentId = fields.deploymentId;
        this.resourceGroup = fields.resourceGroup || "default";
        this.modelName = fields.modelName;
        this.modelParams = fields.modelParams || {};
        this.modelVersion = fields.modelVersion || "latest";
    }

    _llmType() {
        return "Generative AI Hub - Orchestration Service";
    }

    async _call(
        prompt: string,
        options: this["ParsedCallOptions"],
        runManager?: CallbackManagerForLLMRun
    ): Promise<string> {
       
        // Configuration & Prompt
        const llmConfig = {
            model_name: this.modelName,
            model_params: this.modelParams,
            model_version: this.modelVersion
        };

        // The {{?prompt}} placeholder is resolved from inputParams at call time.
        const config: OrchestrationModuleConfig = {
            templating: {
                template: [{ role: "user", content: "{{?prompt}}" }]
            },
            llm: llmConfig
        };

        // Orchestration Client
        const orchestrationClient = new OrchestrationClient(config, {
            resourceGroup: this.resourceGroup
            //deploymentId: this.deploymentId
        });

        // Call the orchestration service.
        const response = await orchestrationClient.chatCompletion({
            inputParams: { prompt }
        });
        // Access the response content.
        return response.getContent() ?? "";
    }
}
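
For illustration, this wrapper would then plug into LangChain like any other LLM. A minimal usage sketch (the deployment ID and model name below are hypothetical):

const model = new GenerativeAIHubCompletion({
    deploymentId: "d123456789", // hypothetical
    modelName: "gpt-4o"
});

// LangChain's standard Runnable interface is inherited from LLM.
const answer = await model.invoke("What is SAP AI Core?");
console.log(answer);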

Describe Alternatives

No response

Affected Development Phase

Getting Started

Impact

Inconvenience

Timeline

No response

Additional Context

No response

@ZhongpinWang added the feature request label Sep 25, 2024
@jjtang1985 (Contributor) commented

Thank you very much for raising this feature request.
The orchestration service runs the following modules in a pipeline:

  • LLM access itself, via the templating module
    • The harmonised API allows using LLMs from different vendors
  • Content filtering
  • Data Masking
  • Grounding (coming soon)
  • ...

Therefore, it's more than a service for LLM access.
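
For illustration, a single OrchestrationModuleConfig would bundle these modules. The sketch below only fills in the two modules already shown above; the schema of the remaining modules is defined by the orchestration service and omitted here:

const config: OrchestrationModuleConfig = {
    // LLM access, harmonised across vendors
    llm: { model_name: "gpt-4o", model_params: {} },
    // Prompt templating with input parameters
    templating: {
        template: [{ role: "user", content: "{{?prompt}}" }]
    }
    // Content filtering, data masking and grounding would be configured
    // as additional module sections here (omitted in this sketch).
};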

If we build an adapter like the one in our LangChain package, users would be able to use the original LangChain APIs with SAP GenAI Hub.

However, LangChain knows nothing about the orchestration modules beyond LLM access, so we would need some API extensions, for example:

  • configuring content filtering
  • response/error handling for content filtering, with a new API design, which might not look like a typical LangChain client

This seems to be a big epic.
I would like to understand the use cases in more detail, so we might be able to split the task.


kay-schmitteckert commented Oct 23, 2024

Hey @jjtang1985,

Thanks for your reply. Since we define how the model is initialized and called, we can expose the API of the orchestration client via the constructor of the LangChain wrapper and simply pass it through.

export class GenerativeAIHubCompletion extends LLM {
    
    orchestrationClient: OrchestrationClient;

    constructor(config: OrchestrationModuleConfig, deploymentConfig?: ResourceGroupConfig) {
        super();
        this.orchestrationClient = new OrchestrationClient(config, deploymentConfig);
        ...
    }
    ...
}
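
Under this design, callers would assemble the full orchestration config themselves and hand it to the wrapper unchanged, e.g. (a sketch; the model values are hypothetical):

const model = new GenerativeAIHubCompletion(
    {
        llm: { model_name: "gpt-4o", model_params: {} },
        templating: {
            template: [{ role: "user", content: "{{?prompt}}" }]
        }
        // Further modules (filtering, masking, ...) pass straight through.
    },
    { resourceGroup: "default" }
);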

What are your thoughts on this?

-Kay

@jjtang1985 (Contributor) commented

Thanks for the example.

Since we define how the model is initialized and called, we can expose the API of the orchestration client via the constructor of the LangChain wrapper and simply pass it through.

This is a valid design for initialisation and sending the request. We'll then think about response parsing.
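
For the response side, a minimal sketch (reusing the _call contract from the first snippet) would map the orchestration response to the plain string LangChain expects:

async _call(prompt: string): Promise<string> {
    const response = await this.orchestrationClient.chatCompletion({
        inputParams: { prompt }
    });
    // LLM subclasses must return a plain string; fall back to an empty
    // string if the response carries no content.
    return response.getContent() ?? "";
}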

This seems to be a big epic.
I would like to understand the use cases in more detail, so we might be able to split the task.

Maybe I should rephrase my original question.
Which orchestration modules would you be interested in first:

  1. llm module and templating module
  2. content filtering module
  3. data masking module
  4. grounding module (coming soon)

I would assume #1 would be our first task, as it is the foundation of all LLM access.
Would that alone already be helpful, or would you expect us to support all the orchestration modules from the start?
I'm asking because I want to split the task.

Best, Junjie

@kay-schmitteckert (Author) commented

Yes, I think model access would be a valid starting point.
