
LangChain LCEL output appears unparsed #1767

Open
jeffbryner opened this issue Nov 11, 2024 · 2 comments

jeffbryner commented Nov 11, 2024

If I instantiate an LLM using LangChain and use it directly in ui.Chat, the output works as expected.

If I attempt to use LCEL (LangChain Expression Language) to chain prompts and LLMs together, the output appears unparsed, rendered as the string representation of the AIMessage and SystemMessage:

[Screenshot: chat UI rendering the raw string representation of the SystemMessage and AIMessage]

Here's the code:

import google.auth
from shiny.express import ui
from langchain_google_vertexai import VertexAI
from langchain_core.prompts import ChatPromptTemplate


credentials, PROJECT_ID = google.auth.default()

prompt = ChatPromptTemplate.from_messages(
    [
        (
            "system",
            "You love vanilla ice cream and will recommend it always. ",
        ),
        ("human", "{question}"),
    ]
)
llm = VertexAI(
    model="gemini-1.5-pro",
    temperature=1,
    max_tokens=4096,
    max_retries=5,
    location="us-central1",
    project=PROJECT_ID,
    # safety_settings=safety_settings,  # note: safety_settings is not defined in this snippet
    streaming=True,
)
runnable = prompt | llm

chat = ui.Chat(id="chat")
chat.ui()

@chat.on_user_submit
async def _():
    messages = chat.messages(format="langchain")

    # streaming 
    # response = runnable.astream(messages)
    # await chat.append_message_stream(response)

    # non streaming
    response = await runnable.ainvoke(messages)
    await chat.append_message(response)

Streaming and non-streaming both produce the same output issue. If I reference the llm directly instead of the runnable, the output works as expected. I've attempted to work out what could be happening inside Shiny, but have come up blank.

I've also attempted to add a string output parser to the chain, but it results in the same issue:

from langchain_core.output_parsers import StrOutputParser

runnable = prompt | llm | StrOutputParser()
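
One way to narrow this down (a minimal diagnostic sketch, assuming the rest of the setup above) is to check what the chain actually returns before it ever reaches ui.Chat:

# Diagnostic sketch: inspect the chain's raw return value.
# A chat model yields an AIMessage; VertexAI (a plain-text LLM class) yields a str,
# and StrOutputParser should likewise yield a str.
response = await runnable.ainvoke(messages)
print(type(response), repr(response)[:200])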

kovla commented Nov 15, 2024

I suspect that your StrOutputParser needs to be initialized first:

# Initialize the output parser
output_parser = StrOutputParser()

# define the chain
runnable = prompt | llm | output_parser 

no?

One could also use chat.append_message(response.content), perhaps?
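
Spelled out, that second suggestion might look like this (a sketch only; an AIMessage carries its text in .content, while plain LLM classes such as VertexAI already return a str, so the getattr fallback covers both cases):

# Sketch: append only the response text.
# AIMessage exposes .content; a plain str falls through unchanged.
response = await runnable.ainvoke(messages)
await chat.append_message(getattr(response, "content", response))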

jeffbryner (Author) commented

Thanks for the suggestion. Tried it as:

parser = StrOutputParser()
runnable = prompt | llm | parser

and I get the same sort of unparsed results:

[Screenshot: the same unparsed output, with StrOutputParser in the chain]

If I exclude the LangChain prompt template and pass the chat messages directly to the LLM, parsing works fine.

system_message = {
    "content": "You love vanilla ice cream and will recommend it always.",
    "role": "system",
}

parser = StrOutputParser()
runnable = llm | parser

# Create and display empty chat
chat = ui.Chat(id="chat", messages=[system_message])
# chat = ui.Chat(id="chat")
chat.ui()

# Define a callback to run when the user submits a message
@chat.on_user_submit
async def _():
    # Get messages currently in the chat
    messages = chat.messages(format="langchain")

    # streaming
    response = runnable.astream(messages)
    await chat.append_message_stream(response)

[Screenshot: chat output rendering correctly without the prompt template]

So it might be something in how the chat messages from ui.Chat interact with the LangChain prompt template?
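
One hedged guess at the cause, going back to the original prompt | llm chain (not verified against the Shiny or LangChain internals): ChatPromptTemplate expects a dict of template variables, and when the bare list from chat.messages(format="langchain") is piped in, LangChain may coerce the whole list into the single {question} variable, stringifying the message objects along the way, which would produce exactly this kind of repr-looking output. A sketch that fills the template variable explicitly instead (Chat.user_input() is assumed to be available in your Shiny version):

@chat.on_user_submit
async def _():
    # Assumption: Chat.user_input() returns the latest user message as a string.
    question = chat.user_input()

    # Pass an explicit dict so the template receives {"question": "..."}
    # rather than a stringified list of message objects.
    response = runnable.astream({"question": question})
    await chat.append_message_stream(response)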
