Describe the bug
I attempted to use Langfuse for tracing my pipeline, but it appears that almost nothing is being traced. Initially I suspected the model, but running the example code from the documentation showed only partial functionality: a trace is created, yet it does not meet expectations. Specifically, the pipeline's input and output values are not traced.
To diagnose the issue, I modified my pipelines multiple times. When I explicitly declare the input and output values, Langfuse traces only the final output value and still fails to capture intermediate values, particularly those related to the LLM model’s input and output.
After numerous tests, I'm still unable to determine the root of the problem. Attached is a screenshot for reference.
My Code
from haystack import Pipeline
from haystack.dataclasses import ChatMessage
from haystack.components.generators.chat import OpenAIChatGenerator
from haystack.components.builders.chat_prompt_builder import ChatPromptBuilder
from haystack.components.converters import OutputAdapter
from haystack_integrations.components.connectors.langfuse import LangfuseConnector
import os

os.environ["LANGFUSE_HOST"] = "https://cloud.langfuse.com"
os.environ["HAYSTACK_CONTENT_TRACING_ENABLED"] = "true"

tracer = LangfuseConnector('haystack-test')
messages = [
    ChatMessage.from_system("""You are a data generator, and you will generate data according to the given format.
---
User:
Language: English
Number of words: 1
Part of speech: Noun
Assistant:
Result: Apple
---"""),
    ChatMessage.from_user("""User:
Language: {{lang}}
Number of words: {{num}}
Part of speech: {{pos}}""")
]
prompt_builder = ChatPromptBuilder(template=messages)
llm = OpenAIChatGenerator(
    model='gpt-3.5-turbo'
)
adapter = OutputAdapter(
    template="""{{replies[0].content}}""", output_type=str
)
gen_pipe = Pipeline()
gen_pipe.add_component('tracer', tracer)
gen_pipe.add_component('prompt', prompt_builder)
gen_pipe.add_component('llm', llm)
gen_pipe.add_component('adapter', adapter)
gen_pipe.connect('prompt.prompt', 'llm.messages')
gen_pipe.connect('llm.replies', 'adapter')
response = gen_pipe.run(data={
    'prompt': {
        'template_variables': {
            'lang': 'English',
            'num': 1,
            'pos': 'verb'
        }
    }
})
print(response)
print(response["tracer"]["trace_url"])
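One detail worth checking (this is an assumption about Haystack's internals, not something I have confirmed against this version): if HAYSTACK_CONTENT_TRACING_ENABLED is read once when the tracing module is first imported, then setting it after the haystack imports, as the code above does, would leave content tracing disabled. The toy stand-in module below (hypothetical, not Haystack itself) illustrates why a flag read at import time ignores values set afterwards:

```python
import os
import types

# A toy stand-in for a module that captures the env var once, at import
# time, into a module-level flag (hypothetical, for illustration only).
demo_src = (
    "import os\n"
    'CONTENT_TRACING = os.environ.get('
    '"HAYSTACK_CONTENT_TRACING_ENABLED", "false") == "true"\n'
)

def load_demo():
    # Simulate a fresh import of the module.
    mod = types.ModuleType("tracing_demo")
    exec(demo_src, mod.__dict__)
    return mod

# Flag set AFTER the "import": the module has already captured False.
os.environ.pop("HAYSTACK_CONTENT_TRACING_ENABLED", None)
late = load_demo()
os.environ["HAYSTACK_CONTENT_TRACING_ENABLED"] = "true"
print(late.CONTENT_TRACING)   # False: setting the var later has no effect

# Flag set BEFORE the "import": the module sees True.
early = load_demo()
print(early.CONTENT_TRACING)  # True
```

If this is the cause, moving the two os.environ assignments above the haystack imports (or setting the variables in the shell before starting Python) should make the intermediate input/output values appear in the trace.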
Pictures
Example code in the documentation works:
Tracing result in my code:
System:
OS: Win10
GPU/CPU: Intel/Nvidia
Haystack version (commit or version number): 2.2.0
Python version: 3.10.13