-
Example Code

```python
from langchain.agents.agent_types import AgentType
from langchain_experimental.agents.agent_toolkits import create_pandas_dataframe_agent
from langchain_openai import ChatOpenAI
from langchain_community.callbacks import get_openai_callback
import pandas as pd
import os

os.environ['OPENAI_API_KEY'] = "XXXXXX"

df = pd.read_csv("XXXXX")

agent = create_pandas_dataframe_agent(
    ChatOpenAI(temperature=0, model="gpt-3.5-turbo-0613"),
    df,
    verbose=True,
    agent_type=AgentType.OPENAI_FUNCTIONS,
)

with get_openai_callback() as cb:
    res = agent.invoke("Can you project how many items will be sold in the next month?")
    print(f"Total Tokens: {cb.total_tokens}")
    print(f"Prompt Tokens: {cb.prompt_tokens}")
    print(f"Completion Tokens: {cb.completion_tokens}")
    print(f"Total Cost (USD): ${cb.total_cost}")
    print("Answer: ", res)
```

Description

I'm using the callback to count tokens from `agent.invoke`, but it returns zero tokens even though the agent works properly. The same `get_openai_callback()` method works for simple `ChatOpenAI(model_name="gpt-4")` calls.

System Info

Package Information
-
🤖 Hey @Leoccleao! Great to see you diving into another LangChain adventure. Let's tackle this issue together.

Based on the information you've provided and the similar issues I found in the LangChain repository, it seems like the issue you're facing is related to the asynchronous nature of the agent's execution, which can cause the callback to miss the reported token usage. One way around this is to keep the `get_openai_callback()` context but run the synchronous `agent.invoke` call through an executor inside an async function:

```python
import asyncio

async def main():
    with get_openai_callback() as cb:
        loop = asyncio.get_event_loop()
        res = await loop.run_in_executor(None, agent.invoke, "Can you project how many items will be sold in the next month?")
        print(f"Total Tokens: {cb.total_tokens}")
        print(f"Prompt Tokens: {cb.prompt_tokens}")
        print(f"Completion Tokens: {cb.completion_tokens}")
        print(f"Total Cost (USD): ${cb.total_cost}")
        print("Answer: ", res)

asyncio.run(main())
```

Please note that you'll need to make sure that the `agent` and `get_openai_callback` from your original snippet are already defined and imported. If you're still facing issues, it might be helpful to look at the solutions provided in these similar issues:

These issues suggest creating a custom async token and cost calculator 'handler' that uses the tiktoken dependency to calculate the token usage and cost. It is used by instantiating the handler and passing it to the model or agent as a callback.

I hope this helps! If you have any other questions or need further clarification, feel free to ask.
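The linked issues are not reproduced here, but a minimal sketch of that handler idea could look like the following. The `TokenCountingHandler` name, the model name, and the way it is attached are illustrative assumptions, not code from those issues; it reuses the `agent` defined above and gives approximate counts (it does not account for chat message formatting overhead):

```python
import tiktoken
from langchain_core.callbacks import BaseCallbackHandler

class TokenCountingHandler(BaseCallbackHandler):
    """Hypothetical handler that counts tokens with tiktoken instead of relying on API-reported usage."""

    def __init__(self, model: str = "gpt-3.5-turbo"):
        self.encoding = tiktoken.encoding_for_model(model)
        self.prompt_tokens = 0
        self.completion_tokens = 0

    def on_llm_start(self, serialized, prompts, **kwargs):
        # Called for completion-style models: count tokens in every raw prompt.
        self.prompt_tokens += sum(len(self.encoding.encode(p)) for p in prompts)

    def on_chat_model_start(self, serialized, messages, **kwargs):
        # Called for chat models (e.g. ChatOpenAI): inputs arrive as lists of messages.
        for message_list in messages:
            for message in message_list:
                self.prompt_tokens += len(self.encoding.encode(str(message.content)))

    def on_llm_end(self, response, **kwargs):
        # Count tokens in every generated completion.
        for generations in response.generations:
            for generation in generations:
                self.completion_tokens += len(self.encoding.encode(generation.text))

handler = TokenCountingHandler()
res = agent.invoke(
    "Can you project how many items will be sold in the next month?",
    config={"callbacks": [handler]},
)
print("Prompt tokens:", handler.prompt_tokens)
print("Completion tokens:", handler.completion_tokens)
```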
-
Hello, is there any new update regarding token usage tracking in CSV agents?
-
Any update?
-
Haven't tried LangChain 0.2 yet; will let you know whether they fixed it or not.
-
@Leoccleao since you have marked this as closed, could you please share the code that fixed this issue? The one shared by the bot above isn't working.
-
Follow these instructions: https://python.langchain.com/v0.2/docs/how_to/chat_token_usage_tracking/

```python
llm = ChatOpenAI(
    model="gpt-3.5-turbo-0125",
    temperature=0,
    stream_usage=True,  # You must specify this to ask OpenAI to return token counts
)
```
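Putting that together with the pandas agent from the original question, an untested sketch might look like the code below. The CSV path is a placeholder as in the original post, and `allow_dangerous_code=True` is only needed on recent langchain_experimental releases (drop it on older ones):

```python
from langchain_experimental.agents.agent_toolkits import create_pandas_dataframe_agent
from langchain_community.callbacks import get_openai_callback
from langchain_openai import ChatOpenAI
import pandas as pd

llm = ChatOpenAI(
    model="gpt-3.5-turbo-0125",
    temperature=0,
    stream_usage=True,  # ask OpenAI to include usage metadata in responses
)

df = pd.read_csv("XXXXX")  # placeholder path, as in the original question

agent = create_pandas_dataframe_agent(
    llm,
    df,
    verbose=True,
    allow_dangerous_code=True,  # required by recent langchain_experimental releases
)

with get_openai_callback() as cb:
    res = agent.invoke("Can you project how many items will be sold in the next month?")
    print(f"Total Tokens: {cb.total_tokens}")
    print(f"Total Cost (USD): ${cb.total_cost}")
```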