Update agent eval tutorial #582

Merged: baskaryan merged 26 commits into main on Dec 19, 2024
Conversation

baskaryan (Contributor)

No description provided.

vercel bot commented Dec 11, 2024

The latest updates on your projects. Learn more about Vercel for Git ↗︎

| Name | Status | Preview | Comments | Updated (UTC) |
| --- | --- | --- | --- | --- |
| langsmith-docs | ✅ Ready (Inspect) | Visit Preview | 💬 Add feedback | Dec 19, 2024 4:51pm |

@baskaryan changed the title from "update agent eval tutorial" to "wip: update agent eval tutorial" on Dec 11, 2024
davidx33's comment was marked as duplicate.

@baskaryan changed the title from "wip: update agent eval tutorial" to "Update agent eval tutorial" on Dec 18, 2024
@baskaryan marked this pull request as ready for review on December 18, 2024 at 02:09
_set_env("LANGCHAIN_API_KEY")
_set_env("OPENAI_API_KEY")
#endregion
```
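For context, `_set_env` itself is not defined in this excerpt. A minimal sketch of such a helper, assuming it simply prompts for any variable that is not already set, could look like this:

```python
import getpass
import os


def _set_env(var: str) -> None:
    # Only prompt if the variable is not already set in the environment.
    if not os.environ.get(var):
        os.environ[var] = getpass.getpass(f"{var}: ")
```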

Contributor

Suggested change:
While we're using OpenAI for this example, you can use any [LangChain supported chat model](https://python.langchain.com/docs/integrations/chat/#all-chat-models).
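To illustrate the suggestion (a minimal sketch, assuming `langchain` >= 0.2.8 where `init_chat_model` is available; the variable name `llm` is illustrative, not from this PR), swapping providers only changes the model initialization:

```python
from langchain.chat_models import init_chat_model

# Only the model initialization changes; the rest of the tutorial code stays the same.
llm = init_chat_model("gpt-4o", model_provider="openai")

# For example, an Anthropic model instead (assumes langchain-anthropic is installed):
# llm = init_chat_model("claude-3-5-sonnet-20240620", model_provider="anthropic")
```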

Contributor

Nothing looks OpenAI-dependent from a quick skim, but I need to confirm whether that's true.

Contributor Author

I don't know if we should even say that you need to use LangChain. Yes, this code is LangChain-based, but you could apply all the same techniques without it; from LangSmith's perspective it doesn't matter whether you use LangChain.
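As a hedged illustration of that point (not code from this PR), the same evaluation flow can be driven through the LangSmith SDK alone. The sketch below assumes a recent `langsmith` package that exposes `evaluate` at the top level, a hypothetical dataset named "agent-eval-example", and a plain `my_agent` stub with no LangChain in it:

```python
from langsmith import evaluate  # older SDKs: from langsmith.evaluation import evaluate


def my_agent(inputs: dict) -> dict:
    # Any plain Python callable works here: your own agent loop, an HTTP call, etc.
    # LangSmith only sees the inputs and outputs; no LangChain objects are involved.
    return {"answer": f"stub answer for: {inputs['question']}"}


def correct(run, example) -> dict:
    # Minimal reference-based evaluator: exact match against the dataset's expected answer.
    return {
        "key": "correct",
        "score": int(run.outputs["answer"] == example.outputs["answer"]),
    }


results = evaluate(
    my_agent,
    data="agent-eval-example",  # hypothetical dataset name
    evaluators=[correct],
    experiment_prefix="agent-eval-no-langchain",
)
```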


```python
from langchain import hub

# Pull the answer-accuracy grading prompt from the LangChain Hub
grade_prompt_answer_accuracy = hub.pull("langchain-ai/rag-answer-vs-reference")
```
You can see what these results look like here: [LangSmith link](https://smith.langchain.com/public/708d08f4-300e-4c75-9677-c6b71b0d28c9/d).
Contributor

I think embedding the link on the word "here" makes more sense to me.
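For readers following along, here is a sketch of how the pulled grading prompt is typically wired into an LLM-as-judge evaluator. It follows the pattern from the public LangSmith RAG-evaluation example rather than the exact code in this PR; the grader model, the input variable names, and the "Score" output key are assumptions carried over from that example:

```python
from langchain import hub
from langchain_openai import ChatOpenAI

# Repeats the pull above so this snippet is self-contained.
grade_prompt_answer_accuracy = hub.pull("langchain-ai/rag-answer-vs-reference")


def answer_evaluator(run, example) -> dict:
    # Question and reference answer come from the dataset example; the prediction from the run.
    question = example.inputs["question"]
    reference = example.outputs["answer"]
    prediction = run.outputs["answer"]

    # Any capable chat model can serve as the grader (see the model discussion above).
    grader_llm = ChatOpenAI(model="gpt-4o", temperature=0)
    answer_grader = grade_prompt_answer_accuracy | grader_llm

    # The structured hub prompt is assumed to yield a dict containing a "Score" field.
    result = answer_grader.invoke(
        {
            "question": question,
            "correct_answer": reference,
            "student_answer": prediction,
        }
    )
    return {"key": "answer_v_reference_score", "score": result["Score"]}
```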

@baskaryan merged commit f9ec6b7 into main on Dec 19, 2024 (6 checks passed).
@baskaryan deleted the bagatur/eval_agent_tutorial branch on December 19, 2024 at 16:51.