feat(agent): Add langchain + crew.ai agent examples - hot_weather & cold_weather #121
Conversation
Signed-off-by: Nigel Jones <[email protected]>
A crew.ai agent has now been added. It also uses DuckDuckGo and creates a trivial crew, with one agent and one task. The results need further work on prompting, and as noted above we still need to sort out variable passing, but the agent/LLM is working.
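For illustration, a minimal sketch of the kind of crew described here (one agent, one task, DuckDuckGo as the tool). The names, prompts, and tool wiring are assumptions, not taken from the PR, and it assumes a CrewAI version that accepts LangChain tools directly:

# Minimal sketch only: one agent, one task, DuckDuckGo search as the tool.
# Assumes crewai and langchain_community are installed and that this CrewAI
# version accepts LangChain tools directly.
from crewai import Agent, Task, Crew
from langchain_community.tools import DuckDuckGoSearchRun

search = DuckDuckGoSearchRun()  # free web search, no API key needed

# Single agent with a single tool
researcher = Agent(
    role="Weather researcher",
    goal="Find a warm destination and suggest things to do there",
    backstory="You research travel destinations using web search.",
    tools=[search],
)

# One task for the one agent; prompting still needs refinement, as noted above
task = Task(
    description="Suggest a warm destination for next week and list activities.",
    expected_output="A short list of destinations with activities.",
    agent=researcher,
)

crew = Crew(agents=[researcher], tasks=[task])
result = crew.kickoff()
print(result)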
Signed-off-by: Nigel Jones <[email protected]>
@@ -0,0 +1,57 @@
#!/usr/bin/env python
Let's put these examples under examples/bee-hive (to distinguish them from the Python framework examples). We should probably also group them by use case, not framework, so maybe examples/bee-hive/third-party-agents?
I did wonder about 'third party', but also thought about how neutral bee-hive is going to be. Should they just sit under bee/crewai/langchain, or do we make the bee ones primary? Naming is always hard. If we group by use case, what do we call this use case? activity-planner?
let's defer to @AngeloDanducci
I've moved them into examples/bee-hive/third-party-agents for now. Will wait for any comments, @AngeloDanducci.
Can we make it return the data in JSON format optionally? Perhaps an optional parameter that defaults to text but, when set to "json", formats the result as JSON. This would allow the output to be easily consumed by the next agent.
Also, I agree with moving to the examples repo and following the structure there.
Finally, it is important to have a simple test that calls this agent and expects some results.
Once that is done I will provide more detail on the code. For one, I'd love to see Pythonic style for methods and functions: always lowercase, with words separated by underscores, and use __private_methods() where appropriate.
Lastly, please include Pythonic-style docstrings for each class and public method. Comments everywhere else are optional.
Thanks for doing this.
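As a hypothetical sketch of the interface suggested above: an output_format parameter that defaults to text but can return JSON, with snake_case naming, a private helper, and docstrings on public methods. The class and method names are illustrative only, not from the PR:

import json


class WeatherAgent:
    """Wraps the agent run and formats its result."""

    def get_activities(self, location: str, output_format: str = "text") -> str:
        """Return activity suggestions as plain text or, optionally, JSON."""
        result = self.__run_agent(location)
        if output_format == "json":
            # JSON makes the output easy for the next agent to consume
            return json.dumps({"location": location, "activities": result})
        return result

    def __run_agent(self, location: str) -> str:
        # Private helper; the real implementation would call the LLM/agent
        return f"Suggested activities for {location}"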
…ture Signed-off-by: Nigel Jones <[email protected]>
Thanks for the feedback.
We also don't really have a 'build' currently, so to automate any test, even locally, we need to get that in place. The other thing to note is that the code here does not always return the same results: we're using an LLM, and we depend on dynamically changing data (DuckDuckGo search). If both were stubbed then perhaps, but that results in a very isolated test. Do we use another AI model to validate test results? (Probably we should!) I suggest having this discussion in our next team meeting.
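To make the trade-off concrete, one very isolated test of the kind discussed above: stub out the non-deterministic pieces (LLM + search) and only assert on the shape of the result. The function and names are hypothetical placeholders, not code from this PR:

import json


def format_result(raw: str, output_format: str = "text") -> str:
    """Placeholder for the formatting layer around the agent output."""
    if output_format == "json":
        return json.dumps({"activities": raw})
    return raw


def test_json_output_is_parseable():
    # Deterministic: no LLM or search involved, just the wrapper logic
    stubbed_agent_output = "Visit the beach; try the local market"
    result = json.loads(format_result(stubbed_agent_output, output_format="json"))
    assert "activities" in result and result["activities"]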
Task summary:
If merging is helpful to any of our other activities we can do so at any time. For now I'll target the above at updates in this PR.
Signed-off-by: Nigel Jones <[email protected]>
Signed-off-by: Nigel Jones <[email protected]>
@AngeloDanducci This is the code I will want to move over to demos (specifically the crewai version for iteration 1). I can create a PR against the demo repo once you suggest a layout. I would like to add tests (including perhaps a workflow), but in a later iteration.
# Assume llama3.1 (running locally given env above)
os.environ["LLAMAFILE_SERVER_BASE_URL"] = "http://localhost:11434"
llm = ChatOllama(model="granite3.1-dense:8b", tools=tools)
The comment says llama, but the model passed to the llm is granite.
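For reference, one way to resolve the flagged mismatch is to make the comment name the model string actually passed in (or swap the model string, if llama3.1 really was intended). A sketch based on the lines above:

# Use granite3.1-dense:8b, served locally by Ollama (change the model string if llama3.1 is intended)
llm = ChatOllama(model="granite3.1-dense:8b", tools=tools)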
Thanks for the comments. This will now be moved to the bee-hive-demos repo. See:
Closing.
Adds an initial langchain-based agent in support of #119 and #117 (source: https://github.com/planetf1/langgraph/blob/main/weather/hotweather.py ).
This agent code defines and runs a langchain agent.
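For orientation, a minimal sketch of a langchain agent of this kind, assuming a local Ollama server and the langchain_community / langchain_ollama packages; the actual hot-weather.py may differ in structure, model, and prompts:

# Minimal sketch only: local Ollama model plus DuckDuckGo search as a tool.
from langchain_community.tools import DuckDuckGoSearchRun
from langchain_ollama import ChatOllama

# Local model served by Ollama, as in the snippet reviewed above
llm = ChatOllama(model="granite3.1-dense:8b", base_url="http://localhost:11434")

search = DuckDuckGoSearchRun()

# Let the model decide when to call the search tool
llm_with_tools = llm.bind_tools([search])

question = "Which European cities will be hot next week, and what can I do there?"
response = llm_with_tools.invoke(question)
print(response.content)
print(response.tool_calls)  # any search calls the model decided to make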
Assumptions
Later iterations can
To run:
poetry install
(may also need to run poetry lock)
Then, from examples/langchain/things-to-do:
./hot-weather.py
Example output is: