diff --git a/components/landing.module.css b/components/landing.module.css
index c9377d480..43aad1d09 100644
--- a/components/landing.module.css
+++ b/components/landing.module.css
@@ -84,7 +84,8 @@
.startGuides {
display: flex;
flex-grow: 1;
- margin-top: 24px;
+ margin-top: 48px;
+ margin-bottom: 48px;
}
.startGuideText {
@@ -274,3 +275,37 @@
right: 20px;
top: 30px;
}
+
+.comingSomewhereWrapper:hover {
+ background: #c5cae9;
+}
+
+.comingSomewhereWrapper {
+ display: flex;
+ padding: 16px;
+ justify-content: center;
+ align-items: center;
+ gap: 12px;
+ align-self: stretch;
+ border-radius: 8px;
+ border: 1px solid #d0d9e3;
+ background: #fff;
+ backdrop-filter: blur(35px);
+ cursor: pointer;
+}
+
+.comingSomewhereImg {
+ width: 47.35px;
+ height: 24px;
+ flex-shrink: 0;
+}
+
+.comingSomewhereTitle {
+ color: #000d3d;
+ font-family: Lexend;
+ font-size: 16px;
+ font-style: normal;
+ font-weight: 400;
+ line-height: normal;
+ letter-spacing: -0.16px;
+}
diff --git a/pages/guides/_meta.json b/pages/guides/_meta.json
index fd8194b49..4dc8012e5 100644
--- a/pages/guides/_meta.json
+++ b/pages/guides/_meta.json
@@ -4,5 +4,6 @@
"apis": "APIs",
"fetch-network": "Fetch Network",
"agent-courses": "Courses",
- "ai-engine-sdk": "AI Engine SDK"
+ "ai-engine-sdk": "AI Engine SDK",
+ "quickstart-with": "Quickstart With"
}
diff --git a/pages/guides/quickstart-with/CrewAI/_meta.json b/pages/guides/quickstart-with/CrewAI/_meta.json
new file mode 100644
index 000000000..3df176657
--- /dev/null
+++ b/pages/guides/quickstart-with/CrewAI/_meta.json
@@ -0,0 +1,14 @@
+{
+ "startup-idea-analyser": {
+ "title": "Startup Idea analyser",
+ "tags": [
+ "Intermediate",
+ "Python",
+ "CrewAI",
+ "Functions",
+ "Mailbox",
+ "Use Cases"
+ ],
+ "timestamp": true
+ }
+}
diff --git a/pages/guides/quickstart-with/CrewAI/creating-an-agent-with-crewai.mdx b/pages/guides/quickstart-with/CrewAI/creating-an-agent-with-crewai.mdx
new file mode 100644
index 000000000..c3de54529
--- /dev/null
+++ b/pages/guides/quickstart-with/CrewAI/creating-an-agent-with-crewai.mdx
@@ -0,0 +1,537 @@
+# Getting started with Fetch.ai x CrewAI
+
+Fetch.ai provides a dynamic communication layer that allows you to modularize components into distinct [Agents ↗️](/guides/agents/getting-started/whats-an-agent). In this ecosystem, Agents act as microservices, programmed to communicate either with other agents or with humans. While other frameworks also create agents, here we’ll explore how Fetch.ai’s unique agent architecture can complement and extend CrewAI’s functionality.
+
+By using Fetch.ai’s Agents to represent various elements of your Crew, your project can be designed to interact with [third parties ↗️](/guides/agents/intermediate/communicating-with-other-agents), enhancing both flexibility and scalability while increasing potential economic benefits.
+
+In this guide, we'll walk you through a simple CrewAI example to understand its fundamentals and then demonstrate how we can enhance this model by integrating Fetch.ai Agents for advanced communication and workflow management.
+
+## A simple CrewAI example
+
+Let's use a shortened example that CrewAI provides; in this example we define an Agent and a Task. At a high level, the Agent defines the LLM's "personality" and the Task defines what that LLM is asked to solve.
+
+ ```python copy filename="crew.py"
    import os
+ from crewai import Agent, Task, Crew, Process
+ from crewai_tools import SerperDevTool
+
+ os.environ["OPENAI_API_KEY"] = "YOUR_API_KEY"
+ os.environ["SERPER_API_KEY"] = "Your Key" # serper.dev API key
+
+ search_tool = SerperDevTool()
+
+ # Define your agents with roles and goals
+ researcher = Agent(
+ role='Senior Research Analyst',
+ goal='Uncover cutting-edge developments in AI and data science',
+ backstory="""You work at a leading tech think tank.
+ Your expertise lies in identifying emerging trends.
+ You have a knack for dissecting complex data and presenting actionable insights.""",
+ verbose=True,
+ allow_delegation=False,
+ tools=[search_tool]
+ )
+
+ # Create tasks for your agents
+ task1 = Task(
+ description="""Conduct a comprehensive analysis of the latest advancements in AI in 2024.
+ Identify key trends, breakthrough technologies, and potential industry impacts.""",
+ expected_output="Full analysis report in bullet points",
+ agent=researcher
+ )
+
+ # Instantiate your crew with a sequential process
+ crew = Crew(
+ agents=[researcher,],
+ tasks=[task1, ],
+ verbose=True,
+ process = Process.sequential
+ )
+
+ # Get your crew to work!
+ result = crew.kickoff()
+
+ print("######################")
+ print(result)
+ ```
+
+This is a nice feature; we love the idea of defining many tasks and then orchestrating agents to execute them.
+But what if we want to create a workflow, perhaps one where information is passed back and forth?
+
+Next, let's look at two Agents that communicate with each other.
+
+## A simple communication with agents
+
+Fetch.ai has the concept of an agent: an agent is the component that links the other pieces together.
+
+You can read more about agent communication in our [guides ↗️](/guides/agents/intermediate/communicating-with-other-agents).
+
+Let's install what we need:
+
+ ```bash copy
+ poetry init
+ poetry add uagents
+ ```
+
+Check out more detailed instructions for the [installation ↗️](/guides/agents/getting-started/installing-uagent) of the `uagents` library on your end.
+
+### First Agent
+
+Our first agent is simple; it sends a message every two seconds to a static address. When this agent receives a message, it prints it to the log:
+
+ ```python copy filename="agent1.py"
+ from uagents import Agent, Context, Model
+ from uagents.setup import fund_agent_if_low
+
+ class Message(Model):
+ message: str
+
+ RECIPIENT_ADDRESS = "agent1qf4au6rzaauxhy2jze6v85rspgvredx9m42p0e0cukz0hv4dh2sqjuhujpp"
+
+ agent = Agent(
+ name="agent",
+ port=8000,
+ seed="",
+ endpoint=["http://127.0.0.1:8000/submit"],
+ )
+
+ fund_agent_if_low(agent.wallet.address())
+
+ @agent.on_interval(period=2.0)
+ async def send_message(ctx: Context):
+ await ctx.send(RECIPIENT_ADDRESS, Message(message="hello there"))
+
+ @agent.on_message(model=Message)
+ async def message_handler(ctx: Context, sender: str, msg: Message):
+ ctx.logger.info(f"Received message from {sender}: {msg.message}")
+
+ if __name__ == "__main__":
+ agent.run()
+
+ ```
+
+This first agent introduces a few core concepts you will need to be aware of when creating any agent.
+
+Agents are defined with the `Agent` class:
+
+ ```python copy
+ agent = Agent(
+ name="agent",
+ port=8000,
+ seed="",
+ endpoint=["http://127.0.0.1:8000/submit"],
+ )
+ ```
+
+A `seed` is a unique phrase which the `uagents` library uses to derive a unique private key pair for your agent. If you change your `seed`, the agent's address registered in the Almanac will change and you may lose access to previous messages. The `port` allows us to define a local port on which messages are received. The `endpoint` defines the path to the in-built REST API. The `name` defines the name of the agent.
+
+There are more options for the `Agent` class; see [`Agent` Class ↗️](/references/uagents/uagents-api/agent) for further reference.
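+
+As a quick illustration of how the `seed` works, here is a minimal sketch (the seed phrase and port below are made up): because the key pair is derived deterministically from the seed, running this script twice prints the same address.
+
    ```python copy
    from uagents import Agent
    # example only: replace the seed with your own secret phrase
    sketch_agent = Agent(
        name="sketch_agent",
        port=8002,
        seed="an example seed phrase",
        endpoint=["http://127.0.0.1:8002/submit"],
    )
    # the address is derived from the seed, so it stays the same across runs
    print(sketch_agent.address)
    ```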
+
+We then need to define our **communication model**:
+
+ ```python copy
+ class Message(Model):
+ message: str
+ ```
+
+The `Model` defines the object sent from agent to agent and represents the type of messages the agent is able to handle. For explicit communication, both agents must use the same `Model` class. `Model` is the `uagents` base class, which inherits from Pydantic's `BaseModel`.
+
+The `fund_agent_if_low(agent.wallet.address())` call tops up the agent's wallet if its balance is low. As the economy of agents matures, agents will ultimately pay for discoverability, and this acts as a placeholder for that registration step.
+
+Finally, agents have two decorated functions.
+
+The first one is the `agent.on_interval()` function, which sends a message every 2 seconds. `ctx.send()` takes the destination address and the `Message` we defined earlier as its arguments.
+
+ ```python copy
+ @agent.on_interval(period=2.0)
+ async def send_message(ctx: Context):
+ await ctx.send(RECIPIENT_ADDRESS, Message(message="hello there"))
+ ```
+
+The second one is `agent.on_message()`, which is a little different; when the agent receives a message at the `endpoint` we defined earlier, the `uagents` library unpacks it and triggers any function that handles that message type; in our case, the `agent.on_message()` function:
+
+ ```python copy
+ @agent.on_message(model=Message)
+ async def message_handler(ctx: Context, sender: str, msg: Message):
+ ctx.logger.info(f"Received message from {sender}: {msg.message}")
+ ```
+
+### Second Agent
+
+Agent two doesn't do anything very different from agent one; it uses different args for the `Agent` instantiation, and instead of sending messages on an interval, it simply logs its own address on `on_event("startup")`. Whenever agent two receives a message matching the `Message` data model, it sends a response back to the sender.
+
+ ```python copy filename="agent2.py"
+ from uagents.setup import fund_agent_if_low
+ from uagents import Agent, Context, Model
+
+
+ class Message(Model):
+ message: str
+
+ agent = Agent(
+ name="agent 2",
+ port=8001,
+ seed="",
+ endpoint=["http://127.0.0.1:8001/submit"],
+ )
+
+ fund_agent_if_low(agent.wallet.address())
+
+ @agent.on_event("startup")
+ async def start(ctx: Context):
+ ctx.logger.info(f"agent address is {agent.address}")
+
+ @agent.on_message(model=Message)
+ async def message_handler(ctx: Context, sender: str, msg: Message):
+ ctx.logger.info(f"Received message from {sender}: {msg.message}")
+
+ await ctx.send(sender, Message(message="hello there"))
+
+ if __name__ == "__main__":
+ agent.run()
+
+ ```
+
+Okay, let's now run these agents.
+
+### Running the agents
+
+Let's run the second agent's script first using this command: `poetry run python agent2.py`
+
+**We must run the second agent first to get its unique address**, which is shown in the log output. Let's update the `agent1.py` script by filling the `RECIPIENT_ADDRESS` field with the address of the second agent from the output we got by running the `agent2.py` script.
+
+Updated `agent1.py` script sample:
+
+ ```python copy filename="agent1.py"
+ from uagents import Agent, Context, Model
+ from uagents.setup import fund_agent_if_low
+
+ class Message(Model):
        message: str
+
+ RECIPIENT_ADDRESS="agent...."
+
+ agent = Agent(
+ ...
+ ```
+
+Then, let's run the script for the first agent using this command: `poetry run python agent1.py`
+
+Great! You should now see some log output with our messages being displayed.
+
+### Output
+
+- **Agent 1**:
+
+ ```
+ INFO: [agent]: Registering on almanac contract...
+ INFO: [agent]: Registering on almanac contract...complete
+ INFO: [agent]: Starting server on http://0.0.0.0:8000 (Press CTRL+C to quit)
+ INFO: [agent]: Received message from agent1qf4au6rzaauxhy2jze6v85rspgvredx9m42p0e0cukz0hv4dh2sqjuhujpp: hello there
+ INFO: [agent]: Received message from agent1qf4au6rzaauxhy2jze6v85rspgvredx9m42p0e0cukz0hv4dh2sqjuhujpp: hello there
+ INFO: [agent]: Received message from agent1qf4au6rzaauxhy2jze6v85rspgvredx9m42p0e0cukz0hv4dh2sqjuhujpp: hello there
+ ```
+
+- **Agent 2**:
+
+ ```
+ INFO: [agent 2]: Registering on almanac contract...
+ INFO: [agent 2]: Registering on almanac contract...complete
+ INFO: [agent 2]: agent address is agent1qf4au6rzaauxhy2jze6v85rspgvredx9m42p0e0cukz0hv4dh2sqjuhujpp
+ INFO: [agent 2]: Starting server on http://0.0.0.0:8001 (Press CTRL+C to quit)
+ ```
+
+## Wrapping them together - Building a service
+
+Let's go further now and change our agents' scripts by splitting up the logic of the CrewAI example above. Let's have one agent that creates a task and another agent which fulfills it.
+
+### Agent one: Senior Research Analyst Agent
+
+The `senior_research_analyst_agent` is designed to handle requests for city-specific information by researching advancements in artificial intelligence (AI) and providing current weather updates. It utilizes the `SerperDevTool` for web searches and employs models for managing input and output.
+
+- `create_task(city: str) -> Task`: This method formulates a task description that outlines the specific research and weather update needed for the given city, returning a Task object.
+
+- `run_process(city: str)`: This method coordinates the execution of the created task. It initializes a Crew process that includes the research agent and the task, allowing it to gather results efficiently.
+
+- `handle_city_request(ctx: Context, sender: str, msg: CityRequestModel)`: This function listens for incoming messages containing city names. Upon receiving a request, it logs the city name, initiates the research process, and sends back a report that includes both AI advancements and weather data.
+
+When a request is received with a city name, the agent logs the request, conducts a comprehensive analysis, and returns a report that combines AI insights with real-time weather data. This functionality makes it a valuable tool for research organizations and tech consultancies seeking relevant information.
+
+ ```python copy filename="crewai_agent_1.py"
+ from uagents import Agent, Context, Model
+ import os
+ from crewai import Agent as CrewAIAgent, Task, Crew, Process
+ from crewai_tools import SerperDevTool
+
+ senior_research_analyst_agent = Agent(
+ name="senior_research_analyst_agent",
+ seed="senior_research_analyst_agent_seed",
+ port=8001,
+ endpoint=["http://127.0.0.1:8001/submit"],
+ )
+
+
+ class CityRequestModel(Model):
+ city: str
+
+
+ class ResearchReportModel(Model):
+ report: str
+
+
+ os.environ["OPENAI_API_KEY"] = ""
+ os.environ["SERPER_API_KEY"] = ""
+
+
+ class SeniorResearchAnalyst:
+ def __init__(self):
+ """
+ Initializes the Senior Research Analyst agent with a search tool.
+ """
+ self.search_tool = SerperDevTool()
+
+ self.researcher = CrewAIAgent(
+ role="Senior Research Analyst",
+ goal="Uncover cutting-edge developments in AI and provide weather updates.",
+ backstory="""You work at a leading tech think tank.
+ Your expertise lies in identifying emerging trends and understanding external factors like weather.""",
+ verbose=True,
+ allow_delegation=False,
+ tools=[self.search_tool],
+ )
+
+ def create_task(self, city: str) -> Task:
+ """
+ Creates a task for conducting research on AI advancements and retrieving weather updates.
+
+ Parameters:
+ - city: str, the city for which the weather update is requested.
+
+ Returns:
+ - Task: The created task with the specified description and expected output.
+ """
+ task_description = (
+ f"Conduct a comprehensive analysis of the latest advancements in AI in 2024. "
+ f"Also, use the search tool to provide the current weather update for {city}."
+ )
+
+ return Task(
+ description=task_description,
+ expected_output="Full analysis report with weather data",
+ agent=self.researcher,
+ )
+
+ def run_process(self, city: str):
+ """
+ Runs the process for the created task and retrieves the result.
+
+ Parameters:
+ - city: str, the city for which the task is run.
+
+ Returns:
+ - result: The output from the CrewAI process after executing the task.
+ """
+ task = self.create_task(city)
+ crew = Crew(
+ agents=[self.researcher],
+ tasks=[task],
+ verbose=True,
+ process=Process.sequential,
+ )
+ result = crew.kickoff()
+ return result
+
+
+ @senior_research_analyst_agent.on_message(model=CityRequestModel, replies=ResearchReportModel)
+ async def handle_city_request(ctx: Context, sender: str, msg: CityRequestModel):
+ """
+ Handles incoming messages requesting city information.
+
+ What it does:
+ - Logs the received city name.
+ - Runs the research process for the specified city and sends the report back to the sender.
+
+ Parameters:
+ - ctx: Context, provides the execution context for logging and messaging.
+ - sender: str, the address of the sender agent.
+ - msg: CityRequestModel, the received message containing the city name.
+
+ Returns:
+ - None: Sends the research report to the sender agent.
+ """
+ ctx.logger.info(f"Received message from {sender} with city: {msg.city}")
+ research_analyst = SeniorResearchAnalyst()
+ gather_task_result = research_analyst.run_process(msg.city)
+ await ctx.send(sender, ResearchReportModel(report=str(gather_task_result)))
+
+
+ if __name__ == "__main__":
+ """
+ Starts the communication agent and begins listening for messages.
+
+ What it does:
+ - Runs the agent, enabling it to send/receive messages and handle events.
+
+ Returns:
+ - None: Runs the agent loop indefinitely.
+ """
+ senior_research_analyst_agent.run()
+
+ ```
+
+### Agent two: Research Asking Agent
+
+The `research_asking_agent` is designed to initiate requests for city-specific research information and manage responses related to research reports.
+
+- Models: `CityRequestModel` for storing the city name and `ResearchReportModel` for the research findings.
+
+- Startup Event: The `on_startup` function logs the agent's name and address, then sends a message to a target agent with the default city (London).
+
+- Message Handling: The `handle_research_report` function processes incoming reports, logging the sender's address and the content of the research report.
+
+This agent operates continuously, enabling effective communication and retrieval of city-related research insights.
+
+ ```python copy filename="crewai_agent_2.py"
+ from uagents import Agent, Context, Model
+
+
+ class CityRequestModel(Model):
+ city: str
+
+
+ class ResearchReportModel(Model):
+ report: str
+
+
+ research_asking_agent = Agent(
+ name="research_asking_agent",
+ seed="research_asking_agent_seed",
+ port=8000,
+ endpoint=["http://127.0.0.1:8000/submit"],
+ )
+
+ TARGET_AGENT_ADDRESS = (
+ "agent1qgxfhzy78m2qfdsg726gtj4vnd0hkqx96xwprng2e4rmn0xfq7p35u6dz8q"
+ )
+ DEFAULT_CITY = "London"
+
+
+ @research_asking_agent.on_event("startup")
+ async def on_startup(ctx: Context):
+ """
+ Triggered when the agent starts up.
+
+ What it does:
+ - Logs the agent's name and address.
+ - Sends a message to the target agent with the default city (e.g., 'London').
+
+ Parameters:
+ - ctx: Context, provides the execution context for logging and messaging.
+
+ Returns:
+ - None: Sends the message to the target agent asynchronously.
+ """
+ ctx.logger.info(
+ f"Hello, I'm {research_asking_agent.name}, and my address is {research_asking_agent.address}."
+ )
+
+ await ctx.send(TARGET_AGENT_ADDRESS, CityRequestModel(city=DEFAULT_CITY))
+
+
+ @research_asking_agent.on_message(model=ResearchReportModel)
+ async def handle_research_report(ctx: Context, sender: str, msg: ResearchReportModel):
+ """
+ Triggered when a message of type ResearchReportModel is received.
+
+ What it does:
+ - Logs the sender's address and the research report received.
+
+ Parameters:
+ - ctx: Context, provides the execution context for logging and messaging.
+ - sender: str, the address of the sender agent.
+ - msg: ResearchReportModel, the received research report.
+
+ Returns:
+ - None: Processes the message and logs it.
+ """
+ ctx.logger.info(f"Received research report from {sender}: {msg.report}")
+
+
+ if __name__ == "__main__":
+ """
+ Starts the research analyst agent and begins listening for events.
+
+ What it does:
+ - Runs the agent, enabling it to send/receive messages and handle events.
+
+ Returns:
+ - None: Runs the agent loop indefinitely.
+ """
+ research_asking_agent.run()
+
+ ```
+
+### Output
+
+Run `poetry run python crewai_agent_1.py` first and then `poetry run python crewai_agent_2.py`.
+
+You should get something similar to the following for each agent:
+
+- **Agent 2**:
+
+ ```
+ INFO: [research_asking_agent]: Registration on Almanac API successful
+ INFO: [research_asking_agent]: Almanac contract registration is up to date!
+ INFO: [research_asking_agent]: Hello, I'm research_asking_agent, and my address is agent1qvfw094hmsfrvmlw8amg4ypyvd2wm8hvahxt0zwmareus6fzajpm5dpgh3f.
+ INFO: [research_asking_agent]: Starting server on http://0.0.0.0:8000 (Press CTRL+C to quit)
+ INFO: [research_asking_agent]: Received research report from agent1qgxfhzy78m2qfdsg726gtj4vnd0hkqx96xwprng2e4rmn0xfq7p35u6dz8q: **Comprehensive Analysis of the Latest Advancements in AI in 2024**
+
+ The year 2024 has emerged as a pivotal period for artificial intelligence, with several groundbreaking trends and developments. After analyzing multiple sources, here are the key advancements in AI for this year:
+
+ 1. **Multimodal AI**:
+ Multimodal AI involves the integration of multiple forms of data including text, audio, and visual inputs to create more sophisticated and context-aware AI models. This development allows for more accurate and comprehensive AI systems capable of better mimicking human understanding.
+
+ 2. **Small Language Models**:
+ As opposed to massive language models like GPT-3, smaller language models are becoming more prevalent. These models are more efficient, faster, and require fewer resources while maintaining a high level of performance.
+
+ 3. **Customizable Generative AI**:
+ The rise of generative AI that can be fine-tuned for specific tasks has been significant. Such AI systems can be adapted to meet the unique needs of different users and industries, providing more personalized and effective solutions.
+
+ 4. **Agentic AI**:
+ These are AI systems designed to act autonomously to achieve specific goals, showing more complex behavior and decision-making capabilities. This advancement heralds a future where AI can conduct complex tasks with minimal human intervention.
+
+ 5. **Open Source AI**:
+ There is a growing trend towards open-sourcing AI models which facilitates collaboration and innovation among researchers and developers worldwide. This movement is democratizing access to advanced AI technologies.
+
+ 6. **Retrieval-Augmented Generation**:
+ This involves AI systems that can search for and retrieve external information to enhance their responses. This technology is particularly useful in creating more accurate and up-to-date AI systems.
+
+ 7. **Generative AI and Copyright Challenges**:
+ The rise of generative AI has brought about new discussions around copyright and usage laws. As AI becomes more creative, legal frameworks must adapt to address ownership and licensing issues.
+
+ 8. **Licensing Innovations**:
+ To keep up with the rapid development in AI, new licensing models are being explored. These models aim to balance innovation with fair use and protection of intellectual property.
+
+ 9. **AI Patent Developments**:
+ Innovations in AI are leading to an increase in AI-related patents. This increase highlights the competitive landscape and the importance of intellectual property in the field of AI.
+
+ 10. **Healthcare Applications**:
+ AI’s role in healthcare continues to grow, particularly in diagnostics, personalized medicine, and patient care. AI technologies are aiding in early detection of diseases and creating more effective treatment plans.
+
+ These developments collectively signify a year of significant progress in AI, promising transformative impacts across various industries.
+
+ **Current Weather Update for London**
+
+ As of the latest reports, the current weather in London, England, United Kingdom is as follows:
+
+ - Morning: 65°F, partly cloudy with an 11% chance of rain.
+ - Afternoon: 71°F, sunny with 0% chance of rain.
+ - Evening: 62°F, partly cloudy with a 6% chance of rain.
+ - Overnight: 60°F, clear skies.
+
+ Overall, London is experiencing mild weather with clear skies and minimal chances of rain throughout the day.
+
+ This comprehensive analysis and weather update should provide valuable insights into the latest advancements in AI and the current weather conditions in London.`
+ ```
diff --git a/pages/guides/quickstart-with/CrewAI/startup-idea-analyser.mdx b/pages/guides/quickstart-with/CrewAI/startup-idea-analyser.mdx
new file mode 100644
index 000000000..15dfa745d
--- /dev/null
+++ b/pages/guides/quickstart-with/CrewAI/startup-idea-analyser.mdx
@@ -0,0 +1,270 @@
+# Startup Idea Analyzer with CrewAI and Agents
+
+## Introduction
+
+The fast-paced startup environment makes it essential to validate a business idea quickly and efficiently in order to attract users and investors. Fetch.ai [Agents ↗️](/guides/agents/getting-started/whats-an-agent) technology makes it possible to develop an application able to evaluate startup ideas autonomously. By combining **CrewAI** and **Agents**, it is possible to design a **Startup Idea Analyzer** offering a structured way to assess the feasibility and validity of any business idea you may have. By leveraging agents, which are programmable micro-services designed to communicate with one another, this system can independently evaluate key aspects of your business, including market demand, technological needs, and business strategy, with the aim of providing a comprehensive evaluation of your startup idea.
+
+This guide walks you through setting up and running a simple evaluation system, allowing you not only to assess your idea's viability but also to integrate it with other systems, users and agents, thus unlocking new potential for economic collaboration and growth.
+
+Let's get started!
+
+### Supporting documentation
+
+- [Creating an agent ↗️](/guides/agents/create-a-uagent)
+- [Communicating with other agents ↗️](/guides/agents/communicating-with-other-agents)
+- [Register in Almanac ↗️](/guides/agents/register-in-almanac)
+- [Almanac Contract ↗️](/references/contracts/uagents-almanac/almanac-overview)
+- [Utilising the Agentverse Mailroom service ↗️](/guides/agentverse/utilising-the-mailbox)
+- [Protocols ↗️](/references/uagents/uagents-protocols/agent-protocols)
+- [Agentverse Functions ↗️](/guides/agents/intermediate/agent-functions)
+- [Register an Agent Function on the Agentverse ↗️](/guides/agentverse/agentverse-functions/registering-agent-services)
+
+## Pre-requisites
+
+ - **Python**: Download and install from [Python official website ↗️](https://www.python.org/downloads/).
+ - **Poetry**: Install by following the instructions on [Poetry's official website ↗️](https://python-poetry.org/docs/#installation).
+ - **Gemini API key**: To get the Gemini API key, sign up at [Google AI Studio ↗️](https://aistudio.google.com/app/apikey) and generate the API key.
+ - **Mailbox API key**: You can get the Mailbox API key by following the [Mailbox ↗️](/guides/agents/intermediate/mailbox) guide.
+
+## Project Structure
+
+The Agent project presented in this guide has the following structure:
+
+ ```
+ .startup-idea-analyser
+ ├── agent.py
+ ├── crew_ai.py
+ ├── poetry.lock
+ ├── project.json
+ ├── pyproject.toml
+ └── README.md
+ ```
+
+ Each file serves a specific purpose in building and managing the **Startup Idea Analyzer** system:
+
+ - `agent.py`: this defines the core agent setup and protocol communication.
+ - `crew_ai.py`: it defines the market research, technology assistance and business strategy processes.
+ - `poetry.lock`: this ensures a consistent dependency management.
+ - `project.json`: it contains the project configuration.
+ - `pyproject.toml`: it manages Python package dependencies with Poetry.
+ - `README.md`: it is the documentation needed to explain how to use or contribute to the project.
+
+## Poetry Dependencies
+
+ ```pyproject.toml copy filename="pyproject.toml"
+ [tool.poetry.dependencies]
+ python = ">=3.10,<3.12"
+ uagents = "0.14.0"
+ requests = "^2.31.0"
+ uagents-ai-engine = "0.4.0"
+ crewai = "0.36.0"
+ python-dotenv = "1.0.1"
+ langchain-google-genai = "1.0.7"
+ ```
+
+## Startup Idea Analyzer
+
+This guide showcases how to use Agents to analyze a startup idea by performing **market research**, **technological assessments**, and **business planning**. It employs two different systems: `uagents` (for agent-based communication) and `crew_ai` (for task-driven collaboration).
+
+### Features
+
+ - **Market Research**: The Marketing Agent dives deep into understanding the demand for your product, defining the ideal customer, and crafting strategies to reach a wide audience.
+ - **Technological Analysis**: The Technology Expert Agent assesses essential technologies needed to create your product efficiently and with top quality.
+ - **Business Strategy Development**: The Business Consultant Agent synthesizes the information into a comprehensive business plan. This plan includes key strategies, detailed milestones, and a timeline to guide you toward profitability and sustainability.
+ - **Provides over 10 crucial insights**.
+ - **Identifies 5 clear business goals**.
+ - **Generates a detailed timeline charting your path to success**.
+
+### How It Works
+
+The aim of the following system is not only to aid in market analysis and technology assessment but also to provide a tailor-made business blueprint for the success of your business idea.
+
+Here below we depict how the system works:
+
+ 1. **Market Research**: The _Marketing Agent_ conducts thorough research to understand market demand, customer demographics, and effective marketing strategies.
+ 2. **Technological Analysis**: The _Technology Expert Agent_ evaluates the necessary technologies, offering insights on the best tools and methods for your product development.
+ 3. **Business Strategy Development**: The _Business Consultant Agent_ consolidates all gathered information to create a comprehensive business plan, including strategies, milestones, and a timeline.
+
+### Agent System
+
+This system defines an Agent called **startup idea analyser** that interacts with other Agents using a [protocol ↗️](/references/uagents/uagents-protocols/agent-protocols). The Agent listens for messages containing a startup idea `description` and forwards this input to a market research process, which returns a response based on the input provided.
+
+ ```py copy filename="agent.py"
+ import os
+
+ from ai_engine import UAgentResponse, UAgentResponseType
+ from uagents import Agent, Context, Field, Model, Protocol
+ from uagents.setup import fund_agent_if_low
+
+ from crew_ai import MarketResearchProcess
+
+ AGENT_MAILBOX_KEY = os.environ.get("AGENT_MAILBOX_KEY")
+ SEED_PHRASE = "YOUR_SEED_PHRASE"
+
+ agent = Agent(
+ name="startup idea analyser",
+ seed=SEED_PHRASE,
+ mailbox=f"{AGENT_MAILBOX_KEY}@https://agentverse.ai",
+ )
+
+ protocol = Protocol("Startup idea Analyser version", version="0.1.2")
+ researcher = MarketResearchProcess()
+
+ print(agent.address, "agent_address")
+
+ fund_agent_if_low(agent.wallet.address())
+
+ class StartUpIdeaAnalyser(Model):
+ description: str = Field(
+ description = "describes the field which will be the description of the startup idea and it is provided by the user in context"
+ )
+
+ @protocol.on_message(model=StartUpIdeaAnalyser, replies={UAgentResponse})
+ async def on_message(ctx: Context, sender: str, msg: StartUpIdeaAnalyser):
+ ctx.logger.info(f"Received message from {sender}, message {msg.description}")
+ result = researcher.run_process(msg.description)
+ await ctx.send(
+ sender, UAgentResponse(message=result, type=UAgentResponseType.FINAL)
+ )
+
+ agent.include(protocol, publish_manifest=True)
+
+ agent.run()
+ ```
+
+ The `Agent` is defined using a seed phrase for wallet generation and a mailbox for message exchanges. Then, a protocol named `Startup idea Analyser version` (`version 0.1.2`) defines how the agent processes incoming messages. The agent uses this protocol to respond to requests containing startup idea descriptions. The agent relies on a component called `MarketResearchProcess` from an external module `crew_ai` to conduct market research on the provided startup idea. This research process is triggered upon receiving a startup idea description. The `StartUpIdeaAnalyser` class defines a data model for startup idea descriptions; this model expects a field `description` that holds the string of text describing the startup idea provided by the user.
+
+ Whenever the agent receives a message matching the `StartUpIdeaAnalyser` model, it logs the incoming message, sends the startup description to the `MarketResearchProcess`, and then responds to the sender with the result. The agent is set to include this `protocol`, publish a manifest of its functionalities, and run continuously to process incoming requests.
+
+### CrewAI System
+
+The following script defines a `MarketResearchProcess` class that sets up a market research workflow using three specialized agents: a `Market Research Analyst`, a `Technology Expert`, and a `Business Development Consultant`. The script creates and assigns tasks to these agents, such as analyzing market demand, assessing technological needs, and developing a business plan. It then uses the `Crew` class to execute these tasks sequentially, passing the results from one agent to the next one, to then return the final outcome.
+
+ ```py copy filename="crew_ai.py"
+ import os
+
+ from crewai import Agent, Crew, Process, Task
+ from dotenv import load_dotenv
+ from langchain_google_genai import ChatGoogleGenerativeAI
+
+ load_dotenv()
+
+ class MarketResearchProcess:
+ def __init__(self):
+ api_gemini = os.environ.get("GEMINI_API_KEY")
+ self.llm = ChatGoogleGenerativeAI(
+ model="gemini-pro", verbose=True, temperature=0.1, google_api_key=api_gemini
+ )
+
+ self.marketer = Agent(
+ role="Market Research Analyst",
+ goal="Find out how big is the demand for my products and suggest how to reach the widest possible customer base",
+ backstory="""You are an expert at understanding the market demand, target audience, and competition. This is crucial for
+ validating whether an idea fulfills a market need and has the potential to attract a wide audience. You are good at coming up
+ with ideas on how to appeal to widest possible audience.
+ """,
+ verbose=True, # enable more detailed or extensive output
+ allow_delegation=True, # enable collaboration between agent
+ llm=self.llm, # to load gemini
+ )
+
+ self.technologist = Agent(
+ role="Technology Expert",
+ goal="Make assessment on how technologically feasible the company is and what type of technologies the company needs to adopt in order to succeed",
+ backstory="""You are a visionary in the realm of technology, with a deep understanding of both current and emerging technological trends. Your
+ expertise lies not just in knowing the technology but in foreseeing how it can be leveraged to solve real-world problems and drive business innovation.
+ You have a knack for identifying which technological solutions best fit different business models and needs, ensuring that companies stay ahead of
+ the curve. Your insights are crucial in aligning technology with business strategies, ensuring that the technological adoption not only enhances
+ operational efficiency but also provides a competitive edge in the market.""",
+ verbose=True, # enable more detailed or extensive output
+ allow_delegation=True, # enable collaboration between agent
+ llm=self.llm, # to load gemini
+ )
+
+ self.business_consultant = Agent(
+ role="Business Development Consultant",
+ goal="Evaluate and advise on the business model, scalability, and potential revenue streams to ensure long-term sustainability and profitability",
+ backstory="""You are a seasoned professional with expertise in shaping business strategies. Your insight is essential for turning innovative ideas
+ into viable business models. You have a keen understanding of various industries and are adept at identifying and developing potential revenue streams.
+ Your experience in scalability ensures that a business can grow without compromising its values or operational efficiency. Your advice is not just
+ about immediate gains but about building a resilient and adaptable business that can thrive in a changing market.""",
+ verbose=True, # enable more detailed or extensive output
+ allow_delegation=True, # enable collaboration between agent
+ llm=self.llm, # to load gemini
+ )
+
+ def create_tasks(self, input_data):
+ self.task1 = Task(
+ description=f"""Analyze what the market demand for {input_data}.
+ Write a detailed report with description of what the ideal customer might look like, and how to reach the widest possible audience. The report has to
+ be concise with at least 10 bullet points and it has to address the most important areas when it comes to marketing this type of business.
+ """,
+ agent=self.marketer,
+ expected_output="A detailed market research report with at least 10 bullet points on the ideal customer and marketing strategy.",
+ )
+
+ self.task2 = Task(
+ description=f"""Analyze {input_data}. Write a detailed report
+ with description of which technologies the business needs to use in order to make High Quality T shirts. The report has to be concise with
+ at least 10 bullet points and it has to address the most important areas when it comes to manufacturing this type of business.
+ """,
+ agent=self.technologist,
+ expected_output="A detailed technological report with at least 10 bullet points on the necessary technologies for manufacturing.",
+ )
+
+ self.task3 = Task(
+ description=f"""Analyze and summarize marketing and technological report and write a detailed business plan with
+ description of {input_data}.
+ The business plan has to be concise with at least 10 bullet points, 5 goals and it has to contain a time schedule for which goal should be achieved and when.
+ """,
+ agent=self.business_consultant,
+ expected_output="A detailed business plan with at least 10 bullet points, 5 goals, and a time schedule.",
+ )
+
+ def run_process(self, input_data):
+ self.create_tasks(input_data)
+
+ crew = Crew(
+ agents=[self.marketer, self.technologist, self.business_consultant],
+ tasks=[self.task1, self.task2, self.task3],
+ verbose=2,
+ process=Process.sequential, # Sequential process will have tasks executed one after the other and the outcome of the previous one is passed as extra content into this next.
+ )
+
+ result = crew.kickoff(inputs={"input": input_data})
+ return result
+ ```
+
+ The script initializes by loading a Large Language Model (LLM) called `ChatGoogleGenerativeAI`, which uses the Gemini API for generating detailed text-based outputs. This LLM is shared among the agents to enhance their ability to generate accurate and detailed reports.
+
+ **Each agent has a unique role and goal**: the **Market Research Analyst** focuses on understanding market demand, target audience, and marketing strategies. The **Technology Expert** assesses the technological feasibility of the company and suggests necessary technologies for success. The **Business Development Consultant** evaluates the business model, scalability, and potential revenue streams to ensure long-term sustainability.
+
+ The `create_tasks` method generates specific tasks for each agent. These tasks involve analyzing input data (e.g., a business idea), producing reports in their areas of expertise, and then passing these reports from one agent to the next one. For instance, the Market Research Analyst creates a marketing strategy, which the Technology Expert uses to assess technological needs, and finally, the Business Development Consultant uses both reports to develop a business plan.
+
+ The `run_process` method coordinates the execution of these tasks in a sequential manner. A `Crew` is formed, comprising the three agents and their tasks. The process is set to execute each task one after the other, passing the output of one task as input for the next. The final result is a comprehensive report that includes market research, technological assessments, and a business plan.
+
+## How to Run This Example
+
+### Update the required environment variables
+
+You need to provide the following API Keys to correctly run the example provided within this guide:
+
+ ```env copy filename=".env.example"
+ AGENT_MAILBOX_KEY = "YOUR_MAILBOX_KEY"
+ GEMINI_API_KEY = "YOUR_GEMINI_API_KEY"
+ ```
+
+### Run the example
+
+ - Navigate to the root folder of the example.
+ - Update the `.env` file.
+ - Install dependencies by running this command: `poetry install`.
+ - Execute the script by running the following command: `python agent.py`.
+
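+Putting the last two steps together, the commands from the project root look roughly like this:
+
    ```bash copy
    poetry install
    python agent.py
    ```
+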
+### Expected Output
+
+To run the example, make sure your agent script is running on your end and then head over to [DeltaV ↗️](/concepts/ai-engine/deltav) and provide a description for your startup idea in the dedicated bar. Choose the [AI Engine personality ↗️](/concepts/ai-engine/ai-engine-personalities) and click on the **Start** button. You should get something similar to the following:
+
+
+
+
diff --git a/pages/guides/quickstart-with/_meta.json b/pages/guides/quickstart-with/_meta.json
new file mode 100644
index 000000000..971225077
--- /dev/null
+++ b/pages/guides/quickstart-with/_meta.json
@@ -0,0 +1,4 @@
+{
+ "CrewAI": "CrewAI",
+ "langchain": "LangChain"
+}
diff --git a/pages/guides/quickstart-with/langchain/_meta.json b/pages/guides/quickstart-with/langchain/_meta.json
new file mode 100644
index 000000000..ef307f076
--- /dev/null
+++ b/pages/guides/quickstart-with/langchain/_meta.json
@@ -0,0 +1,12 @@
+{
+ "creating-an-agent-with-langchain": {
+ "title": "Getting started with Fetch.ai x Langchain",
+ "tags": ["Intermediate", "Python", "Functions", "LangChain", "Use Cases"],
+ "timestamp": true
+ },
+ "multiple-agent-workflows": {
+ "title": "Multi-agent workflows with Fetch.ai x Langchain",
+ "tags": ["Intermediate", "Python", "LangChain", "Functions", "Use Cases"],
+ "timestamp": true
+ }
+}
diff --git a/pages/guides/quickstart-with/langchain/creating-an-agent-with-langchain.mdx b/pages/guides/quickstart-with/langchain/creating-an-agent-with-langchain.mdx
new file mode 100644
index 000000000..befe945be
--- /dev/null
+++ b/pages/guides/quickstart-with/langchain/creating-an-agent-with-langchain.mdx
@@ -0,0 +1,346 @@
+# Getting started with Fetch.ai x Langchain
+
+Fetch.ai creates a dynamic communication layer that allows you to abstract away components into individual [Agents ↗️](/guides/agents/getting-started/whats-an-agent). Agents are micro-services that are programmed to communicate with other agents and/or humans. By using **Agents** to represent different parts of your **Langchain** program, you give your project the option to be used by [other parties ↗️](/guides/agents/intermediate/communicating-with-other-agents) for economic benefit.
+
+Let's take a look at a simple Langchain example, then see how we can extend this with agents.
+
+## A simple langchain example
+
+Let's create a simple script that can find any information in a PDF. Using a document loader from Langchain, a FAISS vector store and OpenAI, we can load the PDF, create embeddings on the documents with OpenAI, store them in a `FAISS` vector store, and then use `FAISS` to do a similarity search. Quite a lot for a small example, but it is only a handful of lines of code:
+
+```python copy
+from langchain_community.document_loaders import PyPDFLoader
+import os
+from langchain_community.vectorstores import FAISS
+from langchain_openai import OpenAIEmbeddings
+
+openai_api_key = os.environ['OPENAI_API_KEY']
+
+loader = PyPDFLoader("./your-pdf.pdf")
+pages = loader.load_and_split()
+
+faiss_index = FAISS.from_documents(pages, OpenAIEmbeddings(openai_api_key=openai_api_key))
+
+docs = faiss_index.similarity_search("what problem does fetch solve?", k=2)
+for doc in docs:
+ print(str(doc.metadata["page"]) + ":", doc.page_content[:600])
+
+```
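+
+If you want to try this snippet on its own, you will need an `OPENAI_API_KEY` in your environment plus the Langchain packages it imports; something along these lines should do (package names can vary slightly between Langchain versions, and `simple_pdf_search.py` is just a placeholder name for the script above):
+
+```bash copy
+export OPENAI_API_KEY="YOUR_OPENAI_API_KEY"
+# install the libraries the snippet imports (PyPDFLoader also needs pypdf)
+pip install langchain-community langchain-openai faiss-cpu pypdf
+python simple_pdf_search.py
+```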
+
+However, there are a lot of smaller pieces at work there. If we use agents for each step, then other agents can reuse those pieces of code 💡.
+
+## A simple communication with agents
+
+Fetch.ai has the concept of an agent: at a base level, an agent cannot do what Langchain does, but it is the component that links the pieces together.
+
+You can read more about agent communication in our [guides ↗️](/guides/agents/intermediate/communicating-with-other-agents).
+
+Let's install what we need:
+
+```bash copy
+poetry init
+poetry add uagents
+```
+
+Check out more detailed instructions for the [installation ↗️](/guides/agents/getting-started/installing-uagent) of the `uagents` library on your end.
+
+### First Agent
+
+Our first agent is simple; it sends a message every two seconds to a static address. When this agent receives a message, it prints it to the log:
+
+```python copy filename="agent1.py"
+from uagents import Agent, Context, Model
+from uagents.setup import fund_agent_if_low
+
+class Message(Model):
+ message: str
+
+RECIPIENT_ADDRESS = "agent1qf4au6rzaauxhy2jze6v85rspgvredx9m42p0e0cukz0hv4dh2sqjuhujpp"
+
+agent = Agent(
+ name="agent",
+ port=8000,
+ seed="",
+ endpoint=["http://127.0.0.1:8000/submit"],
+)
+
+fund_agent_if_low(agent.wallet.address())
+
+@agent.on_interval(period=2.0)
+async def send_message(ctx: Context):
+ await ctx.send(RECIPIENT_ADDRESS, Message(message="hello there"))
+
+@agent.on_message(model=Message)
+async def message_handler(ctx: Context, sender: str, msg: Message):
+ ctx.logger.info(f"Received message from {sender}: {msg.message}")
+
+if __name__ == "__main__":
+ agent.run()
+
+```
+
+This first agent introduces a few core concepts you will need to be aware of when creating any agent.
+
+Agents are defined with the `Agent` class:
+
+```python copy
+ agent = Agent(
+ name="agent",
+ port=8000,
+ seed="",
+ endpoint=["http://127.0.0.1:8000/submit"],
+)
+```
+
+A `seed` is a unique phrase which the `uagents` library uses to derive a unique private key pair for your agent. If you change your `seed`, the agent's address registered in the Almanac will change and you may lose access to previous messages. The `port` allows us to define a local port on which messages are received. The `endpoint` defines the path to the in-built REST API. The `name` defines the name of the agent.
+
+There are more options for the `Agent` class; see [`Agent` Class ↗️](/references/uagents/uagents-api/agent) for further reference.
+
+We then need to define our communication model:
+
+```python copy
+ class Message(Model):
+ message: str
+```
+
+The `Model` defines the object sent from agent to agent and represents the type of messages the agent is able to handle. For explicit communication, both agents must use the same `Model` class. `Model` is the `uagents` base class, which inherits from Pydantic's `BaseModel`.
+
+The `fund_agent_if_low(agent.wallet.address())` call tops up the agent's wallet if its balance is low. As the economy of agents matures, agents will ultimately pay for discoverability, and this acts as a placeholder for that registration step.
+
+Finally, agents have two decorated functions.
+
+The first one is the `agent.on_interval()` function, which sends a message every 2 seconds. `ctx.send()` takes the destination address and the `Message` we defined earlier as its arguments.
+
+```python copy
+@agent.on_interval(period=2.0)
+async def send_message(ctx: Context):
+ await ctx.send(RECIPIENT_ADDRESS, Message(message="hello there"))
+```
+
+The second one is `agent.on_message()`, which is a little different; when the agent receives a message at the `endpoint` we defined earlier, the `uagents` library unpacks it and triggers any function that handles that message type; in our case, the `agent.on_message()` function:
+
+```python copy
+@agent.on_message(model=Message)
+async def message_handler(ctx: Context, sender: str, msg: Message):
+ ctx.logger.info(f"Received message from {sender}: {msg.message}")
+```
+
+### Second Agent
+
+Agent two doesn't do anything very different from agent one; it uses different args for the `Agent` instantiation, and instead of sending messages on an interval, it simply logs its own address on `on_event("startup")`. Whenever agent two receives a message matching the `Message` data model, it sends a response back to the sender.
+
+```python copy filename="agent2.py"
+from uagents.setup import fund_agent_if_low
+from uagents import Agent, Context, Model
+
+
+class Message(Model):
+ message: str
+
+agent = Agent(
+ name="agent 2",
+ port=8001,
+ seed="",
+ endpoint=["http://127.0.0.1:8001/submit"],
+)
+
+fund_agent_if_low(agent.wallet.address())
+
+@agent.on_event("startup")
+async def start(ctx: Context):
+ ctx.logger.info(f"agent address is {agent.address}")
+
+@agent.on_message(model=Message)
+async def message_handler(ctx: Context, sender: str, msg: Message):
+ ctx.logger.info(f"Received message from {sender}: {msg.message}")
+
+ await ctx.send(sender, Message(message="hello there"))
+
+if __name__ == "__main__":
+ agent.run()
+
+```
+
+Okay, let's now run these agents.
+
+### Running the agents
+
+Let's run the second agent's script first using this command: `poetry run python agent2.py`
+
+**We must run the second agent first to get its unique address**, which is shown in the log output. Let's update the `agent1.py` script by filling the `RECIPIENT_ADDRESS` field with the address of the second agent from the output we got by running the `agent2.py` script.
+
+Updated `agent1.py` script sample:
+
+```python copy filename="agent1.py"
+from uagents import Agent, Context, Model
+from uagents.setup import fund_agent_if_low
+
+class Message(Model):
+    message: str
+
+RECIPIENT_ADDRESS="agent...."
+
+agent = Agent(
+ ...
+```
+
+Then, let's run the script for the first agent using this command: `poetry run python agent1.py`
+
+Great! You should now see some log output with our messages being displayed.
+
+### Output
+
+- **Agent 1**:
+
+ ```
+ INFO: [agent]: Registering on almanac contract...
+ INFO: [agent]: Registering on almanac contract...complete
+ INFO: [agent]: Starting server on http://0.0.0.0:8000 (Press CTRL+C to quit)
+ INFO: [agent]: Received message from agent1qf4au6rzaauxhy2jze6v85rspgvredx9m42p0e0cukz0hv4dh2sqjuhujpp: hello there
+ INFO: [agent]: Received message from agent1qf4au6rzaauxhy2jze6v85rspgvredx9m42p0e0cukz0hv4dh2sqjuhujpp: hello there
+ INFO: [agent]: Received message from agent1qf4au6rzaauxhy2jze6v85rspgvredx9m42p0e0cukz0hv4dh2sqjuhujpp: hello there
+ ```
+
+- **Agent 2**:
+
+ ```
+ INFO: [agent 2]: Registering on almanac contract...
+ INFO: [agent 2]: Registering on almanac contract...complete
+ INFO: [agent 2]: agent address is agent1qf4au6rzaauxhy2jze6v85rspgvredx9m42p0e0cukz0hv4dh2sqjuhujpp
+ INFO: [agent 2]: Starting server on http://0.0.0.0:8001 (Press CTRL+C to quit)
+ ```
+
+## Wrapping them together - Building a service
+
+Let's go further now and change our agents' scripts by splitting up the logic of the Langchain example above. Let's have one agent that sends a PDF path and a question it wants answered about that PDF to the other agent. In turn, the other agent uses Langchain tools to return information on the PDF based on the question asked.
+
+### Agent one: providing PDF and requesting information
+
+This agent sends a `DocumentUnderstanding` message, which contains a local path to a PDF and a question that the other agent must answer about the PDF. It's a small update on our first agent script.
+
+However, `.on_message(model=DocumentsResponse)` now expects a `DocumentsResponse` object instead of a string.
+
+To learn more about communication with other agents check out the following [Guide ↗️](/guides/agents/intermediate/communicating-with-other-agents)
+
+```python copy filename="agent1.py"
+from uagents import Agent, Context, Protocol, Model
+from ai_engine import UAgentResponse, UAgentResponseType
+from typing import List
+
+class DocumentUnderstanding(Model):
+ pdf_path: str
+ question: str
+
+class DocumentsResponse(Model):
+ learnings: List
+
+agent = Agent(
+ name="find_in_pdf",
+ seed="",
+ port=8001,
+ endpoint=["http://127.0.0.1:8001/submit"]
+)
+
+print("uAgent address: ", agent.address)
+summary_protocol = Protocol("Text Summariser")
+
+RECIPIENT_PDF_AGENT = ""
+
+@agent.on_event("startup")
+async def on_startup(ctx: Context):
+ await ctx.send(RECIPIENT_PDF_AGENT, DocumentUnderstanding(pdf_path="../a-little-story.pdf", question="What's the synopsis?"))
+
+@agent.on_message(model=DocumentsResponse)
+async def document_load(ctx: Context, sender: str, msg: DocumentsResponse):
+ ctx.logger.info(msg.learnings)
+
+agent.include(summary_protocol, publish_manifest=True)
+agent.run()
+
+```
+
+### Agent two: wrapping the Langchain bits
+
+Agent two defines the same models as agent one, but this time it wraps the logic for the Langchain PDF question in the `document_load()` function, which is decorated with `.on_message(model=DocumentUnderstanding, replies=DocumentsResponse)`. You can specify a `replies` argument in your `on_message` decorators; this is useful for being more explicit with communication.
+
+```python copy filename="agent2.py"
+from langchain_community.document_loaders import PyPDFLoader
+import os
+from langchain_community.vectorstores import FAISS
+from langchain_openai import OpenAIEmbeddings
+from uagents import Agent, Context, Protocol, Model
+from typing import List
+
+class DocumentUnderstanding(Model):
+ pdf_path: str
+ question: str
+
+class DocumentsResponse(Model):
+ learnings: List
+
+pdf_questioning_agent = Agent(
+ name="pdf_questioning_agent",
+ seed="",
+ port=8003,
+ endpoint=["http://127.0.0.1:8003/submit"],
+)
+
+print("uAgent address: ", pdf_questioning_agent.address)
+pdf_loader_protocol = Protocol("Text Summariser")
+
+@pdf_questioning_agent.on_message(model=DocumentUnderstanding, replies=DocumentsResponse)
+async def document_load(ctx: Context, sender: str, msg: DocumentUnderstanding):
+ loader = PyPDFLoader(msg.pdf_path)
+ pages = loader.load_and_split()
+ openai_api_key = os.environ['OPENAI_API_KEY']
+ learnings = []
+
+ faiss_index = FAISS.from_documents(pages, OpenAIEmbeddings(openai_api_key=openai_api_key))
+
+ docs = faiss_index.similarity_search(msg.question, k=2)
+
+ for doc in docs:
+ learnings.append(str(doc.metadata["page"]) + ":" + doc.page_content[:600])
+
+ await ctx.send(sender, DocumentsResponse(learnings=learnings))
+
+pdf_questioning_agent.include(pdf_loader_protocol, publish_manifest=True)
+pdf_questioning_agent.run()
+
+```
+
+With these agents now defined, it is time to run them. Let's run Agent two first to get its address, and then update Agent one to send a message to it by filling in the `RECIPIENT_PDF_AGENT` field.
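+
+For example, after copying the address printed by agent two (the value below is just a placeholder taken from the sample output; use your own):
+
+```python copy filename="agent1.py"
+# placeholder address: replace it with the one logged by your own agent two
+RECIPIENT_PDF_AGENT = "agent1qfwfpz6dpyzvz0f0tgxax58fpppaknnqm99fpggmm2wffjcxgqe8sn4cwx3"
+```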
+
+### Expected Output
+
+Run `poetry run python agent2.py` first and then `poetry run python agent1.py`.
+
+You should get something similar to the following for each agent:
+
+- **Agent 1**:
+
+ ```
+ uAgent address agent: agent1qv9qmj3ug83vcrg774g2quz0urmlyqlmzh6a5t3r88q3neejlrffz405p7x
+ INFO: [find_in_pdf]: Manifest published successfully: Text Summariser
+ INFO: [find_in_pdf]: Registration on Almanac API successful
+ INFO: [find_in_pdf]: Almanac contract registration is up to date!
+ INFO: [find_in_pdf]: Starting server on http://0.0.0.0:8001 (Press CTRL+C to quit)
+ INFO: [find_in_pdf]: ['0: This is a simple story about two ... ]
+ ```
+
+- **Agent 2**:
+
+ ```
+ uAgent address: agent1qfwfpz6dpyzvz0f0tgxax58fpppaknnqm99fpggmm2wffjcxgqe8sn4cwx3
+ INFO: [pdf_questioning_agent]: Manifest published successfully: Text Summariser
+ INFO: [pdf_questioning_agent]: Registration on Almanac API successful
+ INFO: [pdf_questioning_agent]: Almanac contract registration is up to date!
+ INFO: [pdf_questioning_agent]: Starting server on http://0.0.0.0:8003 (Press CTRL+C to quit)
+ INFO:httpx:HTTP Request: POST https://api.openai.com/v1/embeddings "HTTP/1.1 200 OK"
+ INFO:faiss.loader:Loading faiss with AVX2 support.
+ INFO:faiss.loader:Successfully loaded faiss with AVX2 support.
+ INFO:httpx:HTTP Request: POST https://api.openai.com/v1/embeddings "HTTP/1.1 200 OK"
+ ```
+## Next steps
+
+In the [next part ↗️](/guides/quickstart-with/langchain/multiple-agent-workflows) of this introduction, we will create a multi-agent workflow where we split the logic of the PDF agent into two more agents: the first one verifies, loads and splits the PDF, and the second one uses FAISS to do the similarity search.
diff --git a/pages/guides/quickstart-with/langchain/multiple-agent-workflows.mdx b/pages/guides/quickstart-with/langchain/multiple-agent-workflows.mdx
new file mode 100644
index 000000000..7022e00a9
--- /dev/null
+++ b/pages/guides/quickstart-with/langchain/multiple-agent-workflows.mdx
@@ -0,0 +1,313 @@
+# Multi-agent workflows with Fetch.ai x Langchain
+
+Multi-agent workflows are at the forefront of modern agent development; the idea that individual [Agents ↗️](/guides/agents/getting-started/whats-an-agent) can be utilised to create larger, more complex services for people has created lots of excitement in the AI space. At Fetch.ai, we're building the agent communication layer which perfectly complements Langchain libraries.
+
+Agents representing smaller parts of a service allow many agents to represent a whole. Agents reduce technical requirements in projects: for example, you wouldn't need to write a function to calculate the historical index of a stock price, because an agent already returns that data for you.
+
+Before we go any further, please read over our introduction guide to [Agents and Langchain ↗️](/guides/quickstart-with/langchain/creating-an-agent-with-langchain).
+
+## The system
+
+Three agents make up a simple agent workflow: **RequestAgent**, **PDFQuestionAgent** and **PDFSplitAgent**.
+
+
+
+A variation of the following `Model` class is passed between each agent:
+
+```python copy
+class DocumentUnderstanding(Model):
+ pdf_path: str
+ question: str
+
+```
+
+The flow is the following:
+
+ - **RequestAgent** provides the `DocumentUnderstanding` object.
+ - **PDFQuestionAgent**, upon receiving `DocumentUnderstanding`, sends a request to the third agent to validate and split the document.
+ - **PDFSplitAgent**, upon receiving the request from PDFQuestionAgent, returns a list of pages to it.
+ - **PDFQuestionAgent**, upon receiving the pages from PDFSplitAgent, processes them to answer the question from RequestAgent and returns that answer back to it.
+
+In this example, we are hard-coding the agents' addresses, meaning we already know them. To search for agents dynamically, take a look at the [Almanac ↗️](/concepts/fetch-network/almanac).
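+
+For reference, these are the message models the full scripts below pass between the agents, gathered in one place:
+
+```python copy
+from typing import List
+
+from uagents import Model
+
+# RequestAgent -> PDFQuestionAgent: the source document and the question to answer
+class DocumentUnderstanding(Model):
+    pdf_path: str
+    question: str
+
+# PDFQuestionAgent -> PDFSplitAgent: ask for the document to be validated and split
+class PDF_Request(Model):
+    pdf_path: str
+    session: str
+
+# PDFSplitAgent -> PDFQuestionAgent: the split pages, tagged with the session
+class PagesResponse(Model):
+    pages: List
+    session: str
+
+# PDFQuestionAgent -> RequestAgent: the final answer
+class DocumentsResponse(Model):
+    learnings: str
+```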
+
+## Installation
+
+Run the following:
+
+ ```bash copy
+ poetry init
+ poetry add uagents requests langchain openai langchain-openai faiss-cpu validators
+ ```
+
+Versions used for this example are:
+
+ ```
+ [tool.poetry.dependencies]
+ python = ">=3.10,<3.12"
+ uagents = "0.12.0"
+ requests = "^2.31.0"
+ langchain = "^0.1.7"
+ openai = "^1.12.0"
+ langchain-openai = "^0.0.6"
+ faiss-cpu = "^1.7.4"
+ ```
+
+### Environment setup
+
+ ```
+ export OPENAI_API_KEY="YOUR_OPENAI_API_KEY"
+ ```
+
+### Agent 1 - RequestAgent: provides a question and a source
+
+This is our simplest agent; it provides a path to a PDF and a question to be answered from the document.
+
+```python copy filename="request_agent.py"
+from uagents import Agent, Context, Protocol, Model
+
+class DocumentUnderstanding(Model):
+ pdf_path: str
+ question: str
+
+class DocumentsResponse(Model):
+ learnings: str
+
+agent = Agent(
+ name="find_in_pdf",
+ seed="",
+ port=8001,
+ endpoint=["http://127.0.0.1:8001/submit"],
+)
+
+print("uAgent address: ", agent.address)
+summary_protocol = Protocol("Text Summariser")
+
+AGENT_2_FAISS = ""
+
+@agent.on_event("startup")
+async def on_startup(ctx: Context):
+ await ctx.send(
+ AGENT_2_FAISS,
+ DocumentUnderstanding(pdf_path="./a.pdf", question="What is the story about?"),
+ )
+
+@agent.on_message(model=DocumentsResponse)
+async def document_load(ctx: Context, sender: str, msg: DocumentsResponse):
+ ctx.logger.info(msg.learnings)
+
+
+agent.include(summary_protocol, publish_manifest=True)
+agent.run()
+
+
+```
+
+### Agent 2 - PDFQuestionAgent: takes a request and returns a result
+
+The PDFQuestionAgent receives the PDF path and the question from the first agent but does not split the PDF itself. Instead, this second agent sends a request to the third agent to split the PDF. Once the pages are returned, the second agent runs a FAISS similarity search over them.
+
+```python copy filename="pdf_question_agent.py"
+from langchain_community.document_loaders import PyPDFLoader
+from langchain_community.vectorstores import FAISS
+from langchain_community.docstore.in_memory import InMemoryDocstore
+from langchain_openai import OpenAIEmbeddings
+from uagents import Agent, Context, Protocol, Model
+from langchain_core.documents import Document
+from typing import List
+import os
+import uuid
+import faiss
+
+class PDF_Request(Model):
+ pdf_path: str
+ session: str
+
+class DocumentUnderstanding(Model):
+ pdf_path: str
+ question: str
+
+class PagesResponse(Model):
+ pages: List
+ session: str
+
+class DocumentsResponse(Model):
+ learnings: str
+
+faiss_pdf_agent = Agent(
+ name="faiss_pdf_agent",
+ seed="",
+ port=8002,
+ endpoint=["http://127.0.0.1:8002/submit"],
+)
+
+print("uAgent address: ", faiss_pdf_agent.address)
+faiss_protocol = Protocol("FAISS")
+
+RequestAgent = ""
+PDF_splitter_address = ""
+
+openai_api_key = os.environ["OPENAI_API_KEY"]
+embeddings = OpenAIEmbeddings(model="text-embedding-3-large")
+
+@faiss_pdf_agent.on_message(model=DocumentUnderstanding, replies=PDF_Request)
+async def document_load(ctx: Context, sender: str, msg: DocumentUnderstanding):
+ ctx.logger.info(msg)
+ ref = str(uuid.uuid4())
+ ctx.storage.set(ref, {"question": msg.question, "sender": sender})
+ await ctx.send(
+ PDF_splitter_address, PDF_Request(pdf_path=msg.pdf_path, session=ref)
+ )
+
+@faiss_pdf_agent.on_message(model=PagesResponse, replies=DocumentsResponse)
+async def document_understand(ctx: Context, sender: str, msg: PagesResponse):
+ index = faiss.IndexFlatL2(len(embeddings.embed_query("hello")))
+
+ vector_store = FAISS(
+ embedding_function=embeddings,
+ index=index,
+ docstore=InMemoryDocstore(),
+ index_to_docstore_id={},
+ )
+
+ documents = []
+ for page in msg.pages:
+ documents.append(
+ Document(page_content=page["page_content"], metadata=page["metadata"])
+ )
+
+ uuids = [str(uuid.uuid4()) for _ in range(len(documents))]
+
+ vector_store.add_documents(documents=documents, ids=uuids)
+
+ prev = ctx.storage.get(msg.session)
+
+ results = vector_store.similarity_search(
+ prev["question"],
+ k=2,
+ )
+
+ if len(results) > 0:
+ await ctx.send(
+ prev["sender"], DocumentsResponse(learnings=results[0].page_content)
+ )
+
+faiss_pdf_agent.include(faiss_protocol, publish_manifest=True)
+faiss_pdf_agent.run()
+
+```
+
+The core difference between this second agent and the other agents you have seen so far is that it has multiple `on_message` decorators. Your agent can have as many message handlers as you need.
+
+**PDFQuestionAgent** also defines every request/response model it needs to communicate with both **RequestAgent** and **PDFSplitAgent**.
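+
+As a short, self-contained sketch (the agent name, seed, port and models here are made up purely for illustration), an agent with two message handlers looks like this:
+
+```python copy
+from uagents import Agent, Context, Model
+
+class Question(Model):
+    text: str
+
+class Answer(Model):
+    text: str
+
+sketch_agent = Agent(
+    name="sketch_agent",
+    seed="sketch agent example seed",
+    port=8010,
+    endpoint=["http://127.0.0.1:8010/submit"],
+)
+
+# First handler: reacts to Question messages and replies with an Answer.
+@sketch_agent.on_message(model=Question, replies=Answer)
+async def handle_question(ctx: Context, sender: str, msg: Question):
+    await ctx.send(sender, Answer(text=f"You asked: {msg.text}"))
+
+# Second handler: reacts to Answer messages and simply logs them.
+@sketch_agent.on_message(model=Answer)
+async def handle_answer(ctx: Context, sender: str, msg: Answer):
+    ctx.logger.info(msg.text)
+
+sketch_agent.run()
+```
+
+Each handler is bound to a different `Model`, so the framework routes an incoming message to the handler whose model it matches.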
+
+### Agent 3 - PDFSplitAgent: validates the PDF, loads and returns the split
+
+The PDFSplitAgent receives the PDF path and splits the document using the `langchain_community` document loader `PyPDFLoader`. It then returns the list of pages to the second agent.
+
+```python copy filename="pdf_split_agent.py"
+from langchain_community.document_loaders import PyPDFLoader
+from uagents import Agent, Context, Protocol, Model
+from typing import List
+
+class PDF_Request(Model):
+ pdf_path: str
+ session: str
+
+class PagesResponse(Model):
+ pages: List
+ session: str
+
+pdf_loader_agent = Agent(
+ name="pdf_loader_agent",
+ seed="",
+ port=8003,
+ endpoint=["http://127.0.0.1:8003/submit"],
+)
+
+print("uAgent address: ", pdf_loader_agent.address)
+pdf_loader_protocol = Protocol("Text Summariser")
+
+@pdf_loader_agent.on_message(model=PDF_Request, replies=PagesResponse)
+async def document_load(ctx: Context, sender: str, msg: PDF_Request):
+ loader = PyPDFLoader(msg.pdf_path)
+ pages = loader.load_and_split()
+ await ctx.send(sender, PagesResponse(pages=pages, session=msg.session))
+
+pdf_loader_agent.include(pdf_loader_protocol, publish_manifest=True)
+pdf_loader_agent.run()
+
+```
+
+## Run the agents
+
+We need to run the agents in reverse order so that we can generate their addresses and then update the other agents with those addresses; the full set of fields to fill is sketched after these steps.
+
+Let's run **PDFSplitAgent**, and update **PDFQuestionAgent** with its address:
+
+Run: `poetry run python pdf_split_agent.py`
+
+Update `pdf_question_agent.py` script by filling the `PDF_splitter_address` field with the address of the third agent.
+
+Run `poetry run python pdf_question_agent.py`
+
+Update `request_agent.py` script by filling the `AGENT_2_FAISS` field with the address of the second agent.
+
+Run `poetry run python request_agent.py`
+
+Add the address of the first agent in the dedicated field `RequestAgent` within the script for the second agent, `pdf_question_agent.py`.
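+
+Pulled together, the fields to fill look roughly like this; the addresses below are placeholders, so use the ones printed by your own agents on startup:
+
+```python copy
+# In pdf_question_agent.py
+PDF_splitter_address = "agent1q..."  # address printed by pdf_split_agent.py
+RequestAgent = "agent1q..."          # address printed by request_agent.py
+
+# In request_agent.py
+AGENT_2_FAISS = "agent1q..."  # address printed by pdf_question_agent.py
+```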
+
+### Expected Output
+
+- `request_agent.py`:
+
+ ```
+ uAgent address: agent1qf4au6rzaauxhy2jze6v85rspgvredx9m42p0e0cukz0hv4dh2sqjuhujpp
+ INFO: [find_in_pdf]: Manifest published successfully: Text Summariser
+ INFO: [find_in_pdf]: Registering on almanac contract...
+ INFO: [find_in_pdf]: Registering on almanac contract...complete
+ INFO: [find_in_pdf]: Starting server on http://0.0.0.0:8001 (Press CTRL+C to quit)
+ INFO: [find_in_pdf]: GuidesAI AgentsGetting StartedWhat's an Agent?
+ Beginner Python
+ Agents - uAgents Framework
+ Introduction
+ Agents are autonomous software programs that can commuicate with eachother, they're
+ designed to solve problems alone, or as part of a multi agent system. Standardised
+ communication protocols allows agents to communicate eciently, aiding negotiation and
+ problem resolution. Unlike microservices or centralized systems, agents are reactive and self
+ reasoning. Agents are built using the uAgents framework, and with additional libraries are
+ party to the AI Engine.
+ Agents oer intercontevity for larger systems, where centralised systmes may have some
+ fault tolerance, multi agent systemsd allow for plug and play design; agents can be replaced
+ or upgraded in a live system without downtime. Agents can be built to only care for one
+ process or rule. In your systems, agents may have a totally independant network.
+ Agents aren't just agents, you can wrap LLMs and other models to create self reasoning, self
+ learning chat interfaces or services. This, is a simple as 12 lines of code link
+ ```
+
+- `pdf_question_agent.py`:
+
+ ```
+ INFO:faiss.loader:Loading faiss with AVX2 support.
+ INFO:faiss.loader:Successfully loaded faiss with AVX2 support.
+ uAgent address: agent1qt89fz44fp0nxvkpgfts4lm566lj8gs7qnlh7yz3lwz5f5scp7nrkcpt3qe
+ INFO: [faiss_pdf_agent]: Manifest published successfully: FAISS
+ INFO: [faiss_pdf_agent]: Registration on Almanac API successful
+ INFO: [faiss_pdf_agent]: Registering on almanac contract...
+ INFO: [faiss_pdf_agent]: Registering on almanac contract...complete
+ INFO: [faiss_pdf_agent]: Starting server on http://0.0.0.0:8002 (Press CTRL+C to quit)
+ INFO: [faiss_pdf_agent]: pdf_path='./a.pdf' question='What is the story about?'
+ INFO:httpx:HTTP Request: POST https://api.openai.com/v1/embeddings "HTTP/1.1 200 OK"
+ INFO:httpx:HTTP Request: POST https://api.openai.com/v1/embeddings "HTTP/1.1 200 OK"
+ INFO:httpx:HTTP Request: POST https://api.openai.com/v1/embeddings "HTTP/1.1 200 OK"
+ ```
+
+- `pdf_split_agent.py`:
+
+ ```
+ uAgent address: agent1qf4au6rzaauxhy2jze6v85rspgvredx9m42p0e0cukz0hv4dh2sqjuhujpp
+ INFO: [pdf_loader_agent]: Manifest published successfully: Text Summariser
+ INFO: [pdf_loader_agent]: Registering on almanac contract...
+ INFO: [pdf_loader_agent]: Registering on almanac contract...complete
+ INFO: [pdf_loader_agent]: Starting server on http://0.0.0.0:8003 (Press CTRL+C to quit)
+ ```
diff --git a/src/images/examples/crewAi-example-1.png b/src/images/examples/crewAi-example-1.png
new file mode 100644
index 000000000..e78bbcada
Binary files /dev/null and b/src/images/examples/crewAi-example-1.png differ
diff --git a/src/images/examples/crewAi-example-2.png b/src/images/examples/crewAi-example-2.png
new file mode 100644
index 000000000..f254e6983
Binary files /dev/null and b/src/images/examples/crewAi-example-2.png differ
diff --git a/src/images/guides/quickstart-with/langchain/multi-agent-workflow-simple.drawio.svg b/src/images/guides/quickstart-with/langchain/multi-agent-workflow-simple.drawio.svg
new file mode 100644
index 000000000..5b682936e
--- /dev/null
+++ b/src/images/guides/quickstart-with/langchain/multi-agent-workflow-simple.drawio.svg
@@ -0,0 +1,4 @@
+
+
+
+
\ No newline at end of file
diff --git a/src/svgs/crewai.svg b/src/svgs/crewai.svg
new file mode 100644
index 000000000..a0790f825
--- /dev/null
+++ b/src/svgs/crewai.svg
@@ -0,0 +1,15 @@
+
\ No newline at end of file
diff --git a/src/svgs/fastapi.svg b/src/svgs/fastapi.svg
new file mode 100644
index 000000000..e33adaf5d
--- /dev/null
+++ b/src/svgs/fastapi.svg
@@ -0,0 +1,3 @@
+
\ No newline at end of file
diff --git a/src/svgs/langchain.svg b/src/svgs/langchain.svg
new file mode 100644
index 000000000..aa7810c46
--- /dev/null
+++ b/src/svgs/langchain.svg
@@ -0,0 +1,5 @@
+
\ No newline at end of file