
Commit

Merge branch 'main' into feature/pxc/env
pan-x-c committed Oct 8, 2024
2 parents 7d90257 + 9d7af22 commit 7e404f1
Showing 12 changed files with 153 additions and 93 deletions.
4 changes: 2 additions & 2 deletions .github/workflows/unittest.yml
@@ -24,9 +24,9 @@ jobs:
- name: Install Minimal Dependencies
run: |
pip install -q -e .
- name: Run import tests
- name: Run minimal import tests
run: |
python -c "import agentscope; print(agentscope.__version__)"
python tests/minimal.py
- name: Install Full Dependencies
run: |
pip install -q -e .[full]
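The new `tests/minimal.py` itself is not part of this diff. As a rough sketch (an assumption, not the actual file), a minimal import smoke test along these lines would cover the same check as the inline command it replaces:

```python
# Hypothetical sketch of a minimal import smoke test; tests/minimal.py is not
# shown in this diff, so the checks below are illustrative assumptions.
import importlib


def main() -> None:
    # Verify the package imports cleanly with only the minimal dependencies.
    agentscope = importlib.import_module("agentscope")
    print("agentscope version:", agentscope.__version__)


if __name__ == "__main__":
    main()
```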
70 changes: 34 additions & 36 deletions README.md
@@ -35,7 +35,6 @@ Start building LLM-empowered multi-agent applications in an easier way.
|----------------------------------------------------------------------------------------------------------------------------------|-----------------------------------------------------------------------------------------------------------------------------------|
| <img src="https://gw.alicdn.com/imgextra/i1/O1CN01hhD1mu1Dd3BWVUvxN_!!6000000000238-2-tps-400-400.png" width="100" height="100"> | <img src="https://img.alicdn.com/imgextra/i2/O1CN01tuJ5971OmAqNg9cOw_!!6000000001747-0-tps-444-460.jpg" width="100" height="100"> |


----

## News
@@ -187,7 +186,6 @@ the following libraries.
- <img src="https://img.alicdn.com/imgextra/i3/O1CN01SFL0Gu26nrQBFKXFR_!!6000000007707-2-tps-500-500.png" alt="new" width="30" height="30"/>[Conversation with CodeAct Agent](https://github.com/modelscope/agentscope/blob/main/examples/conversation_with_codeact_agent/)
- <img src="https://img.alicdn.com/imgextra/i3/O1CN01SFL0Gu26nrQBFKXFR_!!6000000007707-2-tps-500-500.png" alt="new" width="30" height="30"/>[Conversation with Router Agent](https://github.com/modelscope/agentscope/blob/main/examples/conversation_with_router_agent/)


- Game
- [Gomoku](https://github.com/modelscope/agentscope/blob/main/examples/game_gomoku)
- [Werewolf](https://github.com/modelscope/agentscope/blob/main/examples/game_werewolf)
@@ -236,7 +234,6 @@ optional dependencies. Full list of optional dependencies refers to
Taking distribution mode as an example, you can install its dependencies
as follows:


#### On Windows

```bash
@@ -247,14 +244,14 @@ pip install agentscope[distribute]
```

#### On Mac & Linux

```bash
# From source
pip install -e .\[distribute\]
# From pypi
pip install agentscope\[distribute\]
```


## Quick Start

### Configuration
Expand Down Expand Up @@ -391,35 +388,36 @@ pre-commit install

Please refer to our [Contribution Guide](https://modelscope.github.io/agentscope/en/tutorial/302-contribute.html) for more details.

## References

If you find our work helpful for your research or application, please
cite [our paper](https://arxiv.org/abs/2402.14034):

```
@article{agentscope,
author = {Dawei Gao and
Zitao Li and
Xuchen Pan and
Weirui Kuang and
Zhijian Ma and
Bingchen Qian and
Fei Wei and
Wenhao Zhang and
Yuexiang Xie and
Daoyuan Chen and
Liuyi Yao and
Hongyi Peng and
Ze Yu Zhang and
Lin Zhu and
Chen Cheng and
Hongzhu Shi and
Yaliang Li and
Bolin Ding and
Jingren Zhou},
title = {AgentScope: A Flexible yet Robust Multi-Agent Platform},
journal = {CoRR},
volume = {abs/2402.14034},
year = {2024},
}
```
## Publications

If you find our work helpful for your research or application, please cite our papers.

1. [AgentScope: A Flexible yet Robust Multi-Agent Platform](https://arxiv.org/abs/2402.14034)

```
@article{agentscope,
author = {Dawei Gao and
Zitao Li and
Xuchen Pan and
Weirui Kuang and
Zhijian Ma and
Bingchen Qian and
Fei Wei and
Wenhao Zhang and
Yuexiang Xie and
Daoyuan Chen and
Liuyi Yao and
Hongyi Peng and
Ze Yu Zhang and
Lin Zhu and
Chen Cheng and
Hongzhu Shi and
Yaliang Li and
Bolin Ding and
Jingren Zhou},
title = {AgentScope: A Flexible yet Robust Multi-Agent Platform},
journal = {CoRR},
volume = {abs/2402.14034},
year = {2024},
}
```
68 changes: 34 additions & 34 deletions README_ZH.md
@@ -35,8 +35,6 @@
|---------|----------|
| <img src="https://gw.alicdn.com/imgextra/i1/O1CN01hhD1mu1Dd3BWVUvxN_!!6000000000238-2-tps-400-400.png" width="100" height="100"> | <img src="https://img.alicdn.com/imgextra/i2/O1CN01tuJ5971OmAqNg9cOw_!!6000000001747-0-tps-444-460.jpg" width="100" height="100"> |



----

## News
@@ -56,7 +54,6 @@
<img src="https://github.com/user-attachments/assets/dfffbd1e-1fe7-49ee-ac11-902415b2b0d6" width="45%" alt="agentscope-logo">
</h5>


- <img src="https://img.alicdn.com/imgextra/i3/O1CN01SFL0Gu26nrQBFKXFR_!!6000000007707-2-tps-500-500.png" alt="new" width="30" height="30"/>**[2024-07-15]** The Mixture of Agents algorithm has been added to AgentScope. See the [MoA example](https://github.com/modelscope/agentscope/blob/main/examples/conversation_mixture_of_agents) for a usage example

- **[2024-06-14]** A new prompt tuning module is now available in AgentScope to help developers generate and optimize agents' system prompts. For more details and usage examples, please refer to the AgentScope [tutorial](https://modelscope.github.io/agentscope/en/tutorial/209-prompt_opt.html)
@@ -232,6 +229,7 @@ pip install agentscope[distribute]
```

#### On Mac & Linux

```bash
# From source
pip install -e .\[distribute\]
@@ -362,34 +360,36 @@ pre-commit install

Please refer to our [Contribution Guide](https://modelscope.github.io/agentscope/zh_CN/tutorial/302-contribute.html) for more details.

## References

If you find our work helpful for your research or application, please cite [our paper](https://arxiv.org/abs/2402.14034):

```
@article{agentscope,
author = {Dawei Gao and
Zitao Li and
Xuchen Pan and
Weirui Kuang and
Zhijian Ma and
Bingchen Qian and
Fei Wei and
Wenhao Zhang and
Yuexiang Xie and
Daoyuan Chen and
Liuyi Yao and
Hongyi Peng and
Zeyu Zhang and
Lin Zhu and
Chen Cheng and
Hongzhu Shi and
Yaliang Li and
Bolin Ding and
Jingren Zhou},
title = {AgentScope: A Flexible yet Robust Multi-Agent Platform},
journal = {CoRR},
volume = {abs/2402.14034},
year = {2024},
}
```
## Publications

If you find our work helpful for your research or application, please cite the following papers:

1. [AgentScope: A Flexible yet Robust Multi-Agent Platform](https://arxiv.org/abs/2402.14034)

```
@article{agentscope,
author = {Dawei Gao and
Zitao Li and
Xuchen Pan and
Weirui Kuang and
Zhijian Ma and
Bingchen Qian and
Fei Wei and
Wenhao Zhang and
Yuexiang Xie and
Daoyuan Chen and
Liuyi Yao and
Hongyi Peng and
Ze Yu Zhang and
Lin Zhu and
Chen Cheng and
Hongzhu Shi and
Yaliang Li and
Bolin Ding and
Jingren Zhou},
title = {AgentScope: A Flexible yet Robust Multi-Agent Platform},
journal = {CoRR},
volume = {abs/2402.14034},
year = {2024},
}
```
2 changes: 1 addition & 1 deletion docs/sphinx_doc/en/source/tutorial/104-usecase.md
@@ -291,7 +291,7 @@ With the game logic and agents set up, you're ready to run the Werewolf game. By

```bash
cd examples/game_werewolf
python main.py # Assuming the pipeline is implemented in main.py
python werewolf.py # Assuming the pipeline is implemented in werewolf.py
```

It is recommended that you start the game in [AgentScope Studio](https://modelscope.github.io/agentscope/en/tutorial/209-gui.html), where you
4 changes: 2 additions & 2 deletions docs/sphinx_doc/zh_CN/source/tutorial/104-usecase.md
@@ -170,7 +170,7 @@ for i in range(1, MAX_GAME_ROUND + 1):
# Night phase: werewolves discuss
hint = HostMsg(content=Prompts.to_wolves.format(n2s(wolves)))
with msghub(wolves, announcement=hint) as hub:
set_parsers(wolves, Prompts.wolves_discuss_parser)
set_parsers(wolves, Prompts.wolves_discuss_parser)
for _ in range(MAX_WEREWOLF_DISCUSSION_ROUND):
x = sequentialpipeline(wolves)
if x.metadata.get("finish_discussion", False):
@@ -295,7 +295,7 @@

```bash
cd examples/game_werewolf
python main.py # Assuming the pipeline is implemented in main.py
python werewolf.py # Assuming the pipeline is implemented in werewolf.py
```

It is recommended that you start the game in [AgentScope Studio](https://modelscope.github.io/agentscope/zh_CN/tutorial/209-gui.html), where you will see the output shown below at the corresponding link.
2 changes: 1 addition & 1 deletion docs/sphinx_doc/zh_CN/source/tutorial/209-gui.md
@@ -70,7 +70,7 @@ agentscope.init(
# ...
project="xxx",
name="xxx",
studio_url="http://127.0.0.15000" # AgentScope Studio 的 URL
studio_url="http://127.0.0.1:5000" # AgentScope Studio 的 URL
)
```

33 changes: 19 additions & 14 deletions examples/paper_llm_based_algorithm/README.md
@@ -1,6 +1,5 @@
# LLM-based algorithms


This folder contains the source code for reproducing the experiment results in our arXiv preprint "On the Design and Analysis of LLM-Based Algorithms".

Our work initiates a formal investigation into the design and analysis of LLM-based algorithms,
@@ -11,7 +10,6 @@ Within this folder, you can find our implementation for the key abstractions,
the LLM-based algorithms in four concrete examples,
and the experiments for validating our analysis in the manuscript.


## Tested Models

The following models have been tested, which are also listed in `model_configs.json`:
@@ -20,26 +18,25 @@ GPT-3.5 Turbo,
Llama3-8B (with ollama),
Llama3-70B (with vLLM).


## Prerequisites


1. Install AgentScope from source with `pip`, according to the [official instruction](../../README.md).
2. Install matplotlib: `pip install matplotlib`.
3. Change directory: `cd examples/llm_based_algorithm`.
3. Change directory: `cd examples/paper_llm_based_algorithm`.
4. Set up LLM model configs in `model_configs.json`.



## Usage

### Run experiments

To run experiments for a certain task:

```bash
bash ./scripts/exp_{task}.sh
```

or copy one of the scripts therein, modify the parameters, and run it in the terminal, for example:

```bash
python3 run_exp_single_variable.py \
--task counting \
@@ -52,6 +49,7 @@ python3 run_exp_single_variable.py \
```

Parameters:

- `task`: name of the task, {"counting", "sorting", "retrieval", "retrieval_no_needle", "rag"}.
- `llm_model`: name of the LLM model, i.e. `config_name` in `model_configs.json`.
- `variable_name`: "n" for problem size, or "m" for sub-task size.
@@ -60,30 +58,37 @@ Parameters:
- `save_results`: if `True`, experiment results will be saved to `./out`; otherwise, results will be plotted and shown at the end of the experiment, and won't be saved.
- `ntrials`: number of independent trials for each experiment config, i.e. each entry of `lst_variable`.


### Plot results

To plot experiment results that have been saved:

```bash
bash ./scripts/plot_{task}.sh
```

or copy one of the scripts therein and run it in the terminal, for example:

```bash
python3 plot_exp_results.py \
--folder ./out/counting/exp_counting_vary_n_model_ollama_llama3_8b-2024-06-19-11-11-13-kkwrhc
```

The path to the experiment results needs to be replaced with the actual one generated during your own experiment.
The generated figures will be saved to the same folder.


## Reference

For more details, please refer to our arXiv preprint:

```
@article{chen2024llmbasedalgorithms,
title={On the Design and Analysis of LLM-Based Algorithms},
author={Yanxi Chen and Yaliang Li and Bolin Ding and Jingren Zhou},
year={2024},
@article{llm_based_algorithms,
author = {Yanxi Chen and
Yaliang Li and
Bolin Ding and
Jingren Zhou},
title = {On the Design and Analysis of LLM-Based Algorithms},
journal = {CoRR},
volume = {abs/2407.14788},
year = {2024},
}
```

2 changes: 1 addition & 1 deletion src/agentscope/models/ollama_model.py
@@ -359,7 +359,7 @@ def format(
system_content = "\n".join(system_content_template)

system_message = {
"role": "system",
"role": "user",
"content": system_content,
}

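The hunk above changes only the `role` attached to the merged system content: the Ollama formatter now emits it as a `user` message instead of a `system` message (the matching test update appears in `tests/format_test.py` below). A minimal standalone sketch of that behavior, not the library's actual `format` implementation:

```python
# Illustrative sketch of the behavior changed in the hunk above: the merged
# system content is now wrapped as a "user" message rather than a "system" one.
from typing import Dict


def wrap_system_content(system_content: str) -> Dict[str, str]:
    return {
        "role": "user",  # was "system" before this commit
        "content": system_content,
    }


message = wrap_system_content("You are a helpful assistant")
assert message["role"] == "user"
```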
3 changes: 2 additions & 1 deletion src/agentscope/models/post_model.py
@@ -8,7 +8,6 @@
import requests
from loguru import logger

from .gemini_model import GeminiChatWrapper
from .openai_model import OpenAIChatWrapper
from .model import ModelWrapperBase, ModelResponse
from ..constants import _DEFAULT_MAX_RETRIES
@@ -221,6 +220,8 @@ def format(

# Gemini
elif model_name and model_name.startswith("gemini"):
from .gemini_model import GeminiChatWrapper

return GeminiChatWrapper.format(*args)

# Include DashScope, ZhipuAI, Ollama, the other models supported by
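This hunk moves the `GeminiChatWrapper` import from module level into the `gemini` branch of `format`, so the Gemini wrapper (and its dependencies) is only imported when a Gemini model name is actually matched, keeping the import of `post_model.py` itself lightweight. A generic sketch of the deferred-import pattern, not the actual `post_model.py` code:

```python
# Generic sketch of the deferred ("lazy") import used in the hunk above;
# illustrative only, not the actual post_model.py implementation.
import importlib
from typing import Any


def format_for_model(model_name: str, *args: Any) -> Any:
    if model_name and model_name.startswith("gemini"):
        # Import only when a Gemini model is selected, so merely importing
        # this module does not require the Gemini dependencies.
        gemini_model = importlib.import_module("agentscope.models.gemini_model")
        return gemini_model.GeminiChatWrapper.format(*args)
    raise NotImplementedError(f"No formatter sketched for {model_name!r}")
```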
2 changes: 1 addition & 1 deletion tests/format_test.py
@@ -283,7 +283,7 @@ def test_ollama_chat(self) -> None:
# correct format
ground_truth = [
{
"role": "system",
"role": "user",
"content": (
"You are a helpful assistant\n"
"\n"