Merge remote-tracking branch 'upstream/main'
garyzhang99 committed Jul 2, 2024
2 parents dc7afcc + 2641e9b commit 672dc28
Showing 41 changed files with 447 additions and 155 deletions.
2 changes: 1 addition & 1 deletion README.md
@@ -35,7 +35,7 @@ Welcome to join our community on

- <img src="https://img.alicdn.com/imgextra/i3/O1CN01SFL0Gu26nrQBFKXFR_!!6000000007707-2-tps-500-500.png" alt="new" width="30" height="30"/>**[2024-06-11]** The RAG functionality is available for agents in **AgentScope** now! [**A quick introduction to RAG in AgentScope**](https://modelscope.github.io/agentscope/en/tutorial/210-rag.html) can help you equip your agent with external knowledge!

- <img src="https://img.alicdn.com/imgextra/i3/O1CN01SFL0Gu26nrQBFKXFR_!!6000000007707-2-tps-500-500.png" alt="new" width="30" height="30"/>**[2024-06-09]** We release **AgentScope** v0.0.5 now! In this new version, [**AgentScope Workstation**](https://modelscope.github.io/agentscope/en/tutorial/209-gui.html) is open-sourced with the refactored [**AgentScope Studio**](https://modelscope.github.io/agentscope/en/tutorial/209-gui.html)!
- <img src="https://img.alicdn.com/imgextra/i3/O1CN01SFL0Gu26nrQBFKXFR_!!6000000007707-2-tps-500-500.png" alt="new" width="30" height="30"/>**[2024-06-09]** We release **AgentScope** v0.0.5 now! In this new version, [**AgentScope Workstation**](https://modelscope.github.io/agentscope/en/tutorial/209-gui.html) (the online version is running on [agentscope.io](https://agentscope.io)) is open-sourced with the refactored [**AgentScope Studio**](https://modelscope.github.io/agentscope/en/tutorial/209-gui.html)!

<h5 align="center">
<img src="https://img.alicdn.com/imgextra/i1/O1CN01RXAVVn1zUtjXVvuqS_!!6000000006718-1-tps-3116-1852.gif" width="600" alt="agentscope-logo">
2 changes: 1 addition & 1 deletion README_ZH.md
@@ -32,7 +32,7 @@

- <img src="https://img.alicdn.com/imgextra/i3/O1CN01SFL0Gu26nrQBFKXFR_!!6000000007707-2-tps-500-500.png" alt="new" width="30" height="30"/>**[2024-06-11]** RAG functionality has now been integrated into **AgentScope**! Check out [**A brief introduction to RAG in AgentScope**](https://modelscope.github.io/agentscope/en/tutorial/210-rag.html) to equip your agent with external knowledge!

- <img src="https://img.alicdn.com/imgextra/i3/O1CN01SFL0Gu26nrQBFKXFR_!!6000000007707-2-tps-500-500.png" alt="new" width="30" height="30"/>**[2024-06-09]** AgentScope v0.0.5 has been released! In this new version, we open-sourced [**AgentScope Workstation**](https://modelscope.github.io/agentscope/en/tutorial/209-gui.html)
- <img src="https://img.alicdn.com/imgextra/i3/O1CN01SFL0Gu26nrQBFKXFR_!!6000000007707-2-tps-500-500.png" alt="new" width="30" height="30"/>**[2024-06-09]** AgentScope v0.0.5 has been released! In this new version, we open-sourced [**AgentScope Workstation**](https://modelscope.github.io/agentscope/en/tutorial/209-gui.html) (the online version is available at [agentscope.io](https://agentscope.io))

<h5 align="center">
<img src="https://img.alicdn.com/imgextra/i1/O1CN01RXAVVn1zUtjXVvuqS_!!6000000006718-1-tps-3116-1852.gif" width="600" alt="agentscope-logo">
6 changes: 5 additions & 1 deletion docs/sphinx_doc/en/source/tutorial/104-usecase.md
@@ -46,9 +46,13 @@ To implement your own agent, you need to inherit `AgentBase` and implement the `reply` function.

```python
from agentscope.agents import AgentBase
from agentscope.message import Msg


from typing import Optional, Union, Sequence

class MyAgent(AgentBase):
def reply(self, x):
def reply(self, x: Optional[Union[Msg, Sequence[Msg]]] = None) -> Msg:
# Do something here
...
return x
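
# --- Usage sketch (illustrative, not part of the original tutorial) ---
# AgentBase implements __call__, which dispatches to reply(), so the custom
# agent can be driven like a function. The agent name below is arbitrary.
agent = MyAgent(name="assistant")
msg = Msg("user", "Hello!", role="user")
print(agent(msg))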
10 changes: 5 additions & 5 deletions docs/sphinx_doc/en/source/tutorial/201-agent.md
@@ -39,15 +39,15 @@ class AgentBase(Operator):
) -> None:

# ... [code omitted for brevity]
def observe(self, x: Union[dict, Sequence[dict]]) -> None:
def observe(self, x: Union[Msg, Sequence[Msg]]) -> None:
# An optional method for updating the agent's internal state based on
# messages it has observed. This method can be used to enrich the
# agent's understanding and memory without producing an immediate
# response.
if self.memory:
self.memory.add(x)

def reply(self, x: dict = None) -> dict:
def reply(self, x: Optional[Union[Msg, Sequence[Msg]]] = None) -> Msg:
# The core method to be implemented by custom agents. It defines the
# logic for processing an input message and generating a suitable
# response.
@@ -86,7 +86,7 @@ Below, we provide examples of how to configure various agents from the AgentPool:
* **Reply Method**: The `reply` method is where the main logic for processing the input *message* and generating a response resides.

```python
def reply(self, x: dict = None) -> dict:
def reply(self, x: Optional[Union[Msg, Sequence[Msg]]] = None) -> Msg:
# Additional processing steps can occur here

# Record the input if needed
@@ -142,9 +142,9 @@ service_bot = DialogAgent(**dialog_agent_config)
```python
def reply(
self,
x: dict = None,
x: Optional[Union[Msg, Sequence[Msg]]] = None,
required_keys: Optional[Union[list[str], str]] = None,
) -> dict:
) -> Msg:
# Check if there is initial data to be added to memory
if self.memory:
self.memory.add(x)
2 changes: 1 addition & 1 deletion docs/sphinx_doc/en/source/tutorial/203-parser.md
@@ -211,7 +211,7 @@ In AgentScope, we achieve post-processing by calling the `to_content`, `to_memory`, and `to_metadata` methods
```python
# ...
def reply(x: dict = None) -> None:
def reply(self, x: Optional[Union[Msg, Sequence[Msg]]] = None) -> Msg:
# ...
res = self.model(prompt, parse_func=self.parser.parse)
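        # --- Sketch of the post-processing (assumed flow, not the tutorial's exact
        # code; assumes `Msg` is imported and `res.parsed` holds the parsed dict) ---
        # to_content() selects what other agents see, to_memory() what gets stored,
        # and to_metadata() the structured fields (e.g. for control flow).
        msg = Msg(
            self.name,
            content=self.parser.to_content(res.parsed),
            role="assistant",
            metadata=self.parser.to_metadata(res.parsed),
        )
        if self.memory:
            self.memory.add(
                Msg(self.name, self.parser.to_memory(res.parsed), role="assistant"),
            )
        return msg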
5 changes: 4 additions & 1 deletion docs/sphinx_doc/en/source/tutorial/204-service.md
@@ -262,6 +262,9 @@ import json
import inspect
from agentscope.service import ServiceResponse
from agentscope.agents import AgentBase
from agentscope.message import Msg

from typing import Optional, Union, Sequence


def create_file(file_path: str, content: str = "") -> ServiceResponse:
@@ -282,7 +285,7 @@ def create_file(file_path: str, content: str = "") -> ServiceResponse:
class YourAgent(AgentBase):
# ... [omitted for brevity]

def reply(self, x: dict = None) -> dict:
def reply(self, x: Optional[Union[Msg, Sequence[Msg]]] = None) -> Msg:
# ... [omitted for brevity]

# construct a prompt to ask the agent to provide the parameters in JSON format
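        # --- Sketch of the subsequent flow (assumed, not the tutorial's exact code) ---
        # Ask the model for the arguments as a JSON object, parse them, and invoke
        # the service function. The instruction text below is illustrative only.
        prompt = self.model.format(
            Msg(
                "system",
                "Extract the arguments for create_file from the request and answer "
                'with a JSON object such as {"file_path": "...", "content": "..."}.',
                role="system",
            ),
            x,
        )
        response = self.model(prompt).text

        kwargs = json.loads(response)  # assumes the model returned valid JSON
        service_response = create_file(**kwargs)
        return Msg(self.name, str(service_response), role="assistant")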
4 changes: 3 additions & 1 deletion docs/sphinx_doc/en/source/tutorial/209-prompt_opt.md
@@ -397,6 +397,8 @@ from agentscope.agents import AgentBase
from agentscope.prompt import SystemPromptOptimizer
from agentscope.message import Msg

from typing import Optional, Union, Sequence

class MyAgent(AgentBase):
def __init__(
self,
@@ -411,7 +413,7 @@ class MyAgent(AgentBase):
# or model_or_model_config_name=self.model
)

def reply(self, x: dict = None) -> dict:
def reply(self, x: Optional[Union[Msg, Sequence[Msg]]] = None) -> Msg:
self.memory.add(x)

prompt = self.model.format(
105 changes: 105 additions & 0 deletions docs/sphinx_doc/en/source/tutorial/210-rag.md
@@ -190,6 +190,111 @@ A RAG agent is an agent that can generate answers based on the retrieved knowledge.
Your agent will be equipped with a list of knowledge bases according to the `knowledge_id_list`.
In your agent's `reply` function, you can decide how to use the retrieved content, and even update and refresh the index, as sketched below.
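
For illustration, here is a minimal sketch of such a `reply` function. It assumes the agent exposes its equipped knowledge objects as `self.knowledge_list` and that each of them provides a `retrieve` method returning text chunks; these names are assumptions for the sketch, so check the built-in `LlamaIndexAgent` for the exact interface and retrieval parameters.

```python
from typing import Optional, Union, Sequence

from agentscope.agents import AgentBase
from agentscope.message import Msg


class MyRAGAgent(AgentBase):
    def reply(self, x: Optional[Union[Msg, Sequence[Msg]]] = None) -> Msg:
        if self.memory:
            self.memory.add(x)

        query = x.content if isinstance(x, Msg) else ""

        # Retrieve from every equipped knowledge object.
        # NOTE: `self.knowledge_list` and `retrieve()` are assumptions here;
        # the real attribute and method names may differ.
        retrieved_chunks = []
        for knowledge in self.knowledge_list:
            retrieved_chunks.extend(knowledge.retrieve(query))

        # Combine the retrieved context with the query and call the model.
        prompt = self.model.format(
            Msg("system", self.sys_prompt, role="system"),
            Msg(
                "system",
                "Retrieved context:\n"
                + "\n".join(str(chunk) for chunk in retrieved_chunks),
                role="system",
            ),
            x,
        )
        response_text = self.model(prompt).text

        msg = Msg(self.name, response_text, role="assistant")
        self.speak(msg)
        if self.memory:
            self.memory.add(msg)
        return msg
```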

## (Optional) Setting up a local embedding model service

For those who are interested in setting up a local embedding service, we provide the following example based on the
`sentence_transformers` package, a popular package specialized for embedding models (built on the `transformers` package and compatible with both HuggingFace and ModelScope models).
In this example, we use one of the SOTA embedding models, `gte-Qwen2-7B-instruct`.

* Step 1: Follow the instructions on [HuggingFace](https://huggingface.co/Alibaba-NLP/gte-Qwen2-7B-instruct) or [ModelScope](https://www.modelscope.cn/models/iic/gte_Qwen2-7B-instruct) to download the embedding model.
(For those who cannot access HuggingFace directly, you may want to use a HuggingFace mirror, either by running the bash command
`export HF_ENDPOINT=https://hf-mirror.com` or by adding the line `os.environ["HF_ENDPOINT"] = "https://hf-mirror.com"` to your Python code.)
* Step 2: Set up the server. The following code is for reference.

```python
import datetime
import argparse

from flask import Flask
from flask import request
from sentence_transformers import SentenceTransformer

def create_timestamp(format_: str = "%Y-%m-%d %H:%M:%S") -> str:
"""Get current timestamp."""
return datetime.datetime.now().strftime(format_)

app = Flask(__name__)

@app.route("/embedding/", methods=["POST"])
def get_embedding() -> dict:
"""Receive post request and return response"""
json = request.get_json()

inputs = json.pop("inputs")

global model

if isinstance(inputs, str):
inputs = [inputs]

embeddings = model.encode(inputs)

return {
"data": {
"completion_tokens": 0,
"messages": {},
"prompt_tokens": 0,
"response": {
"data": [
{
"embedding": emb.astype(float).tolist(),
}
for emb in embeddings
],
"created": "",
"id": create_timestamp(),
"model": "flask_model",
"object": "text_completion",
"usage": {
"completion_tokens": 0,
"prompt_tokens": 0,
"total_tokens": 0,
},
},
"total_tokens": 0,
"username": "",
},
}

if __name__ == "__main__":
parser = argparse.ArgumentParser()
parser.add_argument("--model_name_or_path", type=str, required=True)
parser.add_argument("--device", type=str, default="auto")
parser.add_argument("--port", type=int, default=8000)
args = parser.parse_args()

    print("Setting up the embedding model...")
    # Pass the --device flag through; SentenceTransformer auto-detects the
    # device when it is given None.
    model = SentenceTransformer(
        args.model_name_or_path,
        device=None if args.device == "auto" else args.device,
    )

app.run(port=args.port)
```

* Step 3: Start the server.
```bash
python setup_ms_service.py --model_name_or_path ${PATH_TO_gte_Qwen2_7B_instruct}
```


Test whether the embedding service is running successfully:
```python
from agentscope.models.post_model import PostAPIEmbeddingWrapper


model = PostAPIEmbeddingWrapper(
config_name="test_config",
api_url="http://127.0.0.1:8000/embedding/",
json_args={
"max_length": 4096,
"temperature": 0.5
}
)

print(model("testing"))
```

[[Back to the top]](#210-rag-en)

7 changes: 6 additions & 1 deletion docs/sphinx_doc/zh_CN/source/tutorial/104-usecase.md
@@ -47,9 +47,14 @@

```python
from agentscope.agents import AgentBase
from agentscope.message import Msg

from typing import Optional, Union, Sequence


class MyAgent(AgentBase):
def reply(self, x):

def reply(self, x: Optional[Union[Msg, Sequence[Msg]]] = None) -> Msg:
# Do something here
...
return x
8 changes: 4 additions & 4 deletions docs/sphinx_doc/zh_CN/source/tutorial/201-agent.md
@@ -48,7 +48,7 @@ class AgentBase(Operator):
if self.memory:
self.memory.add(x)

def reply(self, x: dict = None) -> dict:
def reply(self, x: Optional[Union[Msg, Sequence[Msg]]] = None) -> Msg:
# The core method to be implemented by custom agents. It defines the
# logic for processing an input message and generating a suitable
# response.
@@ -87,7 +87,7 @@ class AgentBase(Operator):
* **Reply Method**: The `reply` method is where the main logic for processing the input message and generating a response resides.

```python
def reply(self, x: dict = None) -> dict:
def reply(self, x: Optional[Union[Msg, Sequence[Msg]]] = None) -> Msg:
# Additional processing steps can occur here

# Record the input if needed
@@ -143,9 +143,9 @@ service_bot = DialogAgent(**dialog_agent_config)
```python
def reply(
self,
x: dict = None,
x: Optional[Union[Msg, Sequence[Msg]]] = None,
required_keys: Optional[Union[list[str], str]] = None,
) -> dict:
) -> Msg:
# Check if there is initial data to be added to memory
if self.memory:
self.memory.add(x)
2 changes: 1 addition & 1 deletion docs/sphinx_doc/zh_CN/source/tutorial/203-parser.md
@@ -209,7 +209,7 @@ In AgentScope, we perform post-processing by calling the `to_content`, `to_memory`, and `to_metadata` methods
```python
# ...
def reply(x: dict = None) -> None:
def reply(self, x: Optional[Union[Msg, Sequence[Msg]]] = None) -> Msg:
# ...
res = self.model(prompt, parse_func=self.parser.parse)
2 changes: 1 addition & 1 deletion docs/sphinx_doc/zh_CN/source/tutorial/204-service.md
@@ -262,7 +262,7 @@ def create_file(file_path: str, content: str = "") -> ServiceResponse:
class YourAgent(AgentBase):
# ... [code omitted for brevity]

def reply(self, x: dict = None) -> dict:
def reply(self, x: Optional[Union[Msg, Sequence[Msg]]] = None) -> Msg:
# ... [code omitted for brevity]

# construct a prompt asking the agent to provide the parameters in JSON format
2 changes: 1 addition & 1 deletion docs/sphinx_doc/zh_CN/source/tutorial/209-prompt_opt.md
@@ -392,7 +392,7 @@ class MyAgent(AgentBase):
# or model_or_model_config_name=self.model
)

def reply(self, x: dict = None) -> dict:
def reply(self, x: Optional[Union[Msg, Sequence[Msg]]] = None) -> Msg:
self.memory.add(x)

prompt = self.model.format(