diff --git a/README.md b/README.md index a6822c344..17606b654 100644 --- a/README.md +++ b/README.md @@ -49,11 +49,11 @@ Start building LLM-empowered multi-agent applications in an easier way. - new**[2024-07-15]** AgentScope has implemented the Mixture-of-Agents algorithm. Refer to our [MoA example](https://github.com/modelscope/agentscope/blob/main/examples/conversation_mixture_of_agents) for more details. -- new**[2024-06-14]** A new prompt tuning module is available in AgentScope to help developers generate and optimize the agents' system prompts! Refer to our [tutorial](https://modelscope.github.io/agentscope/en/tutorial/209-prompt_opt.html) for more details! +- **[2024-06-14]** A new prompt tuning module is available in AgentScope to help developers generate and optimize the agents' system prompts! Refer to our [tutorial](https://modelscope.github.io/agentscope/en/tutorial/209-prompt_opt.html) for more details! -- new**[2024-06-11]** The RAG functionality is available for agents in **AgentScope** now! [**A quick introduction to RAG in AgentScope**](https://modelscope.github.io/agentscope/en/tutorial/210-rag.html) can help you equip your agent with external knowledge! +- **[2024-06-11]** The RAG functionality is available for agents in **AgentScope** now! [**A quick introduction to RAG in AgentScope**](https://modelscope.github.io/agentscope/en/tutorial/210-rag.html) can help you equip your agent with external knowledge! -- new**[2024-06-09]** We release **AgentScope** v0.0.5 now! In this new version, [**AgentScope Workstation**](https://modelscope.github.io/agentscope/en/tutorial/209-gui.html) (the online version is running on [agentscope.io](https://agentscope.io)) is open-sourced with the refactored [**AgentScope Studio**](https://modelscope.github.io/agentscope/en/tutorial/209-gui.html)! +- **[2024-06-09]** We release **AgentScope** v0.0.5 now! In this new version, [**AgentScope Workstation**](https://modelscope.github.io/agentscope/en/tutorial/209-gui.html) (the online version is running on [agentscope.io](https://agentscope.io)) is open-sourced with the refactored [**AgentScope Studio**](https://modelscope.github.io/agentscope/en/tutorial/209-gui.html)! - **[2024-05-24]** We are pleased to announce that features related to the **AgentScope Workstation** will soon be open-sourced! The online website services are temporarily offline. The online website service will be upgraded and back online shortly. Stay tuned... @@ -67,7 +67,7 @@ Start building LLM-empowered multi-agent applications in an easier way. - **[2024-04-30]** We release **AgentScope** v0.0.4 now! -- **[2024-04-27]** [AgentScope Workstation](https://agentscope.aliyun.com/) is now online! You are welcome to try building your multi-agent application simply with our *drag-and-drop platform* and ask our *copilot* questions about AgentScope! +- **[2024-04-27]** [AgentScope Workstation](https://agentscope.io/) is now online! You are welcome to try building your multi-agent application simply with our *drag-and-drop platform* and ask our *copilot* questions about AgentScope! - **[2024-04-19]** AgentScope supports Llama3 now! We provide [scripts](https://github.com/modelscope/agentscope/blob/main/examples/model_llama3) and example [model configuration](https://github.com/modelscope/agentscope/blob/main/examples/model_llama3) for quick set-up. Feel free to try llama3 in our examples! @@ -96,7 +96,7 @@ to build multi-agent applications with large-scale models. 
It features three high-level capabilities: - 🤝 **Easy-to-Use**: Designed for developers, with [fruitful components](https://modelscope.github.io/agentscope/en/tutorial/204-service.html#), -[comprehensive documentation](https://modelscope.github.io/agentscope/en/index.html), and broad compatibility. Besides, [AgentScope Workstation](https://agentscope.aliyun.com/) provides a *drag-and-drop programming platform* and a *copilot* for beginners of AgentScope! +[comprehensive documentation](https://modelscope.github.io/agentscope/en/index.html), and broad compatibility. In addition, [AgentScope Workstation](https://agentscope.io/) provides a *drag-and-drop programming platform* and a *copilot* for AgentScope beginners! - ✅ **High Robustness**: Supporting customized fault-tolerance controls and retry mechanisms to enhance application stability. @@ -109,24 +109,25 @@ applications in a centralized programming manner for streamlined development. AgentScope provides a list of `ModelWrapper` to support both local model services and third-party model APIs. -| API | Task | Model Wrapper | Configuration | Some Supported Models | -|------------------------|-----------------|---------------------------------------------------------------------------------------------------------------------------------|---------------------------------------------------------------------------------------------|-----------------------------------------------------------------| -| OpenAI API | Chat | [`OpenAIChatWrapper`](https://github.com/modelscope/agentscope/blob/main/src/agentscope/models/openai_model.py) |[guidance](https://modelscope.github.io/agentscope/en/tutorial/203-model.html#openai-api)
[template](https://github.com/modelscope/agentscope/blob/main/examples/model_configs_template/openai_chat_template.json) | gpt-4o, gpt-4, gpt-3.5-turbo, ... | -| | Embedding | [`OpenAIEmbeddingWrapper`](https://github.com/modelscope/agentscope/blob/main/src/agentscope/models/openai_model.py) | [guidance](https://modelscope.github.io/agentscope/en/tutorial/203-model.html#openai-api)
[template](https://github.com/modelscope/agentscope/blob/main/examples/model_configs_template/openai_embedding_template.json) | text-embedding-ada-002, ... | -| | DALL·E | [`OpenAIDALLEWrapper`](https://github.com/modelscope/agentscope/blob/main/src/agentscope/models/openai_model.py) | [guidance](https://modelscope.github.io/agentscope/en/tutorial/203-model.html#openai-api)
[template](https://github.com/modelscope/agentscope/blob/main/examples/model_configs_template/openai_dall_e_template.json) | dall-e-2, dall-e-3 | -| DashScope API | Chat | [`DashScopeChatWrapper`](https://github.com/modelscope/agentscope/blob/main/src/agentscope/models/dashscope_model.py) | [guidance](https://modelscope.github.io/agentscope/en/tutorial/203-model.html#dashscope-api)
[template](https://github.com/modelscope/agentscope/blob/main/examples/model_configs_template/dashscope_chat_template.json) | qwen-plus, qwen-max, ... | -| | Image Synthesis | [`DashScopeImageSynthesisWrapper`](https://github.com/modelscope/agentscope/blob/main/src/agentscope/models/dashscope_model.py) | [guidance](https://modelscope.github.io/agentscope/en/tutorial/203-model.html#dashscope-api)
[template](https://github.com/modelscope/agentscope/blob/main/examples/model_configs_template/dashscope_image_synthesis_template.json) | wanx-v1 | -| | Text Embedding | [`DashScopeTextEmbeddingWrapper`](https://github.com/modelscope/agentscope/blob/main/src/agentscope/models/dashscope_model.py) | [guidance](https://modelscope.github.io/agentscope/en/tutorial/203-model.html#dashscope-api)
[template](https://github.com/modelscope/agentscope/blob/main/examples/model_configs_template/dashscope_text_embedding_template.json) | text-embedding-v1, text-embedding-v2, ... | -| | Multimodal | [`DashScopeMultiModalWrapper`](https://github.com/modelscope/agentscope/blob/main/src/agentscope/models/dashscope_model.py) | [guidance](https://modelscope.github.io/agentscope/en/tutorial/203-model.html#dashscope-api)
[template](https://github.com/modelscope/agentscope/blob/main/examples/model_configs_template/dashscope_multimodal_template.json) | qwen-vl-max, qwen-vl-chat-v1, qwen-audio-chat | -| Gemini API | Chat | [`GeminiChatWrapper`](https://github.com/modelscope/agentscope/blob/main/src/agentscope/models/gemini_model.py) | [guidance](https://modelscope.github.io/agentscope/en/tutorial/203-model.html#gemini-api)
[template](https://github.com/modelscope/agentscope/blob/main/examples/model_configs_template/gemini_chat_template.json) | gemini-pro, ... | -| | Embedding | [`GeminiEmbeddingWrapper`](https://github.com/modelscope/agentscope/blob/main/src/agentscope/models/gemini_model.py) | [guidance](https://modelscope.github.io/agentscope/en/tutorial/203-model.html#gemini-api)
[template](https://github.com/modelscope/agentscope/blob/main/examples/model_configs_template/gemini_embedding_template.json) | models/embedding-001, ... | -| ZhipuAI API | Chat | [`ZhipuAIChatWrapper`](https://github.com/modelscope/agentscope/blob/main/src/agentscope/models/zhipu_model.py) | [guidance](https://modelscope.github.io/agentscope/en/tutorial/203-model.html#zhipu-api)
[template](https://github.com/modelscope/agentscope/blob/main/examples/model_configs_template/zhipu_chat_template.json) | glm-4, ... | -| | Embedding | [`ZhipuAIEmbeddingWrapper`](https://github.com/modelscope/agentscope/blob/main/src/agentscope/models/zhipu_model.py) | [guidance](https://modelscope.github.io/agentscope/en/tutorial/203-model.html#zhipu-api)
[template](https://github.com/modelscope/agentscope/blob/main/examples/model_configs_template/zhipu_embedding_template.json) | embedding-2, ... | -| ollama | Chat | [`OllamaChatWrapper`](https://github.com/modelscope/agentscope/blob/main/src/agentscope/models/ollama_model.py) | [guidance](https://modelscope.github.io/agentscope/en/tutorial/203-model.html#ollama-api)
[template](https://github.com/modelscope/agentscope/blob/main/examples/model_configs_template/ollama_chat_template.json) | llama3, llama2, Mistral, ... | -| | Embedding | [`OllamaEmbeddingWrapper`](https://github.com/modelscope/agentscope/blob/main/src/agentscope/models/ollama_model.py) | [guidance](https://modelscope.github.io/agentscope/en/tutorial/203-model.html#ollama-api)
[template](https://github.com/modelscope/agentscope/blob/main/examples/model_configs_template/ollama_embedding_template.json) | llama2, Mistral, ... | -| | Generation | [`OllamaGenerationWrapper`](https://github.com/modelscope/agentscope/blob/main/src/agentscope/models/ollama_model.py) | [guidance](https://modelscope.github.io/agentscope/en/tutorial/203-model.html#ollama-api)
[template](https://github.com/modelscope/agentscope/blob/main/examples/model_configs_template/ollama_generate_template.json) | llama2, Mistral, ... | -| LiteLLM API | Chat | [`LiteLLMChatWrapper`](https://github.com/modelscope/agentscope/blob/main/src/agentscope/models/litellm_model.py) | [guidance](https://modelscope.github.io/agentscope/en/tutorial/203-model.html#litellm-api)
[template](https://github.com/modelscope/agentscope/blob/main/examples/model_configs_template/litellm_chat_template.json) | [models supported by litellm](https://docs.litellm.ai/docs/)... | -| Post Request based API | - | [`PostAPIModelWrapper`](https://github.com/modelscope/agentscope/blob/main/src/agentscope/models/post_model.py) | [guidance](https://modelscope.github.io/agentscope/en/tutorial/203-model.html#post-request-api)
[template](https://github.com/modelscope/agentscope/blob/main/examples/model_configs_template/postapi_model_config_template.json) | - | +| API | Task | Model Wrapper | Configuration | Some Supported Models | +|------------------------|-----------------|---------------------------------------------------------------------------------------------------------------------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-----------------------------------------------------------------| +| OpenAI API | Chat | [`OpenAIChatWrapper`](https://github.com/modelscope/agentscope/blob/main/src/agentscope/models/openai_model.py) | [guidance](https://modelscope.github.io/agentscope/en/tutorial/203-model.html#openai-api)
[template](https://github.com/modelscope/agentscope/blob/main/examples/model_configs_template/openai_chat_template.json) | gpt-4o, gpt-4, gpt-3.5-turbo, ... | +| | Embedding | [`OpenAIEmbeddingWrapper`](https://github.com/modelscope/agentscope/blob/main/src/agentscope/models/openai_model.py) | [guidance](https://modelscope.github.io/agentscope/en/tutorial/203-model.html#openai-api)
[template](https://github.com/modelscope/agentscope/blob/main/examples/model_configs_template/openai_embedding_template.json) | text-embedding-ada-002, ... | +| | DALL·E | [`OpenAIDALLEWrapper`](https://github.com/modelscope/agentscope/blob/main/src/agentscope/models/openai_model.py) | [guidance](https://modelscope.github.io/agentscope/en/tutorial/203-model.html#openai-api)
[template](https://github.com/modelscope/agentscope/blob/main/examples/model_configs_template/openai_dall_e_template.json) | dall-e-2, dall-e-3 | +| DashScope API | Chat | [`DashScopeChatWrapper`](https://github.com/modelscope/agentscope/blob/main/src/agentscope/models/dashscope_model.py) | [guidance](https://modelscope.github.io/agentscope/en/tutorial/203-model.html#dashscope-api)
[template](https://github.com/modelscope/agentscope/blob/main/examples/model_configs_template/dashscope_chat_template.json) | qwen-plus, qwen-max, ... | +| | Image Synthesis | [`DashScopeImageSynthesisWrapper`](https://github.com/modelscope/agentscope/blob/main/src/agentscope/models/dashscope_model.py) | [guidance](https://modelscope.github.io/agentscope/en/tutorial/203-model.html#dashscope-api)
[template](https://github.com/modelscope/agentscope/blob/main/examples/model_configs_template/dashscope_image_synthesis_template.json) | wanx-v1 | +| | Text Embedding | [`DashScopeTextEmbeddingWrapper`](https://github.com/modelscope/agentscope/blob/main/src/agentscope/models/dashscope_model.py) | [guidance](https://modelscope.github.io/agentscope/en/tutorial/203-model.html#dashscope-api)
[template](https://github.com/modelscope/agentscope/blob/main/examples/model_configs_template/dashscope_text_embedding_template.json) | text-embedding-v1, text-embedding-v2, ... | +| | Multimodal | [`DashScopeMultiModalWrapper`](https://github.com/modelscope/agentscope/blob/main/src/agentscope/models/dashscope_model.py) | [guidance](https://modelscope.github.io/agentscope/en/tutorial/203-model.html#dashscope-api)
[template](https://github.com/modelscope/agentscope/blob/main/examples/model_configs_template/dashscope_multimodal_template.json) | qwen-vl-max, qwen-vl-chat-v1, qwen-audio-chat | +| Gemini API | Chat | [`GeminiChatWrapper`](https://github.com/modelscope/agentscope/blob/main/src/agentscope/models/gemini_model.py) | [guidance](https://modelscope.github.io/agentscope/en/tutorial/203-model.html#gemini-api)
[template](https://github.com/modelscope/agentscope/blob/main/examples/model_configs_template/gemini_chat_template.json) | gemini-pro, ... | +| | Embedding | [`GeminiEmbeddingWrapper`](https://github.com/modelscope/agentscope/blob/main/src/agentscope/models/gemini_model.py) | [guidance](https://modelscope.github.io/agentscope/en/tutorial/203-model.html#gemini-api)
[template](https://github.com/modelscope/agentscope/blob/main/examples/model_configs_template/gemini_embedding_template.json) | models/embedding-001, ... | +| ZhipuAI API | Chat | [`ZhipuAIChatWrapper`](https://github.com/modelscope/agentscope/blob/main/src/agentscope/models/zhipu_model.py) | [guidance](https://modelscope.github.io/agentscope/en/tutorial/203-model.html#zhipu-api)
[template](https://github.com/modelscope/agentscope/blob/main/examples/model_configs_template/zhipu_chat_template.json) | glm-4, ... | +| | Embedding | [`ZhipuAIEmbeddingWrapper`](https://github.com/modelscope/agentscope/blob/main/src/agentscope/models/zhipu_model.py) | [guidance](https://modelscope.github.io/agentscope/en/tutorial/203-model.html#zhipu-api)
[template](https://github.com/modelscope/agentscope/blob/main/examples/model_configs_template/zhipu_embedding_template.json) | embedding-2, ... | +| ollama | Chat | [`OllamaChatWrapper`](https://github.com/modelscope/agentscope/blob/main/src/agentscope/models/ollama_model.py) | [guidance](https://modelscope.github.io/agentscope/en/tutorial/203-model.html#ollama-api)
[template](https://github.com/modelscope/agentscope/blob/main/examples/model_configs_template/ollama_chat_template.json) | llama3, llama2, Mistral, ... | +| | Embedding | [`OllamaEmbeddingWrapper`](https://github.com/modelscope/agentscope/blob/main/src/agentscope/models/ollama_model.py) | [guidance](https://modelscope.github.io/agentscope/en/tutorial/203-model.html#ollama-api)
[template](https://github.com/modelscope/agentscope/blob/main/examples/model_configs_template/ollama_embedding_template.json) | llama2, Mistral, ... | +| | Generation | [`OllamaGenerationWrapper`](https://github.com/modelscope/agentscope/blob/main/src/agentscope/models/ollama_model.py) | [guidance](https://modelscope.github.io/agentscope/en/tutorial/203-model.html#ollama-api)
[template](https://github.com/modelscope/agentscope/blob/main/examples/model_configs_template/ollama_generate_template.json) | llama2, Mistral, ... | +| LiteLLM API | Chat | [`LiteLLMChatWrapper`](https://github.com/modelscope/agentscope/blob/main/src/agentscope/models/litellm_model.py) | [guidance](https://modelscope.github.io/agentscope/en/tutorial/203-model.html#litellm-api)
[template](https://github.com/modelscope/agentscope/blob/main/examples/model_configs_template/litellm_chat_template.json) | [models supported by litellm](https://docs.litellm.ai/docs/)... | +| Yi API | Chat | [`YiChatWrapper`](https://github.com/modelscope/agentscope/blob/main/src/agentscope/models/yi_model.py) | [guidance](https://modelscope.github.io/agentscope/en/tutorial/203-model.html)
[template](https://github.com/modelscope/agentscope/blob/main/examples/model_configs_template/yi_chat_template.json) | yi-large, yi-medium, ... | +| Post Request based API | - | [`PostAPIModelWrapper`](https://github.com/modelscope/agentscope/blob/main/src/agentscope/models/post_model.py) | [guidance](https://modelscope.github.io/agentscope/en/tutorial/203-model.html#post-request-api)
[template](https://github.com/modelscope/agentscope/blob/main/examples/model_configs_template/postapi_model_config_template.json) | - | **Supported Local Model Deployment** @@ -148,6 +149,8 @@ the following libraries. - File Operation - Text Processing - Multi Modality +- Wikipedia Search and Retrieval +- TripAdvisor Search **Example Applications** @@ -162,12 +165,13 @@ the following libraries. - [Conversation with ReAct Agent](https://github.com/modelscope/agentscope/blob/main/examples/conversation_with_react_agent) - [Conversation in Natural Language to Query SQL](https://github.com/modelscope/agentscope/blob/main/examples/conversation_nl2sql/) - [Conversation with RAG Agent](https://github.com/modelscope/agentscope/blob/main/examples/conversation_with_RAG_agents) - - new[Conversation with gpt-4o](https://github.com/modelscope/agentscope/blob/main/examples/conversation_with_gpt-4o) - - new[Conversation with Software Engineering Agent](https://github.com/modelscope/agentscope/blob/main/examples/conversation_with_swe-agent/) - - new[Conversation with Customized Tools](https://github.com/modelscope/agentscope/blob/main/examples/conversation_with_customized_services/) + - [Conversation with gpt-4o](https://github.com/modelscope/agentscope/blob/main/examples/conversation_with_gpt-4o) + - [Conversation with Software Engineering Agent](https://github.com/modelscope/agentscope/blob/main/examples/conversation_with_swe-agent/) + - [Conversation with Customized Tools](https://github.com/modelscope/agentscope/blob/main/examples/conversation_with_customized_services/) - new[Mixture of Agents Algorithm](https://github.com/modelscope/agentscope/blob/main/examples/conversation_mixture_of_agents/) - new[Conversation in Stream Mode](https://github.com/modelscope/agentscope/blob/main/examples/conversation_in_stream_mode/) - new[Conversation with CodeAct Agent](https://github.com/modelscope/agentscope/blob/main/examples/conversation_with_codeact_agent/) + - new[Conversation with Router Agent](https://github.com/modelscope/agentscope/blob/main/examples/conversation_with_router_agent/) - Game diff --git a/README_ZH.md b/README_ZH.md index 8c89ff927..d1704835f 100644 --- a/README_ZH.md +++ b/README_ZH.md @@ -50,11 +50,11 @@ - new**[2024-07-15]** AgentScope 中添加了 Mixture of Agents 算法。使用样例请参考 [MoA 示例](https://github.com/modelscope/agentscope/blob/main/examples/conversation_mixture_of_agents)。 -- new**[2024-06-14]** 新的提示调优(Prompt tuning)模块已经上线 AgentScope,用以帮助开发者生成和优化智能体的 system prompt。更多的细节和使用样例请参考 AgentScope [教程](https://modelscope.github.io/agentscope/en/tutorial/209-prompt_opt.html)! +- **[2024-06-14]** 新的提示调优(Prompt tuning)模块已经上线 AgentScope,用以帮助开发者生成和优化智能体的 system prompt。更多的细节和使用样例请参考 AgentScope [教程](https://modelscope.github.io/agentscope/en/tutorial/209-prompt_opt.html)! -- new**[2024-06-11]** RAG功能现在已经整合进 **AgentScope** 中! 大家可以根据 [**简要介绍AgentScope中的RAG**](https://modelscope.github.io/agentscope/en/tutorial/210-rag.html) ,让自己的agent用上外部知识! +- **[2024-06-11]** RAG功能现在已经整合进 **AgentScope** 中! 大家可以根据 [**简要介绍AgentScope中的RAG**](https://modelscope.github.io/agentscope/en/tutorial/210-rag.html) ,让自己的agent用上外部知识! -- new**[2024-06-09]** AgentScope v0.0.5 已经更新!在这个新版本中,我们开源了 [**AgentScope Workstation**](https://modelscope.github.io/agentscope/en/tutorial/209-gui.html) (在线版本的网址是[agentscope.io](https://agentscope.io))! +- **[2024-06-09]** AgentScope v0.0.5 已经更新!在这个新版本中,我们开源了 [**AgentScope Workstation**](https://modelscope.github.io/agentscope/en/tutorial/209-gui.html) (在线版本的网址是[agentscope.io](https://agentscope.io))! 
- **[2024-05-24]** 我们很高兴地宣布 **AgentScope Workstation** 相关功能即将开源。我们的网站服务暂时下线。在线服务会很快升级重新上线,敬请期待... @@ -66,7 +66,7 @@ - **[2024-04-30]** 我们现在发布了**AgentScope** v0.0.4版本! -- **[2024-04-27]** [AgentScope Workstation](https://agentscope.aliyun.com/)上线了! 欢迎使用 Workstation 体验如何在*拖拉拽编程平台* 零代码搭建多智体应用,也欢迎大家通过*copilot*查询AgentScope各种小知识! +- **[2024-04-27]** [AgentScope Workstation](https://agentscope.io/)上线了! 欢迎使用 Workstation 体验如何在*拖拉拽编程平台* 零代码搭建多智体应用,也欢迎大家通过*copilot*查询AgentScope各种小知识! - **[2024-04-19]** AgentScope现已经支持Llama3!我们提供了面向CPU推理和GPU推理的[脚本](./examples/model_llama3)和[模型配置](./examples/model_llama3),一键式开启Llama3的探索,在我们的样例中尝试Llama3吧! @@ -90,7 +90,7 @@ AgentScope是一个创新的多智能体开发平台,旨在赋予开发人员使用大模型轻松构建多智能体应用的能力。 -- 🤝 **高易用**: AgentScope专为开发人员设计,提供了[丰富的组件](https://modelscope.github.io/agentscope/en/tutorial/204-service.html#), [全面的文档](https://modelscope.github.io/agentscope/zh_CN/index.html)和广泛的兼容性。同时,[AgentScope Workstation](https://agentscope.aliyun.com/)提供了在线拖拉拽编程和在线小助手(copilot)功能,帮助开发者迅速上手! +- 🤝 **高易用**: AgentScope专为开发人员设计,提供了[丰富的组件](https://modelscope.github.io/agentscope/en/tutorial/204-service.html#), [全面的文档](https://modelscope.github.io/agentscope/zh_CN/index.html)和广泛的兼容性。同时,[AgentScope Workstation](https://agentscope.io/)提供了在线拖拉拽编程和在线小助手(copilot)功能,帮助开发者迅速上手! - ✅ **高鲁棒**:支持自定义的容错控制和重试机制,以提高应用程序的稳定性。 @@ -100,24 +100,25 @@ AgentScope是一个创新的多智能体开发平台,旨在赋予开发人员 AgentScope提供了一系列`ModelWrapper`来支持本地模型服务和第三方模型API。 -| API | Task | Model Wrapper | Configuration | Some Supported Models | -|------------------------|-----------------|---------------------------------------------------------------------------------------------------------------------------------|---------------------------------------------------------------------------------------------|-----------------------------------------------| -| OpenAI API | Chat | [`OpenAIChatWrapper`](https://github.com/modelscope/agentscope/blob/main/src/agentscope/models/openai_model.py) |[guidance](https://modelscope.github.io/agentscope/en/tutorial/203-model.html#openai-api)
[template](https://github.com/modelscope/agentscope/blob/main/examples/model_configs_template/openai_chat_template.json) | gpt-4o, gpt-4, gpt-3.5-turbo, ... | -| | Embedding | [`OpenAIEmbeddingWrapper`](https://github.com/modelscope/agentscope/blob/main/src/agentscope/models/openai_model.py) | [guidance](https://modelscope.github.io/agentscope/en/tutorial/203-model.html#openai-api)
[template](https://github.com/modelscope/agentscope/blob/main/examples/model_configs_template/openai_embedding_template.json) | text-embedding-ada-002, ... | -| | DALL·E | [`OpenAIDALLEWrapper`](https://github.com/modelscope/agentscope/blob/main/src/agentscope/models/openai_model.py) | [guidance](https://modelscope.github.io/agentscope/en/tutorial/203-model.html#openai-api)
[template](https://github.com/modelscope/agentscope/blob/main/examples/model_configs_template/openai_dall_e_template.json) | dall-e-2, dall-e-3 | -| DashScope API | Chat | [`DashScopeChatWrapper`](https://github.com/modelscope/agentscope/blob/main/src/agentscope/models/dashscope_model.py) | [guidance](https://modelscope.github.io/agentscope/en/tutorial/203-model.html#dashscope-api)
[template](https://github.com/modelscope/agentscope/blob/main/examples/model_configs_template/dashscope_chat_template.json) | qwen-plus, qwen-max, ... | -| | Image Synthesis | [`DashScopeImageSynthesisWrapper`](https://github.com/modelscope/agentscope/blob/main/src/agentscope/models/dashscope_model.py) | [guidance](https://modelscope.github.io/agentscope/en/tutorial/203-model.html#dashscope-api)
[template](https://github.com/modelscope/agentscope/blob/main/examples/model_configs_template/dashscope_image_synthesis_template.json) | wanx-v1 | -| | Text Embedding | [`DashScopeTextEmbeddingWrapper`](https://github.com/modelscope/agentscope/blob/main/src/agentscope/models/dashscope_model.py) | [guidance](https://modelscope.github.io/agentscope/en/tutorial/203-model.html#dashscope-api)
[template](https://github.com/modelscope/agentscope/blob/main/examples/model_configs_template/dashscope_text_embedding_template.json) | text-embedding-v1, text-embedding-v2, ... | -| | Multimodal | [`DashScopeMultiModalWrapper`](https://github.com/modelscope/agentscope/blob/main/src/agentscope/models/dashscope_model.py) | [guidance](https://modelscope.github.io/agentscope/en/tutorial/203-model.html#dashscope-api)
[template](https://github.com/modelscope/agentscope/blob/main/examples/model_configs_template/dashscope_multimodal_template.json) | qwen-vl-max, qwen-vl-chat-v1, qwen-audio-chat | -| Gemini API | Chat | [`GeminiChatWrapper`](https://github.com/modelscope/agentscope/blob/main/src/agentscope/models/gemini_model.py) | [guidance](https://modelscope.github.io/agentscope/en/tutorial/203-model.html#gemini-api)
[template](https://github.com/modelscope/agentscope/blob/main/examples/model_configs_template/gemini_chat_template.json) | gemini-pro, ... | -| | Embedding | [`GeminiEmbeddingWrapper`](https://github.com/modelscope/agentscope/blob/main/src/agentscope/models/gemini_model.py) | [guidance](https://modelscope.github.io/agentscope/en/tutorial/203-model.html#gemini-api)
[template](https://github.com/modelscope/agentscope/blob/main/examples/model_configs_template/gemini_embedding_template.json) | models/embedding-001, ... | -| ZhipuAI API | Chat | [`ZhipuAIChatWrapper`](https://github.com/modelscope/agentscope/blob/main/src/agentscope/models/zhipu_model.py) | [guidance](https://modelscope.github.io/agentscope/en/tutorial/203-model.html#zhipu-api)
[template](https://github.com/modelscope/agentscope/blob/main/examples/model_configs_template/zhipu_chat_template.json) | glm-4, ... | -| | Embedding | [`ZhipuAIEmbeddingWrapper`](https://github.com/modelscope/agentscope/blob/main/src/agentscope/models/zhipu_model.py) | [guidance](https://modelscope.github.io/agentscope/en/tutorial/203-model.html#zhipu-api)
[template](https://github.com/modelscope/agentscope/blob/main/examples/model_configs_template/zhipu_embedding_template.json) | embedding-2, ... | -| ollama | Chat | [`OllamaChatWrapper`](https://github.com/modelscope/agentscope/blob/main/src/agentscope/models/ollama_model.py) | [guidance](https://modelscope.github.io/agentscope/en/tutorial/203-model.html#ollama-api)
[template](https://github.com/modelscope/agentscope/blob/main/examples/model_configs_template/ollama_chat_template.json) | llama3, llama2, Mistral, ... | -| | Embedding | [`OllamaEmbeddingWrapper`](https://github.com/modelscope/agentscope/blob/main/src/agentscope/models/ollama_model.py) | [guidance](https://modelscope.github.io/agentscope/en/tutorial/203-model.html#ollama-api)
[template](https://github.com/modelscope/agentscope/blob/main/examples/model_configs_template/ollama_embedding_template.json) | llama2, Mistral, ... | -| | Generation | [`OllamaGenerationWrapper`](https://github.com/modelscope/agentscope/blob/main/src/agentscope/models/ollama_model.py) | [guidance](https://modelscope.github.io/agentscope/en/tutorial/203-model.html#ollama-api)
[template](https://github.com/modelscope/agentscope/blob/main/examples/model_configs_template/ollama_generate_template.json) | llama2, Mistral, ... | -| LiteLLM API | Chat | [`LiteLLMChatWrapper`](https://github.com/modelscope/agentscope/blob/main/src/agentscope/models/litellm_model.py) | [guidance](https://modelscope.github.io/agentscope/en/tutorial/203-model.html#litellm-api)
[template](https://github.com/modelscope/agentscope/blob/main/examples/model_configs_template/litellm_chat_template.json) | [models supported by litellm](https://docs.litellm.ai/docs/)... | -| Post Request based API | - | [`PostAPIModelWrapper`](https://github.com/modelscope/agentscope/blob/main/src/agentscope/models/post_model.py) | [guidance](https://modelscope.github.io/agentscope/en/tutorial/203-model.html#post-request-api)
[template](https://github.com/modelscope/agentscope/blob/main/examples/model_configs_template/postapi_model_config_template.json) | - | +| API | Task | Model Wrapper | Configuration | Some Supported Models | +|------------------------|-----------------|---------------------------------------------------------------------------------------------------------------------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-----------------------------------------------| +| OpenAI API | Chat | [`OpenAIChatWrapper`](https://github.com/modelscope/agentscope/blob/main/src/agentscope/models/openai_model.py) | [guidance](https://modelscope.github.io/agentscope/en/tutorial/203-model.html#openai-api)
[template](https://github.com/modelscope/agentscope/blob/main/examples/model_configs_template/openai_chat_template.json) | gpt-4o, gpt-4, gpt-3.5-turbo, ... | +| | Embedding | [`OpenAIEmbeddingWrapper`](https://github.com/modelscope/agentscope/blob/main/src/agentscope/models/openai_model.py) | [guidance](https://modelscope.github.io/agentscope/en/tutorial/203-model.html#openai-api)
[template](https://github.com/modelscope/agentscope/blob/main/examples/model_configs_template/openai_embedding_template.json) | text-embedding-ada-002, ... | +| | DALL·E | [`OpenAIDALLEWrapper`](https://github.com/modelscope/agentscope/blob/main/src/agentscope/models/openai_model.py) | [guidance](https://modelscope.github.io/agentscope/en/tutorial/203-model.html#openai-api)
[template](https://github.com/modelscope/agentscope/blob/main/examples/model_configs_template/openai_dall_e_template.json) | dall-e-2, dall-e-3 | +| DashScope API | Chat | [`DashScopeChatWrapper`](https://github.com/modelscope/agentscope/blob/main/src/agentscope/models/dashscope_model.py) | [guidance](https://modelscope.github.io/agentscope/en/tutorial/203-model.html#dashscope-api)
[template](https://github.com/modelscope/agentscope/blob/main/examples/model_configs_template/dashscope_chat_template.json) | qwen-plus, qwen-max, ... | +| | Image Synthesis | [`DashScopeImageSynthesisWrapper`](https://github.com/modelscope/agentscope/blob/main/src/agentscope/models/dashscope_model.py) | [guidance](https://modelscope.github.io/agentscope/en/tutorial/203-model.html#dashscope-api)
[template](https://github.com/modelscope/agentscope/blob/main/examples/model_configs_template/dashscope_image_synthesis_template.json) | wanx-v1 | +| | Text Embedding | [`DashScopeTextEmbeddingWrapper`](https://github.com/modelscope/agentscope/blob/main/src/agentscope/models/dashscope_model.py) | [guidance](https://modelscope.github.io/agentscope/en/tutorial/203-model.html#dashscope-api)
[template](https://github.com/modelscope/agentscope/blob/main/examples/model_configs_template/dashscope_text_embedding_template.json) | text-embedding-v1, text-embedding-v2, ... | +| | Multimodal | [`DashScopeMultiModalWrapper`](https://github.com/modelscope/agentscope/blob/main/src/agentscope/models/dashscope_model.py) | [guidance](https://modelscope.github.io/agentscope/en/tutorial/203-model.html#dashscope-api)
[template](https://github.com/modelscope/agentscope/blob/main/examples/model_configs_template/dashscope_multimodal_template.json) | qwen-vl-max, qwen-vl-chat-v1, qwen-audio-chat | +| Gemini API | Chat | [`GeminiChatWrapper`](https://github.com/modelscope/agentscope/blob/main/src/agentscope/models/gemini_model.py) | [guidance](https://modelscope.github.io/agentscope/en/tutorial/203-model.html#gemini-api)
[template](https://github.com/modelscope/agentscope/blob/main/examples/model_configs_template/gemini_chat_template.json) | gemini-pro, ... | +| | Embedding | [`GeminiEmbeddingWrapper`](https://github.com/modelscope/agentscope/blob/main/src/agentscope/models/gemini_model.py) | [guidance](https://modelscope.github.io/agentscope/en/tutorial/203-model.html#gemini-api)
[template](https://github.com/modelscope/agentscope/blob/main/examples/model_configs_template/gemini_embedding_template.json) | models/embedding-001, ... | +| ZhipuAI API | Chat | [`ZhipuAIChatWrapper`](https://github.com/modelscope/agentscope/blob/main/src/agentscope/models/zhipu_model.py) | [guidance](https://modelscope.github.io/agentscope/en/tutorial/203-model.html#zhipu-api)
[template](https://github.com/modelscope/agentscope/blob/main/examples/model_configs_template/zhipu_chat_template.json) | glm-4, ... | +| | Embedding | [`ZhipuAIEmbeddingWrapper`](https://github.com/modelscope/agentscope/blob/main/src/agentscope/models/zhipu_model.py) | [guidance](https://modelscope.github.io/agentscope/en/tutorial/203-model.html#zhipu-api)
[template](https://github.com/modelscope/agentscope/blob/main/examples/model_configs_template/zhipu_embedding_template.json) | embedding-2, ... | +| ollama | Chat | [`OllamaChatWrapper`](https://github.com/modelscope/agentscope/blob/main/src/agentscope/models/ollama_model.py) | [guidance](https://modelscope.github.io/agentscope/en/tutorial/203-model.html#ollama-api)
[template](https://github.com/modelscope/agentscope/blob/main/examples/model_configs_template/ollama_chat_template.json) | llama3, llama2, Mistral, ... | +| | Embedding | [`OllamaEmbeddingWrapper`](https://github.com/modelscope/agentscope/blob/main/src/agentscope/models/ollama_model.py) | [guidance](https://modelscope.github.io/agentscope/en/tutorial/203-model.html#ollama-api)
[template](https://github.com/modelscope/agentscope/blob/main/examples/model_configs_template/ollama_embedding_template.json) | llama2, Mistral, ... | +| | Generation | [`OllamaGenerationWrapper`](https://github.com/modelscope/agentscope/blob/main/src/agentscope/models/ollama_model.py) | [guidance](https://modelscope.github.io/agentscope/en/tutorial/203-model.html#ollama-api)
[template](https://github.com/modelscope/agentscope/blob/main/examples/model_configs_template/ollama_generate_template.json) | llama2, Mistral, ... | +| LiteLLM API | Chat | [`LiteLLMChatWrapper`](https://github.com/modelscope/agentscope/blob/main/src/agentscope/models/litellm_model.py) | [guidance](https://modelscope.github.io/agentscope/en/tutorial/203-model.html#litellm-api)
[template](https://github.com/modelscope/agentscope/blob/main/examples/model_configs_template/litellm_chat_template.json) | [models supported by litellm](https://docs.litellm.ai/docs/)... | +| Yi API | Chat | [`YiChatWrapper`](https://github.com/modelscope/agentscope/blob/main/src/agentscope/models/yi_model.py) | [guidance](https://modelscope.github.io/agentscope/en/tutorial/203-model.html)
[template](https://github.com/modelscope/agentscope/blob/main/examples/model_configs_template/yi_chat_template.json) | yi-large, yi-medium, ... | +| Post Request based API | - | [`PostAPIModelWrapper`](https://github.com/modelscope/agentscope/blob/main/src/agentscope/models/post_model.py) | [guidance](https://modelscope.github.io/agentscope/en/tutorial/203-model.html#post-request-api)
[template](https://github.com/modelscope/agentscope/blob/main/examples/model_configs_template/postapi_model_config_template.json) | - | **支持的本地模型部署** @@ -138,6 +139,8 @@ AgentScope支持使用以下库快速部署本地模型服务。 - 文件操作 - 文本处理 - 多模态生成 +- 维基百科搜索 +- TripAdvisor搜索 **样例应用** @@ -152,15 +155,13 @@ AgentScope支持使用以下库快速部署本地模型服务。 - [与ReAct智能体对话](./examples/conversation_with_react_agent) - [通过对话查询SQL信息](./examples/conversation_nl2sql/) - [与RAG智能体对话](./examples/conversation_with_RAG_agents) - - new[与gpt-4o模型对话](./examples/conversation_with_gpt-4o) - - new[与自定义服务对话](./examples/conversation_with_customized_services/) - - - new[与SoftWare Engineering智能体对话](./examples/conversation_with_swe-agent/) - - new[自定义工具函数](./examples/conversation_with_customized_services/) + - [与gpt-4o模型对话](./examples/conversation_with_gpt-4o) + - [自定义工具函数](./examples/conversation_with_customized_services/) + - [与Software Engineering智能体对话](./examples/conversation_with_swe-agent/) - new[Mixture of Agents算法](https://github.com/modelscope/agentscope/blob/main/examples/conversation_mixture_of_agents/) - new[流式对话](https://github.com/modelscope/agentscope/blob/main/examples/conversation_in_stream_mode/) - new[与CodeAct智能体对话](https://github.com/modelscope/agentscope/blob/main/examples/conversation_with_codeact_agent/) - + - new[与Router Agent对话](https://github.com/modelscope/agentscope/blob/main/examples/conversation_with_router_agent/) - 游戏 - [五子棋](./examples/game_gomoku) diff --git a/docs/sphinx_doc/en/source/tutorial/201-agent.md b/docs/sphinx_doc/en/source/tutorial/201-agent.md index 3fa916a88..1a90bf589 100644 --- a/docs/sphinx_doc/en/source/tutorial/201-agent.md +++ b/docs/sphinx_doc/en/source/tutorial/201-agent.md @@ -35,7 +35,6 @@ class AgentBase(Operator): sys_prompt: Optional[str] = None, model_config_name: str = None, use_memory: bool = True, - memory_config: Optional[dict] = None, ) -> None: # ... [code omitted for brevity] @@ -71,7 +70,6 @@ Below is a table summarizing the functionality of some of the key agents availab | `DialogAgent` | Manages dialogues by understanding context and generating coherent responses. | Customer service bots, virtual assistants. | | `DictDialogAgent` | Manages dialogues by understanding context and generating coherent responses, and the responses are in json format. | Customer service bots, virtual assistants. | | `UserAgent` | Interacts with the user to collect input, generating messages that may include URLs or additional specifics based on required keys. | Collecting user input for agents | -| `TextToImageAgent` | An agent that convert user input text to image. | Converting text to image | | `ReActAgent` | An agent class that implements the ReAct algorithm. | Solving complex tasks | | *More to Come* | AgentScope is continuously expanding its pool with more specialized agents for diverse applications. 
| | diff --git a/docs/sphinx_doc/en/source/tutorial/203-model.md b/docs/sphinx_doc/en/source/tutorial/203-model.md index 9ac18e62b..2aad86e1e 100644 --- a/docs/sphinx_doc/en/source/tutorial/203-model.md +++ b/docs/sphinx_doc/en/source/tutorial/203-model.md @@ -74,7 +74,7 @@ In the current AgentScope, the supported `model_type` types, the corresponding | API | Task | Model Wrapper | `model_type` | Some Supported Models | |------------------------|-----------------|---------------------------------------------------------------------------------------------------------------------------------|-------------------------------|--------------------------------------------------| -| OpenAI API | Chat | [`OpenAIChatWrapper`](https://github.com/modelscope/agentscope/blob/main/src/agentscope/models/openai_model.py) | `"openai_chat"` | gpt-4, gpt-3.5-turbo, ... | +| OpenAI API | Chat | [`OpenAIChatWrapper`](https://github.com/modelscope/agentscope/blob/main/src/agentscope/models/openai_model.py) | `"openai_chat"` | gpt-4, gpt-3.5-turbo, ... | | | Embedding | [`OpenAIEmbeddingWrapper`](https://github.com/modelscope/agentscope/blob/main/src/agentscope/models/openai_model.py) | `"openai_embedding"` | text-embedding-ada-002, ... | | | DALL·E | [`OpenAIDALLEWrapper`](https://github.com/modelscope/agentscope/blob/main/src/agentscope/models/openai_model.py) | `"openai_dall_e"` | dall-e-2, dall-e-3 | | DashScope API | Chat | [`DashScopeChatWrapper`](https://github.com/modelscope/agentscope/blob/main/src/agentscope/models/dashscope_model.py) | `"dashscope_chat"` | qwen-plus, qwen-max, ... | @@ -83,12 +83,13 @@ In the current AgentScope, the supported `model_type` types, the corresponding | | Multimodal | [`DashScopeMultiModalWrapper`](https://github.com/modelscope/agentscope/blob/main/src/agentscope/models/dashscope_model.py) | `"dashscope_multimodal"` | qwen-vl-plus, qwen-vl-max, qwen-audio-turbo, ... | | Gemini API | Chat | [`GeminiChatWrapper`](https://github.com/modelscope/agentscope/blob/main/src/agentscope/models/gemini_model.py) | `"gemini_chat"` | gemini-pro, ... | | | Embedding | [`GeminiEmbeddingWrapper`](https://github.com/modelscope/agentscope/blob/main/src/agentscope/models/gemini_model.py) | `"gemini_embedding"` | models/embedding-001, ... | -| ZhipuAI API | Chat | [`ZhipuAIChatWrapper`](https://github.com/modelscope/agentscope/blob/main/src/agentscope/models/zhipu_model.py) | `"zhipuai_chat"` | glm4, ... | -| | Embedding | [`ZhipuAIEmbeddingWrapper`](https://github.com/modelscope/agentscope/blob/main/src/agentscope/models/zhipu_model.py) | `"zhipuai_embedding"` | embedding-2, ... | +| ZhipuAI API | Chat | [`ZhipuAIChatWrapper`](https://github.com/modelscope/agentscope/blob/main/src/agentscope/models/zhipu_model.py) | `"zhipuai_chat"` | glm4, ... | +| | Embedding | [`ZhipuAIEmbeddingWrapper`](https://github.com/modelscope/agentscope/blob/main/src/agentscope/models/zhipu_model.py) | `"zhipuai_embedding"` | embedding-2, ... | | ollama | Chat | [`OllamaChatWrapper`](https://github.com/modelscope/agentscope/blob/main/src/agentscope/models/ollama_model.py) | `"ollama_chat"` | llama2, ... | | | Embedding | [`OllamaEmbeddingWrapper`](https://github.com/modelscope/agentscope/blob/main/src/agentscope/models/ollama_model.py) | `"ollama_embedding"` | llama2, ... | | | Generation | [`OllamaGenerationWrapper`](https://github.com/modelscope/agentscope/blob/main/src/agentscope/models/ollama_model.py) | `"ollama_generate"` | llama2, ... 
| -| LiteLLM API | Chat | [`LiteLLMChatWrapper`](https://github.com/modelscope/agentscope/blob/main/src/agentscope/models/litellm_model.py) | `"litellm_chat"` | - | +| LiteLLM API | Chat | [`LiteLLMChatWrapper`](https://github.com/modelscope/agentscope/blob/main/src/agentscope/models/litellm_model.py) | `"litellm_chat"` | - | +| Yi API | Chat | [`YiChatWrapper`](https://github.com/modelscope/agentscope/blob/main/src/agentscope/models/yi_model.py) | `"yi_chat"` | yi-large, yi-medium, ... | | Post Request based API | - | [`PostAPIModelWrapperBase`](https://github.com/modelscope/agentscope/blob/main/src/agentscope/models/post_model.py) | `"post_api"` | - | | | Chat | [`PostAPIChatWrapper`](https://github.com/modelscope/agentscope/blob/main/src/agentscope/models/post_model.py) | `"post_api_chat"` | meta-llama/Meta-Llama-3-8B-Instruct, ... | | | Image Synthesis | [`PostAPIDALLEWrapper`](https://github.com/modelscope/agentscope/blob/main/src/agentscope/models/post_model.py) | `post_api_dall_e` | - | | diff --git a/docs/sphinx_doc/en/source/tutorial/204-service.md b/docs/sphinx_doc/en/source/tutorial/204-service.md index 0cfaec6a3..572b7e5af 100644 --- a/docs/sphinx_doc/en/source/tutorial/204-service.md +++ b/docs/sphinx_doc/en/source/tutorial/204-service.md @@ -12,47 +12,48 @@ AgentScope and how to use them to enhance the capabilities of your agents. The following table outlines the various Service functions by type. These functions can be called using `agentscope.service.{function_name}`. -| Service Scene | Service Function Name | Description | -|-----------------------------|----------------------------|----------------------------------------------------------------------------------------------------------------| -| Code | `execute_python_code` | Execute a piece of Python code, optionally inside a Docker container. | -| | `NoteBookExecutor.run_code_on_notebook` | Compute Execute a segment of Python code in the IPython environment of the NoteBookExecutor, adhering to the IPython interactive computing style. | -| Retrieval | `retrieve_from_list` | Retrieve a specific item from a list based on given criteria. | -| | `cos_sim` | Compute the cosine similarity between two different embeddings. | -| SQL Query | `query_mysql` | Execute SQL queries on a MySQL database and return results. | -| | `query_sqlite` | Execute SQL queries on a SQLite database and return results. | -| | `query_mongodb` | Perform queries or operations on a MongoDB collection. | -| Text Processing | `summarization` | Summarize a piece of text using a large language model to highlight its main points. | -| Web | `bing_search` | Perform bing search | -| | `google_search` | Perform google search | -| | `arxiv_search` | Perform arXiv search | -| | `download_from_url` | Download file from given URL. | -| | `load_web` | Load and parse the web page of the specified url (currently only supports HTML). | -| | `digest_webpage` | Digest the content of a already loaded web page (currently only supports HTML). -| | `dblp_search_publications` | Search publications in the DBLP database -| | `dblp_search_authors` | Search for author information in the DBLP database | -| | `dblp_search_venues` | Search for venue information in the DBLP database | -| File | `create_file` | Create a new file at a specified path, optionally with initial content. | -| | `delete_file` | Delete a file specified by a file path. | -| | `move_file` | Move or rename a file from one path to another. | -| | `create_directory` | Create a new directory at a specified path. 
| -| | `delete_directory` | Delete a directory and all its contents. | -| | `move_directory` | Move or rename a directory from one path to another. | -| | `read_text_file` | Read and return the content of a text file. | -| | `write_text_file` | Write text content to a file at a specified path. | -| | `read_json_file` | Read and parse the content of a JSON file. | -| | `write_json_file` | Serialize a Python object to JSON and write to a file. | -| Multi Modality | `dashscope_text_to_image` | Convert text to image using Dashscope API. | -| | `dashscope_image_to_text` | Convert image to text using Dashscope API. | -| | `dashscope_text_to_audio` | Convert text to audio using Dashscope API. | -| | `openai_text_to_image` | Convert text to image using OpenAI API -| | `openai_edit_image` | Edit an image based on the provided mask and prompt using OpenAI API -| | `openai_create_image_variation` | Create variations of an image using OpenAI API -| | `openai_image_to_text` | Convert text to image using OpenAI API -| | `openai_text_to_audio` | Convert text to audio using OpenAI API -| | `openai_audio_to_text` | Convert audio to text using OpenAI API - - -| *More services coming soon* | | More service functions are in development and will be added to AgentScope to further enhance its capabilities. | +| Service Scene | Service Function Name | Description | +|-----------------------------|---------------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------| +| Code | `execute_python_code` | Execute a piece of Python code, optionally inside a Docker container. | +| | `NoteBookExecutor` | Execute a segment of Python code in the IPython environment of the NoteBookExecutor, adhering to the IPython interactive computing style. | +| Retrieval | `retrieve_from_list` | Retrieve a specific item from a list based on given criteria. | +| | `cos_sim` | Compute the cosine similarity between two different embeddings. | +| SQL Query | `query_mysql` | Execute SQL queries on a MySQL database and return results. | +| | `query_sqlite` | Execute SQL queries on a SQLite database and return results. | +| | `query_mongodb` | Perform queries or operations on a MongoDB collection. | +| Text Processing | `summarization` | Summarize a piece of text using a large language model to highlight its main points. | +| Web | `bing_search` | Perform a Bing search. | +| | `google_search` | Perform a Google search. | +| | `arxiv_search` | Perform an arXiv search. | +| | `download_from_url` | Download a file from the given URL. | +| | `load_web` | Load and parse the web page of the specified URL (currently only supports HTML). | +| | `digest_webpage` | Digest the content of an already loaded web page (currently only supports HTML). | +| | `dblp_search_publications` | Search publications in the DBLP database. | +| | `dblp_search_authors` | Search for author information in the DBLP database. | +| | `dblp_search_venues` | Search for venue information in the DBLP database. | +| | `tripadvisor_search` | Search for locations using the TripAdvisor API. | +| | `tripadvisor_search_location_photos` | Retrieve photos for a specific location using the TripAdvisor API. | +| | `tripadvisor_search_location_details` | Get detailed information about a specific location using the TripAdvisor API. | +| File | `create_file` | Create a new file at a specified path, optionally with initial content. | +| | `delete_file` | Delete a file specified by a file path. 
| +| | `move_file` | Move or rename a file from one path to another. | +| | `create_directory` | Create a new directory at a specified path. | +| | `delete_directory` | Delete a directory and all its contents. | +| | `move_directory` | Move or rename a directory from one path to another. | +| | `read_text_file` | Read and return the content of a text file. | +| | `write_text_file` | Write text content to a file at a specified path. | +| | `read_json_file` | Read and parse the content of a JSON file. | +| | `write_json_file` | Serialize a Python object to JSON and write to a file. | +| Multi Modality | `dashscope_text_to_image` | Convert text to image using Dashscope API. | +| | `dashscope_image_to_text` | Convert image to text using Dashscope API. | +| | `dashscope_text_to_audio` | Convert text to audio using Dashscope API. | +| | `openai_text_to_image` | Convert text to image using OpenAI API. | +| | `openai_edit_image` | Edit an image based on the provided mask and prompt using OpenAI API. | +| | `openai_create_image_variation` | Create variations of an image using OpenAI API. | +| | `openai_image_to_text` | Convert image to text using OpenAI API. | +| | `openai_text_to_audio` | Convert text to audio using OpenAI API. | +| | `openai_audio_to_text` | Convert audio to text using OpenAI API. | +| *More services coming soon* | | More service functions are in development and will be added to AgentScope to further enhance its capabilities. | About each service function, you can find detailed information in the [API document](https://modelscope.github.io/agentscope/). diff --git a/docs/sphinx_doc/en/source/tutorial/206-prompt.md b/docs/sphinx_doc/en/source/tutorial/206-prompt.md index 47d459527..dc98d6070 100644 --- a/docs/sphinx_doc/en/source/tutorial/206-prompt.md +++ b/docs/sphinx_doc/en/source/tutorial/206-prompt.md @@ -551,67 +551,4 @@ print(prompt) ] ``` -## Prompt Engine (Will be deprecated in the future) - -AgentScope provides the `PromptEngine` class to simplify the process of crafting -prompts for large language models (LLMs). - -## About `PromptEngine` Class - -The `PromptEngine` class provides a structured way to combine different components of a prompt, such as instructions, hints, conversation history, and user inputs, into a format that is suitable for the underlying language model. - -### Key Features of PromptEngine - -- **Model Compatibility**: It works with any `ModelWrapperBase` subclass. -- **Prompt Type**: It supports both string and list-style prompts, aligning with the model's preferred input format. - -### Initialization - -When creating an instance of `PromptEngine`, you can specify the target model and, optionally, the shrinking policy, the maximum length of the prompt, the prompt type, and a summarization model (could be the same as the target model). - -```python -model = OpenAIChatWrapper(...) -engine = PromptEngine(model) -``` - -### Joining Prompt Components - -The `join` method of `PromptEngine` provides a unified interface to handle an arbitrary number of components for constructing the final prompt. - -#### Output String Type Prompt - -If the model expects a string-type prompt, components are joined with a newline character: - -```python -system_prompt = "You're a helpful assistant." -memory = ... # can be dict, list, or string -hint_prompt = "Please respond in JSON format." 
- -prompt = engine.join(system_prompt, memory, hint_prompt) -# the result will be [ "You're a helpful assistant.", {"name": "user", "content": "What's the weather like today?"}] -``` - -#### Output List Type Prompt - -For models that work with list-type prompts,e.g., OpenAI and Huggingface chat models, the components can be converted to Message objects, whose type is list of dict: - -```python -system_prompt = "You're a helpful assistant." -user_messages = [{"name": "user", "content": "What's the weather like today?"}] - -prompt = engine.join(system_prompt, user_messages) -# the result should be: [{"role": "assistant", "content": "You're a helpful assistant."}, {"name": "user", "content": "What's the weather like today?"}] -``` - -#### Formatting Prompts in Dynamic Way - -The `PromptEngine` supports dynamic prompts using the `format_map` parameter, allowing you to flexibly inject various variables into the prompt components for different scenarios: - -```python -variables = {"location": "London"} -hint_prompt = "Find the weather in {location}." - -prompt = engine.join(system_prompt, user_input, hint_prompt, format_map=variables) -``` - [[Return to the top]](#206-prompt-en) diff --git a/docs/sphinx_doc/zh_CN/source/tutorial/201-agent.md b/docs/sphinx_doc/zh_CN/source/tutorial/201-agent.md index 10b29aeba..01f4bf6ef 100644 --- a/docs/sphinx_doc/zh_CN/source/tutorial/201-agent.md +++ b/docs/sphinx_doc/zh_CN/source/tutorial/201-agent.md @@ -36,7 +36,6 @@ class AgentBase(Operator): sys_prompt: Optional[str] = None, model_config_name: str = None, use_memory: bool = True, - memory_config: Optional[dict] = None, ) -> None: # ... [code omitted for brevity] @@ -72,7 +71,6 @@ class AgentBase(Operator): | `DialogAgent` | 通过理解上下文和生成连贯的响应来管理对话。 | 客户服务机器人,虚拟助手。 | | `DictDialogAgent` | 通过理解上下文和生成连贯的响应来管理对话,返回的消息为 Json 格式。 | 客户服务机器人,虚拟助手。 | | `UserAgent` | 与用户互动以收集输入,生成可能包括URL或基于所需键的额外具体信息的消息。 | 为agent收集用户输入 | -| `TextToImageAgent` | 将用户输入的文本转化为图片 | 提供文生图功能 | | `ReActAgent` | 实现了 ReAct 算法的 Agent,能够自动调用工具处理较为复杂的任务。 | 借助工具解决复杂任务 | | *更多agent* | AgentScope 正在不断扩大agent池,加入更多专门化的agent,以适应多样化的应用。 | | diff --git a/docs/sphinx_doc/zh_CN/source/tutorial/203-model.md b/docs/sphinx_doc/zh_CN/source/tutorial/203-model.md index 217a4ae14..dda8afe22 100644 --- a/docs/sphinx_doc/zh_CN/source/tutorial/203-model.md +++ b/docs/sphinx_doc/zh_CN/source/tutorial/203-model.md @@ -109,6 +109,7 @@ API如下: | | Embedding | [`OllamaEmbeddingWrapper`](https://github.com/modelscope/agentscope/blob/main/src/agentscope/models/ollama_model.py) | `"ollama_embedding"` | llama2, ... | | | Generation | [`OllamaGenerationWrapper`](https://github.com/modelscope/agentscope/blob/main/src/agentscope/models/ollama_model.py) | `"ollama_generate"` | llama2, ... | | LiteLLM API | Chat | [`LiteLLMChatWrapper`](https://github.com/modelscope/agentscope/blob/main/src/agentscope/models/litellm_model.py) | `"litellm_chat"` | - | +| Yi API | Chat | [`YiChatWrapper`](https://github.com/modelscope/agentscope/blob/main/src/agentscope/models/yi_model.py) | `"yi_chat"` | yi-large, yi-medium, ... | | Post Request based API | - | [`PostAPIModelWrapperBase`](https://github.com/modelscope/agentscope/blob/main/src/agentscope/models/post_model.py) | `"post_api"` | - | | | Chat | [`PostAPIChatWrapper`](https://github.com/modelscope/agentscope/blob/main/src/agentscope/models/post_model.py) | `"post_api_chat"` | meta-llama/Meta-Llama-3-8B-Instruct, ... 
| | | Image Synthesis | [`PostAPIDALLEWrapper`](https://github.com/modelscope/agentscope/blob/main/src/agentscope/models/post_model.py) | `post_api_dall_e` | - | | diff --git a/docs/sphinx_doc/zh_CN/source/tutorial/204-service.md b/docs/sphinx_doc/zh_CN/source/tutorial/204-service.md index 00de68001..88afc655b 100644 --- a/docs/sphinx_doc/zh_CN/source/tutorial/204-service.md +++ b/docs/sphinx_doc/zh_CN/source/tutorial/204-service.md @@ -9,45 +9,48 @@ 下面的表格按照类型概述了各种Service函数。以下函数可以通过`agentscope.service.{函数名}`进行调用。 -| Service场景 | Service函数名称 | 描述 | -|------------|-----------------------|-----------------------------------------| -| 代码 | `execute_python_code` | 执行一段 Python 代码,可选择在 Docker
容器内部执行。 | -| | `NoteBookExecutor.run_code_on_notebook` | 在 NoteBookExecutor 的 IPython 环境中执行一段 Python 代码,遵循 IPython 交互式计算风格。 | -| 检索 | `retrieve_from_list` | 根据给定的标准从列表中检索特定项目。 | -| | `cos_sim` | 计算2个embedding的余弦相似度。 | -| SQL查询 | `query_mysql` | 在 MySQL 数据库上执行 SQL 查询并返回结果。 | -| | `query_sqlite` | 在 SQLite 数据库上执行 SQL 查询并返回结果。 | -| | `query_mongodb` | 对 MongoDB 集合执行查询或操作。 | -| 文本处理 | `summarization` | 使用大型语言模型总结一段文字以突出其主要要点。 | -| 网络 | `bing_search` | 使用bing搜索。 | -| | `google_search` | 使用google搜索。 | -| | `arxiv_search` | 使用arxiv搜索。 | -| | `download_from_url` | 从指定的 URL 下载文件。 | -| | `load_web` | 爬取并解析指定的网页链接 (目前仅支持爬取 HTML 页面) | -| | `digest_webpage` | 对已经爬取好的网页生成摘要信息(目前仅支持 HTML 页面 -| | `dblp_search_publications` | 在dblp数据库里搜索文献。 -| | `dblp_search_authors` | 在dblp数据库里搜索作者。 | -| | `dblp_search_venues` | 在dblp数据库里搜索期刊,会议及研讨会。 | -| 文件处理 | `create_file` | 在指定路径创建一个新文件,并可选择添加初始内容。 | -| | `delete_file` | 删除由文件路径指定的文件。 | -| | `move_file` | 将文件从一个路径移动或重命名到另一个路径。 | -| | `create_directory` | 在指定路径创建一个新的目录。 | -| | `delete_directory` | 删除一个目录及其所有内容。 | -| | `move_directory` | 将目录从一个路径移动或重命名到另一个路径。 | -| | `read_text_file` | 读取并返回文本文件的内容。 | -| | `write_text_file` | 向指定路径的文件写入文本内容。 | -| | `read_json_file` | 读取并解析 JSON 文件的内容。 | -| | `write_json_file` | 将 Python 对象序列化为 JSON 并写入到文件。 | -| 多模态 | `dashscope_text_to_image` | 使用 DashScope API 将文本生成图片。 | -| | `dashscope_image_to_text` | 使用 DashScope API 根据图片生成文字。 | -| | `dashscope_text_to_audio` | 使用 DashScope API 根据文本生成音频。 | -| | `openai_text_to_image` | 使用 OpenAI API根据文本生成图片。 -| | `openai_edit_image` | 使用 OpenAI API 根据提供的遮罩和提示编辑图像。 -| | `openai_create_image_variation` | 使用 OpenAI API 创建图像的变体。 -| | `openai_image_to_text` | 使用 OpenAI API 根据图片生成文字。 -| | `openai_text_to_audio` | 使用 OpenAI API 根据文本生成音频。 -| | `openai_audio_to_text` | 使用OpenAI API将音频转换为文本。 -| *更多服务即将推出* | | 正在开发更多服务功能,并将添加到 AgentScope 以进一步增强其能力。 | +| Service场景 | Service函数名称 | 描述 | +|------------|---------------------------------------|--------------------------------------------------------------------| +| 代码 | `execute_python_code` | 执行一段 Python 代码,可选择在 Docker 容器内部执行。 | +| | `NoteBookExecutor` | 在 NoteBookExecutor 的 IPython 环境中执行一段 Python 代码,遵循 IPython 交互式计算风格。 | +| 检索 | `retrieve_from_list` | 根据给定的标准从列表中检索特定项目。 | +| | `cos_sim` | 计算2个embedding的余弦相似度。 | +| SQL查询 | `query_mysql` | 在 MySQL 数据库上执行 SQL 查询并返回结果。 | +| | `query_sqlite` | 在 SQLite 数据库上执行 SQL 查询并返回结果。 | +| | `query_mongodb` | 对 MongoDB 集合执行查询或操作。 | +| 文本处理 | `summarization` | 使用大型语言模型总结一段文字以突出其主要要点。 | +| 网络 | `bing_search` | 使用bing搜索。 | +| | `google_search` | 使用google搜索。 | +| | `arxiv_search` | 使用arxiv搜索。 | +| | `download_from_url` | 从指定的 URL 下载文件。 | +| | `load_web` | 爬取并解析指定的网页链接 (目前仅支持爬取 HTML 页面) | +| | `digest_webpage` | 对已经爬取好的网页生成摘要信息(目前仅支持 HTML 页面) | +| | `dblp_search_publications` | 在dblp数据库里搜索文献。 | +| | `dblp_search_authors` | 在dblp数据库里搜索作者。 | +| | `dblp_search_venues` | 在dblp数据库里搜索期刊,会议及研讨会。 | +| | `tripadvisor_search` | 使用 TripAdvisor API 搜索位置。 | +| | `tripadvisor_search_location_photos` | 使用 TripAdvisor API 检索特定位置的照片。 | +| | `tripadvisor_search_location_details` | 使用 TripAdvisor API 获取特定位置的详细信息。 | +| 文件处理 | `create_file` | 在指定路径创建一个新文件,并可选择添加初始内容。 | +| | `delete_file` | 删除由文件路径指定的文件。 | +| | `move_file` | 将文件从一个路径移动或重命名到另一个路径。 | +| | `create_directory` | 在指定路径创建一个新的目录。 | +| | `delete_directory` | 删除一个目录及其所有内容。 | +| | `move_directory` | 将目录从一个路径移动或重命名到另一个路径。 | +| | `read_text_file` | 读取并返回文本文件的内容。 | +| | `write_text_file` | 向指定路径的文件写入文本内容。 | +| | `read_json_file` | 读取并解析 JSON 文件的内容。 | +| | `write_json_file` | 将 Python 对象序列化为 JSON 并写入到文件。 | 
+| 多模态 | `dashscope_text_to_image` | 使用 DashScope API 将文本生成图片。 | +| | `dashscope_image_to_text` | 使用 DashScope API 根据图片生成文字。 | +| | `dashscope_text_to_audio` | 使用 DashScope API 根据文本生成音频。 | +| | `openai_text_to_image` | 使用 OpenAI API根据文本生成图片。 | +| | `openai_edit_image` | 使用 OpenAI API 根据提供的遮罩和提示编辑图像。 | +| | `openai_create_image_variation` | 使用 OpenAI API 创建图像的变体。 | +| | `openai_image_to_text` | 使用 OpenAI API 根据图片生成文字。 | +| | `openai_text_to_audio` | 使用 OpenAI API 根据文本生成音频。 | +| | `openai_audio_to_text` | 使用OpenAI API将音频转换为文本。 | +| *更多服务即将推出* | | 正在开发更多服务功能,并将添加到 AgentScope 以进一步增强其能力。 | 关于详细的参数、预期输入格式、返回类型,请参阅[API文档](https://modelscope.github.io/agentscope/)。 diff --git a/docs/sphinx_doc/zh_CN/source/tutorial/206-prompt.md b/docs/sphinx_doc/zh_CN/source/tutorial/206-prompt.md index ed38bad54..12a70cb44 100644 --- a/docs/sphinx_doc/zh_CN/source/tutorial/206-prompt.md +++ b/docs/sphinx_doc/zh_CN/source/tutorial/206-prompt.md @@ -485,62 +485,4 @@ print(prompt) ] ``` -## 关于`PromptEngine`类 (将会在未来版本弃用) - -`PromptEngine`类提供了一种结构化的方式来合并不同的提示组件,比如指令、提示、对话历史和用户输入,以适合底层语言模型的格式。 - -### 提示工程的关键特性 - -- **模型兼容性**:可以与任何 `ModelWrapperBase` 的子类一起工作。 -- **提示类型**:支持字符串和列表风格的提示,与模型首选的输入格式保持一致。 - -### 初始化 - -当创建 `PromptEngine` 的实例时,您可以指定目标模型,以及(可选的)缩减原则、提示的最大长度、提示类型和总结模型(可以与目标模型相同)。 - -```python -model = OpenAIChatWrapper(...) -engine = PromptEngine(model) -``` - -### 合并提示组件 - -`PromptEngine` 的 `join` 方法提供了一个统一的接口来处理任意数量的组件,以构建最终的提示。 - -#### 输出字符串类型提示 - -如果模型期望的是字符串类型的提示,组件会通过换行符连接: - -```python -system_prompt = "You're a helpful assistant." -memory = ... # 可以是字典、列表或字符串 -hint_prompt = "Please respond in JSON format." - -prompt = engine.join(system_prompt, memory, hint_prompt) -# 结果将会是 ["You're a helpful assistant.", {"name": "user", "content": "What's the weather like today?"}] -``` - -#### 输出列表类型提示 - -对于使用列表类型提示的模型,比如 OpenAI 和 Huggingface 聊天模型,组件可以转换为 `Message` 对象,其类型是字典列表: - -```python -system_prompt = "You're a helpful assistant." -user_messages = [{"name": "user", "content": "What's the weather like today?"}] - -prompt = engine.join(system_prompt, user_messages) -# 结果将会是: [{"role": "assistant", "content": "You're a helpful assistant."}, {"name": "user", "content": "What's the weather like today?"}] -``` - -#### 动态格式化提示 - -`PromptEngine` 支持使用 `format_map` 参数动态提示,允许您灵活地将各种变量注入到不同场景的提示组件中: - -```python -variables = {"location": "London"} -hint_prompt = "Find the weather in {location}." - -prompt = engine.join(system_prompt, user_input, hint_prompt, format_map=variables) -``` - [[返回顶端]](#206-prompt-zh) diff --git a/examples/0_jupyter_example_template/main.ipynb b/examples/0_jupyter_example_template/main.ipynb index 6af54e9ec..288b16edd 100644 --- a/examples/0_jupyter_example_template/main.ipynb +++ b/examples/0_jupyter_example_template/main.ipynb @@ -98,4 +98,4 @@ }, "nbformat": 4, "nbformat_minor": 5 -} +} \ No newline at end of file diff --git a/examples/conversation_mixture_of_agents/conversation_moa.py b/examples/conversation_mixture_of_agents/conversation_moa.py index e1cc4260d..0dd7a613d 100644 --- a/examples/conversation_mixture_of_agents/conversation_moa.py +++ b/examples/conversation_mixture_of_agents/conversation_moa.py @@ -21,7 +21,6 @@ def __init__( name: str, moa_module: MixtureOfAgents, # changed to passing moa_module here use_memory: bool = True, - memory_config: Optional[dict] = None, ) -> None: """Initialize the dialog agent. @@ -35,14 +34,11 @@ def __init__( The inited MoA module you want to use as the main module. use_memory (`bool`, defaults to `True`): Whether the agent has memory. 
- memory_config (`Optional[dict]`): - The config of memory. """ super().__init__( name=name, sys_prompt="", use_memory=use_memory, - memory_config=memory_config, ) self.moa_module = moa_module # change model init to moa_module diff --git a/examples/conversation_nl2sql/react_nl2sql.ipynb b/examples/conversation_nl2sql/react_nl2sql.ipynb index a28b7c36f..fbaf29a44 100644 --- a/examples/conversation_nl2sql/react_nl2sql.ipynb +++ b/examples/conversation_nl2sql/react_nl2sql.ipynb @@ -46,12 +46,13 @@ "source": [ "from typing import Callable\n", "import agentscope\n", - "from agentscope.models import load_model_by_config_name\n", "agentscope.init(\n", " model_configs=\"./configs/model_configs.json\",\n", " project=\"Conversation with NL2SQL\",\n", ")\n", - "loaded_model = load_model_by_config_name('gpt-4')" + "from agentscope.manager import ModelManager\n", + "model_manager = ModelManager.get_instance()\n", + "loaded_model = model_manager.get_model_by_config_name('gpt-4')" ] }, { diff --git a/examples/conversation_nl2sql/sql_utils.py b/examples/conversation_nl2sql/sql_utils.py index 98f70ec36..5960b88f1 100644 --- a/examples/conversation_nl2sql/sql_utils.py +++ b/examples/conversation_nl2sql/sql_utils.py @@ -1,6 +1,6 @@ # -*- coding: utf-8 -*- """ -Utils and helpers for performing sql querys. +Utils and helpers for performing sql queries. Referenced from https://github.com/BeachWang/DAIL-SQL. """ import sqlite3 @@ -261,11 +261,10 @@ def is_sql_question_prompt(self, question: str) -> str: } return self.sql_prompt.is_sql_question(target) - def generate_prompt(self, x: dict = None) -> dict: + def generate_prompt(self, question: str) -> dict: """ Generate prompt given input question """ - question = x["content"] target = { "path_db": self.db_path, "question": question, @@ -277,7 +276,6 @@ def generate_prompt(self, x: dict = None) -> dict: self.NUM_EXAMPLE * self.scope_factor, ) prompt_example = [] - question = target["question"] example_prefix = self.question_style.get_example_prefix() for example in examples: example_format = self.question_style.format_example(example) diff --git a/examples/conversation_self_organizing/auto-discussion.py b/examples/conversation_self_organizing/auto-discussion.py index 6470884be..8b44bc4df 100644 --- a/examples/conversation_self_organizing/auto-discussion.py +++ b/examples/conversation_self_organizing/auto-discussion.py @@ -55,7 +55,7 @@ x = Msg("user", x, role="user") settings = agent_builder(x) -scenario_participants = extract_scenario_and_participants(settings["content"]) +scenario_participants = extract_scenario_and_participants(settings.content) # set the agents that participant the discussion agents = [ diff --git a/examples/conversation_with_RAG_agents/rag_example.py b/examples/conversation_with_RAG_agents/rag_example.py index 283c014b2..9946cd888 100644 --- a/examples/conversation_with_RAG_agents/rag_example.py +++ b/examples/conversation_with_RAG_agents/rag_example.py @@ -127,15 +127,15 @@ def main() -> None: # 5. 
repeat
        x = user_agent()
        x.role = "user"  # to enforce dashscope requirement on roles
-        if len(x["content"]) == 0 or str(x["content"]).startswith("exit"):
+        if len(x.content) == 0 or str(x.content).startswith("exit"):
             break
-        speak_list = filter_agents(x.get("content", ""), rag_agent_list)
+        speak_list = filter_agents(x.content, rag_agent_list)
         if len(speak_list) == 0:
             guide_response = guide_agent(x)
             # Only one agent can be called in the current version,
             # we may support multi-agent conversation later
             speak_list = filter_agents(
-                guide_response.get("content", ""),
+                guide_response.content,
                 rag_agent_list,
             )
         agent_name_list = [agent.name for agent in speak_list]
diff --git a/examples/conversation_with_mentions/main.py b/examples/conversation_with_mentions/main.py
index 94352adc9..d51616150 100644
--- a/examples/conversation_with_mentions/main.py
+++ b/examples/conversation_with_mentions/main.py
@@ -1,6 +1,5 @@
 # -*- coding: utf-8 -*-
 """ A group chat where user can talk any time implemented by agentscope. """
-from loguru import logger
 from groupchat_utils import (
     select_next_one,
     filter_agents,
@@ -50,18 +49,11 @@ def main() -> None:
     speak_list = []
     with msghub(agents, announcement=hint):
         while True:
-            try:
-                x = user(timeout=USER_TIME_TO_SPEAK)
-                if x.content == "exit":
-                    break
-            except TimeoutError:
-                x = {"content": ""}
-                logger.info(
-                    f"User has not typed text for "
-                    f"{USER_TIME_TO_SPEAK} seconds, skip.",
-                )
-
-            speak_list += filter_agents(x.get("content", ""), npc_agents)
+            x = user(timeout=USER_TIME_TO_SPEAK)
+            if x.content == "exit":
+                break
+
+            speak_list += filter_agents(x.content, npc_agents)
 
             if len(speak_list) > 0:
                 next_agent = speak_list.pop(0)
diff --git a/examples/conversation_with_router_agent/README.md b/examples/conversation_with_router_agent/README.md
new file mode 100644
index 000000000..cf4f90ccc
--- /dev/null
+++ b/examples/conversation_with_router_agent/README.md
@@ -0,0 +1,44 @@
+# Conversation with Router Agent
+
+This example will show
+- How to build a router agent that routes questions to agents with different abilities.
+
+The router agent is expected to route each question to the corresponding agent according to the question type, responding in the following format
+```text
+<thought>{The thought of router agent}</thought>
+<agent>{agent name}</agent>
+```
+If the router agent decides to answer the question itself, the response should be
+```text
+<thought>{The thought of router agent}</thought>
+<response>{The answer}</response>
+```
+
+## Note
+This example is only for demonstration purposes. We simply use two agents that are good at math and history, respectively.
+You can replace them with any other agents according to your needs.
+
+Besides, the memory management of the involved agents is not considered in this example.
+For example, should the router agent know the answers from the sub-agents?
+Developers are encouraged to improve this example according to their own needs.
+
+## Tested Models
+
+These models are tested in this example. For other models, some modifications may be needed.
+- gpt-4o
+- qwen-max
+
+
+## Prerequisites
+
+1. Fill in your model configuration correctly in `main.py`.
+2. Install the latest version of AgentScope from GitHub.
+```bash
+git clone https://github.com/modelscope/agentscope.git
+cd agentscope
+pip install -e .
+```
+3. Run the example and input your questions.
+```bash +python main.py +``` diff --git a/examples/conversation_with_router_agent/main.py b/examples/conversation_with_router_agent/main.py new file mode 100644 index 000000000..0624e3996 --- /dev/null +++ b/examples/conversation_with_router_agent/main.py @@ -0,0 +1,76 @@ +# -*- coding: utf-8 -*- +"""The main script for the example of conversation with router agent.""" +from router_agent import RouterAgent + +import agentscope +from agentscope.agents import DialogAgent, UserAgent + +# ================== Prepare model configuration ============================= + +YOUR_MODEL_CONFIGURATION_NAME = "{YOUR_MODEL_CONFIGURATION_NAME}" +YOUR_MODEL_CONFIGURATION = { + "config_name": YOUR_MODEL_CONFIGURATION_NAME, + # ... +} + +# ============================================================================ + +agentscope.init( + model_configs=YOUR_MODEL_CONFIGURATION, + project="Conversation with router agent", +) + +# Let's build some working agents with different capabilities. For simplicity, +# we just use the same agent. You can replace them with your own agents. +agent_math = DialogAgent( + name="Math", + sys_prompt="You are a math assistant to help solve math problems.", + model_config_name=YOUR_MODEL_CONFIGURATION_NAME, +) + +agent_history = DialogAgent( + name="History", + sys_prompt="You are an assistant who is good at history.", + model_config_name=YOUR_MODEL_CONFIGURATION_NAME, +) + +# Init a router agent +SYS_PROMPT_ROUTER = """You're a router assistant named {name}. + +## YOUR TARGET +1. Given agents with different capabilities, your target is to assign questions to the corresponding agents according to the user requirement. +2. You should make full use of the different abilities of the given agents. +3. If no agent is suitable to answer user's question, then respond directly. + +## Agents You Can Use +The agents are listed in the format of "{index}. {agent_name}: {agent_description}" +1. math: An agent who is good at math. +2. history: An agent who is good at history. +""" # noqa + +router_agent = RouterAgent( + sys_prompt=SYS_PROMPT_ROUTER, + model_config_name=YOUR_MODEL_CONFIGURATION_NAME, +) + +# Init a user agent +user = UserAgent(name="user") + +# Start the conversation +msg = None +while True: + user_msg = user(msg) + if user_msg.content == "exit": + break + + # Replied by router agent + router_msg = router_agent(user_msg) + + # Route the question to the corresponding agents + if router_msg.metadata == "math": + msg = agent_math(user_msg) + elif router_msg.metadata == "history": + msg = agent_history(user_msg) + else: + # Answer the question by router agent directly + msg = router_msg diff --git a/examples/conversation_with_router_agent/router_agent.py b/examples/conversation_with_router_agent/router_agent.py new file mode 100644 index 000000000..b0e45e67e --- /dev/null +++ b/examples/conversation_with_router_agent/router_agent.py @@ -0,0 +1,72 @@ +# -*- coding: utf-8 -*- +"""The router agent which routes the questions to the corresponding agents.""" +from typing import Optional, Union, Sequence + +from agentscope.agents import AgentBase +from agentscope.message import Msg +from agentscope.parsers import RegexTaggedContentParser + + +# Init a router agent +class RouterAgent(AgentBase): + """ + The router agent who routes the questions to the corresponding agents. 
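+
+    An illustrative model response the router should produce (a sketch; the
+    tag format follows the parser defined below)::
+
+        <thought>This is a math question.</thought>
+        <agent>math</agent>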
+    """
+
+    def __init__(
+        self,
+        sys_prompt: str,
+        model_config_name: str,
+    ) -> None:
+        """Initialize the router agent."""
+        self.name = "Router"
+
+        super().__init__(
+            name=self.name,
+            model_config_name=model_config_name,
+        )
+
+        self.sys_prompt = sys_prompt.format_map({"name": self.name})
+
+        self.memory.add(Msg(self.name, self.sys_prompt, "system"))
+
+        self.parser = RegexTaggedContentParser(
+            format_instruction="""Respond with specific tags as outlined below:
+
+- When routing questions to agents:
+<thought>what you thought</thought>
+<agent>the agent name</agent>
+
+- When answering questions directly:
+<thought>what you thought</thought>
+<response>what you respond</response>
+""",
+            required_keys=["thought"],
+        )
+
+    def reply(self, x: Optional[Union[Msg, Sequence[Msg]]] = None) -> Msg:
+        """The reply function."""
+        self.memory.add(x)
+
+        prompt = self.model.format(
+            self.memory.get_memory(),
+            Msg("system", self.parser.format_instruction, "system"),
+        )
+
+        response = self.model(prompt)
+
+        # To be compatible with streaming mode
+        self.speak(response.stream or response.text)
+
+        # Parse the response by the predefined parser
+        parsed_dict = self.parser.parse(response).parsed
+
+        msg = Msg(self.name, response.text, "assistant")
+
+        # Assign the question to the corresponding agent in the metadata field
+        if "agent" in parsed_dict:
+            msg.metadata = parsed_dict["agent"]
+
+        self.memory.add(msg)
+
+        return msg
diff --git a/examples/conversation_with_swe-agent/swe_agent.py b/examples/conversation_with_swe-agent/swe_agent.py
index 6d2c49424..f154c4865 100644
--- a/examples/conversation_with_swe-agent/swe_agent.py
+++ b/examples/conversation_with_swe-agent/swe_agent.py
@@ -197,7 +197,7 @@ def step(self) -> Msg:
 
         # parse and execute action
         action = res.parsed.get("action")
-        obs = self.prase_command(res.parsed["action"])
+        obs = self.parse_command(res.parsed["action"])
         self.speak(
             Msg(self.name, "\n====Observation====\n" + obs, role="assistant"),
         )
@@ -214,7 +214,7 @@ def reply(self, x: Optional[Union[Msg, Sequence[Msg]]] = None) -> Msg:
         action_name = msg.content["action"]["name"]
         return msg
 
-    def prase_command(self, command_call: dict) -> str:
+    def parse_command(self, command_call: dict) -> str:
         command_name = command_call["name"]
         command_args = command_call["arguments"]
         if command_name == "exit":
diff --git a/examples/distributed_conversation/README.md b/examples/distributed_conversation/README.md
index b58584370..6a9d496c2 100644
--- a/examples/distributed_conversation/README.md
+++ b/examples/distributed_conversation/README.md
@@ -23,7 +23,7 @@ Before running the example, please install the distributed version of Agentscope
 Use the following command to start the assistant agent:
 
 ```
-cd examples/distributed_basic
+cd examples/distributed_conversation
 python distributed_dialog.py --role assistant --assistant-host localhost --assistant-port 12010
 # Please make sure the port is available.
# If the assistant agent and the user agent are started on different machines, diff --git a/examples/distributed_parallel_optimization/answerer_agent.py b/examples/distributed_parallel_optimization/answerer_agent.py index e44551d01..a5c87f9f3 100644 --- a/examples/distributed_parallel_optimization/answerer_agent.py +++ b/examples/distributed_parallel_optimization/answerer_agent.py @@ -37,6 +37,7 @@ def reply(self, x: Optional[Union[Msg, Sequence[Msg]]] = None) -> Msg: return Msg( self.name, content=f"Unable to load web page [{x.url}].", + role="assistant", url=x.url, ) # prepare prompt @@ -49,12 +50,12 @@ def reply(self, x: Optional[Union[Msg, Sequence[Msg]]] = None) -> Msg: " the following web page:\n\n" f"{response['html_to_text']}" f"\n\nBased on the above web page," - f" please answer my question\n{x.query}", + f" please answer my question\n{x.metadata}", ), ) # call llm and generate response response = self.model(prompt).text - msg = Msg(self.name, content=response, url=x.url) + msg = Msg(self.name, content=response, role="assistant", url=x.url) self.speak(msg) diff --git a/examples/distributed_parallel_optimization/searcher_agent.py b/examples/distributed_parallel_optimization/searcher_agent.py index eb1ad2f23..8e3f46a68 100644 --- a/examples/distributed_parallel_optimization/searcher_agent.py +++ b/examples/distributed_parallel_optimization/searcher_agent.py @@ -80,11 +80,13 @@ def reply(self, x: Optional[Union[Msg, Sequence[Msg]]] = None) -> Msg: Msg( name=self.name, content=result, + role="assistant", url=result["link"], - query=x.content, + metadata=x.content, ) for result in results ], + role="assistant", ) self.speak( Msg( diff --git a/examples/distributed_simulation/main.py b/examples/distributed_simulation/main.py index 70f031502..4c389239d 100644 --- a/examples/distributed_simulation/main.py +++ b/examples/distributed_simulation/main.py @@ -188,10 +188,10 @@ def run_main_process( cnt = 0 for r in results: try: - summ += int(r["content"]["sum"]) - cnt += int(r["content"]["cnt"]) + summ += int(r.content["sum"]) + cnt += int(r.content["cnt"]) except Exception: - logger.error(r["content"]) + logger.error(r.content) et = time.time() logger.chat( Msg( diff --git a/examples/distributed_simulation/participant.py b/examples/distributed_simulation/participant.py index 8baeeb8b6..a023990f4 100644 --- a/examples/distributed_simulation/participant.py +++ b/examples/distributed_simulation/participant.py @@ -37,7 +37,7 @@ def reply(self, x: Optional[Union[Msg, Sequence[Msg]]] = None) -> Msg: """Generate a random value""" # generate a response in content response = self.generate_random_response() - msg = Msg(self.name, content=response) + msg = Msg(self.name, content=response, role="assistant") return msg @@ -148,7 +148,7 @@ def reply(self, x: Optional[Union[Msg, Sequence[Msg]]] = None) -> Msg: ) with concurrent.futures.ThreadPoolExecutor() as executor: futures = {executor.submit(lambda p: p(msg), p) for p in self.participants} - futures_2 = {executor.submit(lambda r: int(r["content"]), future.result()) for future in concurrent.futures.as_completed(futures)} + futures_2 = {executor.submit(lambda r: int(r.content), future.result()) for future in concurrent.futures.as_completed(futures)} summ = sum(future.result() for future in concurrent.futures.as_completed(futures_2)) return Msg( name=self.name, diff --git a/examples/model_configs_template/yi_chat_template.json b/examples/model_configs_template/yi_chat_template.json new file mode 100644 index 000000000..cda4b4818 --- /dev/null +++ 
b/examples/model_configs_template/yi_chat_template.json @@ -0,0 +1,11 @@ +[ + { + "config_name": "yi_yi-large", + "model_type": "yi_chat", + "model_name": "yi-large", + "api_key": "{your_api_key}", + "temperature": 0.3, + "top_p": 0.9, + "max_tokens": 1000 + } +] \ No newline at end of file diff --git a/examples/paper_llm_based_algorithm/src/alg_agents.py b/examples/paper_llm_based_algorithm/src/alg_agents.py index 5004a9e49..8eb0cbb24 100644 --- a/examples/paper_llm_based_algorithm/src/alg_agents.py +++ b/examples/paper_llm_based_algorithm/src/alg_agents.py @@ -90,14 +90,14 @@ def invoke_llm_call( # Update relevant self.cost_metrics self.cost_metrics["llm_calls"] += 1 self.cost_metrics["prefilling_length_total"] += len( - x_request["content"], + x_request.content, ) + len(dialog_agent.sys_prompt) - self.cost_metrics["decoding_length_total"] += len(x["content"]) + self.cost_metrics["decoding_length_total"] += len(x.content) self.cost_metrics["prefilling_tokens_total"] += num_tokens_from_string( - x_request["content"], + x_request.content, ) + num_tokens_from_string(dialog_agent.sys_prompt) self.cost_metrics["decoding_tokens_total"] += num_tokens_from_string( - x["content"], + x.content, ) return x diff --git a/examples/paper_llm_based_algorithm/src/counting.py b/examples/paper_llm_based_algorithm/src/counting.py index 5df8c5538..2ff9fff6e 100644 --- a/examples/paper_llm_based_algorithm/src/counting.py +++ b/examples/paper_llm_based_algorithm/src/counting.py @@ -58,7 +58,7 @@ def solve_directly( for i in range(nsamples): x = self.invoke_llm_call(x_request, dialog_agent) candidate_solutions[i] = self.parse_llm_response_counting( - x["content"], + x.content, ) # int solution = max( diff --git a/examples/paper_llm_based_algorithm/src/rag.py b/examples/paper_llm_based_algorithm/src/rag.py index c37508ab4..173801402 100644 --- a/examples/paper_llm_based_algorithm/src/rag.py +++ b/examples/paper_llm_based_algorithm/src/rag.py @@ -134,7 +134,7 @@ def solve(self, request_string: str, question: str) -> dict: # ) # x_request = request_agent(x=None, content=content) # lst_x[i] = self.invoke_llm_call(x_request, dialog_agents[i]) - # sub_contents = [x["content"] for x in lst_x] + # sub_contents = [x.content for x in lst_x] # sub_solutions = ["" for _ in range(len(sub_requests))] # for i in range(len(sub_solutions)): # ss = self.parse_llm_response_retrieve_relevant_sentences( @@ -158,7 +158,7 @@ def solve(self, request_string: str, question: str) -> dict: x_request = request_agent(x=None, content=content) x = self.invoke_llm_call(x_request, dialog_agent) ss = self.parse_llm_response_retrieve_relevant_sentences( - x["content"], + x.content, ) sub_solutions[i] = ss sub_latencies[i] = time.time() - time_start @@ -183,7 +183,7 @@ def solve(self, request_string: str, question: str) -> dict: content = self.prompt_generate_final_answer(context, question) x_request = request_agent(x=None, content=content) x = self.invoke_llm_call(x_request, dialog_agent) - solution = self.parse_llm_response_generate_final_answer(x["content"]) + solution = self.parse_llm_response_generate_final_answer(x.content) final_step_latency = time.time() - time_start result = { diff --git a/examples/paper_llm_based_algorithm/src/retrieval.py b/examples/paper_llm_based_algorithm/src/retrieval.py index 1e857e20f..da0f77c43 100644 --- a/examples/paper_llm_based_algorithm/src/retrieval.py +++ b/examples/paper_llm_based_algorithm/src/retrieval.py @@ -84,7 +84,7 @@ def solve_directly( content = self.prompt_retrieval(request_string, question) 
x_request = request_agent(x=None, content=content) x = self.invoke_llm_call(x_request, dialog_agent) - solution = self.parse_llm_response_retrieval(x["content"]) + solution = self.parse_llm_response_retrieval(x.content) return solution def solve_decomposition(self, request_string: str, question: str) -> dict: diff --git a/examples/paper_llm_based_algorithm/src/sorting.py b/examples/paper_llm_based_algorithm/src/sorting.py index 18a42bca3..849f3f336 100644 --- a/examples/paper_llm_based_algorithm/src/sorting.py +++ b/examples/paper_llm_based_algorithm/src/sorting.py @@ -49,7 +49,7 @@ def solve_directly( content = self.prompt_sorting(request_string) x_request = request_agent(x=None, content=content) x = self.invoke_llm_call(x_request, dialog_agent) - solution = self.parse_llm_response_sorting(x["content"]) + solution = self.parse_llm_response_sorting(x.content) return solution def merge_two_sorted_lists( @@ -90,7 +90,7 @@ def merge_two_sorted_lists( content = self.prompt_merging(request_string) x_request = request_agent(x=None, content=content) x = self.invoke_llm_call(x_request, dialog_agent) - solution = self.parse_llm_response_sorting(x["content"]) + solution = self.parse_llm_response_sorting(x.content) return solution diff --git a/setup.py b/setup.py index 5756d6e14..4f65288e2 100644 --- a/setup.py +++ b/setup.py @@ -114,6 +114,13 @@ + studio_requires ) +online_requires = full_requires + [ + "oss2", + "flask_babel", + "babel==2.15.0", + "gunicorn", +] + with open("README.md", "r", encoding="UTF-8") as fh: long_description = fh.read() @@ -182,6 +189,7 @@ def build_extension(self, ext): "cpp_distribute": cpp_distribute_requires, "dev": dev_requires, "full": full_requires, + "online": online_requires, }, ext_modules=[CMakeExtension('agentscope.cpp_server.cpp_server')], cmdclass=dict(build_ext=CMakeBuild), diff --git a/src/agentscope/agents/__init__.py b/src/agentscope/agents/__init__.py index e50efa66f..65d86b278 100644 --- a/src/agentscope/agents/__init__.py +++ b/src/agentscope/agents/__init__.py @@ -5,7 +5,6 @@ from .dialog_agent import DialogAgent from .dict_dialog_agent import DictDialogAgent from .user_agent import UserAgent -from .text_to_image_agent import TextToImageAgent from .rpc_agent import RpcAgent from .react_agent import ReActAgent from .rag_agent import LlamaIndexAgent @@ -16,7 +15,6 @@ "Operator", "DialogAgent", "DictDialogAgent", - "TextToImageAgent", "UserAgent", "ReActAgent", "DistConf", diff --git a/src/agentscope/agents/agent.py b/src/agentscope/agents/agent.py index 7ba872274..e176d6560 100644 --- a/src/agentscope/agents/agent.py +++ b/src/agentscope/agents/agent.py @@ -144,7 +144,6 @@ def __init__( sys_prompt: Optional[str] = None, model_config_name: str = None, use_memory: bool = True, - memory_config: Optional[dict] = None, to_dist: Optional[Union[DistConf, bool]] = False, ) -> None: r"""Initialize an agent from the given arguments. @@ -160,8 +159,6 @@ def __init__( configuration. use_memory (`bool`, defaults to `True`): Whether the agent has memory. - memory_config (`Optional[dict]`): - The config of memory. to_dist (`Optional[Union[DistConf, bool]]`, default to `False`): The configurations passed to :py:meth:`to_dist` method. Used in :py:class:`_AgentMeta`, when this parameter is provided, @@ -189,7 +186,6 @@ def __init__( See :doc:`Tutorial` for detail. 
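+
+        Example (a sketch; the agent class and config name are
+        illustrative)::
+
+            agent = DialogAgent(
+                name="Assistant",
+                sys_prompt="You're a helpful assistant.",
+                model_config_name="my_config",
+            )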
""" self.name = name - self.memory_config = memory_config self.sys_prompt = sys_prompt # TODO: support to receive a ModelWrapper instance @@ -200,7 +196,7 @@ def __init__( ) if use_memory: - self.memory = TemporaryMemory(memory_config) + self.memory = TemporaryMemory() else: self.memory = None @@ -276,25 +272,7 @@ def reply(self, x: Optional[Union[Msg, Sequence[Msg]]] = None) -> Msg: f'"reply" function.', ) - def load_from_config(self, config: dict) -> None: - """Load configuration for this agent. - - Args: - config (`dict`): model configuration - """ - - def export_config(self) -> dict: - """Return configuration of this agent. - - Returns: - The configuration of current agent. - """ - return {} - - def load_memory(self, memory: Sequence[dict]) -> None: - r"""Load input memory.""" - - def __call__(self, *args: Any, **kwargs: Any) -> dict: + def __call__(self, *args: Any, **kwargs: Any) -> Msg: """Calling the reply function, and broadcast the generated response to all audiences if needed.""" res = self.reply(*args, **kwargs) diff --git a/src/agentscope/agents/dialog_agent.py b/src/agentscope/agents/dialog_agent.py index cb76f1354..031f0d2cc 100644 --- a/src/agentscope/agents/dialog_agent.py +++ b/src/agentscope/agents/dialog_agent.py @@ -1,6 +1,8 @@ # -*- coding: utf-8 -*- """A general dialog agent.""" -from typing import Optional, Union, Sequence +from typing import Optional, Union, Sequence, Any + +from loguru import logger from ..message import Msg from .agent import AgentBase @@ -16,7 +18,7 @@ def __init__( sys_prompt: str, model_config_name: str, use_memory: bool = True, - memory_config: Optional[dict] = None, + **kwargs: Any, ) -> None: """Initialize the dialog agent. @@ -31,17 +33,19 @@ def __init__( configuration. use_memory (`bool`, defaults to `True`): Whether the agent has memory. - memory_config (`Optional[dict]`): - The config of memory. """ super().__init__( name=name, sys_prompt=sys_prompt, model_config_name=model_config_name, use_memory=use_memory, - memory_config=memory_config, ) + if kwargs: + logger.warning( + f"Unused keyword arguments are provided: {kwargs}", + ) + def reply(self, x: Optional[Union[Msg, Sequence[Msg]]] = None) -> Msg: """Reply function of the agent. Processes the input data, generates a prompt using the current dialogue memory and system diff --git a/src/agentscope/agents/dict_dialog_agent.py b/src/agentscope/agents/dict_dialog_agent.py index 970a7a610..60fcc9e36 100644 --- a/src/agentscope/agents/dict_dialog_agent.py +++ b/src/agentscope/agents/dict_dialog_agent.py @@ -23,7 +23,6 @@ def __init__( sys_prompt: str, model_config_name: str, use_memory: bool = True, - memory_config: Optional[dict] = None, max_retries: Optional[int] = 3, ) -> None: """Initialize the dict dialog agent. @@ -39,8 +38,6 @@ def __init__( configuration. use_memory (`bool`, defaults to `True`): Whether the agent has memory. - memory_config (`Optional[dict]`, defaults to `None`): - The config of memory. max_retries (`Optional[int]`, defaults to `None`): The maximum number of retries when failed to parse the model output. 
@@ -50,7 +47,6 @@ def __init__( sys_prompt=sys_prompt, model_config_name=model_config_name, use_memory=use_memory, - memory_config=memory_config, ) self.parser = None diff --git a/src/agentscope/agents/rag_agent.py b/src/agentscope/agents/rag_agent.py index 63a23fdcd..ec5a8dc94 100644 --- a/src/agentscope/agents/rag_agent.py +++ b/src/agentscope/agents/rag_agent.py @@ -111,7 +111,7 @@ def reply(self, x: Optional[Union[Msg, Sequence[Msg]]] = None) -> Msg: ) query = ( "/n".join( - [msg["content"] for msg in history], + [msg.content for msg in history], ) if isinstance(history, list) else str(history) @@ -182,7 +182,7 @@ def reply(self, x: Optional[Union[Msg, Sequence[Msg]]] = None) -> Msg: # call llm and generate response response = self.model(prompt).text - msg = Msg(self.name, response) + msg = Msg(self.name, response, "assistant") # Print/speak the message in this agent's voice self.speak(msg) diff --git a/src/agentscope/agents/rpc_agent.py b/src/agentscope/agents/rpc_agent.py index 4a43b5f07..619898a91 100644 --- a/src/agentscope/agents/rpc_agent.py +++ b/src/agentscope/agents/rpc_agent.py @@ -3,12 +3,10 @@ from typing import Type, Optional, Union, Sequence from agentscope.agents.agent import AgentBase -from agentscope.message import ( - PlaceholderMessage, - serialize, - Msg, -) +from agentscope.message import Msg +from agentscope.message import PlaceholderMessage from agentscope.rpc import RpcAgentClient +from agentscope.serialize import serialize from agentscope.server.launcher import RpcAgentServerLauncher from agentscope.studio._client import _studio_client @@ -122,8 +120,6 @@ def reply(self, x: Optional[Union[Msg, Sequence[Msg]]] = None) -> Msg: if self.client is None: self._launch_server() return PlaceholderMessage( - name=self.name, - content=None, client=self.client, x=x, ) @@ -133,7 +129,7 @@ def observe(self, x: Union[Msg, Sequence[Msg]]) -> None: self._launch_server() self.client.call_agent_func( func_name="_observe", - value=serialize(x), # type: ignore[arg-type] + value=serialize(x), ) def clone_instances( diff --git a/src/agentscope/agents/text_to_image_agent.py b/src/agentscope/agents/text_to_image_agent.py deleted file mode 100644 index 00519a404..000000000 --- a/src/agentscope/agents/text_to_image_agent.py +++ /dev/null @@ -1,79 +0,0 @@ -# -*- coding: utf-8 -*- -"""An agent that convert text to image.""" - -from typing import Optional, Union, Sequence - -from loguru import logger - -from .agent import AgentBase -from ..message import Msg - - -class TextToImageAgent(AgentBase): - """ - A agent used to perform text to image tasks. - - TODO: change the agent into a service. - """ - - def __init__( - self, - name: str, - model_config_name: str, - use_memory: bool = True, - memory_config: Optional[dict] = None, - ) -> None: - """Initialize the text to image agent. - - Arguments: - name (`str`): - The name of the agent. - model_config_name (`str`, defaults to None): - The name of the model config, which is used to load model from - configuration. - use_memory (`bool`, defaults to `True`): - Whether the agent has memory. - memory_config (`Optional[dict]`): - The config of memory. 
- """ - super().__init__( - name=name, - sys_prompt="", - model_config_name=model_config_name, - use_memory=use_memory, - memory_config=memory_config, - ) - - logger.warning( - "The `TextToImageAgent` will be deprecated in v0.0.6, " - "please use `text_to_image` service and `ReActAgent` instead.", - ) - - def reply(self, x: Optional[Union[Msg, Sequence[Msg]]] = None) -> Msg: - if self.memory: - self.memory.add(x) - if x is None: - # get the last message from memory - if self.memory and self.memory.size() > 0: - x = self.memory.get_memory()[-1] - else: - return Msg( - self.name, - content="Please provide a text prompt to generate image.", - role="assistant", - ) - image_urls = self.model(x.content).image_urls - # TODO: optimize the construction of content - msg = Msg( - self.name, - content="This is the generated image", - role="assistant", - url=image_urls, - ) - - self.speak(msg) - - if self.memory: - self.memory.add(msg) - - return msg diff --git a/src/agentscope/agents/user_agent.py b/src/agentscope/agents/user_agent.py index b76cf28d5..12b6a26b4 100644 --- a/src/agentscope/agents/user_agent.py +++ b/src/agentscope/agents/user_agent.py @@ -76,7 +76,6 @@ def reply( required_keys=required_keys, ) - print("Python: receive ", raw_input) content = raw_input["content"] url = raw_input["url"] kwargs = {} diff --git a/src/agentscope/constants.py b/src/agentscope/constants.py index 87b831b5a..b5e770b03 100644 --- a/src/agentscope/constants.py +++ b/src/agentscope/constants.py @@ -79,3 +79,9 @@ class ShrinkPolicy(IntEnum): DEFAULT_CHUNK_SIZE = 1024 DEFAULT_CHUNK_OVERLAP = 20 DEFAULT_TOP_K = 5 + +# flask server +EXPIRATION_SECONDS = 604800 # One week +TOKEN_EXP_TIME = 1440 # One day long +FILE_SIZE_LIMIT = 1024 * 1024 # 10 MB +FILE_COUNT_LIMIT = 10 diff --git a/src/agentscope/file_manager.py b/src/agentscope/file_manager.py deleted file mode 100644 index e69de29bb..000000000 diff --git a/src/agentscope/logging.py b/src/agentscope/logging.py index a4c4a5f4c..951de472a 100644 --- a/src/agentscope/logging.py +++ b/src/agentscope/logging.py @@ -1,15 +1,16 @@ # -*- coding: utf-8 -*- """Logging utilities.""" -import json import os import sys from typing import Optional, Literal, Any from loguru import logger -from .utils.tools import _guess_type_by_extension + from .message import Msg +from .serialize import serialize from .studio._client import _studio_client +from .utils.common import _guess_type_by_extension from .web.gradio.utils import ( generate_image_from_name, send_msg, @@ -89,15 +90,18 @@ def _save_msg(msg: Msg) -> None: msg (`Msg`): The message object to be saved. 
""" - logger.log( - LEVEL_SAVE_LOG, - msg.formatted_str(colored=False), - ) - - logger.log( - LEVEL_SAVE_MSG, - json.dumps(msg, ensure_ascii=False, default=lambda _: None), - ) + # TODO: Unified into a manager rather than an indicated attribute here + if hasattr(logger, "chat"): + # Not initialize yet + logger.log( + LEVEL_SAVE_LOG, + msg.formatted_str(colored=False), + ) + + logger.log( + LEVEL_SAVE_MSG, + serialize(msg), + ) def log_msg(msg: Msg, disable_gradio: bool = False) -> None: diff --git a/src/agentscope/manager/_file.py b/src/agentscope/manager/_file.py index 259c69c20..8fe93b171 100644 --- a/src/agentscope/manager/_file.py +++ b/src/agentscope/manager/_file.py @@ -8,11 +8,13 @@ import numpy as np from PIL import Image -from agentscope.utils.tools import _download_file -from agentscope.utils.tools import _hash_string -from agentscope.utils.tools import _get_timestamp -from agentscope.utils.tools import _generate_random_code -from agentscope.constants import ( +from ..utils.common import ( + _download_file, + _hash_string, + _get_timestamp, + _generate_random_code, +) +from ..constants import ( _DEFAULT_SUBDIR_CODE, _DEFAULT_SUBDIR_FILE, _DEFAULT_SUBDIR_INVOKE, @@ -32,7 +34,13 @@ def _get_text_embedding_record_hash( if isinstance(embedding_model, dict): # Format the dict to avoid duplicate keys embedding_model = json.dumps(embedding_model, sort_keys=True) - embedding_model_hash = _hash_string(embedding_model, hash_method) + elif isinstance(embedding_model, str): + embedding_model_hash = _hash_string(embedding_model, hash_method) + else: + raise RuntimeError( + f"The embedding model must be a string or a dict, got " + f"{type(embedding_model)}.", + ) # Calculate the embedding id by hashing the hash codes of the # original data and the embedding model @@ -193,7 +201,7 @@ def save_python_code(self) -> None: def save_image( self, - image: Union[str, np.ndarray, bytes], + image: Union[str, np.ndarray, bytes, Image.Image], filename: Optional[str] = None, ) -> str: """Save image file locally, and return the local image path. 
@@ -225,10 +233,13 @@ def save_image( elif isinstance(image, bytes): # save image via bytes Image.open(io.BytesIO(image)).save(path_file) + elif isinstance(image, Image.Image): + # save image via PIL.Image.Image + image.save(path_file) else: raise ValueError( - f"Unsupported image type: {type(image)}" - "Must be str, np.ndarray, or bytes.", + f"Unsupported image type: {type(image)} Must be str, " + f"np.ndarray, bytes, or PIL.Image.Image.", ) return path_file diff --git a/src/agentscope/manager/_manager.py b/src/agentscope/manager/_manager.py index 318f2efce..d9a08f63a 100644 --- a/src/agentscope/manager/_manager.py +++ b/src/agentscope/manager/_manager.py @@ -2,6 +2,7 @@ """A manager for AgentScope.""" import os from typing import Union, Any +from copy import deepcopy from loguru import logger @@ -9,7 +10,7 @@ from ._file import FileManager from ._model import ModelManager from ..logging import LOG_LEVEL, setup_logger -from ..utils.tools import ( +from ..utils.common import ( _generate_random_code, _get_process_creation_time, _get_timestamp, @@ -166,7 +167,7 @@ def state_dict(self) -> dict: serialized_data["studio"] = _studio_client.state_dict() serialized_data["monitor"] = self.monitor.state_dict() - return serialized_data + return deepcopy(serialized_data) def load_dict(self, data: dict) -> None: """Load the runtime information from a dictionary""" diff --git a/src/agentscope/manager/_model.py b/src/agentscope/manager/_model.py index 422293958..0f63f14be 100644 --- a/src/agentscope/manager/_model.py +++ b/src/agentscope/manager/_model.py @@ -100,16 +100,16 @@ def load_model_configs( f"list of dicts), but got {type(model_configs)}", ) - format_configs = _ModelConfig.format_configs(configs=cfgs) + formatted_configs = _format_configs(configs=cfgs) # check if name is unique - for cfg in format_configs: - if cfg.config_name in self.model_configs: + for cfg in formatted_configs: + if cfg["config_name"] in self.model_configs: logger.warning( - f"config_name [{cfg.config_name}] already exists.", + f"config_name [{cfg['config_name']}] already exists.", ) continue - self.model_configs[cfg.config_name] = cfg + self.model_configs[cfg["config_name"]] = cfg # print the loaded model configs logger.info( @@ -137,7 +137,7 @@ def get_model_by_config_name(self, config_name: str) -> ModelWrapperBase: f"Cannot find [{config_name}] in loaded configurations.", ) - model_type = config.model_type + model_type = config["model_type"] kwargs = {k: v for k, v in config.items() if k != "model_type"} @@ -164,55 +164,28 @@ def flush(self) -> None: self.clear_model_configs() -class _ModelConfig(dict): - """Base class for model config.""" +def _format_configs( + configs: Union[Sequence[dict], dict], +) -> Sequence: + """Check the format of model configs. - __getattr__ = dict.__getitem__ - __setattr__ = dict.__setitem__ + Args: + configs (Union[Sequence[dict], dict]): configs in dict format. - def __init__( - self, - config_name: str, - model_type: str = None, - **kwargs: Any, - ): - """Initialize the config with the given arguments, and checking the - type of the arguments. - - Args: - config_name (`str`): A unique name of the model config. - model_type (`str`, optional): The class name (or its model type) of - the generated model wrapper. Defaults to None. - - Raises: - `ValueError`: If `config_name` is not provided. - """ - if config_name is None: - raise ValueError("The `config_name` field is required for Cfg") - if model_type is None: + Returns: + Sequence[dict]: converted ModelConfig list. 
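+
+    Example (a sketch; the config values are illustrative)::
+
+        _format_configs({"config_name": "my-gpt-4", "model_type": "openai_chat"})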
+    """
+    if isinstance(configs, dict):
+        configs = [configs]
+    for config in configs:
+        if "config_name" not in config:
+            raise ValueError(
+                "The `config_name` field is required in the model config.",
+            )
+        if "model_type" not in config:
             logger.warning(
-                f"`model_type` is not provided in config [{config_name}],"
+                "`model_type` is not provided in config "
+                f"[{config['config_name']}],"
                 " use `PostAPIModelWrapperBase` by default.",
             )
-        super().__init__(
-            config_name=config_name,
-            model_type=model_type,
-            **kwargs,
-        )
-
-    @classmethod
-    def format_configs(
-        cls,
-        configs: Union[Sequence[dict], dict],
-    ) -> Sequence:
-        """Covert config dicts into a list of _ModelConfig.
-
-        Args:
-            configs (Union[Sequence[dict], dict]): configs in dict format.
-
-        Returns:
-            Sequence[_ModelConfig]: converted ModelConfig list.
-        """
-        if isinstance(configs, dict):
-            return [_ModelConfig(**configs)]
-        return [_ModelConfig(**cfg) for cfg in configs]
+    return configs
diff --git a/src/agentscope/manager/_monitor.py b/src/agentscope/manager/_monitor.py
index a6cad05f9..19edc7a9a 100644
--- a/src/agentscope/manager/_monitor.py
+++ b/src/agentscope/manager/_monitor.py
@@ -10,7 +10,7 @@
 from sqlalchemy.orm import sessionmaker
 
 from ._file import FileManager
-from ..utils.tools import _is_windows
+from ..utils.common import _is_windows
 from ..constants import (
     _DEFAULT_SQLITE_DB_NAME,
     _DEFAULT_TABLE_NAME_FOR_CHAT_AND_EMBEDDING,
diff --git a/src/agentscope/memory/memory.py b/src/agentscope/memory/memory.py
index bf457a3e5..de5430a2a 100644
--- a/src/agentscope/memory/memory.py
+++ b/src/agentscope/memory/memory.py
@@ -20,26 +20,6 @@ class MemoryBase(ABC):
 
     _version: int = 1
 
-    def __init__(
-        self,
-        config: Optional[dict] = None,
-    ) -> None:
-        """MemoryBase is a base class for memory of agents.
-
-        Args:
-            config (`Optional[dict]`, defaults to `None`):
-                Configuration of this memory.
-        """
-        self.config = {} if config is None else config
-
-    def update_config(self, config: dict) -> None:
-        """
-        Configure memory as specified in config
-        Args:
-            config (`dict`): Configuration of resetting this memory
-        """
-        self.config = config
-
     @abstractmethod
     def get_memory(
         self,
diff --git a/src/agentscope/memory/temporary_memory.py b/src/agentscope/memory/temporary_memory.py
index 9e7b4aeba..d845a5523 100644
--- a/src/agentscope/memory/temporary_memory.py
+++ b/src/agentscope/memory/temporary_memory.py
@@ -14,15 +14,11 @@
 
 from .memory import MemoryBase
 from ..manager import ModelManager
+from ..serialize import serialize, deserialize
 from ..service.retrieval.retrieval_from_list import retrieve_from_list
 from ..service.retrieval.similarity import Embedding
-from ..message import (
-    deserialize,
-    serialize,
-    MessageBase,
-    Msg,
-    PlaceholderMessage,
-)
+from ..message import Msg
+from ..message import PlaceholderMessage
 
 
 class TemporaryMemory(MemoryBase):
@@ -32,20 +28,18 @@ class TemporaryMemory(MemoryBase):
 
     def __init__(
         self,
-        config: Optional[dict] = None,
         embedding_model: Union[str, Callable] = None,
     ) -> None:
         """
         Temporary memory module for conversation.
+
         Args:
-            config (dict):
-                configuration of the memory
             embedding_model (Union[str, Callable])
                 if the temporary memory needs to be embedded,
                 then either pass the name of embedding model or
                 the embedding model itself.
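+
+        Example (a minimal sketch; the message content is illustrative)::
+
+            memory = TemporaryMemory()
+            memory.add(Msg("user", "Hi!", "user"))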
""" - super().__init__(config) + super().__init__() self._content = [] @@ -63,7 +57,6 @@ def add( memories: Union[Sequence[Msg], Msg, None], embed: bool = False, ) -> None: - # pylint: disable=too-many-branches """ Adding new memory fragment, depending on how the memory are stored Args: @@ -80,29 +73,25 @@ def add( else: record_memories = memories - # if memory doesn't have id attribute, we skip the checking + # Assert the message types memories_idx = set(_.id for _ in self._content if hasattr(_, "id")) for memory_unit in record_memories: - if not issubclass(type(memory_unit), MessageBase): - try: - memory_unit = Msg(**memory_unit) - except Exception as exc: - raise ValueError( - f"Cannot add {memory_unit} to memory, " - f"must be with subclass of MessageBase", - ) from exc - # in case this is a PlaceholderMessage, try to update # the values first + # TODO: Unify PlaceholderMessage and Msg into one class to avoid + # type error if isinstance(memory_unit, PlaceholderMessage): memory_unit.update_value() - memory_unit = Msg(**memory_unit) + memory_unit = Msg.from_dict(memory_unit.to_dict()) + + if not isinstance(memory_unit, Msg): + raise ValueError( + f"Cannot add {type(memory_unit)} to memory, " + f"must be a Msg object.", + ) - # add to memory if it's new - if ( - not hasattr(memory_unit, "id") - or memory_unit.id not in memories_idx - ): + # Add to memory if it's new + if memory_unit.id not in memories_idx: if embed: if self.embedding_model: # TODO: embed only content or its string representation @@ -220,8 +209,21 @@ def load( e.doc, e.pos, ) - else: + elif isinstance(memories, list): + for unit in memories: + if not isinstance(unit, Msg): + raise TypeError( + f"Expect a list of Msg objects, but get {type(unit)} " + f"instead.", + ) load_memories = memories + elif isinstance(memories, Msg): + load_memories = [memories] + else: + raise TypeError( + f"The type of memories to be loaded is not supported. 
" + f"Expect str, list[Msg], or Msg, but get {type(memories)}.", + ) # overwrite the original memories after loading the new ones if overwrite: diff --git a/src/agentscope/message/__init__.py b/src/agentscope/message/__init__.py index f26315f3b..419526f87 100644 --- a/src/agentscope/message/__init__.py +++ b/src/agentscope/message/__init__.py @@ -1,12 +1,10 @@ # -*- coding: utf-8 -*- """The message module of AgentScope.""" -from .msg import Msg, MessageBase -from .placeholder import PlaceholderMessage, deserialize, serialize +from .msg import Msg +from .placeholder import PlaceholderMessage __all__ = [ "Msg", - "MessageBase", - "deserialize", - "serialize", + "PlaceholderMessage", ] diff --git a/src/agentscope/message/msg.py b/src/agentscope/message/msg.py index 7a62757c6..1f3e99dd3 100644 --- a/src/agentscope/message/msg.py +++ b/src/agentscope/message/msg.py @@ -1,168 +1,207 @@ # -*- coding: utf-8 -*- +# mypy: disable-error-code="misc" """The base class for message unit""" - -from typing import Any, Optional, Union, Literal, List +from typing import ( + Any, + Literal, + Union, + List, + Optional, +) from uuid import uuid4 -import json from loguru import logger -from ..utils.tools import _get_timestamp, _map_string_to_color_mark +from ..serialize import is_serializable +from ..utils.common import ( + _map_string_to_color_mark, + _get_timestamp, +) + +class Msg: + """The message class for AgentScope, which is responsible for storing + the information of a message, including -class MessageBase(dict): - """Base Message class, which is used to maintain information for dialog, - memory and used to construct prompt. + - id: the identity of the message + - name: who sends the message + - content: the message content + - role: the sender role chosen from 'system', 'user', 'assistant' + - url: the url(s) refers to multimodal content + - metadata: some additional information + - timestamp: when the message is created """ + __serialized_attrs: set = { + "id", + "name", + "content", + "role", + "url", + "metadata", + "timestamp", + } + """The attributes that need to be serialized and deserialized.""" + def __init__( self, name: str, content: Any, - role: Literal["user", "system", "assistant"] = "assistant", - url: Optional[Union[List[str], str]] = None, - timestamp: Optional[str] = None, + role: Union[str, Literal["system", "user", "assistant"]], + url: Optional[Union[str, List[str]]] = None, + metadata: Optional[Union[dict, str]] = None, + echo: bool = False, **kwargs: Any, ) -> None: - """Initialize the message object + """Initialize the message object. + + There are two ways to initialize a message object: + - Providing `name`, `content`, `role`, `url`(Optional), + `metadata`(Optional) to initialize a normal message object. + - Providing `host`, `port`, `task_id` to initialize a placeholder. + + Normally, users only need to create a normal message object by + providing `name`, `content`, `role`, `url`(Optional) and `metadata` + (Optional). + + The initialization of message has a high priority, which means that + when `name`, `content`, `role`, `host`, `port`, `task_id` are all + provided, the message will be initialized as a normal message object + rather than a placeholder. Args: name (`str`): - The name of who send the message. It's often used in - role-playing scenario to tell the name of the sender. + The name of who generates the message. content (`Any`): The content of the message. 
-            role (`Literal["system", "user", "assistant"]`, defaults to "assistant"):
-                The role of who send the message. It can be one of the
-                `"system"`, `"user"`, or `"assistant"`. Default to
-                `"assistant"`.
-            url (`Optional[Union[List[str], str]]`, defaults to None):
-                A url to file, image, video, audio or website.
-            timestamp (`Optional[str]`, defaults to None):
-                The timestamp of the message, if None, it will be set to
-                current time.
-            **kwargs (`Any`):
-                Other attributes of the message.
-        """ # noqa
-        # id and timestamp will be added to the object as its attributes
-        # rather than items in dict
-        self.id = uuid4().hex
-        if timestamp is None:
-            self.timestamp = _get_timestamp()
-        else:
-            self.timestamp = timestamp
+            role (`Union[str, Literal["system", "user", "assistant"]]`):
+                The role of the message sender.
+            url (`Optional[Union[str, List[str]]]`, defaults to `None`):
+                The url of the message.
+            metadata (`Optional[Union[dict, str]]`, defaults to `None`):
+                The additional information stored in the message.
+            echo (`bool`, defaults to `False`):
+                Whether to print the message when initializing the message
+                object.
+        """
+        self.id = uuid4().hex
         self.name = name
         self.content = content
         self.role = role
         self.url = url
+        self.metadata = metadata
+        self.timestamp = _get_timestamp()
 
-        self.update(kwargs)
-
-    def __getattr__(self, key: Any) -> Any:
-        try:
-            return self[key]
-        except KeyError as e:
-            raise AttributeError(f"no attribute '{key}'") from e
-
-    def __setattr__(self, key: Any, value: Any) -> None:
-        self[key] = value
-
-    def __delattr__(self, key: Any) -> None:
-        try:
-            del self[key]
-        except KeyError as e:
-            raise AttributeError(f"no attribute '{key}'") from e
-
-    def serialize(self) -> str:
-        """Return the serialized message."""
-        raise NotImplementedError
-
-
-class Msg(MessageBase):
-    """The Message class."""
-
-    id: str
-    """The id of the message."""
-
-    name: str
-    """The name of who send the message."""
-
-    content: Any
-    """The content of the message."""
-
-    role: Literal["system", "user", "assistant"]
-    """The role of the message sender."""
-
-    metadata: Optional[dict]
-    """Save the information for application's control flow, or other
-    purposes."""
+        if kwargs:
+            logger.warning(
+                f"In the current version, the message class in AgentScope "
+                f"does not inherit the dict class. "
+                f"The input arguments {kwargs} are not used.",
+            )
 
-    url: Optional[Union[List[str], str]]
-    """A url to file, image, video, audio or website."""
+        if echo:
+            logger.chat(self)
 
-    timestamp: str
-    """The timestamp of the message."""
+    def __getitem__(self, item: str) -> Any:
+        """The getitem function, which will be deprecated in the new version."""
+        logger.warning(
+            f"The Msg class doesn't inherit dict any more. Please refer to "
+            f"its attribute by `msg.{item}` directly. "
+            f"The support of __getitem__ will also be deprecated in the "
+            f"future.",
+        )
+        return self.__getattribute__(item)
 
-    def __init__(
-        self,
-        name: str,
-        content: Any,
-        role: Literal["system", "user", "assistant"] = None,
-        url: Optional[Union[List[str], str]] = None,
-        timestamp: Optional[str] = None,
-        echo: bool = False,
-        metadata: Optional[Union[dict, str]] = None,
-        **kwargs: Any,
-    ) -> None:
-        """Initialize the message object
+    @property
+    def id(self) -> str:
+        """The identity of the message."""
+        return self._id
 
-        Args:
-            name (`str`):
-                The name of who send the message.
-            content (`Any`):
-                The content of the message.
-            role (`Literal["system", "user", "assistant"]`):
-                Used to identify the source of the message, e.g.
the system - information, the user input, or the model response. This - argument is used to accommodate most Chat API formats. - url (`Optional[Union[List[str], str]]`, defaults to `None`): - A url to file, image, video, audio or website. - timestamp (`Optional[str]`, defaults to `None`): - The timestamp of the message, if None, it will be set to - current time. - echo (`bool`, defaults to `False`): - Whether to print the message to the console. - metadata (`Optional[Union[dict, str]]`, defaults to `None`): - Save the information for application's control flow, or other - purposes. - **kwargs (`Any`): - Other attributes of the message. - """ + @property + def name(self) -> str: + """The name of the message sender.""" + return self._name - if role is None: + @property + def _colored_name(self) -> str: + """The name around with color marks, used to print in the terminal.""" + m1, m2 = _map_string_to_color_mark(self.name) + return f"{m1}{self.name}{m2}" + + @property + def content(self) -> Any: + """The content of the message.""" + return self._content + + @property + def role(self) -> Literal["system", "user", "assistant"]: + """The role of the message sender, chosen from 'system', 'user', + 'assistant'.""" + return self._role + + @property + def url(self) -> Optional[Union[str, List[str]]]: + """A URL string or a list of URL strings.""" + return self._url + + @property + def metadata(self) -> Optional[Union[dict, str]]: + """The metadata of the message, which can store some additional + information.""" + return self._metadata + + @property + def timestamp(self) -> str: + """The timestamp when the message is created.""" + return self._timestamp + + @id.setter # type: ignore[no-redef] + def id(self, value: str) -> None: + """Set the identity of the message.""" + self._id = value + + @name.setter # type: ignore[no-redef] + def name(self, value: str) -> None: + """Set the name of the message sender.""" + self._name = value + + @content.setter # type: ignore[no-redef] + def content(self, value: Any) -> None: + """Set the content of the message.""" + if not is_serializable(value): logger.warning( - "A new field `role` is newly added to the message. " - "Please specify the role of the message. Currently we use " - 'a default "assistant" value.', + f"The content of {type(value)} is not serializable, which " + f"may cause problems.", + ) + self._content = value + + @role.setter # type: ignore[no-redef] + def role(self, value: Literal["system", "user", "assistant"]) -> None: + """Set the role of the message sender. The role must be one of + 'system', 'user', 'assistant'.""" + if value not in ["system", "user", "assistant"]: + raise ValueError( + f"Invalid role {value}. The role must be one of " + f"['system', 'user', 'assistant']", ) + self._role = value - super().__init__( - name=name, - content=content, - role=role or "assistant", - url=url, - timestamp=timestamp, - metadata=metadata, - **kwargs, - ) + @url.setter # type: ignore[no-redef] + def url(self, value: Union[str, List[str], None]) -> None: + """Set the url of the message. 
The url can be a URL string or a list of + URL strings.""" + self._url = value - m1, m2 = _map_string_to_color_mark(self.name) - self._colored_name = f"{m1}{self.name}{m2}" + @metadata.setter # type: ignore[no-redef] + def metadata(self, value: Union[dict, str, None]) -> None: + """Set the metadata of the message to store some additional + information.""" + self._metadata = value - if echo: - logger.chat(self) + @timestamp.setter # type: ignore[no-redef] + def timestamp(self, value: str) -> None: + """Set the timestamp of the message.""" + self._timestamp = value def formatted_str(self, colored: bool = False) -> str: """Return the formatted string of the message. If the message has an @@ -171,6 +210,9 @@ def formatted_str(self, colored: bool = False) -> str: Args: colored (`bool`, defaults to `False`): Whether to color the name of the message + + Returns: + `str`: The formatted string of the message. """ if colored: name = self._colored_name @@ -186,5 +228,59 @@ def formatted_str(self, colored: bool = False) -> str: colored_strs.append(f"{name}: {self.url}") return "\n".join(colored_strs) - def serialize(self) -> str: - return json.dumps({"__type": "Msg", **self}) + def to_dict(self) -> dict: + """Serialize the message into a dictionary, which can be + deserialized by calling the `from_dict` function. + + Returns: + `dict`: The serialized dictionary. + """ + serialized_dict = { + "__module__": self.__class__.__module__, + "__name__": self.__class__.__name__, + } + + for attr_name in self.__serialized_attrs: + serialized_dict[attr_name] = getattr(self, f"_{attr_name}") + + return serialized_dict + + @classmethod + def from_dict(cls, serialized_dict: dict) -> "Msg": + """Deserialize the dictionary to a Msg object. + + Args: + serialized_dict (`dict`): + A dictionary that must contain the keys in + `Msg.__serialized_attrs`, and the keys `__module__` and + `__name__`. + + Returns: + `Msg`: A Msg object. 
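+
+        A minimal round-trip sketch (the field values below are
+        illustrative, not part of the library):
+
+        .. code-block:: python
+
+            msg = Msg(name="Alice", content="Hi!", role="user")
+            new_msg = Msg.from_dict(msg.to_dict())
+
+            # `id` and `timestamp` are restored rather than regenerated
+            assert new_msg.id == msg.id
+            assert new_msg.content == msg.content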
+ """ + assert set( + serialized_dict.keys(), + ) == cls.__serialized_attrs.union( + { + "__module__", + "__name__", + }, + ), ( + f"Expect keys {cls.__serialized_attrs}, but get " + f"{set(serialized_dict.keys())}", + ) + + assert serialized_dict.pop("__module__") == cls.__module__ + assert serialized_dict.pop("__name__") == cls.__name__ + + obj = cls( + name=serialized_dict["name"], + content=serialized_dict["content"], + role=serialized_dict["role"], + url=serialized_dict["url"], + metadata=serialized_dict["metadata"], + echo=False, + ) + obj.id = serialized_dict["id"] + obj.timestamp = serialized_dict["timestamp"] + return obj diff --git a/src/agentscope/message/placeholder.py b/src/agentscope/message/placeholder.py index 8420e74b8..b657bb444 100644 --- a/src/agentscope/message/placeholder.py +++ b/src/agentscope/message/placeholder.py @@ -1,19 +1,21 @@ # -*- coding: utf-8 -*- +# mypy: disable-error-code="misc" """The placeholder message for RpcAgent.""" -import json -from typing import Any, Optional, List, Union, Sequence +import os +from typing import Any, Optional, List, Union, Sequence, Literal from loguru import logger -from .msg import Msg, MessageBase +from .msg import Msg from ..rpc import RpcAgentClient, ResponseStub, call_in_thread -from ..utils.tools import is_web_accessible +from ..serialize import deserialize, is_serializable, serialize +from ..utils.common import _is_web_url class PlaceholderMessage(Msg): """A placeholder for the return message of RpcAgent.""" - PLACEHOLDER_ATTRS = { + __placeholder_attrs = { "_host", "_port", "_client", @@ -22,44 +24,26 @@ class PlaceholderMessage(Msg): "_is_placeholder", } - LOCAL_ATTRS = { - "name", - "timestamp", - *PLACEHOLDER_ATTRS, + __serialized_attrs = { + "_host", + "_port", + "_task_id", } + _is_placeholder: bool + """Indicates whether the real message is still in the rpc server.""" + def __init__( self, - name: str, - content: Any, - url: Optional[Union[List[str], str]] = None, - timestamp: Optional[str] = None, host: str = None, port: int = None, task_id: int = None, client: Optional[RpcAgentClient] = None, - x: dict = None, - **kwargs: Any, + x: Optional[Union[Msg, Sequence[Msg]]] = None, ) -> None: """A placeholder message, records the address of the real message. Args: - name (`str`): - The name of who send the message. It's often used in - role-playing scenario to tell the name of the sender. - However, you can also only use `role` when calling openai api. - The usage of `name` refers to - https://cookbook.openai.com/examples/how_to_format_inputs_to_chatgpt_models. - content (`Any`): - The content of the message. - role (`Literal["system", "user", "assistant"]`, defaults to "assistant"): - The role of the message, which can be one of the `"system"`, - `"user"`, or `"assistant"`. - url (`Optional[Union[List[str], str]]`, defaults to None): - A url to file, image, video, audio or website. - timestamp (`Optional[str]`, defaults to None): - The timestamp of the message, if None, it will be set to - current time. host (`str`, defaults to `None`): The hostname of the rpc server where the real message is located. @@ -70,15 +54,15 @@ def __init__( client (`RpcAgentClient`, defaults to `None`): An RpcAgentClient instance used to connect to the generator of this placeholder. - x (`dict`, defaults to `None`): + x (`Optional[Msg, Sequence[Msg]]`, defaults to `None`): Input parameters used to call rpc methods on the client. 
- """ # noqa + """ super().__init__( - name=name, - content=content, - url=url, - timestamp=timestamp, - **kwargs, + name="", + content="", + role="assistant", + url=None, + metadata=None, ) # placeholder indicates whether the real message is still in rpc server self._is_placeholder = True @@ -90,134 +74,232 @@ def __init__( else: self._stub = call_in_thread( client, - x.serialize() if x is not None else "", + serialize(x), "_reply", ) self._host = client.host self._port = client.port self._task_id = None - def __is_local(self, key: Any) -> bool: - return ( - key in PlaceholderMessage.LOCAL_ATTRS or not self._is_placeholder - ) + @property + def id(self) -> str: + """The identity of the message.""" + if self._is_placeholder: + self.update_value() + return self._id - def __getattr__(self, __name: str) -> Any: - """Get attribute value from PlaceholderMessage. Get value from rpc - agent server if necessary. + @property + def name(self) -> str: + """The name of the message sender.""" + if self._is_placeholder: + self.update_value() + return self._name - Args: - __name (`str`): - Attribute name. - """ - if not self.__is_local(__name): + @property + def content(self) -> Any: + """The content of the message.""" + if self._is_placeholder: + self.update_value() + return self._content + + @property + def role(self) -> Literal["system", "user", "assistant"]: + """The role of the message sender, chosen from 'system', 'user', + 'assistant'.""" + if self._is_placeholder: self.update_value() - return MessageBase.__getattr__(self, __name) + return self._role - def __getitem__(self, __key: Any) -> Any: - """Get item value from PlaceholderMessage. Get value from rpc - agent server if necessary. + @property + def url(self) -> Optional[Union[str, List[str]]]: + """A URL string or a list of URL strings.""" + if self._is_placeholder: + self.update_value() + return self._url - Args: - __key (`Any`): - Item name. - """ - if not self.__is_local(__key): + @property + def metadata(self) -> Optional[Union[dict, str]]: + """The metadata of the message, which can store some additional + information.""" + if self._is_placeholder: self.update_value() - return MessageBase.__getitem__(self, __key) + return self._metadata + + @property + def timestamp(self) -> str: + """The timestamp when the message is created.""" + if self._is_placeholder: + self.update_value() + return self._timestamp + + @id.setter # type: ignore[no-redef] + def id(self, value: str) -> None: + """Set the identity of the message.""" + self._id = value + + @name.setter # type: ignore[no-redef] + def name(self, value: str) -> None: + """Set the name of the message sender.""" + self._name = value + + @content.setter # type: ignore[no-redef] + def content(self, value: Any) -> None: + """Set the content of the message.""" + if not is_serializable(value): + logger.warning( + f"The content of {type(value)} is not serializable, which " + f"may cause problems.", + ) + self._content = value + + @role.setter # type: ignore[no-redef] + def role(self, value: Literal["system", "user", "assistant"]) -> None: + """Set the role of the message sender. The role must be one of + 'system', 'user', 'assistant'.""" + if value not in ["system", "user", "assistant"]: + raise ValueError( + f"Invalid role {value}. The role must be one of " + f"['system', 'user', 'assistant']", + ) + self._role = value - def update_value(self) -> MessageBase: + @url.setter # type: ignore[no-redef] + def url(self, value: Union[str, List[str], None]) -> None: + """Set the url of the message. 
The url can be a URL string or a list of + URL strings.""" + self._url = value + + @metadata.setter # type: ignore[no-redef] + def metadata(self, value: Union[dict, str, None]) -> None: + """Set the metadata of the message to store some additional + information.""" + self._metadata = value + + @timestamp.setter # type: ignore[no-redef] + def timestamp(self, value: str) -> None: + """Set the timestamp of the message.""" + self._timestamp = value + + def update_value(self) -> None: + """Get attribute values from the rpc agent server immediately.""" if self._is_placeholder: # retrieve real message from rpc agent server self.__update_task_id() client = RpcAgentClient(self._host, self._port) result = client.update_placeholder(task_id=self._task_id) - msg = deserialize(result) - self.__update_url(msg) # type: ignore[arg-type] - self.update(msg) - # the actual value has been updated, not a placeholder anymore + + # Update the values according to the result obtained from the + # distributed agent + data = deserialize(result) + + self.id = data.id + self.name = data.name + self.role = data.role + self.content = data.content + self.metadata = data.metadata + + self.timestamp = data.timestamp + + # For url field, download the file if it's a local file of the + # distributed agent, and turn it into a local url + self.url = self.__update_url(data.url) + self._is_placeholder = False - return self - def __update_url(self, msg: MessageBase) -> None: - """Update the url field of the message.""" - if hasattr(msg, "url") and msg.url is None: - return - url = msg.url + def __update_url( + self, + url: Union[list[str], str, None], + ) -> Union[list, str, None]: + """If the url links to + - a file that the main process can access, return the url directly + - a web resource, return the url directly + - a local file of the distributed agent (maybe on the machine where + the distributed agent is deployed), download the file and update + the url to the local copy. + - others (maybe a meaningless url, e.g. "xxx.com"), return the url. + + Args: + url (`Union[List[str], str, None]`): + The url to be updated. + """ + + if url is None: + return None + if isinstance(url, str): - urls = [url] - else: - urls = url - checked_urls = [] - for url in urls: - if not is_web_accessible(url): - client = RpcAgentClient(self._host, self._port) - checked_urls.append(client.download_file(path=url)) - else: - checked_urls.append(url) - msg.url = checked_urls[0] if isinstance(url, str) else checked_urls + if os.path.exists(url) or _is_web_url(url): + return url + + # Try to get the file from the distributed agent + client = RpcAgentClient(self._host, self._port) + # TODO: what if failed here?
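+            # One hedged option for the TODO above (assuming
+            # `download_file` raises on a failed transfer, which is not
+            # confirmed here) would be to fall back to the original url:
+            #
+            #     try:
+            #         return client.download_file(path=url)
+            #     except Exception:
+            #         return url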
+ local_url = client.download_file(path=url) + + return local_url + + if isinstance(url, list): + return [self.__update_url(u) for u in url] + + raise TypeError( + f"Invalid URL type, expect str, list[str] or None, " + f"got {type(url)}.", + ) def __update_task_id(self) -> None: + """Get the task_id from the rpc server.""" if self._stub is not None: try: - resp = deserialize(self._stub.get_response()) + task_id = deserialize(self._stub.get_response()) except Exception as e: - logger.error( - f"Failed to get task_id: {self._stub.get_response()}", - ) raise ValueError( f"Failed to get task_id: {self._stub.get_response()}", ) from e - self._task_id = resp["task_id"] # type: ignore[call-overload] + self._task_id = task_id self._stub = None - def serialize(self) -> str: + def to_dict(self) -> dict: + """Serialize the placeholder message.""" if self._is_placeholder: self.__update_task_id() - return json.dumps( - { - "__type": "PlaceholderMessage", - "name": self.name, - "content": None, - "timestamp": self.timestamp, - "host": self._host, - "port": self._port, - "task_id": self._task_id, - }, - ) - else: - states = { - k: v - for k, v in self.items() - if k not in PlaceholderMessage.PLACEHOLDER_ATTRS - } - states["__type"] = "Msg" - return json.dumps(states) + # Serialize the placeholder message + serialized_dict = { + "__module__": self.__class__.__module__, + "__name__": self.__class__.__name__, + } -_MSGS = { - "Msg": Msg, - "PlaceholderMessage": PlaceholderMessage, -} + for attr_name in self.__serialized_attrs: + serialized_dict[attr_name] = getattr(self, attr_name) + return serialized_dict -def deserialize(s: Union[str, bytes]) -> Union[Msg, Sequence]: - """Deserialize json string into MessageBase""" - js_msg = json.loads(s) - msg_type = js_msg.pop("__type") - if msg_type == "List": - return [deserialize(s) for s in js_msg["__value"]] - elif msg_type not in _MSGS: - raise NotImplementedError( - f"Deserialization of {msg_type} is not supported.", - ) - return _MSGS[msg_type](**js_msg) + else: + # Serialize into a normal Msg object + serialized_dict = { + "__module__": Msg.__module__, + "__name__": Msg.__name__, + } + # TODO: We will merge the placeholder and message classes in the + # future to avoid the hard coding of the serialized attributes + # here + for attr_name in [ + "id", + "name", + "content", + "role", + "url", + "metadata", + "timestamp", + ]: + serialized_dict[attr_name] = getattr(self, attr_name) + return serialized_dict -def serialize(messages: Union[Sequence[MessageBase], MessageBase]) -> str: - """Serialize multiple MessageBase instance""" - if isinstance(messages, MessageBase): - return messages.serialize() - seq = [msg.serialize() for msg in messages] - return json.dumps({"__type": "List", "__value": seq}) + @classmethod + def from_dict(cls, serialized_dict: dict) -> "PlaceholderMessage": + """Create a PlaceholderMessage from a dictionary.""" + return cls( + host=serialized_dict["_host"], + port=serialized_dict["_port"], + task_id=serialized_dict["_task_id"], + ) diff --git a/src/agentscope/models/__init__.py b/src/agentscope/models/__init__.py index c48e6ed3f..0a6894b35 100644 --- a/src/agentscope/models/__init__.py +++ b/src/agentscope/models/__init__.py @@ -38,7 +38,9 @@ from .litellm_model import ( LiteLLMChatWrapper, ) - +from .yi_model import ( + YiChatWrapper, +) __all__ = [ "ModelWrapperBase", @@ -61,6 +63,7 @@ "ZhipuAIChatWrapper", "ZhipuAIEmbeddingWrapper", "LiteLLMChatWrapper", + "YiChatWrapper", ] diff --git a/src/agentscope/models/dashscope_model.py 
b/src/agentscope/models/dashscope_model.py index 0058486ce..ba50b9f40 100644 --- a/src/agentscope/models/dashscope_model.py +++ b/src/agentscope/models/dashscope_model.py @@ -10,7 +10,7 @@ from ..manager import FileManager from ..message import Msg -from ..utils.tools import _convert_to_str, _guess_type_by_extension +from ..utils.common import _convert_to_str, _guess_type_by_extension try: import dashscope diff --git a/src/agentscope/models/gemini_model.py b/src/agentscope/models/gemini_model.py index e5315212b..3eaa301fb 100644 --- a/src/agentscope/models/gemini_model.py +++ b/src/agentscope/models/gemini_model.py @@ -7,9 +7,9 @@ from loguru import logger -from agentscope.message import Msg -from agentscope.models import ModelWrapperBase, ModelResponse -from agentscope.utils.tools import _convert_to_str +from ..message import Msg +from ..models import ModelWrapperBase, ModelResponse +from ..utils.common import _convert_to_str try: import google.generativeai as genai diff --git a/src/agentscope/models/model.py b/src/agentscope/models/model.py index 8d20a108f..429d34d7a 100644 --- a/src/agentscope/models/model.py +++ b/src/agentscope/models/model.py @@ -68,7 +68,7 @@ from ..manager import FileManager from ..manager import MonitorManager from ..message import Msg -from ..utils.tools import _get_timestamp, _convert_to_str +from ..utils.common import _get_timestamp, _convert_to_str from ..constants import _DEFAULT_MAX_RETRIES from ..constants import _DEFAULT_RETRY_INTERVAL diff --git a/src/agentscope/models/ollama_model.py b/src/agentscope/models/ollama_model.py index 7d65cafd0..ec87f219f 100644 --- a/src/agentscope/models/ollama_model.py +++ b/src/agentscope/models/ollama_model.py @@ -3,9 +3,9 @@ from abc import ABC from typing import Sequence, Any, Optional, List, Union, Generator -from agentscope.message import Msg -from agentscope.models import ModelWrapperBase, ModelResponse -from agentscope.utils.tools import _convert_to_str +from ..message import Msg +from ..models import ModelWrapperBase, ModelResponse +from ..utils.common import _convert_to_str try: import ollama diff --git a/src/agentscope/models/openai_model.py b/src/agentscope/models/openai_model.py index 0a87ae381..e25fc9061 100644 --- a/src/agentscope/models/openai_model.py +++ b/src/agentscope/models/openai_model.py @@ -21,7 +21,7 @@ from .model import ModelWrapperBase, ModelResponse from ..manager import FileManager from ..message import Msg -from ..utils.tools import _convert_to_str, _to_openai_image_url +from ..utils.common import _convert_to_str, _to_openai_image_url from ..utils.token_utils import get_openai_max_length @@ -188,7 +188,7 @@ def __init__( def __call__( self, - messages: list, + messages: list[dict], stream: Optional[bool] = None, **kwargs: Any, ) -> ModelResponse: @@ -331,7 +331,7 @@ def _save_model_invocation_and_update_monitor( response=response, ) - usage = response.get("usage") + usage = response.get("usage", None) if usage is not None: self.monitor.update_text_and_embedding_tokens( model_name=self.model_name, diff --git a/src/agentscope/models/response.py b/src/agentscope/models/response.py index 3019257e0..b034a4197 100644 --- a/src/agentscope/models/response.py +++ b/src/agentscope/models/response.py @@ -3,7 +3,7 @@ import json from typing import Optional, Sequence, Any, Generator, Union, Tuple -from agentscope.utils.tools import _is_json_serializable +from ..utils.common import _is_json_serializable class ModelResponse: @@ -52,10 +52,15 @@ def text(self) -> str: field will be updated accordingly.""" if 
self._text is None: if self.stream is not None: - for chunk in self.stream: + for _, chunk in self.stream: self._text += chunk return self._text + @text.setter + def text(self, value: str) -> None: + """Set the text field.""" + self._text = value + @property def stream(self) -> Union[None, Generator[Tuple[bool, str], None, None]]: """Return the stream generator if it exists.""" diff --git a/src/agentscope/models/yi_model.py b/src/agentscope/models/yi_model.py new file mode 100644 index 000000000..9d02dd17c --- /dev/null +++ b/src/agentscope/models/yi_model.py @@ -0,0 +1,292 @@ +# -*- coding: utf-8 -*- +"""Model wrapper for Yi models""" +import json +from typing import ( + List, + Union, + Sequence, + Optional, + Generator, +) + +import requests + +from ._model_utils import ( + _verify_text_content_in_openai_message_response, + _verify_text_content_in_openai_delta_response, +) +from .model import ModelWrapperBase, ModelResponse +from ..message import Msg + + +class YiChatWrapper(ModelWrapperBase): + """The model wrapper for Yi Chat API. + + Response: + - From https://platform.lingyiwanwu.com/docs + + ```json + { + "id": "cmpl-ea89ae83", + "object": "chat.completion", + "created": 5785971, + "model": "yi-large-rag", + "usage": { + "completion_tokens": 113, + "prompt_tokens": 896, + "total_tokens": 1009 + }, + "choices": [ + { + "index": 0, + "message": { + "role": "assistant", + "content": "Today in Los Angeles, the weather ...", + }, + "finish_reason": "stop" + } + ] + } + ``` + """ + + model_type: str = "yi_chat" + + def __init__( + self, + config_name: str, + model_name: str, + api_key: str, + max_tokens: Optional[int] = None, + top_p: float = 0.9, + temperature: float = 0.3, + stream: bool = False, + ) -> None: + """Initialize the Yi chat model wrapper. + + Args: + config_name (`str`): + The name of the configuration to use. + model_name (`str`): + The name of the model to use, e.g. yi-large, yi-medium, etc. + api_key (`str`): + The API key for the Yi API. + max_tokens (`Optional[int]`, defaults to `None`): + The maximum number of tokens to generate, defaults to `None`. + top_p (`float`, defaults to `0.9`): + The randomness parameters in the range [0, 1]. + temperature (`float`, defaults to `0.3`): + The temperature parameter in the range [0, 2]. + stream (`bool`, defaults to `False`): + Whether to stream the response or not. 
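+
+        A minimal construction sketch (the `config_name` and `api_key`
+        values below are placeholders):
+
+        .. code-block:: python
+
+            model = YiChatWrapper(
+                config_name="yi_chat_config",
+                model_name="yi-large",
+                api_key="{your_api_key}",
+                stream=False,
+            )
+            prompt = model.format(
+                Msg(name="user", content="Hi!", role="user"),
+            )
+            response = model(prompt)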
+ """ + + super().__init__(config_name, model_name) + + if top_p > 1 or top_p < 0: + raise ValueError( + f"The `top_p` parameter must be in the range [0, 1], but got " + f"{top_p} instead.", + ) + + if temperature < 0 or temperature > 2: + raise ValueError( + f"The `temperature` parameter must be in the range [0, 2], " + f"but got {temperature} instead.", + ) + + self.api_key = api_key + self.max_tokens = max_tokens + self.top_p = top_p + self.temperature = temperature + self.stream = stream + + def __call__( + self, + messages: list[dict], + stream: Optional[bool] = None, + ) -> ModelResponse: + """Invoke the Yi Chat API by sending a list of messages.""" + + # Checking messages + if not isinstance(messages, list): + raise ValueError( + f"Yi `messages` field expected type `list`, " + f"got `{type(messages)}` instead.", + ) + + if not all("role" in msg and "content" in msg for msg in messages): + raise ValueError( + "Each message in the 'messages' list must contain a 'role' " + "and 'content' key for Yi API.", + ) + + if stream is None: + stream = self.stream + + # Forward to generate response + kwargs = { + "url": "https://api.lingyiwanwu.com/v1/chat/completions", + "json": { + "model": self.model_name, + "messages": messages, + "temperature": self.temperature, + "max_tokens": self.max_tokens, + "top_p": self.top_p, + "stream": stream, + }, + "headers": { + "Authorization": f"Bearer {self.api_key}", + "Content-Type": "application/json", + }, + } + + response = requests.post(**kwargs) + response.raise_for_status() + + if stream: + + def generator() -> Generator[str, None, None]: + text = "" + last_chunk = {} + for line in response.iter_lines(): + if line: + line_str = line.decode("utf-8").strip() + + # Remove prefix "data: " if exists + json_str = line_str.removeprefix("data: ") + + # The last response is "data: [DONE]" + if json_str == "[DONE]": + continue + + try: + chunk = json.loads(json_str) + if _verify_text_content_in_openai_delta_response( + chunk, + ): + text += chunk["choices"][0]["delta"]["content"] + yield text + last_chunk = chunk + + except json.decoder.JSONDecodeError as e: + raise json.decoder.JSONDecodeError( + f"Invalid JSON: {json_str}", + e.doc, + e.pos, + ) from e + + # In Yi Chat API, the last valid chunk will save all the text + # in this message + self._save_model_invocation_and_update_monitor( + kwargs, + last_chunk, + ) + + return ModelResponse( + stream=generator(), + ) + else: + response = response.json() + self._save_model_invocation_and_update_monitor( + kwargs, + response, + ) + + # Re-use the openai response checking function + if _verify_text_content_in_openai_message_response(response): + return ModelResponse( + text=response["choices"][0]["message"]["content"], + raw=response, + ) + else: + raise RuntimeError( + f"Invalid response from Yi Chat API: {response}", + ) + + def format( + self, + *args: Union[Msg, Sequence[Msg]], + ) -> List[dict]: + """Format the messages into the required format of Yi Chat API. + + Note this strategy maybe not suitable for all scenarios, + and developers are encouraged to implement their own prompt + engineering strategies. + + The following is an example: + + .. code-block:: python + + prompt1 = model.format( + Msg("system", "You're a helpful assistant", role="system"), + Msg("Bob", "Hi, how can I help you?", role="assistant"), + Msg("user", "What's the date today?", role="user") + ) + + The prompt will be as follows: + + .. 
code-block:: python + + # prompt1 + [ + { + "role": "user", + "content": ( + "You're a helpful assistant\\n" + "\\n" + "## Conversation History\\n" + "Bob: Hi, how can I help you?\\n" + "user: What's the date today?" + ) + } + ] + + Args: + args (`Union[Msg, Sequence[Msg]]`): + The input arguments to be formatted, where each argument + should be a `Msg` object, or a list of `Msg` objects. + In distribution, placeholder is also allowed. + + Returns: + `List[dict]`: + The formatted messages. + """ + + # TODO: Support Vision model + if self.model_name == "yi-vision": + raise NotImplementedError( + "Yi Vision model is not supported in the current version, " + "please format the messages manually.", + ) + + return ModelWrapperBase.format_for_common_chat_models(*args) + + def _save_model_invocation_and_update_monitor( + self, + kwargs: dict, + response: dict, + ) -> None: + """Save model invocation and update the monitor accordingly. + + Args: + kwargs (`dict`): + The keyword arguments used in model invocation + response (`dict`): + The response from model API + """ + self._save_model_invocation( + arguments=kwargs, + response=response, + ) + + usage = response.get("usage", None) + if usage is not None: + prompt_tokens = usage.get("prompt_tokens", 0) + completion_tokens = usage.get("completion_tokens", 0) + + self.monitor.update_text_and_embedding_tokens( + model_name=self.model_name, + prompt_tokens=prompt_tokens, + completion_tokens=completion_tokens, + ) diff --git a/src/agentscope/parsers/json_object_parser.py b/src/agentscope/parsers/json_object_parser.py index 970828639..441af8286 100644 --- a/src/agentscope/parsers/json_object_parser.py +++ b/src/agentscope/parsers/json_object_parser.py @@ -8,16 +8,16 @@ from loguru import logger from pydantic import BaseModel -from agentscope.exception import ( +from ..exception import ( TagNotFoundError, JsonParsingError, JsonTypeError, RequiredFieldNotFoundError, ) -from agentscope.models import ModelResponse -from agentscope.parsers import ParserBase -from agentscope.parsers.parser_base import DictFilterMixin -from agentscope.utils.tools import _join_str_with_comma_and +from ..models import ModelResponse +from ..parsers import ParserBase +from ..parsers.parser_base import DictFilterMixin +from ..utils.common import _join_str_with_comma_and class MarkdownJsonObjectParser(ParserBase): @@ -166,9 +166,9 @@ def __init__( self, content_hint: Optional[Any] = None, required_keys: List[str] = None, - keys_to_memory: Optional[Union[str, bool, Sequence[str]]] = True, - keys_to_content: Optional[Union[str, bool, Sequence[str]]] = True, - keys_to_metadata: Optional[Union[str, bool, Sequence[str]]] = False, + keys_to_memory: Union[str, bool, Sequence[str]] = True, + keys_to_content: Union[str, bool, Sequence[str]] = True, + keys_to_metadata: Union[str, bool, Sequence[str]] = False, ) -> None: """Initialize the parser with the content hint. 
diff --git a/src/agentscope/prompt/__init__.py b/src/agentscope/prompt/__init__.py index 1fb694ff9..dcd15d4b3 100644 --- a/src/agentscope/prompt/__init__.py +++ b/src/agentscope/prompt/__init__.py @@ -6,11 +6,9 @@ from ._prompt_generator_en import EnglishSystemPromptGenerator from ._prompt_comparer import SystemPromptComparer from ._prompt_optimizer import SystemPromptOptimizer -from ._prompt_engine import PromptEngine __all__ = [ - "PromptEngine", "SystemPromptGeneratorBase", "ChineseSystemPromptGenerator", "EnglishSystemPromptGenerator", diff --git a/src/agentscope/prompt/_prompt_engine.py b/src/agentscope/prompt/_prompt_engine.py deleted file mode 100644 index 8d66a16f5..000000000 --- a/src/agentscope/prompt/_prompt_engine.py +++ /dev/null @@ -1,179 +0,0 @@ -# -*- coding: utf-8 -*- -"""Prompt engineering module.""" -from typing import Any, Optional, Union -from enum import IntEnum - -from loguru import logger - -from agentscope.models import OpenAIWrapperBase, ModelWrapperBase -from agentscope.constants import ShrinkPolicy -from agentscope.utils.tools import to_openai_dict, to_dialog_str - - -class PromptType(IntEnum): - """Enum for prompt types.""" - - STRING = 0 - LIST = 1 - - -class PromptEngine: - """Prompt engineering module for both list and string prompt""" - - def __init__( - self, - model: ModelWrapperBase, - shrink_policy: ShrinkPolicy = ShrinkPolicy.TRUNCATE, - max_length: Optional[int] = None, - prompt_type: Optional[PromptType] = None, - max_summary_length: int = 200, - summarize_model: Optional[ModelWrapperBase] = None, - ) -> None: - """Init PromptEngine. - - Args: - model (`ModelWrapperBase`): - The target model for prompt engineering. - shrink_policy (`ShrinkPolicy`, defaults to - `ShrinkPolicy.TRUNCATE`): - The shrink policy for prompt engineering, defaults to - `ShrinkPolicy.TRUNCATE`. - max_length (`Optional[int]`, defaults to `None`): - The max length of context, if it is None, it will be set to the - max length of the model. - prompt_type (`Optional[MsgType]`, defaults to `None`): - The type of prompt, if it is None, it will be set according to - the model. - max_summary_length (`int`, defaults to `200`): - The max length of summary, if it is None, it will be set to the - max length of the model. - summarize_model (`Optional[ModelWrapperBase]`, defaults to `None`): - The model used for summarization, if it is None, it will be - set to `model`. - - Note: - - 1. TODO: Shrink function is still under development. - - 2. If the argument `max_length` and `prompt_type` are not given, - they will be set according to the given model. - - 3. `shrink_policy` is used when the prompt is too long, it can - be set to `ShrinkPolicy.TRUNCATE` or `ShrinkPolicy.SUMMARIZE`. - - a. `ShrinkPolicy.TRUNCATE` will truncate the prompt to the - desired length. - - b. `ShrinkPolicy.SUMMARIZE` will summarize partial of the - dialog history to save space. The summarization model - defaults to `model` if not given. - - Example: - - With prompt engine, we encapsulate different operations for - string- and list-style prompt, and block the prompt engineering - process from the user. - As a user, you can just combine you prompt as follows. - - .. code-block:: python - - # prepare the component - system_prompt = "You're a helpful assistant ..." - hint_prompt = "You should response in Json format." 
- prefix = "assistant: " - - # initialize the prompt engine and join the prompt - engine = PromptEngine(model) - prompt = engine.join(system_prompt, memory.get_memory(), - hint_prompt, prefix) - """ - self.model = model - self.shrink_policy = shrink_policy - self.max_length = max_length - - if prompt_type is None: - if isinstance(model, OpenAIWrapperBase): - self.prompt_type = PromptType.LIST - else: - self.prompt_type = PromptType.STRING - else: - self.prompt_type = prompt_type - - self.max_summary_length = max_summary_length - - if summarize_model is None: - self.summarize_model = model - - logger.warning( - "The prompt engine will be deprecated in the future. " - "Please use the `format` function in model wrapper object " - "instead. More details refer to ", - "https://modelscope.github.io/agentscope/en/tutorial/206-prompt" - ".html", - ) - - def join( - self, - *args: Any, - format_map: Optional[dict] = None, - ) -> Union[str, list[dict]]: - """Join prompt components according to its type. The join function can - accept any number and type of arguments. If prompt type is - `PromptType.STRING`, the arguments will be joined by `"\\\\n"`. If - prompt type is `PromptType.LIST`, the string arguments will be - converted to `Msg` from `system`. - """ - # TODO: achieve the summarize function - - # Filter `None` - args = [_ for _ in args if _ is not None] - - if self.prompt_type == PromptType.STRING: - return self.join_to_str(*args, format_map=format_map) - elif self.prompt_type == PromptType.LIST: - return self.join_to_list(*args, format_map=format_map) - else: - raise RuntimeError("Invalid prompt type.") - - def join_to_str(self, *args: Any, format_map: Union[dict, None]) -> str: - """Join prompt components to a string.""" - prompt = [] - for item in args: - if isinstance(item, list): - items_str = self.join_to_str(*item, format_map=None) - prompt += [items_str] - elif isinstance(item, dict): - prompt.append(to_dialog_str(item)) - else: - prompt.append(str(item)) - prompt_str = "\n".join(prompt) - - if format_map is not None: - prompt_str = prompt_str.format_map(format_map) - - return prompt_str - - def join_to_list(self, *args: Any, format_map: Union[dict, None]) -> list: - """Join prompt components to a list of `Msg` objects.""" - prompt = [] - for item in args: - if isinstance(item, list): - # nested processing - prompt.extend(self.join_to_list(*item, format_map=None)) - elif isinstance(item, dict): - prompt.append(to_openai_dict(item)) - else: - prompt.append(to_openai_dict({"content": str(item)})) - - if format_map is not None: - format_prompt = [] - for msg in prompt: - format_prompt.append( - { - k.format_map(format_map): v.format_map(format_map) - for k, v in msg.items() - }, - ) - prompt = format_prompt - - return prompt diff --git a/src/agentscope/rag/__init__.py b/src/agentscope/rag/__init__.py index 362f1de14..31f035615 100644 --- a/src/agentscope/rag/__init__.py +++ b/src/agentscope/rag/__init__.py @@ -1,11 +1,9 @@ # -*- coding: utf-8 -*- """ Import all pipeline related modules in the package. 
""" from .knowledge import Knowledge -from .llama_index_knowledge import LlamaIndexKnowledge from .knowledge_bank import KnowledgeBank __all__ = [ "Knowledge", - "LlamaIndexKnowledge", "KnowledgeBank", ] diff --git a/src/agentscope/rag/knowledge_bank.py b/src/agentscope/rag/knowledge_bank.py index 8f07d12b6..ae4cc57ce 100644 --- a/src/agentscope/rag/knowledge_bank.py +++ b/src/agentscope/rag/knowledge_bank.py @@ -7,8 +7,8 @@ from typing import Optional, Union from loguru import logger from agentscope.agents import AgentBase -from .llama_index_knowledge import LlamaIndexKnowledge from ..manager import ModelManager +from .knowledge import Knowledge DEFAULT_INDEX_CONFIG = { "knowledge_id": "", @@ -43,13 +43,14 @@ def __init__( configs: Union[dict, str], ) -> None: """initialize the knowledge bank""" + if isinstance(configs, str): logger.info(f"Loading configs from {configs}") with open(configs, "r", encoding="utf-8") as fp: self.configs = json.loads(fp.read()) else: self.configs = configs - self.stored_knowledge: dict[str, LlamaIndexKnowledge] = {} + self.stored_knowledge: dict[str, Knowledge] = {} self._init_knowledge() def _init_knowledge(self) -> None: @@ -104,6 +105,8 @@ def add_data_as_knowledge( ) '' """ + from .llama_index_knowledge import LlamaIndexKnowledge + if knowledge_id in self.stored_knowledge: raise ValueError(f"knowledge_id {knowledge_id} already exists.") @@ -125,9 +128,11 @@ def add_data_as_knowledge( knowledge_id=knowledge_id, emb_model=model_manager.get_model_by_config_name(emb_model_name), knowledge_config=knowledge_config, - model=model_manager.get_model_by_config_name(model_name) - if model_name - else None, + model=( + model_manager.get_model_by_config_name(model_name) + if model_name + else None + ), ) logger.info(f"data loaded for knowledge_id = {knowledge_id}.") @@ -135,7 +140,7 @@ def get_knowledge( self, knowledge_id: str, duplicate: bool = False, - ) -> LlamaIndexKnowledge: + ) -> Knowledge: """ Get a Knowledge object from the knowledge bank. Args: @@ -144,7 +149,7 @@ def get_knowledge( duplicate (bool): whether return a copy of the Knowledge object. 
Returns: - LlamaIndexKnowledge: + Knowledge: the Knowledge object defined with Llama-index """ if knowledge_id not in self.stored_knowledge: diff --git a/src/agentscope/rpc/__init__.py b/src/agentscope/rpc/__init__.py index 42d3b5fe5..2f061c85f 100644 --- a/src/agentscope/rpc/__init__.py +++ b/src/agentscope/rpc/__init__.py @@ -8,7 +8,7 @@ from .rpc_agent_pb2_grpc import RpcAgentStub from .rpc_agent_pb2_grpc import add_RpcAgentServicer_to_server except ImportError as import_error: - from agentscope.utils.tools import ImportErrorReporter + from agentscope.utils.common import ImportErrorReporter RpcMsg = ImportErrorReporter(import_error, "distribute") # type: ignore[misc] RpcAgentServicer = ImportErrorReporter(import_error, "distribute") diff --git a/src/agentscope/rpc/rpc_agent_client.py b/src/agentscope/rpc/rpc_agent_client.py index a5716f93a..c4d3934f0 100644 --- a/src/agentscope/rpc/rpc_agent_client.py +++ b/src/agentscope/rpc/rpc_agent_client.py @@ -7,6 +7,9 @@ from typing import Optional, Sequence, Union, Generator from loguru import logger +from ..message import Msg +from ..serialize import deserialize + try: import dill import grpc @@ -15,7 +18,7 @@ from agentscope.rpc.rpc_agent_pb2_grpc import RpcAgentStub import agentscope.rpc.rpc_agent_pb2 as agent_pb2 except ImportError as import_error: - from agentscope.utils.tools import ImportErrorReporter + from agentscope.utils.common import ImportErrorReporter dill = ImportErrorReporter(import_error, "distribute") grpc = ImportErrorReporter(import_error, "distribute") @@ -23,7 +26,7 @@ RpcAgentStub = ImportErrorReporter(import_error, "distribute") RpcError = ImportError -from ..utils.tools import generate_id_from_seed +from ..utils.common import _generate_id_from_seed from ..exception import AgentServerNotAliveError from ..constants import _DEFAULT_RPC_OPTIONS from ..exception import AgentCallError @@ -310,7 +313,7 @@ def set_model_configs( return False return True - def get_agent_memory(self, agent_id: str) -> Union[list, dict]: + def get_agent_memory(self, agent_id: str) -> Union[list[Msg], Msg]: """Get the memory usage of the specific agent.""" with grpc.insecure_channel(f"{self.host}:{self.port}") as channel: stub = RpcAgentStub(channel) @@ -319,7 +322,7 @@ def get_agent_memory(self, agent_id: str) -> Union[list, dict]: ) if not resp.ok: logger.error(f"Error in get_agent_memory: {resp.message}") - return json.loads(resp.message) + return deserialize(resp.message) def download_file(self, path: str) -> str: """Download a file from a remote server to the local machine. 
@@ -336,7 +339,7 @@ def download_file(self, path: str) -> str: file_manager = FileManager.get_instance() local_filename = ( - f"{generate_id_from_seed(path, 5)}_{os.path.basename(path)}" + f"{_generate_id_from_seed(path, 5)}_{os.path.basename(path)}" ) def _generator() -> Generator[bytes, None, None]: diff --git a/src/agentscope/rpc/rpc_agent_pb2_grpc.py b/src/agentscope/rpc/rpc_agent_pb2_grpc.py index 0234d55f2..1c506c176 100644 --- a/src/agentscope/rpc/rpc_agent_pb2_grpc.py +++ b/src/agentscope/rpc/rpc_agent_pb2_grpc.py @@ -5,7 +5,7 @@ import grpc from google.protobuf import empty_pb2 as google_dot_protobuf_dot_empty__pb2 except ImportError as import_error: - from agentscope.utils.tools import ImportErrorReporter + from agentscope.utils.common import ImportErrorReporter grpc = ImportErrorReporter(import_error, "distribute") google_dot_protobuf_dot_empty__pb2 = ImportErrorReporter( diff --git a/src/agentscope/serialize.py b/src/agentscope/serialize.py new file mode 100644 index 000000000..bef8dd8f5 --- /dev/null +++ b/src/agentscope/serialize.py @@ -0,0 +1,65 @@ +# -*- coding: utf-8 -*- +"""The serialization module for the package.""" +import importlib +import json +from typing import Any + + +def _default_serialize(obj: Any) -> Any: + """Serialize the object when `json.dumps` cannot handle it.""" + if hasattr(obj, "__module__") and hasattr(obj, "__class__"): + # To avoid circular import, we hard code the module name here + if ( + obj.__module__ == "agentscope.message.msg" + and obj.__class__.__name__ == "Msg" + ): + return obj.to_dict() + + if ( + obj.__module__ == "agentscope.message.placeholder" + and obj.__class__.__name__ == "PlaceholderMessage" + ): + return obj.to_dict() + + return obj + + +def _deserialize_hook(data: dict) -> Any: + """The object hook that restores AgentScope objects (e.g. `Msg`) from + their serialized dictionaries.""" + module_name = data.get("__module__", None) + class_name = data.get("__name__", None) + + if module_name is not None and class_name is not None: + module = importlib.import_module(module_name) + cls = getattr(module, class_name) + if hasattr(cls, "from_dict"): + return cls.from_dict(data) + return data + + +def serialize(obj: Any) -> str: + """Serialize the object to a JSON string. + + For AgentScope, this function supports serializing `Msg` objects for now. + """ + # TODO: We leave the serialization of agents to the next PR + return json.dumps(obj, ensure_ascii=False, default=_default_serialize) + + +def deserialize(s: str) -> Any: + """Deserialize the JSON string to an object. + + For AgentScope, this function supports deserializing `Msg` objects for now.
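+
+    A minimal round-trip sketch (the message values below are
+    illustrative):
+
+    .. code-block:: python
+
+        from agentscope.message import Msg
+        from agentscope.serialize import serialize, deserialize
+
+        msg = Msg(name="Bob", content="Hello!", role="assistant")
+        restored = deserialize(serialize(msg))
+
+        assert restored.content == msg.content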
+ """ + # TODO: We leave the serialization of agents in next PR + return json.loads(s, object_hook=_deserialize_hook) + + +def is_serializable(obj: Any) -> bool: + """Check if the object is serializable in the scope of AgentScope.""" + try: + serialize(obj) + return True + except Exception: + return False diff --git a/src/agentscope/server/launcher.py b/src/agentscope/server/launcher.py index be7705683..93ca49c52 100644 --- a/src/agentscope/server/launcher.py +++ b/src/agentscope/server/launcher.py @@ -21,7 +21,7 @@ add_RpcAgentServicer_to_server, ) except ImportError as import_error: - from agentscope.utils.tools import ImportErrorReporter + from agentscope.utils.common import ImportErrorReporter dill = ImportErrorReporter(import_error, "distribute") grpc = ImportErrorReporter(import_error, "distribute") @@ -35,7 +35,7 @@ from ..server.servicer import AgentServerServicer from ..manager import ASManager from ..agents.agent import AgentBase -from ..utils.tools import check_port, generate_id_from_seed +from ..utils.common import _check_port, _generate_id_from_seed from ..constants import _DEFAULT_RPC_OPTIONS @@ -251,7 +251,7 @@ async def shutdown_signal_handler() -> None: ) while True: try: - port = check_port(port) + port = _check_port(port) servicer.port = port server = grpc.aio.server( futures.ThreadPoolExecutor(max_workers=None), @@ -393,7 +393,7 @@ def __init__( The url of the agentscope studio. """ self.host = host - self.port = check_port(port) + self.port = _check_port(port) self.max_pool_size = max_pool_size self.max_timeout_seconds = max_timeout_seconds self.local_mode = local_mode @@ -414,7 +414,7 @@ def __init__( @classmethod def generate_server_id(cls, host: str, port: int) -> str: """Generate server id""" - return generate_id_from_seed(f"{host}:{port}:{time.time()}", length=8) + return _generate_id_from_seed(f"{host}:{port}:{time.time()}", length=8) def _launch_in_main(self) -> None: """Launch agent server in main-process""" diff --git a/src/agentscope/server/servicer.py b/src/agentscope/server/servicer.py index 50f175fb9..ec325f155 100644 --- a/src/agentscope/server/servicer.py +++ b/src/agentscope/server/servicer.py @@ -17,7 +17,7 @@ from google.protobuf.empty_pb2 import Empty from expiringdict import ExpiringDict except ImportError as import_error: - from agentscope.utils.tools import ImportErrorReporter + from agentscope.utils.common import ImportErrorReporter dill = ImportErrorReporter(import_error, "distribute") psutil = ImportErrorReporter(import_error, "distribute") @@ -30,14 +30,11 @@ ExpiringDict = ImportErrorReporter(import_error, "distribute") import agentscope.rpc.rpc_agent_pb2 as agent_pb2 +from agentscope.serialize import deserialize, serialize from agentscope.agents.agent import AgentBase from agentscope.manager import ModelManager from agentscope.rpc.rpc_agent_pb2_grpc import RpcAgentServicer -from agentscope.message import ( - Msg, - PlaceholderMessage, - deserialize, -) +from agentscope.message import Msg, PlaceholderMessage class _AgentError: @@ -312,7 +309,7 @@ def update_placeholder( else: return agent_pb2.GeneralResponse( ok=True, - message=result.serialize(), + message=serialize(result), ) def get_agent_list( @@ -327,7 +324,8 @@ def get_agent_list( summaries.append(str(agent)) return agent_pb2.GeneralResponse( ok=True, - message=json.dumps(summaries), + # TODO: unified into serialize function to avoid error. 
+ message=serialize(summaries), ) def get_server_info( @@ -343,7 +341,7 @@ def get_server_info( status["cpu"] = process.cpu_percent(interval=1) status["mem"] = process.memory_info().rss / (1024**2) status["size"] = len(self.agent_pool) - return agent_pb2.GeneralResponse(ok=True, message=json.dumps(status)) + return agent_pb2.GeneralResponse(ok=True, message=serialize(status)) def set_model_configs( self, @@ -381,7 +379,7 @@ def get_agent_memory( ) return agent_pb2.GeneralResponse( ok=True, - message=json.dumps(agent.memory.get_memory()), + message=serialize(agent.memory.get_memory()), ) def download_file( @@ -430,11 +428,7 @@ def _reply(self, request: agent_pb2.RpcMsg) -> agent_pb2.GeneralResponse: ) return agent_pb2.GeneralResponse( ok=True, - message=Msg( # type: ignore[arg-type] - name=self.get_agent(request.agent_id).name, - content=None, - task_id=task_id, - ).serialize(), + message=str(task_id), ) def _observe(self, request: agent_pb2.RpcMsg) -> agent_pb2.GeneralResponse: @@ -448,9 +442,13 @@ def _observe(self, request: agent_pb2.RpcMsg) -> agent_pb2.GeneralResponse: `RpcMsg`: Empty RpcMsg. """ msgs = deserialize(request.value) - for msg in msgs: - if isinstance(msg, PlaceholderMessage): - msg.update_value() + if isinstance(msgs, list): + for msg in msgs: + if isinstance(msg, PlaceholderMessage): + msg.update_value() + elif isinstance(msgs, PlaceholderMessage): + msgs.update_value() + self.agent_pool[request.agent_id].observe(msgs) return agent_pb2.GeneralResponse(ok=True) @@ -458,14 +456,14 @@ def _process_messages( self, task_id: int, agent_id: str, - task_msg: dict = None, + task_msg: Msg = None, ) -> None: """Processing an input message and generate its reply message. Args: - task_id (`int`): task id of the input message, . + task_id (`int`): task id of the input message. agent_id (`str`): the id of the agent that accepted the message. - task_msg (`dict`): the input message. + task_msg (`Msg`): the input message. 
""" if isinstance(task_msg, PlaceholderMessage): task_msg.update_value() diff --git a/src/agentscope/service/__init__.py b/src/agentscope/service/__init__.py index b7a2471aa..bce6878f1 100644 --- a/src/agentscope/service/__init__.py +++ b/src/agentscope/service/__init__.py @@ -22,6 +22,11 @@ from .sql_query.mongodb import query_mongodb from .web.search import bing_search, google_search from .web.arxiv import arxiv_search +from .web.tripadvisor import ( + tripadvisor_search_location_photos, + tripadvisor_search, + tripadvisor_search_location_details, +) from .web.dblp import ( dblp_search_publications, dblp_search_authors, @@ -51,6 +56,11 @@ from .web.web_digest import digest_webpage, load_web, parse_html_to_text from .web.download import download_from_url +from .web.wikipedia import ( + wikipedia_search, + wikipedia_search_categories, +) + def get_help() -> None: """Get help message.""" @@ -80,6 +90,8 @@ def get_help() -> None: "bing_search", "google_search", "arxiv_search", + "wikipedia_search", + "wikipedia_search_categories", "query_mysql", "query_sqlite", "query_mongodb", @@ -103,6 +115,9 @@ def get_help() -> None: "openai_image_to_text", "openai_edit_image", "openai_create_image_variation", + "tripadvisor_search", + "tripadvisor_search_location_photos", + "tripadvisor_search_location_details", # to be deprecated "ServiceFactory", ] diff --git a/src/agentscope/service/execute_code/exec_notebook.py b/src/agentscope/service/execute_code/exec_notebook.py index bbd697121..f296c41b0 100644 --- a/src/agentscope/service/execute_code/exec_notebook.py +++ b/src/agentscope/service/execute_code/exec_notebook.py @@ -13,7 +13,7 @@ from nbclient.exceptions import CellTimeoutError, DeadKernelError import nbformat except ImportError as import_error: - from agentscope.utils.tools import ImportErrorReporter + from agentscope.utils.common import ImportErrorReporter nbclient = ImportErrorReporter(import_error) nbformat = ImportErrorReporter(import_error) diff --git a/src/agentscope/service/execute_code/exec_python.py b/src/agentscope/service/execute_code/exec_python.py index c2491f3eb..2cde33740 100644 --- a/src/agentscope/service/execute_code/exec_python.py +++ b/src/agentscope/service/execute_code/exec_python.py @@ -27,10 +27,10 @@ except (ModuleNotFoundError, ImportError): resource = None -from agentscope.utils.common import create_tempdir, timer -from agentscope.service.service_status import ServiceExecStatus -from agentscope.service.service_response import ServiceResponse -from agentscope.constants import ( +from ...utils.common import create_tempdir, timer +from ..service_status import ServiceExecStatus +from ..service_response import ServiceResponse +from ...constants import ( _DEFAULT_PYPI_MIRROR, _DEFAULT_TRUSTED_HOST, ) diff --git a/src/agentscope/service/execute_code/exec_shell.py b/src/agentscope/service/execute_code/exec_shell.py index ffde21b9d..d75e17630 100644 --- a/src/agentscope/service/execute_code/exec_shell.py +++ b/src/agentscope/service/execute_code/exec_shell.py @@ -1,6 +1,9 @@ # -*- coding: utf-8 -*- """Service to execute shell commands.""" import subprocess + +from loguru import logger + from agentscope.service.service_status import ServiceExecStatus from agentscope.service.service_response import ServiceResponse @@ -26,6 +29,19 @@ def execute_shell_command(command: str) -> ServiceResponse: change/edit the files current directory (e.g. rm, sed). ... 
""" + + if any(_ in command for _ in execute_shell_command.insecure_commands): + logger.warning( + f"The command {command} is blocked for security reasons. " + f"If you want to enable the command, try to reset the " + f"insecure command list by executing " + f'`execute_shell_command.insecure_commands = ["xxx", "xxx"]`', + ) + return ServiceResponse( + status=ServiceExecStatus.ERROR, + content=f"The command {command} is blocked for security reasons.", + ) + try: result = subprocess.run( command, @@ -55,3 +71,19 @@ def execute_shell_command(command: str) -> ServiceResponse: status=ServiceExecStatus.ERROR, content=str(e), ) + + +# Security check: Block insecure commands +execute_shell_command.insecure_commands = [ + # System management + "shutdown", + "kill", + "reboot", + "pkill", + # User management + "useradd", + "userdel", + "usermod", + # File management + "rm -rf", +] diff --git a/src/agentscope/service/file/text.py b/src/agentscope/service/file/text.py index 725d08a56..e0e031b0d 100644 --- a/src/agentscope/service/file/text.py +++ b/src/agentscope/service/file/text.py @@ -2,7 +2,6 @@ """ Operators for txt file and directory. """ import os -from agentscope.utils.common import write_file from agentscope.service.service_response import ServiceResponse from agentscope.service.service_status import ServiceExecStatus @@ -59,4 +58,17 @@ def write_text_file( status=ServiceExecStatus.ERROR, content="FileExistsError: The file already exists.", ) - return write_file(content, file_path) + + try: + with open(file_path, "w", encoding="utf-8") as file: + file.write(content) + return ServiceResponse( + status=ServiceExecStatus.SUCCESS, + content="Success", + ) + except Exception as e: + error_message = f"{e.__class__.__name__}: {e}" + return ServiceResponse( + status=ServiceExecStatus.ERROR, + content=error_message, + ) diff --git a/src/agentscope/service/multi_modality/dashscope_services.py b/src/agentscope/service/multi_modality/dashscope_services.py index 04774f588..d3963bbc7 100644 --- a/src/agentscope/service/multi_modality/dashscope_services.py +++ b/src/agentscope/service/multi_modality/dashscope_services.py @@ -20,11 +20,11 @@ # SpeechSynthesizerWrapper is current not available -from agentscope.service.service_response import ( +from ..service_response import ( ServiceResponse, ServiceExecStatus, ) -from agentscope.utils.tools import _download_file +from ...utils.common import _download_file def dashscope_text_to_image( diff --git a/src/agentscope/service/multi_modality/openai_services.py b/src/agentscope/service/multi_modality/openai_services.py index 7e2acba91..b5fd799b1 100644 --- a/src/agentscope/service/multi_modality/openai_services.py +++ b/src/agentscope/service/multi_modality/openai_services.py @@ -13,21 +13,16 @@ import requests -from openai import OpenAI -from openai.types import ImagesResponse -from openai._types import NOT_GIVEN, NotGiven -from agentscope.service.service_response import ( +from ..service_response import ( ServiceResponse, ServiceExecStatus, ) -from agentscope.models.openai_model import ( +from ...models.openai_model import ( OpenAIDALLEWrapper, OpenAIChatWrapper, ) -from agentscope.utils.tools import _download_file - - -from agentscope.message import MessageBase +from ...utils.common import _download_file +from ...message import Msg def _url_to_filename(url: str) -> str: @@ -52,11 +47,10 @@ def _url_to_filename(url: str) -> str: def _handle_openai_img_response( - response: ImagesResponse, + raw_response: dict, save_dir: Optional[str] = None, ) -> Union[str, 
Sequence[str]]: """Handle the response from OpenAI image generation API.""" - raw_response = response.model_dump() if "data" not in raw_response: if "error" in raw_response: error_msg = raw_response["error"]["message"] @@ -278,19 +272,32 @@ def openai_edit_image( 'EDITED_IMAGE_URL2']} > } """ - client = OpenAI(api_key=api_key) + try: + import openai + except ImportError as e: + raise ImportError( + "The `openai` library is not installed. Please install it by " + "running `pip install openai`.", + ) from e + + client = openai.OpenAI(api_key=api_key) # _parse_url handles both local and web URLs and returns BytesIO image = _parse_url(image_url) try: - response = client.images.edit( - model="dall-e-2", - image=image, - mask=_parse_url(mask_url) if mask_url else NOT_GIVEN, - prompt=prompt, - n=n, - size=size, - ) - urls = _handle_openai_img_response(response, save_dir) + kwargs = { + "model": "dall-e-2", + "image": image, + "prompt": prompt, + "n": n, + "size": size, + } + + if mask_url: + kwargs["mask"] = _parse_url(mask_url) + + response = client.images.edit(**kwargs) + + urls = _handle_openai_img_response(response.model_dump(), save_dir) return ServiceResponse( ServiceExecStatus.SUCCESS, {"image_urls": urls}, @@ -352,7 +359,15 @@ def openai_create_image_variation( > 'content': {'image_urls': ['VARIATION_URL1', 'VARIATION_URL2']} > } """ - client = OpenAI(api_key=api_key) + try: + import openai + except ImportError as e: + raise ImportError( + "The `openai` library is not installed. Please install it by " + "running `pip install openai`.", + ) from e + + client = openai.OpenAI(api_key=api_key) # _parse_url handles both local and web URLs and returns BytesIO image = _parse_url(image_url) try: @@ -362,7 +377,7 @@ def openai_create_image_variation( n=n, size=size, ) - urls = _handle_openai_img_response(response, save_dir) + urls = _handle_openai_img_response(response.model_dump(), save_dir) return ServiceResponse( ServiceExecStatus.SUCCESS, {"image_urls": urls}, @@ -375,7 +390,7 @@ def openai_create_image_variation( def openai_image_to_text( - image_urls: Union[str, Sequence[str]], + image_urls: Union[str, list[str]], api_key: str, prompt: str = "Describe the image", model: Literal["gpt-4o", "gpt-4-turbo"] = "gpt-4o", @@ -385,7 +400,7 @@ def openai_image_to_text( return the generated text. Args: - image_urls (`Union[str, Sequence[str]]`): + image_urls (`Union[str, list[str]]`): The URL or list of URLs pointing to the images that need to be described. api_key (`str`): @@ -420,7 +435,7 @@ def openai_image_to_text( model_name=model, api_key=api_key, ) - messages = MessageBase( + messages = Msg( name="service_call", role="user", content=prompt, @@ -502,7 +517,15 @@ def openai_text_to_audio( > 'content': {'audio_path': './audio_files/Hello,_welco.mp3'} > } """ - client = OpenAI(api_key=api_key) + try: + import openai + except ImportError as e: + raise ImportError( + "The `openai` library is not installed. Please install it by " + "running `pip install openai`.", + ) from e + + client = openai.OpenAI(api_key=api_key) save_name = _audio_filename(text) if os.path.isabs(save_dir): save_path = os.path.join(save_dir, f"{save_name}.{res_format}") @@ -535,7 +558,7 @@ def openai_text_to_audio( def openai_audio_to_text( audio_file_url: str, api_key: str, - language: Union[str, NotGiven] = NOT_GIVEN, + language: str = "en", temperature: float = 0.2, ) -> ServiceResponse: """ @@ -547,9 +570,10 @@ def openai_audio_to_text( transcribed. api_key (`str`): The API key for the OpenAI API. 
- language (`Union[str, NotGiven]`, defaults to `NotGiven()`): - The language of the audio. If not specified, the language will - be auto-detected. + language (`str`, defaults to `"en"`): + The language of the input audio. Supplying the input language in + [ISO-639-1](https://en.wikipedia.org/wiki/List_of_ISO_639-1_codes) + format will improve accuracy and latency. temperature (`float`, defaults to `0.2`): The temperature for the transcription, which affects the randomness of the output. @@ -575,7 +599,15 @@ def openai_audio_to_text( the audio file.'} > } """ - client = OpenAI(api_key=api_key) + try: + import openai + except ImportError as e: + raise ImportError( + "The `openai` library is not installed. Please install it by " + "running `pip install openai`.", + ) from e + + client = openai.OpenAI(api_key=api_key) audio_file_url = os.path.abspath(audio_file_url) with open(audio_file_url, "rb") as audio_file: try: diff --git a/src/agentscope/service/web/dblp.py b/src/agentscope/service/web/dblp.py index 7d6ab9c1c..91ed9aac8 100644 --- a/src/agentscope/service/web/dblp.py +++ b/src/agentscope/service/web/dblp.py @@ -7,7 +7,7 @@ ServiceResponse, ServiceExecStatus, ) -from agentscope.utils.common import requests_get +from ...utils.common import _requests_get def dblp_search_publications( @@ -92,7 +92,7 @@ def dblp_search_publications( "f": start, "c": num_completion, } - search_results = requests_get(url, params) + search_results = _requests_get(url, params) if isinstance(search_results, str): return ServiceResponse(ServiceExecStatus.ERROR, search_results) @@ -204,7 +204,7 @@ def dblp_search_authors( "f": start, "c": num_completion, } - search_results = requests_get(url, params) + search_results = _requests_get(url, params) if isinstance(search_results, str): return ServiceResponse(ServiceExecStatus.ERROR, search_results) hits = search_results.get("result", {}).get("hits", {}).get("hit", []) @@ -297,7 +297,7 @@ def dblp_search_venues( "f": start, "c": num_completion, } - search_results = requests_get(url, params) + search_results = _requests_get(url, params) if isinstance(search_results, str): return ServiceResponse(ServiceExecStatus.ERROR, search_results) diff --git a/src/agentscope/service/web/search.py b/src/agentscope/service/web/search.py index b5ff7e59f..c748a3cbc 100644 --- a/src/agentscope/service/web/search.py +++ b/src/agentscope/service/web/search.py @@ -1,9 +1,9 @@ # -*- coding: utf-8 -*- """Search question in the web""" from typing import Any -from agentscope.service.service_response import ServiceResponse -from agentscope.utils.common import requests_get -from agentscope.service.service_status import ServiceExecStatus +from ..service_response import ServiceResponse +from ...utils.common import _requests_get +from ..service_status import ServiceExecStatus def bing_search( @@ -85,7 +85,7 @@ def bing_search( headers = {"Ocp-Apim-Subscription-Key": api_key} - search_results = requests_get( + search_results = _requests_get( bing_search_url, params, headers, @@ -173,7 +173,7 @@ def google_search( if kwargs: params.update(**kwargs) - search_results = requests_get(google_search_url, params) + search_results = _requests_get(google_search_url, params) if isinstance(search_results, str): return ServiceResponse(ServiceExecStatus.ERROR, search_results) diff --git a/src/agentscope/service/web/tripadvisor.py b/src/agentscope/service/web/tripadvisor.py new file mode 100644 index 000000000..fa7deb0a1 --- /dev/null +++ b/src/agentscope/service/web/tripadvisor.py @@ -0,0 +1,538 @@ +# -*- coding: 
utf-8 -*- +"""TripAdvisor APIs for searching and retrieving location information.""" + +from loguru import logger +import requests +from ..service_response import ServiceResponse +from ..service_status import ServiceExecStatus + + +def tripadvisor_search_location_photos( + api_key: str, + location_id: str = None, + query: str = None, + language: str = "en", +) -> ServiceResponse: + """ + Retrieve photos for a specific location using the TripAdvisor API. + + Args: + api_key (`str`): + Your TripAdvisor API key. + location_id (`str`, optional): + The unique identifier for a location on Tripadvisor. The location + ID can be obtained using the tripadvisor_search function + query (`str`, optional): + The search query to find a location. Required if + location_id is not provided. + language (`str`, optional): + The language for the response. Defaults to 'en'. + + Returns: + `ServiceResponse`: A dictionary with two variables: `status` and + `content`. The `status` variable is from the ServiceExecStatus enum, + and `content` is the JSON response from TripAdvisor API or error + information, which depends on the `status` variable. + + If successful, the `content` will be a dictionary + with the following structure: + + .. code-block:: json + + { + 'photo_data': { + 'data': [ + { + 'id': int, + 'is_blessed': bool, + 'caption': str, + 'published_date': str, + 'images': { + 'thumbnail': { + 'height': int, + 'width': int, + 'url': str + }, + 'small': { + 'height': int, + 'width': int, + 'url': str + }, + 'medium': { + 'height': int, + 'width': int, + 'url': str + }, + 'large': { + 'height': int, + 'width': int, + 'url': str + }, + 'original': { + 'height': int, + 'width': int, + 'url': str + } + }, + 'album': str, + 'source': {'name': str, 'localized_name': str}, + 'user': {'username': str} + }, + ... + ] + } + } + + Each item in the 'data' list represents a photo associated with the + location. + + Note: + Either `location_id` or `query` must be provided. If both are provided, + `location_id` takes precedence. + + Example: + .. code-block:: python + + # Using location_id + result = tripadvisor_search_location_photos( + "your_api_key", location_id="123456", language="en" + ) + if result.status == ServiceExecStatus.SUCCESS: + print(result.content) + + # Or using a query + result = tripadvisor_search_location_photos( + "your_api_key", query="Eiffel Tower", language="en" + ) + if result.status == ServiceExecStatus.SUCCESS: + print(result.content) + + Example of successful `content`: + { + 'photo_data': { + 'data': [ + { + 'id': 215321638, + 'is_blessed': False, + 'caption': '', + 'published_date': '2016-09-04T20:40:14.284Z', + 'images': { + 'thumbnail': {'height': 50, 'width': 50, + 'url': 'https://media-cdn.../photo0.jpg'}, + 'small': {'height': 150, 'width': 150, + 'url': 'https://media-cdn.../photo0.jpg'}, + 'medium': {'height': 188, 'width': 250, + 'url': 'https://media-cdn.../photo0.jpg'}, + 'large': {'height': 413, 'width': 550, + 'url': 'https://media-cdn.../photo0.jpg'}, + 'original': {'height': 1920, 'width': 2560, + 'url': 'https://media-cdn.../photo0.jpg'} + }, + 'album': 'Other', + 'source': { + 'name': 'Traveler', + 'localized_name': 'Traveler' + }, + 'user': {'username': 'EvaFalleth'} + }, + # ... more photo entries ... + ] + } + } + + Raises: + ValueError: If neither location_id nor query is provided. 
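+
+    A minimal sketch of collecting the full-size photo URLs from a
+    successful response, assuming the key layout documented above:
+
+    .. code-block:: python
+
+        result = tripadvisor_search_location_photos(
+            "your_api_key",
+            query="Eiffel Tower",
+        )
+        if result.status == ServiceExecStatus.SUCCESS:
+            urls = [
+                photo["images"]["original"]["url"]
+                for photo in result.content["photo_data"]["data"]
+            ]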
+ """ + if location_id is None and query is None: + raise ValueError("Either location_id or query must be provided.") + + if location_id is None: + # Use search_tripadvisor to get the location_id + search_result = tripadvisor_search(api_key, query, language) + if search_result.status != ServiceExecStatus.SUCCESS: + return search_result + + # Get the first location_id from the search results + locations = search_result.content.get("data", []) + if not locations: + return ServiceResponse( + status=ServiceExecStatus.ERROR, + content={"error": "No locations found for the given query."}, + ) + + location_id = locations[0]["location_id"] + logger.info(f"Using location_id {location_id} from search results.") + + # Warning message if there are multiple locations + if len(locations) > 1: + logger.warning( + f"Multiple locations found for query '{query}'. " + f"Using the first result. " + f"Other {len(locations) - 1} results are ignored.", + ) + + # Now proceed with the original function logic using the location_id + url = ( + f"https://api.content.tripadvisor.com/api/v1/location/{location_id}/" + f"photos?language={language}&key={api_key}" + ) + headers = { + "accept": "application/json", + } + + logger.info(f"Requesting photos for location ID {location_id}") + + try: + response = requests.get(url, headers=headers, timeout=20) + logger.info( + f"Received response with status code {response.status_code}", + ) + + if response.status_code == 200: + logger.info("Successfully retrieved the photo") + return ServiceResponse( + status=ServiceExecStatus.SUCCESS, + content=response.json(), + ) + error_detail = ( + response.json() + .get("error", {}) + .get("message", f"HTTP Error: {response.status_code}") + ) + logger.error(f"Error in response: {error_detail}") + return ServiceResponse( + status=ServiceExecStatus.ERROR, + content={"error": error_detail}, + ) + except Exception as e: + logger.exception("Exception occurred while requesting location photos") + return ServiceResponse( + status=ServiceExecStatus.ERROR, + content={"error": str(e)}, + ) + + +def tripadvisor_search( + api_key: str, + query: str, + language: str = "en", +) -> ServiceResponse: + """ + Search for locations using the TripAdvisor API. + + Args: + api_key (`str`): + Your TripAdvisor API key. + query (`str`): + The search query. + language (`str`, optional): + The language for the response. Defaults to 'en'. + + Returns: + `ServiceResponse`: A dictionary with two variables: `status` and + `content`. The `status` variable is from the ServiceExecStatus enum, + and `content` is the JSON response from TripAdvisor API or error + information, which depends on the `status` variable. + + If successful, the `content` will be a + dictionary with the following structure: + { + 'data': [ + { + 'location_id': str, + 'name': str, + 'address_obj': { + 'street1': str, + 'street2': str, + 'city': str, + 'state': str, + 'country': str, + 'postalcode': str, + 'address_string': str + } + }, + ... + ] + } + Each item in the 'data' list represents + a location matching the search query. + + Example: + .. 
code-block:: python + + result = tripadvisor_search("your_api_key", "Socotra", "en") + if result.status == ServiceExecStatus.SUCCESS: + print(result.content) + + Example of successful `content`: + { + 'data': [ + { + 'location_id': '574818', + 'name': 'Socotra Island', + 'address_obj': { + 'street2': '', + 'city': 'Aden', + 'country': 'Yemen', + 'postalcode': '', + 'address_string': 'Aden Yemen' + } + }, + { + 'location_id': '25395815', + 'name': 'Tour Socotra', + 'address_obj': { + 'street1': '20th Street', + 'city': 'Socotra Island', + 'state': 'Socotra Island', + 'country': 'Yemen', + 'postalcode': '111', + 'address_string': + '20th Street, Socotra Island 111 Yemen' + } + }, + # ... more results ... + ] + } + """ + url = ( + f"https://api.content.tripadvisor.com/api/v1/location/search?" + f"searchQuery={query}&language={language}&key={api_key}" + ) + headers = { + "accept": "application/json", + } + + logger.info(f"Searching for locations with query '{query}'") + + try: + response = requests.get(url, headers=headers, timeout=20) + logger.info( + f"Received response with status code {response.status_code}", + ) + + if response.status_code == 200: + logger.info("Successfully retrieved search results") + return ServiceResponse( + status=ServiceExecStatus.SUCCESS, + content=response.json(), + ) + error_detail = ( + response.json() + .get("error", {}) + .get("message", f"HTTP Error: {response.status_code}") + ) + logger.error(f"Error in response: {error_detail}") + return ServiceResponse( + status=ServiceExecStatus.ERROR, + content={"error": error_detail}, + ) + except Exception as e: + logger.exception("Exception occurred while searching for locations") + return ServiceResponse( + status=ServiceExecStatus.ERROR, + content={"error": str(e)}, + ) + + +def tripadvisor_search_location_details( + api_key: str, + location_id: str = None, + query: str = None, + language: str = "en", + currency: str = "USD", +) -> ServiceResponse: + """ + Get detailed information about a specific location using the TripAdvisor API. + + Args: + api_key (`str`): + Your TripAdvisor API key. + location_id (`str`, optional): + The unique identifier for the location. Required if + query is not provided. + query (`str`, optional): + The search query to find a location. Required if + location_id is not provided. + language (`str`, optional): + The language for the response. Defaults to 'en'; use 'zh' for Chinese. + currency (`str`, optional): + The currency code to use for request and response + (should follow ISO 4217). Defaults to 'USD'. + + Returns: + `ServiceResponse`: A dictionary with two variables: `status` and + `content`. The `status` variable is from the ServiceExecStatus enum, + and `content` is the JSON response from TripAdvisor API or error + information, which depends on the `status` variable. + + If successful, the `content` will be a dictionary with + detailed information about the location, including + name, address, ratings, reviews, and more. + + Note: + Either `location_id` or `query` must be provided. If both are provided, + `location_id` takes precedence. + + Example: + ..
code-block:: python + + # Using location_id + result = tripadvisor_search_location_details( + "your_api_key", + location_id="574818", + language="en", + currency="USD" + ) + if result.status == ServiceExecStatus.SUCCESS: + print(result.content) + + # Or using a query + result = tripadvisor_search_location_details( + "your_api_key", + query="Socotra Island", + language="en", + currency="USD" + ) + if result.status == ServiceExecStatus.SUCCESS: + print(result.content) + + Example of successful `content`: + { + 'location_id': '574818', + 'name': 'Socotra Island', + 'web_url': 'https://www.tripadvisor.com/Attraction_Review...', + 'address_obj': { + 'street2': '', + 'city': 'Aden', + 'country': 'Yemen', + 'postalcode': '', + 'address_string': 'Aden Yemen' + }, + 'ancestors': [ + {'level': 'City', 'name': 'Aden', 'location_id': '298087'}, + {'level': 'Country', 'name': 'Yemen', 'location_id': '294014'} + ], + 'latitude': '12.46342', + 'longitude': '53.82374', + 'timezone': 'Asia/Aden', + 'write_review': 'https://www.tripadvisor.com/UserReview...', + 'ranking_data': { + 'geo_location_id': '298087', + 'ranking_string': '#1 of 7 things to do in Aden', + 'geo_location_name': 'Aden', + 'ranking_out_of': '7', + 'ranking': '1' + }, + 'rating': '5.0', + 'rating_image_url': 'https://www.tripadvisor.com/.../5.svg', + 'num_reviews': '62', + 'review_rating_count': { + '1': '1', + '2': '0', + '3': '1', + '4': '1', + '5': '59', + }, + 'photo_count': '342', + 'see_all_photos': 'https://www.tripadvisor.com/Attraction...', + 'category': {'name': 'attraction', 'localized_name': 'Attraction'}, + 'subcategory': [ + {'name': 'nature_parks', 'localized_name': 'Nature & Parks'}, + {'name': 'attractions', 'localized_name': 'Attractions'} + ], + 'groups': [ + { + 'name': 'Nature & Parks', + 'localized_name': 'Nature & Parks', + 'categories': [{'name': 'Islands', + 'localized_name': 'Islands'}] + } + ], + 'neighborhood_info': [], + 'trip_types': [ + {'name': 'business', 'localized_name': + 'Business', 'value': '2'}, + {'name': 'couples', 'localized_name': + 'Couples', 'value': '10'}, + {'name': 'solo', 'localized_name': + 'Solo travel', 'value': '11'}, + {'name': 'family', 'localized_name': + 'Family', 'value': '2'}, + {'name': 'friends', 'localized_name': + 'Friends getaway', 'value': '22'} + ], + 'awards': [] + } + + Raises: + ValueError: If neither location_id nor query is provided. + """ # noqa + if location_id is None and query is None: + raise ValueError("Either location_id or query must be provided.") + + if location_id is None: + # Use tripadvisor_search to get the location_id + search_result = tripadvisor_search(api_key, query, language) + if search_result.status != ServiceExecStatus.SUCCESS: + return search_result + + # Get the first location_id from the search results + locations = search_result.content.get("data", []) + if not locations: + return ServiceResponse( + status=ServiceExecStatus.ERROR, + content={"error": "No locations found for the given query."}, + ) + + location_id = locations[0]["location_id"] + logger.info(f"Using location_id {location_id} from search results.") + + # Warning message if there are multiple locations + if len(locations) > 1: + logger.warning( + f"Multiple locations found for query '{query}'. " + f"Using the first result. 
" + f"Other {len(locations) - 1} results are ignored.", + ) + + url = ( + f"https://api.content.tripadvisor.com/api/v1/location/{location_id}/" + f"details?language={language}¤cy={currency}&key={api_key}" + ) + headers = { + "accept": "application/json", + } + + logger.info(f"Requesting details for location ID {location_id}") + + try: + response = requests.get(url, headers=headers, timeout=20) + logger.info( + f"Received response with status code {response.status_code}", + ) + + if response.status_code == 200: + logger.info("Successfully retrieved location details") + return ServiceResponse( + status=ServiceExecStatus.SUCCESS, + content=response.json(), + ) + error_detail = ( + response.json() + .get("error", {}) + .get("message", f"HTTP Error: {response.status_code}") + ) + logger.error(f"Error in response: {error_detail}") + return ServiceResponse( + status=ServiceExecStatus.ERROR, + content={"error": error_detail}, + ) + except Exception as e: + logger.exception( + "Exception occurred while requesting location details", + ) + return ServiceResponse( + status=ServiceExecStatus.ERROR, + content={"error": str(e)}, + ) diff --git a/src/agentscope/service/web/wikipedia.py b/src/agentscope/service/web/wikipedia.py new file mode 100644 index 000000000..ea10a8f18 --- /dev/null +++ b/src/agentscope/service/web/wikipedia.py @@ -0,0 +1,161 @@ +# -*- coding: utf-8 -*- +""" +Search contents from WikiPedia +""" +import requests + +from ..service_response import ( + ServiceResponse, + ServiceExecStatus, +) + + +def wikipedia_search_categories( + query: str, + max_members: int = 1000, +) -> ServiceResponse: + """Retrieve categories from Wikipedia:Category pages. + + Args: + query (str): + The given searching keywords + max_members (int): + The maximum number of members to output + + Returns: + `ServiceResponse`: A response that contains the execution status and + returned content. In the returned content, the meanings of keys: + - "pageid": unique page ID for the member + - "ns": namespace for the member + - "title": title of the member + + Example: + + .. code-block:: python + + members = wiki_get_category_members( + "Machine_learning", + max_members=10 + ) + print(members) + + It returns contents: + + .. code-block:: python + + { + 'status': , + 'content': [ + { + 'pageid': 67911196, + 'ns': 0, + 'title': 'Bayesian learning mechanisms' + }, + { + 'pageid': 233488, + 'ns': 0, + 'title': 'Machine learning' + }, + # ... 
+ ] + } + + """ + url = "https://en.wikipedia.org/w/api.php" + limit_per_request: int = 500 + params = { + "action": "query", + "list": "categorymembers", + "cmtitle": f"Category:{query}", + "cmlimit": limit_per_request, # Maximum number of results per request + "format": "json", + } + + members = [] + total_fetched = 0 + + try: + while total_fetched < max_members: + response = requests.get(url, params=params, timeout=20) + response.raise_for_status() + + data = response.json() + + batch_members = data["query"]["categorymembers"] + members.extend(batch_members) + total_fetched += len(batch_members) + + # Check if there is a continuation token + if "continue" in data and total_fetched < max_members: + params["cmcontinue"] = data["continue"]["cmcontinue"] + else: + break + + except Exception as e: + return ServiceResponse( + status=ServiceExecStatus.ERROR, + content=str(e), + ) + + # If more members were fetched than max_members, trim the list + if len(members) > max_members: + members = members[:max_members] + + if len(members) > 0: + return ServiceResponse(ServiceExecStatus.SUCCESS, members) + + return ServiceResponse(ServiceExecStatus.ERROR, members) + + +def wikipedia_search( # pylint: disable=C0301 + query: str, +) -> ServiceResponse: + """Search the given query in Wikipedia. Note the returned text may be related entities, which means you should adjust your query as needed and search again. + + Note the returned text may be too long for some LLMs; it's recommended to + summarize the returned text first. + + Args: + query (`str`): + The query to search in Wikipedia. + + Returns: + `ServiceResponse`: A response that contains the execution status and + returned content. + """ # noqa + + url = "https://en.wikipedia.org/w/api.php" + params = { + "action": "query", + "titles": query, + "prop": "extracts", + "explaintext": True, + "format": "json", + } + try: + response = requests.get(url, params=params, timeout=20) + response.raise_for_status() + data = response.json() + + # Combine into a text + text = [] + for page in data["query"]["pages"].values(): + if "extract" in page: + text.append(page["extract"]) + else: + return ServiceResponse( + status=ServiceExecStatus.ERROR, + content="No content found", + ) + + content = "\n".join(text) + return ServiceResponse( + status=ServiceExecStatus.SUCCESS, + content=content, + ) + + except Exception as e: + return ServiceResponse( + status=ServiceExecStatus.ERROR, + content=str(e), + ) diff --git a/src/agentscope/studio/_app.py b/src/agentscope/studio/_app.py index 1b4696db1..81ed58b61 100644 --- a/src/agentscope/studio/_app.py +++ b/src/agentscope/studio/_app.py @@ -16,6 +16,7 @@ Flask, request, jsonify, + session, render_template, Response, abort, @@ -25,9 +26,14 @@ from flask_sqlalchemy import SQLAlchemy from flask_socketio import SocketIO, join_room, leave_room -from ..constants import _DEFAULT_SUBDIR_CODE, _DEFAULT_SUBDIR_INVOKE +from ..constants import ( + _DEFAULT_SUBDIR_CODE, + _DEFAULT_SUBDIR_INVOKE, + FILE_SIZE_LIMIT, + FILE_COUNT_LIMIT, +) from ._studio_utils import _check_and_convert_id_type -from ..utils.tools import ( +from ..utils.common import ( _is_process_alive, _is_windows, _generate_new_runtime_id, @@ -671,6 +677,134 @@ def _read_examples() -> Response: return jsonify(json=data) + +@_app.route("/save-workflow", methods=["POST"]) +def _save_workflow() -> Response: + """ + Save the workflow JSON data to the local user folder. 
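+
+    Expects a JSON body with ``filename``, ``workflow`` (a JSON string
+    encoding a dict) and an optional ``overwrite`` flag. A minimal
+    client-side sketch, assuming the Studio server runs on
+    ``localhost:5000``:
+
+    .. code-block:: python
+
+        import requests
+
+        requests.post(
+            "http://localhost:5000/save-workflow",
+            json={
+                "filename": "my_workflow",
+                "workflow": '{"drawflow": {"Home": {"data": {}}}}',
+                "overwrite": False,
+            },
+        )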
+ """ + user_login = session.get("user_login", "local_user") + user_dir = os.path.join(_cache_dir, user_login) + if not os.path.exists(user_dir): + os.makedirs(user_dir) + + data = request.json + overwrite = data.get("overwrite", False) + filename = data.get("filename") + workflow_str = data.get("workflow") + if not filename: + return jsonify({"message": "Filename is required"}) + + filepath = os.path.join(user_dir, f"{filename}.json") + + try: + workflow = json.loads(workflow_str) + if not isinstance(workflow, dict): + raise ValueError + except (json.JSONDecodeError, ValueError): + return jsonify({"message": "Invalid workflow data"}) + + workflow_json = json.dumps(workflow, ensure_ascii=False, indent=4) + if len(workflow_json.encode("utf-8")) > FILE_SIZE_LIMIT: + return jsonify( + { + "message": f"The workflow file size exceeds " + f"{FILE_SIZE_LIMIT/(1024*1024)} MB limit", + }, + ) + + user_files = [ + f + for f in os.listdir(user_dir) + if os.path.isfile(os.path.join(user_dir, f)) + ] + + if len(user_files) >= FILE_COUNT_LIMIT and not os.path.exists(filepath): + return jsonify( + { + "message": f"You have reached the limit of " + f"{FILE_COUNT_LIMIT} workflow files, please " + f"delete some files.", + }, + ) + + if overwrite: + with open(filepath, "w", encoding="utf-8") as f: + json.dump(workflow, f, ensure_ascii=False, indent=4) + else: + if os.path.exists(filepath): + return jsonify({"message": "Workflow file exists!"}) + else: + with open(filepath, "w", encoding="utf-8") as f: + json.dump(workflow, f, ensure_ascii=False, indent=4) + + return jsonify({"message": "Workflow file saved successfully"}) + + +@_app.route("/delete-workflow", methods=["POST"]) +def _delete_workflow() -> Response: + """ + Deletes a workflow JSON file from the user folder. + """ + user_login = session.get("user_login", "local_user") + user_dir = os.path.join(_cache_dir, user_login) + if not os.path.exists(user_dir): + os.makedirs(user_dir) + + data = request.json + filename = data.get("filename") + if not filename: + return jsonify({"error": "Filename is required"}) + + filepath = os.path.join(user_dir, filename) + if not os.path.exists(filepath): + return jsonify({"error": "File not found"}) + + try: + os.remove(filepath) + return jsonify({"message": "Workflow file deleted successfully"}) + except Exception as e: + return jsonify({"error": str(e)}) + + +@_app.route("/list-workflows", methods=["POST"]) +def _list_workflows() -> Response: + """ + Get all workflow JSON files in the user folder. + """ + user_login = session.get("user_login", "local_user") + user_dir = os.path.join(_cache_dir, user_login) + if not os.path.exists(user_dir): + os.makedirs(user_dir) + + files = [file for file in os.listdir(user_dir) if file.endswith(".json")] + return jsonify(files=files) + + +@_app.route("/load-workflow", methods=["POST"]) +def _load_workflow() -> Response: + """ + Reads and returns workflow data from the specified JSON file. 
+ """ + user_login = session.get("user_login", "local_user") + user_dir = os.path.join(_cache_dir, user_login) + if not os.path.exists(user_dir): + os.makedirs(user_dir) + + data = request.json + filename = data.get("filename") + if not filename: + return jsonify({"error": "Filename is required"}), 400 + + filepath = os.path.join(user_dir, filename) + if not os.path.exists(filepath): + return jsonify({"error": "File not found"}), 404 + + with open(filepath, "r", encoding="utf-8") as f: + json_data = json.load(f) + + return jsonify(json_data) + + @_app.route("/") def _home() -> str: """Render the home page.""" diff --git a/src/agentscope/studio/_app_online.py b/src/agentscope/studio/_app_online.py new file mode 100644 index 000000000..6a23331b4 --- /dev/null +++ b/src/agentscope/studio/_app_online.py @@ -0,0 +1,395 @@ +# -*- coding: utf-8 -*- +"""The Web Server of the AgentScope Workstation Online Version.""" +import ipaddress +import json +import os +import secrets +import tempfile +from typing import Tuple, Any +from datetime import timedelta + +import requests +import oss2 +from loguru import logger +from flask import ( + Flask, + Response, + request, + redirect, + session, + url_for, + render_template, + jsonify, + make_response, +) +from flask_babel import Babel, refresh +from dotenv import load_dotenv + +from agentscope.constants import EXPIRATION_SECONDS, FILE_SIZE_LIMIT +from agentscope.studio.utils import _require_auth, generate_jwt +from agentscope.studio._app import ( + _convert_config_to_py, + _read_examples, + _save_workflow, + _delete_workflow, + _list_workflows, + _load_workflow, +) + +_app = Flask(__name__) +_app.config["BABEL_DEFAULT_LOCALE"] = "en" + +babel = Babel(_app) + + +def is_ip(address: str) -> bool: + """ + Check whether the IP is the domain or not. + """ + try: + ipaddress.ip_address(address) + return True + except ValueError: + return False + + +def get_locale() -> str: + """ + Get current language type. 
+ """ + cookie = request.cookies.get("locale") + if cookie in ["zh", "en"]: + return cookie + return request.accept_languages.best_match( + _app.config.get("BABEL_DEFAULT_LOCALE"), + ) + + +babel.init_app(_app, locale_selector=get_locale) + +load_dotenv(override=True) + +SECRET_KEY = os.getenv("SECRET_KEY") or os.urandom(24) +_app.config["SECRET_KEY"] = SECRET_KEY +_app.config["PERMANENT_SESSION_LIFETIME"] = timedelta(days=1) +_app.config["SESSION_TYPE"] = os.getenv("SESSION_TYPE", "filesystem") +if os.getenv("LOCAL_WORKSTATION", "false").lower() == "true": + LOCAL_WORKSTATION = True + IP = "127.0.0.1" + COPILOT_IP = "127.0.0.1" +else: + LOCAL_WORKSTATION = False + IP = os.getenv("IP", "127.0.0.1") + COPILOT_IP = os.getenv("COPILOT_IP", "127.0.0.1") + +PORT = os.getenv("PORT", "8080") +COPILOT_PORT = os.getenv("COPILOT_PORT", "8081") + +if not is_ip(IP): + PORT = "" +if not is_ip(COPILOT_IP): + COPILOT_PORT = "" + +CLIENT_ID = os.getenv("CLIENT_ID") +OWNER = os.getenv("OWNER") +REPO = os.getenv("REPO") +OSS_ENDPOINT = os.getenv("OSS_ENDPOINT") +OSS_BUCKET_NAME = os.getenv("OSS_BUCKET_NAME") +OSS_ACCESS_KEY_ID = os.getenv("OSS_ACCESS_KEY_ID") +OSS_ACCESS_KEY_SECRET = os.getenv("OSS_ACCESS_KEY_SECRET") +CLIENT_SECRET = os.getenv("CLIENT_SECRET") + +required_envs = { + "OSS_ACCESS_KEY_ID": OSS_ACCESS_KEY_ID, + "OSS_ACCESS_KEY_SECRET": OSS_ACCESS_KEY_SECRET, + "CLIENT_SECRET": CLIENT_SECRET, +} + +for key, value in required_envs.items(): + if not value: + logger.warning(f"{key} is not set on envs!") + + +def get_oss_config() -> Tuple: + """ + Obtain oss related configs. + """ + return ( + OSS_ACCESS_KEY_ID, + OSS_ACCESS_KEY_SECRET, + OSS_ENDPOINT, + OSS_BUCKET_NAME, + ) + + +def upload_to_oss( + bucket: str, + local_file_path: str, + oss_file_path: str, + is_private: bool = False, +) -> str: + """ + Upload content to oss. + """ + bucket.put_object_from_file(oss_file_path, local_file_path) + if not is_private: + bucket.put_object_acl(oss_file_path, oss2.OBJECT_ACL_PUBLIC_READ) + file_url = ( + f"https://{bucket.bucket_name}" + f".{bucket.endpoint.replace('http://', '')}/{oss_file_path}" + ) + return file_url + + +def generate_verification_token() -> str: + """ + Generate token. + """ + return secrets.token_urlsafe() + + +def star_repository(access_token: str) -> int: + """ + Star the Repo. + """ + url = f"https://api.github.com/user/starred/{OWNER}/{REPO}" + headers = { + "Authorization": f"token {access_token}", + "Content-Length": "0", + "Accept": "application/vnd.github.v3+json", + } + response = requests.put(url, headers=headers) + return response.status_code == 204 + + +def get_user_status(access_token: str) -> Any: + """ + Get user status. + """ + url = "https://api.github.com/user" + headers = { + "Authorization": f"token {access_token}", + "Accept": "application/vnd.github.v3+json", + } + response = requests.get(url, headers=headers) + if response.status_code == 200: + return response.json() + return None + + +@_app.route("/") +def _home() -> str: + """ + Render the login page. + """ + if LOCAL_WORKSTATION: + session["verification_token"] = "verification_token" + session["user_login"] = "local_user" + session["jwt_token"] = generate_jwt( + user_login="local_user", + access_token="access_token", + verification_token="verification_token", + secret_key=SECRET_KEY, + version="online", + ) + return render_template("login.html", client_id=CLIENT_ID, ip=IP, port=PORT) + + +@_app.route("/oauth/callback") +def oauth_callback() -> str: + """ + Github oauth callback. 
+ """ + code = request.args.get("code") + if not code: + return "Error: Code not found." + + token_response = requests.post( + "https://github.com/login/oauth/access_token", + headers={"Accept": "application/json"}, + data={ + "client_id": CLIENT_ID, + "client_secret": CLIENT_SECRET, + "code": code, + }, + ).json() + + access_token = token_response.get("access_token") + user_status = get_user_status(access_token) + if not access_token or not user_status: + return ( + "Error: Access token not found or failed to fetch user " + "information." + ) + + user_login = user_status.get("login") + + if star_repository(access_token=access_token): + verification_token = generate_verification_token() + # Used for compare with `verification_token` in `jwt_token` + session["verification_token"] = verification_token + session["user_login"] = user_login + session["jwt_token"] = generate_jwt( + user_login=user_login, + access_token=access_token, + verification_token=verification_token, + secret_key=SECRET_KEY, + version="online", + ) + + return redirect( + url_for( + "_workstation_online", + ), + ) + else: + return "Error: Unable to star the repository." + + +@_app.route("/workstation") +@_require_auth(secret_key=SECRET_KEY) +def _workstation_online(**kwargs: Any) -> str: + """Render the workstation page.""" + return render_template("workstation.html", **kwargs) + + +@_app.route("/upload-to-oss", methods=["POST"]) +@_require_auth(fail_with_exception=True, secret_key=SECRET_KEY) +def _upload_file_to_oss_online(**kwargs: Any) -> Response: + # pylint: disable=unused-argument + """ + Upload content to oss bucket. + """ + + def write_and_upload(ct: str, user: str) -> str: + with tempfile.NamedTemporaryFile(mode="w", delete=True) as tmp_file: + tmp_file.write(ct) + tmp_file.flush() + ak_id, ak_secret, endpoint, bucket_name = get_oss_config() + + auth = oss2.Auth(ak_id, ak_secret) + bucket = oss2.Bucket(auth, endpoint, bucket_name) + + file_key = f"modelscope_user/{user}_config.json" + + upload_to_oss( + bucket, + tmp_file.name, + file_key, + is_private=True, + ) + + public_url = bucket.sign_url( + "GET", + file_key, + EXPIRATION_SECONDS, + slash_safe=True, + ) + return public_url + + content = request.json.get("data") + user_login = session.get("user_login", "local_user") + + workflow_json = json.dumps(content, ensure_ascii=False, indent=4) + if len(workflow_json.encode("utf-8")) > FILE_SIZE_LIMIT: + return jsonify( + { + "message": f"The workflow data size exceeds " + f"{FILE_SIZE_LIMIT/(1024*1024)} MB limit", + }, + ) + + config_url = write_and_upload(content, user_login) + return jsonify(config_url=config_url) + + +@_app.route("/convert-to-py", methods=["POST"]) +@_require_auth(fail_with_exception=True, secret_key=SECRET_KEY) +def _online_convert_config_to_py(**kwargs: Any) -> Response: + # pylint: disable=unused-argument + """ + Convert json config to python code and send back. + """ + return _convert_config_to_py() + + +@_app.route("/read-examples", methods=["POST"]) +@_require_auth(fail_with_exception=True, secret_key=SECRET_KEY) +def _read_examples_online(**kwargs: Any) -> Response: + # pylint: disable=unused-argument + """ + Read tutorial examples from local file. + """ + return _read_examples() + + +@_app.route("/save-workflow", methods=["POST"]) +@_require_auth(fail_with_exception=True, secret_key=SECRET_KEY) +def _save_workflow_online(**kwargs: Any) -> Response: + # pylint: disable=unused-argument + """ + Save the workflow JSON data to the local user folder. 
+ """ + return _save_workflow() + + +@_app.route("/delete-workflow", methods=["POST"]) +@_require_auth(fail_with_exception=True, secret_key=SECRET_KEY) +def _delete_workflow_online(**kwargs: Any) -> Response: + # pylint: disable=unused-argument + """ + Deletes a workflow JSON file from the user folder. + """ + return _delete_workflow() + + +@_app.route("/list-workflows", methods=["POST"]) +@_require_auth(fail_with_exception=True, secret_key=SECRET_KEY) +def _list_workflows_online(**kwargs: Any) -> Response: + # pylint: disable=unused-argument + """ + Get all workflow JSON files in the user folder. + """ + return _list_workflows() + + +@_app.route("/load-workflow", methods=["POST"]) +@_require_auth(fail_with_exception=True, secret_key=SECRET_KEY) +def _load_workflow_online(**kwargs: Any) -> Response: + # pylint: disable=unused-argument + """ + Reads and returns workflow data from the specified JSON file. + """ + return _load_workflow() + + +@_app.route("/set_locale") +def set_locale() -> Response: + """ + Switch language. + """ + lang = request.args.get("language") + response = make_response(jsonify(message=lang)) + if lang == "en": + refresh() + response.set_cookie("locale", "en") + return response + + if lang == "zh": + refresh() + response.set_cookie("locale", "zh") + return response + + return jsonify({"data": "success"}) + + +if __name__ == "__main__": + import sys + + if len(sys.argv) > 1: + try: + PORT = int(sys.argv[1]) + except ValueError: + print(f"Invalid port number. Using default port {PORT}.") + + _app.run(host="0.0.0.0", port=PORT) diff --git a/src/agentscope/studio/static/css/login.css b/src/agentscope/studio/static/css/login.css new file mode 100644 index 000000000..8ec8a7ea8 --- /dev/null +++ b/src/agentscope/studio/static/css/login.css @@ -0,0 +1,130 @@ +body { + font-family: 'Arial', sans-serif; + background-color: #f0f0f0; + display: flex; + flex-direction: column; + justify-content: center; + align-items: center; + height: 100vh; + margin: 0; +} + +.login-container { + padding: 2rem; + background: #fff; + box-shadow: 0 2px 10px rgba(0, 0, 0, 0.1); + border-radius: 8px; + text-align: center; + width: 100%; + max-width: 80%; +} + +#loginButton { + background-color: #2ea44f; + color: white; + font-size: 18px; + padding: 15px 24px; + border: none; + border-radius: 5px; + cursor: pointer; + box-shadow: 0px 4px 14px -3px rgba(0, 0, 0, 0.4); + transition: background-color 0.3s, transform 0.2s; + margin-top: 1rem; + display: inline-block; + width: 100%; +} + +#loginButton:hover { + background-color: #2c974b; + transform: scale(1.05); +} + +#loginButton:active { + background-color: #258741; + transform: scale(1); +} + +#loginButton:disabled { + background-color: #94d3a2; + cursor: not-allowed; +} + +.terms { + background: #fff; + padding: 20px; + margin: 1rem auto; + box-shadow: 0 0 10px rgba(0, 0, 0, 0.05); + border-radius: 8px; + max-width: 600px; +} + +.terms ul { + margin-left: 20px; +} + +.terms li { + margin-bottom: 10px; +} + +.checkbox { + margin-bottom: 1rem; +} + +.brand-gif { + background: #fff; + box-shadow: 0 0 10px rgba(0, 0, 0, 0.3); + width: 50%; + height: auto; + border-radius: 8px; +} + +.link-like { + color: #707070; + text-decoration: underline; + cursor: pointer; + opacity: 0.15; +} + +.link-like:hover { + opacity: 1.0; +} + +.waiting { + position: fixed; + top: 50%; + left: 50%; + transform: translate(-50%, -50%); + display: flex; + align-items: center; + justify-content: center; + z-index: 1000; + background-color: rgba(255, 255, 255, 0.8); + 
border-radius: 10px; + padding: 20px 40px; + box-shadow: 0px 0px 10px rgba(0, 0, 0, 0.25); + flex-direction: column; +} + +.css-spinner { + border: 4px solid rgba(0, 0, 0, .1); + border-radius: 50%; + border-top: 4px solid #3498db; + width: 40px; + height: 40px; + animation: spin 2s linear infinite; +} + +@keyframes spin { + 0% { + transform: rotate(0deg); + } + 100% { + transform: rotate(360deg); + } +} + +.waiting b { + color: #555; + font-weight: normal; + font-size: 1.5em; +} diff --git a/src/agentscope/studio/static/html-drag-components/agent-texttoimageagent.html b/src/agentscope/studio/static/html-drag-components/agent-texttoimageagent.html deleted file mode 100644 index d3ff12c51..000000000 --- a/src/agentscope/studio/static/html-drag-components/agent-texttoimageagent.html +++ /dev/null @@ -1,28 +0,0 @@ -
-    <!-- TextToImageAgent drag-component template -->
-    TextToImageAgent
-    Agent for text to image generation
-    Node ID: ID_PLACEHOLDER
-    <!-- name and model_config_name input fields -->
\ No newline at end of file diff --git a/src/agentscope/studio/static/html-drag-components/message-msg.html b/src/agentscope/studio/static/html-drag-components/message-msg.html index ca29eef48..9c7a10d55 100644 --- a/src/agentscope/studio/static/html-drag-components/message-msg.html +++ b/src/agentscope/studio/static/html-drag-components/message-msg.html @@ -16,6 +16,11 @@ data-required="true">
+    <!-- new required "role" input field: one of system / user / assistant -->
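
The new "role" field on the Message node corresponds to the `role` argument of agentscope's `Msg`; a minimal sketch, assuming the same three roles that workstation.js validates below:

.. code-block:: python

    from agentscope.message import Msg

    msg = Msg(
        name="Host",
        role="assistant",  # must be one of: system, user, assistant
        content="Welcome!",
    )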
+ diff --git a/src/agentscope/studio/static/js/workstation.js b/src/agentscope/studio/static/js/workstation.js index 4c3aec404..2c35adcad 100644 --- a/src/agentscope/studio/static/js/workstation.js +++ b/src/agentscope/studio/static/js/workstation.js @@ -20,7 +20,6 @@ let nameToHtmlFile = { 'Message': 'message-msg.html', 'DialogAgent': 'agent-dialogagent.html', 'UserAgent': 'agent-useragent.html', - 'TextToImageAgent': 'agent-texttoimageagent.html', 'DictDialogAgent': 'agent-dictdialogagent.html', 'ReActAgent': 'agent-reactagent.html', 'Placeholder': 'pipeline-placeholder.html', @@ -569,6 +568,7 @@ async function addNodeToDrawFlow(name, pos_x, pos_y) { "args": { "name": '', + "role": '', "content": '', "url": '' } @@ -604,22 +604,6 @@ async function addNodeToDrawFlow(name, pos_x, pos_y) { } break; - case 'TextToImageAgent': - const TextToImageAgentID = - editor.addNode('TextToImageAgent', 1, - 1, pos_x, pos_y, - 'TextToImageAgent', { - "args": { - "name": '', - "model_config_name": '' - } - }, htmlSourceCode); - var nodeElement = document.querySelector(`#node-${TextToImageAgentID} .node-id`); - if (nodeElement) { - nodeElement.textContent = TextToImageAgentID; - } - break; - case 'DictDialogAgent': const DictDialogAgentID = editor.addNode('DictDialogAgent', 1, 1, pos_x, pos_y, @@ -773,10 +757,10 @@ async function addNodeToDrawFlow(name, pos_x, pos_y) { function setupTextInputListeners(nodeId) { const newNode = document.getElementById(`node-${nodeId}`); if (newNode) { - const stopPropagation = function(event) { + const stopPropagation = function (event) { event.stopPropagation(); }; - newNode.addEventListener('mousedown', function(event) { + newNode.addEventListener('mousedown', function (event) { const target = event.target; if (target.tagName === 'TEXTAREA' || target.tagName === 'INPUT') { stopPropagation(event); @@ -1029,7 +1013,7 @@ function setupNodeListeners(nodeId) { function doDragSE(e) { newNode.style.width = 'auto'; - const newWidth = (startWidth + e.clientX - startX) ; + const newWidth = (startWidth + e.clientX - startX); if (newWidth > 200) { contentBox.style.width = newWidth + 'px'; titleBox.style.width = newWidth + 'px'; @@ -1326,6 +1310,21 @@ function checkConditions() { isApiKeyEmpty = isApiKeyEmpty || true; } } + + if (node.name === "Message") { + const validRoles = ["system", "assistant", "user"]; + if (!validRoles.includes(node.data.args.role)) { + Swal.fire({ + title: 'Invalid Role for Message', + html: + `Invalid role ${node.data.args.role}.
The role must be in ['system', 'user', 'assistant']`, + icon: 'error', + confirmButtonText: 'Ok' + }); + return false; + } + } + if (node.name.includes('Agent') && "model_config_name" in node.data.args) { hasAgentError = false; if (node.data && node.data.args) { @@ -1476,7 +1475,7 @@ function showExportPyPopup() { title: 'Processing...', text: 'Please wait.', allowOutsideClick: false, - onBeforeOpen: () => { + willOpen: () => { Swal.showLoading() } }); @@ -1512,7 +1511,7 @@ function showExportPyPopup() { showCancelButton: true, confirmButtonText: 'Copy', cancelButtonText: 'Close', - onBeforeOpen: (element) => { + willOpen: (element) => { const codeElement = element.querySelector('code'); Prism.highlightElement(codeElement); const copyButton = Swal.getConfirmButton(); @@ -1534,7 +1533,7 @@ function showExportPyPopup() { popup: 'error-popup' }, confirmButtonText: 'Close', - onBeforeOpen: (element) => { + willOpen: (element) => { const codeElement = element.querySelector('code'); Prism.highlightElement(codeElement); } @@ -1551,7 +1550,16 @@ function showExportPyPopup() { } -function showExportRunPopup() { +function showExportRunPopup(version) { + if (version === "local") { + showExportRunLocalPopup(); + } else { + showExportRunMSPopup(); + } +} + + +function showExportRunLocalPopup() { if (checkConditions()) { const rawData = editor.export(); const hasError = sortElementsByPosition(rawData); @@ -1564,7 +1572,7 @@ function showExportRunPopup() { title: 'Processing...', text: 'Please wait.', allowOutsideClick: false, - onBeforeOpen: () => { + willOpen: () => { Swal.showLoading() } }); @@ -1600,7 +1608,7 @@ function showExportRunPopup() { showCancelButton: true, confirmButtonText: 'Copy Code', cancelButtonText: 'Close', - onBeforeOpen: (element) => { + willOpen: (element) => { const codeElement = element.querySelector('code'); Prism.highlightElement(codeElement); const copyButton = Swal.getConfirmButton(); @@ -1622,7 +1630,7 @@ function showExportRunPopup() { popup: 'error-popup' }, confirmButtonText: 'Close', - onBeforeOpen: (element) => { + willOpen: (element) => { const codeElement = element.querySelector('code'); Prism.highlightElement(codeElement); } @@ -1640,11 +1648,86 @@ function showExportRunPopup() { } +function filterOutApiKey(obj) { + for (let key in obj) { + if (typeof obj[key] === 'object' && obj[key] !== null) { + filterOutApiKey(obj[key]); + } + if (key === 'api_key') { + delete obj[key]; + } + } +} + + +function showExportRunMSPopup() { + if (checkConditions()) { + Swal.fire({ + title: 'Are you sure to run the workflow in ModelScope Studio?', + text: + "You are about to navigate to another page. 
" + + "Please make sure all the configurations are set " + + "besides your api-key " + + "(your api-key should be set in ModelScope Studio page).", + icon: 'warning', + showCancelButton: true, + confirmButtonColor: '#3085d6', + cancelButtonColor: '#d33', + confirmButtonText: 'Yes, create it!', + cancelButtonText: 'Close' + }).then((result) => { + if (result.isConfirmed) { + const rawData = editor.export(); + const hasError = sortElementsByPosition(rawData); + if (hasError) { + return; + } + const filteredData = reorganizeAndFilterConfigForAgentScope(rawData); + filterOutApiKey(filteredData) + + Swal.fire({ + title: 'Processing...', + text: 'Please wait.', + allowOutsideClick: false, + willOpen: () => { + Swal.showLoading() + } + }); + fetch('/upload-to-oss', { + method: 'POST', + headers: { + 'Content-Type': 'application/json', + }, + body: JSON.stringify({ + data: JSON.stringify(filteredData, null, 4), + }) + }) + .then(response => response.json()) + .then(data => { + const params = {'CONFIG_URL': data.config_url}; + const paramsStr = encodeURIComponent(JSON.stringify(params)); + const org = "agentscope"; + const fork_repo = "agentscope_workstation"; + const url = `https://www.modelscope.cn/studios/fork?target=${org}/${fork_repo}&overwriteEnv=${paramsStr}`; + window.open(url, '_blank'); + Swal.fire('Success!', '', 'success'); + }) + .catch(error => { + console.error('Error:', error); + Swal.fire('Failed', data.message || 'An error occurred while uploading to oss', 'error'); + }); + } + }) + } +} + + function showExportHTMLPopup() { const rawData = editor.export(); // Remove the html attribute from the nodes to avoid inconsistencies in html removeHtmlFromUsers(rawData); + sortElementsByPosition(rawData); const exportData = JSON.stringify(rawData, null, 4); @@ -1663,7 +1746,7 @@ function showExportHTMLPopup() { showCancelButton: true, confirmButtonText: 'Copy', cancelButtonText: 'Close', - onBeforeOpen: (element) => { + willOpen: (element) => { // Find the code element inside the Swal content const codeElement = element.querySelector('code'); @@ -1763,6 +1846,177 @@ function showImportHTMLPopup() { } +function showSaveWorkflowPopup() { + Swal.fire({ + title: 'Save Workflow', + input: 'text', + inputPlaceholder: 'Enter filename', + showCancelButton: true, + confirmButtonText: 'Save', + cancelButtonText: 'Cancel' + }).then(result => { + if (result.isConfirmed) { + const filename = result.value; + saveWorkflow(filename); + } + }); +} + +function saveWorkflow(fileName) { + const rawData = editor.export(); + filterOutApiKey(rawData) + + // Remove the html attribute from the nodes to avoid inconsistencies in html + removeHtmlFromUsers(rawData); + + const exportData = JSON.stringify(rawData, null, 4); + fetch('/save-workflow', { + method: 'POST', + headers: { + 'Content-Type': 'application/json', + }, + body: JSON.stringify({ + filename: fileName, + workflow: exportData, + overwrite: false, + }) + }).then(response => response.json()) + .then(data => { + if (data.message === "Workflow file saved successfully") { + Swal.fire('Success', data.message, 'success'); + } else { + Swal.fire('Error', data.message || 'An error occurred while saving the workflow.', 'error'); + } + }) + .catch(error => { + console.error('Error:', error); + Swal.fire('Error', 'An error occurred while saving the workflow.', 'error'); + }); +} + +function showLoadWorkflowPopup() { + fetch('/list-workflows', { + method: 'POST', + headers: { + 'Content-Type': 'application/json', + }, + body: JSON.stringify({}) + }) + .then(response 
=> response.json()) + .then(data => { + if (!Array.isArray(data.files)) { + throw new TypeError('The returned data is not an array'); + } + const inputOptions = data.files.reduce((options, file) => { + options[file] = file; + return options; + }, {}); + let selectedFilename; + Swal.fire({ + title: 'Loading Workflow from Disks', + input: 'select', + inputOptions: inputOptions, + inputPlaceholder: 'Select', + showCancelButton: true, + showDenyButton: true, + confirmButtonText: 'Load', + cancelButtonText: 'Cancel', + denyButtonText: 'Delete', + didOpen: () => { + const selectElement = Swal.getInput(); + selectElement.addEventListener('change', (event) => { + selectedFilename = event.target.value; + }); + } + }).then(result => { + if (result.isConfirmed) { + loadWorkflow(selectedFilename); + } else if (result.isDenied) { + Swal.fire({ + title: `Are you sure you want to delete ${selectedFilename}?`, + text: "This operation cannot be undone!", + icon: 'warning', + showCancelButton: true, + confirmButtonColor: '#d33', + cancelButtonColor: '#3085d6', + confirmButtonText: 'Delete', + cancelButtonText: 'Cancel' + }).then((deleteResult) => { + if (deleteResult.isConfirmed) { + deleteWorkflow(selectedFilename); + } + }); + } + }); + }) + .catch(error => { + console.error('Error:', error); + Swal.fire('Error', 'An error occurred while loading the workflow.', 'error'); + }); +} + + +function loadWorkflow(fileName) { + fetch('/load-workflow', { + method: 'POST', + headers: { + 'Content-Type': 'application/json', + }, + body: JSON.stringify({ + filename: fileName, + }) + }).then(response => response.json()) + .then(data => { + if (data.error) { + Swal.fire('Error', data.error, 'error'); + } else { + console.log(data) + try { + // Add html source code to the nodes data + addHtmlAndReplacePlaceHolderBeforeImport(data) + .then(() => { + console.log(data) + editor.clear(); + editor.import(data); + importSetupNodes(data); + Swal.fire('Imported!', '', 'success'); + }); + + } catch (error) { + Swal.fire('Error', `Import error: ${error}`, 'error'); + } + } + }) + .catch(error => { + console.error('Error:', error); + Swal.fire('Error', 'An error occurred while loading the workflow.', 'error'); + }); +} + +function deleteWorkflow(fileName) { + fetch('/delete-workflow', { + method: 'POST', + headers: { + 'Content-Type': 'application/json', + }, + body: JSON.stringify({ + filename: fileName, + }) + }).then(response => response.json()) + .then(data => { + if (data.error) { + Swal.fire('Error', data.error, 'error'); + } else { + Swal.fire('Deleted!', 'Workflow has been deleted.', 'success'); + } + }) + .catch(error => { + console.error('Error:', error); + Swal.fire('Error', 'An error occurred while deleting the workflow.', 'error'); + }); +} + + function removeHtmlFromUsers(data) { Object.keys(data.drawflow.Home.data).forEach((nodeId) => { const node = data.drawflow.Home.data[nodeId]; @@ -1789,8 +2043,13 @@ async function addHtmlAndReplacePlaceHolderBeforeImport(data) { const idPlaceholderRegex = /ID_PLACEHOLDER/g; for (const nodeId of Object.keys(data.drawflow.Home.data)) { const node = data.drawflow.Home.data[nodeId]; - if (!node.html) { + if (node.name === "readme") { + // Remove the node if its name is "readme" + delete data.drawflow.Home.data[nodeId]; + continue; // Skip to the next iteration + } + console.log(node.name) const sourceCode = await fetchHtmlSourceCodeByName(node.name); // Add new html attribute to the node @@ -1845,7 +2104,7 @@ function fetchExample(index, processData) 
{ }, body: JSON.stringify({ data: index, - lang: getCookie('locale') || 'en' + lang: getCookie('locale') || 'en', }) }).then(response => { if (!response.ok) { diff --git a/src/agentscope/studio/static/js_third_party/buttons.js b/src/agentscope/studio/static/js_third_party/buttons.js new file mode 100644 index 000000000..868675e6b --- /dev/null +++ b/src/agentscope/studio/static/js_third_party/buttons.js @@ -0,0 +1,6 @@ +/*! + * github-buttons v2.28.0 + * (c) 2024 なつき + * @license BSD-2-Clause + */ +!function(){"use strict";var e=window.document,o=e.location,t=window.Math,r=window.HTMLElement,a=window.XMLHttpRequest,n="github-button",i="https://buttons.github.io/buttons.html",c="github.com",l="https://api."+c,d=a&&"prototype"in a&&"withCredentials"in a.prototype,s=d&&r&&"attachShadow"in r.prototype&&!("prototype"in r.prototype.attachShadow),u=function(e,o){for(var t=0,r=e.length;t'}}},download:{heights:{16:{width:16,path:''}}},eye:{heights:{16:{width:16,path:''}}},heart:{heights:{16:{width:16,path:''}}},"issue-opened":{heights:{16:{width:16,path:''}}},"mark-github":{heights:{16:{width:16,path:''}}},package:{heights:{16:{width:16,path:''}}},play:{heights:{16:{width:16,path:''}}},"repo-forked":{heights:{16:{width:16,path:''}}},"repo-template":{heights:{16:{width:16,path:''}}},star:{heights:{16:{width:16,path:''}}}},Z=function(e,o){e=b(e).replace(/^octicon-/,""),p(M,e)||(e="mark-github");var t=o>=24&&24 in M[e].heights?24:16,r=M[e].heights[t];return'"},A={},F=function(e,o){var t=A[e]||(A[e]=[]);if(!(t.push(o)>1)){var r=g((function(){for(delete A[e];o=t.shift();)o.apply(null,arguments)}));if(d){var n=new a;m(n,"abort",r),m(n,"error",r),m(n,"load",(function(){var e;try{e=JSON.parse(this.responseText)}catch(e){return void r(e)}r(200!==this.status,e)})),n.open("GET",e),n.send()}else{var i=this||window;i._=function(e){i._=null,r(200!==e.meta.status,e.data)};var c=h(i.document)("script",{async:!0,src:e+(-1!==e.indexOf("?")?"&":"?")+"callback=_"}),l=function(){i._&&i._({meta:{}})};m(c,"load",l),m(c,"error",l),x(c,/de|m/,l),i.document.getElementsByTagName("head")[0].appendChild(c)}}},E=function(e,o,t){var r=h(e.ownerDocument),a=e.appendChild(r("style",{type:"text/css"})),n="body{margin:0}a{text-decoration:none;outline:0}.widget{display:inline-block;overflow:hidden;font-family:-apple-system,BlinkMacSystemFont,Segoe UI,Helvetica,Arial,sans-serif;font-size:0;line-height:0;white-space:nowrap}.btn,.social-count{position:relative;display:inline-block;display:inline-flex;height:14px;padding:2px 5px;font-size:11px;font-weight:600;line-height:14px;vertical-align:bottom;cursor:pointer;-webkit-user-select:none;-moz-user-select:none;-ms-user-select:none;user-select:none;background-repeat:repeat-x;background-position:-1px -1px;background-size:110% 110%;border:1px solid}.btn{border-radius:.25em}.btn:not(:last-child){border-radius:.25em 0 0 .25em}.social-count{border-left:0;border-radius:0 .25em .25em 0}.widget-lg .btn,.widget-lg .social-count{height:16px;padding:5px 10px;font-size:12px;line-height:16px}.octicon{display:inline-block;vertical-align:text-top;fill:currentColor;overflow:visible}"+function(e){if(null==e)return y.light;if(p(y,e))return y[e];var o=v(e,";",":",(function(e){return e.replace(/^[ \t\n\f\r]+|[ \t\n\f\r]+$/g,"")}));return y[p(y,o["no-preference"])?o["no-preference"]:"light"]+C("light",o.light)+C("dark",o.dark)}(o["data-color-scheme"]);a.styleSheet?a.styleSheet.cssText=n:a.appendChild(e.ownerDocument.createTextNode(n));var 
i="large"===b(o["data-size"]),d=r("a",{className:"btn",href:o.href,rel:"noopener",target:"_blank",title:o.title||void 0,"aria-label":o["aria-label"]||void 0,innerHTML:Z(o["data-icon"],i?16:14)+" "},[r("span",{},[o["data-text"]||""])]),s=e.appendChild(r("div",{className:"widget"+(i?" widget-lg":"")},[d])),u=d.hostname.replace(/\.$/,"");if(("."+u).substring(u.length-10)!=="."+c)return d.removeAttribute("href"),void t(s);var f=(" /"+d.pathname).split(/\/+/);if(((u===c||u==="gist."+c)&&"archive"===f[3]||u===c&&"releases"===f[3]&&("download"===f[4]||"latest"===f[4]&&"download"===f[5])||u==="codeload."+c)&&(d.target="_top"),"true"===b(o["data-show-count"])&&u===c&&"marketplace"!==f[1]&&"sponsors"!==f[1]&&"orgs"!==f[1]&&"users"!==f[1]&&"-"!==f[1]){var g,m;if(!f[2]&&f[1])m="followers",g="?tab=followers";else if(!f[3]&&f[2])m="stargazers_count",g="/stargazers";else if(f[4]||"subscription"!==f[3])if(f[4]||"fork"!==f[3]){if("issues"!==f[3])return void t(s);m="open_issues_count",g="/issues"}else m="forks_count",g="/forks";else m="subscribers_count",g="/watchers";var w=f[2]?"/repos/"+f[1]+"/"+f[2]:"/users/"+f[1];F.call(this,l+w,(function(e,o){if(!e){var a=o[m];s.appendChild(r("a",{className:"social-count",href:o.html_url+g,rel:"noopener",target:"_blank","aria-label":a+" "+m.replace(/_count$/,"").replace("_"," ").slice(0,a<2?-1:void 0)+" on GitHub"},[(""+a).replace(/\B(?=(\d{3})+(?!\d))/g,",")]))}t(s)}))}else t(s)},L=window.devicePixelRatio||1,_=function(e){return(L>1?t.ceil(t.round(e*L)/L*2)/2:t.ceil(e))||0},G=function(e,o){e.style.width=o[0]+"px",e.style.height=o[1]+"px"},T=function(o,r){if(null!=o&&null!=r)if(o.getAttribute&&(o=function(e){var o={href:e.href,title:e.title,"aria-label":e.getAttribute("aria-label")};return u(["icon","color-scheme","text","size","show-count"],(function(t){var r="data-"+t;o[r]=e.getAttribute(r)})),null==o["data-text"]&&(o["data-text"]=e.textContent||e.innerText),o}(o)),s){var a=f("span");E(a.attachShadow({mode:"closed"}),o,(function(){r(a)}))}else{var n=f("iframe",{src:"javascript:0",title:o.title||void 0,allowtransparency:!0,scrolling:"no",frameBorder:0});G(n,[0,0]),n.style.border="none";var c=function(){var a,l=n.contentWindow;try{a=l.document.body}catch(o){return void e.body.appendChild(n.parentNode.removeChild(n))}w(n,"load",c),E.call(l,a,o,(function(e){var a=function(e){var o=e.offsetWidth,r=e.offsetHeight;if(e.getBoundingClientRect){var a=e.getBoundingClientRect();o=t.max(o,_(a.width)),r=t.max(r,_(a.height))}return[o,r]}(e);n.parentNode.removeChild(n),k(n,"load",(function(){G(n,a)})),n.src=i+"#"+(n.name=function(e,o,t,r){null==o&&(o="&"),null==t&&(t="="),null==r&&(r=window.encodeURIComponent);var a=[];for(var n in e){var i=e[n];null!=i&&a.push(r(n)+t+r(i))}return a.join(o)}(o)),r(n)}))};m(n,"load",c),e.body.appendChild(n)}};o.protocol+"//"+o.host+o.pathname===i?E(e.body,v(window.name||o.hash.replace(/^#/,"")),(function(){})):function(o){if("complete"===e.readyState||"loading"!==e.readyState&&!e.documentElement.doScroll)setTimeout(o);else if(e.addEventListener){var t=g(o);k(e,"DOMContentLoaded",t),k(window,"load",t)}else x(e,/m/,o)}((function(){var o,t=e.querySelectorAll?e.querySelectorAll("a."+n):(o=[],u(e.getElementsByTagName("a"),(function(e){-1!==(" "+e.className+" ").replace(/[ \t\n\f\r]+/g," ").indexOf(" "+n+" ")&&o.push(e)})),o);u(t,(function(e){T(e,(function(o){e.parentNode.replaceChild(o,e)}))}))}))}(); diff --git a/src/agentscope/studio/static/js_third_party/sweetalert2@11 b/src/agentscope/studio/static/js_third_party/sweetalert2@11 new file mode 100644 
index 000000000..dcffae4df
--- /dev/null
+++ b/src/agentscope/studio/static/js_third_party/sweetalert2@11
@@ -0,0 +1,6 @@
+/*!
+* sweetalert2 v11.12.3
+* Released under the MIT License.
+*/
[minified sweetalert2 v11.12.3 bundle omitted]
\ No newline at end of file
diff --git a/src/agentscope/studio/static/js_third_party/sweetalert2@9 b/src/agentscope/studio/static/js_third_party/sweetalert2@9
deleted file mode 100644
index fc7f58cb7..000000000
--- a/src/agentscope/studio/static/js_third_party/sweetalert2@9
+++ /dev/null
@@ -1,2 +0,0 @@
[minified sweetalert2 v9 bundle omitted]
"hideClass'},ne=["allowOutsideClick","allowEnterKey","backdrop","focusConfirm","focusCancel","heightAuto","keydownListenerCapture"],oe=Object.freeze({isValidParameter:Qt,isUpdatableParameter:function(t){return-1!==te.indexOf(t)},isDeprecatedParameter:$t,argsToParams:function(o){var i={};return"object"!==r(o[0])||w(o[0])?["title","html","icon"].forEach(function(t,e){var n=o[e];"string"==typeof n||w(n)?i[t]=n:void 0!==n&&F("Unexpected type of ".concat(t,'! Expected "string" or "Element", got ').concat(r(n)))}):s(i,o[0]),i},isVisible:function(){return vt($())},clickConfirm:Ht,clickCancel:function(){return O()&&O().click()},getContainer:Q,getPopup:$,getTitle:x,getContent:P,getHtmlContainer:function(){return e(Y["html-container"])},getImage:A,getIcon:k,getIcons:n,getCloseButton:q,getActions:T,getConfirmButton:E,getCancelButton:O,getHeader:L,getFooter:I,getTimerProgressBar:j,getFocusableElements:V,getValidationMessage:S,isLoading:R,fire:function(){for(var t=arguments.length,e=new Array(t),n=0;nwindow.innerHeight&&(X.previousBodyPadding=parseInt(window.getComputedStyle(document.body).getPropertyValue("padding-right")),document.body.style.paddingRight="".concat(X.previousBodyPadding+function(){var t=document.createElement("div");t.className=Y["scrollbar-measure"],document.body.appendChild(t);var e=t.getBoundingClientRect().width-t.clientWidth;return document.body.removeChild(t),e}(),"px"))}function ae(){return!!window.MSInputMethodContext&&!!document.documentMode}function ce(){var t=Q(),e=$();t.style.removeProperty("align-items"),e.offsetTop<0&&(t.style.alignItems="flex-start")}var se=function(){navigator.userAgent.match(/(CriOS|FxiOS|EdgiOS|YaBrowser|UCBrowser)/i)||$().scrollHeight>window.innerHeight-44&&(Q().style.paddingBottom="".concat(44,"px"))},ue=function(){var e,t=Q();t.ontouchstart=function(t){e=le(t.target)},t.ontouchmove=function(t){e&&(t.preventDefault(),t.stopPropagation())}},le=function(t){var e=Q();return t===e||!(at(e)||"INPUT"===t.tagName||at(P())&&P().contains(t))},de={swalPromiseResolve:new WeakMap};function pe(t,e,n,o){var i;n?he(t,o):(Kt().then(function(){return he(t,o)}),Xt.keydownTarget.removeEventListener("keydown",Xt.keydownHandler,{capture:Xt.keydownListenerCapture}),Xt.keydownHandlerAdded=!1),e.parentNode&&!document.body.getAttribute("data-swal2-queue-step")&&e.parentNode.removeChild(e),M()&&(null!==X.previousBodyPadding&&(document.body.style.paddingRight="".concat(X.previousBodyPadding,"px"),X.previousBodyPadding=null),D(document.body,Y.iosfix)&&(i=parseInt(document.body.style.top,10),ht(document.body,Y.iosfix),document.body.style.top="",document.body.scrollTop=-1*i),"undefined"!=typeof window&&ae()&&window.removeEventListener("resize",ce),h(document.body.children).forEach(function(t){t.hasAttribute("data-previous-aria-hidden")?(t.setAttribute("aria-hidden",t.getAttribute("data-previous-aria-hidden")),t.removeAttribute("data-previous-aria-hidden")):t.removeAttribute("aria-hidden")})),ht([document.documentElement,document.body],[Y.shown,Y["height-auto"],Y["no-backdrop"],Y["toast-shown"],Y["toast-column"]])}function fe(t){var e,n,o,i=$();i&&(e=Bt.innerParams.get(this))&&!D(i,e.hideClass.popup)&&(n=de.swalPromiseResolve.get(this),ht(i,e.showClass.popup),mt(i,e.hideClass.popup),o=Q(),ht(o,e.showClass.backdrop),mt(o,e.hideClass.backdrop),function(t,e,n){var o=Q(),i=kt&&ct(e),r=n.onClose,a=n.onAfterClose;if(r!==null&&typeof r==="function"){r(e)}if(i){me(t,e,o,a)}else{pe(t,o,J(),a)}}(this,i,e),void 0!==t?(t.isDismissed=void 0!==t.dismiss,t.isConfirmed=void 
0===t.dismiss):t={isDismissed:!0,isConfirmed:!1},n(t||{}))}var me=function(t,e,n,o){Xt.swalCloseEventFinishedCallback=pe.bind(null,t,n,J(),o),e.addEventListener(kt,function(t){t.target===e&&(Xt.swalCloseEventFinishedCallback(),delete Xt.swalCloseEventFinishedCallback)})},he=function(t,e){setTimeout(function(){"function"==typeof e&&e(),t._destroy()})};function ge(t,e,n){var o=Bt.domCache.get(t);e.forEach(function(t){o[t].disabled=n})}function ve(t,e){if(!t)return!1;if("radio"===t.type)for(var n=t.parentNode.parentNode.querySelectorAll("input"),o=0;o")),yt(t)}function Ce(t){var e=Q(),n=$();"function"==typeof t.onBeforeOpen&&t.onBeforeOpen(n);var o=window.getComputedStyle(document.body).overflowY;Ie(e,n,t),Te(e,n),M()&&(Le(e,t.scrollbarPadding,o),h(document.body.children).forEach(function(t){t===Q()||function(t,e){if("function"==typeof t.contains)return t.contains(e)}(t,Q())||(t.hasAttribute("aria-hidden")&&t.setAttribute("data-previous-aria-hidden",t.getAttribute("aria-hidden")),t.setAttribute("aria-hidden","true"))})),J()||Xt.previousActiveElement||(Xt.previousActiveElement=document.activeElement),"function"==typeof t.onOpen&&setTimeout(function(){return t.onOpen(n)}),ht(e,Y["no-transition"])}function ke(t){var e,n=$();t.target===n&&(e=Q(),n.removeEventListener(kt,ke),e.style.overflowY="auto")}function xe(t,e){"select"===e.input||"radio"===e.input?Me(t,e):-1!==["text","email","number","tel","textarea"].indexOf(e.input)&&(v(e.inputValue)||y(e.inputValue))&&Re(t,e)}function Pe(t,e){t.disableButtons(),e.input?Ne(t,e):Ue(t,e,!0)}function Ae(t,e){t.disableButtons(),e(K.cancel)}function Be(t,e){t.closePopup({value:e})}function Se(e,t,n,o){t.keydownTarget&&t.keydownHandlerAdded&&(t.keydownTarget.removeEventListener("keydown",t.keydownHandler,{capture:t.keydownListenerCapture}),t.keydownHandlerAdded=!1),n.toast||(t.keydownHandler=function(t){return ze(e,t,o)},t.keydownTarget=n.keydownListenerCapture?window:$(),t.keydownListenerCapture=n.keydownListenerCapture,t.keydownTarget.addEventListener("keydown",t.keydownHandler,{capture:t.keydownListenerCapture}),t.keydownHandlerAdded=!0)}function Ee(t,e,n){var 
o=V(),i=0;if(i:first-child,.swal2-container.swal2-bottom-left>:first-child,.swal2-container.swal2-bottom-right>:first-child,.swal2-container.swal2-bottom-start>:first-child,.swal2-container.swal2-bottom>:first-child{margin-top:auto}.swal2-container.swal2-grow-fullscreen>.swal2-modal{display:flex!important;flex:1;align-self:stretch;justify-content:center}.swal2-container.swal2-grow-row>.swal2-modal{display:flex!important;flex:1;align-content:center;justify-content:center}.swal2-container.swal2-grow-column{flex:1;flex-direction:column}.swal2-container.swal2-grow-column.swal2-bottom,.swal2-container.swal2-grow-column.swal2-center,.swal2-container.swal2-grow-column.swal2-top{align-items:center}.swal2-container.swal2-grow-column.swal2-bottom-left,.swal2-container.swal2-grow-column.swal2-bottom-start,.swal2-container.swal2-grow-column.swal2-center-left,.swal2-container.swal2-grow-column.swal2-center-start,.swal2-container.swal2-grow-column.swal2-top-left,.swal2-container.swal2-grow-column.swal2-top-start{align-items:flex-start}.swal2-container.swal2-grow-column.swal2-bottom-end,.swal2-container.swal2-grow-column.swal2-bottom-right,.swal2-container.swal2-grow-column.swal2-center-end,.swal2-container.swal2-grow-column.swal2-center-right,.swal2-container.swal2-grow-column.swal2-top-end,.swal2-container.swal2-grow-column.swal2-top-right{align-items:flex-end}.swal2-container.swal2-grow-column>.swal2-modal{display:flex!important;flex:1;align-content:center;justify-content:center}.swal2-container.swal2-no-transition{transition:none!important}.swal2-container:not(.swal2-top):not(.swal2-top-start):not(.swal2-top-end):not(.swal2-top-left):not(.swal2-top-right):not(.swal2-center-start):not(.swal2-center-end):not(.swal2-center-left):not(.swal2-center-right):not(.swal2-bottom):not(.swal2-bottom-start):not(.swal2-bottom-end):not(.swal2-bottom-left):not(.swal2-bottom-right):not(.swal2-grow-fullscreen)>.swal2-modal{margin:auto}@media all and (-ms-high-contrast:none),(-ms-high-contrast:active){.swal2-container .swal2-modal{margin:0!important}}.swal2-popup{display:none;position:relative;box-sizing:border-box;flex-direction:column;justify-content:center;width:32em;max-width:100%;padding:1.25em;border:none;border-radius:.3125em;background:#fff;font-family:inherit;font-size:1rem}.swal2-popup:focus{outline:0}.swal2-popup.swal2-loading{overflow-y:hidden}.swal2-header{display:flex;flex-direction:column;align-items:center;padding:0 1.8em}.swal2-title{position:relative;max-width:100%;margin:0 0 .4em;padding:0;color:#595959;font-size:1.875em;font-weight:600;text-align:center;text-transform:none;word-wrap:break-word}.swal2-actions{display:flex;z-index:1;flex-wrap:wrap;align-items:center;justify-content:center;width:100%;margin:1.25em auto 0}.swal2-actions:not(.swal2-loading) .swal2-styled[disabled]{opacity:.4}.swal2-actions:not(.swal2-loading) .swal2-styled:hover{background-image:linear-gradient(rgba(0,0,0,.1),rgba(0,0,0,.1))}.swal2-actions:not(.swal2-loading) .swal2-styled:active{background-image:linear-gradient(rgba(0,0,0,.2),rgba(0,0,0,.2))}.swal2-actions.swal2-loading .swal2-styled.swal2-confirm{box-sizing:border-box;width:2.5em;height:2.5em;margin:.46875em;padding:0;-webkit-animation:swal2-rotate-loading 1.5s linear 0s infinite normal;animation:swal2-rotate-loading 1.5s linear 0s infinite normal;border:.25em solid 
transparent;border-radius:100%;border-color:transparent;background-color:transparent!important;color:transparent!important;cursor:default;-webkit-user-select:none;-moz-user-select:none;-ms-user-select:none;user-select:none}.swal2-actions.swal2-loading .swal2-styled.swal2-cancel{margin-right:30px;margin-left:30px}.swal2-actions.swal2-loading :not(.swal2-styled).swal2-confirm::after{content:\"\";display:inline-block;width:15px;height:15px;margin-left:5px;-webkit-animation:swal2-rotate-loading 1.5s linear 0s infinite normal;animation:swal2-rotate-loading 1.5s linear 0s infinite normal;border:3px solid #999;border-radius:50%;border-right-color:transparent;box-shadow:1px 1px 1px #fff}.swal2-styled{margin:.3125em;padding:.625em 2em;box-shadow:none;font-weight:500}.swal2-styled:not([disabled]){cursor:pointer}.swal2-styled.swal2-confirm{border:0;border-radius:.25em;background:initial;background-color:#3085d6;color:#fff;font-size:1.0625em}.swal2-styled.swal2-cancel{border:0;border-radius:.25em;background:initial;background-color:#aaa;color:#fff;font-size:1.0625em}.swal2-styled:focus{outline:0;box-shadow:0 0 0 1px #fff,0 0 0 3px rgba(50,100,150,.4)}.swal2-styled::-moz-focus-inner{border:0}.swal2-footer{justify-content:center;margin:1.25em 0 0;padding:1em 0 0;border-top:1px solid #eee;color:#545454;font-size:1em}.swal2-timer-progress-bar-container{position:absolute;right:0;bottom:0;left:0;height:.25em;overflow:hidden;border-bottom-right-radius:.3125em;border-bottom-left-radius:.3125em}.swal2-timer-progress-bar{width:100%;height:.25em;background:rgba(0,0,0,.2)}.swal2-image{max-width:100%;margin:1.25em auto}.swal2-close{position:absolute;z-index:2;top:0;right:0;align-items:center;justify-content:center;width:1.2em;height:1.2em;padding:0;overflow:hidden;transition:color .1s ease-out;border:none;border-radius:0;background:0 0;color:#ccc;font-family:serif;font-size:2.5em;line-height:1.2;cursor:pointer}.swal2-close:hover{transform:none;background:0 0;color:#f27474}.swal2-close::-moz-focus-inner{border:0}.swal2-content{z-index:1;justify-content:center;margin:0;padding:0 1.6em;color:#545454;font-size:1.125em;font-weight:400;line-height:normal;text-align:center;word-wrap:break-word}.swal2-checkbox,.swal2-file,.swal2-input,.swal2-radio,.swal2-select,.swal2-textarea{margin:1em auto}.swal2-file,.swal2-input,.swal2-textarea{box-sizing:border-box;width:100%;transition:border-color .3s,box-shadow .3s;border:1px solid #d9d9d9;border-radius:.1875em;background:inherit;box-shadow:inset 0 1px 1px rgba(0,0,0,.06);color:inherit;font-size:1.125em}.swal2-file.swal2-inputerror,.swal2-input.swal2-inputerror,.swal2-textarea.swal2-inputerror{border-color:#f27474!important;box-shadow:0 0 2px #f27474!important}.swal2-file:focus,.swal2-input:focus,.swal2-textarea:focus{border:1px solid #b4dbed;outline:0;box-shadow:0 0 3px #c4e6f5}.swal2-file::-moz-placeholder,.swal2-input::-moz-placeholder,.swal2-textarea::-moz-placeholder{color:#ccc}.swal2-file:-ms-input-placeholder,.swal2-input:-ms-input-placeholder,.swal2-textarea:-ms-input-placeholder{color:#ccc}.swal2-file::-ms-input-placeholder,.swal2-input::-ms-input-placeholder,.swal2-textarea::-ms-input-placeholder{color:#ccc}.swal2-file::placeholder,.swal2-input::placeholder,.swal2-textarea::placeholder{color:#ccc}.swal2-range{margin:1em auto;background:#fff}.swal2-range input{width:80%}.swal2-range output{width:20%;color:inherit;font-weight:600;text-align:center}.swal2-range input,.swal2-range 
output{height:2.625em;padding:0;font-size:1.125em;line-height:2.625em}.swal2-input{height:2.625em;padding:0 .75em}.swal2-input[type=number]{max-width:10em}.swal2-file{background:inherit;font-size:1.125em}.swal2-textarea{height:6.75em;padding:.75em}.swal2-select{min-width:50%;max-width:100%;padding:.375em .625em;background:inherit;color:inherit;font-size:1.125em}.swal2-checkbox,.swal2-radio{align-items:center;justify-content:center;background:#fff;color:inherit}.swal2-checkbox label,.swal2-radio label{margin:0 .6em;font-size:1.125em}.swal2-checkbox input,.swal2-radio input{margin:0 .4em}.swal2-validation-message{display:none;align-items:center;justify-content:center;padding:.625em;overflow:hidden;background:#f0f0f0;color:#666;font-size:1em;font-weight:300}.swal2-validation-message::before{content:\"!\";display:inline-block;width:1.5em;min-width:1.5em;height:1.5em;margin:0 .625em;border-radius:50%;background-color:#f27474;color:#fff;font-weight:600;line-height:1.5em;text-align:center}.swal2-icon{position:relative;box-sizing:content-box;justify-content:center;width:5em;height:5em;margin:1.25em auto 1.875em;border:.25em solid transparent;border-radius:50%;font-family:inherit;line-height:5em;cursor:default;-webkit-user-select:none;-moz-user-select:none;-ms-user-select:none;user-select:none}.swal2-icon .swal2-icon-content{display:flex;align-items:center;font-size:3.75em}.swal2-icon.swal2-error{border-color:#f27474;color:#f27474}.swal2-icon.swal2-error .swal2-x-mark{position:relative;flex-grow:1}.swal2-icon.swal2-error [class^=swal2-x-mark-line]{display:block;position:absolute;top:2.3125em;width:2.9375em;height:.3125em;border-radius:.125em;background-color:#f27474}.swal2-icon.swal2-error [class^=swal2-x-mark-line][class$=left]{left:1.0625em;transform:rotate(45deg)}.swal2-icon.swal2-error [class^=swal2-x-mark-line][class$=right]{right:1em;transform:rotate(-45deg)}.swal2-icon.swal2-error.swal2-icon-show{-webkit-animation:swal2-animate-error-icon .5s;animation:swal2-animate-error-icon .5s}.swal2-icon.swal2-error.swal2-icon-show .swal2-x-mark{-webkit-animation:swal2-animate-error-x-mark .5s;animation:swal2-animate-error-x-mark .5s}.swal2-icon.swal2-warning{border-color:#facea8;color:#f8bb86}.swal2-icon.swal2-info{border-color:#9de0f6;color:#3fc3ee}.swal2-icon.swal2-question{border-color:#c9dae1;color:#87adbd}.swal2-icon.swal2-success{border-color:#a5dc86;color:#a5dc86}.swal2-icon.swal2-success [class^=swal2-success-circular-line]{position:absolute;width:3.75em;height:7.5em;transform:rotate(45deg);border-radius:50%}.swal2-icon.swal2-success [class^=swal2-success-circular-line][class$=left]{top:-.4375em;left:-2.0635em;transform:rotate(-45deg);transform-origin:3.75em 3.75em;border-radius:7.5em 0 0 7.5em}.swal2-icon.swal2-success [class^=swal2-success-circular-line][class$=right]{top:-.6875em;left:1.875em;transform:rotate(-45deg);transform-origin:0 3.75em;border-radius:0 7.5em 7.5em 0}.swal2-icon.swal2-success .swal2-success-ring{position:absolute;z-index:2;top:-.25em;left:-.25em;box-sizing:content-box;width:100%;height:100%;border:.25em solid rgba(165,220,134,.3);border-radius:50%}.swal2-icon.swal2-success .swal2-success-fix{position:absolute;z-index:1;top:.5em;left:1.625em;width:.4375em;height:5.625em;transform:rotate(-45deg)}.swal2-icon.swal2-success [class^=swal2-success-line]{display:block;position:absolute;z-index:2;height:.3125em;border-radius:.125em;background-color:#a5dc86}.swal2-icon.swal2-success 
[class^=swal2-success-line][class$=tip]{top:2.875em;left:.8125em;width:1.5625em;transform:rotate(45deg)}.swal2-icon.swal2-success [class^=swal2-success-line][class$=long]{top:2.375em;right:.5em;width:2.9375em;transform:rotate(-45deg)}.swal2-icon.swal2-success.swal2-icon-show .swal2-success-line-tip{-webkit-animation:swal2-animate-success-line-tip .75s;animation:swal2-animate-success-line-tip .75s}.swal2-icon.swal2-success.swal2-icon-show .swal2-success-line-long{-webkit-animation:swal2-animate-success-line-long .75s;animation:swal2-animate-success-line-long .75s}.swal2-icon.swal2-success.swal2-icon-show .swal2-success-circular-line-right{-webkit-animation:swal2-rotate-success-circular-line 4.25s ease-in;animation:swal2-rotate-success-circular-line 4.25s ease-in}.swal2-progress-steps{align-items:center;margin:0 0 1.25em;padding:0;background:inherit;font-weight:600}.swal2-progress-steps li{display:inline-block;position:relative}.swal2-progress-steps .swal2-progress-step{z-index:20;width:2em;height:2em;border-radius:2em;background:#3085d6;color:#fff;line-height:2em;text-align:center}.swal2-progress-steps .swal2-progress-step.swal2-active-progress-step{background:#3085d6}.swal2-progress-steps .swal2-progress-step.swal2-active-progress-step~.swal2-progress-step{background:#add8e6;color:#fff}.swal2-progress-steps .swal2-progress-step.swal2-active-progress-step~.swal2-progress-step-line{background:#add8e6}.swal2-progress-steps .swal2-progress-step-line{z-index:10;width:2.5em;height:.4em;margin:0 -1px;background:#3085d6}[class^=swal2]{-webkit-tap-highlight-color:transparent}.swal2-show{-webkit-animation:swal2-show .3s;animation:swal2-show .3s}.swal2-hide{-webkit-animation:swal2-hide .15s forwards;animation:swal2-hide .15s forwards}.swal2-noanimation{transition:none}.swal2-scrollbar-measure{position:absolute;top:-9999px;width:50px;height:50px;overflow:scroll}.swal2-rtl .swal2-close{right:auto;left:0}.swal2-rtl .swal2-timer-progress-bar{right:0;left:auto}@supports (-ms-accelerator:true){.swal2-range input{width:100%!important}.swal2-range output{display:none}}@media all and (-ms-high-contrast:none),(-ms-high-contrast:active){.swal2-range input{width:100%!important}.swal2-range output{display:none}}@-moz-document url-prefix(){.swal2-close:focus{outline:2px solid rgba(50,100,150,.4)}}@-webkit-keyframes swal2-toast-show{0%{transform:translateY(-.625em) rotateZ(2deg)}33%{transform:translateY(0) rotateZ(-2deg)}66%{transform:translateY(.3125em) rotateZ(2deg)}100%{transform:translateY(0) rotateZ(0)}}@keyframes swal2-toast-show{0%{transform:translateY(-.625em) rotateZ(2deg)}33%{transform:translateY(0) rotateZ(-2deg)}66%{transform:translateY(.3125em) rotateZ(2deg)}100%{transform:translateY(0) rotateZ(0)}}@-webkit-keyframes swal2-toast-hide{100%{transform:rotateZ(1deg);opacity:0}}@keyframes swal2-toast-hide{100%{transform:rotateZ(1deg);opacity:0}}@-webkit-keyframes swal2-toast-animate-success-line-tip{0%{top:.5625em;left:.0625em;width:0}54%{top:.125em;left:.125em;width:0}70%{top:.625em;left:-.25em;width:1.625em}84%{top:1.0625em;left:.75em;width:.5em}100%{top:1.125em;left:.1875em;width:.75em}}@keyframes swal2-toast-animate-success-line-tip{0%{top:.5625em;left:.0625em;width:0}54%{top:.125em;left:.125em;width:0}70%{top:.625em;left:-.25em;width:1.625em}84%{top:1.0625em;left:.75em;width:.5em}100%{top:1.125em;left:.1875em;width:.75em}}@-webkit-keyframes 
swal2-toast-animate-success-line-long{0%{top:1.625em;right:1.375em;width:0}65%{top:1.25em;right:.9375em;width:0}84%{top:.9375em;right:0;width:1.125em}100%{top:.9375em;right:.1875em;width:1.375em}}@keyframes swal2-toast-animate-success-line-long{0%{top:1.625em;right:1.375em;width:0}65%{top:1.25em;right:.9375em;width:0}84%{top:.9375em;right:0;width:1.125em}100%{top:.9375em;right:.1875em;width:1.375em}}@-webkit-keyframes swal2-show{0%{transform:scale(.7)}45%{transform:scale(1.05)}80%{transform:scale(.95)}100%{transform:scale(1)}}@keyframes swal2-show{0%{transform:scale(.7)}45%{transform:scale(1.05)}80%{transform:scale(.95)}100%{transform:scale(1)}}@-webkit-keyframes swal2-hide{0%{transform:scale(1);opacity:1}100%{transform:scale(.5);opacity:0}}@keyframes swal2-hide{0%{transform:scale(1);opacity:1}100%{transform:scale(.5);opacity:0}}@-webkit-keyframes swal2-animate-success-line-tip{0%{top:1.1875em;left:.0625em;width:0}54%{top:1.0625em;left:.125em;width:0}70%{top:2.1875em;left:-.375em;width:3.125em}84%{top:3em;left:1.3125em;width:1.0625em}100%{top:2.8125em;left:.8125em;width:1.5625em}}@keyframes swal2-animate-success-line-tip{0%{top:1.1875em;left:.0625em;width:0}54%{top:1.0625em;left:.125em;width:0}70%{top:2.1875em;left:-.375em;width:3.125em}84%{top:3em;left:1.3125em;width:1.0625em}100%{top:2.8125em;left:.8125em;width:1.5625em}}@-webkit-keyframes swal2-animate-success-line-long{0%{top:3.375em;right:2.875em;width:0}65%{top:3.375em;right:2.875em;width:0}84%{top:2.1875em;right:0;width:3.4375em}100%{top:2.375em;right:.5em;width:2.9375em}}@keyframes swal2-animate-success-line-long{0%{top:3.375em;right:2.875em;width:0}65%{top:3.375em;right:2.875em;width:0}84%{top:2.1875em;right:0;width:3.4375em}100%{top:2.375em;right:.5em;width:2.9375em}}@-webkit-keyframes swal2-rotate-success-circular-line{0%{transform:rotate(-45deg)}5%{transform:rotate(-45deg)}12%{transform:rotate(-405deg)}100%{transform:rotate(-405deg)}}@keyframes swal2-rotate-success-circular-line{0%{transform:rotate(-45deg)}5%{transform:rotate(-45deg)}12%{transform:rotate(-405deg)}100%{transform:rotate(-405deg)}}@-webkit-keyframes swal2-animate-error-x-mark{0%{margin-top:1.625em;transform:scale(.4);opacity:0}50%{margin-top:1.625em;transform:scale(.4);opacity:0}80%{margin-top:-.375em;transform:scale(1.15)}100%{margin-top:0;transform:scale(1);opacity:1}}@keyframes swal2-animate-error-x-mark{0%{margin-top:1.625em;transform:scale(.4);opacity:0}50%{margin-top:1.625em;transform:scale(.4);opacity:0}80%{margin-top:-.375em;transform:scale(1.15)}100%{margin-top:0;transform:scale(1);opacity:1}}@-webkit-keyframes swal2-animate-error-icon{0%{transform:rotateX(100deg);opacity:0}100%{transform:rotateX(0);opacity:1}}@keyframes swal2-animate-error-icon{0%{transform:rotateX(100deg);opacity:0}100%{transform:rotateX(0);opacity:1}}@-webkit-keyframes swal2-rotate-loading{0%{transform:rotate(0)}100%{transform:rotate(360deg)}}@keyframes swal2-rotate-loading{0%{transform:rotate(0)}100%{transform:rotate(360deg)}}body.swal2-shown:not(.swal2-no-backdrop):not(.swal2-toast-shown){overflow:hidden}body.swal2-height-auto{height:auto!important}body.swal2-no-backdrop .swal2-container{top:auto;right:auto;bottom:auto;left:auto;max-width:calc(100% - .625em * 2);background-color:transparent!important}body.swal2-no-backdrop .swal2-container>.swal2-modal{box-shadow:0 0 10px rgba(0,0,0,.4)}body.swal2-no-backdrop .swal2-container.swal2-top{top:0;left:50%;transform:translateX(-50%)}body.swal2-no-backdrop .swal2-container.swal2-top-left,body.swal2-no-backdrop 
.swal2-container.swal2-top-start{top:0;left:0}body.swal2-no-backdrop .swal2-container.swal2-top-end,body.swal2-no-backdrop .swal2-container.swal2-top-right{top:0;right:0}body.swal2-no-backdrop .swal2-container.swal2-center{top:50%;left:50%;transform:translate(-50%,-50%)}body.swal2-no-backdrop .swal2-container.swal2-center-left,body.swal2-no-backdrop .swal2-container.swal2-center-start{top:50%;left:0;transform:translateY(-50%)}body.swal2-no-backdrop .swal2-container.swal2-center-end,body.swal2-no-backdrop .swal2-container.swal2-center-right{top:50%;right:0;transform:translateY(-50%)}body.swal2-no-backdrop .swal2-container.swal2-bottom{bottom:0;left:50%;transform:translateX(-50%)}body.swal2-no-backdrop .swal2-container.swal2-bottom-left,body.swal2-no-backdrop .swal2-container.swal2-bottom-start{bottom:0;left:0}body.swal2-no-backdrop .swal2-container.swal2-bottom-end,body.swal2-no-backdrop .swal2-container.swal2-bottom-right{right:0;bottom:0}@media print{body.swal2-shown:not(.swal2-no-backdrop):not(.swal2-toast-shown){overflow-y:scroll!important}body.swal2-shown:not(.swal2-no-backdrop):not(.swal2-toast-shown)>[aria-hidden=true]{display:none}body.swal2-shown:not(.swal2-no-backdrop):not(.swal2-toast-shown) .swal2-container{position:static!important}}body.swal2-toast-shown .swal2-container{background-color:transparent}body.swal2-toast-shown .swal2-container.swal2-top{top:0;right:auto;bottom:auto;left:50%;transform:translateX(-50%)}body.swal2-toast-shown .swal2-container.swal2-top-end,body.swal2-toast-shown .swal2-container.swal2-top-right{top:0;right:0;bottom:auto;left:auto}body.swal2-toast-shown .swal2-container.swal2-top-left,body.swal2-toast-shown .swal2-container.swal2-top-start{top:0;right:auto;bottom:auto;left:0}body.swal2-toast-shown .swal2-container.swal2-center-left,body.swal2-toast-shown .swal2-container.swal2-center-start{top:50%;right:auto;bottom:auto;left:0;transform:translateY(-50%)}body.swal2-toast-shown .swal2-container.swal2-center{top:50%;right:auto;bottom:auto;left:50%;transform:translate(-50%,-50%)}body.swal2-toast-shown .swal2-container.swal2-center-end,body.swal2-toast-shown .swal2-container.swal2-center-right{top:50%;right:0;bottom:auto;left:auto;transform:translateY(-50%)}body.swal2-toast-shown .swal2-container.swal2-bottom-left,body.swal2-toast-shown .swal2-container.swal2-bottom-start{top:auto;right:auto;bottom:0;left:0}body.swal2-toast-shown .swal2-container.swal2-bottom{top:auto;right:auto;bottom:0;left:50%;transform:translateX(-50%)}body.swal2-toast-shown .swal2-container.swal2-bottom-end,body.swal2-toast-shown .swal2-container.swal2-bottom-right{top:auto;right:0;bottom:0;left:auto}body.swal2-toast-column .swal2-toast{flex-direction:column;align-items:stretch}body.swal2-toast-column .swal2-toast .swal2-actions{flex:1;align-self:stretch;height:2.2em;margin-top:.3125em}body.swal2-toast-column .swal2-toast .swal2-loading{justify-content:center}body.swal2-toast-column .swal2-toast .swal2-input{height:2em;margin:.3125em auto;font-size:1em}body.swal2-toast-column .swal2-toast .swal2-validation-message{font-size:1em}"); \ No newline at end of file diff --git a/src/agentscope/studio/static/workstation_templates/en4.json b/src/agentscope/studio/static/workstation_templates/en4.json index ddb39b327..0fcb35a2d 100644 --- a/src/agentscope/studio/static/workstation_templates/en4.json +++ b/src/agentscope/studio/static/workstation_templates/en4.json @@ -213,6 +213,7 @@ "data": { "args": { "name": "User", + "role": "user", "content": "Hello every one", "url": "" } diff --git 
diff --git a/src/agentscope/studio/templates/login.html b/src/agentscope/studio/templates/login.html
new file mode 100644
index 000000000..c395b7723
--- /dev/null
+++ b/src/agentscope/studio/templates/login.html
@@ -0,0 +1,187 @@
+[new 187-line Jinja/HTML template for the AgentScope WorkStation login page; markup omitted. Recoverable localized strings: the page title {{ _("AgentScope WorkStation Login Page") }} and a feedback modal with a "×" close control, headed {{ _("We want to hear from you") }}.]
\ No newline at end of file
diff --git a/src/agentscope/studio/templates/workstation.html b/src/agentscope/studio/templates/workstation.html
index 741a7848c..9685bccab 100644
--- a/src/agentscope/studio/templates/workstation.html
+++ b/src/agentscope/studio/templates/workstation.html
@@ -34,7 +34,7 @@ [context: the Font Awesome 5.13.0 CDN script with integrity="sha256-KzZiKy0DWYsnwMF+X1DvQngQ2/FxF7MF3Ff72XcpuPs="; one adjacent script include is swapped, tag bodies omitted]
@@ -50,34 +50,38 @@ [the "Example" sidebar menu: each entry gains an explicit click handler]
+    • Two Agents: onclick="importExample(1);" with a matching importExample_step(1) handler
+    • Pipeline: onclick="importExample(2);" with importExample_step(2)
+    • Conversation: onclick="importExample(3);" with importExample_step(3)
+    • Group Chat: onclick="importExample(4);" with importExample_step(4)
@@ -142,11 +146,6 @@ [context: the draggable agent palette, draggable="true" ondragstart="drag(event)"; UserAgent is kept]
-    • TextToImageAgent [palette entry removed]
@@ -289,14 +288,35 @@
+{% set version = token_dict.get('version') if token_dict is defined else "local" %}
+[remaining markup in this hunk omitted]
diff --git a/src/agentscope/studio/utils.py b/src/agentscope/studio/utils.py
new file mode 100644
index 000000000..bbab18889
--- /dev/null
+++ b/src/agentscope/studio/utils.py
@@ -0,0 +1,135 @@
+# -*- coding: utf-8 -*-
+"""
+This module provides utilities for securing views in a web application with
+authentication and authorization checks.
+
+Functions:
+    _require_auth - A decorator for protecting views by requiring
+    authentication.
+"""
+from datetime import datetime, timedelta
+from functools import wraps
+from typing import Any, Callable
+
+import jwt
+from flask import session, redirect, url_for, abort
+from agentscope.constants import TOKEN_EXP_TIME
+
+
+def _require_auth(
+    redirect_url: str = "_home",
+    fail_with_exception: bool = False,
+    secret_key: str = "",
+    **decorator_kwargs: Any,
+) -> Callable:
+    """
+    Decorator for view functions that requires user authentication.
+
+    If the user is authenticated by token and user login name, or if the
+    request comes from the localhost (127.0.0.1), the decorated view is
+    executed. If the user is not authenticated, they are either redirected
+    to the given redirect_url, or an exception is raised, depending on the
+    fail_with_exception flag.
+
+    Args:
+        redirect_url (str): The endpoint to which an unauthenticated user is
+            redirected.
+        fail_with_exception (bool): If True, raise an exception for
+            unauthorized access, otherwise redirect to the redirect_url.
+        secret_key (str): The secret key used to sign and verify the JWT.
+        **decorator_kwargs: Additional keyword arguments passed to the
+            decorated view.
+
+    Returns:
+        A view function wrapped with authentication check logic.
+    """
+
+    def decorator(view_func: Callable) -> Callable:
+        @wraps(view_func)
+        def wrapper(*args: Any, **kwargs: Any) -> Any:
+            verification_token = session.get("verification_token")
+            user_login = session.get("user_login")
+            jwt_token = session.get("jwt_token")
+
+            token_dict = decode_jwt(jwt_token, secret_key=secret_key)
+            valid_user_login = token_dict["user_login"]
+            valid_verification_token = token_dict["verification_token"]
+
+            if (
+                verification_token == valid_verification_token
+                and user_login == valid_user_login
+            ):
+                kwargs = {
+                    **kwargs,
+                    **decorator_kwargs,
+                    "token_dict": token_dict,
+                }
+                return view_func(*args, **kwargs)
+            else:
+                if fail_with_exception:
+                    raise EnvironmentError("Unauthorized access.")
+                return redirect(url_for(redirect_url))
+
+        return wrapper
+
+    return decorator
+
+
+def generate_jwt(
+    user_login: str,
+    access_token: str,
+    verification_token: str,
+    secret_key: str,
+    version: str = None,
+) -> str:
+    """
+    Generates a JSON Web Token (JWT) with the specified payload.
+
+    Args:
+        user_login (str): The user's login or identifier.
+        access_token (str): The access token associated with the user.
+        verification_token (str): A verification token for additional security.
+        secret_key (str): The secret key used to sign the JWT.
+        version (str, optional): Optional version of the token.
+
+    Returns:
+        str: The encoded JWT as a string.
+    """
+    payload = {
+        "user_login": user_login,
+        "access_token": access_token,
+        "verification_token": verification_token,
+        "exp": datetime.utcnow() + timedelta(minutes=TOKEN_EXP_TIME),
+    }
+    if version:
+        payload["version"] = version
+    return jwt.encode(payload, secret_key, algorithm="HS256")
+
+
+def decode_jwt(token: str, secret_key: str) -> Any:
+    """
+    Decodes a JSON Web Token (JWT) using the provided secret key.
+
+    Args:
+        token (str): The encoded JWT to decode.
+ secret_key (str): The secret key used for decoding the JWT. + + Returns: + dict: The payload of the decoded token if successful. + + Raises: + abort: If the token is expired or invalid, a 401 or 403 error is + raised. + """ + + try: + return jwt.decode(token, secret_key, algorithms=["HS256"]) + except jwt.ExpiredSignatureError: + abort(401, description="The provided token has expired.") + return None + except Exception: + abort( + 403, + description="The provided token is invalid. Please log in again.", + ) + return None diff --git a/src/agentscope/utils/common.py b/src/agentscope/utils/common.py index b7ebe3a15..372d9ca66 100644 --- a/src/agentscope/utils/common.py +++ b/src/agentscope/utils/common.py @@ -1,18 +1,25 @@ # -*- coding: utf-8 -*- """ Common utils.""" - +import base64 import contextlib +import datetime +import hashlib +import json import os +import random import re +import secrets import signal +import socket +import string import sys import tempfile import threading -from typing import Any, Generator, Optional, Union -import requests +from typing import Any, Generator, Optional, Union, Tuple, Literal, List +from urllib.parse import urlparse -from agentscope.service.service_response import ServiceResponse -from agentscope.service.service_status import ServiceExecStatus +import psutil +import requests @contextlib.contextmanager @@ -59,12 +66,12 @@ def create_tempdir() -> Generator: https://github.com/openai/human-eval/blob/master/human_eval/execution.py """ with tempfile.TemporaryDirectory() as dirname: - with chdir(dirname): + with _chdir(dirname): yield dirname @contextlib.contextmanager -def chdir(path: str) -> Generator: +def _chdir(path: str) -> Generator: """ A context manager that changes the current working directory to the given path. @@ -84,44 +91,7 @@ def chdir(path: str) -> Generator: os.chdir(cwd) -def write_file(content: str, file_path: str) -> ServiceResponse: - """ - Write content to a file. - - Args: - content (str): The content to be written to the file. - file_path (str): The path to the file where the content will be - written. - - Returns: - ServiceResponse: where the boolean indicates the success of the - operation, and the str contains an empty string if successful or an - error message if any, including the error type. - - This function attempts to open the file in write mode and write the - provided content to it. If the file does not exist, it will be created. - If the file exists, its content will be overwritten. If a - PermissionError occurs, indicating a lack of necessary permissions, - or an IOError occurs, signaling additional issues such as an invalid - file path or hardware-related I/O error, the function will catch the - exception and return `False` along with the error message. 
- """ - try: - with open(file_path, "w", encoding="utf-8") as file: - file.write(content) - return ServiceResponse( - status=ServiceExecStatus.SUCCESS, - content="Success", - ) - except Exception as e: - error_message = f"{e.__class__.__name__}: {e}" - return ServiceResponse( - status=ServiceExecStatus.ERROR, - content=error_message, - ) - - -def requests_get( +def _requests_get( url: str, params: dict, headers: Optional[dict] = None, @@ -178,3 +148,452 @@ def _if_change_database(sql_query: str) -> bool: if pattern_unsafe_sql.search(sql_query): return False return True + + +def _get_timestamp( + format_: str = "%Y-%m-%d %H:%M:%S", + time: datetime.datetime = None, +) -> str: + """Get current timestamp.""" + if time is None: + return datetime.datetime.now().strftime(format_) + else: + return time.strftime(format_) + + +def to_openai_dict(item: dict) -> dict: + """Convert `Msg` to `dict` for OpenAI API.""" + clean_dict = {} + + if "name" in item: + clean_dict["name"] = item["name"] + + if "role" in item: + clean_dict["role"] = item["role"] + else: + clean_dict["role"] = "assistant" + + if "content" in item: + clean_dict["content"] = _convert_to_str(item["content"]) + else: + raise ValueError("The content of the message is missing.") + + return clean_dict + + +def _find_available_port() -> int: + """Get an unoccupied socket port number.""" + with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s: + s.bind(("", 0)) + return s.getsockname()[1] + + +def _check_port(port: Optional[int] = None) -> int: + """Check if the port is available. + + Args: + port (`int`): + the port number being checked. + + Returns: + `int`: the port number that passed the check. If the port is found + to be occupied, an available port number will be automatically + returned. + """ + if port is None: + new_port = _find_available_port() + return new_port + with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s: + try: + if s.connect_ex(("localhost", port)) == 0: + raise RuntimeError("Port is occupied.") + except Exception: + new_port = _find_available_port() + return new_port + return port + + +def _guess_type_by_extension( + url: str, +) -> Literal["image", "audio", "video", "file"]: + """Guess the type of the file by its extension.""" + extension = url.split(".")[-1].lower() + + if extension in [ + "bmp", + "dib", + "icns", + "ico", + "jfif", + "jpe", + "jpeg", + "jpg", + "j2c", + "j2k", + "jp2", + "jpc", + "jpf", + "jpx", + "apng", + "png", + "bw", + "rgb", + "rgba", + "sgi", + "tif", + "tiff", + "webp", + ]: + return "image" + elif extension in [ + "amr", + "wav", + "3gp", + "3gpp", + "aac", + "mp3", + "flac", + "ogg", + ]: + return "audio" + elif extension in [ + "mp4", + "webm", + "mkv", + "flv", + "avi", + "mov", + "wmv", + "rmvb", + ]: + return "video" + else: + return "file" + + +def _to_openai_image_url(url: str) -> str: + """Convert an image url to openai format. If the given url is a local + file, it will be converted to base64 format. Otherwise, it will be + returned directly. + + Args: + url (`str`): + The local or public url of the image. + """ + # See https://platform.openai.com/docs/guides/vision for details of + # support image extensions. 
+    support_image_extensions = (
+        ".png",
+        ".jpg",
+        ".jpeg",
+        ".gif",
+        ".webp",
+    )
+
+    parsed_url = urlparse(url)
+
+    lower_url = url.lower()
+
+    # Web url
+    if parsed_url.scheme != "":
+        if any(lower_url.endswith(_) for _ in support_image_extensions):
+            return url
+
+    # Check if it is a local file
+    elif os.path.exists(url) and os.path.isfile(url):
+        if any(lower_url.endswith(_) for _ in support_image_extensions):
+            with open(url, "rb") as image_file:
+                base64_image = base64.b64encode(image_file.read()).decode(
+                    "utf-8",
+                )
+            extension = parsed_url.path.lower().split(".")[-1]
+            mime_type = f"image/{extension}"
+            return f"data:{mime_type};base64,{base64_image}"
+
+    raise TypeError(f"{url} should end with {support_image_extensions}.")
+
+
+def _download_file(url: str, path_file: str, max_retries: int = 3) -> bool:
+    """Download file from the given url and save it to the given path.
+
+    Args:
+        url (`str`):
+            The url of the file.
+        path_file (`str`):
+            The path to save the file.
+        max_retries (`int`, defaults to `3`):
+            The maximum number of retries when the download fails.
+    """
+    for n_retry in range(1, max_retries + 1):
+        response = requests.get(url, stream=True)
+        if response.status_code == requests.codes.ok:
+            with open(path_file, "wb") as file:
+                for chunk in response.iter_content(1024):
+                    file.write(chunk)
+            return True
+        # Retry on a non-OK status until max_retries is exhausted.
+    raise RuntimeError(
+        f"Failed to download file from {url} (status code: "
+        f"{response.status_code}) after {max_retries} retries.",
+    )
+
+
+def _generate_random_code(
+    length: int = 6,
+    uppercase: bool = True,
+    lowercase: bool = True,
+    digits: bool = True,
+) -> str:
+    """Get random code."""
+    characters = ""
+    if uppercase:
+        characters += string.ascii_uppercase
+    if lowercase:
+        characters += string.ascii_lowercase
+    if digits:
+        characters += string.digits
+    return "".join(secrets.choice(characters) for i in range(length))
+
+
+def _generate_id_from_seed(seed: str, length: int = 8) -> str:
+    """Generate random id from seed str.
+
+    Args:
+        seed (`str`): seed string.
+        length (`int`): generated id length.
+    """
+    hasher = hashlib.sha256()
+    hasher.update(seed.encode("utf-8"))
+    hash_digest = hasher.hexdigest()
+
+    random.seed(hash_digest)
+    id_chars = [
+        random.choice(string.ascii_letters + string.digits)
+        for _ in range(length)
+    ]
+    return "".join(id_chars)
+
+
+def _is_web_url(url: str) -> bool:
+    """Whether the url is accessible from the Web.
+
+    Args:
+        url (`str`):
+            The url to check.
+
+    Note:
+        This function is not perfect, it only checks if the URL starts with
+        common web protocols, e.g., http, https, ftp, oss.
+    """
+    parsed_url = urlparse(url)
+    return parsed_url.scheme in ["http", "https", "ftp", "oss"]
+
+
+def _is_json_serializable(obj: Any) -> bool:
+    """Check if the given object is json serializable."""
+    try:
+        json.dumps(obj)
+        return True
+    except TypeError:
+        return False
+
+
+def _convert_to_str(content: Any) -> str:
+    """Convert the content to string.
+
+    Note:
+        For prompt engineering, simply calling `str(content)` or
+        `json.dumps(content)` is not enough.
+
+        - For `str(content)`, if `content` is a dictionary, it will turn
+          double quotes to single quotes. When this string is fed into
+          prompt, the LLMs may learn to use single quotes instead of double
+          quotes (which cannot be loaded by the `json.loads` API).
+
+        - For `json.dumps(content)`, if `content` is a string, it will add
+          double quotes to the string. LLMs may learn to use double quotes
+          to wrap strings, which leads to the same issue as `str(content)`.
+
+        To avoid these issues, we use this function to safely convert the
+        content to a string used in prompt.
+
+    Args:
+        content (`Any`):
+            The content to be converted.
+
+    Returns:
+        `str`: The converted string.
+    """
+
+    if isinstance(content, str):
+        return content
+    elif isinstance(content, (dict, list, int, float, bool, tuple)):
+        return json.dumps(content, ensure_ascii=False)
+    else:
+        return str(content)
+
+
+def _join_str_with_comma_and(elements: List[str]) -> str:
+    """Join the strings with commas, using " and " between the last two
+    elements."""
+
+    if len(elements) == 0:
+        return ""
+    elif len(elements) == 1:
+        return elements[0]
+    elif len(elements) == 2:
+        return " and ".join(elements)
+    else:
+        return ", ".join(elements[:-1]) + f", and {elements[-1]}"
+
+
+class ImportErrorReporter:
+    """Used as a placeholder for missing packages.
+    When called, an ImportError will be raised, prompting the user to install
+    the specified extras requirement.
+    """
+
+    def __init__(self, error: ImportError, extras_require: str = None) -> None:
+        """Init the ImportErrorReporter.
+
+        Args:
+            error (`ImportError`): the original ImportError.
+            extras_require (`str`): the extras requirement.
+        """
+        self.error = error
+        self.extras_require = extras_require
+
+    def __call__(self, *args: Any, **kwds: Any) -> Any:
+        return self._raise_import_error()
+
+    def __getattr__(self, name: str) -> Any:
+        return self._raise_import_error()
+
+    def __getitem__(self, __key: Any) -> Any:
+        return self._raise_import_error()
+
+    def _raise_import_error(self) -> Any:
+        """Raise the ImportError"""
+        err_msg = f"ImportError occurred: [{self.error.msg}]."
+        if self.extras_require is not None:
+            err_msg += (
+                f" Please install [{self.extras_require}] version"
+                " of agentscope."
+            )
+        raise ImportError(err_msg)
+
+
+def _hash_string(
+    data: str,
+    hash_method: Literal["sha256", "md5", "sha1"],
+) -> str:
+    """Hash the string data."""
+    hash_func = getattr(hashlib, hash_method)()
+    hash_func.update(data.encode())
+    return hash_func.hexdigest()
+
+
+def _get_process_creation_time() -> datetime.datetime:
+    """Get the creation time of the process."""
+    pid = os.getpid()
+    # Find the process by pid
+    current_process = psutil.Process(pid)
+    # Obtain the process creation time
+    create_time = current_process.create_time()
+    # Change the timestamp to a readable format
+    return datetime.datetime.fromtimestamp(create_time)
+
+
+def _is_process_alive(
+    pid: int,
+    create_time_str: str,
+    create_time_format: str = "%Y-%m-%d %H:%M:%S",
+    tolerance_seconds: int = 10,
+) -> bool:
+    """Check if the process is alive by comparing the actual creation time of
+    the process with the given creation time.
+
+    Args:
+        pid (`int`):
+            The process id.
+        create_time_str (`str`):
+            The given creation time string.
+        create_time_format (`str`, defaults to `"%Y-%m-%d %H:%M:%S"`):
+            The format of the given creation time string.
+        tolerance_seconds (`int`, defaults to `10`):
+            The tolerance seconds for comparing the actual creation time with
+            the given creation time.
+
+    Returns:
+        `bool`: True if the process is alive, False otherwise.
+ """ + try: + # Try to create a process object by pid + proc = psutil.Process(pid) + # Obtain the actual creation time of the process + actual_create_time_timestamp = proc.create_time() + + # Convert the given creation time string to a datetime object + given_create_time_datetime = datetime.datetime.strptime( + create_time_str, + create_time_format, + ) + + # Calculate the time difference between the actual creation time and + time_difference = abs( + actual_create_time_timestamp + - given_create_time_datetime.timestamp(), + ) + + # Compare the actual creation time with the given creation time + if time_difference <= tolerance_seconds: + return True + + except (psutil.NoSuchProcess, psutil.AccessDenied, psutil.ZombieProcess): + # If the process is not found, access is denied, or the process is a + # zombie process, return False + return False + + return False + + +def _is_windows() -> bool: + """Check if the system is Windows.""" + return os.name == "nt" + + +def _map_string_to_color_mark( + target_str: str, +) -> Tuple[str, str]: + """Map a string into an index within a given length. + + Args: + target_str (`str`): + The string to be mapped. + + Returns: + `Tuple[str, str]`: A color marker tuple + """ + color_marks = [ + ("\033[90m", "\033[0m"), + ("\033[91m", "\033[0m"), + ("\033[92m", "\033[0m"), + ("\033[93m", "\033[0m"), + ("\033[94m", "\033[0m"), + ("\033[95m", "\033[0m"), + ("\033[96m", "\033[0m"), + ("\033[97m", "\033[0m"), + ] + + hash_value = int(hashlib.sha256(target_str.encode()).hexdigest(), 16) + index = hash_value % len(color_marks) + return color_marks[index] + + +def _generate_new_runtime_id() -> str: + """Generate a new random runtime id.""" + _RUNTIME_ID_FORMAT = "run_%Y%m%d-%H%M%S_{}" + return _get_timestamp(_RUNTIME_ID_FORMAT).format( + _generate_random_code(uppercase=False), + ) diff --git a/src/agentscope/utils/tools.py b/src/agentscope/utils/tools.py deleted file mode 100644 index 4e2382fc0..000000000 --- a/src/agentscope/utils/tools.py +++ /dev/null @@ -1,479 +0,0 @@ -# -*- coding: utf-8 -*- -""" Tools for agentscope """ -import base64 -import datetime -import json -import os.path -import secrets -import string -import socket -import hashlib -import random -from typing import Any, Literal, List, Optional, Tuple - -from urllib.parse import urlparse -import psutil -import requests - - -def _get_timestamp( - format_: str = "%Y-%m-%d %H:%M:%S", - time: datetime.datetime = None, -) -> str: - """Get current timestamp.""" - if time is None: - return datetime.datetime.now().strftime(format_) - else: - return time.strftime(format_) - - -def to_openai_dict(item: dict) -> dict: - """Convert `Msg` to `dict` for OpenAI API.""" - clean_dict = {} - - if "name" in item: - clean_dict["name"] = item["name"] - - if "role" in item: - clean_dict["role"] = item["role"] - else: - clean_dict["role"] = "assistant" - - if "content" in item: - clean_dict["content"] = _convert_to_str(item["content"]) - else: - raise ValueError("The content of the message is missing.") - - return clean_dict - - -def to_dialog_str(item: dict) -> str: - """Convert a dict into string prompt style.""" - speaker = item.get("name", None) or item.get("role", None) - content = item.get("content", None) - - if content is None: - return str(item) - - if speaker is None: - return content - else: - return f"{speaker}: {content}" - - -def find_available_port() -> int: - """Get an unoccupied socket port number.""" - with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s: - s.bind(("", 0)) - return s.getsockname()[1] - 
- -def check_port(port: Optional[int] = None) -> int: - """Check if the port is available. - - Args: - port (`int`): - the port number being checked. - - Returns: - `int`: the port number that passed the check. If the port is found - to be occupied, an available port number will be automatically - returned. - """ - if port is None: - new_port = find_available_port() - return new_port - with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s: - try: - if s.connect_ex(("localhost", port)) == 0: - raise RuntimeError("Port is occupied.") - except Exception: - new_port = find_available_port() - return new_port - return port - - -def _guess_type_by_extension( - url: str, -) -> Literal["image", "audio", "video", "file"]: - """Guess the type of the file by its extension.""" - extension = url.split(".")[-1].lower() - - if extension in [ - "bmp", - "dib", - "icns", - "ico", - "jfif", - "jpe", - "jpeg", - "jpg", - "j2c", - "j2k", - "jp2", - "jpc", - "jpf", - "jpx", - "apng", - "png", - "bw", - "rgb", - "rgba", - "sgi", - "tif", - "tiff", - "webp", - ]: - return "image" - elif extension in [ - "amr", - "wav", - "3gp", - "3gpp", - "aac", - "mp3", - "flac", - "ogg", - ]: - return "audio" - elif extension in [ - "mp4", - "webm", - "mkv", - "flv", - "avi", - "mov", - "wmv", - "rmvb", - ]: - return "video" - else: - return "file" - - -def _to_openai_image_url(url: str) -> str: - """Convert an image url to openai format. If the given url is a local - file, it will be converted to base64 format. Otherwise, it will be - returned directly. - - Args: - url (`str`): - The local or public url of the image. - """ - # See https://platform.openai.com/docs/guides/vision for details of - # support image extensions. - support_image_extensions = ( - ".png", - ".jpg", - ".jpeg", - ".gif", - ".webp", - ) - - parsed_url = urlparse(url) - - lower_url = url.lower() - - # Web url - if parsed_url.scheme != "": - if any(lower_url.endswith(_) for _ in support_image_extensions): - return url - - # Check if it is a local file - elif os.path.exists(url) and os.path.isfile(url): - if any(lower_url.endswith(_) for _ in support_image_extensions): - with open(url, "rb") as image_file: - base64_image = base64.b64encode(image_file.read()).decode( - "utf-8", - ) - extension = parsed_url.path.lower().split(".")[-1] - mime_type = f"image/{extension}" - return f"data:{mime_type};base64,{base64_image}" - - raise TypeError(f"{url} should be end with {support_image_extensions}.") - - -def _download_file(url: str, path_file: str, max_retries: int = 3) -> bool: - """Download file from the given url and save it to the given path. - - Args: - url (`str`): - The url of the file. - path_file (`str`): - The path to save the file. - max_retries (`int`, defaults to `3`) - The maximum number of retries when fail to download the file. - """ - for n_retry in range(1, max_retries + 1): - response = requests.get(url, stream=True) - if response.status_code == requests.codes.ok: - with open(path_file, "wb") as file: - for chunk in response.iter_content(1024): - file.write(chunk) - return True - else: - raise RuntimeError( - f"Failed to download file from {url} (status code: " - f"{response.status_code}). 
Retry {n_retry}/{max_retries}.", - ) - return False - - -def _generate_random_code( - length: int = 6, - uppercase: bool = True, - lowercase: bool = True, - digits: bool = True, -) -> str: - """Get random code.""" - characters = "" - if uppercase: - characters += string.ascii_uppercase - if lowercase: - characters += string.ascii_lowercase - if digits: - characters += string.digits - return "".join(secrets.choice(characters) for i in range(length)) - - -def generate_id_from_seed(seed: str, length: int = 8) -> str: - """Generate random id from seed str. - - Args: - seed (`str`): seed string. - length (`int`): generated id length. - """ - hasher = hashlib.sha256() - hasher.update(seed.encode("utf-8")) - hash_digest = hasher.hexdigest() - - random.seed(hash_digest) - id_chars = [ - random.choice(string.ascii_letters + string.digits) - for _ in range(length) - ] - return "".join(id_chars) - - -def is_web_accessible(url: str) -> bool: - """Whether the url is accessible from the Web. - - Args: - url (`str`): - The url to check. - - Note: - This function is not perfect, it only checks if the URL starts with - common web protocols, e.g., http, https, ftp, oss. - """ - parsed_url = urlparse(url) - return parsed_url.scheme in ["http", "https", "ftp", "oss"] - - -def _is_json_serializable(obj: Any) -> bool: - """Check if the given object is json serializable.""" - try: - json.dumps(obj) - return True - except TypeError: - return False - - -def _convert_to_str(content: Any) -> str: - """Convert the content to string. - - Note: - For prompt engineering, simply calling `str(content)` or - `json.dumps(content)` is not enough. - - - For `str(content)`, if `content` is a dictionary, it will turn double - quotes to single quotes. When this string is fed into prompt, the LLMs - may learn to use single quotes instead of double quotes (which - cannot be loaded by `json.loads` API). - - - For `json.dumps(content)`, if `content` is a string, it will add - double quotes to the string. LLMs may learn to use double quotes to - wrap strings, which leads to the same issue as `str(content)`. - - To avoid these issues, we use this function to safely convert the - content to a string used in prompt. - - Args: - content (`Any`): - The content to be converted. - - Returns: - `str`: The converted string. - """ - - if isinstance(content, str): - return content - elif isinstance(content, (dict, list, int, float, bool, tuple)): - return json.dumps(content, ensure_ascii=False) - else: - return str(content) - - -def _join_str_with_comma_and(elements: List[str]) -> str: - """Return the JSON string with comma, and use " and " between the last two - elements.""" - - if len(elements) == 0: - return "" - elif len(elements) == 1: - return elements[0] - elif len(elements) == 2: - return " and ".join(elements) - else: - return ", ".join(elements[:-1]) + f", and {elements[-1]}" - - -class ImportErrorReporter: - """Used as a placeholder for missing packages. - When called, an ImportError will be raised, prompting the user to install - the specified extras requirement. - """ - - def __init__(self, error: ImportError, extras_require: str = None) -> None: - """Init the ImportErrorReporter. - - Args: - error (`ImportError`): the original ImportError. - extras_require (`str`): the extras requirement. 
- """ - self.error = error - self.extras_require = extras_require - - def __call__(self, *args: Any, **kwds: Any) -> Any: - return self._raise_import_error() - - def __getattr__(self, name: str) -> Any: - return self._raise_import_error() - - def __getitem__(self, __key: Any) -> Any: - return self._raise_import_error() - - def _raise_import_error(self) -> Any: - """Raise the ImportError""" - err_msg = f"ImportError occorred: [{self.error.msg}]." - if self.extras_require is not None: - err_msg += ( - f" Please install [{self.extras_require}] version" - " of agentscope." - ) - raise ImportError(err_msg) - - -def _hash_string( - data: str, - hash_method: Literal["sha256", "md5", "sha1"], -) -> str: - """Hash the string data.""" - hash_func = getattr(hashlib, hash_method)() - hash_func.update(data.encode()) - return hash_func.hexdigest() - - -def _get_process_creation_time() -> datetime.datetime: - """Get the creation time of the process.""" - pid = os.getpid() - # Find the process by pid - current_process = psutil.Process(pid) - # Obtain the process creation time - create_time = current_process.create_time() - # Change the timestamp to a readable format - return datetime.datetime.fromtimestamp(create_time) - - -def _is_process_alive( - pid: int, - create_time_str: str, - create_time_format: str = "%Y-%m-%d %H:%M:%S", - tolerance_seconds: int = 10, -) -> bool: - """Check if the process is alive by comparing the actual creation time of - the process with the given creation time. - - Args: - pid (`int`): - The process id. - create_time_str (`str`): - The given creation time string. - create_time_format (`str`, defaults to `"%Y-%m-%d %H:%M:%S"`): - The format of the given creation time string. - tolerance_seconds (`int`, defaults to `10`): - The tolerance seconds for comparing the actual creation time with - the given creation time. - - Returns: - `bool`: True if the process is alive, False otherwise. - """ - try: - # Try to create a process object by pid - proc = psutil.Process(pid) - # Obtain the actual creation time of the process - actual_create_time_timestamp = proc.create_time() - - # Convert the given creation time string to a datetime object - given_create_time_datetime = datetime.datetime.strptime( - create_time_str, - create_time_format, - ) - - # Calculate the time difference between the actual creation time and - time_difference = abs( - actual_create_time_timestamp - - given_create_time_datetime.timestamp(), - ) - - # Compare the actual creation time with the given creation time - if time_difference <= tolerance_seconds: - return True - - except (psutil.NoSuchProcess, psutil.AccessDenied, psutil.ZombieProcess): - # If the process is not found, access is denied, or the process is a - # zombie process, return False - return False - - return False - - -def _is_windows() -> bool: - """Check if the system is Windows.""" - return os.name == "nt" - - -def _map_string_to_color_mark( - target_str: str, -) -> Tuple[str, str]: - """Map a string into an index within a given length. - - Args: - target_str (`str`): - The string to be mapped. 
-
-    Returns:
-        `Tuple[str, str]`: A color marker tuple
-    """
-    color_marks = [
-        ("\033[90m", "\033[0m"),
-        ("\033[91m", "\033[0m"),
-        ("\033[92m", "\033[0m"),
-        ("\033[93m", "\033[0m"),
-        ("\033[94m", "\033[0m"),
-        ("\033[95m", "\033[0m"),
-        ("\033[96m", "\033[0m"),
-        ("\033[97m", "\033[0m"),
-    ]
-
-    hash_value = hash(target_str)
-    index = hash_value % len(color_marks)
-    return color_marks[index]
-
-
-def _generate_new_runtime_id() -> str:
-    """Generate a new random runtime id."""
-    _RUNTIME_ID_FORMAT = "run_%Y%m%d-%H%M%S_{}"
-    return _get_timestamp(_RUNTIME_ID_FORMAT).format(
-        _generate_random_code(uppercase=False),
-    )
diff --git a/src/agentscope/web/workstation/workflow_dag.py b/src/agentscope/web/workstation/workflow_dag.py
index 242a8b36c..d9ffe43f7 100644
--- a/src/agentscope/web/workstation/workflow_dag.py
+++ b/src/agentscope/web/workstation/workflow_dag.py
@@ -310,6 +310,22 @@ def build_dag(config: dict) -> ASDiGraph:
     """
    dag = ASDiGraph()

+    # For an HTML JSON file, retrieve the contents of
+    # config["drawflow"]["Home"]["data"] and remove any node whose
+    # class is "welcome" (see the standalone sketch further below).
+    if (
+        "drawflow" in config
+        and "Home" in config["drawflow"]
+        and "data" in config["drawflow"]["Home"]
+    ):
+        config = config["drawflow"]["Home"]["data"]
+
+        config = {
+            k: v
+            for k, v in config.items()
+            if not ("class" in v and v["class"] == "welcome")
+        }
+
    for node_id, node_info in config.items():
        config[node_id] = sanitize_node_data(node_info)

diff --git a/src/agentscope/web/workstation/workflow_node.py b/src/agentscope/web/workstation/workflow_node.py
index 827905a22..337c97efe 100644
--- a/src/agentscope/web/workstation/workflow_node.py
+++ b/src/agentscope/web/workstation/workflow_node.py
@@ -9,7 +9,6 @@
 from agentscope.agents import (
     DialogAgent,
     UserAgent,
-    TextToImageAgent,
     DictDialogAgent,
     ReActAgent,
 )
@@ -220,36 +219,6 @@ def compile(self) -> dict:
         }


-class TextToImageAgentNode(WorkflowNode):
-    """
-    A node representing a TextToImageAgent within a workflow.
-    """
-
-    node_type = WorkflowNodeType.AGENT
-
-    def __init__(
-        self,
-        node_id: str,
-        opt_kwargs: dict,
-        source_kwargs: dict,
-        dep_opts: list,
-    ) -> None:
-        super().__init__(node_id, opt_kwargs, source_kwargs, dep_opts)
-        self.pipeline = TextToImageAgent(**self.opt_kwargs)
-
-    def __call__(self, x: dict = None) -> dict:
-        return self.pipeline(x)
-
-    def compile(self) -> dict:
-        return {
-            "imports": "from agentscope.agents import TextToImageAgent",
-            "inits": f"{self.var_name} = TextToImageAgent("
-            f"{kwarg_converter(self.opt_kwargs)})",
-            "execs": f"{DEFAULT_FLOW_VAR} = {self.var_name}"
-            f"({DEFAULT_FLOW_VAR})",
-        }
-
-
 class DictDialogAgentNode(WorkflowNode):
     """
     A node representing a DictDialogAgent within a workflow.
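# A minimal, self-contained sketch of the "welcome"-node filtering that the
# build_dag hunk above applies to drawflow-style HTML JSON exports. The
# filtering logic is taken verbatim from that hunk; the sample node ids and
# classes below are made up purely for illustration.
config = {
    "drawflow": {
        "Home": {
            "data": {
                "1": {"class": "welcome"},
                "2": {"class": "DialogAgent", "data": {"args": {}}},
            },
        },
    },
}

if (
    "drawflow" in config
    and "Home" in config["drawflow"]
    and "data" in config["drawflow"]["Home"]
):
    # Unwrap the drawflow envelope, then drop the decorative "welcome" node
    # so that only real workflow nodes reach the DAG builder.
    config = config["drawflow"]["Home"]["data"]
    config = {
        k: v
        for k, v in config.items()
        if not ("class" in v and v["class"] == "welcome")
    }

assert list(config) == ["2"]  # only the non-"welcome" node survives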
@@ -717,7 +686,7 @@ def __init__( def compile(self) -> dict: return { - "imports": "from agentscope.service import ServiceFactory\n" + "imports": "from agentscope.service import ServiceToolkit\n" "from functools import partial\n" "from agentscope.service import bing_search", "inits": f"{self.var_name} = partial(bing_search," @@ -745,7 +714,7 @@ def __init__( def compile(self) -> dict: return { - "imports": "from agentscope.service import ServiceFactory\n" + "imports": "from agentscope.service import ServiceToolkit\n" "from functools import partial\n" "from agentscope.service import google_search", "inits": f"{self.var_name} = partial(google_search," @@ -773,7 +742,7 @@ def __init__( def compile(self) -> dict: return { - "imports": "from agentscope.service import ServiceFactory\n" + "imports": "from agentscope.service import ServiceToolkit\n" "from agentscope.service import execute_python_code", "inits": f"{self.var_name} = execute_python_code", "execs": "", @@ -799,7 +768,7 @@ def __init__( def compile(self) -> dict: return { - "imports": "from agentscope.service import ServiceFactory\n" + "imports": "from agentscope.service import ServiceToolkit\n" "from agentscope.service import read_text_file", "inits": f"{self.var_name} = read_text_file", "execs": "", @@ -825,7 +794,7 @@ def __init__( def compile(self) -> dict: return { - "imports": "from agentscope.service import ServiceFactory\n" + "imports": "from agentscope.service import ServiceToolkit\n" "from agentscope.service import write_text_file", "inits": f"{self.var_name} = write_text_file", "execs": "", @@ -840,7 +809,6 @@ def compile(self) -> dict: "Message": MsgNode, "DialogAgent": DialogAgentNode, "UserAgent": UserAgentNode, - "TextToImageAgent": TextToImageAgentNode, "DictDialogAgent": DictDialogAgentNode, "ReActAgent": ReActAgentNode, "Placeholder": PlaceHolderNode, diff --git a/tests/agent_test.py b/tests/agent_test.py index 0d3ff1d91..629e69d7c 100644 --- a/tests/agent_test.py +++ b/tests/agent_test.py @@ -26,9 +26,6 @@ def __init__( use_memory=( kwargs["use_memory"] if "use_memory" in kwargs else None ), - memory_config=( - kwargs["memory_config"] if "memory_config" in kwargs else None - ), ) diff --git a/tests/custom/test_model_config.json b/tests/custom/test_model_config.json new file mode 100644 index 000000000..5123a729c --- /dev/null +++ b/tests/custom/test_model_config.json @@ -0,0 +1,11 @@ +[ + { + "config_name": "qwen", + "model_type": "dashscope_chat", + "model_name": "qwen-max", + "api_key": "xxx", + "generate_args": { + "temperature": 0.5 + } + } +] \ No newline at end of file diff --git a/tests/knowledge_test.py b/tests/knowledge_test.py index dde7877bf..1fae3ed01 100644 --- a/tests/knowledge_test.py +++ b/tests/knowledge_test.py @@ -10,7 +10,6 @@ import agentscope from agentscope.manager import ASManager -from agentscope.rag import LlamaIndexKnowledge from agentscope.models import OpenAIEmbeddingWrapper, ModelResponse @@ -59,6 +58,8 @@ def tearDown(self) -> None: def test_llamaindexknowledge(self) -> None: """test llamaindexknowledge""" + from agentscope.rag.llama_index_knowledge import LlamaIndexKnowledge + dummy_model = DummyModel() knowledge_config = { diff --git a/tests/logger_test.py b/tests/logger_test.py index 1cc684b89..762b0d697 100644 --- a/tests/logger_test.py +++ b/tests/logger_test.py @@ -1,5 +1,6 @@ # -*- coding: utf-8 -*- """ Unit test for logger chat""" +import json import os import shutil import time @@ -29,13 +30,11 @@ def test_logger_chat(self) -> None: msg1 = Msg("abc", "def", "assistant") msg1.id 
= 1 msg1.timestamp = 1 - msg1._colored_name = "1" # pylint: disable=protected-access # url msg2 = Msg("abc", "def", "assistant", url="https://xxx.png") msg2.id = 2 msg2.timestamp = 2 - msg2._colored_name = "2" # pylint: disable=protected-access # urls msg3 = Msg( @@ -46,13 +45,11 @@ def test_logger_chat(self) -> None: ) msg3.id = 3 msg3.timestamp = 3 - msg3._colored_name = "3" # pylint: disable=protected-access # html labels msg4 = Msg("Bob", "abc None: ) as file: lines = file.readlines() - ground_truth = [ - '{"id": 1, "timestamp": 1, "name": "abc", "content": "def", ' - '"role": "assistant", "url": null, "metadata": null, ' - '"_colored_name": "1"}\n', - '{"id": 2, "timestamp": 2, "name": "abc", "content": "def", ' - '"role": "assistant", "url": "https://xxx.png", "metadata": null, ' - '"_colored_name": "2"}\n', - '{"id": 3, "timestamp": 3, "name": "abc", "content": "def", ' - '"role": "assistant", "url": ' - '["https://yyy.png", "https://xxx.png"], "metadata": null, ' - '"_colored_name": "3"}\n', - '{"id": 4, "timestamp": 4, "name": "Bob", "content": ' - '"abcabc None: """Tear down for LoggerTest.""" diff --git a/tests/memory_test.py b/tests/memory_test.py index 55e02c109..8a3fdbfd0 100644 --- a/tests/memory_test.py +++ b/tests/memory_test.py @@ -9,6 +9,7 @@ from agentscope.message import Msg from agentscope.memory import TemporaryMemory +from agentscope.serialize import serialize class TemporaryMemoryTest(unittest.TestCase): @@ -80,7 +81,8 @@ def test_invalid(self) -> None: with self.assertRaises(Exception) as context: self.memory.add(self.invalid) self.assertTrue( - f"Cannot add {self.invalid} to memory" in str(context.exception), + f"Cannot add {type(self.invalid)} to memory, must be a Msg object." + in str(context.exception), ) def test_load_export(self) -> None: @@ -88,10 +90,11 @@ def test_load_export(self) -> None: Test load and export function of TemporaryMemory """ memory = TemporaryMemory() - user_input = Msg(name="user", content="Hello") + user_input = Msg(name="user", content="Hello", role="user") agent_input = Msg( name="agent", content="Hello! 
How can I help you?", + role="assistant", ) memory.load([user_input, agent_input]) retrieved_mem = memory.export(to_mem=True) @@ -108,8 +111,8 @@ def test_load_export(self) -> None: ) memory.load(self.file_name_1) self.assertEqual( - memory.get_memory(), - [user_input, agent_input], + serialize(memory.get_memory()), + serialize([user_input, agent_input]), ) diff --git a/tests/message_test.py b/tests/message_test.py new file mode 100644 index 000000000..7612842e6 --- /dev/null +++ b/tests/message_test.py @@ -0,0 +1,44 @@ +# -*- coding: utf-8 -*- +"""The unit test for message module.""" + +import unittest + +from agentscope.message import Msg + + +class MessageTest(unittest.TestCase): + """The test cases for message module.""" + + def test_msg(self) -> None: + """Test the basic attributes in Msg object.""" + msg = Msg(name="A", content="B", role="assistant") + self.assertEqual(msg.name, "A") + self.assertEqual(msg.content, "B") + self.assertEqual(msg.role, "assistant") + self.assertEqual(msg.metadata, None) + self.assertEqual(msg.url, None) + + def test_formatted_msg(self) -> None: + """Test the formatted message.""" + msg = Msg(name="A", content="B", role="assistant") + self.assertEqual( + msg.formatted_str(), + "A: B", + ) + self.assertEqual( + msg.formatted_str(colored=True), + "\x1b[95mA\x1b[0m: B", + ) + + def test_serialize(self) -> None: + """Test the serialization and deserialization of Msg object.""" + msg = Msg(name="A", content="B", role="assistant") + serialized_msg = msg.to_dict() + deserialized_msg = Msg.from_dict(serialized_msg) + self.assertEqual(msg.id, deserialized_msg.id) + self.assertEqual(msg.name, deserialized_msg.name) + self.assertEqual(msg.content, deserialized_msg.content) + self.assertEqual(msg.role, deserialized_msg.role) + self.assertEqual(msg.metadata, deserialized_msg.metadata) + self.assertEqual(msg.url, deserialized_msg.url) + self.assertEqual(msg.timestamp, deserialized_msg.timestamp) diff --git a/tests/msghub_test.py b/tests/msghub_test.py index 9859c364e..b5adadb25 100644 --- a/tests/msghub_test.py +++ b/tests/msghub_test.py @@ -34,10 +34,10 @@ def setUp(self) -> None: def test_msghub_operation(self) -> None: """Test add, delete and broadcast operations""" - msg1 = Msg(name="a1", content="msg1") - msg2 = Msg(name="a2", content="msg2") - msg3 = Msg(name="a3", content="msg3") - msg4 = Msg(name="a4", content="msg4") + msg1 = Msg(name="a1", content="msg1", role="assistant") + msg2 = Msg(name="a2", content="msg2", role="assistant") + msg3 = Msg(name="a3", content="msg3", role="assistant") + msg4 = Msg(name="a4", content="msg4", role="assistant") with msghub(participants=[self.agent1, self.agent2]) as hub: self.agent1(msg1) @@ -73,7 +73,7 @@ def test_msghub(self) -> None: name="w1", content="This secret that my password is 123456 can't be" " leaked!", - role="wisper", + role="assistant", ), ] diff --git a/tests/openai_services_test.py b/tests/openai_services_test.py index 997b5fa6e..d875fc3b1 100644 --- a/tests/openai_services_test.py +++ b/tests/openai_services_test.py @@ -4,7 +4,6 @@ from unittest.mock import patch, MagicMock, mock_open import os import shutil -from openai._types import NOT_GIVEN from agentscope.manager import ASManager from agentscope.service.multi_modality.openai_services import ( @@ -177,7 +176,7 @@ def test_openai_text_to_image_service_error( # Ensure _download_file is not called in case of service error mock_download_file.assert_not_called() - @patch("agentscope.service.multi_modality.openai_services.OpenAI") + @patch("openai.OpenAI") 
@patch( "builtins.open", new_callable=mock_open, @@ -212,7 +211,7 @@ def test_openai_audio_to_text_success( {"transcription": "This is a test transcription."}, ) - @patch("agentscope.service.multi_modality.openai_services.OpenAI") + @patch("openai.OpenAI") @patch("builtins.open", new_callable=mock_open) def test_openai_audio_to_text_error( self, @@ -238,7 +237,7 @@ def test_openai_audio_to_text_error( result.content, ) - @patch("agentscope.service.multi_modality.openai_services.OpenAI") + @patch("openai.OpenAI") def test_successful_audio_generation(self, mock_openai: MagicMock) -> None: """Test the openai_text_to_audio function with a valid text.""" # Mocking the OpenAI API response @@ -264,7 +263,7 @@ def test_successful_audio_generation(self, mock_openai: MagicMock) -> None: expected_audio_path, ) # Check file save - @patch("agentscope.service.multi_modality.openai_services.OpenAI") + @patch("openai.OpenAI") def test_api_error_text_to_audio(self, mock_openai: MagicMock) -> None: """Test the openai_text_to_audio function with an API error.""" # Mocking an OpenAI API error @@ -352,7 +351,7 @@ def test_openai_image_to_text_error( self.assertEqual(result.status, ServiceExecStatus.ERROR) self.assertEqual(result.content, "API Error") - @patch("agentscope.service.multi_modality.openai_services.OpenAI") + @patch("openai.OpenAI") @patch("agentscope.service.multi_modality.openai_services._parse_url") @patch( ( @@ -411,9 +410,12 @@ def test_openai_edit_image_success( ) # Check if _handle_openai_img_response was called - mock_handle_response.assert_called_once_with(mock_response, None) + mock_handle_response.assert_called_once_with( + mock_response.model_dump(), + None, + ) - @patch("agentscope.service.multi_modality.openai_services.OpenAI") + @patch("openai.OpenAI") @patch("agentscope.service.multi_modality.openai_services._parse_url") def test_openai_edit_image_error( self, @@ -444,13 +446,12 @@ def test_openai_edit_image_error( mock_client.images.edit.assert_called_once_with( model="dall-e-2", image="parsed_original_image.png", - mask=NOT_GIVEN, prompt="Add a sun to the sky", n=1, size="256x256", ) - @patch("agentscope.service.multi_modality.openai_services.OpenAI") + @patch("openai.OpenAI") @patch("agentscope.service.multi_modality.openai_services._parse_url") @patch( ( @@ -464,7 +465,7 @@ def test_openai_create_image_variation_success( mock_parse_url: MagicMock, mock_openai: MagicMock, ) -> None: - """Test the openai_create_image_variation swith a valid image URL.""" + """Test the openai_create_image_variation with a valid image URL.""" # Mock OpenAI client mock_client = MagicMock() mock_openai.return_value = mock_client @@ -505,9 +506,12 @@ def test_openai_create_image_variation_success( ) # Check if _handle_openai_img_response was called - mock_handle_response.assert_called_once_with(mock_response, None) + mock_handle_response.assert_called_once_with( + mock_response.model_dump(), + None, + ) - @patch("agentscope.service.multi_modality.openai_services.OpenAI") + @patch("openai.OpenAI") @patch("agentscope.service.multi_modality.openai_services._parse_url") def test_openai_create_image_variation_error( self, diff --git a/tests/prompt_engine_test.py b/tests/prompt_engine_test.py deleted file mode 100644 index 046ef40ed..000000000 --- a/tests/prompt_engine_test.py +++ /dev/null @@ -1,137 +0,0 @@ -# -*- coding: utf-8 -*- -"""Unit test for prompt engine.""" -import unittest -from typing import Any - -import agentscope -from agentscope.manager import ModelManager -from agentscope.models import 
ModelResponse -from agentscope.models import OpenAIWrapperBase -from agentscope.prompt import PromptEngine - - -class PromptEngineTest(unittest.TestCase): - """Unit test for prompt engine.""" - - def setUp(self) -> None: - """Init for PromptEngineTest.""" - self.name = "white" - self.sys_prompt = ( - "You're a player in a chess game, and you are playing {name}." - ) - self.dialog_history = [ - {"name": "white player", "content": "Move to E4."}, - {"name": "black player", "content": "Okay, I moved to F4."}, - {"name": "white player", "content": "Move to F5."}, - ] - self.hint = "Now decide your next move." - self.prefix = "{name} player: " - - agentscope.init( - model_configs=[ - { - "model_type": "post_api", - "config_name": "open-source", - "api_url": "http://xxx", - "headers": {"Autherization": "Bearer {API_TOKEN}"}, - "parameters": { - "temperature": 0.5, - }, - }, - { - "model_type": "openai_chat", - "config_name": "gpt-4", - "model_name": "gpt-4", - "api_key": "xxx", - "organization": "xxx", - }, - ], - disable_saving=True, - ) - - def test_list_prompt(self) -> None: - """Test for list prompt.""" - - class TestModelWrapperBase(OpenAIWrapperBase): - """Test model wrapper.""" - - def __init__(self) -> None: - self.max_length = 1000 - - def __call__( - self, - *args: Any, - **kwargs: Any, - ) -> ModelResponse: - return ModelResponse(text="") - - def _register_default_metrics(self) -> None: - pass - - model = TestModelWrapperBase() - engine = PromptEngine(model) - - prompt = engine.join( - self.sys_prompt, - self.dialog_history, - self.hint, - format_map={"name": self.name}, - ) - - self.assertEqual( - [ - { - "role": "assistant", - "content": "You're a player in a chess game, and you are " - "playing white.", - }, - { - "name": "white player", - "role": "assistant", - "content": "Move to E4.", - }, - { - "name": "black player", - "role": "assistant", - "content": "Okay, I moved to F4.", - }, - { - "name": "white player", - "role": "assistant", - "content": "Move to F5.", - }, - { - "role": "assistant", - "content": "Now decide your next move.", - }, - ], - prompt, - ) - - def test_str_prompt(self) -> None: - """Test for string prompt.""" - model_manager = ModelManager.get_instance() - model = model_manager.get_model_by_config_name("open-source") - engine = PromptEngine(model) - - prompt = engine.join( - self.sys_prompt, - self.dialog_history, - self.hint, - self.prefix, - format_map={"name": self.name}, - ) - - self.assertEqual( - """You're a player in a chess game, and you are playing white. -white player: Move to E4. -black player: Okay, I moved to F4. -white player: Move to F5. -Now decide your next move. 
-white player: """, - prompt, - ) - - -if __name__ == "__main__": - unittest.main() diff --git a/tests/retrieval_from_list_test.py b/tests/retrieval_from_list_test.py index 52b30720b..f42529e3d 100644 --- a/tests/retrieval_from_list_test.py +++ b/tests/retrieval_from_list_test.py @@ -6,7 +6,7 @@ from agentscope.service import retrieve_from_list, cos_sim from agentscope.service.service_status import ServiceExecStatus -from agentscope.message import MessageBase, Msg +from agentscope.message import Msg from agentscope.memory.temporary_memory import TemporaryMemory from agentscope.models import OpenAIEmbeddingWrapper, ModelResponse @@ -40,11 +40,11 @@ def __call__(self, *args: Any, **kwargs: Any) -> ModelResponse: m2 = Msg(name="env", content="test2", role="assistant") m2.embedding = [0.5, 0.5] m2.timestamp = "2023-12-18 21:50:59" - memory = TemporaryMemory(config={}, embedding_model=dummy_model) + memory = TemporaryMemory(embedding_model=dummy_model) memory.add(m1) memory.add(m2) - def score_func(m1: MessageBase, m2: MessageBase) -> float: + def score_func(m1: Msg, m2: Msg) -> float: relevance = cos_sim(m1.embedding, m2.embedding).content time_gap = ( datetime.strptime(m1.timestamp, "%Y-%m-%d %H:%M:%S") diff --git a/tests/rpc_agent_test.py b/tests/rpc_agent_test.py index 0c62f9718..bda005882 100644 --- a/tests/rpc_agent_test.py +++ b/tests/rpc_agent_test.py @@ -1,4 +1,5 @@ # -*- coding: utf-8 -*- +# pylint: disable=W0212 """ Unit tests for rpc agent classes """ @@ -14,10 +15,10 @@ import agentscope from agentscope.agents import AgentBase, DistConf, DialogAgent from agentscope.manager import MonitorManager, ASManager +from agentscope.serialize import deserialize, serialize from agentscope.server import RpcAgentServerLauncher from agentscope.message import Msg from agentscope.message import PlaceholderMessage -from agentscope.message import deserialize from agentscope.msghub import msghub from agentscope.pipelines import sequentialpipeline from agentscope.rpc.rpc_agent_client import RpcAgentClient @@ -179,6 +180,13 @@ def setUp(self) -> None: agentscope.init( project="test", name="rpc_agent", + model_configs=os.path.abspath( + os.path.join( + os.path.abspath(os.path.dirname(__file__)), + "custom", + "test_model_config.json", + ), + ), save_dir="./.unittest_runs", save_log=True, ) @@ -202,35 +210,34 @@ def test_single_rpc_agent_server(self) -> None: role="system", ) result = agent_a(msg) - # get name without waiting for the server - self.assertEqual(result.name, "a") - self.assertEqual(result["name"], "a") - js_placeholder_result = result.serialize() - self.assertTrue(result._is_placeholder) # pylint: disable=W0212 + + # The deserialization without accessing the attributes will generate + # a PlaceholderMessage instance. 
+ js_placeholder_result = serialize(result) placeholder_result = deserialize(js_placeholder_result) self.assertTrue(isinstance(placeholder_result, PlaceholderMessage)) - self.assertEqual(placeholder_result.name, "a") - self.assertEqual( - placeholder_result["name"], # type: ignore[call-overload] - "a", - ) - self.assertTrue( - placeholder_result._is_placeholder, # pylint: disable=W0212 - ) + + # Fetch the attribute from distributed agent + self.assertTrue(result._is_placeholder) + self.assertEqual(result.name, "System") + self.assertFalse(result._is_placeholder) + # wait to get content self.assertEqual(result.content, msg.content) - self.assertFalse(result._is_placeholder) # pylint: disable=W0212 self.assertEqual(result.id, 0) + + # The second time to fetch the attributes from the distributed agent self.assertTrue( - placeholder_result._is_placeholder, # pylint: disable=W0212 + placeholder_result._is_placeholder, ) self.assertEqual(placeholder_result.content, msg.content) self.assertFalse( - placeholder_result._is_placeholder, # pylint: disable=W0212 + placeholder_result._is_placeholder, ) self.assertEqual(placeholder_result.id, 0) + # check msg - js_msg_result = result.serialize() + js_msg_result = serialize(result) msg_result = deserialize(js_msg_result) self.assertTrue(isinstance(msg_result, Msg)) self.assertEqual(msg_result.content, msg.content) @@ -250,7 +257,7 @@ def test_connect_to_an_existing_rpc_server(self) -> None: ) launcher.launch() client = RpcAgentClient(host=launcher.host, port=launcher.port) - self.assertTrue(client.is_alive()) # pylint: disable=W0212 + self.assertTrue(client.is_alive()) agent_a = DemoRpcAgent( name="a", ).to_dist( @@ -264,7 +271,7 @@ def test_connect_to_an_existing_rpc_server(self) -> None: ) result = agent_a(msg) # get name without waiting for the server - self.assertEqual(result.name, "a") + self.assertEqual(result.name, "System") # waiting for server self.assertEqual(result.content, msg.content) # test dict usage @@ -275,9 +282,9 @@ def test_connect_to_an_existing_rpc_server(self) -> None: ) result = agent_a(msg) # get name without waiting for the server - self.assertEqual(result["name"], "a") + self.assertEqual(result.name, "System") # waiting for server - self.assertEqual(result["content"], msg.content) + self.assertEqual(result.content, msg.content) # test to_str msg = Msg( name="System", @@ -285,7 +292,7 @@ def test_connect_to_an_existing_rpc_server(self) -> None: role="system", ) result = agent_a(msg) - self.assertEqual(result.formatted_str(), "a: {'text': 'test'}") + self.assertEqual(result.formatted_str(), "System: {'text': 'test'}") launcher.shutdown() def test_multi_rpc_agent(self) -> None: @@ -436,7 +443,7 @@ def test_multi_agent_in_same_server(self) -> None: host="127.0.0.1", port=launcher.port, ) - agent3._agent_id = agent1.agent_id # pylint: disable=W0212 + agent3._agent_id = agent1.agent_id agent3.client.agent_id = agent1.client.agent_id msg1 = Msg( name="System", @@ -474,7 +481,7 @@ def test_multi_agent_in_same_server(self) -> None: role="system", ) res2 = agent2(msg2) - self.assertRaises(ValueError, res2.__getattr__, "content") + self.assertRaises(ValueError, res2.update_value) # should override remote default parameter(e.g. 
name field)
        agent4 = DemoRpcAgentWithMemory(
@@ -557,7 +564,7 @@ def test_error_handling(self) -> None:
         """Test error handling"""
         agent = DemoErrorAgent(name="a").to_dist()
         x = agent()
-        self.assertRaises(AgentCallError, x.__getattr__, "content")
+        self.assertRaises(AgentCallError, x.update_value)

     def test_agent_nesting(self) -> None:
         """Test agent nesting"""
@@ -642,8 +649,8 @@ def test_agent_server_management_funcs(self) -> None:
             resp.update_value()
         memory = client.get_agent_memory(memory_agent.agent_id)
         self.assertEqual(len(memory), 2)
-        self.assertEqual(memory[0]["content"], "first msg")
-        self.assertEqual(memory[1]["content"]["mem_size"], 1)
+        self.assertEqual(memory[0].content, "first msg")
+        self.assertEqual(memory[1].content["mem_size"], 1)
         agent_lists = client.get_agent_list()
         self.assertEqual(len(agent_lists), 1)
         self.assertEqual(agent_lists[0]["agent_id"], memory_agent.agent_id)
@@ -669,7 +676,7 @@ def test_agent_server_management_funcs(self) -> None:
             ),
         )
         local_file_path = file.url
-        self.assertNotEqual(remote_file_path, local_file_path)
+        self.assertEqual(remote_file_path, local_file_path)
         with open(remote_file_path, "rb") as rf:
             remote_content = rf.read()
         with open(local_file_path, "rb") as lf:
@@ -677,6 +684,16 @@
         self.assertEqual(remote_content, local_content)
         agent_lists = client.get_agent_list()
         self.assertEqual(len(agent_lists), 2)
+        # test existing model config
+        DialogAgent(
+            name="dialogue",
+            sys_prompt="You are a helpful assistant.",
+            model_config_name="qwen",
+            to_dist={
+                "host": "localhost",
+                "port": launcher.port,
+            },
+        )
         # model not exists error
         self.assertRaises(
             Exception,
diff --git a/tests/serialize_test.py b/tests/serialize_test.py
new file mode 100644
index 000000000..819bda14b
--- /dev/null
+++ b/tests/serialize_test.py
@@ -0,0 +1,100 @@
+# -*- coding: utf-8 -*-
+# pylint: disable=protected-access
+"""Unit test for serialization."""
+import json
+import unittest
+
+from agentscope.message import Msg, PlaceholderMessage
+from agentscope.serialize import serialize, deserialize
+
+
+class SerializationTest(unittest.TestCase):
+    """The test cases for serialization."""
+
+    def test_serialize(self) -> None:
+        """Test the serialization function."""
+
+        msg1 = Msg("A", "A", "assistant")
+        msg2 = Msg("B", "B", "assistant")
+        placeholder = PlaceholderMessage(
+            host="localhost",
+            port=50051,
+        )
+
+        serialized_msg1 = serialize(msg1)
+        deserialized_msg1 = deserialize(serialized_msg1)
+        self.assertTrue(isinstance(serialized_msg1, str))
+        self.assertTrue(isinstance(deserialized_msg1, Msg))
+
+        msg1_dict = json.loads(serialized_msg1)
+        self.assertDictEqual(
+            msg1_dict,
+            {
+                "id": msg1.id,
+                "name": msg1.name,
+                "content": msg1.content,
+                "role": msg1.role,
+                "timestamp": msg1.timestamp,
+                "metadata": msg1.metadata,
+                "url": msg1.url,
+                "__module__": "agentscope.message.msg",
+                "__name__": "Msg",
+            },
+        )
+
+        serialized_list = serialize([msg1, msg2])
+        deserialized_list = deserialize(serialized_list)
+        self.assertTrue(isinstance(serialized_list, str))
+        self.assertTrue(
+            isinstance(deserialized_list, list)
+            and len(deserialized_list) == 2
+            and all(isinstance(msg, Msg) for msg in deserialized_list),
+        )
+
+        dict_list = json.loads(serialized_list)
+        self.assertListEqual(
+            dict_list,
+            [
+                {
+                    "id": msg1.id,
+                    "name": msg1.name,
+                    "content": msg1.content,
+                    "role": msg1.role,
+                    "timestamp": msg1.timestamp,
+                    "metadata": msg1.metadata,
+                    "url": msg1.url,
+                    "__module__": "agentscope.message.msg",
+                    "__name__": "Msg",
+                },
+                {
+                    "id": msg2.id,
+                    "name": msg2.name,
+                    "content": msg2.content,
+                    "role": msg2.role,
+                    "timestamp": msg2.timestamp,
+                    "metadata": msg2.metadata,
+                    "url": msg2.url,
+                    "__module__": "agentscope.message.msg",
+                    "__name__": "Msg",
+                },
+            ],
+        )
+
+        serialized_placeholder = serialize(placeholder)
+        deserialized_placeholder = deserialize(serialized_placeholder)
+        self.assertTrue(isinstance(serialized_placeholder, str))
+        self.assertTrue(
+            isinstance(deserialized_placeholder, PlaceholderMessage),
+        )
+
+        placeholder_dict = json.loads(serialized_placeholder)
+        self.assertDictEqual(
+            placeholder_dict,
+            {
+                "_host": placeholder._host,
+                "_port": placeholder._port,
+                "_task_id": placeholder._task_id,
+                "__module__": "agentscope.message.placeholder",
+                "__name__": "PlaceholderMessage",
+            },
+        )
diff --git a/tests/wiki_test.py b/tests/wiki_test.py
new file mode 100644
index 000000000..1ed4fe375
--- /dev/null
+++ b/tests/wiki_test.py
@@ -0,0 +1,112 @@
+# -*- coding: utf-8 -*-
+"""Wiki retriever test."""
+import unittest
+from unittest.mock import Mock, patch, MagicMock
+
+from agentscope.service import (
+    wikipedia_search,
+    wikipedia_search_categories,
+    ServiceResponse,
+    ServiceExecStatus,
+)
+
+
+class TestWikipedia(unittest.TestCase):
+    """Unit tests for the Wikipedia search services."""
+
+    @patch("agentscope.utils.common.requests.get")
+    def test_wikipedia_search_categories(
+        self,
+        mock_get: MagicMock,
+    ) -> None:
+        """Test wikipedia_search_categories"""
+        mock_response = Mock()
+        mock_dict = {
+            "query": {
+                "categorymembers": [
+                    {
+                        "pageid": 20,
+                        "ns": 0,
+                        "title": "This is a test",
+                    },
+                ],
+            },
+        }
+
+        expected_result = ServiceResponse(
+            status=ServiceExecStatus.SUCCESS,
+            content=[
+                {
+                    "pageid": 20,
+                    "ns": 0,
+                    "title": "This is a test",
+                },
+            ],
+        )
+
+        mock_response.json.return_value = mock_dict
+        mock_get.return_value = mock_response
+
+        test_entity = "Test"
+        limit_per_request = 500
+        params = {
+            "action": "query",
+            "list": "categorymembers",
+            "cmtitle": f"Category:{test_entity}",
+            "cmlimit": limit_per_request,
+            "format": "json",
+        }
+
+        results = wikipedia_search_categories(query=test_entity)
+
+        mock_get.assert_called_once_with(
+            "https://en.wikipedia.org/w/api.php",
+            params=params,
+            timeout=20,
+        )
+
+        self.assertEqual(
+            results,
+            expected_result,
+        )
+
+    @patch("agentscope.utils.common.requests.get")
+    def test_wikipedia_search(
+        self,
+        mock_get: MagicMock,
+    ) -> None:
+        """Test wikipedia_search"""
+
+        # Mock responses for extract query
+        mock_response = Mock()
+        mock_dict = {
+            "query": {
+                "pages": {
+                    "20": {
+                        "pageid": 20,
+                        "title": "Test",
+                        "extract": "This is the first paragraph.",
+                    },
+                    "21": {
+                        "pageid": 30,
+                        "title": "Test",
+                        "extract": "This is the second paragraph.",
+                    },
+                },
+            },
+        }
+
+        mock_response.json.return_value = mock_dict
+        mock_get.return_value = mock_response
+
+        expected_response = ServiceResponse(
+            status=ServiceExecStatus.SUCCESS,
+            content=(
+                "This is the first paragraph.\n"
+                "This is the second paragraph."
+            ),
+        )
+
+        response = wikipedia_search("Test")
+
+        self.assertEqual(expected_response, response)