2024
- [2024/11/30] 📢📺 We've released a project introduction video! You can find the video on Bilibili. If you like it, don't forget to give it a thumbs up 👍 and follow us!
- [2024/11/19] 🔒🛡️ Incorporated a sensitive word processing module to prevent the model from outputting harmful viewpoints.
- [2024/11/18] 🔧🌟 Completed Quick Start section documentation updates and resolved environment setup bugs.
- [2024/11/14] 🎉✨ Successfully quantized four models using the LMDeploy tool. The FunGPT family welcomes new members! Models are now available on HuggingFace.
- [2024/11/13] 🎉✨ Released two new 1.8B models, expanding the FunGPT family! The models are BanterBot_1_8b-chat and BoostBot_1_8b-chat.
- [2024/11/10] 🎉✨ Launched two brand-new 7B models, adding new members to the FunGPT family! The models are BanterBot-7b-chat and BoostBot-7b-chat.
- [2024/11/10] 🛠️🎯 Major project updates completed, including fixing known absolute path bugs.
- [2024/10/28] 🎈🥳 Achieved fine-tuning of large language models with XTuner and released the first version of BoostBot_v1!
- [2024/10/19] 🎉💬 Developed a toolchain to generate fine-tuned dialogue data using the Chat-GLM4 series.
- [2024/10/03] 🎨🐞 Beautified the system interface and fixed some known bugs.
- [2024/10/02] 🚀💻 Added model loading and unloading mechanisms to optimize GPU memory usage.
- [2024/10/01] 😄🐍 Integrated an exception handling module to enhance application stability.
- [2024/09/28] 👋👋 Completed initial testing and evaluation of individual functionalities for LLM (InternLM2.5_1.8b), ASR (Sensevoice), and TTS (ChatTTS).
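The model loading and unloading mechanism mentioned in the [2024/10/02] entry could, in a minimal form, look like the sketch below. This is a hypothetical illustration, not the repository's actual API: it keeps at most one heavyweight model (LLM, ASR, or TTS) resident at a time and frees the previous one before loading the next.

```python
import gc


class ModelManager:
    """Hypothetical sketch of a load/unload mechanism to limit GPU memory use.

    `loaders` maps a model name to a zero-argument factory that loads it.
    Only one model is kept resident; switching models frees the old one first.
    """

    def __init__(self, loaders):
        self.loaders = loaders
        self.current_name = None
        self.current_model = None

    def get(self, name):
        if name == self.current_name:
            return self.current_model      # already resident, reuse it
        self.unload()                      # free the previous model first
        self.current_model = self.loaders[name]()
        self.current_name = name
        return self.current_model

    def unload(self):
        self.current_model = None
        self.current_name = None
        gc.collect()                       # with torch, also call torch.cuda.empty_cache()
```

Usage would be something like `manager = ModelManager({"llm": load_llm, "asr": load_asr})`, then `manager.get("asr")` whenever speech recognition is needed; the LLM is released automatically before the ASR model loads.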
🍬 Sweet Compliment Mode:
- Mood Booster 🌟✨: When you’re feeling down, our Sweet Compliment Mode will instantly lift your spirits, just like tasting an incredibly sweet candy.
- Confidence Fuel Station 💪🌈: Meanwhile, our Praise Master will compliment you in the most suitable and unique ways, making your confidence soar.
🔪 Sharp Retort Mode:
- Stress Release Valve 💥😤: When you’re feeling overwhelmed, our Retort Mode provides an outlet to blow off steam while delivering sharp remarks.
- Humorous Roasting Machine 😂👅: The words of the Roast Master are not only sharp but also humorous and imaginative, letting you experience brain-twisting comebacks while having fun.
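The two modes presumably steer the fine-tuned models with different personas. The sketch below is a hypothetical illustration of how a mode could be mapped to a system prompt; the project's real templates live in `LLM/templates/template.py` and may look quite different.

```python
# Hypothetical persona prompts for the two modes (illustrative wording only).
MODE_PROMPTS = {
    "sweet": ("You are BoostBot, a warm-hearted praise master. Compliment the "
              "user in a sincere, specific, and uplifting way."),
    "retort": ("You are BanterBot, a quick-witted roast master. Reply with "
               "sharp but humorous comebacks, never genuinely hurtful."),
}


def build_messages(mode: str, user_text: str) -> list:
    """Assemble a chat-style message list for the chosen mode."""
    if mode not in MODE_PROMPTS:
        raise ValueError(f"unknown mode: {mode!r}")
    return [
        {"role": "system", "content": MODE_PROMPTS[mode]},
        {"role": "user", "content": user_text},
    ]
```

Switching modes then only changes the system message; the same underlying chat model serves both personalities.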
- 🤗 Master of Compliments: Generate sweet words to brighten your daily life.
- 🗯️ Roasting Expert: Tailored responses with sharp wit, engaging in a battle of wits with "me."
- 📊 Data Collection Guide: Fully open-source, helping you quickly grasp the creation of fine-tuning datasets.
- 📖 Complete LLM Workflow Guide: Comprehensive code and documentation, open-source, making it easy to get started.
- 🔊 Complete ASR Workflow Guide: Open everything to help you realize your dream of speech recognition.
- 🎙️ Complete TTS Workflow Guide: From basics to advanced, fully open-source with no reservations!
- 📂 Clear Structure: Detailed annotations and documentation ensure seamless onboarding.
- ⚡ Model Quantization: Lower the usage barrier and experience the magic of AI anytime, anywhere.
- 🎥 Video Tutorials: Stay tuned for our complete project introduction videos!
| Original_7b_BoostBot | BoostBot-7b |
|---|---|

| Original_7b_BanterBot | BanterBot-7b |
|---|---|
| Model | Base | Type | Link |
|---|---|---|---|
| BanterBot-7b-chat | internlm2_5_chat_7b | Pre-trained + QLoRA fine-tuning | HuggingFace / OpenXLab |
| BoostBot-7b-chat | internlm2_5_chat_7b | Pre-trained + QLoRA fine-tuning | HuggingFace / OpenXLab |
| BanterBot_1_8b-chat | internlm2_5_chat_1_8b | Pre-trained + QLoRA fine-tuning | HuggingFace / OpenXLab |
| BoostBot_1_8b-chat | internlm2_5_chat_1_8b | Pre-trained + QLoRA fine-tuning | HuggingFace / OpenXLab |
| BanterBot-7b-chat-w4a16-4bit | internlm2_5_chat_7b | Pre-trained + QLoRA fine-tuning + w4a16 quantization | HuggingFace / OpenXLab |
| BoostBot-7b-chat-w4a16-4bit | internlm2_5_chat_7b | Pre-trained + QLoRA fine-tuning + w4a16 quantization | HuggingFace / OpenXLab |
| BanterBot_1_8b-chat-w4a16-4bit | internlm2_5_chat_1_8b | Pre-trained + QLoRA fine-tuning + w4a16 quantization | HuggingFace / OpenXLab |
| BoostBot_1_8b-chat-w4a16-4bit | internlm2_5_chat_1_8b | Pre-trained + QLoRA fine-tuning + w4a16 quantization | HuggingFace / OpenXLab |
.
|-- ASR
| |-- __init__.py
| |-- models
| | `-- sensevoice.py
| |-- readme.md
| `-- weights
| `-- readme.md
|-- Assets
| |-- animation
| | `-- Animation_1.json
| |-- avatar
| | |-- BanterBot.jpg
| | |-- BoostBot.jpg
| | |-- BoostBot_v2.jpg
| | |-- User_v1.jpg
| | |-- person1.png
| | `-- person2.png
| |-- gif
| | |-- BanterBot-7b.gif
| | |-- BoostBot-7b.gif
| | |-- Original_7b_BanterBot.gif
| | `-- Original_7b_BoostBot.gif
| |-- image
| |-- svg
| | |-- FunGPT-logo.svg
| | `-- openxlab_logo.svg
| `-- video
| |-- BanterBot-7b.mp4
| |-- BoostBot-7b.mp4
| |-- Original_7b_BanterBot.mp4
| `-- Original_7b_BoostBot.mp4
|-- Data
| |-- BanterBot
| | |-- feasible_data
| | | `-- readme.md
| | |-- raw
| | | `-- readme.md
| | |-- readme.md
| | |-- sample
| | | `-- readme.md
| | |-- scripts
| | | |-- filter_bad_from_conv_data.py
| | | |-- filter_sensitive_words_from_conv_data.py
| | | |-- generate_mutil_conv_chatglm.py
| | | |-- generate_mutil_topic_chatglm.py
| | | |-- generate_self_congnitive_data.py
| | | `-- merge_conv_data_finetune.py
| | `-- sensitive_words
| | `-- readme.md
| `-- BoostBot
| |-- feasible_data
| | `-- readme.md
| |-- raw
| | `-- topic.txt
| |-- sample
| | `-- multi_conversation.jsonl
| `-- scripts
| |-- generate_mutil_conv_chatglm.py
| |-- generate_mutil_topic_chatglm.py
| |-- generate_self_congnitive_data.py
| `-- merge_conv_data_finetune.py
|-- Docs
| |-- pictures
| | `-- FunGPT.png
| |-- readme.md
| `-- user_guides
| `-- readme.md
|-- Finetune
| |-- BanterBot
| | `-- internlm2_5_chat_7b_qlora_alpaca_e3_copy.py
| |-- BaseModel
| | `-- readme.md
| `-- BoostBot
| `-- internlm2_5_chat_7b_qlora_alpaca_e3_copy.py
|-- LICENSE
|-- LLM
| |-- __init__.py
| |-- models
| | `-- internlm2_5_7b_chat.py
| |-- readme.md
| |-- templates
| | `-- template.py
| `-- weights
| `-- readme.md
|-- README.md
|-- README_en.md
|-- README_zh.md
|-- TTS
| |-- __init__.py
| |-- models
| | `-- chattts.py
| |-- readme.md
| `-- weights
| `-- readme.md
|-- Test
| |-- ASR
| | |-- example.py
| | `-- test_wav.wav
| |-- TTS
| | `-- example.ipynb
| `-- readme.md
|-- Utils
| |-- common_utils.py
| |-- configs.py
| |-- convert_gif.sh
| |-- data_utils.py
| |-- model_settings.py
| |-- model_utils.py
| `-- readme.md
|-- Work_dirs
| |-- ASR
| | `-- readme.md
| `-- TTS
| `-- readme.md
|-- __init__.py
|-- app.py
|-- env.yaml
|-- pages
| |-- 1_🍬💖_甜言模式.py
| |-- 2_💥😤_怼语模式.py
| `-- 3_🚀💫_待开发ing.py
|-- project_structure.txt
`-- requirements.txt
44 directories, 79 files
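The `Data/*/scripts` toolchain above generates multi-turn conversation data with ChatGLM and then merges it for fine-tuning. The sketch below is a hedged guess at what a merge step like `merge_conv_data_finetune.py` might do, assuming an XTuner-style JSONL schema with a top-level `conversation` field; the real script's schema and filtering may differ.

```python
import json
from pathlib import Path


def merge_conv_files(paths, out_path):
    """Hypothetical sketch: concatenate several JSONL conversation files into
    one fine-tuning dataset, dropping malformed lines and empty conversations.
    """
    records = []
    for path in paths:
        for line in Path(path).read_text(encoding="utf-8").splitlines():
            line = line.strip()
            if not line:
                continue
            try:
                rec = json.loads(line)
            except json.JSONDecodeError:
                continue                   # skip malformed lines
            if rec.get("conversation"):    # keep only non-empty conversations
                records.append(rec)
    with open(out_path, "w", encoding="utf-8") as f:
        for rec in records:
            f.write(json.dumps(rec, ensure_ascii=False) + "\n")
    return len(records)
```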
- Operating System: Ubuntu 20.04.6 LTS
- CPU: Intel(R) Xeon(R) Platinum 8369B CPU @ 2.90GHz (Online GPU Server)
- GPU: NVIDIA A100-SXM4-80GB, NVIDIA-SMI 535.54.03, Driver Version: 535.54.03, CUDA Version: 12.2
- Python: 3.10.0
Python==3.10.0
torch==2.4.1
torch-complex==0.4.4
torchaudio==2.4.1
torchvision==0.16.2
chattts==0.1.1
streamlit==1.38.0
audio-recorder-streamlit==0.0.10
git clone https://github.com/Alannikos/FunGPT
- Enter the root directory of the project
cd FunGPT
- Create a conda environment
conda create -n FunGPT python==3.10.0
- Install third-party libraries
pip install -r requirements.txt
# This will take approximately 1 hour
- Install git-lfs
As model files need to be downloaded, please ensure `git-lfs` is already installed. Linux users can install it using the following command:
apt install git-lfs
- Initialize LFS
git lfs install
- Download the TTS model to the specified path
# 1. Navigate to the specific directory
cd FunGPT/TTS/weights
# 2. Download the model from huggingface
git clone https://huggingface.co/2Noise/ChatTTS
- For users unable to access HuggingFace, download from the mirror source
# 2. Download the model from the mirror source
git clone https://hf-mirror.com/2Noise/ChatTTS
As model files need to be downloaded, please ensure `git-lfs` is already installed. Linux users can install it using the following command:
# Users who have already downloaded can ignore this command
apt install git-lfs
- Initialize LFS
git lfs install
- Download the ASR model to the specified path
# 1. Navigate to the specific directory
cd FunGPT/ASR/weights
# 2. Download the model from huggingface
git clone https://huggingface.co/FunAudioLLM/SenseVoiceSmall
- For users unable to access HuggingFace, download from the mirror source
# 2. Download the model from the mirror source
git clone https://hf-mirror.com/FunAudioLLM/SenseVoiceSmall
- Initialize LFS
git lfs install
- Download the LLM models to the specified path
# 1. Navigate to the specific directory
cd FunGPT/LLM/weights
# 2. Download the BanterBot-1_8b-chat model from huggingface
git clone https://huggingface.co/Alannikos768/BanterBot_1_8b-chat
# 3. Download the BoostBot-1_8b-chat model from huggingface
git clone https://huggingface.co/Alannikos768/BoostBot_1_8b-chat
- For users unable to access HuggingFace, download from OpenXLab
# 2. Download the BanterBot-1_8b-chat model from OpenXLab (for users in China)
git clone https://code.openxlab.org.cn/Alannikos/BanterBot-1_8b-chat.git
# 3. Download the BoostBot-1_8b-chat model from OpenXLab (for users in China)
git clone https://code.openxlab.org.cn/Alannikos/BoostBot-1_8b-chat.git
conda activate FunGPT
streamlit run app.py --server.address=127.0.0.1 --server.port=7860
- If running on a remote server, port forwarding is needed
ssh -p 46411 user@ip -CNg -L 7860:127.0.0.1:7860 -o StrictHostKeyChecking=no
- Then, to experience the application, open your browser, go to http://127.0.0.1:7860, and click the corresponding interface to experience FunGPT.
- Recording Project Video
- Support GPT-Sovits
- Support API access for large language models
- Improve the data generation guide section
- Enhance the large language model usage section
- Refine the text-to-speech module introduction section
- Improve the speech recognition usage section
- Add Sensitive Words Module
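The Sensitive Words Module on the roadmap could, in its simplest form, work like the sketch below. This is a hypothetical illustration; the project's actual filter (see `Data/BanterBot/sensitive_words` and the ChatSensitiveWords project it credits) may use a trie or DFA for efficiency.

```python
class SensitiveWordFilter:
    """Hypothetical sketch: mask blacklisted words before text reaches the user."""

    def __init__(self, words):
        # Match longer words first so "badword" is masked whole, not as "bad" + "word".
        self.words = sorted(words, key=len, reverse=True)

    def contains(self, text: str) -> bool:
        """Return True if any sensitive word occurs in the text."""
        return any(w in text for w in self.words)

    def mask(self, text: str, char: str = "*") -> str:
        """Replace every occurrence of each sensitive word with mask characters."""
        for w in self.words:
            text = text.replace(w, char * len(w))
        return text
```

A production filter would also need to handle homoglyphs and inserted punctuation, which a plain substring match misses.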
Thanks to the following open-source tools and projects for their support:
| Tool | Description |
|---|---|
| InternLM-Tutorial | An active and open large model training camp |
| Xtuner | Tool for model training and fine-tuning |
| LMDeploy | Tool for model quantization and deployment |
| Streamlit | Tool for efficiently building AI applications |
| DeepSpeed | Tool for model training and inference acceleration |
| PyTorch | Widely used deep learning framework |
| Project | Description |
|---|---|
| InternLM | A series of advanced open-source large language models |
| ChatTTS | An open-source text-to-speech project |
| SenseVoice | An open-source speech recognition project by Alibaba |
| LangGPT | An open-source project about structured prompts |
| GangLLM | An open-source project about engaging in banter with users |
| Linly-Talker | An open-source project on artificial intelligence systems |
| Yanjie | An open-source project for enhancing English learning with an AI assistant |
| wulewule | An open-source project featuring an AI assistant themed on Black Myth: Wukong |
| ChatSensitiveWords | An open-source project for sensitive-word filtering |
| Organization | Description |
|---|---|
| Shanghai Artificial Intelligence Laboratory | Thanks for the technical and platform support |
- Research Purposes: The FunGPT project and its associated resources are intended solely for academic research purposes and are strictly prohibited from any commercial use. If any third-party code is involved, please adhere strictly to its respective open-source license.
- Accuracy of Generated Content: Due to the influence of factors such as model algorithms, randomness, and quantization precision limitations, FunGPT cannot guarantee the accuracy or applicability of the generated content. Please exercise caution and independently assess the suitability of the content when using it.
- Legal Responsibility: This project does not assume any responsibility for the legality of the model's output content and its consequences. Users should ensure that their behavior complies with relevant laws and regulations and are responsible for the outcomes of their use.