Doc Updates for 1.2 V1
Update docs
slowsynapse authored Sep 18, 2024
2 parents 2015b49 + c6583fe commit d6c56b5
Showing 9 changed files with 270 additions and 15 deletions.
4 changes: 2 additions & 2 deletions docs/getting-started/installation.md
@@ -47,14 +47,14 @@ We will use llama.cpp for local LLM. However, you can see how to do this with ot
```bash
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
make llama-server -j 4

# download https://huggingface.co/TheBloke/OpenHermes-2.5-Mistral-7B-GGUF/blob/main/openhermes-2.5-mistral-7b.Q5_K_M.gguf
# save this file into llama.cpp/models folder

# when the download is complete you can start the server and load the model
# you can save this command in a file called start_server.sh
./llama-server -t 4 -c 4096 -ngl 35 -b 512 --mlock -m models/openhermes-2.5-mistral-7b.Q5_K_M.gguf
```

Now go to [http://127.0.0.1:8080](http://127.0.0.1:8080) in browser and test it works.
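You can also smoke-test the completion endpoint from the command line. This is a minimal sketch assuming the server started above is listening on the default port 8080; the fallback message is ours, not llama.cpp output:

```shell
# Send a short prompt to the llama.cpp server's /completion endpoint.
RESPONSE=$(curl -s http://127.0.0.1:8080/completion \
  -H "Content-Type: application/json" \
  -d '{"prompt": "The capital of France is", "n_predict": 16}' \
  || echo '{"error": "server not reachable"}')
echo "$RESPONSE"
```

If the server is up, the JSON reply includes a `content` field with the generated text.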
60 changes: 60 additions & 0 deletions docs/guides/using-alltalk.md
@@ -0,0 +1,60 @@
---
title: Using AllTalk
order: 14
---

Navigate to [AllTalk](https://github.com/erew123/alltalk_tts) and follow the instructions below to set up AllTalk using Docker or manually.

## Setting Up AllTalk Locally

### Method 1: Manual Setup

For manual setup, follow the official instructions provided [here](https://github.com/erew123/alltalk_tts/blob/main/README.md#-manual-installation---as-a-standalone-application).

1. Clone the AllTalk repository:
```bash
git clone https://github.com/erew123/alltalk_tts.git
cd alltalk_tts
```

2. Create a conda environment and activate it:
```bash
conda create --name alltalkenv python=3.11.5
conda activate alltalkenv
```

3. Install the required dependencies:
```bash
pip install -r system/requirements/requirements_standalone.txt
```

4. Run the AllTalk server:
```bash
python script.py
```

5. Access the server at `localhost:7851`.

### Method 2: Setup via Docker

1. Pull the AllTalk Docker image:
```bash
docker pull flukexp/alltalkenv
```

2. Run the AllTalk Docker container:
```bash
docker run -d -p 7851:7851 --name alltalk-server flukexp/alltalkenv
```

3. The server will be available at `localhost:7851`.
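Before wiring AllTalk into the app, you can confirm the server is responding. A minimal check, assuming AllTalk's `/api/ready` endpoint (it normally answers `Ready`); the `unreachable` fallback is ours:

```shell
# Probe the AllTalk server started above on port 7851.
STATUS=$(curl -s http://localhost:7851/api/ready || echo "unreachable")
echo "AllTalk status: $STATUS"
```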

## Make sure AllTalk is enabled for TTS:

```bash
Settings -> Text-to-Speech -> TTS Backend -> AllTalk
```

### Notes
- AllTalk can be used as a local text-to-speech backend in your application.
- For further details, refer to the official [AllTalk GitHub repository](https://github.com/erew123/alltalk_tts).
84 changes: 75 additions & 9 deletions docs/guides/using-coqui.md
@@ -5,26 +5,92 @@ order: 9

Navigate to [Coqui](https://coqui.ai/) and click on the **Get Started** button.

> Coqui.ai has been discontinued, but enthusiasts can still set up Coqui locally by following the instructions below.

## Setting Up Coqui Locally

### Method 1: Manual Setup

1. Create a directory for Coqui and navigate to it:
```bash
mkdir ~/coqui && cd ~/coqui
```

2. Download and install Miniconda (the installer below targets Apple Silicon macOS; choose the installer matching your platform):
```bash
curl https://repo.anaconda.com/miniconda/Miniconda3-latest-MacOSX-arm64.sh -o miniconda3.sh
chmod +x ./miniconda3.sh
./miniconda3.sh
```

3. Create a Conda environment and install Python 3.10:
```bash
conda create --name coqui python=3.10
conda activate coqui
```

4. Clone the Coqui TTS repository:
```bash
git clone https://github.com/coqui-ai/TTS.git
```

5. Install dependencies:
```bash
brew install mecab espeak
pip install numpy==1.21.6 flask_cors
conda install scipy scikit-learn Cython
```

6. Navigate to the cloned `TTS` directory and install Coqui TTS:
```bash
cd TTS && make install
```

7. Run the local Coqui TTS server:
```bash
python3 TTS/server/server.py --model_name tts_models/en/vctk/vits
```

### Method 2: Setup via Docker

1. Pull the Coqui TTS Docker image:
```bash
docker pull ghcr.io/coqui-ai/tts --platform linux/amd64
```

2. Run the Coqui TTS container:
```bash
docker run --rm -it -p 5002:5002 --entrypoint /bin/bash ghcr.io/coqui-ai/tts
```

3. Inside the container, install Flask CORS and run the server:
```bash
pip install flask_cors
python3 TTS/server/server.py --model_name tts_models/en/vctk/vits
```
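With the server running (either method), you can request a test synthesis. A sketch assuming the Coqui server's `/api/tts` endpoint and a VCTK speaker id such as `p225` (the VITS VCTK model is multi-speaker, so a `speaker_id` is required); the status messages are ours:

```shell
# Ask the local Coqui server on port 5002 for a short WAV clip.
RESULT=$(curl -sG "http://localhost:5002/api/tts" \
  --data-urlencode "text=Hello from Coqui" \
  --data-urlencode "speaker_id=p225" \
  -o hello.wav \
  && echo "wrote hello.wav" \
  || echo "Coqui server not reachable on port 5002")
echo "$RESULT"
```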

## Adding CORS Support

To ensure that the Coqui server allows cross-origin resource sharing (CORS), add the following lines to the Flask app in `TTS/server/server.py`, after the `app` object is created:

```python
from flask_cors import CORS
CORS(app)
```

## Make sure Coqui is enabled for TTS:

```bash
Settings -> Text-to-Speech -> TTS Backend -> Coqui
```

Proceed to make a new voice. When you are satisfied, copy the **Voice ID**:

```bash
Settings -> Text-to-Speech -> Coqui -> Voice ID
```

### Notes
- Coqui TTS can be used as a local text-to-speech backend in your application.
- If you want to explore more models or functionalities, refer to the official [Coqui TTS GitHub repository](https://github.com/coqui-ai/TTS).
4 changes: 2 additions & 2 deletions docs/guides/using-llamacpp.md
@@ -22,15 +22,15 @@ Navigate to [TheBloke/openchat_3.5-GGUF](https://huggingface.co/TheBloke/opencha
## Step 3 - Build the server

```bash
make llama-server
```

## Step 4 - Run the server

Read the [llama.cpp](https://github.com/ggerganov/llama.cpp/blob/master/README.md) documentation for more information on the server options. Or run `./server --help`.

```bash
./llama-server -t 4 -c 4096 -ngl 35 -b 512 --mlock -m models/openchat_3.5.Q5_K_M.gguf
```
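Once the server is running, you can query it from the command line. This sketch assumes a recent llama.cpp build, which exposes an OpenAI-compatible `/v1/chat/completions` route on the same port; the fallback message is ours:

```shell
# Send a chat request to llama-server's OpenAI-compatible endpoint.
REPLY=$(curl -s http://127.0.0.1:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"messages": [{"role": "user", "content": "Say hello"}], "max_tokens": 16}' \
  || echo '{"error": "server not reachable"}')
echo "$REPLY"
```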

## Step 5 - Enable the server in the client
83 changes: 83 additions & 0 deletions docs/guides/using-piper.md
@@ -0,0 +1,83 @@
---
title: Using Piper
order: 13
---

Navigate to [Piper](https://github.com/rhasspy/piper) and follow the setup instructions below to run Piper locally as a TTS backend.

## Setting Up Piper Locally

### Method 1: Setup via Docker

1. Clone the artibex/piper-http repository:
```bash
git clone git@github.com:artibex/piper-http.git
```
```

2. Navigate to the `piper-http` directory:
```bash
cd piper-http
```

3. Add CORS support by installing Flask CORS in the Dockerfile. To do this, locate the Dockerfile and add the following line:
```bash
RUN pip install flask_cors
```

4. Build the Piper Docker image:
```bash
docker build -t http-piper .
```

5. Run the Piper Docker container:
```bash
docker run --name piper -p 5000:5000 http-piper
```

6. To allow CORS within the Piper server, modify the `http_server.py` file inside the running Docker container:

- Navigate to the `piper-http` container's files:
```bash
docker exec -it piper /bin/bash
```
- Locate the `http_server.py` file:
```bash
cd /app/piper/src/python_run/piper
```
- Edit `http_server.py`: import `CORS` near the top, and register it after the Flask `app` object is created:
```python
from flask_cors import CORS
CORS(app)
```
7. Save the changes and restart the Piper server inside the container:
```bash
python3 http_server.py
```

### Method 2: Manual Setup

1. Clone the repository:
```bash
git clone https://github.com/flukexp/PiperTTS-API-Wrapper.git
```

2. Navigate to the project directory:
```bash
cd PiperTTS-API-Wrapper
```

3. Download Piper, install the sample voices, and start the Piper server:
```bash
./piper_installer.sh
```
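Once the server is up (either method), you can request a quick test clip. This is a sketch assuming the HTTP wrapper accepts a `text` query parameter on port 5000 and returns a WAV, as in Piper's upstream `http_server.py`; the status messages are ours:

```shell
# Request a short WAV from the local Piper HTTP server.
RESULT=$(curl -sG "http://localhost:5000" \
  --data-urlencode "text=Hello from Piper" \
  -o piper-test.wav \
  && echo "wrote piper-test.wav" \
  || echo "Piper server not reachable on port 5000")
echo "$RESULT"
```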
## Make sure Piper is enabled for TTS:

```bash
Settings -> Text-to-Speech -> TTS Backend -> Piper
```

### Notes
- Piper can be used as a local text-to-speech backend in your application.
- For more details on models and configurations, refer to the official [Piper GitHub repository](https://github.com/rhasspy/piper).
2 changes: 1 addition & 1 deletion docs/overview/amica-vs-other-tools.md
@@ -1,6 +1,6 @@
---
title: Amica vs Other Tools
order: 5
---

The landscape of 3D AI avatar software has seen a notable expansion in recent years, and within this dynamic environment, Amica has emerged as a standout solution.
40 changes: 40 additions & 0 deletions docs/overview/features.md
@@ -0,0 +1,40 @@
---
title: Features
order: 3
---

Read the [Local Setup](../getting-started/installation.md) guide if you are interested in getting everything running locally quickly.

## Amica Life Features

Amica Life is designed to operate in a semi-autonomous mode, incorporating animations, sleep functionality, function calling, a subconscious subroutine, and self-prompting features to create a seamless virtual assistant experience.

### Key Features of Amica Life

* **Subconscious Subroutine**: Amica stores compressed conversation logs with timestamps, enabling it to remember past interactions and influence future responses. This enhances conversation continuity and provides a more personalized experience over time.

* **Random Animation Playback**: Amica supports customizable VRM (Virtual Reality Model) avatars. These avatars can randomly trigger animations, dynamically express emotions, sync with speech, and react in real-time, providing an immersive interface.

* **News Function Calling**: Amica can autonomously retrieve real-time news and relevant information based on contextual triggers within conversations. This feature ensures users stay informed without the need for manual searches.

* **Self-Prompting System**: Amica’s self-prompting system can independently generate follow-up questions or perform actions based on ongoing discussions. This makes conversations more interactive and allows the assistant to anticipate user needs.

## Load/Save VRM Feature

Amica supports loading and saving customizable VRM avatars, allowing users to personalize their virtual assistant. Avatars can be loaded or saved for future use, with dynamic expression of emotions and lip-syncing in real-time.

## Load/Save Conversation Feature

Users can load and save chat conversations as `.txt` files. This feature is ideal for storing conversation histories, reviewing past discussions, or continuing from where a previous session left off.

## Wake Word Feature

Amica includes a wake word detection feature, allowing users to activate the assistant with a specific phrase. This enables hands-free operation and provides a more natural interaction with the system.

## Chat Mode Feature

In Chat Mode, Amica’s avatar minimizes into a corner of the screen, providing a compact interface. This feature is useful for multitasking, allowing users to interact with Amica while focusing on other tasks.

## Plugin System (Function Calling) Feature

Amica Life supports a customizable plugin system that allows users to add their own function calls. By placing scripts in the designated plugin folder, new functionalities can be seamlessly integrated, expanding Amica's capabilities.
2 changes: 1 addition & 1 deletion docs/overview/use-cases.md
@@ -1,6 +1,6 @@
---
title: Use Cases
order: 4
---

At its core, Amica is a platform for creating lifelike avatars that can be used in a variety of applications. The following are some examples of how Amica can be used to create engaging and interactive experiences for users.
6 changes: 6 additions & 0 deletions docs/tutorials/creating-new-avatars.md
@@ -7,6 +7,12 @@ order: 12

You can use [VRoid Studio](https://vroid.com/en/studio) to design your own avatar. You can also use [VRM](https://vrm.dev/en/) to convert your 3D model to VRM format.

## Designing custom expressions

Amica supports custom expressions from VRM models. To design and implement these expressions:

* Use VRoid Studio to design various facial expressions for your avatar.
* Ensure that your custom expressions are included in the VRM file when exporting.

## Downloading Avatars
