Merge pull request #145 from LuShanYanYu7/main
Update the English site: Quick Start -> Using Compute Instances
yangj1211 authored Nov 7, 2024
2 parents 418558b + fcb5be6 commit 5f5cd29
Showing 78 changed files with 77 additions and 105 deletions.
22 changes: 11 additions & 11 deletions docs/Built-in_tools/comfyui.md
@@ -16,7 +16,7 @@ This guide will walk you through deploying the official ComfyUI image on the `ne

First, log in to the `neolink.ai` platform and follow these steps:

1. In the main interface, navigate to **Computing Instances** and click **Create Instance**.
1. In the main interface, navigate to **GPU Instance** and click **Create Instance**.
2. **Single GPU mode is recommended**, as multi-GPU mode currently does not support ComfyUI deployment.
3. **GPU recommendation: 4090**, which offers the best price-performance ratio and compatibility. 3090 and H100 are also supported, but H20 is not yet available.
4. **Do not modify the data disk configuration**, and keep the default mount path.
@@ -25,32 +25,32 @@ First, log in to the `neolink.ai` platform and follow these steps:
6. Enter the **instance name** and verify that all other parameters are configured correctly.

**Example screenshots:**
<img src={require('../../static/img/comfyui/1.png').default} alt="tensorboard" style={{width: '1000px', height: 'auto'}} />
<img src={require('../../static/img/comfyui/2.png').default} alt="tensorboard" style={{width: '1000px', height: 'auto'}} />
<img src={require('../../static/en-img/comfyui/1.png').default} alt="tensorboard" style={{width: '1000px', height: 'auto'}} />
<img src={require('../../static/en-img/comfyui/2.png').default} alt="tensorboard" style={{width: '1000px', height: 'auto'}} />

### 2. Deploy the Instance

Before clicking the **Create Now** button, ensure the following settings are completed:

1. Uncheck the no-GPU mode to ensure the GPU is properly utilized.
2. Do not modify the data disk mount path to ensure the smooth deployment and operation of ComfyUI.
After completing the above steps, click the **Create Now** button. The system will start deploying the instance, which may take a few minutes depending on resource availability.
After completing the above steps, click the **Create** button. The system will start deploying the instance, which may take a few minutes depending on resource availability.

**Example screenshot:**
<img src={require('../../static/img/comfyui/3.png').default} alt="tensorboard" style={{width: '1000px', height: 'auto'}} />
<img src={require('../../static/en-img/comfyui/3.png').default} alt="tensorboard" style={{width: '1000px', height: 'auto'}} />

### 3. Complete the Deployment

Once the instance is created, you can launch ComfyUI using the built-in tools on the platform. Follow these steps:

1. Go to the **Computing Instances** page on the `neolink.ai` platform.
1. Go to the **GPU Instance** page on the `neolink.ai` platform.
2. In the deployed instance, find the **Built-in Tools** option.
3. Click **ComfyUI** to open the deployed ComfyUI image.
4. Click **Queue Prompt** to start generating images.

**Example screenshots:**
<img src={require('../../static/img/comfyui/4.png').default} alt="tensorboard" style={{width: '1000px', height: 'auto'}} />
<img src={require('../../static/img/comfyui/7.png').default} alt="tensorboard" style={{width: '1000px', height: 'auto'}} />
<img src={require('../../static/en-img/comfyui/4.png').default} alt="tensorboard" style={{width: '1000px', height: 'auto'}} />
<img src={require('../../static/en-img/comfyui/7.png').default} alt="tensorboard" style={{width: '1000px', height: 'auto'}} />
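For illustration, the **Queue Prompt** step can also be triggered programmatically through ComfyUI's HTTP API. This is a sketch only: the default port `8188` comes from the upstream ComfyUI project, and the workflow dict is a placeholder you would export from the UI — neither is specified by the platform docs.

```python
# Sketch: queue a workflow via ComfyUI's HTTP API instead of the UI button.
# Assumptions: ComfyUI listens on its upstream default port 8188; the
# workflow dict is a placeholder exported from the ComfyUI web interface.
import json
from urllib import request


def queue_prompt(workflow: dict, host: str = "127.0.0.1", port: int = 8188) -> dict:
    """POST a workflow to ComfyUI's /prompt endpoint and return its reply."""
    payload = json.dumps({"prompt": workflow}).encode("utf-8")
    req = request.Request(
        f"http://{host}:{port}/prompt",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req, timeout=30) as resp:
        return json.loads(resp.read())


# Example (requires a running ComfyUI instance):
# reply = queue_prompt(my_exported_workflow)
```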

For detailed tutorials, refer to the official documentation: [ComfyUI GitHub](https://github.com/comfyanonymous/ComfyUI).
Once launched, you can use the web interface for model inference and development, with all configurations and dependencies ready for immediate use.
@@ -68,15 +68,15 @@ Upload the model using the large file upload guide [Data Storage -> Storage Man
Uploaded files are stored in the `data` disk, so they need to be moved to the `checkpoint` folder.

1. In the deployed instance, find the **Built-in Tools** option and log in using SSH with the provided credentials.
<img src={require('../../static/img/comfyui/8.png').default} alt="tensorboard" style={{width: '1000px', height: 'auto'}} />
<img src={require('../../static/en-img/comfyui/8.png').default} alt="tensorboard" style={{width: '1000px', height: 'auto'}} />

2. Move the file to the `checkpoint` folder.
<img src={require('../../static/img/comfyui/9.png').default} alt="tensorboard" style={{width: '1000px', height: 'auto'}} />
<img src={require('../../static/en-img/comfyui/9.png').default} alt="tensorboard" style={{width: '1000px', height: 'auto'}} />
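The move in step 2 can be sketched in Python. Both directory paths below are assumptions, not confirmed by this guide — verify your instance's actual data-disk mount and ComfyUI install location before running.

```python
# Sketch: move uploaded model weights from the data disk into ComfyUI's
# checkpoints folder. Both paths are assumptions -- verify them on your
# own instance first.
import shutil
from pathlib import Path

data_dir = Path.home() / "data"                                # assumed data-disk mount
ckpt_dir = Path.home() / "ComfyUI" / "models" / "checkpoints"  # assumed ComfyUI layout

ckpt_dir.mkdir(parents=True, exist_ok=True)

if data_dir.is_dir():
    for weights in data_dir.glob("*.safetensors"):
        # shutil.move works across filesystems, unlike os.rename
        shutil.move(str(weights), str(ckpt_dir / weights.name))
        print(f"moved {weights.name}")
```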

### 3. Use the Model

Refresh the ComfyUI image to select the `ghostmix_v20Bakedvae.safetensors` model for creative tasks.
<img src={require('../../static/img/comfyui/10.png').default} alt="tensorboard" style={{width: '1000px', height: 'auto'}} />
<img src={require('../../static/en-img/comfyui/10.png').default} alt="tensorboard" style={{width: '1000px', height: 'auto'}} />

---

14 changes: 7 additions & 7 deletions docs/Built-in_tools/jupyterlab.md
@@ -24,26 +24,26 @@ All Neolink images currently come with JupyterLab as a built-in tool, helping us

## Steps to Use

Access the **Compute Instances** from the left sidebar, and under the built-in tools of the specific instance, click **JupyterLab** to enter the JupyterLab interface.
Access the **GPU Instances** from the left sidebar, and under the built-in tools of the specific instance, click **JupyterLab** to enter the JupyterLab interface.

<img src={require('../../static/img/qa/ssh-interrupt-1.png').default} alt="" style={{width: '600px', height: 'auto'}} />
<img src={require('../../static/en-img/qa/ssh-interrupt-1.png').default} alt="" style={{width: '600px', height: 'auto'}} />

<img src={require('../../static/img/jupyterlab/jupyterlab-2.png').default} alt="JupyterLab Interface" style={{width: '600px', height: 'auto'}} />
<img src={require('../../static/en-img/jupyterlab/jupyterlab-2.png').default} alt="JupyterLab Interface" style={{width: '600px', height: 'auto'}} />

## Basic Features

The JupyterLab interface mainly consists of two parts: the File Browser and the Workspace.

<img src={require('../../static/img/jupyterlab/jupyterlab-3.png').default} alt="JupyterLab Interface" style={{width: '500px', height: 'auto'}} />
<img src={require('../../static/en-img/jupyterlab/jupyterlab-3.png').default} alt="JupyterLab Interface" style={{width: '500px', height: 'auto'}} />

In the File Browser, double-click the folder name to navigate into directories. Click the file upload icon to select files for upload.

<img src={require('../../static/img/jupyterlab/jupyterlab-4.png').default} alt="JupyterLab Interface" style={{width: '200px', height: 'auto'}} />
<img src={require('../../static/en-img/jupyterlab/jupyterlab-4.png').default} alt="JupyterLab Interface" style={{width: '200px', height: 'auto'}} />

Right-click on a specific folder or file in the File Browser to manage files.

<img src={require('../../static/img/jupyterlab/jupyterlab-5.png').default} alt="JupyterLab Interface" style={{width: '300px', height: 'auto'}} />
<img src={require('../../static/en-img/jupyterlab/jupyterlab-5.png').default} alt="JupyterLab Interface" style={{width: '300px', height: 'auto'}} />

In the Workspace, click **Other > Terminal** to open a new terminal. JupyterLab does not terminate processes by default, even after closing terminal or notebook tabs.

<img src={require('../../static/img/jupyterlab/jupyterlab-6.png').default} alt="JupyterLab Interface" style={{width: '500px', height: 'auto'}} />
<img src={require('../../static/en-img/jupyterlab/jupyterlab-6.png').default} alt="JupyterLab Interface" style={{width: '500px', height: 'auto'}} />
16 changes: 8 additions & 8 deletions docs/Built-in_tools/ollama.md
@@ -14,7 +14,7 @@ When creating an instance, select an image with the **ollama-webui** prefix. Onc

**NOTE:** Not supported in no-GPU mode.

<img src={require('../../static/img/tools/ollama-1.png').default} alt="ollama" style={{width: '500px', height: 'auto'}} />
<img src={require('../../static/en-img/ollama/ollama-1.png').default} alt="ollama" style={{width: '500px', height: 'auto'}} />

Next, we will introduce how to interact with large language models using three methods: the command line, the WebUI interface, and the API.

@@ -28,7 +28,7 @@ The `ollama run` command is used to start and run a specified language model, en
ollama run <model_name>
```

<img src={require('../../static/img/tools/ollama-7.png').default} alt="ollama" style={{width: '500px', height: 'auto'}} />
<img src={require('../../static/en-img/ollama/ollama-7.png').default} alt="ollama" style={{width: '500px', height: 'auto'}} />

For more information on how to use Ollama commands, refer to the official Ollama [documentation](https://github.com/ollama/ollama/blob/main/README.md#quickstart).

@@ -38,23 +38,23 @@ Ollama WebUI is a web-based user interface that allows users to interact with mo

Click on **Ollama** under the built-in tools to access the WebUI login page. On your first visit, registration will be required.

<img src={require('../../static/img/tools/ollama-3.png').default} alt="ollama" style={{width: '300px', height: 'auto'}} />
<img src={require('../../static/en-img/ollama/ollama-3.png').default} alt="ollama" style={{width: '300px', height: 'auto'}} />

After successfully logging in, you will enter the WebUI interface. However, you cannot initiate a conversation just yet because no large language model has been deployed. Click on the **Select a Model** dropdown menu in the upper left corner, enter the desired model name in the text box, such as `llama3.1`, and click **Pull 'llama3.1' from Ollama.com**. The system will automatically begin downloading the model and display the download progress.

<img src={require('../../static/img/tools/ollama-4.png').default} alt="ollama" style={{width: '500px', height: 'auto'}} />
<img src={require('../../static/en-img/ollama/ollama-4.png').default} alt="ollama" style={{width: '500px', height: 'auto'}} />

Once the model is downloaded, you can start a conversation.

<img src={require('../../static/img/tools/ollama-5.png').default} alt="ollama" style={{width: '600px', height: 'auto'}} />
<img src={require('../../static/en-img/ollama/ollama-5.png').default} alt="ollama" style={{width: '600px', height: 'auto'}} />

### Ollama API

The Ollama API provides a simple way for developers to interact with Ollama models programmatically. Through the Ollama API, users can easily integrate and invoke large language models to perform various natural language processing tasks. Additionally, Ollama is compatible with OpenAI's [Chat Completions API](https://github.com/ollama/ollama/blob/main/docs/openai.md), allowing users to use more OpenAI-related tools and applications locally to interact with Ollama models without relying on external services.

Click on **Ollama** under the built-in tools to access the API page.

<img src={require('../../static/img/tools/ollama-2.png').default} alt="ollama" style={{width: '600px', height: 'auto'}} />
<img src={require('../../static/en-img/ollama/ollama-2.png').default} alt="ollama" style={{width: '600px', height: 'auto'}} />

The address shown in the browser's address bar is the Ollama API endpoint. Below, we will introduce how to use the API to perform Q&A tasks. For more information on the Ollama API, refer to the official [API documentation](https://github.com/ollama/ollama/blob/main/docs/api.md).
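The same Q&A flow can be sketched with only the Python standard library. The `/api/generate` path and default port `11434` follow the upstream Ollama API documentation; `llama3.1` matches the model pulled earlier in this guide, and the endpoint URL on your instance will differ.

```python
# Sketch: call Ollama's non-streaming /api/generate endpoint.
# Assumptions: endpoint path and default port 11434 per upstream Ollama
# API docs; replace the base URL with your instance's API endpoint.
import json
from urllib import request


def generate(prompt: str, model: str = "llama3.1",
             url: str = "http://localhost:11434") -> str:
    """Send a non-streaming generate request and return the answer text."""
    payload = json.dumps(
        {"model": model, "prompt": prompt, "stream": False}
    ).encode("utf-8")
    req = request.Request(
        f"{url}/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req, timeout=120) as resp:
        return json.loads(resp.read())["response"]


# Example (requires a running Ollama instance):
# print(generate("Why is the sky blue?"))
```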

@@ -79,7 +79,7 @@ The address shown in the browser's address bar is the Ollama API endpoint. Below
}'
```

<img src={require('../../static/img/tools/ollama-6.png').default} alt="ollama" style={{width: '800px', height: 'auto'}} />
<img src={require('../../static/en-img/ollama/ollama-6.png').default} alt="ollama" style={{width: '800px', height: 'auto'}} />

- Open Python library

@@ -105,4 +105,4 @@ The address shown in the browser's address bar is the Ollama API endpoint. Below
print(response.choices[0].message.content)
```

<img src={require('../../static/img/tools/ollama-8.png').default} alt="ollama" style={{width: '600px', height: 'auto'}} />
<img src={require('../../static/en-img/ollama/ollama-8.png').default} alt="ollama" style={{width: '600px', height: 'auto'}} />
2 changes: 1 addition & 1 deletion docs/Built-in_tools/tensorboard.md
@@ -10,7 +10,7 @@ Before introducing TensorBoard, let’s first understand TensorFlow.

TensorFlow is an open-source machine learning framework developed and maintained by Google. It provides a rich set of tools and libraries for building, training, and deploying various machine learning models, especially deep learning models. TensorFlow is widely used in fields such as image recognition, natural language processing, speech recognition, and recommendation systems. For example, in image recognition tasks, models can be trained to identify different objects, while in natural language processing, TensorFlow can be applied to text classification and machine translation. TensorBoard is the official visualization tool for TensorFlow. Neolink.AI has integrated this tool to help users better understand and analyze their model training processes.

<img src={require('../../static/img/tensorboard/tensorboard-1.png').default} alt="tensorboard" style={{width: '1000px', height: 'auto'}} />
<img src={require('../../static/en-img/tensorboard/tensorboard-1.png').default} alt="tensorboard" style={{width: '1000px', height: 'auto'}} />

## Application Scenarios

12 changes: 6 additions & 6 deletions docs/ContainerInstance/create instance.md
@@ -8,18 +8,18 @@ Compute instances are commonly used for algorithm development and model fine-tun

## Steps

1. Navigate to **Compute Instances** from the left sidebar and click **Create Instance**.
1. Navigate to **GPU Instance** from the left sidebar and click **Create Instance**.

<img src={require('../../static/img/getstarted/getstarted-1.png').default} alt="Create Instance" style={{width: '400px', height: 'auto'}} />
<img src={require('../../static/en-img/getstarted/getstarted-1.png').default} alt="Create Instance" style={{width: '400px', height: 'auto'}} />

2. On the **Create Instance** page, select the **Payment Method** (pay-as-you-go, daily, weekly, or monthly billing), **GPU Model** (3090, 4090, H20, H100), **GPU Quantity**, and **GPU Specification**. Choose an **Image** (with built-in deep learning frameworks), and click **Create**. For instructions on creating a private image, refer to the [Images](../ConfigureEnvironment/image.md) section. If you require additional storage for your data, adjust the disk size accordingly.
2. On the **Create Instance** page, select the **Payment Method** (pay-as-you-go, daily, weekly, or monthly billing), **GPU Model** (3090, 4090, H20, H100), **GPU Card**, and **GPU Specification**. Choose an **Image** (with built-in deep learning frameworks), and click **Create**. For instructions on creating a private image, refer to the [Images](../ConfigureEnvironment/image.md) section. If you require additional storage for your data, adjust the disk size accordingly.

**NOTE**: Since instances with H100 GPUs cannot access the internet, we have provided customized images (with the **-h100** suffix) for these instances. These images redirect `pip` and `apt` to local mirrors, ensuring smooth package and dependency installation even in an offline environment. The local mirrors contain the same packages as the Tsinghua repository.

<img src={require('../../static/img/getstarted/getstarted-create-instance-1.png').default} alt="Rent Instance" style={{width: '700px', height: 'auto'}} />
<img src={require('../../static/en-img/getstarted/getstarted-create-instance-1.png').default} alt="Rent Instance" style={{width: '700px', height: 'auto'}} />

3. Return to the **Compute Instances** page and wait for the instance to be created. The created instance will appear in the list with the status **Running**.
3. Return to the **GPU Instance** page and wait for the instance to be created. The created instance will appear in the list with the status **Running**.

![Compute Instance Creation - Example](../../static/img/containerinstance/containerinstance-1.png)
![Compute Instance Creation - Example](../../static/en-img/containerinstance/containerinstance-1.png)

4. After the instance is created, you can access it via JupyterLab or SSH for development and fine-tuning.
5 changes: 0 additions & 5 deletions docs/ContainerInstance/gpu_mode.md
@@ -5,8 +5,3 @@ sidebar_label: GPU Mode
---

Built-in tools such as ComfyUI require GPU resources to ensure efficient model performance and fast response times. In such cases, it is recommended to select GPU Mode (i.e., enable GPU usage) to fully utilize the computational power of the graphics card, enhancing processing speed and performance. Once GPU Mode is selected, the system will automatically allocate available GPUs for computation, making it suitable for resource-intensive tasks such as model inference and image generation.

<iframe width="640" height="360"
src={require('../../static/video/demo/gpu_mode.mp4').default}
frameborder="0" allowfullscreen>
</iframe>
27 changes: 11 additions & 16 deletions docs/ContainerInstance/gpuless_mode.md
@@ -1,28 +1,23 @@
---
sidebar_position: 6
title: GPU-less Mode
sidebar_label: GPU-less Mode
title: No-GPU Mode
sidebar_label: No-GPU Mode
---

For tasks such as writing and debugging code, uploading or downloading data to the instance, or presenting code that do not require GPU usage, you can choose to start the instance in GPU-less Mode. In GPU-less Mode, the instance will use a 2-core CPU and 4GB of memory without a GPU. The cost for this mode is ¥0.1/hour. Switching to GPU-less Mode will not affect your instance data before or after use, and you can still switch back to normal mode from the instance list for regular operations such as powering on and off.
For tasks such as writing and debugging code, uploading or downloading data to the instance, or presenting code that do not require GPU usage, you can choose to start the instance in No-GPU Mode. In No-GPU Mode, the instance will use a 2-core CPU and 4GB of memory without a GPU. The cost for this mode is ¥0.1/hour. Switching to No-GPU Mode will not affect your instance data before or after use, and you can still switch back to normal mode from the instance list for regular operations such as powering on and off.

<iframe width="640" height="360"
src={require('../../static/video/demo/gpuless_mode.mp4').default}
frameborder="0" allowfullscreen>
</iframe>
## Starting in No-GPU Mode

## Starting in GPU-less Mode
When creating an instance, select **No-GPU Mode** to start the instance with No-GPU specifications. No-GPU Mode is only available with pay-as-you-go billing.

When creating an instance, select **GPU-less Mode** to start the instance with GPU-less specifications. GPU-less Mode is only available with pay-as-you-go billing.
<img src={require('../../static/en-img/gpuless/gpuless-1.png').default} alt="Start in No-GPU Mode" style={{width: '700px', height: 'auto'}} />

<img src={require('../../static/gpuless/gpuless-1.png').default} alt="Start in GPU-less Mode" style={{width: '700px', height: 'auto'}} />
You will see that the newly created instance is labeled with the **No-GPU Mode** indicator.

You will see that the newly created instance is labeled with the **GPU-less Mode** indicator.
<img src={require('../../static/en-img/gpuless/gpuless-2.png').default} alt="No-GPU Instance" style={{width: '700px', height: 'auto'}} />

<img src={require('../../static/gpuless/gpuless-2.png').default} alt="GPU-less Instance" style={{width: '700px', height: 'auto'}} />
## Powering On in No-GPU Mode

## Powering On in GPU-less Mode
In the instance list, after an instance is powered off, you can select **Power On in No-GPU Mode**.

In the instance list, after an instance is powered off, you can select **Power On in GPU-less Mode**.

<img src={require('../../static/gpuless/gpuless-3.png').default} alt="Power On in GPU-less Mode" style={{width: '700px', height: 'auto'}} />
<img src={require('../../static/en-img/gpuless/gpuless-3.png').default} alt="Power On in No-GPU Mode" style={{width: '700px', height: 'auto'}} />
