diff --git a/README.md b/README.md
index faada83ef..cf4272cad 100644
--- a/README.md
+++ b/README.md
@@ -18,7 +18,7 @@
Reverse engineered `ChatGPT` proxy
-> Since `ArkoseLabs` is constantly updated and fixed, the project will be released with closed source patches. If you are worried about security issues, please do not use it.
+> The project has been released as closed source.
### Features
@@ -31,22 +31,7 @@ Reverse engineered `ChatGPT` proxy
### Installation
-If you need more detailed installation and usage information, please check [Document](https://github.com/gngpp/ninja/blob/main/doc/readme.md)
-
-- Platform
-
- - `x86_64-unknown-linux-musl`
- - `aarch64-unknown-linux-musl`
- - `armv7-unknown-linux-musleabi`
- - `armv7-unknown-linux-musleabihf`
- - `arm-unknown-linux-musleabi`
- - `arm-unknown-linux-musleabihf`
- - `armv5te-unknown-linux-musleabi`
- - `i686-unknown-linux-gnu`
- - `i586-unknown-linux-gnu`
- - `x86_64-pc-windows-msvc`
- - `x86_64-apple-darwin`
- - `aarch64-apple-darwin`
+If you need more detailed installation and usage information, please check the [wiki](https://github.com/gngpp/ninja/wiki)
### Contributing
diff --git a/README_zh.md b/README_zh.md
index f6c5d4156..2c6ad1c3e 100644
--- a/README_zh.md
+++ b/README_zh.md
@@ -17,7 +17,7 @@
逆向工程的 `ChatGPT` 代理
-> 由于`ArkoseLabs`不断地更新修复,项目将应用闭源补丁进行发布,如果你担心安全问题,请你不要使用它。
+> 项目已闭源发布。
### 特性
@@ -30,7 +30,7 @@
### 安装
-如果您需要更详细的安装与使用信息,请查看[文档](https://github.com/gngpp/ninja/blob/main/doc/readme_zh.md)
+如果您需要更详细的安装与使用信息,请查看[wiki](https://github.com/gngpp/ninja/wiki)
### 贡献
diff --git a/doc/readme.md b/doc/readme.md
deleted file mode 100644
index febc09241..000000000
--- a/doc/readme.md
+++ /dev/null
@@ -1,392 +0,0 @@
-
English | [简体中文](https://github.com/gngpp/ninja/blob/main/doc/readme_zh.md)
-
-If the project is helpful to you, please consider [donating support](https://github.com/gngpp/gngpp/blob/main/SPONSOR.md#sponsor-my-open-source-works) to keep it maintained, or you can pay for consulting and technical support services.
-
-### Install
-
-- #### Platform
-
- - `x86_64-unknown-linux-musl`
- - `aarch64-unknown-linux-musl`
- - `armv7-unknown-linux-musleabi`
- - `armv7-unknown-linux-musleabihf`
- - `arm-unknown-linux-musleabi`
- - `arm-unknown-linux-musleabihf`
- - `armv5te-unknown-linux-musleabi`
- - `i686-unknown-linux-gnu`
- - `i586-unknown-linux-gnu`
- - `x86_64-pc-windows-msvc`
- - `x86_64-apple-darwin`
- - `aarch64-apple-darwin`
-
-- #### Ubuntu(Other Linux)
-
-GitHub [Releases](https://github.com/gngpp/ninja/releases/latest) provides precompiled deb packages and binaries. Using Ubuntu as an example:
-
-```shell
-wget https://github.com/gngpp/ninja/releases/download/v0.9.28/ninja-0.9.28-x86_64-unknown-linux-musl.tar.gz
-tar -xf ninja-0.9.28-x86_64-unknown-linux-musl.tar.gz
-mv ./ninja /bin/ninja
-ninja run
-
-# Online update version
-ninja update
-
-# Run the process in the foreground
-ninja run
-
-# Run the process in the background
-ninja start
-
-# Stop background process
-ninja stop
-
-# Restart background process
-ninja restart
-
-# Check background process status
-ninja status
-
-# View background process logs
-ninja log
-
-# Generate configuration file template, serve.toml
-ninja gt -o serve.toml
-
-# Specify the configuration file template to run, bypassing the cumbersome cli commands
-ninja (run/start/restart) -C serve.toml
-```
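-
-If you prefer a service manager over the built-in `start`/`stop` daemon commands, the binary can also be run under systemd. This is only a minimal sketch; the unit path, description, and binary location are assumptions to adapt:
-
-```shell
-# Assumed unit path and binary location; adjust to your setup
-sudo tee /etc/systemd/system/ninja.service > /dev/null <<'EOF'
-[Unit]
-Description=ninja - reverse engineered ChatGPT proxy
-After=network-online.target
-
-[Service]
-ExecStart=/bin/ninja run
-Restart=on-failure
-
-[Install]
-WantedBy=multi-user.target
-EOF
-
-sudo systemctl daemon-reload
-sudo systemctl enable --now ninja
-```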
-
-- #### Docker
-
-> The image is available as both `gngpp/ninja:latest` and `ghcr.io/gngpp/ninja:latest`
-
-```shell
-docker run --rm -it -p 7999:7999 --name=ninja \
- -e LOG=info \
- -v ~/.ninja:/root/.ninja \
- ghcr.io/gngpp/ninja:latest run
-```
-
-- Docker Compose
-
-> If `CloudFlare Warp` is not supported in your region (e.g. China), remove it; you can also remove it if your `VPS` IP can reach `OpenAI` directly
-
-```yaml
-version: '3'
-
-services:
- ninja:
- image: gngpp/ninja:latest
- container_name: ninja
- restart: unless-stopped
- environment:
- - TZ=Asia/Shanghai
- - PROXIES=socks5://warp:10000
- command: run
- ports:
- - "8080:7999"
- depends_on:
- - warp
-
- warp:
- container_name: warp
- image: ghcr.io/gngpp/warp:latest
- restart: unless-stopped
-
- watchtower:
- container_name: watchtower
- image: containrrr/watchtower
- volumes:
- - /var/run/docker.sock:/var/run/docker.sock
- command: --interval 3600 --cleanup
- restart: unless-stopped
-
-```
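-
-Assuming the file above is saved as `docker-compose.yaml`, the stack can be started and checked like this:
-
-```shell
-# Start all services in the background
-docker compose up -d
-
-# Follow the proxy logs; the service listens on 7999 inside the container (mapped to 8080 on the host)
-docker logs -f ninja
-```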
-
-### ArkoseLabs
-
-Sending a `GPT-4`/`GPT-3.5` conversation or creating an `API-Key` requires an `Arkose Token` to be sent as a parameter.
-
-1) Use HAR
-
- Support HAR feature pooling: multiple HAR files can be uploaded at the same time and are used with a round-robin strategy. To obtain a HAR file:
  - First, log in to the `ChatGPT` `GPT-4` chat interface and press `F12` to open the browser console. Find `network` (shown as `网络` if your console is in Chinese) and click it to switch to the browser's network capture view.
  - With the console open, send a `GPT-4` chat message, then find `filter` in the capture view (shown as `过滤` in Chinese) and filter on this address: `https://tcr9i.chat.openai.com/fc/gt2/public_key/35536E1E-65B4-4D96-9D97-6ADB7EFF8147`
  - At least one record will be filtered out. Pick any one and download the HAR log of this request: right-click the record and choose `Save all as HAR with content` (shown as `以 HAR 格式保存所有内容` in Chinese).
  - Use the startup parameter `--arkose-har-dir` to specify the HAR directory path (if no path is given, the default `~/.ninja` is used and HAR files can be uploaded and updated directly). If you use docker and do not specify a directory, you only need to map the `~/.ninja` working directory. Uploading and updating HAR via the WebUI is supported at the request path `/har/upload`, with the optional upload authentication parameter `--auth-key` (see the upload sketch after this list).
-
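-A captured HAR file can also be pushed from the command line. This is only a sketch: the multipart field name `file` and the file path are assumptions, and the `Authorization` header is only needed when `--auth-key` is set:
-
-```shell
-# Upload a HAR file to a local instance (multipart field name assumed)
-curl -H "Authorization: Bearer $AUTH_KEY" \
-  -F "file=@./chat.openai.com.har" \
-  http://127.0.0.1:7999/har/upload
-```
-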
-2) Use [Fcsrv](https://github.com/gngpp/fcsrv) / [YesCaptcha](https://yescaptcha.com/i/1Cc5i4) / [CapSolver](https://dashboard.capsolver.com/passport/register?inviteCode=y7CtB_a-3X6d)
-
-
-- `Fcsrv` / `YesCaptcha` / `CapSolver` is recommended to be used with HAR. When the verification code is generated, the parser is called for processing.
-
-The chosen platform performs the captcha solving: the startup parameter `--arkose-solver` selects the platform (default `Fcsrv`), `--arkose-solver-key` is set to the `Client Key`, and a custom submission node URL can be chosen, for example `--arkose-solver-endpoint http://localhost:8000/task`. All of `Fcsrv`/`YesCaptcha`/`CapSolver` are supported.
-
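-Putting this together, a HAR pool and a solver can be combined on the command line, for example (the client key is a placeholder):
-
-```shell
-# HAR pool as the primary source, a self-hosted Fcsrv instance as the fallback solver
-ninja run \
-  --arkose-har-dir ~/har \
-  --arkose-solver fcsrv \
-  --arkose-solver-key "$CLIENT_KEY" \
-  --arkose-solver-endpoint http://localhost:8000/task
-```
-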
-OpenAI has since updated `Login` to also require `Arkose Token` verification. The solution is the same as for `GPT-4`: specify the HAR file directory with the startup parameter `--arkose-auth-har-dir`. Creating an API-Key requires uploading a HAR feature file captured from the Platform; it is obtained the same way as above.
-
-`OpenAI` has dropped `Arkose` verification for `GPT-3.5`, so it can be used without uploading HAR feature files (already uploaded files are unaffected). Because `Arkose` verification may be switched back on later, the startup parameter `--arkose-gpt3-experiment` enables the `GPT-3.5` `Arkose` verification flow; the WebUI is not affected. If you encounter `418 I'm a teapot`, enable `--arkose-gpt3-experiment` and upload `HAR` features; if no `GPT-3.5` features are available, `GPT-4` features can be used instead. If it still does not work, try enabling `--arkose-gpt3-experiment-solver`, which may use a third-party platform to solve the captcha.
-
-> The above is only a prerequisite for using the `API`; the `WebUI` does not require it.
-
-### Http Server
-
-#### Public interface, `*` represents any `URL` suffix
-
-- ChatGPT-API
- - `/public-api/*`
- - `/backend-api/*`
-
-- OpenAI-API
- - `/v1/*`
-
-- Platform-API
- - `/dashboard/*`
-
-- ChatGPT-To-API
- - `/v1/chat/completions`
 > For `ChatGPT`-to-`API` usage, use the `AccessToken` directly as the `API Key` (see the example after this list)
-
-- Files-API
- - `/files/*`
 > Proxy for image and file upload/download; API URLs returned by the `/backend-api/files` interface are rewritten to `/files/*`
-
-- Arkose-API
- - `/auth/arkose_token/:pk`
 > where `:pk` is the Arkose type ID; for example, to request Arkose for GPT-4: `/auth/arkose_token/35536E1E-65B4-4D96-9D97-6ADB7EFF8147`. If `GPT-4` starts to enforce the blob parameter, append the `AccessToken` -> `/auth/arkose_token/35536E1E-65B4-4D96-9D97-6ADB7EFF8147?blob=your_access_token` (see the example after this list)
-
-- Authorization
- > Except for login, use `Authorization: Bearer xxxx`, [Python Example](https://github.com/gngpp/ninja/blob/main/doc/authorization.md)
-
 - Login: `/auth/token`, with the optional form parameter `option`. It defaults to `web` login and returns `AccessToken` and `Session`; with `apple`/`platform` it returns `AccessToken` and `RefreshToken`. The `ChatGPT App` login method for the `Apple` platform requires a `preauth_cookie` endpoint, set via the startup parameter `--preauth-endpoint https://example.com/api`. The free endpoint provided by [xyhelper](https://github.com/xyhelper) is recommended: `https://tcr9i.xyhelper.cn/auth/preauth`
 - Refresh `RefreshToken`: `POST /auth/refresh_token`, supports `platform`/`apple`
- - Revoke `RefreshToken`: `POST /auth/revoke_token`, supports `platform`/`apple` revocation
- - Refresh `Session`: `POST /auth/refresh_session`, use the `Session` returned by `web` login to refresh
- - Obtain `Sess token`: `POST /auth/sess_token`, use `AccessToken` of `platform` to obtain
- - Obtain `Billing`: `GET /auth/billing`, use `sess token` to obtain
-
- ```shell
- # Generate certificate
- ninja genca
-
- ninja run --pbind 0.0.0.0:8888
-
 # Configure your phone's network proxy to point at the listening address, e.g. http://192.168.1.1:8888
 # Then open http://192.168.1.1:8888/preauth/cert in a browser, download and install the certificate, trust it, then open the iOS ChatGPT app and enjoy
- ```
-
-#### API documentation
-
-- Platform API [doc](https://platform.openai.com/docs/api-reference)
-- Backend API [doc1](https://github.com/gngpp/ninja/blob/main/doc/rest.http) or [doc2](https://github.com/gngpp/ninja/blob/main/doc/python-requests.ipynb)
 > The examples cover only part of the API; the official `API` is proxied under `/backend-api/*`
-
-#### Basic services
-
-- ChatGPT WebUI
-- Expose `ChatGPT-API`/`OpenAI-API` proxies
-- `API` prefix is consistent with the official one
-- `ChatGPT` to `API`
-- Can access third-party clients
-- Can access IP proxy pool to improve concurrency
-- Supports obtaining RefreshToken
-- Support file feature pooling in HAR format
-
-#### Parameter Description
-
-> **Default working directory `~/.ninja`**
-
-- `--level`, environment variable `LOG`, log level: default info
-- `--bind`, environment variable `BIND`, service listening address: default 0.0.0.0:7999
-- `--tls-cert`, environment variable `TLS_CERT`, TLS certificate public key. Supported formats: EC/PKCS8/RSA
-- `--tls-key`, environment variable `TLS_KEY`, TLS certificate private key
-- `--enable-webui`, the built-in WebUI is disabled by default; use this parameter to enable it. `--arkose-endpoint` must also be set: if your public domain is `example.com`, set `--arkose-endpoint https://example.com` (see the example after this list)
-- `--enable-file-proxy`, environment variable `ENABLE_FILE_PROXY`, turns on the file upload and download API proxy
-- `--enable-arkose-proxy`, enable obtaining `Arkose Token` endpoint
-- `--enable-direct`, enable direct connection, add the IP bound to the egress `interface` to the proxy pool
-- `--proxies`, proxy, supports proxy pool, multiple proxies are separated by `,`, format: protocol://user:pass@ip:port
-- `--no-keepalive` turns off Http Client Tcp keepalive
-- `--fastest-dns` Use the built-in fastest DNS group
-- `--visitor-email-whitelist`, whitelist restriction, the restriction is for AccessToken, the parameter is the email address, multiple email addresses are separated by `,`
-- `--cookie-store`, enable Cookie Store
-- `--cf-site-key`, Cloudflare turnstile captcha site key
-- `--cf-secret-key`, Cloudflare turnstile captcha secret key
-- `--arkose-endpoint`, ArkoseLabs endpoint, for example: `https://client-api.arkoselabs.com`
-- `--arkose-har-dir`, ArkoseLabs HAR feature file directory path, for example: `~/har`, if the path is not specified, the default path `~/.ninja` will be used
-- `--arkose-solver`, ArkoseLabs solver platform, for example: yescaptcha
-- `--arkose-solver-key`, ArkoseLabs solver client key
-- `--arkose-gpt3-experiment`, to enable GPT-3.5 ArkoseLabs experiment
-- `--arkose-gpt3-experiment-solver`, to open the GPT-3.5 ArkoseLabs experiment, you need to upload the HAR feature file, and the correctness of the ArkoseToken will be verified
-- `--impersonate-uas`, you can optionally simulate UA randomly. Use `,` to separate multiple ones. Please see the command manual for details.
-- `--auth-key`, `API` authentication `Key` of `Login`/`HAR Manager`/`Arkose`, sent using `Authorization Bearer` format
-- `--preauth-endpoint`, enable the `preauth_cookie` endpoint for `Apple` platform `ChatGPT App` login
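-
-For example, a public-facing instance with TLS and the WebUI enabled might be started like this (domain and certificate paths are placeholders):
-
-```shell
-ninja run \
-  --bind 0.0.0.0:443 \
-  --tls-cert /etc/ninja/cert.pem \
-  --tls-key /etc/ninja/key.pem \
-  --enable-webui \
-  --arkose-endpoint https://example.com
-```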
-
-##### Advanced proxy usage
-
-Proxies are split by built-in protocol and proxy type. Built-in protocols: `all/api/auth/arkose`, where `all` targets all clients, `api` targets all `OpenAI API` traffic, `auth` targets authorization/login, and `arkose` targets ArkoseLabs. Proxy types: `interface/proxy/ipv6_subnet`, where `interface` is a bound egress `IP` address, `proxy` is an upstream proxy using `http/https/socks5/socks5h`, and `ipv6_subnet` uses a random IP address within an IPv6 subnet as the proxy. The format is `proto|proxy`, for example: **`all|socks5://192.168.1.1:1080, api|10.0.0.1, auth|2001:db8::/32, http://192.168.1.1:1081`**; if the built-in protocol is omitted, it defaults to `all`.
-
-> Regarding the proxy `http/https/socks5/socks5h`, only when the `socks5h` protocol is used, the DNS resolution will go through the proxy resolution, otherwise the `local`/`built-in` DNS resolution will be used
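-
-A concrete invocation of the proxy syntax above (all addresses are placeholders):
-
-```shell
-# socks5h for all traffic, a dedicated upstream proxy for auth, plus direct egress
-ninja run --enable-direct \
-  --proxies "all|socks5h://user:pass@192.168.1.1:1080,auth|http://192.168.1.2:3128"
-```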
-
-##### Proxy usage rules
-
-1) `interface` \ `proxy` \ `ipv6_subnet` are all present
-
-When `--enable-direct` is turned on, `proxy` + `interface` is used as the proxy pool; if `--enable-direct` is not turned on, `proxy` is only used when there are at least two `proxy` entries, otherwise `ipv6_subnet` is used as the proxy pool and `interface` as the fallback address.
-
-2) `interface` \ `proxy` are present
-
-When `--enable-direct` is turned on, `proxy` + `interface` will be used as the proxy pool; if `--enable-direct` is not turned on, only `proxy` will be used as the proxy pool.
-
-3) `proxy` \ `ipv6_subnet` are present
-
-The rules are the same as (1), except that there is no `interface` as the fallback address.
-
-4) `interface` \ `ipv6_subnet` are present
-
-When `--enable-direct` is turned on and there are at least two `interface` addresses, `interface` is used as the proxy pool; if `--enable-direct` is not turned on, `ipv6_subnet` is used as the proxy pool and `interface` as the fallback address.
-
-5) Only `proxy` is present
-
-When `--enable-direct` is enabled, `proxy` + default direct connection is used as the proxy pool; when `--enable-direct` is not enabled, only `proxy` is used as the proxy pool
-
-6) Only `ipv6_subnet` is present
-
-Regardless of whether `--enable-direct` is turned on, `ipv6_subnet` will be used as the proxy pool
-
-### Command Manual
-
-```shell
-$ ninja --help
-Reverse engineered ChatGPT proxy
-
-Usage: ninja [COMMAND]
-
-Commands:
- run Run the HTTP server
- stop Stop the HTTP server daemon
- start Start the HTTP server daemon
- restart Restart the HTTP server daemon
- status Status of the Http server daemon process
- log Show the Http server daemon log
- genca Generate MITM CA certificate
- ua Show the impersonate user-agent list
- gt Generate config template file (toml format file)
- update Update the application
- help Print this message or the help of the given subcommand(s)
-
-Options:
- -h, --help Print help
- -V, --version Print version
-
-$ ninja run --help
-Run the HTTP server
-
-Usage: ninja run [OPTIONS]
-
-Options:
- -L, --level
- Log level (info/debug/warn/trace/error) [env: LOG=] [default: info]
- -C, --config
- Configuration file path (toml format file) [env: CONFIG=]
- -b, --bind
- Server bind address [env: BIND=] [default: 0.0.0.0:7999]
- --concurrent-limit
- Server Enforces a limit on the concurrent number of requests the underlying [default: 1024]
- --timeout
- Server/Client timeout (seconds) [default: 360]
- --connect-timeout
- Server/Client connect timeout (seconds) [default: 5]
- --tcp-keepalive
- Server/Client TCP keepalive (seconds) [default: 60]
- -H, --no-keepalive
- No TCP keepalive (Client) [env: NO_TCP_KEEPALIVE=]
- --pool-idle-timeout
- Keep the client alive on an idle socket with an optional timeout set [default: 90]
- -x, --proxies
- Client proxy, support multiple proxy, use ',' to separate, Format: proto|type
- Proto: all/api/auth/arkose, default: all
- Type: interface/proxy/ipv6 subnet,proxy type only support: socks5/http/https
- e.g. all|socks5://192.168.1.1:1080, api|10.0.0.1, auth|2001:db8::/32, http://192.168.1.1:1081 [env: PROXIES=]
- --enable-direct
- Enable direct connection [env: ENABLE_DIRECT=]
- -I, --impersonate-uas
- Impersonate User-Agent, separate multiple ones with "," [env: IMPERSONATE_UA=]
- --cookie-store
- Enabled Cookie Store [env: COOKIE_STORE=]
- --fastest-dns
- Use fastest DNS resolver [env: FASTEST_DNS=]
- --tls-cert
- TLS certificate file path [env: TLS_CERT=]
- --tls-key
- TLS private key file path (EC/PKCS8/RSA) [env: TLS_KEY=]
- --cf-site-key
- Cloudflare turnstile captcha site key [env: CF_SECRET_KEY=]
- --cf-secret-key
- Cloudflare turnstile captcha secret key [env: CF_SITE_KEY=]
- -A, --auth-key
- Login/Arkose/HAR Authentication Key [env: AUTH_KEY=]
- -P, --preauth-endpoint
- PreAuth cookie endpoint by Login [env: PREAUTH_ENDPOINT=]
- --enable-webui
- Enable WebUI [env: ENABLE_WEBUI=]
- -F, --enable-file-proxy
- Enable file endpoint proxy [env: ENABLE_FILE_PROXY=]
- -G, --enable-arkose-proxy
- Enable arkose token endpoint proxy [env: ENABLE_ARKOSE_PROXY=]
- -W, --visitor-email-whitelist
- Visitor email whitelist [env: VISITOR_EMAIL_WHITELIST=]
- --arkose-endpoint
- Arkose endpoint, e.g. https://client-api.arkoselabs.com
- -E, --arkose-gpt3-experiment
- Enable Arkose GPT-3.5 experiment
- -S, --arkose-gpt3-experiment-solver
- Enable Arkose GPT-3.5 experiment solver
- --arkose-har-dir
- About the browser HAR directory path requested by ArkoseLabs
- -s, --arkose-solver
- About ArkoseLabs solver platform [default: fcsrv]
- -k, --arkose-solver-key
- About the solver client key by ArkoseLabs
- --arkose-solver-endpoint
- About the solver client endpoint by ArkoseLabs
- --arkose-solver-limit
- About the solver submit multiple image limit by ArkoseLabs [default: 1]
- --arkose-solver-tguess-endpoint
- About the solver tguess endpoint by ArkoseLabs
- --arkose-solver-image-dir
- About the solver image store directory by ArkoseLabs
- -T, --tb-enable
- Enable token bucket flow limitation
- --tb-strategy
- Token bucket store strategy (mem/redb) [default: mem]
- --tb-capacity
- Token bucket capacity [default: 60]
- --tb-fill-rate
- Token bucket fill rate [default: 1]
- --tb-expired
- Token bucket expired (seconds) [default: 86400]
- -h, --help
- Print help
-```
-
-### Compile
-
-- Linux build, using an Ubuntu machine as an example:
-
-```shell
-apt install build-essential
-apt install cmake
-apt install libclang-dev
-
-git clone https://github.com/gngpp/ninja.git && cd ninja
-cargo build --release
-```
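-
-To produce a static `x86_64-unknown-linux-musl` binary like the released ones, the target can be built with cargo; this assumes the musl toolchain is installed (additional linker setup may be required):
-
-```shell
-rustup target add x86_64-unknown-linux-musl
-cargo build --release --target x86_64-unknown-linux-musl
-```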
-
-- OpenWrt Compile
-
-```shell
-cd package
-svn co https://github.com/gngpp/ninja/trunk/openwrt
-cd -
-make menuconfig # choose LUCI->Applications->luci-app-ninja
-make V=s
-```
diff --git a/doc/readme_zh.md b/doc/readme_zh.md
deleted file mode 100644
index 5105127e3..000000000
--- a/doc/readme_zh.md
+++ /dev/null
@@ -1,382 +0,0 @@
-
简体中文 | [English](https://github.com/gngpp/ninja/blob/main/doc/readme.md)
-
-如果项目对你有帮助,请考虑[捐赠支持](https://github.com/gngpp/gngpp/blob/main/SPONSOR.md#sponsor-my-open-source-works)项目持续维护,也可以付费获取咨询和技术支持服务。
-
-### 安装
-
-- #### 平台支持
-
- - `x86_64-unknown-linux-musl`
- - `aarch64-unknown-linux-musl`
- - `armv7-unknown-linux-musleabi`
- - `armv7-unknown-linux-musleabihf`
- - `arm-unknown-linux-musleabi`
- - `arm-unknown-linux-musleabihf`
- - `armv5te-unknown-linux-musleabi`
- - `i686-unknown-linux-gnu`
- - `i586-unknown-linux-gnu`
- - `x86_64-pc-windows-msvc`
- - `x86_64-apple-darwin`
- - `aarch64-apple-darwin`
-
-- #### Ubuntu(Other Linux)
-
- GitHub [Releases](https://github.com/gngpp/ninja/releases/latest) 中有预编译的 deb包,二进制文件,以Ubuntu为例:
-
-```shell
-wget https://github.com/gngpp/ninja/releases/download/v0.9.28/ninja-0.9.28-x86_64-unknown-linux-musl.tar.gz
-tar -xf ninja-0.9.28-x86_64-unknown-linux-musl.tar.gz
-mv ./ninja /bin/ninja
-
-# 在线更新版本
-ninja update
-
-# 前台运行进程
-ninja run
-
-# 后台运行进程
-ninja start
-
-# 停止后台进程
-ninja stop
-
-# 重启后台进程
-ninja restart
-
-# 查看后台进程状态
-ninja status
-
-# 查看后台进程日志
-ninja log
-
-# 生成配置文件模版,serve.toml
-ninja gt -o serve.toml
-
-# 指定配置文件模版运行,绕开繁琐的cli命令
-ninja (run/start/restart) -C serve.toml
-```
-
-- #### Docker
-
-> 镜像源支持`gngpp/ninja:latest`/`ghcr.io/gngpp/ninja:latest`
-
-```shell
-docker run --rm -it -p 7999:7999 --name=ninja \
- -e LOG=info \
- -v ~/.ninja:/root/.ninja \
- ghcr.io/gngpp/ninja:latest run
-```
-
-- Docker Compose
-
-> `CloudFlare Warp`你的地区不支持(China)请把它删掉,或者你的`VPS`IP可直连`OpenAI`,那么也可以删掉
-
-```yaml
-version: '3'
-
-services:
- ninja:
- image: gngpp/ninja:latest
- container_name: ninja
- restart: unless-stopped
- environment:
- - TZ=Asia/Shanghai
- - PROXIES=socks5://warp:10000
- command: run
- ports:
- - "8080:7999"
- depends_on:
- - warp
-
- warp:
- container_name: warp
- image: ghcr.io/gngpp/warp:latest
- restart: unless-stopped
-
- watchtower:
- container_name: watchtower
- image: containrrr/watchtower
- volumes:
- - /var/run/docker.sock:/var/run/docker.sock
- command: --interval 3600 --cleanup
- restart: unless-stopped
-
-```
-
-### ArkoseLabs
-
-发送`GPT-4/GPT-3.5/创建API-Key`对话需要`Arkose Token`作为参数发送
-
-1) 使用HAR
-
- 支持HAR特征池化,可同时上传多个HAR,使用轮询策略,下面是获取HAR文件的方法
- - 先登录到 `ChatGPT` 的 `GPT4` 提问界面,按下 `F12` 键,此时会打开浏览器的控制台,找到 `network` (如果你的控制台为中文,则显示为 `网络` )并左键点击,此时会切换到浏览器的网络抓包界面
- - 在控制台打开的情况下,发送一次 `GPT-4` 会话消息,然后在抓包界面找到 `filter` (如果你的控制台为中文,则显示为 `过滤` ),输入这个地址进行过滤 `https://tcr9i.chat.openai.com/fc/gt2/public_key/35536E1E-65B4-4D96-9D97-6ADB7EFF8147`
- - 过滤出来的至少会有一条记录,随机选择一条,然后下载这个接口的HAR日志记录文件,具体操作是:右键点击这条记录,然后找到 `Save all as HAR with content` (如果你的控制台为中文,则显示为 `以 HAR 格式保存所有内容` )
- - 使用启动参数 `--arkose-har-dir` 指定HAR目录路径使用(不指定路径则使用默认路径`~/.ninja`,可直接上传更新HAR),如果你使用docker,并且不指定目录,只需要映射`~/.ninja`工作目录,支持WebUI上传更新HAR,请求路径:`/har/upload`,可选上传身份验证参数:`--auth-key`
-
-
-2) 使用 [Fcsrv](https://github.com/gngpp/fcsrv) / [YesCaptcha](https://yescaptcha.com/i/1Cc5i4) / [CapSolver](https://dashboard.capsolver.com/passport/register?inviteCode=y7CtB_a-3X6d)
-
-- `Fcsrv` / `YesCaptcha` / `CapSolver`推荐搭配HAR使用,出验证码则调用解析器处理
-
-平台进行验证码解析,启动参数`--arkose-solver`选择平台(默认使用`Fcsrv`),`--arkose-solver-key` 填写`Client Key`,选择自定义的提交节点URL,例如:`--arkose-solver-endpoint http://localhost:8000/task`,`Fcsrv`/`YesCaptcha`/`CapSolver`都支持,`Fcsrv`/`YesCaptcha`/`CapSolver`都支持,`Fcsrv`/`YesCaptcha`/`CapSolver`都支持,重要的事情说三遍。
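-
-综合起来,HAR特征池与解析器可以这样在命令行组合使用(Client Key为占位示例):
-
-```shell
-# HAR特征池为主,自建Fcsrv实例作为兜底解析器
-ninja run \
-  --arkose-har-dir ~/har \
-  --arkose-solver fcsrv \
-  --arkose-solver-key "$CLIENT_KEY" \
-  --arkose-solver-endpoint http://localhost:8000/task
-```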
-
-目前OpenAI已经更新`登录`需要验证`Arkose Token`,解决方式同`GPT-4`,填写启动参数指定HAR文件`--arkose-auth-har-dir`。创建API-Key需要上传Platform相关的HAR特征文件,获取方式同上。
-
-`OpenAI`取消对`GPT-3.5`进行`Arkose`验证,可以不上传HAR特征文件使用(已上传的不影响),兼容后续可能会再次开启`Arkose`验证,需要加上启动参数`--arkose-gpt3-experiment`进行开启`GPT-3.5`模型`Arkose`验证处理,WebUI不受影响。如果遇到`418 I'm a teapot`,可以开启`--arkose-gpt3-experiment`,同时需要上传`HAR`特征,如果没有`GPT-3.5`的特征,`GPT-4`的特征也可以使用,如果还不行,则尝试开启`--arkose-gpt3-experiment-solver`,可能会使用第三方平台解决验证码。
-
-> 以上是使用`API`的前提,使用`WebUI`不需要考虑
-
-### Http 服务
-
-#### 公开接口, `*` 表示任意`URL`后缀
-
-- ChatGPT-API
- - `/public-api/*`
- - `/backend-api/*`
-
-- OpenAI-API
- - `/v1/*`
-
-- Platform-API
- - `/dashboard/*`
-
-- ChatGPT-To-API
- - `/v1/chat/completions`
 > 关于`ChatGPT`转`API`使用方法,`AccessToken`当`API Key`使用
-
-- Files-API
- - `/files/*`
- > 图片和文件上下传API代理,`/backend-api/files`接口返回的API已经转为`/files/*`
-
-- Arkose-API
- - `/auth/arkose_token/:pk`
- > 其中pk为arkose类型的ID,比如请求GPT4的Arkose,`/auth/arkose_token/35536E1E-65B4-4D96-9D97-6ADB7EFF8147`,若`GPT-4`开始强制blob参数,需要带上`AccessToken` -> `/auth/arkose_token/35536E1E-65B4-4D96-9D97-6ADB7EFF8147?blob=your_access_token`
-
-- Authorization
-
- > 除了登录,都使用`Authorization: Bearer xxxx`,[Python Example](https://github.com/gngpp/ninja/blob/main/doc/authorization.md)
-
- - 登录: `/auth/token`,表单`option`可选参数,默认为`web`登录,返回`AccessToken`与`Session`;参数为`apple`/`platform`,返回`AccessToken`与`RefreshToken`。其中`Apple`平台`ChatGPT App`登录方式,需要提供`preauth_cookie`端点,启动参数设置`--preauth-endpoint https://example.com/api`,推荐使用[xyhelper](https://github.com/xyhelper)提供的免费端点: `https://tcr9i.xyhelper.cn/auth/preauth`
- - 刷新 `RefreshToken`: `POST /auth/refresh_token`,支持`platform`/`apple`撤销
- - 撤销 `RefreshToken`: `POST /auth/revoke_token`, 支持`platform`/`apple`撤销
- - 刷新 `Session`: `POST /auth/refresh_session`,使用`web`登录返回的`Session`刷新
- - 获取 `Sess token`: `POST /auth/sess_token`,使用`platform`的`AccessToken`获取
- - 获取 `Billing`: `GET /auth/billing`,使用`sess token`获取
-
-#### API文档
-
-- Platform API [doc](https://platform.openai.com/docs/api-reference)
-- Backend API [doc1](https://github.com/gngpp/ninja/blob/main/doc/rest.http) or [doc2](https://github.com/gngpp/ninja/blob/main/doc/python-requests.ipynb)
- > 例子只是部分,根据`/backend-api/*`代理了官方`API`
-
-#### 基本服务
-
-- ChatGPT WebUI
-- 公开`ChatGPT-API`/`OpenAI-API`代理
-- `API`前缀与官方一致
-- `ChatGPT` 转 `API`
-- 可接入第三方客户端
-- 可接入IP代理池,提高并发
-- 支持获取RefreshToken
-- 支持以HAR格式文件特征池
-
-#### 参数说明
-
-> **默认工作目录`~/.ninja`**
-
-- `--level`,环境变量 `LOG`,日志级别: 默认info
-- `--bind`,环境变量 `BIND`, 服务监听地址: 默认0.0.0.0:7999,
-- `--tls-cert`,环境变量 `TLS_CERT`,TLS证书公钥,支持格式: EC/PKCS8/RSA
-- `--tls-key`,环境变量 `TLS_KEY`,TLS证书私钥
-- `--enable-webui`, 默认关闭自带的WebUI,使用此参数开启,必须设置`--arkose-endpoint`,如果你的出口访问域名是`example.com`,那么你需要设置`--arkose-endpoint https://example.com`
-- `--enable-file-proxy`,环境变量`ENABLE_FILE_PROXY`,开启文件上下传API代理
-- `--enable-arkose-proxy`,开启获取`Arkose Token`端点
-- `--enable-direct`,开启直连,将绑定`interface`出口的IP的加入代理池
-- `--proxies`,代理,支持代理池,多个代理使用`,`隔开,格式: protocol://user:pass@ip:port
-- `--no-keepalive` 关闭Http Client Tcp保活
-- `--fastest-dns` 使用内置最快DNS组
-- `--visitor-email-whitelist`,白名单限制,限制针对AccessToken,参数为邮箱,多个邮箱用`,`隔开
-- `--cookie-store`,开启Cookie Store
-- `--cf-site-key`,Cloudflare turnstile captcha site key
-- `--cf-secret-key`,Cloudflare turnstile captcha secret key
-- `--arkose-endpoint`,ArkoseLabs endpoint,例如:
-- `--arkose-har-dir`,ArkoseLabs HAR特征文件目录路径,例如: `~/har`,不指定路径则使用默认路径`~/.ninja`
-- `--arkose-solver`,ArkoseLabs solver platform,例如: yescaptcha
-- `--arkose-solver-key`,ArkoseLabs solver client key
-- `--arkose-gpt3-experiment`,开启GPT-3.5 ArkoseLabs实验
-- `--arkose-gpt3-experiment-solver`,开启GPT-3.5 ArkoseLabs实验,需要上传HAR特征文件,并且会校验ArkoseToken正确性
-- `--impersonate-uas`,可选随机模拟UA,多个使用`,`隔开,详细请看命令手册
-- `--auth-key`,`登录`/`HAR Manager`/`Arkose`的`API`认证`Key`,使用`Authorization Bearer`格式发送
-- `--preauth-endpoint`, 启用`Apple`平台`ChatGPT App`登录的`preauth_cookie`端点
-
-##### 代理高阶用法
-
-分代理内置协议和代理类型,内置协议: `all/api/auth/arkose`,其中`all`针对所有客户端,`api`针对所有`OpenAI API`,`auth`针对授权/登录,`arkose`针对ArkoseLabs;代理类型: `interface/proxy/ipv6_subnet`,其中`interface`表示绑定的出口`IP`地址,`proxy`表示上游代理协议: `http/https/socks5/socks5h`,`ipv6_subnet`表示用Ipv6子网段内随机IP地址作为代理。格式为`proto|proxy`,例子: **`all|socks5://192.168.1.1:1080, api|10.0.0.1, auth|2001:db8::/32, http://192.168.1.1:1081`**,不带内置协议,协议默认为`all`。
-
-> 关于代理`http/https/socks5/socks5h`,只有使用`socks5h`协议时,DNS解析才会走代理解析,否则将使用`本地`/`内置`DNS解析
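-
-上述代理语法的一个具体示例(地址均为占位):
-
-```shell
-# 所有流量走socks5h,auth走独立上游代理,同时开启直连
-ninja run --enable-direct \
-  --proxies "all|socks5h://user:pass@192.168.1.1:1080,auth|http://192.168.1.2:3128"
-```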
-
-##### 代理使用规则
-
-1) 存在`interface` \ `proxy` \ `ipv6_subnet`
-
-当开启`--enable-direct`,那么将使用`proxy` + `interface`作为代理池;未开启`--enable-direct`,只有`proxy`数量大于等于2才使用`proxy`,否则将使用 `ipv6_subnet`作为代理池,`interface`作为fallback地址。
-
-2) 存在`interface` \ `proxy`
-
-当开启`--enable-direct`,那么将使用`proxy` + `interface`作为代理池;未开启`--enable-direct`,只使用`proxy`作为代理池。
-
-3) 存在`proxy` \ `ipv6_subnet`
-
-规则同(1),只是没有`interface`作为fallback地址。
-
-4) 存在`interface` \ `ipv6_subnet`
-
-当开启`--enable-direct`,同时`interface`数量大于等于2时,`interface`作为代理池;未开启`--enable-direct`,将使用`ipv6_subnet`作为代理池,`interface`作为fallback地址。
-
-5) 存在`proxy`
-
-当开启`--enable-direct`,使用`proxy` + 默认直连作为代理池;未开启`--enable-direct`,只使用`proxy`作为代理池
-
-6) 存在`ipv6_subnet`
-
-无论是否开启`--enable-direct`,都将使用`ipv6_subnet`作为代理池
-
-### 命令手册
-
-```shell
-$ ninja --help
-Reverse engineered ChatGPT proxy
-
-Usage: ninja [COMMAND]
-
-Commands:
- run Run the HTTP server
- stop Stop the HTTP server daemon
- start Start the HTTP server daemon
- restart Restart the HTTP server daemon
- status Status of the Http server daemon process
- log Show the Http server daemon log
- genca Generate MITM CA certificate
- ua Show the impersonate user-agent list
- gt Generate config template file (toml format file)
- update Update the application
- help Print this message or the help of the given subcommand(s)
-
-Options:
- -h, --help Print help
- -V, --version Print version
-
-$ ninja run --help
-Run the HTTP server
-
-Usage: ninja run [OPTIONS]
-
-Options:
- -L, --level
- Log level (info/debug/warn/trace/error) [env: LOG=] [default: info]
- -C, --config
- Configuration file path (toml format file) [env: CONFIG=]
- -b, --bind
- Server bind address [env: BIND=] [default: 0.0.0.0:7999]
- --concurrent-limit
- Server Enforces a limit on the concurrent number of requests the underlying [default: 1024]
- --timeout
- Server/Client timeout (seconds) [default: 360]
- --connect-timeout
- Server/Client connect timeout (seconds) [default: 5]
- --tcp-keepalive
- Server/Client TCP keepalive (seconds) [default: 60]
- -H, --no-keepalive
- No TCP keepalive (Client) [env: NO_TCP_KEEPALIVE=]
- --pool-idle-timeout
- Keep the client alive on an idle socket with an optional timeout set [default: 90]
- -x, --proxies
- Client proxy, support multiple proxy, use ',' to separate, Format: proto|type
- Proto: all/api/auth/arkose, default: all
- Type: interface/proxy/ipv6 subnet,proxy type only support: socks5/http/https
- e.g. all|socks5://192.168.1.1:1080, api|10.0.0.1, auth|2001:db8::/32, http://192.168.1.1:1081 [env: PROXIES=]
- --enable-direct
- Enable direct connection [env: ENABLE_DIRECT=]
- -I, --impersonate-uas
- Impersonate User-Agent, separate multiple ones with "," [env: IMPERSONATE_UA=]
- --cookie-store
- Enabled Cookie Store [env: COOKIE_STORE=]
- --fastest-dns
- Use fastest DNS resolver [env: FASTEST_DNS=]
- --tls-cert
- TLS certificate file path [env: TLS_CERT=]
- --tls-key
- TLS private key file path (EC/PKCS8/RSA) [env: TLS_KEY=]
- --cf-site-key
- Cloudflare turnstile captcha site key [env: CF_SECRET_KEY=]
- --cf-secret-key
- Cloudflare turnstile captcha secret key [env: CF_SITE_KEY=]
- -A, --auth-key
- Login/Arkose/HAR Authentication Key [env: AUTH_KEY=]
- -P, --preauth-endpoint
- PreAuth cookie endpoint by Login [env: PREAUTH_ENDPOINT=]
- --enable-webui
- Enable WebUI [env: ENABLE_WEBUI=]
- -F, --enable-file-proxy
- Enable file endpoint proxy [env: ENABLE_FILE_PROXY=]
- -G, --enable-arkose-proxy
- Enable arkose token endpoint proxy [env: ENABLE_ARKOSE_PROXY=]
- -W, --visitor-email-whitelist
- Visitor email whitelist [env: VISITOR_EMAIL_WHITELIST=]
- --arkose-endpoint
- Arkose endpoint, e.g. https://client-api.arkoselabs.com
- -E, --arkose-gpt3-experiment
- Enable Arkose GPT-3.5 experiment
- -S, --arkose-gpt3-experiment-solver
- Enable Arkose GPT-3.5 experiment solver
- --arkose-har-dir
- About the browser HAR directory path requested by ArkoseLabs
- -s, --arkose-solver
- About ArkoseLabs solver platform [default: fcsrv]
- -k, --arkose-solver-key
- About the solver client key by ArkoseLabs
- --arkose-solver-endpoint
- About the solver client endpoint by ArkoseLabs
- --arkose-solver-limit
- About the solver submit multiple image limit by ArkoseLabs [default: 1]
- --arkose-solver-tguess-endpoint
- About the solver tguess endpoint by ArkoseLabs
- --arkose-solver-image-dir
- About the solver image store directory by ArkoseLabs
- -T, --tb-enable
- Enable token bucket flow limitation
- --tb-strategy
- Token bucket store strategy (mem/redb) [default: mem]
- --tb-capacity
- Token bucket capacity [default: 60]
- --tb-fill-rate
- Token bucket fill rate [default: 1]
- --tb-expired
- Token bucket expired (seconds) [default: 86400]
- -h, --help
- Print help
-```
-
-### 编译
-
-- Linux编译,Ubuntu机器为例:
-
-```shell
-apt install build-essential
-apt install cmake
-apt install libclang-dev
-
-git clone https://github.com/gngpp/ninja.git && cd ninja
-cargo build --release
-```
-
-- OpenWrt 编译
-
-```shell
-cd package
-svn co https://github.com/gngpp/ninja/trunk/openwrt
-cd -
-make menuconfig # choose LUCI->Applications->luci-app-ninja
-make V=s
-```