diff --git a/README.md b/README.md index 884573e2..4d31086c 100644 --- a/README.md +++ b/README.md @@ -17,11 +17,16 @@ Also, follow [@akashnet\_](https://twitter.com/akashnet_) to stay in the loop wi ### AI - [Alpaca.cpp](alpaca-cpp) +- [Auto-GPT](auto-gpt) +- [BabyAGI](babyagi) +- [BabyAGI-UI](babyagi-ui) +- [ChatChat](chatchat) - [ChatGPT Self-Hosted Chat](ai-chat-app) - [Daila](daila) - [GPT4ALL](gpt4all) - [Serge](serge) - [Stable Diffusion](stable-diffusion-ui) +- [Terminal GPT](tgpt) ### Blogging diff --git a/auto-gpt/Dockerfile b/auto-gpt/Dockerfile new file mode 100644 index 00000000..f2bfe79f --- /dev/null +++ b/auto-gpt/Dockerfile @@ -0,0 +1,44 @@ +# 'dev' or 'release' container build +ARG BUILD_TYPE=dev + +# Use an official Python base image from the Docker Hub +FROM python:3.10-slim AS autogpt-base + +# Install browsers +RUN apt-get update && apt-get install -y \ + chromium-driver firefox-esr \ + ca-certificates + +# Install utilities +RUN apt-get install -y curl jq wget git + +# Set environment variables +ENV PIP_NO_CACHE_DIR=yes \ + PYTHONUNBUFFERED=1 \ + PYTHONDONTWRITEBYTECODE=1 + +# Install the required python packages globally +ENV PATH="$PATH:/root/.local/bin" +COPY requirements.txt . + +# Set the entrypoint +#ENTRYPOINT ["python", "-m", "autogpt"] +RUN wget https://github.com/yudai/gotty/releases/download/v2.0.0-alpha.3/gotty_2.0.0-alpha.3_linux_amd64.tar.gz +RUN tar -zxvf gotty_2.0.0-alpha.3_linux_amd64.tar.gz ; chmod +x gotty ; rm -rf gotty_2.0.0-alpha.3_linux_amd64.tar.gz + +ENTRYPOINT ["/gotty", "-w", "--random-url-length", "16", "python", "-m", "autogpt"] + +# dev build -> include everything +FROM autogpt-base as autogpt-dev +RUN pip install --no-cache-dir -r requirements.txt +WORKDIR /app +ONBUILD COPY . 
./ + +# release build -> include bare minimum +FROM autogpt-base as autogpt-release +RUN sed -i '/Items below this point will not be included in the Docker Image/,$d' requirements.txt && \ + pip install --no-cache-dir -r requirements.txt +WORKDIR /app +ONBUILD COPY autogpt/ ./autogpt + +FROM autogpt-${BUILD_TYPE} AS auto-gpt \ No newline at end of file diff --git a/auto-gpt/README.md b/auto-gpt/README.md new file mode 100644 index 00000000..eaa06aa2 --- /dev/null +++ b/auto-gpt/README.md @@ -0,0 +1,153 @@ +# Auto-GPT: An Autonomous GPT-4 Experiment +[![Official Website](https://img.shields.io/badge/Official%20Website-agpt.co-blue?style=flat&logo=world&logoColor=white)](https://agpt.co) +[![Unit Tests](https://img.shields.io/github/actions/workflow/status/Significant-Gravitas/Auto-GPT/ci.yml?label=unit%20tests)](https://github.com/Significant-Gravitas/Auto-GPT/actions/workflows/ci.yml) +[![Discord Follow](https://dcbadge.vercel.app/api/server/autogpt?style=flat)](https://discord.gg/autogpt) +[![GitHub Repo stars](https://img.shields.io/github/stars/Significant-Gravitas/auto-gpt?style=social)](https://github.com/Significant-Gravitas/Auto-GPT/stargazers) +[![Twitter Follow](https://img.shields.io/twitter/follow/siggravitas?style=social)](https://twitter.com/SigGravitas) + +## πŸ’‘ Get help - [Q&A](https://github.com/Significant-Gravitas/Auto-GPT/discussions/categories/q-a) or [Discord πŸ’¬](https://discord.gg/autogpt) + +
+ +### πŸ”΄ πŸ”΄ πŸ”΄ Urgent: USE `stable` not `master` πŸ”΄ πŸ”΄ πŸ”΄ + +**Download the latest `stable` release from here: https://github.com/Significant-Gravitas/Auto-GPT/releases/latest.** +The `master` branch may often be in a **broken** state. + +
+ + +Auto-GPT is an experimental open-source application showcasing the capabilities of the GPT-4 language model. This program, driven by GPT-4, chains together LLM "thoughts" to autonomously achieve whatever goal you set. As one of the first examples of GPT-4 running fully autonomously, Auto-GPT pushes the boundaries of what is possible with AI. + +

Demo April 16th 2023

+ +https://user-images.githubusercontent.com/70048414/232352935-55c6bf7c-3958-406e-8610-0913475a0b05.mp4 + +Demo made by Blake Werlinger + +

πŸ’– Help Fund Auto-GPT's Development πŸ’–

+

+If you can spare a coffee, you can help cover the costs of developing Auto-GPT and push the boundaries of fully autonomous AI! +Your support is greatly appreciated. Development of this free, open-source project is made possible by all the contributors and sponsors. If you'd like to sponsor this project and have your avatar or company logo appear below, click here. +

+ + +

+

Zilliz · Roost.AI · NucleiAI · Algohash · TypingMind
+
+ + + +

robinicus  0xmatchmaker  jazgarewal  MayurVirkar  avy-ai  TheStoneMX  goldenrecursion  MatthewAgs  eelbaz  rapidstartup  gklab  VoiceBeer  DailyBotHQ  lucas-chu  knifour  refinery1  st617  neodenit  CrazySwami  Heitechsoft  RealChrisSean  abhinav-pandey29  Explorergt92  SparkplanAI  crizzler  kreativai  omphos  Jahmazon  tjarmain  ddtarazona  saten-private  anvarazizov  lazzacapital  m  Pythagora-io  Web3Capital  toverly1  digisomni  concreit  LeeRobidas  Josecodesalot  dexterityx  rickscode  Brodie0  FSTatSBS  nocodeclarity  jsolejr  amr-elsehemy  RawBanana  horazius  SwftCoins  tob-le-rone  RThaweewat  jun784  joaomdmoura  rejunity  mathewhawkins  caitlynmeeks  jd3655  Odin519Tomas  DataMetis  webbcolton  rocks6  cxs  fruition  nnkostov  morcos  pingbotan  maxxflyer  tommi-joentakanen  hunteraraujo  projectonegames  tullytim  comet-ml  thepok  prompthero  sunchongren  neverinstall  josephcmiller2  yx3110  MBassi91  SpacingLily  arthur-x88  ciscodebs  christian-gheorghe  EngageStrategies  jondwillis  Cameron-Fulton  AryaXAI  AuroraHolding  Mr-Bishop42  doverhq  johnculkin  marv-technology  ikarosai  ColinConwell  humungasaurus  terpsfreak  iddelacruz  thisisjeffchen  nicoguyon  arjunb023  Nalhos  belharethsami  Mobivs  txtr99  ntwrite  founderblocks-sils  kMag410  angiaou  garythebat  lmaugustin  shawnharmsen  clortegah  MetaPath01  sekomike910  MediConCenHK  svpermari0  jacobyoby  turintech  allenstecat  CatsMeow492  tommygeee  judegomila  cfarquhar  ZoneSixGames  kenndanielso  CrypteorCapital  sultanmeghji  jenius-eagle  josephjacks  pingshian0131  AIdevelopersAI  ternary5  ChrisDMT  AcountoOU  chatgpt-prompts  Partender  Daniel1357  KiaArmani  zkonduit  fabrietech  scryptedinc  coreyspagnoli  AntonioCiolino  Dradstone  CarmenCocoa  bentoml  merwanehamadi  vkozacek  ASmithOWL  tekelsey  GalaxyVideoAgency  wenfengwang  rviramontes  indoor47  ZERO-A-ONE  

+ + + +## πŸš€ Features + +- 🌐 Internet access for searches and information gathering +- πŸ’Ύ Long-term and short-term memory management +- 🧠 GPT-4 instances for text generation +- πŸ”— Access to popular websites and platforms +- πŸ—ƒοΈ File storage and summarization with GPT-3.5 +- πŸ”Œ Extensibility with Plugins + +## Quickstart + +0. Check out the [wiki](https://github.com/Significant-Gravitas/Auto-GPT/wiki) +1. Get an OpenAI [API Key](https://platform.openai.com/account/api-keys) +2. Download the [latest release](https://github.com/Significant-Gravitas/Auto-GPT/releases/latest) +3. Follow the [installation instructions][docs/setup] +4. Configure any additional features you want, or install some [plugins][docs/plugins] +5. [Run][docs/usage] the app + +Please see the [documentation][docs] for full setup instructions and configuration options. + +[docs]: https://docs.agpt.co/ + +## πŸ“– Documentation +* [βš™οΈ Setup][docs/setup] +* [πŸ’» Usage][docs/usage] +* [πŸ”Œ Plugins][docs/plugins] +* Configuration + * [πŸ” Web Search](https://docs.agpt.co/configuration/search/) + * [🧠 Memory](https://docs.agpt.co/configuration/memory/) + * [πŸ—£οΈ Voice (TTS)](https://docs.agpt.co/configuration/voice/) + * [πŸ–ΌοΈ Image Generation](https://docs.agpt.co/configuration/imagegen/) + +[docs/setup]: https://docs.agpt.co/setup/ +[docs/usage]: https://docs.agpt.co/usage/ +[docs/plugins]: https://docs.agpt.co/plugins/ + +## ⚠️ Limitations + +This experiment aims to showcase the potential of GPT-4 but comes with some limitations: + +1. Not a polished application or product, just an experiment +2. May not perform well in complex, real-world business scenarios. In fact, if it actually does, please share your results! +3. Quite expensive to run, so set and monitor your API key limits with OpenAI! + +## πŸ›‘ Disclaimer + +This project, Auto-GPT, is an experimental application and is provided "as-is" without any warranty, express or implied. 
By using this software, you agree to assume all risks associated with its use, including but not limited to data loss, system failure, or any other issues that may arise. + +The developers and contributors of this project do not accept any responsibility or liability for any losses, damages, or other consequences that may occur as a result of using this software. You are solely responsible for any decisions and actions taken based on the information provided by Auto-GPT. + +**Please note that the use of the GPT-4 language model can be expensive due to its token usage.** By utilizing this project, you acknowledge that you are responsible for monitoring and managing your own token usage and the associated costs. It is highly recommended to check your OpenAI API usage regularly and set up any necessary limits or alerts to prevent unexpected charges. + +As an autonomous experiment, Auto-GPT may generate content or take actions that are not in line with real-world business practices or legal requirements. It is your responsibility to ensure that any actions or decisions made based on the output of this software comply with all applicable laws, regulations, and ethical standards. The developers and contributors of this project shall not be held responsible for any consequences arising from the use of this software. + +By using Auto-GPT, you agree to indemnify, defend, and hold harmless the developers, contributors, and any affiliated parties from and against any and all claims, damages, losses, liabilities, costs, and expenses (including reasonable attorneys' fees) arising from your use of this software or your violation of these terms. + +## 🐦 Connect with Us on Twitter + +Stay up-to-date with the latest news, updates, and insights about Auto-GPT by following our Twitter accounts. Engage with the developer and the AI's own account for interesting discussions, project updates, and more. 
+ +- **Developer**: Follow [@siggravitas](https://twitter.com/siggravitas) for insights into the development process, project updates, and related topics from the creator of Entrepreneur-GPT. +- **Entrepreneur-GPT**: Join the conversation with the AI itself by following [@En_GPT](https://twitter.com/En_GPT). Share your experiences, discuss the AI's outputs, and engage with the growing community of users. + +We look forward to connecting with you and hearing your thoughts, ideas, and experiences with Auto-GPT. Join us on Twitter and let's explore the future of AI together! + +

+ + Star History Chart + +

diff --git a/auto-gpt/deploy.yaml b/auto-gpt/deploy.yaml new file mode 100644 index 00000000..6eb20844 --- /dev/null +++ b/auto-gpt/deploy.yaml @@ -0,0 +1,109 @@ +--- +version: "2.0" + +services: + auto-gpt: + image: cryptoandcoffee/akash-auto-gpt:1 + expose: + - port: 8080 + as: 80 + proto: tcp + to: + - global: true + env: + - "EXECUTE_LOCAL_COMMANDS=True" + - "RESTRICT_TO_WORKSPACE=False" + - "USER_AGENT=Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/83.0.4103.97 Safari/537.36" + #- "AI_SETTINGS_FILE=ai_settings.yaml" + - "AUTHORISE_COMMAND_KEY=y" + - "EXIT_KEY=n" + #- "DISABLED_COMMAND_CATEGORIES=autogpt.commands.analyze_code,autogpt.commands.execute_code,autogpt.commands.git_operations,autogpt.commands.improve_code,autogpt.commands.write_tests" + - "OPENAI_API_KEY=your-openai-api-key" + - "TEMPERATURE=0.5" + - "USE_AZURE=False" + - "SMART_LLM_MODEL=gpt-4" + - "FAST_LLM_MODEL=gpt-3.5-turbo" + - "FAST_TOKEN_LIMIT=4000" + - "SMART_TOKEN_LIMIT=8000" + - "EMBEDDING_MODEL=text-embedding-ada-002" + - "EMBEDDING_TOKENIZER=cl100k_base" + - "EMBEDDING_TOKEN_LIMIT=8191" + - "MEMORY_BACKEND=local" + - "MEMORY_INDEX=auto-gpt" + - "PINECONE_API_KEY=your-pinecone-api-key" + - "PINECONE_ENV=your-pinecone-region" + - "REDIS_HOST=localhost" + - "REDIS_PORT=6379" + - "REDIS_PASSWORD=" + - "WIPE_REDIS_ON_START=True" + - "WEAVIATE_HOST=127.0.0.1" + - "WEAVIATE_PORT=8080" + - "WEAVIATE_PROTOCOL=http" + - "USE_WEAVIATE_EMBEDDED=False" + - "WEAVIATE_EMBEDDED_PATH=/home/me/.local/share/weaviate" + - "WEAVIATE_USERNAME=" + - "WEAVIATE_PASSWORD=" + - "WEAVIATE_API_KEY=" + - "MILVUS_ADDR=localhost:19530" + - "MILVUS_USERNAME=" + - "MILVUS_PASSWORD=" + - "MILVUS_SECURE=" + - "MILVUS_COLLECTION=autogpt" + - "IMAGE_PROVIDER=dalle" + - "IMAGE_SIZE=256" + - "HUGGINGFACE_IMAGE_MODEL=CompVis/stable-diffusion-v1-4" + - "HUGGINGFACE_API_TOKEN=your-huggingface-api-token" + - "SD_WEBUI_AUTH=" + - "SD_WEBUI_URL=http://127.0.0.1:7860" + - 
"HUGGINGFACE_AUDIO_TO_TEXT_MODEL=facebook/wav2vec2-base-960h" + - "GITHUB_API_KEY=github_pat_123" + - "GITHUB_USERNAME=your-github-username" + - "HEADLESS_BROWSER=True" + - "USE_WEB_BROWSER=chrome" + - "BROWSE_CHUNK_MAX_LENGTH=3000" + - "BROWSE_SPACY_LANGUAGE_MODEL=en_core_web_sm" + - "GOOGLE_API_KEY=your-google-api-key" + - "CUSTOM_SEARCH_ENGINE_ID=your-custom-search-engine-id" + - "USE_MAC_OS_TTS=False" + - "USE_BRIAN_TTS=False" + - "ELEVENLABS_API_KEY=your-elevenlabs-api-key" + - "ELEVENLABS_VOICE_1_ID=your-voice-id-1" + - "ELEVENLABS_VOICE_2_ID=your-voice-id-2" + - "TW_CONSUMER_KEY=" + - "TW_CONSUMER_SECRET=" + - "TW_ACCESS_TOKEN=" + - "TW_ACCESS_TOKEN_SECRET=" + - "ALLOWLISTED_PLUGINS=" + - "DENYLISTED_PLUGINS=" + - "CHAT_MESSAGES_ENABLED=False" +profiles: + compute: + auto-gpt: + resources: + cpu: + units: 4 + memory: + size: 4.75Gi #Increase this for larger local memory, can be lower if used external vector for memory. + storage: + size: 16Gi + placement: + akash: + ####################################################### + #Keep this section to deploy on trusted providers + signedBy: + anyOf: + - "akash1365yvmc4s7awdyj3n2sav7xfx76adc6dnmlx63" + - "akash18qa2a2ltfyvkyj0ggj3hkvuj6twzyumuaru9s4" + ####################################################### + #Remove this section to deploy on untrusted providers + #Beware* You may have deployment, security, or other issues on untrusted providers + #https://docs.akash.network/providers/akash-audited-attributes + pricing: + auto-gpt: + denom: uakt + amount: 10000 #Keep high to show all bids +deployment: + auto-gpt: + akash: + profile: auto-gpt + count: 1 diff --git a/babyagi-ui/Dockerfile b/babyagi-ui/Dockerfile new file mode 100644 index 00000000..31e67290 --- /dev/null +++ b/babyagi-ui/Dockerfile @@ -0,0 +1,22 @@ +FROM node:19 + +WORKDIR /usr/src/app + +COPY package*.json ./ + +RUN npm ci + +COPY . . 
+ +RUN npm run build + +ENV OPENAI_API_KEY="" +ENV PINECONE_API_KEY="" +ENV PINECONE_ENVIRONMENT="" +ENV NEXT_PUBLIC_TABLE_NAME="baby-agi-test-table" +ENV NEXT_PUBLIC_USE_USER_API_KEY="false" +ENV SEARP_API_KEY="" + +EXPOSE 3000 + +ENTRYPOINT [ "npm", "start" ] diff --git a/babyagi-ui/README.md b/babyagi-ui/README.md new file mode 100644 index 00000000..7a26a16c --- /dev/null +++ b/babyagi-ui/README.md @@ -0,0 +1,75 @@ +# BabyAGI UI 👶🤖🖥️ + +BabyAGI UI is designed to make it easier to run and develop with [babyagi](https://github.com/yoheinakajima/babyagi) in a web app, like ChatGPT. +It is a port of [babyagi](https://github.com/yoheinakajima/babyagi) built with [Langchain.js](https://github.com/hwchase17/langchainjs) and a web user interface. + +[Demo](https://twitter.com/miiura/status/1653026609606320130) + +## 🧰 Stack + +- [Next.js](https://nextjs.org/) +- [Pinecone](https://www.pinecone.io/) +- [LangChain.js](https://github.com/hwchase17/langchainjs) +- [Tailwind CSS](https://tailwindcss.com/) +- [Radix UI](https://www.radix-ui.com/) + +## 🚗 Roadmap + +- [x] The BabyAGI can search and scrape the web. ([🐝 BabyBeeAGI](https://twitter.com/yoheinakajima/status/1652732735344246784)) +- [x] Exporting Execution Results +- [x] Execution history +- [x] Faster speeds and fewer errors. ([😺 BabyCatAGI](https://twitter.com/yoheinakajima/status/1657448504112091136)) +- [ ] Display the current task and task list +- [ ] i18n support +- [ ] User feedback +- [ ] Other LLM models support + +and more ... + +## 👉 Getting Started + +1. Clone the repository + +```sh +git clone https://github.com/miurla/babyagi-ui +``` + +2. Go to the project folder + +```sh +cd babyagi-ui +``` + +3. Install packages with npm + +```sh +npm install +``` + +4. Set up your .env file and set the variables. + - You need to create an index in advance with [Pinecone](https://www.pinecone.io/). 
+ - [Reference setting](./public/pinecone-setup.png) + - Set your SerpAPI Key, if you want to use the search tool with BabyBeeAGI. + +```sh +cp .env.example .env +``` + +5. Run the project + +```sh +npm run dev +``` + +## ⚠️ Warning + +This script is designed to be run continuously as part of a task management system. Running this script continuously can result in high API usage, so please use it responsibly. Additionally, the script requires the OpenAI API to be set up correctly, so make sure you have set up the API before running the script. + +[original](https://github.com/yoheinakajima/babyagi#warning) + +## Credit + +### BabyAGI + +- Github: https://github.com/yoheinakajima/babyagi +- Author: [@yoheinakajima](https://github.com/yoheinakajima) diff --git a/babyagi-ui/deploy.yaml b/babyagi-ui/deploy.yaml new file mode 100644 index 00000000..406cc311 --- /dev/null +++ b/babyagi-ui/deploy.yaml @@ -0,0 +1,50 @@ +--- +version: "2.0" + +services: + babyagi-ui: + image: cryptoandcoffee/akash-babyagi-ui:1 + expose: + - port: 3000 + as: 80 + proto: tcp + to: + - global: true + env: + - "OPENAI_API_KEY=" + - "PINECONE_API_KEY=" + - "PINECONE_ENVIRONMENT=" + - "NEXT_PUBLIC_TABLE_NAME=baby-agi-test-table" + - "NEXT_PUBLIC_USE_USER_API_KEY=false" + - "SEARP_API_KEY=" +profiles: + compute: + babyagi-ui: + resources: + cpu: + units: 8 + memory: + size: 4.75Gi #Increase this for larger local memory, can be lower if used external vector for memory. 
+ storage: + size: 16Gi + placement: + akash: + ####################################################### + #Keep this section to deploy on trusted providers + signedBy: + anyOf: + - "akash1365yvmc4s7awdyj3n2sav7xfx76adc6dnmlx63" + - "akash18qa2a2ltfyvkyj0ggj3hkvuj6twzyumuaru9s4" + ####################################################### + #Remove this section to deploy on untrusted providers + #Beware* You may have deployment, security, or other issues on untrusted providers + #https://docs.akash.network/providers/akash-audited-attributes + pricing: + babyagi-ui: + denom: uakt + amount: 10000 #Keep high to show all bids +deployment: + babyagi-ui: + akash: + profile: babyagi-ui + count: 1 diff --git a/babyagi/Dockerfile b/babyagi/Dockerfile new file mode 100644 index 00000000..3f05ec92 --- /dev/null +++ b/babyagi/Dockerfile @@ -0,0 +1,13 @@ +FROM python:3.11-slim + +ENV PIP_NO_CACHE_DIR=true +WORKDIR /app +RUN apt-get update && apt-get install build-essential wget curl git -y + +COPY entrypoint.sh . +EXPOSE 8080 + +RUN wget https://github.com/yudai/gotty/releases/download/v2.0.0-alpha.3/gotty_2.0.0-alpha.3_linux_amd64.tar.gz +RUN tar -zxvf gotty_2.0.0-alpha.3_linux_amd64.tar.gz ; chmod +x gotty ; rm -rf gotty_2.0.0-alpha.3_linux_amd64.tar.gz ; mv gotty /usr/local/bin + +ENTRYPOINT ["bash", "/app/entrypoint.sh"] diff --git a/babyagi/README.md b/babyagi/README.md new file mode 100644 index 00000000..2a134c51 --- /dev/null +++ b/babyagi/README.md @@ -0,0 +1,107 @@ +# babyagi Objective + +This Python script is an example of an AI-powered task management system. The system uses OpenAI and vector databases such as Chroma or Weaviate to create, prioritize, and execute tasks. The main idea behind this system is that it creates tasks based on the result of previous tasks and a predefined objective. 
The script then uses OpenAI's natural language processing (NLP) capabilities to create new tasks based on the objective, and Chroma/Weaviate to store and retrieve task results for context. This is a pared-down version of the original [Task-Driven Autonomous Agent](https://twitter.com/yoheinakajima/status/1640934493489070080?s=20) (Mar 28, 2023). + +This README will cover the following: + +- [How the script works](#how-it-works) + +- [How to use the script](#how-to-use) + +- [Supported Models](#supported-models) + +- [Warning about running the script continuously](#continous-script-warning) + +# How It Works + +The script works by running an infinite loop that does the following steps: + +1. Pulls the first task from the task list. +2. Sends the task to the execution agent, which uses OpenAI's API to complete the task based on the context. +3. Enriches the result and stores it in [Chroma](https://docs.trychroma.com)/[Weaviate](https://weaviate.io/). +4. Creates new tasks and reprioritizes the task list based on the objective and the result of the previous task. +
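The loop above can be sketched in Python. This is a minimal illustration only, not the real `babyagi.py`: the LLM call and the vector store are stubbed out, and the function signatures are simplified for clarity (the actual script's agents take slightly different arguments and call the OpenAI API and Chroma/Weaviate).

```python
from collections import deque

def execution_agent(objective, task, llm):
    # Step 2: ask the LLM to complete the task, with the objective as context.
    return llm(f"Objective: {objective}\nTask: {task}\nResult:")

def task_creation_agent(objective, result, task, pending, llm):
    # Step 4a: ask the LLM for new tasks based on the last result.
    raw = llm(f"Given result {result!r} of task {task!r} toward {objective!r}, "
              f"list new tasks not in {pending}")
    return [{"task_name": line.strip()} for line in raw.splitlines() if line.strip()]

def prioritization_agent(tasks, objective, llm):
    # Step 4b: ask the LLM to reorder the remaining task list.
    raw = llm(f"Reprioritize {[t['task_name'] for t in tasks]} for {objective!r}")
    return deque({"task_name": line.strip()} for line in raw.splitlines() if line.strip())

def run(objective, initial_task, llm, store, iterations=3):
    tasks = deque([{"task_name": initial_task}])
    while tasks and iterations > 0:       # the real script loops indefinitely
        task = tasks.popleft()            # 1. pull the first task
        result = execution_agent(objective, task["task_name"], llm)
        store.append((task["task_name"], result))   # 3. store the enriched result
        pending = [t["task_name"] for t in tasks]
        tasks.extend(task_creation_agent(objective, result, task["task_name"], pending, llm))
        if tasks:                         # 4. reprioritize what remains
            tasks = prioritization_agent(list(tasks), objective, llm)
        iterations -= 1
    return store
```

With a stub `llm` that always returns the same two task names, two iterations leave two (task, result) pairs in the store, which shows the pull → execute → store → replan cycle without any external services.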
+ +![image](https://user-images.githubusercontent.com/21254008/235015461-543a897f-70cc-4b63-941a-2ae3c9172b11.png) + +The `execution_agent()` function is where the OpenAI API is used. It takes two parameters: the objective and the task. It then sends a prompt to OpenAI's API, which returns the result of the task. The prompt consists of a description of the AI system's task, the objective, and the task itself. The result is then returned as a string. + +The `task_creation_agent()` function is where OpenAI's API is used to create new tasks based on the objective and the result of the previous task. The function takes four parameters: the objective, the result of the previous task, the task description, and the current task list. It then sends a prompt to OpenAI's API, which returns a list of new tasks as strings. The function then returns the new tasks as a list of dictionaries, where each dictionary contains the name of the task. + +The `prioritization_agent()` function is where OpenAI's API is used to reprioritize the task list. The function takes one parameter, the ID of the current task. It sends a prompt to OpenAI's API, which returns the reprioritized task list as a numbered list. + +Finally, the script uses Chroma/Weaviate to store and retrieve task results for context. The script creates a Chroma/Weaviate collection based on the table name specified in the TABLE_NAME variable. Chroma/Weaviate is then used to store the results of the task in the collection, along with the task name and any additional metadata. + +# How to Use + +To use the script, you will need to follow these steps: + +1. Clone the repository via `git clone https://github.com/yoheinakajima/babyagi.git` and `cd` into the cloned repository. +2. Install the required packages: `pip install -r requirements.txt` +3. Copy the .env.example file to .env: `cp .env.example .env`. This is where you will set the following variables. +4. 
Set your OpenAI API key in the OPENAI_API_KEY and OPENAI_API_MODEL variables. To use Weaviate, you will also need to set up the additional variables detailed [here](docs/weaviate.md). +5. Set the name of the table where the task results will be stored in the TABLE_NAME variable. +6. (Optional) Set the name of the BabyAGI instance in the BABY_NAME variable. +7. (Optional) Set the objective of the task management system in the OBJECTIVE variable. +8. (Optional) Set the first task of the system in the INITIAL_TASK variable. +9. Run the script: `python babyagi.py` + +All optional values above can also be specified on the command line. + +# Running inside a docker container + +As a prerequisite, you will need docker and docker-compose installed. Docker Desktop is the simplest option: https://www.docker.com/products/docker-desktop/ + +To run the system inside a docker container, set up your .env file as per the steps above and then run the following: + +``` +docker-compose up +``` + +# Supported Models + +This script works with all OpenAI models, as well as Llama and its variations through Llama.cpp. The default model is **gpt-3.5-turbo**. To use a different model, specify it through LLM_MODEL or on the command line. + +## Llama + +Llama integration requires the llama-cpp package. You will also need the Llama model weights. + +- **Under no circumstances share IPFS, magnet links, or any other links to model downloads anywhere in this repository, including in issues, discussions or pull requests. They will be immediately deleted.** + +Once you have them, set LLAMA_MODEL_PATH to the path of the specific model to use. For convenience, you can link `models` in the BabyAGI repo to the folder where you have the Llama model weights. Then run the script with `LLM_MODEL=llama` or the `-l` argument. + +# Warning + +This script is designed to be run continuously as part of a task management system. Running this script continuously can result in high API usage, so please use it responsibly. 
Additionally, the script requires the OpenAI API to be set up correctly, so make sure you have set up the API before running the script. + +# Contribution + +Needless to say, BabyAGI is still in its infancy and thus we are still determining its direction and the steps to get there. Currently, a key design goal for BabyAGI is to be _simple_ such that it's easy to understand and build upon. To maintain this simplicity, we kindly request that you adhere to the following guidelines when submitting PRs: + +- Focus on small, modular modifications rather than extensive refactoring. +- When introducing new features, provide a detailed description of the specific use case you are addressing. + +A note from @yoheinakajima (Apr 5th, 2023): + +> I know there are a growing number of PRs, appreciate your patience - as I am both new to GitHub/OpenSource, and did not plan my time availability accordingly this week. Re:direction, I've been torn on keeping it simple vs expanding - currently leaning towards keeping a core Baby AGI simple, and using this as a platform to support and promote different approaches to expanding this (eg. BabyAGIxLangchain as one direction). I believe there are various opinionated approaches that are worth exploring, and I see value in having a central place to compare and discuss. More updates coming shortly. + +I am new to GitHub and open source, so please be patient as I learn to manage this project properly. I run a VC firm by day, so I will generally be checking PRs and issues at night after I get my kids down - which may not be every night. Open to the idea of bringing in support, will be updating this section soon (expectations, visions, etc). Talking to lots of people and learning - hang tight for updates! + +# BabyAGI Activity Report + +To help the BabyAGI community stay informed about the project's progress, Blueprint AI has developed a Github activity summarizer for BabyAGI. 
This concise report displays a summary of all contributions to the BabyAGI repository over the past 7 days (continuously updated), making it easy for you to keep track of the latest developments. + +To view the BabyAGI 7-day activity report, go here: [https://app.blueprint.ai/github/yoheinakajima/babyagi](https://app.blueprint.ai/github/yoheinakajima/babyagi) + +# Inspired projects + +In the short time since it was released, BabyAGI has inspired many projects. You can see them all [here](docs/inspired-projects.md). + +# Backstory + +BabyAGI is a pared-down version of the original [Task-Driven Autonomous Agent](https://twitter.com/yoheinakajima/status/1640934493489070080?s=20) (Mar 28, 2023) shared on Twitter. This version is down to 140 lines: 13 comments, 22 blanks, and 105 lines of code. The name of the repo came up in the reaction to the original autonomous agent - the author does not mean to imply that this is AGI. + +Made with love by [@yoheinakajima](https://twitter.com/yoheinakajima), who happens to be a VC (would love to see what you're building!) diff --git a/babyagi/deploy.yaml b/babyagi/deploy.yaml new file mode 100644 index 00000000..90e4815a --- /dev/null +++ b/babyagi/deploy.yaml @@ -0,0 +1,80 @@ +--- +version: "2.0" + +services: + babyagi: + image: cryptoandcoffee/akash-babyagi:1 + expose: + - port: 3000 + as: 80 + proto: tcp + to: + - global: true + env: + #- API CONFIG + #- OPENAI_API_MODEL can be used instead + #- Special values: + #- human - use human as intermediary with custom LLMs + #- llama - use llama.cpp with Llama, Alpaca, Vicuna, GPT4All, etc + - "LLM_MODEL=gpt-3.5-turbo" # alternatively, gpt-4, text-davinci-003, etc + - "LLAMA_MODEL_PATH=" # ex. 
models/llama-13B/ggml-model.bin + #- LLAMA_THREADS_NUM=8 # Set the number of threads for llama (optional) + - "OPENAI_API_KEY=" + - "OPENAI_TEMPERATURE=0.5" + #- STORE CONFIG + #- TABLE_NAME can be used instead + - "RESULTS_STORE_NAME=baby-agi-test-table" + #- Weaviate config + #- Uncomment and fill these to switch from local ChromaDB to Weaviate + #- WEAVIATE_USE_EMBEDDED=true + #- WEAVIATE_URL= + #- WEAVIATE_API_KEY= + #- Pinecone config + #- Uncomment and fill these to switch from local ChromaDB to Pinecone + #- PINECONE_API_KEY= + #- PINECONE_ENVIRONMENT= + #- COOPERATIVE MODE CONFIG + #- BABY_NAME can be used instead + - "INSTANCE_NAME=BabyAGI" + - "COOPERATIVE_MODE=none" # local + #- RUN CONFIG + - "OBJECTIVE=Solve world hunger" + #- For backwards compatibility + #- FIRST_TASK can be used instead of INITIAL_TASK + - "INITIAL_TASK=Develop a task list" + #- Extensions + #- List additional extension .env files to load (except .env.example!) + - "DOTENV_EXTENSIONS=" + #- Set to true to enable command line args support + - "ENABLE_COMMAND_LINE_ARGS=false" +profiles: + compute: + babyagi: + resources: + cpu: + units: 8 + memory: + size: 4.75Gi #Increase this for larger local memory, can be lower if used external vector for memory. 
+ storage: + size: 16Gi + placement: + akash: + ####################################################### + #Keep this section to deploy on trusted providers + signedBy: + anyOf: + - "akash1365yvmc4s7awdyj3n2sav7xfx76adc6dnmlx63" + - "akash18qa2a2ltfyvkyj0ggj3hkvuj6twzyumuaru9s4" + ####################################################### + #Remove this section to deploy on untrusted providers + #Beware* You may have deployment, security, or other issues on untrusted providers + #https://docs.akash.network/providers/akash-audited-attributes + pricing: + babyagi: + denom: uakt + amount: 10000 #Keep high to show all bids +deployment: + babyagi: + akash: + profile: babyagi + count: 1 diff --git a/babyagi/entrypoint.sh b/babyagi/entrypoint.sh new file mode 100755 index 00000000..56ba3665 --- /dev/null +++ b/babyagi/entrypoint.sh @@ -0,0 +1,5 @@ +git clone https://github.com/yoheinakajima/babyagi babyagi +cd babyagi +pip install --upgrade pip +pip install -r requirements.txt +gotty -p 3000 -w --random-url-length 16 python babyagi.py diff --git a/chatchat/Dockerfile b/chatchat/Dockerfile new file mode 100644 index 00000000..1091e80c --- /dev/null +++ b/chatchat/Dockerfile @@ -0,0 +1,30 @@ +FROM node:lts-alpine + +WORKDIR /app + +RUN apk add --no-cache git + +COPY docker-entrypoint.sh /usr/local/bin/ +RUN chmod +x /usr/local/bin/docker-entrypoint.sh +ENTRYPOINT ["docker-entrypoint.sh"] + +#FROM node:lts-alpine as production + +#WORKDIR /app + +EXPOSE 3000 + +ENV NODE_ENV=production \ + BASE_URL=http://localhost:3000 \ + OPENAI_API_KEY="" \ + OPENAI_API_ENDPOINT="https://api.openai.com" \ + DATABASE_URL="" \ + NEXTAUTH_URL="" \ + NEXTAUTH_SECRET="" \ + EMAIL_HOST="" \ + EMAIL_PORT="" \ + EMAIL_USERNAME="" \ + EMAIL_PASSWORD="" \ + EMAIL_FORM="" + +#CMD ["yarn", "start"] diff --git a/chatchat/README.md b/chatchat/README.md new file mode 100644 index 00000000..858e65a1 --- /dev/null +++ b/chatchat/README.md @@ -0,0 +1,109 @@ +# [Chat Chat](https://chat.okisdev.com) + +> Chat 
Chat to unlock your next level AI conversational experience. You can use multiple APIs from OpenAI, Microsoft Azure, Claude, Cohere, Hugging Face, and more to make your AI conversation experience even richer. + +## Important Notes + +- Some APIs are paid APIs, please make sure you have read and agreed to the relevant terms of service before use. +- Some features are still under development, please submit PR or Issue. +- The demo is for demonstration purposes only, it may retain some user data. +- AI may generate offensive content, please use it with caution. + +## Preview + +### Interface + +![UI](https://cdn.harrly.com/project/GitHub/Chat-Chat/img/UI-1.png) + +![Dashboard](https://cdn.harrly.com/project/GitHub/Chat-Chat/img/Dashboard-1.png) + +### Functions + +https://user-images.githubusercontent.com/66008528/235539101-562afbc8-cb62-41cc-84d9-1ea8ed83d435.mp4 + +https://user-images.githubusercontent.com/66008528/235539163-35f7ee91-e357-453a-ae8b-998018e003a7.mp4 + +## Features + +- [x] TTS +- [x] Dark Mode +- [x] Chat with files +- [x] Markdown formatting +- [x] Multi-language support +- [x] Support for System Prompt +- [x] Shortcut menu (command + k) +- [x] Wrapped API (no more proxies) +- [x] Support for sharing conversations +- [x] Chat history (local and cloud sync) +- [x] Support for streaming messages (SSE) +- [x] Plugin support (`/search`, `/fetch`) +- [x] Support for message code syntax highlighting +- [x] Support for OpenAI, Microsoft Azure, Claude, Cohere, Hugging Face + +## Roadmap + +Please refer to https://github.com/users/okisdev/projects/7 + +## Usage + +### Prerequisites + +- Any API key from OpenAI, Microsoft Azure, Claude, Cohere, Hugging Face + +### Environment variables + +| variable name | description | default | mandatory | prompt | +| --------------------- | --------------------------- | ------------------------------------- | ------------------------ | 
----------------------------------------------------------------------------------------------------------------- |
+| `BASE_URL` | Your website URL | Local default `http://localhost:3000` | **Yes** (with protocol prefix) | |
+| `DATABASE_URL` | PostgreSQL connection string | | **Yes** | Starts with `postgresql://` (if you do not need a database, fill in a placeholder such as `postgresql://user:password@example.com:port/dbname`) |
+| `NEXTAUTH_URL` | Your website URL | | **Yes** (without protocol prefix) | |
+| `NEXTAUTH_SECRET` | NextAuth secret | | **Yes** | A random string (16 characters is best) |
+| `OPENAI_API_KEY` | OpenAI API key | | No | |
+| `OPENAI_API_ENDPOINT` | OpenAI API endpoint | | No | |
+| `EMAIL_HOST` | SMTP host | | No | |
+| `EMAIL_PORT` | SMTP port | | No | |
+| `EMAIL_USERNAME` | SMTP username | | No | |
+| `EMAIL_PASSWORD` | SMTP password | | No | |
+| `EMAIL_FORM` | SMTP sending address | | No | |
+
+### Deployment
+
+> Please modify the environment variables before deployment; more details can be found in the [documentation](https://docs.okis.dev/chat/deployment/).
+
+#### Local Deployment
+
+```bash
+git clone
+cd ChatChat
+yarn
+yarn dev
+```
+
+#### Docker
+
+```bash
+docker build -t chatchat .
+docker run -p 3000:3000 -e BASE_URL="" -e DATABASE_URL="" -e NEXTAUTH_URL="" -e NEXTAUTH_SECRET="" -e OPENAI_API_KEY="" -e OPENAI_API_ENDPOINT="" -e EMAIL_HOST="" -e EMAIL_PORT="" -e EMAIL_USERNAME="" -e EMAIL_PASSWORD="" -e EMAIL_FORM="" chatchat
+```
+
+OR
+
+```bash
+docker run -p 3000:3000 -e BASE_URL="" -e DATABASE_URL="" -e NEXTAUTH_URL="" -e NEXTAUTH_SECRET="" -e OPENAI_API_KEY="" -e OPENAI_API_ENDPOINT="" -e EMAIL_HOST="" -e EMAIL_PORT="" -e EMAIL_USERNAME="" -e EMAIL_PASSWORD="" -e EMAIL_FORM="" ghcr.io/okisdev/chatchat:latest
+```
+
+## LICENSE
+
+[AGPL-3.0](./LICENSE)
+
+## Support me
+
+[![Buy Me A Coffee](https://www.buymeacoffee.com/assets/img/custom_images/orange_img.png)](https://www.buymeacoffee.com/okisdev)
+
+## Technology Stack
+
+Next.js / Tailwind CSS / shadcn UI
+
+```
+
+```
diff --git a/chatchat/deploy.yaml b/chatchat/deploy.yaml
new file mode 100644
index 00000000..791cfb52
--- /dev/null
+++ b/chatchat/deploy.yaml
@@ -0,0 +1,81 @@
+---
+version: '2.0'
+services:
+  chatchat:
+    image: cryptoandcoffee/akash-chatchat:1
+    depends_on:
+      - postgres
+    expose:
+      - port: 3000
+        as: 80
+        to:
+          - global: true
+    env:
+      - "BASE_URL=http://localhost:3000"
+      - "DATABASE_URL=postgresql://admin:squarekingzebrahello@postgres:5432/chatchat"
+      - "NEXTAUTH_URL=localhost:3000"
+      - "NEXTAUTH_SECRET=0x3A7F" #Generate a random string (16 characters is best)
+      - "OPENAI_API_KEY=your-openai-key"
+      - "OPENAI_API_ENDPOINT=https://api.openai.com"
+      #- "EMAIL_HOST="
+      #- "EMAIL_PORT="
+      #- "EMAIL_USERNAME="
+      #- "EMAIL_PASSWORD="
+      #- "EMAIL_FORM="
+      - "POSTGRES_USER=admin"
+  postgres:
+    image: postgres
+    expose:
+      - port: 5432
+        to:
+          - service: chatchat
+    env:
+      - "POSTGRES_USER=admin"
+      - "POSTGRES_PASSWORD=squarekingzebrahello"
+      - "POSTGRES_DB=chatchat"
+profiles:
+  compute:
+    chatchat:
+      resources:
+        cpu:
+          units: 8
+        memory:
+          size: 4.75Gi
+        storage:
+          - size: 16Gi
+    postgres:
+      resources:
+        cpu:
+          units: 0.5
+        memory:
+          size: 256Mi
+        storage:
+          - size: 1Gi
+  placement:
+    akash:
+      #######################################################
+      #Keep this section to deploy on trusted providers
+      signedBy:
+        anyOf:
+          - "akash1365yvmc4s7awdyj3n2sav7xfx76adc6dnmlx63"
+          - "akash18qa2a2ltfyvkyj0ggj3hkvuj6twzyumuaru9s4"
+      #######################################################
+      #Remove this section to deploy on untrusted providers
+      #Beware* You may have deployment, security, or other issues on untrusted providers
+      #https://docs.akash.network/providers/akash-audited-attributes
+      pricing:
+        chatchat:
+          denom: uakt
+          amount: 10000
+        postgres:
+          denom: uakt
+          amount: 10000
+deployment:
+  chatchat:
+    akash:
+      profile: chatchat
+      count: 1
+  postgres:
+    akash:
+      profile: postgres
+      count: 1
diff --git a/daila/deploy.yaml~HEAD b/daila/deploy.yaml~HEAD
new file mode 100644
index 00000000..576950f6
--- /dev/null
+++ b/daila/deploy.yaml~HEAD
@@ -0,0 +1,43 @@
+---
+version: "2.0"
+
+services:
+  daila:
+    image: cryptoandcoffee/akash-daila:5
+    expose:
+      - port: 3000
+        as: 80
+        to:
+          - global: true
+    env:
+      - MODEL_SIZE=7B
+      - ALPACA=true
+      - LLAMA=false
+profiles:
+  compute:
+    daila:
+      resources:
+        cpu:
+          units: 32.0
+        memory:
+          size: 16Gi #Need to increase for larger models
+        storage:
+          size: 8Gi #Need to increase for larger models
+  placement:
+    akash:
+      attributes:
+        host: akash
+      signedBy:
+        anyOf:
+          - "akash1365yvmc4s7awdyj3n2sav7xfx76adc6dnmlx63"
+          - "akash18qa2a2ltfyvkyj0ggj3hkvuj6twzyumuaru9s4"
+      pricing:
+        daila:
+          denom: uakt
+          amount: 10000
+
+deployment:
+  daila:
+    akash:
+      profile: daila
+      count: 1
diff --git a/daila/deploy.yaml~master b/daila/deploy.yaml~master
new file mode 100644
index 00000000..576950f6
--- /dev/null
+++ b/daila/deploy.yaml~master
@@ -0,0 +1,43 @@
+---
+version: "2.0"
+
+services:
+  daila:
+    image: cryptoandcoffee/akash-daila:5
+    expose:
+      - port: 3000
+        as: 80
+        to:
+          - global: true
+    env:
+      - MODEL_SIZE=7B
+      - ALPACA=true
+      - LLAMA=false
+profiles:
+  compute:
+    daila:
+      resources:
+        cpu:
+          units: 32.0
+        memory:
+          size: 16Gi #Need to increase for larger models
+        storage:
+          size: 8Gi #Need to increase for larger models
+  placement:
+    akash:
+      attributes:
+        host: akash
+      signedBy:
+        anyOf:
+          - "akash1365yvmc4s7awdyj3n2sav7xfx76adc6dnmlx63"
+          - "akash18qa2a2ltfyvkyj0ggj3hkvuj6twzyumuaru9s4"
+      pricing:
+        daila:
+          denom: uakt
+          amount: 10000
+
+deployment:
+  daila:
+    akash:
+      profile: daila
+      count: 1
diff --git a/tgpt/Dockerfile b/tgpt/Dockerfile
new file mode 100644
index 00000000..ac6c363c
--- /dev/null
+++ b/tgpt/Dockerfile
@@ -0,0 +1,4 @@
+FROM debian
+COPY entrypoint.sh .
+RUN apt-get update ; apt-get install -y aria2 curl sudo ; aria2c --out=gotty.tar.gz https://github.com/yudai/gotty/releases/download/v2.0.0-alpha.3/gotty_2.0.0-alpha.3_linux_amd64.tar.gz ; tar -zxvf gotty.tar.gz ; chmod +x gotty ; rm -rf gotty.tar.gz
+CMD ["bash", "./entrypoint.sh"]
diff --git a/tgpt/README.md b/tgpt/README.md
new file mode 100644
index 00000000..980b81db
--- /dev/null
+++ b/tgpt/README.md
@@ -0,0 +1,44 @@
+# Terminal GPT (tgpt) πŸš€
+
+tgpt is a cross-platform CLI (command-line) tool that lets you use ChatGPT 3.5 in the terminal **without API keys**. It communicates with the backend of [Bai chatbot](https://chatbot.theb.ai). It's written in Go.
+
+# Usage πŸ’¬
+```
+tgpt "What is the purpose of life?"
+```
+![demo](https://user-images.githubusercontent.com/66430340/233759296-c4cf8cf2-0cab-48aa-9e84-40765b823282.gif)
+
+# Installation ⏬
+
+## Download for GNU/Linux 🐧 or MacOS 🍎
+The default download location is /usr/local/bin, but you can change it in the command to use your own location. However, make sure it's in your PATH if you want it to be easily accessible.
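If you pick a custom location, you can first check that it is on your PATH; a small sketch, where `$HOME/.local/bin` is only an example directory:

```
# Sketch: verify a custom install directory is on PATH before pointing the
# installer at it ("$HOME/.local/bin" is only an example location).
INSTALL_DIR="$HOME/.local/bin"
mkdir -p "$INSTALL_DIR"

case ":$PATH:" in
  *":$INSTALL_DIR:"*) echo "$INSTALL_DIR is already on PATH" ;;
  *) PATH="$PATH:$INSTALL_DIR"; export PATH; echo "added $INSTALL_DIR to PATH for this session" ;;
esac
```

To make the change permanent, add the `export PATH=...` line to your shell profile (e.g. `~/.bashrc`).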
+
+You can download it with this command:
+```
+curl -sSL https://raw.githubusercontent.com/aandrew-me/tgpt/main/install | bash -s /usr/local/bin
+```
+
+If you are using Arch Linux, you can install the [AUR package](https://aur.archlinux.org/packages/tgpt-bin) with `paru`:
+
+```
+paru -S tgpt-bin
+```
+or with `yay`:
+```
+yay -S tgpt-bin
+```
+## With Go
+```
+go install github.com/aandrew-me/tgpt@latest
+```
+
+## Windows πŸͺŸ
+The package can be installed with [scoop](https://scoop.sh/) using the following command:
+```
+scoop install https://raw.githubusercontent.com/aandrew-me/tgpt/main/tgpt.json
+```
+## From Release
+
+You can download an executable for your operating system, then rename it to `tgpt` (or whatever you want). You can then run it from that directory with `./tgpt`, or add it to your **PATH** environment variable and run it by simply typing `tgpt`.
+
+### If you liked this project, give it a star! ⭐
diff --git a/tgpt/deploy.yaml b/tgpt/deploy.yaml
new file mode 100644
index 00000000..833b39cd
--- /dev/null
+++ b/tgpt/deploy.yaml
@@ -0,0 +1,43 @@
+---
+version: "2.0"
+
+services:
+  tgpt:
+    image: cryptoandcoffee/akash-tgpt:1
+    expose:
+      - port: 8080
+        as: 80
+        proto: tcp
+        to:
+          - global: true
+profiles:
+  compute:
+    tgpt:
+      resources:
+        cpu:
+          units: 8
+        memory:
+          size: 4.75Gi
+ storage: + size: 16Gi + placement: + akash: + ####################################################### + #Keep this section to deploy on trusted providers + signedBy: + anyOf: + - "akash1365yvmc4s7awdyj3n2sav7xfx76adc6dnmlx63" + - "akash18qa2a2ltfyvkyj0ggj3hkvuj6twzyumuaru9s4" + ####################################################### + #Remove this section to deploy on untrusted providers + #Beware* You may have deployment, security, or other issues on untrusted providers + #https://docs.akash.network/providers/akash-audited-attributes + pricing: + tgpt: + denom: uakt + amount: 10000 #Keep high to show all bids +deployment: + tgpt: + akash: + profile: tgpt + count: 1 diff --git a/tgpt/entrypoint.sh b/tgpt/entrypoint.sh new file mode 100755 index 00000000..caf7aca9 --- /dev/null +++ b/tgpt/entrypoint.sh @@ -0,0 +1,2 @@ +curl -sSL https://raw.githubusercontent.com/aandrew-me/tgpt/main/install | bash +./gotty -w --random-url-length 16 tgpt -i -m "Hello world!"