
Graphoria

The open-source AI engine for visual storytelling


Graphoria CA: QxcCntiD4fZwysz9vYMgH154Q4rbpru2bx73uEjpump

Running locally

Provider config:

  • LLM_ENGINE: can be one of "INFERENCE_API", "INFERENCE_ENDPOINT", "OPENAI", "GROQ", "ANTHROPIC"
  • RENDERING_ENGINE: can be one of "INFERENCE_API", "INFERENCE_ENDPOINT", "REPLICATE", "VIDEOCHAIN", "OPENAI" for now, unless you code your own custom solution (see the example after this list)
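
For example, a minimal .env.local that selects your providers could look like this (the values shown are just one valid combination of the options above):

LLM_ENGINE="OPENAI"

RENDERING_ENGINE="REPLICATE"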

Rendering config:

  • RENDERING_HF_INFERENCE_ENDPOINT_URL: required if you use a custom inference endpoint (defaults to nothing)
  • RENDERING_VIDEOCHAIN_API_URL: URL of the VideoChain API server
  • RENDERING_HF_INFERENCE_API_BASE_MODEL: optional, defaults to "stabilityai/stable-diffusion-xl-base-1.0"
  • RENDERING_HF_INFERENCE_API_REFINER_MODEL: optional, defaults to "stabilityai/stable-diffusion-xl-refiner-1.0"
  • RENDERING_REPLICATE_API_MODEL: optional, defaults to "stabilityai/sdxl" (see the example after this list)
  • RENDERING_REPLICATE_API_MODEL_VERSION: optional, in case you want to change the version
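
For instance, to render with Replicate using the defaults above, a .env.local could contain something like the following. Note that AUTH_REPLICATE_API_TOKEN is an assumption based on the AUTH_* naming used elsewhere in this README; check the .env default config file for the exact variable name.

RENDERING_ENGINE="REPLICATE"

RENDERING_REPLICATE_API_MODEL="stabilityai/sdxl"

AUTH_REPLICATE_API_TOKEN="Your Replicate API token"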

Language model config (depending on the LLM engine you decide to use):

  • LLM_HF_INFERENCE_ENDPOINT_URL: ""
  • LLM_HF_INFERENCE_API_MODEL: "HuggingFaceH4/zephyr-7b-beta"
  • LLM_OPENAI_API_BASE_URL: "https://api.openai.com/v1"
  • LLM_OPENAI_API_MODEL: "gpt-4o-mini"
  • LLM_GROQ_API_MODEL: "mixtral-8x7b-32768"
  • LLM_ANTHROPIC_API_MODEL: "claude-3-opus-20240229"

Please read the .env default config file for more information. To customize a variable locally, create a .env.local file (and do not commit it).
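
Assuming the project follows a standard Node.js setup (this README does not describe the toolchain, so treat the commands below as a sketch), running it locally is typically:

npm install

npm run dev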

-> If you intend to run it with local, cloud-hosted and/or proprietary models, you are going to need to code 👨‍💻.

The LLM API (Large Language Model)

Currently Graphoria uses the OpenAI API to generate stories, but you can switch to any of the engines listed below.
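
To select the OpenAI engine explicitly, a .env.local along these lines should work. AUTH_OPENAI_API_KEY is an assumption following the AUTH_* pattern of the other options below; verify the exact name in the .env default config file.

LLM_ENGINE="OPENAI"

LLM_OPENAI_API_MODEL="gpt-4o-mini"

AUTH_OPENAI_API_KEY="Your own OpenAI API key"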

Option 1: Use a Hugging Face Inference API model

This is a recently added option: you can use one of the models from the Hugging Face Hub. By default we suggest zephyr-7b-beta.

To activate it, create a .env.local configuration file:

LLM_ENGINE="INFERENCE_API"

HF_API_TOKEN="Your Hugging Face token"

# "HuggingFaceH4/zephyr-7b-beta" is used by default, but you can change this
# note: You should use a model able to generate JSON responses,
# so it is storngly suggested to use at least the 34b model
HF_INFERENCE_API_MODEL="HuggingFaceH4/zephyr-7b-beta"

Option 2: Use Groq

LLM_ENGINE="GROQ"

LLM_GROQ_API_MODEL="mixtral-8x7b-32768"

AUTH_GROQ_API_KEY="Your own GROQ API Key"

Option 3: Use Anthropic (Claude)

LLM_ENGINE="ANTHROPIC"

LLM_ANTHROPIC_API_MODEL="claude-3-opus-20240229"

AUTH_ANTHROPIC_API_KEY="Your own ANTHROPIC API Key"
