hatanp/simple_triton_gpt_example

A simple example of running text generation and Stable Diffusion with Hugging Face models and a custom Python backend on the NVIDIA Triton Inference Server. If you only want to use one of the two, just delete the other model's files from server/model_repository.
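
Below is a minimal sketch of what a custom Python backend model.py for the text-generation model could look like. The tensor names (INPUT_TEXT, OUTPUT_TEXT), the local model path, and the use of a transformers pipeline are illustrative assumptions, not taken from this repository; the actual names would be defined in the model's config.pbtxt under server/model_repository.

```python
# model.py -- minimal sketch of a Triton Python backend for text generation.
# Tensor names, the local model path, and generation settings are assumptions
# for illustration, not the ones this repository actually uses.
import numpy as np
import triton_python_backend_utils as pb_utils
from transformers import pipeline


class TritonPythonModel:
    def initialize(self, args):
        # Load the model files that setup.sh saved locally (path is hypothetical).
        self.generator = pipeline("text-generation", model="/models/gpt2_local")

    def execute(self, requests):
        responses = []
        for request in requests:
            # Triton delivers BYTES/string inputs as a numpy array of bytes objects.
            prompts = pb_utils.get_input_tensor_by_name(request, "INPUT_TEXT").as_numpy()
            texts = [p.decode("utf-8") for p in prompts.reshape(-1)]

            generated = [
                self.generator(t, max_new_tokens=64)[0]["generated_text"]
                for t in texts
            ]

            out = pb_utils.Tensor(
                "OUTPUT_TEXT",
                np.array([g.encode("utf-8") for g in generated], dtype=object),
            )
            responses.append(pb_utils.InferenceResponse(output_tensors=[out]))
        return responses
```

A client would then send an INPUT_TEXT tensor (for example via the tritonclient HTTP or gRPC API) and read back OUTPUT_TEXT.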

Setup

  • Run setup.sh in the server folder to save the model files locally (TODO: make use of the proper Hugging Face cache). A rough Python equivalent of that download-and-save step is sketched below.
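
The repository does not spell out exactly what setup.sh downloads; a rough Python equivalent of such a "save models locally" step, with model IDs and output directories chosen purely for illustration, might look like:

```python
# Rough Python equivalent of a "save model files locally" setup step.
# Model IDs and output directories are illustrative assumptions, not
# necessarily the ones setup.sh actually uses.
from transformers import AutoModelForCausalLM, AutoTokenizer
from diffusers import StableDiffusionPipeline

# Text-generation model for the GPT example.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
tokenizer.save_pretrained("server/gpt2_local")
model.save_pretrained("server/gpt2_local")

# Stable Diffusion pipeline.
sd = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
sd.save_pretrained("server/sd_local")
```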
