Extension: homestar w/ llama2 build & model + prompt chain component/effect + example #628
Labels: enhancement
Summary
Add functionality to Homestar that allows user-driven execution of an LLM chain within a sandboxed environment (Wasm), as a workflow composed of a series of prompt steps (akin to a series of step functions). The outcome of this feature is that inference runs locally on a trained model (e.g. Llama 2, Mistral) privately provided by the host platform it executes on.
The learning goal of this feature is to experiment with running LLMs locally on hosts where the training data remains private and only computationally derived information is shared with other users/peers, enabling AI computation that is not tied to any specific vendor or large cloud provider. Frankly, this work would showcase the opposite of what IEEE Spectrum's Open-Source AI Is Uniquely Dangerous article scrutinizes. Letting users chain LLM steps together, while controlling what inference is exposed and without the infrastructure concerns or data risks typically associated with external cloud services, presents a unique opportunity to democratize AI capabilities. By ensuring that users can interact with and execute complex AI workflows with ease, this feature aims to bridge the gap between advanced AI technologies and non-technical end users.
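To make the "series of prompt steps" idea concrete, here is a minimal sketch of the chaining logic, independent of Homestar or any Wasm tooling. Everything here is hypothetical: `run_inference` is a stand-in for a locally hosted model call (it just echoes, so the control flow is runnable without a model), and `run_chain` threads each step's output into the next prompt template.

```python
def run_inference(prompt: str) -> str:
    # Placeholder for local model inference (hypothetical); a real
    # implementation would call a host-provided Llama 2 / Mistral model.
    return f"<answer to: {prompt}>"

def run_chain(steps: list[str], initial_input: str) -> str:
    """Run each prompt template in order, feeding the previous output
    into the next step via the {prev} placeholder."""
    output = initial_input
    for template in steps:
        output = run_inference(template.format(prev=output))
    return output

result = run_chain(
    ["Summarize: {prev}", "List three risks in: {prev}"],
    "Local LLM execution inside a Wasm sandbox.",
)
```

In a Homestar workflow each step would instead be a sandboxed Wasm task, but the data flow (prior output becomes the next prompt's input) is the same.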
Components