Describe the bug
When attempting to deploy WrenAI locally with Ollama as both the LLM and embedding provider, modeling consistently fails to deploy. The failure occurs specifically in the modeling (deploy) stage.
Desktop (please complete the following information):
OS: macOS
WrenAI Configuration:
LLM: Ollama (qwen2.5:14b)
Embedder: Ollama (nomic-embed-text)
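With an Ollama backend, one failure mode worth ruling out before digging into the logs is a mismatch between the model names configured for WrenAI and the tags actually pulled into Ollama (as listed by `ollama list`). The sketch below is a hypothetical standalone check, not part of WrenAI; it assumes Ollama's usual convention of appending a `:latest` tag to untagged pulls:

```python
def find_missing_models(configured, pulled):
    """Return configured model names with no matching pulled Ollama tag.

    A configured name matches either an exact pulled tag (e.g.
    "qwen2.5:14b") or a pulled tag with the same base name, so a bare
    "nomic-embed-text" matches a pulled "nomic-embed-text:latest".
    """
    pulled_tags = set(pulled)
    pulled_bases = {tag.split(":", 1)[0] for tag in pulled}
    return [
        name for name in configured
        if name not in pulled_tags
        and name.split(":", 1)[0] not in pulled_bases
    ]


if __name__ == "__main__":
    configured = ["qwen2.5:14b", "nomic-embed-text"]  # models from this report
    pulled = ["qwen2.5:14b", "llama3:8b"]             # e.g. output of `ollama list`
    print(find_missing_models(configured, pulled))    # ['nomic-embed-text']
```

If this reports a missing model, pulling it (`ollama pull nomic-embed-text`) before retrying the deploy is a cheap first step.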
To Reproduce
1. Run `docker compose up`
2. Open a browser and navigate to http://localhost:3000
3. Select a database in the initial interface
4. Navigate to the modeling configuration page
5. Click the "Deploy" button
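The command-line part of the steps above can be sketched as a shell session. This is a sketch, not a verified transcript: it assumes the release's docker-compose.yaml and .env are in the current directory and that the UI listens on port 3000 as described above.

```shell
# Bring up the WrenAI stack in the background (assumes the release's
# docker-compose.yaml and .env are in the current directory).
docker compose up -d

# Confirm the containers came up before opening the UI.
docker compose ps

# The UI is served on port 3000 (per the steps above); check it responds.
curl -fsS -o /dev/null -w '%{http_code}\n' http://localhost:3000
```

From there the remaining steps (database selection, modeling page, Deploy) happen in the browser.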
Wren AI Information
Relevant log output
config.yaml.log
docker-compose.yaml.log
wrenai-ibis-server.log
wrenai-wren-ai-service.log
wrenai-wren-engine.log
wrenai-wren-ui.log