Prompt Storm is a powerful toolkit for sophisticated prompt engineering and optimization. It provides a comprehensive set of tools for creating, optimizing, and managing prompts at scale, making it essential for AI developers and researchers working with language models.
- Overview
- Features
- Installation
- Quick Start
- Usage Guide
- Architecture
- API Reference
- Development
- Advanced Topics
- Troubleshooting
- License
- Contributing
- Publishing to PyPI
Prompt Storm empowers developers to:
- Optimize prompts for better performance and consistency
- Process multiple prompts in batch mode
- Format prompts in standardized YAML format
- Track optimization progress with rich logging
- Handle errors gracefully with comprehensive error reporting
- Intelligent Prompt Optimization: Leverages advanced LLMs to enhance prompt effectiveness
- Batch Processing: Efficiently handle multiple prompts using CSV input
- YAML Formatting: Standardize prompts with structured YAML output
- Progress Tracking: Rich console output with detailed progress information
- Error Handling: Robust error management with helpful error messages
- Optimizing prompts for chatbots and AI assistants
- Standardizing prompt formats across large projects
- Processing and converting legacy prompts to YAML format
- Batch optimization of prompt libraries
- Quality assurance for prompt engineering pipelines
- Python 3.8 or higher
- pip package manager
Install from PyPI:

```bash
pip install prompt-storm
```
Or install from source for development:

```bash
git clone https://github.com/yourusername/prompt-storm.git
cd prompt-storm
pip install -e ".[dev]"
```
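After installation, you can sanity-check the CLI; click (the framework the CLI is built on) generates a help page automatically:

```bash
prompt-storm --help
```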
Core dependencies:
- click: Command line interface creation
- rich: Enhanced terminal output
- litellm: LLM interface
- pyyaml: YAML processing
- pandas: CSV handling
Optimize a single prompt:
prompt-storm optimize "Write a story about {{subject}}"
Result:
```text
Write a detailed and engaging story about {{subject_name}}. Ensure the narrative includes:

1. A clear introduction to {{subject_name}} and their background.
2. A central conflict or challenge that {{subject_name}} faces.
3. The steps {{subject_name}} takes to address the conflict, using Chain of Thought (CoT) to explain their reasoning.
4. Interactions with other characters to provide depth and context.
5. A resolution that is satisfying and aligns with the character's journey.
6. Consider diverse perspectives and avoid stereotypes.
7. Maintain a balance between creative freedom and factual accuracy if {{subject_name}} is based on a real person or event.
8. Use vivid, descriptive language to enhance the reader's experience.
```
Example with YAML output:
prompt-storm optimize "Write a story about {{subject}}" --yaml
Result:
name: "Adventure Story Prompt"
version: '1.0'
description: >-
A prompt designed to generate a compelling story about a main character who embarks on an unexpected adventure. The story should include details about the conflict they face and how
they resolve it, reflecting diverse perspectives and addressing potential biases. The narrative style should balance human emotion with logical progression.
author: quantalogic
input_variables:
main_character:
type: string
description: >-
The name or description of the main character in the story.
examples:
- "Alice"
- "John Doe"
setting:
type: string
description: >-
The environment or location where the adventure takes place.
examples:
- "a mystical forest"
- "a futuristic city"
conflict:
type: string
description: >-
The main challenge or problem the character faces during their adventure.
examples:
- "a quest to find a lost artifact"
- "battling a powerful enemy"
tags:
- "storytelling"
- "adventure"
- "character development"
- "conflict resolution"
categories:
- "writing"
- "creative storytelling"
content: >-
Write a compelling story about {{main_character}} who embarks on an unexpected adventure in {{setting}}. Include details about {{conflict}} they face and how they resolve it. Ensure the
story reflects diverse perspectives and addresses potential biases. Use a narrative style that balances human emotion with logical progression. For example, start with an introduction of
{{main_character}} and their initial situation, followed by the emergence of {{conflict}}, their journey through challenges, and finally, the resolution. Consider edge cases such as
unexpected outcomes or alternative resolutions. Maintain clear, unambiguous language throughout.
Process multiple prompts from a CSV file:
```bash
prompt-storm optimize-batch input.csv output_dir --prompt-column "prompt"
```
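For illustration, input.csv could be as small as a header row plus one prompt per line; the column name and second prompt here are hypothetical, and the header just has to match `--prompt-column`:

```csv
prompt
"Write a story about {{subject}}"
"Summarize the following article: {{article}}"
```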
Format a prompt to YAML:
prompt-storm format-prompt "Generate a creative story" --output-file story.yaml
Optimize a single prompt with customizable parameters:
prompt-storm optimize "Your prompt" \
--model gpt-4o-mini \
--max-tokens 2000 \
--temperature 0.7 \
--output-file optimized.yaml
Parameters:
- `--model`: LLM model to use (default: gpt-4o-mini)
- `--max-tokens`: Maximum tokens in response (default: 2000)
- `--temperature`: Generation temperature (default: 0.7)
- `--input-file`: Optional input file containing the prompt
- `--output-file`: Optional output file for the result
- `--verbose`: Enable detailed logging
Process multiple prompts from a CSV file:
```bash
prompt-storm optimize-batch prompts.csv output/ \
    --prompt-column "prompt" \
    --model gpt-4o-mini \
    --language english
```
Parameters:
- `input-csv`: Path to input CSV file
- `output-dir`: Directory for output files
- `--prompt-column`: Name of CSV column containing prompts
- `--model`: LLM model to use
- `--language`: Target language for optimization
Convert a prompt to YAML format:
prompt-storm format-prompt "Your prompt" \
--output-file formatted.yaml \
--language english
Configure the default settings for Prompt Storm:
```bash
prompt-storm configure --model gpt-4o-mini --max-tokens 2000 --temperature 0.7
```
Parameters:
- `--model`: Default LLM model to use (default: gpt-4o-mini)
- `--max-tokens`: Default maximum tokens in response (default: 2000)
- `--temperature`: Default generation temperature (default: 0.7)
Display the current configuration settings:
```bash
prompt-storm show-config
```
prompt-storm optimize "Explain the concept of quantum computing" \
--model gpt-4o-turbo \
--max-tokens 1500 \
--temperature 0.5 \
--output-file quantum_computing_explanation.yaml
Batch-optimize prompts from a CSV file, reading from the "description" column:

```bash
prompt-storm optimize-batch prompts.csv output/ \
    --prompt-column "description" \
    --model gpt-4o-mini \
    --language english
```
prompt-storm format-prompt "Create a recipe for chocolate cake" \
--output-file chocolate_cake_recipe.yaml
prompt-storm optimize "Design a user-friendly interface for a mobile app" \
--model gpt-4o-mini \
--max-tokens 2000 \
--temperature 0.7 \
--verbose
Run a batch optimization with Spanish as the target language:

```bash
prompt-storm optimize-batch prompts.csv output/ \
    --prompt-column "description" \
    --model gpt-4o-mini \
    --language spanish
```
Model configuration options:
```python
from prompt_storm.models.config import OptimizationConfig

config = OptimizationConfig(
    model="gpt-4o-mini",
    max_tokens=2000,
    temperature=0.7,
    language="english"
)
```
We use litellm to interface with various language models.
Examples of supported models:

- `gpt-4o-mini`: GPT-4o Mini model
- `gpt-4o-turbo`: GPT-4o Turbo model
- `bedrock/amazon.nova-pro-v1:0`
- `bedrock/amazon.nova-lite-v1:0`
- `bedrock/amazon.nova-micro-v1:0`
- `bedrock/anthropic.claude-3-5-sonnet-20241022-v2:0`
- `ollama/llama3.3:latest`
- `ollama/qwen2.5-coder:14b`
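Any of these identifiers can be passed directly to `--model`. For example, to target a local Ollama model (assuming Ollama is running and reachable by litellm):

```bash
prompt-storm optimize "Write a story about {{subject}}" --model ollama/llama3.3:latest
```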
```mermaid
graph TD
    A[CLI] --> B[Optimizer]
    B --> C[Services]
    C --> D[YAML Service]
    C --> E[CSV Service]
    C --> F[Batch Service]
    B --> G[Utils]
    G --> H[Logger]
    G --> I[Error Handler]
    G --> J[Response Processor]
```
The OptimizerService handles prompt optimization using LLMs:

```python
optimizer = OptimizerService(config)
result = optimizer.optimize("Your prompt")
```
The YAMLService manages YAML formatting and validation:

```python
yaml_service = YAMLService(config)
yaml_output = yaml_service.format_to_yaml(prompt)
```
The BatchOptimizerService processes multiple prompts efficiently:

```python
# optimizer, yaml_service, and csv_service are previously
# constructed service instances
batch_service = BatchOptimizerService(
    optimizer_service=optimizer,
    yaml_service=yaml_service,
    csv_service=csv_service
)
```
```text
prompt_storm/
├── __init__.py       # Package initialization
├── cli.py            # Command line interface
├── optimizer.py      # Core optimization logic
├── models/           # Data models
│   ├── config.py
│   └── responses.py
├── services/         # Core services
│   ├── optimizer_service.py
│   ├── yaml_service.py
│   ├── csv_service.py
│   └── batch_optimizer_service.py
├── utils/            # Utility functions
│   ├── logger.py
│   ├── error_handler.py
│   └── response_processor.py
└── interfaces/       # Service interfaces
    └── service_interfaces.py
```
Run the test suite:
```bash
pytest tests/
```
Write tests following the existing pattern:
```python
from prompt_storm import PromptOptimizer

def test_optimize():
    optimizer = PromptOptimizer()
    result = optimizer.optimize("Test prompt")
    assert result is not None
```
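To run just this test during development, pytest's standard `-k` filter works:

```bash
pytest -k test_optimize
```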
- Fork the repository
- Create a feature branch
- Make your changes
- Run tests
- Submit a pull request
```python
from prompt_storm.models.config import OptimizationConfig

config = OptimizationConfig(
    model="gpt-4o-mini",
    max_tokens=2000,
    temperature=0.7,
    language="english"
)
```
```python
from prompt_storm import PromptOptimizer

optimizer = PromptOptimizer()

# Single prompt optimization
result = optimizer.optimize("Your prompt")

# Batch processing (one prompt per line; strip trailing newlines)
with open('prompts.csv', 'r') as f:
    prompts = [line.strip() for line in f if line.strip()]
results = [optimizer.optimize(p) for p in prompts]
```
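Since pandas is already a dependency for CSV handling, a slightly more robust sketch for CSV input might look like this (the file name and "prompt" column are assumptions):

```python
import pandas as pd
from prompt_storm import PromptOptimizer

optimizer = PromptOptimizer()

# pandas handles headers and quoting that plain readlines() would not
df = pd.read_csv("prompts.csv")
results = [optimizer.optimize(p) for p in df["prompt"]]
```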
Best practices:
- Use appropriate temperature settings for your use case
- Implement proper error handling
- Monitor token usage
- Validate YAML output before saving (see the sketch after this list)
- Use batch processing for large datasets
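As a minimal sketch of the validation point, pyyaml (a listed dependency) can sanity-check YAML output before it is written to disk; the helper below is illustrative, not part of the Prompt Storm API:

```python
import yaml

def is_valid_yaml(text: str) -> bool:
    """Return True if text parses as valid YAML."""
    try:
        yaml.safe_load(text)
        return True
    except yaml.YAMLError:
        return False
```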
- Rate Limiting
```python
import time

try:
    result = optimizer.optimize(prompt)
except Exception as e:
    if "rate limit" in str(e).lower():
        time.sleep(60)  # Wait before retrying
        result = optimizer.optimize(prompt)
    else:
        raise
```
- Invalid YAML Format
```python
try:
    yaml_output = yaml_service.format_to_yaml(prompt)
except YAMLValidationError as e:
    logger.error(f"YAML validation failed: {e}")
```
Common errors and solutions:
- Rate limit exceeded: Wait a few minutes or upgrade API plan
- Invalid YAML format: Check prompt structure and formatting
- Resource exhausted: Reduce batch size or implement rate limiting
- Invalid model: Verify model name and availability
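For rate-limit and resource-exhaustion errors, a hedged sketch of retry with exponential backoff (the string match mirrors the snippet above; adapt it to the exceptions your provider actually raises):

```python
import time

def optimize_with_backoff(optimizer, prompt, max_retries=5):
    """Retry a rate-limited call with exponentially increasing waits."""
    delay = 1.0
    for _ in range(max_retries):
        try:
            return optimizer.optimize(prompt)
        except Exception as e:
            if "rate limit" not in str(e).lower():
                raise  # Not a rate-limit error; surface it immediately
            time.sleep(delay)
            delay *= 2  # Exponential backoff
    raise RuntimeError("Rate limit persisted after retries")
```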
This project is licensed under the MIT License - see the LICENSE file for details.
We welcome contributions! Please see our Contributing Guidelines for details.
Developed by the Prompt Storm team. Special thanks to all contributors.
To publish the project to PyPI using Poetry, follow these steps:
1. Update the version number in `pyproject.toml`.
2. Log in to PyPI using your token:
   ```bash
   poetry config pypi-token.pypi <your-pypi-token>
   ```
3. Build the source distribution and wheel:
   ```bash
   poetry build
   ```
4. Upload the distribution to PyPI using twine:
   ```bash
   twine upload dist/*
   ```
5. Verify the upload by checking the project page on PyPI.