Update README with LLM messaging and llms.txt #3362

Open · wants to merge 4 commits into base: develop

Changes from 3 commits

116 changes: 84 additions & 32 deletions README.md
@@ -1,7 +1,7 @@
<div align="center">
<img referrerpolicy="no-referrer-when-downgrade" src="https://static.scarf.sh/a.png?x-pxid=0fcbab94-8fbe-4a38-93e8-c2348450a42e" />
<h1 align="center">Connecting data science teams seamlessly to cloud infrastructure.
</h1>
<h1 align="center">Beyond The Demo: Production-Grade LLMOps Systems</h1>
<h3 align="center">ZenML brings battle-tested MLOps practices to your LLM applications, handling evaluation, monitoring, and deployment at scale</h3>

Contributor comment on lines +3 to +4: WDYT about focusing on AI applications rather than LLM applications here?

Suggested change:
<h1 align="center">Beyond The Demo: Production-Grade AI Systems</h1>
<h3 align="center">ZenML brings battle-tested MLOps practices to your AI applications, handling evaluation, monitoring, and deployment at scale</h3>

</div>

<!-- PROJECT SHIELDS -->
@@ -98,39 +98,43 @@ Take a tour with the guided quickstart by running:
zenml go
```

## 🪄 Simple, integrated, End-to-end MLOps
## 🪄 From Prototype to Production: LLMOps Made Simple

Contributor suggested change:
## 🪄 From Prototype to Production: AI Made Simple


### Create machine learning pipelines with minimal code changes
### Create LLM pipelines with minimal code changes

Contributor suggested change:
### Create AI pipelines with minimal code changes


ZenML is a MLOps framework intended for data scientists or ML engineers looking to standardize machine learning practices. Just add `@step` and `@pipeline` to your existing Python functions to get going. Here is a toy example:
ZenML is an open-source LLMOps framework for engineers scaling AI beyond prototypes. Automate evaluation loops, track performance, and deploy updates across 100s of pipelines—all while your RAG apps run like clockwork.

Contributor suggested change:
ZenML is an open-source framework that handles MLOps and LLMOps for engineers scaling AI beyond prototypes. Automate evaluation loops, track performance, and deploy updates across 100s of pipelines—all while your RAG apps run like clockwork.

Contributor: agreed with suggestion


```diff
 from zenml import pipeline, step
 
-@step  # Just add this decorator
-def load_data() -> dict:
-    training_data = [[1, 2], [3, 4], [5, 6]]
-    labels = [0, 1, 0]
-    return {'features': training_data, 'labels': labels}
+@step
+def load_rag_documents() -> dict:
+    # Load and chunk documents for RAG pipeline
+    documents = extract_web_content(url="https://www.zenml.io/")
+    return {"chunks": chunk_documents(documents)}
 
 @step
-def train_model(data: dict) -> None:
-    total_features = sum(map(sum, data['features']))
-    total_labels = sum(data['labels'])
-
-    print(f"Trained model using {len(data['features'])} data points. "
-          f"Feature sum is {total_features}, label sum is {total_labels}")
+def generate_embeddings(data: dict) -> dict:
+    # Generate embeddings for RAG pipeline
+    embeddings = embed_documents(data['chunks'])
+    return {"embeddings": embeddings}
 
-@pipeline  # This function combines steps together
-def simple_ml_pipeline():
-    dataset = load_data()
-    train_model(dataset)
+@step
+def index_generator(
+    embeddings: dict,
+) -> str:
+    # Generate index for RAG pipeline
+    index = create_index(embeddings)
+    return index.id
 
 
-if __name__ == "__main__":
-    run = simple_ml_pipeline()  # call this to run the pipeline
-
+@pipeline
+def llm_pipeline() -> str:
+    documents = load_rag_documents()
+    embeddings = generate_embeddings(documents)
+    index = index_generator(embeddings)
+    return index
```

![Running a ZenML pipeline](/docs/book/.gitbook/assets/readme_basic_pipeline.gif)

Contributor: isn't this still the old pipeline?


### Easily provision an MLOps stack or reuse your existing infrastructure
@@ -183,18 +187,48 @@ def training(...):

Create a complete lineage of who, where, and what data and models are produced.

Youll be able to find out who produced which model, at what time, with which data, and on which version of the code. This guarantees full reproducibility and auditability.
You'll be able to find out who produced which model, at what time, with which data, and on which version of the code. This guarantees full reproducibility and auditability.

```diff
 from zenml import Model
 
-@step(model=Model(name="classification"))
-def trainer(training_df: pd.DataFrame) -> Annotated["model", torch.nn.Module]:
-    ...
+@step(model=Model(name="rag_llm", tags=["staging"]))
+def deploy_rag(index_id: str) -> str:
+    deployment_id = deploy_to_endpoint(index_id)
+    return deployment_id
```

![Exploring ZenML Models](/docs/book/.gitbook/assets/readme_mcp.gif)

## 🚀 Key LLMOps Capabilities

Contributor: not sure this ## title is needed

Contributor suggested change: remove this heading.

### Continual RAG Improvement
**Build production-ready retrieval systems**

<div align="center">
<img src="/docs/book/.gitbook/assets/rag_zenml_home.png" width="800" alt="RAG Pipeline">
</div>

ZenML tracks document ingestion, embedding versions, and query patterns. Implement feedback loops (sketched after this list) to:
- Fix your RAG logic based on production logs
- Automatically re-ingest updated documents
- A/B test different embedding models
- Monitor retrieval quality metrics
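
A minimal sketch of what such a feedback loop could look like as a ZenML pipeline; `fetch_production_logs`, `compute_hit_rate`, `compute_mrr`, and `reingest_documents` are hypothetical helpers standing in for your own logging and ingestion code:

```python
from zenml import pipeline, step

@step
def evaluate_retrieval(index_id: str) -> dict:
    # Hypothetical helpers: replay logged production queries against the
    # index and compute retrieval quality metrics (e.g. hit rate, MRR).
    logs = fetch_production_logs(index_id)
    return {"hit_rate": compute_hit_rate(logs), "mrr": compute_mrr(logs)}

@step
def maybe_reingest(metrics: dict) -> None:
    # Re-ingest updated documents when retrieval quality drops below a threshold.
    if metrics["hit_rate"] < 0.8:
        reingest_documents(source="https://www.zenml.io/")

@pipeline
def rag_feedback_pipeline(index_id: str):
    metrics = evaluate_retrieval(index_id)
    maybe_reingest(metrics)
```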

### Reproducible Model Fine-Tuning
**Confidence in model updates**

<div align="center">
<img src="/docs/book/.gitbook/assets/finetune_zenml_home.png" width="800" alt="Finetuning Pipeline">
</div>

Maintain full lineage of SLM/LLM training runs (a pipeline sketch follows this list):
- Version training data and hyperparameters
- Track performance across iterations
- Automatically promote validated models
- Roll back to previous versions if needed
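
For illustration, a hedged sketch of one such run, reusing the `Model` pattern shown earlier; `finetune_slm`, `evaluate_checkpoint`, and `BASELINE_SCORE` are hypothetical stand-ins rather than ZenML APIs:

```python
from zenml import Model, pipeline, step

@step(model=Model(name="rag_llm_finetune", tags=["candidate"]))
def finetune(dataset_version: str, learning_rate: float = 2e-5) -> str:
    # Hypothetical helper: fine-tune a small language model and return a
    # checkpoint URI; the data version and hyperparameters travel with the run.
    return finetune_slm(dataset_version, learning_rate=learning_rate)

@step
def validate(checkpoint_uri: str) -> bool:
    # Hypothetical evaluation gate: only promote candidates that beat the
    # current baseline on a held-out evaluation set.
    return evaluate_checkpoint(checkpoint_uri) > BASELINE_SCORE

@pipeline
def finetuning_pipeline(dataset_version: str):
    checkpoint = finetune(dataset_version)
    validate(checkpoint)
```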

### Purpose built for machine learning with integrations to your favorite tools

While ZenML brings a lot of value out of the box, it also integrates into your existing tooling and infrastructure without you having to be locked in.
@@ -211,6 +245,14 @@ def train_and_deploy(training_df: pd.DataFrame) -> bento.Bento

![Exploring ZenML Integrations](/docs/book/.gitbook/assets/readme_integrations.gif)

## 🔄 Your LLM Framework Isn't Enough for Production

While tools like LangChain and LlamaIndex help you **build** LLM workflows, ZenML helps you **productionize** them by adding the capabilities below (a promotion sketch follows the list):

✅ **Artifact Tracking** - Every vector store index, fine-tuned model, and evaluation result versioned automatically
✅ **Pipeline History** - See exactly what code/data produced each version of your RAG system
✅ **Stage Promotion** - Move validated pipelines from staging → production with one click
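
A rough sketch of the stage-promotion idea; the `set_stage` call below is an assumption about the model-promotion API, so check the current ZenML docs before relying on it:

```python
from zenml import Model

# Assumed API: promote the validated "staging" version of the rag_llm
# model to production once its evaluation checks have passed.
model = Model(name="rag_llm", version="staging")
model.set_stage(stage="production", force=True)
```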

## 🖼️ Learning

The best way to learn about ZenML is the [docs](https://docs.zenml.io/). We recommend beginning with the [Starter Guide](https://docs.zenml.io/user-guide/starter-guide) to get up and running quickly.
@@ -295,13 +337,23 @@ Or, if you
prefer, [open an issue](https://github.com/zenml-io/zenml/issues/new/choose) on
our GitHub repo.

## ⭐️ Show Your Support
## 📚 LLM-focused Learning Resources

If you find ZenML helpful or interesting, please consider giving us a star on GitHub. Your support helps promote the project and lets others know that it's worth checking out.
1. [LLM Complete Guide - Full RAG Pipeline](https://github.com/zenml-io/zenml-projects/tree/main/llm-complete-guide) - Document ingestion, embedding management, and query serving
2. [LLM Fine-Tuning Pipeline](https://github.com/zenml-io/zenml-projects/tree/main/llm-finetuning) - From data prep to deployed model
3. [LLM Agents Example](https://github.com/zenml-io/zenml-projects/tree/main/llm-agents) - Track conversation quality and tool usage

Thank you for your support! 🌟
## 🤖 AI-Friendly Documentation with llms.txt

[![Star this project](https://img.shields.io/github/stars/zenml-io/zenml?style=social)](https://github.com/zenml-io/zenml/stargazers)
ZenML implements the llms.txt standard to make our documentation more accessible to AI assistants and LLMs. Our implementation includes:

- Base documentation at [zenml.io/llms.txt](https://zenml.io/llms.txt) with core user guides
- Specialized files for different documentation aspects:
- [Component guides](https://zenml.io/component-guide.txt) for integration details
- [How-to guides](https://zenml.io/how-to-guides.txt) for practical implementations
- [Complete documentation corpus](https://zenml.io/llms-full.txt) for comprehensive access

This structured approach helps AI tools better understand and utilize ZenML's documentation, enabling more accurate code suggestions and improved documentation search.
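
As a small illustration, an assistant or script could pull one of these files and use it as context; the `ask_llm` helper below is hypothetical and stands in for whatever LLM client you use:

```python
import urllib.request

def load_llms_txt(url: str = "https://zenml.io/llms.txt") -> str:
    # Fetch the machine-readable documentation index published by ZenML.
    with urllib.request.urlopen(url) as response:
        return response.read().decode("utf-8")

docs_context = load_llms_txt()
# Hypothetical helper: feed the corpus to an LLM alongside a question.
answer = ask_llm(context=docs_context, question="How do I define a ZenML step?")
```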

## 📜 License

Binary file added docs/book/.gitbook/assets/finetune_zenml_home.png
Binary file added docs/book/.gitbook/assets/rag_zenml_home.png