diff --git a/components/home/Hero.tsx b/components/home/Hero.tsx
index 22b65a8ad..d6f379db0 100644
--- a/components/home/Hero.tsx
+++ b/components/home/Hero.tsx
@@ -76,10 +76,10 @@ export function Hero() {
{/*
*/}
);
diff --git a/pages/blog/2024-04-introducing-langfuse-2.0.mdx b/pages/blog/2024-04-introducing-langfuse-2.0.mdx
index cbc6289ea..66d6cc807 100644
--- a/pages/blog/2024-04-introducing-langfuse-2.0.mdx
+++ b/pages/blog/2024-04-introducing-langfuse-2.0.mdx
@@ -34,10 +34,10 @@ What are LLMs without Prompts? We have doubled down on helping developers manage
And since this week, you can also iterate on prompts directly within the new [**LLM playground**](/docs/playground). It is a neat and easy way to keep tinkering with the data you observe in Langfuse without leaving the interface.
Our most sophisticated users **experiment and iterate on their entire LLM pipelines**. This might start with the playground for some workflows, but our newly revamped [**Datasets**](/docs/datasets) feature supports this on an ongoing basis with a structured evaluation process. Datasets are reference sets of inputs and expected outputs. You can upload your own datasets via the API and SDKs, or continuously add to them as you recognize new edge cases in production traces. You can then run experiments on these datasets and attach scores and evaluations to them.
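The Datasets workflow above can be sketched as follows. This is a minimal, hypothetical example: the items and validation helper are illustrative only, and the actual upload step (via the Langfuse Python SDK's `create_dataset_item`, which requires API credentials) is shown only in a comment rather than executed.

```python
# Sketch of the Datasets workflow: a dataset is a reference set of inputs
# and expected outputs. Items are assembled locally here; uploading each
# one would use the Langfuse Python SDK, e.g.
#   langfuse.create_dataset_item(dataset_name=..., input=..., expected_output=...)
# which needs API credentials and is therefore not called in this sketch.

# Hypothetical edge cases collected from production traces.
items = [
    {
        "input": {"question": "What is Langfuse?"},
        "expected_output": "An open source LLM engineering platform.",
    },
    {
        "input": {"question": "Does Langfuse have a playground?"},
        "expected_output": "Yes, an LLM playground for iterating on prompts.",
    },
]

def validate_item(item: dict) -> dict:
    """Check that a dataset item has the fields an experiment run needs."""
    missing = {"input", "expected_output"} - item.keys()
    if missing:
        raise ValueError(f"dataset item missing fields: {missing}")
    return item

# Validate every item before it would be uploaded.
validated = [validate_item(i) for i in items]
print(len(validated))
```

Keeping `input` and `expected_output` as the two required fields mirrors the reference-set idea in the text: experiments run the pipeline on each `input` and score the result against the `expected_output`.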
diff --git a/pages/guides/videos/introducing-langfuse-2.0.mdx b/pages/guides/videos/introducing-langfuse-2.0.mdx
index 1d3c1ccc7..8f903b7fd 100644
--- a/pages/guides/videos/introducing-langfuse-2.0.mdx
+++ b/pages/guides/videos/introducing-langfuse-2.0.mdx
@@ -1,13 +1,13 @@
---
title: Introducing Langfuse 2.0
-description: Reintroducing Langfuse as we grew from observability into an LLM Engineering Platform
+description: Reintroducing Langfuse – the Open Source LLM Engineering Platform
---
# Introducing Langfuse 2.0