From 3250e5a99effb30c9f9058a1b7ffc69fcf76b568 Mon Sep 17 00:00:00 2001
From: Christian Koep
Date: Wed, 14 Aug 2024 08:00:34 +0200
Subject: [PATCH] Fix typos to make the copy & paste experience more smooth

---
 docs/demos/podman-ai-lab-to-rhoai/podman-ai-lab-to-rhoai.md | 4 ++--
 docs/tools-and-applications/apache-spark/apache-spark.md    | 2 +-
 2 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/docs/demos/podman-ai-lab-to-rhoai/podman-ai-lab-to-rhoai.md b/docs/demos/podman-ai-lab-to-rhoai/podman-ai-lab-to-rhoai.md
index 03e69ca2..ca749465 100644
--- a/docs/demos/podman-ai-lab-to-rhoai/podman-ai-lab-to-rhoai.md
+++ b/docs/demos/podman-ai-lab-to-rhoai/podman-ai-lab-to-rhoai.md
@@ -188,7 +188,7 @@ Now that the Elasticsearch operator has been deployed and an instance created, w
 
     **Note:** *If you have insufficient resources to start a medium container size then stop the workbench and change the workbench to start as a small container size.*
 
-7. Upload or import the ./notebooks/Langchain-ElasticSearchVector-ingest.ipynb notebook to your workbench.
+7. Upload or import the ./notebooks/Langchain-ElasticSearchVector-Ingest.ipynb notebook to your workbench.
 
     ![RHOAI Workbench Notebook](img/vector_ingest_notebook_upload.png)
 
@@ -469,4 +469,4 @@ We'll now update the chat recipe application that we created from Podman AI Lab
 
   *The notebook to ingest data into Elasticsearch and the Langchain code added to the chatbot app.*
 - [AI Accelerator](https://github.com/redhat-ai-services/ai-accelerator){:target="_blank"}
-  *The code used to deploy the various components on OpenShift and OpenShift AI.*
\ No newline at end of file
+  *The code used to deploy the various components on OpenShift and OpenShift AI.*
diff --git a/docs/tools-and-applications/apache-spark/apache-spark.md b/docs/tools-and-applications/apache-spark/apache-spark.md
index 970d36af..d3b5079d 100644
--- a/docs/tools-and-applications/apache-spark/apache-spark.md
+++ b/docs/tools-and-applications/apache-spark/apache-spark.md
@@ -19,5 +19,5 @@ It includes:
 - instructions to deploy the Spark history server to gather your processing logs,
 - instructions to deploy the Spark on Kubernetes operator,
 - Prometheus and Grafana configuration to monitor your data processing and operator in real time,
-- instructions to work without the operator, from a Notebook or a Terminal, inside or outside the OpenShit Cluster,
+- instructions to work without the operator, from a Notebook or a Terminal, inside or outside the OpenShift Cluster,
 - various examples to test your installation and the different methods.