From 5ad24af2ee88dde3177e7e16d885e9cda632079d Mon Sep 17 00:00:00 2001
From: Letong Han <106566639+letonghan@users.noreply.github.com>
Date: Thu, 16 Jan 2025 19:50:59 +0800
Subject: [PATCH] Fix Vectorstores Path Issue of Refactor (#1399)

Fix the vectorstores path issue caused by the refactor in PR opea-project/GenAIComps#1159.
Update the docker image name and Dockerfile path in docker_images_list.md.

Signed-off-by: letonghan <letong.han@intel.com>
---
 docker_images_list.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docker_images_list.md b/docker_images_list.md
index 9698f2167..e487fef94 100644
--- a/docker_images_list.md
+++ b/docker_images_list.md
@@ -78,13 +78,13 @@ Take ChatQnA for example. ChatQnA is a chatbot application service based on the
 | [opea/lvm-llama-vision]()                                                                                           | [Link](https://github.com/opea-project/GenAIComps/blob/main/comps/lvms/src/integrations/dependency/llama-vision/Dockerfile)      | The docker image exposed the OPEA microservice running LLaMA Vision as a large visual model (LVM) server for GenAI application use                                                                                     |
 | [opea/lvm-predictionguard](https://hub.docker.com/r/opea/lvm-predictionguard)                                       | [Link](https://github.com/opea-project/GenAIComps/blob/main/comps/lvms/src/integrations/dependency/predictionguard/Dockerfile)   | The docker image exposed the OPEA microservice running PredictionGuard as a large visual model (LVM) server for GenAI application use                                                                                  |
 | [opea/nginx](https://hub.docker.com/r/opea/nginx)                                                                   | [Link](https://github.com/opea-project/GenAIComps/blob/main/comps/third_parties/nginx/src/Dockerfile)                            | The docker image exposed the OPEA nginx microservice for GenAI application use                                                                                                                                         |
+| [opea/pathway](https://hub.docker.com/r/opea/vectorstore-pathway)                                                   | [Link](https://github.com/opea-project/GenAIComps/blob/main/comps/third_parties/pathway/src/Dockerfile)                          | The docker image exposed the OPEA Vectorstores microservice with Pathway for GenAI application use                                                                                                                     |
 | [opea/promptregistry-mongo-server](https://hub.docker.com/r/opea/promptregistry-mongo-server)                       | [Link](https://github.com/opea-project/GenAIComps/blob/main/comps/prompt_registry/src/Dockerfile)                                | The docker image exposes the OPEA Prompt Registry microservices which based on MongoDB database, designed to store and retrieve user's preferred prompts                                                               |
 | [opea/reranking]()                                                                                                  | [Link](https://github.com/opea-project/GenAIComps/blob/main/comps/rerankings/src/Dockerfile)                                     | The docker image exposed the OPEA reranking microservice based on tei docker image for GenAI application use                                                                                                           |
 | [opea/retriever]()                                                                                                  | [Link](https://github.com/opea-project/GenAIComps/blob/main/comps/retrievers/src/Dockerfile)                                     | The docker image exposed the OPEA retrieval microservice based on milvus vectordb for GenAI application use                                                                                                            |
 | [opea/speecht5](https://hub.docker.com/r/opea/speecht5)                                                             | [Link](https://github.com/opea-project/GenAIComps/blob/main/comps/tts/src/integrations/dependency/speecht5/Dockerfile)           | The docker image exposed the OPEA SpeechT5 service for GenAI application use                                                                                                                                           |
 | [opea/speecht5-gaudi](https://hub.docker.com/r/opea/speecht5-gaudi)                                                 | [Link](https://github.com/opea-project/GenAIComps/blob/main/comps/tts/src/integrations/dependency/speecht5/Dockerfile.intel_hpu) | The docker image exposed the OPEA SpeechT5 service on Gaudi2 for GenAI application use                                                                                                                                 |
 | [opea/tei-gaudi](https://hub.docker.com/r/opea/tei-gaudi/tags)                                                      | [Link](https://github.com/huggingface/tei-gaudi/blob/habana-main/Dockerfile-hpu)                                                 | The docker image powered by HuggingFace Text Embedding Inference (TEI) on Gaudi2 for deploying and serving Embedding Models                                                                                            |
-| [opea/vectorstore-pathway](https://hub.docker.com/r/opea/vectorstore-pathway)                                       | [Link](https://github.com/opea-project/GenAIComps/blob/main/comps/vectorstores/pathway/Dockerfile)                               | The docker image exposed the OPEA Vectorstores microservice with Pathway for GenAI application use                                                                                                                     |
 | [opea/lvm-video-llama](https://hub.docker.com/r/opea/lvm-video-llama)                                               | [Link](https://github.com/opea-project/GenAIComps/blob/main/comps/lvms/src/integrations/dependency/video-llama/Dockerfile)       | The docker image exposed the OPEA microservice running Video-Llama as a large visual model (LVM) server for GenAI application use                                                                                      |
 | [opea/tts](https://hub.docker.com/r/opea/tts)                                                                       | [Link](https://github.com/opea-project/GenAIComps/blob/main/comps/tts/src/Dockerfile)                                            | The docker image exposed the OPEA Text-To-Speech microservice for GenAI application use                                                                                                                                |
 | [opea/vllm](https://hub.docker.com/r/opea/vllm)                                                                     | [Link](https://github.com/vllm-project/vllm/blob/main/Dockerfile.cpu)                                                            | The docker image powered by vllm-project for deploying and serving vllm Models                                                                                                                                         |