From 08895fdfff500985c8f990bc69df67982d040baa Mon Sep 17 00:00:00 2001
From: Ella Charlaix <80481427+echarlaix@users.noreply.github.com>
Date: Fri, 15 Mar 2024 23:05:24 +0100
Subject: [PATCH] Fix table (#1906)

---
 intel-fast-embedding.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/intel-fast-embedding.md b/intel-fast-embedding.md
index 693f9b8b8d..f8efd58cb5 100644
--- a/intel-fast-embedding.md
+++ b/intel-fast-embedding.md
@@ -149,10 +149,10 @@ Quantizing the models' weights to a lower precision introduces accuracy loss, as
 The table below shows the average accuracy (on multiple datasets) of each task type (MAP for Reranking, NDCG@10 for Retrieval), where `int8` is our quantized model and `fp32` is the original model (results taken from the official MTEB leaderboard). The quantized models show less than 1% error rate compared to the original model in the Reranking task and less than 1.55% in the Retrieval task.
 
 <table>
-<tr><th> Model  </th><th>   Reranking </th><th> Retrieval </th></tr>
+<tr><th>  </th><th>   Reranking </th><th> Retrieval </th></tr>
 <tr><td>
 
-| precision |
+|           |
 | --------- |
 | BGE-small |
 | BGE-base  |
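
The "error rate" quoted in the patched paragraph is the relative drop of the quantized `int8` score against the `fp32` baseline. A minimal sketch of that computation, using placeholder scores rather than the actual MTEB leaderboard numbers:

```python
# Relative accuracy drop of an int8 quantized model vs. its fp32 baseline.
# The scores below are placeholders for illustration, not real MTEB results.
def relative_drop(fp32_score: float, int8_score: float) -> float:
    """Return the quantized model's accuracy degradation as a percentage of the fp32 score."""
    return (fp32_score - int8_score) / fp32_score * 100

# e.g. a hypothetical Reranking MAP of 0.584 (fp32) vs. 0.581 (int8)
print(f"{relative_drop(0.584, 0.581):.2f}% drop")  # ~0.51%, within the <1% bound cited
```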