diff --git a/README.md b/README.md
index e82d8ca..ff964b2 100755
--- a/README.md
+++ b/README.md
@@ -42,7 +42,7 @@ With compact __custom pre-trained transformer models__, this can run anywhere fr
 ## Features
 
-- __Tiny Embeddings__: 64-dimensional [Matryoshaka][matryoshka]-style embeddings for extremely fast [search][usearch].
+- __Tiny Embeddings__: 64-dimensional [Matryoshka][matryoshka]-style embeddings for extremely fast [search][usearch].
 - __Throughput__: Thanks to the small size, the inference speed is [2-4x faster](#speed) than competitors.
 - __Portable__: Models come with native ONNX support, making them easy to deploy on any platform.
 - __Quantization Aware__: Down-cast embeddings from `f32` to `i8` without losing much recall.
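
For context, the `f32` → `i8` down-cast mentioned in the last bullet can be sketched as below. This is a minimal illustration of symmetric integer quantization for unit-normalized embeddings, not the library's actual API; the function names and the fixed scale of 127 are assumptions for the example.

```python
import numpy as np

def quantize_i8(embedding: np.ndarray) -> np.ndarray:
    """Down-cast a unit-normalized f32 embedding to i8 by scaling into [-127, 127]."""
    scaled = np.clip(np.round(embedding * 127.0), -127, 127)
    return scaled.astype(np.int8)

def cosine_i8(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity on i8 vectors, accumulated in int32 to avoid overflow."""
    a32, b32 = a.astype(np.int32), b.astype(np.int32)
    return float(np.dot(a32, b32)) / (np.linalg.norm(a32) * np.linalg.norm(b32))

# Two random 64-dimensional unit vectors, matching the embedding size above.
rng = np.random.default_rng(0)
x = rng.normal(size=64).astype(np.float32)
x /= np.linalg.norm(x)
y = rng.normal(size=64).astype(np.float32)
y /= np.linalg.norm(y)

exact = float(np.dot(x, y))               # cosine similarity in f32
approx = cosine_i8(quantize_i8(x), quantize_i8(y))  # cosine similarity in i8
print(abs(exact - approx))                # quantization error stays small
```

Because the i8 similarity tracks the f32 one closely, search recall degrades only marginally while the index shrinks 4x in memory.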