
# Jina "Hello, World!" 👋🌍

Just starting out? Try Jina's "Hello, World!" with `jina hello --help`.

## 👗 Fashion Image Search

A simple image neural search demo for Fashion-MNIST. No extra dependencies are needed; simply run:

```shell
jina hello fashion  # more options in --help
```

...or, even easier for Docker users, with no install required:

```shell
docker run -v "$(pwd)/j:/j" jinaai/jina hello fashion --workdir /j && open j/hello-world.html
```

(Replace `open` with `xdg-open` on Linux.)
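The `open`/`xdg-open` substitution can also be handled automatically. A small sketch (the `OPEN` variable name is just a convention, not part of Jina):

```shell
# Pick the right "open" command for the current platform
# (macOS ships `open`; most Linux desktops ship `xdg-open`).
case "$(uname -s)" in
  Darwin) OPEN=open ;;
  Linux)  OPEN=xdg-open ;;
  *)      OPEN=echo ;;  # unknown platform: just print the path
esac
echo "viewer: $OPEN"
# usage: "$OPEN" j/hello-world.html
```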
Click here to see console output

hello world console output

This downloads the Fashion-MNIST training and test datasets and tells Jina to index 60,000 images from the training set. It then randomly samples images from the test set as queries and asks Jina to retrieve relevant results. The whole process takes about 1 minute.
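If you want the generated `hello-world.html` to stick around between runs, the local command accepts the same `--workdir` flag the Docker one-liner above passes. A sketch, assuming `jina` is on your `PATH` (the directory path is an arbitrary choice):

```shell
# Re-run the demo with a persistent working directory so the generated
# hello-world.html survives between runs. --workdir is the same flag used
# in the Docker example; the path below is arbitrary.
WORKDIR="${WORKDIR:-$HOME/jina-fashion-demo}"
mkdir -p "$WORKDIR"
if command -v jina >/dev/null 2>&1; then
  jina hello fashion --workdir "$WORKDIR"
else
  echo "jina is not on PATH; install it first (pip install jina)"
fi
```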

## 🤖 Covid-19 Chatbot

For NLP engineers, we provide a simple chatbot demo for answering Covid-19 questions. To run it:

```shell
pip install --pre "jina[chatbot]"

jina hello chatbot
```

This downloads the CovidQA dataset and tells Jina to index 418 question-answer pairs with MPNet. Indexing takes about 1 minute on CPU. Then it opens a web page where you can type in questions and ask Jina.
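Before launching, you can sanity-check that the optional extras actually installed. The package names below (`transformers`, `torch`) are assumptions about what `jina[chatbot]` pulls in for the MPNet encoder; check `pip show jina` for the authoritative list:

```shell
# Check that the chatbot dependencies are importable before launching.
# Package names are assumptions about what jina[chatbot] installs.
python - <<'PY'
import importlib.util

for pkg in ("jina", "transformers", "torch"):
    status = "ok" if importlib.util.find_spec(pkg) else "missing"
    print(f"{pkg}: {status}")
PY
```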





## 🪆 Multimodal Document Search

A multimodal document contains multiple data types; for example, a PDF document often contains both figures and text. Jina lets you build a multimodal search solution in just minutes. To run our minimal multimodal document search demo:

```shell
pip install --pre "jina[multimodal]"

jina hello multimodal
```

This downloads the people-image dataset and tells Jina to index 2,000 image-caption pairs with MobileNet and MPNet. Indexing takes about 3 minutes on CPU. Then it opens a web page where you can query multimodal documents. We have prepared a YouTube tutorial to walk you through this demo.