2. Open-Source LLMs

In the previous module, we used OpenAI through the OpenAI API. It's a very convenient way to use an LLM, but you have to pay for usage, and you don't have control over the model you're using.

In this module, we'll look at using open-source LLMs instead.

2.1 Open-Source LLMs - Introduction

  • Open-Source LLMs
  • Replacing the LLM box in the RAG flow

2.2 Using a GPU in Saturn Cloud

  • Registering in Saturn Cloud
  • Configuring secrets and git
  • Creating an instance with a GPU

Installing the required libraries:

pip install -U transformers accelerate bitsandbytes sentencepiece
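
Once the instance is up, it's worth checking that the notebook can actually see the GPU. A minimal sketch, assuming PyTorch is available in the Saturn Cloud image:

import torch

# confirm that CUDA is available and which GPU we got
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))
else:
    print("No GPU detected")

Running nvidia-smi in a terminal shows the same information.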

Links:

Google Colab as an alternative:

2.3 FLAN-T5

Before downloading any models, point the Hugging Face cache to a directory with enough disk space:

import os

# set the Hugging Face cache directory (HF_HOME) before importing transformers,
# so downloaded model weights don't fill up the home directory
os.environ['HF_HOME'] = '/run/cache/'

Links:

Explanation of Parameters:

  • max_length: Set this to a higher value if you want longer responses. For example, max_length=300.
  • num_beams: Increasing this can lead to more thorough exploration of possible sequences. Typical values are between 5 and 10.
  • do_sample: Set this to True to use sampling methods. This can produce more diverse responses.
  • temperature: Lowering this value makes the model more confident and deterministic, while higher values increase diversity. Typical values range from 0.7 to 1.5.
  • top_k and top_p: These parameters control how the sampling pool is built. top_k keeps only the k most likely tokens (top-k sampling), while top_p keeps the smallest set of tokens whose cumulative probability exceeds p (nucleus sampling). Adjust these based on the desired level of randomness.
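
To see how these parameters fit together, here is a minimal generation sketch with google/flan-t5-xl (the model name and the parameter values are only examples):

from transformers import T5Tokenizer, T5ForConditionalGeneration

# the weights are downloaded to HF_HOME on the first run
tokenizer = T5Tokenizer.from_pretrained("google/flan-t5-xl")
model = T5ForConditionalGeneration.from_pretrained("google/flan-t5-xl", device_map="auto")

prompt = "Answer the question: what is the course about?"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to(model.device)

# the generation parameters described above; adjust them to taste
outputs = model.generate(
    input_ids,
    max_length=100,
    num_beams=5,
    do_sample=True,
    temperature=0.7,
    top_k=50,
    top_p=0.95,
)

print(tokenizer.decode(outputs[0], skip_special_tokens=True))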

2.4 Phi 3 Mini
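
A minimal sketch of running Phi-3 Mini with transformers (the exact model revision and generation settings may differ from the notebook):

from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

model_name = "microsoft/Phi-3-mini-128k-instruct"

# torch_dtype="auto" loads the weights in half precision on the GPU;
# trust_remote_code is required by older transformers versions
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    device_map="auto",
    torch_dtype="auto",
    trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)

pipe = pipeline("text-generation", model=model, tokenizer=tokenizer)

prompt = "Question: what is the course about?\nAnswer:"
output = pipe(prompt, max_new_tokens=200, do_sample=False, return_full_text=False)

print(output[0]["generated_text"])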

Links:

2.5 Mistral-7B and HuggingFace Hub Authentication

ChatGPT instructions for serving
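
Downloading Mistral-7B requires authenticating with the Hugging Face Hub first. A minimal sketch (the environment variable name HF_TOKEN is only an example; use the secret you configured in Saturn Cloud):

import os

from huggingface_hub import login
from transformers import AutoModelForCausalLM, AutoTokenizer

# authenticate with the access token stored as a secret / environment variable
login(token=os.environ["HF_TOKEN"])

# the download only works after authentication
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1")
model = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1", device_map="auto")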

Links:

2.6 Other models

Where to find them:

  • Leaderboards
  • Google
  • ChatGPT

Links:

2.7 Ollama - Running LLMs on a CPU

For Linux:

curl -fsSL https://ollama.com/install.sh | sh

ollama start
ollama pull phi3
ollama run phi3

Prompt example

Connecting to it with the OpenAI API:

from openai import OpenAI

client = OpenAI(
    base_url='http://localhost:11434/v1/',
    api_key='ollama',
)
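
The client is then used the same way as with OpenAI; the model name is the one pulled with Ollama (the question below is just an example):

response = client.chat.completions.create(
    model='phi3',
    messages=[{"role": "user", "content": "What is the course about?"}],
)

print(response.choices[0].message.content)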

Docker

docker run -it \
    -v ollama:/root/.ollama \
    -p 11434:11434 \
    --name ollama \
    ollama/ollama

Pulling the model inside the container

docker exec -it ollama bash
ollama pull phi3

2.8 Ollama & Phi3 + Elastic in Docker-Compose

  • Creating a Docker-Compose file (a minimal sketch is shown after this list)

  • Re-running the module 1 notebook

  • Notebook: rag-intro.ipynb
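
A minimal docker-compose.yaml sketch with both services (the image versions, container names, and ports are only examples):

services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:8.4.3
    container_name: elasticsearch
    environment:
      - discovery.type=single-node
      - xpack.security.enabled=false
    ports:
      - "9200:9200"

  ollama:
    image: ollama/ollama
    container_name: ollama
    volumes:
      - ollama:/root/.ollama
    ports:
      - "11434:11434"

volumes:
  ollama: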

2.9 UI for RAG

  • Putting it in Streamlit
  • Code

If you want to learn more about Streamlit, you can use this material from our projects-of-the-week repository.
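
A minimal Streamlit sketch of the UI (the rag function here is a placeholder standing in for the RAG flow from the notebook):

import streamlit as st

def rag(query: str) -> str:
    # placeholder: replace this with the search + LLM pipeline
    # from the rag-intro notebook
    return f"(answer for: {query})"

st.title("RAG with an open-source LLM")

question = st.text_input("Enter your question")

if st.button("Ask"):
    with st.spinner("Generating the answer..."):
        answer = rag(question)
    st.write(answer)

Run it with streamlit run app.py (the file name is an example).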

Homework

See here

Notes