docs[minor]: Update mistral LLM doc #6338

Merged
merged 2 commits on Aug 2, 2024
194 changes: 171 additions & 23 deletions docs/core_docs/docs/integrations/llms/mistral.ipynb
@@ -1,18 +1,67 @@
{
"cells": [
{
"cell_type": "raw",
"id": "67db2992",
"metadata": {},
"source": [
"---\n",
"sidebar_label: MistralAI\n",
"---"
]
},
{
"cell_type": "markdown",
"id": "9597802c",
"metadata": {},
"source": [
"# MistralAI\n",
"\n",
"```{=mdx}\n",
"\n",
":::tip\n",
"Want to run Mistral's models locally? Check out our [Ollama integration](/docs/integrations/chat/ollama).\n",
":::\n",
"\n",
":::caution\n",
"You are currently on a page documenting the use of Cohere models as [text completion models](/docs/concepts/#llms). Many popular models available on Mistral are [chat completion models](/docs/concepts/#chat-models).\n",
"\n",
"You may be looking for [this page instead](/docs/integrations/chat/mistral/).\n",
":::\n",
"\n",
"```\n",
"\n",
"Here's how you can initialize an `MistralAI` LLM instance:\n",
"This will help you get started with MistralAI completion models (LLMs) using LangChain. For detailed documentation on `MistralAI` features and configuration options, please refer to the [API reference](https://api.js.langchain.com/classes/langchain_mistralai.MistralAI.html).\n",
"\n",
"## Overview\n",
"### Integration details\n",
"\n",
"| Class | Package | Local | Serializable | PY support | Package downloads | Package latest |\n",
"| :--- | :--- | :---: | :---: | :---: | :---: | :---: |\n",
"| [MistralAI](https://api.js.langchain.com/classes/langchain_mistralai.MistralAI.html) | [@langchain/mistralai](https://api.js.langchain.com/modules/langchain_mistralai.html) | ❌ | ✅ | ❌ | ![NPM - Downloads](https://img.shields.io/npm/dm/@langchain/mistralai?style=flat-square&label=%20&) | ![NPM - Version](https://img.shields.io/npm/v/@langchain/mistralai?style=flat-square&label=%20&) |\n",
"\n",
"## Setup\n",
"\n",
"To access MistralAI models you'll need to create a MistralAI account, get an API key, and install the `@langchain/mistralai` integration package.\n",
"\n",
"### Credentials\n",
"\n",
"Head to [console.mistral.ai](https://console.mistral.ai/) to sign up to MistralAI and generate an API key. Once you've done this set the `MISTRAL_API_KEY` environment variable:\n",
"\n",
"```bash\n",
"export MISTRAL_API_KEY=\"your-api-key\"\n",
"```\n",
"\n",
"If you want to get automated tracing of your model calls you can also set your [LangSmith](https://docs.smith.langchain.com/) API key by uncommenting below:\n",
"\n",
"```bash\n",
"# export LANGCHAIN_TRACING_V2=\"true\"\n",
"# export LANGCHAIN_API_KEY=\"your-api-key\"\n",
"```\n",
"\n",
"### Installation\n",
"\n",
"The LangChain MistralAI integration lives in the `@langchain/mistralai` package:\n",
"\n",
"```{=mdx}\n",
"import IntegrationInstallTooltip from \"@mdx_components/integration_install_tooltip.mdx\";\n",
@@ -23,49 +72,132 @@
"<Npm2Yarn>\n",
" @langchain/mistralai\n",
"</Npm2Yarn>\n",
"```\n"
"\n",
"```"
]
},
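If you'd rather not rely on environment variables, the constructor also accepts the key directly via its `apiKey` option (in Node.js it otherwise falls back to `process.env.MISTRAL_API_KEY`). A minimal sketch; the variable name and placeholder key are ours:

```typescript
import { MistralAI } from "@langchain/mistralai";

// Sketch: pass the key explicitly instead of relying on MISTRAL_API_KEY.
// In Node.js the constructor otherwise falls back to that variable.
const llmWithExplicitKey = new MistralAI({
  model: "codestral-latest",
  apiKey: "your-api-key", // hypothetical placeholder -- use your real key
});
```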
{
"cell_type": "markdown",
"id": "0a760037",
"metadata": {},
"source": [
"## Instantiation\n",
"\n",
"Now we can instantiate our model object and generate chat completions:"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "a0562a13",
"metadata": {},
"outputs": [],
"source": [
"import { MistralAI } from \"@langchain/mistralai\"\n",
"\n",
"const llm = new MistralAI({\n",
" model: \"codestral-latest\",\n",
" temperature: 0,\n",
" maxTokens: undefined,\n",
" maxRetries: 2,\n",
" // other params...\n",
"})"
]
},
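The instantiated model is a standard LangChain runnable, so in addition to `invoke` (shown below) it supports token streaming via `.stream()`. A minimal sketch, assuming the default runnable streaming behavior; the prompt text is ours:

```typescript
// Sketch: stream the completion instead of waiting for the full string.
// For LLMs each yielded chunk is a string fragment.
const stream = await llm.stream("Mistral's Codestral model is ");

for await (const chunk of stream) {
  process.stdout.write(chunk);
}
```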
{
"cell_type": "markdown",
"id": "0ee90032",
"metadata": {},
"source": [
"## Invocation"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "035dea0f",
"metadata": {
"tags": []
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
" has developed Mistral 7B, a large language model (LLM) that is open-source and available for commercial use. Mistral 7B is a 7 billion parameter model that is trained on a diverse and high-quality dataset, and it has been fine-tuned to perform well on a variety of tasks, including text generation, question answering, and code interpretation.\n",
"\n",
"MistralAI has made Mistral 7B available under a permissive license, allowing anyone to use the model for commercial purposes without having to pay any fees. This has made Mistral 7B a popular choice for businesses and organizations that want to leverage the power of large language models without incurring high costs.\n",
"\n",
"Mistral 7B has been trained on a diverse and high-quality dataset, which has enabled it to perform well on a variety of tasks. It has been fine-tuned to generate coherent and contextually relevant text, and it has been shown to be capable of answering complex questions and interpreting code.\n",
"\n",
"Mistral 7B is also a highly efficient model, capable of processing text at a fast pace. This makes it well-suited for applications that require real-time responses, such as chatbots and virtual assistants.\n",
"\n",
"Overall, Mistral 7B is a powerful and versatile large language model that is open-source and available for commercial use. Its ability to perform well on a variety of tasks, its efficiency, and its permissive license make it a popular choice for businesses and organizations that want to leverage the power of large language models.\n"
]
}
],
"source": [
"const inputText = \"MistralAI is an AI company that \"\n",
"\n",
"const completion = await llm.invoke(inputText)\n",
"completion"
]
},
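Runnables also expose `.batch()` for sending several prompts at once. A short sketch, assuming the default batching behavior of LangChain runnables; the second prompt is our own illustration:

```typescript
// Sketch: run several prompts concurrently; results come back in the
// same order as the inputs, one completion string per prompt.
const completions = await llm.batch([
  "MistralAI is an AI company that ",
  "Codestral is a code generation model that ",
]);

console.log(completions.length); // 2
```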
{
"cell_type": "markdown",
"id": "add38532",
"metadata": {},
"source": [
"## Chaining\n",
"\n",
"We can [chain](/docs/how_to/sequence/) our completion model with a prompt template like so:"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "078e9db2",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\n",
"console.log('hello world');\n",
"```\n",
"This will output 'hello world' to the console.\n"
"I love programming.\n",
"\n",
"Ich liebe Programmieren.\n",
"\n",
"In German, the phrase \"I love programming\" is translated as \"Ich liebe Programmieren.\" The word \"programming\" is translated to \"Programmieren,\" and \"I love\" is translated to \"Ich liebe.\"\n"
]
}
],
"source": [
"import { MistralAI } from \"@langchain/mistralai\";\n",
"import { PromptTemplate } from \"@langchain/core/prompts\"\n",
"\n",
"const model = new MistralAI({\n",
" model: \"codestral-latest\", // Defaults to \"codestral-latest\" if no model provided.\n",
" temperature: 0,\n",
" apiKey: \"YOUR-API-KEY\", // In Node.js defaults to process.env.MISTRAL_API_KEY\n",
"});\n",
"const res = await model.invoke(\n",
" \"You can print 'hello world' to the console in javascript like this:\\n```javascript\"\n",
");\n",
"console.log(res);"
"const prompt = PromptTemplate.fromTemplate(\"How to say {input} in {output_language}:\\n\")\n",
"\n",
"const chain = prompt.pipe(llm);\n",
"await chain.invoke(\n",
" {\n",
" output_language: \"German\",\n",
" input: \"I love programming.\",\n",
" }\n",
")"
]
},
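For intuition, the chain above is roughly equivalent to formatting the template into a plain string yourself and passing that string to the model. A sketch of that equivalence; the variable names are ours:

```typescript
// Sketch: what `prompt.pipe(llm)` does under the hood -- format the
// template into a string, then invoke the model with that string.
const formattedPrompt = await prompt.format({
  output_language: "German",
  input: "I love programming.",
});
// formattedPrompt === "How to say I love programming. in German:\n"

const manualCompletion = await llm.invoke(formattedPrompt);
console.log(manualCompletion);
```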
{
"cell_type": "markdown",
"id": "e99eef30",
"metadata": {},
"source": [
"Since the Mistral LLM is a completions model, they also allow you to insert a `suffix` to the prompt. Suffixes can be passed via the call options when invoking a model like so:"
]
},
{
"cell_type": "code",
"execution_count": 4,
"execution_count": 5,
"id": "ec67551d",
"metadata": {},
"outputs": [
{
@@ -79,16 +211,17 @@
}
],
"source": [
"const res = await model.invoke(\n",
"const suffixResponse = await llm.invoke(\n",
" \"You can print 'hello world' to the console in javascript like this:\\n```javascript\", {\n",
" suffix: \"```\"\n",
" }\n",
");\n",
"console.log(res);"
"console.log(suffixResponse);"
]
},
{
"cell_type": "markdown",
"id": "b9265343",
"metadata": {},
"source": [
"As seen in the first example, the model generated the requested `console.log('hello world')` code snippet, but also included extra unwanted text. By adding a suffix, we can constrain the model to only complete the prompt up to the suffix (in this case, three backticks). This allows us to easily parse the completion and extract only the desired response without the suffix using a custom output parser."
@@ -97,6 +230,7 @@
{
"cell_type": "code",
"execution_count": 1,
"id": "e2d34dc8",
"metadata": {},
"outputs": [
{
@@ -112,10 +246,9 @@
"source": [
"import { MistralAI } from \"@langchain/mistralai\";\n",
"\n",
"const model = new MistralAI({\n",
"const llmForFillInCompletion = new MistralAI({\n",
" model: \"codestral-latest\",\n",
" temperature: 0,\n",
" apiKey: \"YOUR-API-KEY\",\n",
"});\n",
"\n",
"const suffix = \"```\";\n",
@@ -127,13 +260,23 @@
" throw new Error(\"Input does not contain suffix.\")\n",
"};\n",
"\n",
"const res = await model.invoke(\n",
"const resWithParser = await llmForFillInCompletion.invoke(\n",
" \"You can print 'hello world' to the console in javascript like this:\\n```javascript\", {\n",
" suffix,\n",
" }\n",
");\n",
"\n",
"console.log(customOutputParser(res));"
"console.log(customOutputParser(resWithParser));"
]
},
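If you use this pattern often, the suffix and the parser can be folded into a single runnable. A sketch, assuming `.bind()` accepts the `suffix` call option the same way `invoke` does and that `.pipe()` coerces a plain function into a runnable (standard LangChain runnable behavior); the chain name is ours:

```typescript
// Sketch: pin the suffix with .bind() and append the parser with .pipe(),
// so invoking the chain returns the already-parsed code snippet.
const fillInChain = llmForFillInCompletion
  .bind({ suffix: "```" })
  .pipe(customOutputParser);

const parsedSnippet = await fillInChain.invoke(
  "You can print 'hello world' to the console in javascript like this:\n```javascript"
);
console.log(parsedSnippet);
```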
{
"cell_type": "markdown",
"id": "e9bdfcef",
"metadata": {},
"source": [
"## API reference\n",
"\n",
"For detailed documentation of all MistralAI features and configurations head to the API reference: https://api.js.langchain.com/classes/langchain_mistralai.MistralAI.html"
]
}
],
@@ -153,8 +296,13 @@
"mimetype": "text/typescript",
"name": "typescript",
"version": "3.7.2"
},
"vscode": {
"interpreter": {
"hash": "e971737741ff4ec9aff7dc6155a1060a59a8a6d52c757dbbe66bf8ee389494b1"
}
}
},
"nbformat": 4,
"nbformat_minor": 2
"nbformat_minor": 5
}