From 3fd2bd1314d8c130723ebf38491059391feae350 Mon Sep 17 00:00:00 2001
From: Eugene Yurtsev
Date: Mon, 19 Feb 2024 09:56:19 -0500
Subject: [PATCH] x

---
 .../model_io/chat/custom_chat_model.ipynb | 478 ++++++------------
 1 file changed, 148 insertions(+), 330 deletions(-)

diff --git a/docs/docs/modules/model_io/chat/custom_chat_model.ipynb b/docs/docs/modules/model_io/chat/custom_chat_model.ipynb
index 3a4292da436d0..b91ca4cfd4333 100644
--- a/docs/docs/modules/model_io/chat/custom_chat_model.ipynb
+++ b/docs/docs/modules/model_io/chat/custom_chat_model.ipynb
@@ -1,39 +1,28 @@
 {
  "cells": [
   {
+   "attachments": {},
    "cell_type": "markdown",
-   "id": "4adf4238-b6e5-43ca-ba5e-7fd2884dfaf1",
+   "id": "e3da9a3f-f583-4ba6-994e-0e8c1158f5eb",
    "metadata": {},
    "source": [
     "# Custom Chat Model\n",
     "\n",
-    "In this guide, we'll learn how to create a custom chat model using `LangChain` abstractions.\n",
-    "\n",
-    "Wrapping your LLM with the standard `ChatModel` interface allow use your LLM in existing `LangChain` programs with minimal code modifications!\n",
+    "In this guide, we'll learn how to create a custom chat model using LangChain abstractions.\n",
     "\n",
-    "As an bonus, you're LLM will automatically become a `LangChain Runnable` and will benefit from some optimizations out of the box (e.g., batch via a threadpool), async support (e.g., `ainvoke`, `abatch`), `astream_events` API etc.\n",
+    "Wrapping your LLM with the standard `ChatModel` interface allows you to use your LLM in existing LangChain programs with minimal code modifications!\n",
     "\n",
-    "You have 2 options to provide an implementation:\n",
+    "As a bonus, your LLM will automatically become a LangChain `Runnable` and will benefit from some optimizations out of the box (e.g., batch via a threadpool), async support, the `astream_events` API, etc.\n",
     "\n",
-    "1. Using `SimpleChatModel`: Primarily meant for prototyping. This won't allow you to implement a bunch of features, but might be good enough for your use case.\n",
-    "2. Using `BaseChatModel`: Best suited for a full implementation that supports all features (e.g., streaming, function calling)."
-   ]
-  },
-  {
-   "attachments": {},
-   "cell_type": "markdown",
-   "id": "120f1b46-d6ee-4e7a-84ba-afe5f68baa80",
-   "metadata": {},
-   "source": [
     "## Inputs and outputs\n",
     "\n",
-    "Before we start with implementing a chat model, we need to talk about messages which are the inputs to and the outputs from chat models.\n",
+    "First, we need to talk about messages, which are the inputs and outputs of chat models.\n",
     "\n",
     "### Messages\n",
     "\n",
-    "Chat models take messages as inputs and return chat messages as outputs. \n",
+    "Chat models take messages as inputs and return a message as output. 
\n", "\n", - "Here are the messages:\n", + "LangChain has a few built-in message types:\n", "\n", "- `SystemMessage`: Used for priming AI behavior, usually passed in as the first of a sequence of input messages.\n", "- `HumanMessage`: Represents a message from a person interacting with the chat model.\n", @@ -49,7 +38,7 @@ }, { "cell_type": "code", - "execution_count": 20, + "execution_count": 1, "id": "c5046e6a-8b09-4a99-b6e6-7a605aac5738", "metadata": {}, "outputs": [], @@ -76,7 +65,7 @@ }, { "cell_type": "code", - "execution_count": 21, + "execution_count": 2, "id": "d4656e9d-bfa1-4703-8f79-762fe6421294", "metadata": {}, "outputs": [], @@ -100,7 +89,7 @@ }, { "cell_type": "code", - "execution_count": 22, + "execution_count": 3, "id": "9c15c299-6f8a-49cf-a072-09924fd44396", "metadata": {}, "outputs": [ @@ -110,7 +99,7 @@ "AIMessageChunk(content='Hello World!')" ] }, - "execution_count": 22, + "execution_count": 3, "metadata": {}, "output_type": "execute_result" } @@ -135,7 +124,6 @@ "You need to implement the following:\n", "\n", "* The method `_call` - Use to generate a chat result from a prompt.\n", - "* The property `_llm_type` - Used to uniquely identify the type of the model. Used for logging.\n", "\n", "In addition, you have the option to specify the following:\n", "\n", @@ -143,293 +131,44 @@ "\n", "Optional:\n", "\n", - "* `_stream` - Use to implement streaming.\n", - "* `_agenerate` - Use to implement a native async method\n", - "* `_astream` - Use to implement async version of `_stream`\n", - "\n", - ":::{.callout-caution}\n", - "\n", - "If you're implementing streaming and want streaming to work in async, you must provide an async implementation of streaming (`_astream`).\n", - "\n", - "If you want to replicate the logic in the sync variant of `_stream` you can use the trick below by running it in a separate executor.\n", - ":::\n" + "* `_stream` - Use to implement streaming.\n" ] }, { "cell_type": "markdown", - "id": "6c61c165-975a-4cad-aa7c-5e5d9e2dedd2", - "metadata": {}, - "source": [ - "### Implementation" - ] - }, - { - "cell_type": "code", - "execution_count": 23, - "id": "b318e80c-ca39-4743-bbb9-0d6bd30afeef", + "id": "bbfebea1", "metadata": {}, - "outputs": [], "source": [ - "from typing import Any, Dict, List, Optional\n", - "\n", - "from langchain_core.callbacks.manager import CallbackManagerForLLMRun\n", - "from langchain_core.language_models import SimpleChatModel\n", - "from langchain_core.messages import BaseMessage, HumanMessage\n", - "\n", - "\n", - "class CustomChatModel(SimpleChatModel):\n", - " \"\"\"A custom chat model that echoes the first `n` characters of the input.\n", - "\n", - " When contributing an implementation to LangChain, carefully document\n", - " the model including the initialization parameters, include\n", - " an example of how to initialize the model and include any relevant\n", - " links to the underlying models documentation or API.\n", - "\n", - " Example:\n", - "\n", - " .. code-block:: python\n", - "\n", - " model = CustomChatModel(n=2)\n", - " result = model.invoke([HumanMessage(content=\"hello\")])\n", - " result = model.batch([[HumanMessage(content=\"hello\")],\n", - " [HumanMessage(content=\"world\")]])\n", - " \"\"\"\n", + "## Base Chat Model\n", "\n", - " n: int\n", - " \"\"\"The number of characters from the last message of the prompt to be echoed.\"\"\"\n", + "Let's implement a chat model that echoes back the first `n` characetrs of the last message in the prompt!\n", "\n", - " # This is the core logic. 
It accepts a prompt composed of a list of messages\n", - " # and returns a response which is a string.\n", - " def _call(\n", - " self,\n", - " messages: List[BaseMessage],\n", - " stop: Optional[List[str]] = None,\n", - " run_manager: Optional[CallbackManagerForLLMRun] = None,\n", - " **kwargs: Any,\n", - " ) -> str:\n", - " \"\"\"Implementation of the chat model logic.\n", + "To do so, we will inherit from `BaseChatModel` and we'll need to implement the following methods/properties:\n", "\n", - " Args:\n", - " messages: the prompt composed of a list of messages.\n", - " stop: a list of strings on which the model should stop generating.\n", - " If generation stops due to a stop token, the stop token itself\n", - " SHOULD BE INCLUDED as part of the output. This is not enforced\n", - " across models right now, but it's a good practice to follow since\n", - " it makes it much easier to parse the output of the model\n", - " downstream and understand why generation stopped.\n", - " run_manager: A run manager that contains callbacks for the LLM.\n", - " on_chat_start and on_llm_end callbacks are automatically called\n", - " by wrapping code, so you don't need to invoke them when\n", - " using SimpleChatModel.\n", - " Please refer to the callbacks section in the documentation for\n", - " more details about callbacks.\n", - " \"\"\"\n", - " return messages[-1].content[: self.n]\n", + "In addition, you have the option to specify the following:\n", "\n", - " @property\n", - " def _llm_type(self) -> str:\n", - " \"\"\"Get the type of language model used by this chat model.\n", + "To do so inherit from `BaseChatModel` which is a lower level class and implement the methods:\n", "\n", - " This property must return a string. It's used for logging purposes, and\n", - " will be accessible from callbacks. It should identify the type\n", - " of the model uniquely.\n", - " \"\"\"\n", - " return \"echoing_chat_model\"\n", + "* `_generate` - Use to generate a chat result from a prompt\n", + "* The property `_llm_type` - Used to uniquely identify the type of the model. Used for logging.\n", "\n", - " # **Optional** override the identifying parameters to include additional\n", - " # information about the model parameterization for logging purposes\n", - " @property\n", - " def _identifying_params(self) -> Dict[str, Any]:\n", - " \"\"\"Return a dictionary of identifying parameters.\"\"\"\n", - " return {\"n\": self.n}" - ] - }, - { - "cell_type": "markdown", - "id": "714dede0", - "metadata": {}, - "source": [ - "### Using it\n", + "Optional:\n", "\n", - "The chat model will implement the standard `Runnable` interface of LangChain which many of the LangChain abstractions support!" - ] - }, - { - "cell_type": "code", - "execution_count": 24, - "id": "10e5ece6", - "metadata": {}, - "outputs": [], - "source": [ - "model = CustomChatModel(n=7)" - ] - }, - { - "cell_type": "code", - "execution_count": 25, - "id": "d92f328f-8094-4d6e-8db1-a929c05368a7", - "metadata": {}, - "outputs": [ - { - "data": { - "text/plain": [ - "AIMessage(content='hello w')" - ] - }, - "execution_count": 25, - "metadata": {}, - "output_type": "execute_result" - } - ], - "source": [ - "model.invoke([HumanMessage(content=\"hello world!\")])" - ] - }, - { - "cell_type": "markdown", - "id": "245f2ecf-aaa7-457a-8fed-828faac1dea4", - "metadata": {}, - "source": [ - "In addition, it supoprts the standard type conversions associated with chat models to make it more convenient to work with it! 
\n", + "* `_stream` - Use to implement streaming.\n", + "* `_agenerate` - Use to implement a native async method.\n", + "* `_astream` - Use to implement async version of `_stream`.\n", + "* The property `_identifying_params` - Represent model parameterization for logging purposes.\n", "\n", - "An input of a string, gets interpreted as a `Human Message`!" - ] - }, - { - "cell_type": "code", - "execution_count": 7, - "id": "36ab5064-ba04-4ea1-9eb4-8703fa8f1474", - "metadata": {}, - "outputs": [ - { - "data": { - "text/plain": [ - "AIMessage(content='Hello W')" - ] - }, - "execution_count": 7, - "metadata": {}, - "output_type": "execute_result" - } - ], - "source": [ - "model.invoke(\"Hello World!\")" - ] - }, - { - "cell_type": "markdown", - "id": "fb2e7b9d-d373-46ee-b118-53e8fed47f70", - "metadata": {}, - "source": [ - "And look async works as well ๐Ÿ˜˜" - ] - }, - { - "cell_type": "code", - "execution_count": 8, - "id": "1b832be1-19cc-4164-8f0d-80f686696256", - "metadata": {}, - "outputs": [ - { - "data": { - "text/plain": [ - "AIMessage(content='Hello W')" - ] - }, - "execution_count": 8, - "metadata": {}, - "output_type": "execute_result" - } - ], - "source": [ - "await model.ainvoke(\"Hello World!\")" - ] - }, - { - "cell_type": "markdown", - "id": "cb1e6c75-33d4-446d-86be-d490bdfec18b", - "metadata": {}, - "source": [ - "By adding a LangChain interface to your LLM, you get a bunch of optimizations out of the box!\n", "\n", - "Batch by default executes in a threadpool which means that operations that block on IO (e.g., an API call to another service), will automatically be run in parallel!\n", + ":::{.callout-caution}\n", "\n", - ":::{.callout-note}\n", - "The default `batch` implementation will not help to speed up CPU / GPU bound operations, unless those operations are delegated to lower level code that releases the GIL.\n", + "Currently, to get async streaming to work (via `astream`), you must provide an implementation of `_astream`.\n", "\n", - "If this doesn't mean much to you, then just ignore it. 
๐Ÿ˜ตโ€๐Ÿ’ซ\n", - ":::" - ] - }, - { - "cell_type": "code", - "execution_count": 9, - "id": "8cd49199", - "metadata": {}, - "outputs": [ - { - "data": { - "text/plain": [ - "[AIMessage(content='hello w'), AIMessage(content='goodbye')]" - ] - }, - "execution_count": 9, - "metadata": {}, - "output_type": "execute_result" - } - ], - "source": [ - "model.batch([\"hello world\", \"goodbye moon\"])" - ] - }, - { - "cell_type": "markdown", - "id": "659a16e5-4ce8-4ea8-bbb7-e0fcc3777391", - "metadata": {}, - "source": [ - ":::{.callout-important}\n", - "The `astream` interface will work as well, but it won't actually stream since we haven't provided a streaming implementation!\n", + "By default if `_astream` is not provided, then async streaming falls back on `_agenerate` which does not support\n", + "token by token streaming.\n", ":::" ] }, - { - "cell_type": "code", - "execution_count": 10, - "id": "6e898fa8-bd5d-4470-8a6a-6001f60997f6", - "metadata": {}, - "outputs": [ - { - "name": "stdout", - "output_type": "stream", - "text": [ - "content='hello w'\n" - ] - } - ], - "source": [ - "async for chunk in model.astream(\"hello world\"):\n", - " print(chunk)" - ] - }, - { - "cell_type": "markdown", - "id": "bbfebea1", - "metadata": {}, - "source": [ - "## Base Chat Model\n", - "\n", - "You may want to add additional features like function calling or running the model in JSON mode or streaming.\n", - "\n", - "To do so inherit from `BaseChatModel` which is a lower level class and implement the methods:\n", - "\n", - "* `_generate` - Use to generate a chat result from a prompt\n", - "\n", - "Optional:\n", - "\n", - "* `_stream` - Use to implement streaming\n", - "* `_agenerate` - Use to implement a native async method " - ] - }, { "cell_type": "markdown", "id": "8e7047bd-c235-46f6-85e1-d6d7e0868eb1", @@ -440,14 +179,14 @@ }, { "cell_type": "code", - "execution_count": null, + "execution_count": 4, "id": "25ba32e5-5a6d-49f4-bb68-911827b84d61", "metadata": {}, "outputs": [], "source": [ "from typing import Any, AsyncIterator, Dict, Iterator, List, Optional\n", "\n", - "from langchain_core.callbacks.manager import (\n", + "from langchain_core.callbacks import (\n", " AsyncCallbackManagerForLLMRun,\n", " CallbackManagerForLLMRun,\n", ")\n", @@ -579,19 +318,32 @@ " return {\"n\": self.n}" ] }, + { + "cell_type": "markdown", + "id": "b3c3d030-8d8b-4891-962d-a2d39b331883", + "metadata": {}, + "source": [ + ":::{.callout-tip}\n", + "The `_astream` implementation uses `run_in_executor` to launch the sync `_stream` in a separate thread.\n", + "\n", + "You can use this trick if you want to reuse the `_stream` implementation, but if you're able to implement code\n", + "that's natively async that's a better solution since that code will run with less overhead.\n", + ":::" + ] + }, { "cell_type": "markdown", "id": "1e9af284-f2d3-44e2-ac6a-09b73d89ada3", "metadata": {}, "source": [ - "### Using it\n", + "### Let's test it ๐Ÿงช\n", "\n", "The chat model will implement the standard `Runnable` interface of LangChain which many of the LangChain abstractions support!" 
] }, { "cell_type": "code", - "execution_count": 12, + "execution_count": 5, "id": "34bf2d48-556a-48be-aee7-496fb02332f3", "metadata": {}, "outputs": [], @@ -601,7 +353,7 @@ }, { "cell_type": "code", - "execution_count": 13, + "execution_count": 6, "id": "27689f30-dcd2-466b-ba9d-f60b7d434110", "metadata": {}, "outputs": [ @@ -611,7 +363,7 @@ "AIMessage(content='Meo')" ] }, - "execution_count": 13, + "execution_count": 6, "metadata": {}, "output_type": "execute_result" } @@ -628,7 +380,7 @@ }, { "cell_type": "code", - "execution_count": 14, + "execution_count": 7, "id": "406436df-31bf-466b-9c3d-39db9d6b6407", "metadata": {}, "outputs": [ @@ -638,7 +390,7 @@ "AIMessage(content='hel')" ] }, - "execution_count": 14, + "execution_count": 7, "metadata": {}, "output_type": "execute_result" } @@ -649,7 +401,7 @@ }, { "cell_type": "code", - "execution_count": 15, + "execution_count": 8, "id": "a72ffa46-6004-41ef-bbe4-56fa17a029e2", "metadata": {}, "outputs": [ @@ -659,7 +411,7 @@ "[AIMessage(content='hel'), AIMessage(content='goo')]" ] }, - "execution_count": 15, + "execution_count": 8, "metadata": {}, "output_type": "execute_result" } @@ -670,7 +422,7 @@ }, { "cell_type": "code", - "execution_count": 16, + "execution_count": 9, "id": "3633be2c-2ea0-42f9-a72f-3b5240690b55", "metadata": {}, "outputs": [ @@ -697,7 +449,7 @@ }, { "cell_type": "code", - "execution_count": 17, + "execution_count": 10, "id": "b7d73995-eeab-48c6-a7d8-32c98ba29fc2", "metadata": {}, "outputs": [ @@ -724,7 +476,7 @@ }, { "cell_type": "code", - "execution_count": 18, + "execution_count": 11, "id": "17840eba-8ff4-4e73-8e4f-85f16eb1c9d0", "metadata": {}, "outputs": [ @@ -732,11 +484,11 @@ "name": "stdout", "output_type": "stream", "text": [ - "{'event': 'on_chat_model_start', 'run_id': 'fe9141d7-78a0-4a5e-a267-44194f3a1d39', 'name': 'CustomChatModelAdvanced', 'tags': [], 'metadata': {}, 'data': {'input': 'cat'}}\n", - "{'event': 'on_chat_model_stream', 'run_id': 'fe9141d7-78a0-4a5e-a267-44194f3a1d39', 'tags': [], 'metadata': {}, 'name': 'CustomChatModelAdvanced', 'data': {'chunk': AIMessageChunk(content='c')}}\n", - "{'event': 'on_chat_model_stream', 'run_id': 'fe9141d7-78a0-4a5e-a267-44194f3a1d39', 'tags': [], 'metadata': {}, 'name': 'CustomChatModelAdvanced', 'data': {'chunk': AIMessageChunk(content='a')}}\n", - "{'event': 'on_chat_model_stream', 'run_id': 'fe9141d7-78a0-4a5e-a267-44194f3a1d39', 'tags': [], 'metadata': {}, 'name': 'CustomChatModelAdvanced', 'data': {'chunk': AIMessageChunk(content='t')}}\n", - "{'event': 'on_chat_model_end', 'name': 'CustomChatModelAdvanced', 'run_id': 'fe9141d7-78a0-4a5e-a267-44194f3a1d39', 'tags': [], 'metadata': {}, 'data': {'output': AIMessageChunk(content='cat')}}\n" + "{'event': 'on_chat_model_start', 'run_id': 'e03c0b21-521f-4cb4-a837-02fed65cf1cf', 'name': 'CustomChatModelAdvanced', 'tags': [], 'metadata': {}, 'data': {'input': 'cat'}}\n", + "{'event': 'on_chat_model_stream', 'run_id': 'e03c0b21-521f-4cb4-a837-02fed65cf1cf', 'tags': [], 'metadata': {}, 'name': 'CustomChatModelAdvanced', 'data': {'chunk': AIMessageChunk(content='c')}}\n", + "{'event': 'on_chat_model_stream', 'run_id': 'e03c0b21-521f-4cb4-a837-02fed65cf1cf', 'tags': [], 'metadata': {}, 'name': 'CustomChatModelAdvanced', 'data': {'chunk': AIMessageChunk(content='a')}}\n", + "{'event': 'on_chat_model_stream', 'run_id': 'e03c0b21-521f-4cb4-a837-02fed65cf1cf', 'tags': [], 'metadata': {}, 'name': 'CustomChatModelAdvanced', 'data': {'chunk': AIMessageChunk(content='t')}}\n", + "{'event': 'on_chat_model_end', 'name': 
'CustomChatModelAdvanced', 'run_id': 'e03c0b21-521f-4cb4-a837-02fed65cf1cf', 'tags': [], 'metadata': {}, 'data': {'output': AIMessageChunk(content='cat')}}\n" ] }, { @@ -755,18 +507,92 @@ }, { "cell_type": "markdown", - "id": "44ee559b-b1da-4851-8c97-420ab394aff9", + "id": "42f9553f-7d8c-4277-aeb4-d80d77839d90", "metadata": {}, "source": [ - "## Contributing\n", + "## Identifying Params\n", "\n", - "We would very much appreciate contributions for the chat model.\n", + "LangChain has a callback system which allows implementing loggers to monitor the behavior of LLM applications.\n", "\n", - "Here's a checklist to help you validate your implementation of the chat model.\n", + "Remember the `_identifying_params` property from earlier? \n", "\n", - "### Checklist\n", + "It's passed to the callback system and is accessible for user specified loggers.\n", "\n", - "An overview of things to verify to make sure that the implementation is done correctly.\n", + "Below we'll implement a handler with just a single `on_chat_model_start` event to see where `_identifying_params` appears." + ] + }, + { + "cell_type": "code", + "execution_count": 12, + "id": "cc7e6b5f-711b-48aa-9ebe-92a13e230c37", + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "---\n", + "On chat model start.\n", + "{'invocation_params': {'n': 3, '_type': 'echoing-chat-model-advanced', 'stop': ['woof']}, 'options': {'stop': ['woof']}, 'name': None, 'batch_size': 1}\n" + ] + }, + { + "data": { + "text/plain": [ + "AIMessage(content='meo')" + ] + }, + "execution_count": 12, + "metadata": {}, + "output_type": "execute_result" + } + ], + "source": [ + "from typing import Union\n", + "from uuid import UUID\n", + "\n", + "from langchain_core.callbacks import AsyncCallbackHandler\n", + "from langchain_core.outputs import (\n", + " ChatGenerationChunk,\n", + " ChatResult,\n", + " GenerationChunk,\n", + " LLMResult,\n", + ")\n", + "\n", + "\n", + "class SampleCallbackHandler(AsyncCallbackHandler):\n", + " \"\"\"Async callback handler that handles callbacks from LangChain.\"\"\"\n", + "\n", + " async def on_chat_model_start(\n", + " self,\n", + " serialized: Dict[str, Any],\n", + " messages: List[List[BaseMessage]],\n", + " *,\n", + " run_id: UUID,\n", + " parent_run_id: Optional[UUID] = None,\n", + " tags: Optional[List[str]] = None,\n", + " metadata: Optional[Dict[str, Any]] = None,\n", + " **kwargs: Any,\n", + " ) -> Any:\n", + " \"\"\"Run when a chat model starts running.\"\"\"\n", + " print(\"---\")\n", + " print(\"On chat model start.\")\n", + " print(kwargs)\n", + "\n", + "\n", + "model.invoke(\"meow\", stop=[\"woof\"], config={\"callbacks\": [SampleCallbackHandler()]})" + ] + }, + { + "cell_type": "markdown", + "id": "44ee559b-b1da-4851-8c97-420ab394aff9", + "metadata": {}, + "source": [ + "## Contributing\n", + "\n", + "We appreciate all chat model integration contributions. \n", + "\n", + "Here's a checklist to help make sure your contribution gets added to LangChain:\n", "\n", "Documentation:\n", "\n", @@ -775,31 +601,23 @@ "\n", "Tests:\n", "\n", - "* Add unit or integration tests to the overridden methods. Verify that `invoke`, `ainvoke`, `batch`, `stream` work if you've over-ridden the corresponding code.\n", + "* [ ] Add unit or integration tests to the overridden methods. 
Verify that `invoke`, `ainvoke`, `batch`, `stream` work if you've overridden the corresponding code.\n",
     "\n",
     "Streaming (if you're implementing it):\n",
     "\n",
-    "* Provided an async implementation via `_astream`\n",
-    "* Make sure to invoke the `on_llm_new_token` callback\n",
-    "* `on_llm_new_token` is invoked BEFORE yielding the chunk\n",
+    "* [ ] Provide an async implementation via `_astream`\n",
+    "* [ ] Make sure to invoke the `on_llm_new_token` callback\n",
+    "* [ ] `on_llm_new_token` is invoked BEFORE yielding the chunk\n",
     "\n",
     "Stop Token Behavior:\n",
     "\n",
-    "* Stop token should be respected\n",
-    "* Stop token should be INCLUDED as part of the response\n",
+    "* [ ] Stop token should be respected\n",
+    "* [ ] Stop token should be INCLUDED as part of the response\n",
     "\n",
     "Secret API Keys:\n",
     "\n",
-    "* If your model connects to an API it will likely accept API keys as part of its initialization. Use Pydantic's `SecretStr` type for secrets, so they don't get accidentally printed out when folks print the model."
+    "* [ ] If your model connects to an API, it will likely accept API keys as part of its initialization. Use Pydantic's `SecretStr` type for secrets, so they don't get accidentally printed out when folks print the model."
    ]
-  },
-  {
-   "cell_type": "code",
-   "execution_count": null,
-   "id": "4f3bbb70-5568-4bfe-810c-ad83fdf76dac",
-   "metadata": {},
-   "outputs": [],
-   "source": []
   }
  ],
  "metadata": {