From af9e60025850f9dc93f66a1166955bb7dabf2ec9 Mon Sep 17 00:00:00 2001
From: Bagatur
Date: Mon, 18 Dec 2023 12:12:34 -0500
Subject: [PATCH] rm platform page

---
 docs/docs/integrations/platforms/nvidia.mdx | 45 ---------------------
 1 file changed, 45 deletions(-)
 delete mode 100644 docs/docs/integrations/platforms/nvidia.mdx

diff --git a/docs/docs/integrations/platforms/nvidia.mdx b/docs/docs/integrations/platforms/nvidia.mdx
deleted file mode 100644
index a519d322a041f..0000000000000
--- a/docs/docs/integrations/platforms/nvidia.mdx
+++ /dev/null
@@ -1,45 +0,0 @@
-# NVIDIA
-
-All functionality related to [NVIDIA AI Foundation Endpoints](https://www.nvidia.com/en-us/ai-data-science/foundation-models/).
-
-## NVIDIA AI Foundation Models and Endpoints
-
-> [NVIDIA AI Foundation Endpoints](https://www.nvidia.com/en-us/ai-data-science/foundation-models/) give users easy access to NVIDIA hosted API endpoints for NVIDIA AI Foundation Models like Mixtral 8x7B, Llama 2, Stable Diffusion, etc. These models, hosted on the [NVIDIA NGC catalog](https://catalog.ngc.nvidia.com/ai-foundation-models), are optimized, tested, and hosted on the NVIDIA AI platform, making them fast and easy to evaluate, further customize, and seamlessly run at peak performance on any accelerated stack.
->
-> With [NVIDIA AI Foundation Endpoints](https://www.nvidia.com/en-us/ai-data-science/foundation-models/), you can get quick results from a fully accelerated stack running on [NVIDIA DGX Cloud](https://www.nvidia.com/en-us/data-center/dgx-cloud/). Once customized, these models can be deployed anywhere with enterprise-grade security, stability, and support using [NVIDIA AI Enterprise](https://www.nvidia.com/en-us/data-center/products/ai-enterprise/).
->
-> These models can be easily accessed via the [`langchain-nvidia-ai-endpoints`](https://pypi.org/project/langchain-nvidia-ai-endpoints/) package, as shown below.
-
-### Installation
-
-```bash
-pip install -U langchain-nvidia-ai-endpoints
-```
-
-### Setup and Authentication
-
-- Create a free [NVIDIA NGC](https://catalog.ngc.nvidia.com/) account.
-- Navigate to `Catalog > AI Foundation Models > (Model with API endpoint)`.
-- Select `API` and generate the key `NVIDIA_API_KEY`.
-
-```bash
-export NVIDIA_API_KEY=nvapi-XXXXXXXXXXXXXXXXXXXXXXXXXX
-```
-
-```python
-from langchain_nvidia_ai_endpoints import ChatNVIDIA
-
-llm = ChatNVIDIA(model="mixtral_8x7b")
-result = llm.invoke("Write a ballad about LangChain.")
-print(result.content)
-```
-
-### Using NVIDIA AI Foundation Endpoints
-
-A selection of NVIDIA AI Foundation models are supported directly in LangChain with familiar APIs.
-
-The active models which are supported can be found [in NGC](https://catalog.ngc.nvidia.com/ai-foundation-models).
-
-**The following may be useful examples to help you get started:**
-- **[`ChatNVIDIA` Model](/docs/integrations/chat/nvidia_ai_endpoints).**
-- **[`NVIDIAEmbeddings` Model for RAG Workflows](/docs/integrations/text_embedding/nvidia_ai_endpoints).**
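For reference, the quickstart that the removed page documented can be sketched as a single guarded function. This is a minimal sketch, not part of the patch: the `quickstart` helper is hypothetical, and the `ChatNVIDIA(model="mixtral_8x7b")` call and `NVIDIA_API_KEY` variable are taken verbatim from the deleted page, so they reflect the package as of this revision and may have changed since. The guards let the snippet degrade gracefully when the optional package or the API key is absent.

```python
import os

# The integration package is optional; the removed docs page installed it with
# `pip install -U langchain-nvidia-ai-endpoints`.
try:
    from langchain_nvidia_ai_endpoints import ChatNVIDIA
except ImportError:
    ChatNVIDIA = None


def quickstart(prompt: str) -> str:
    """Hypothetical helper mirroring the deleted page's quickstart snippet."""
    if ChatNVIDIA is None:
        return "langchain-nvidia-ai-endpoints is not installed"
    if "NVIDIA_API_KEY" not in os.environ:
        return "NVIDIA_API_KEY is not set"
    # Model name as documented on the removed page.
    llm = ChatNVIDIA(model="mixtral_8x7b")
    return llm.invoke(prompt).content


print(quickstart("Write a ballad about LangChain."))
```

Without the package or key, the function reports what is missing instead of raising, which makes the snippet safe to run while evaluating this removal.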