diff --git a/gabo_rag.ipynb b/gabo_rag.ipynb index c4ea57f..e3812e3 100644 --- a/gabo_rag.ipynb +++ b/gabo_rag.ipynb @@ -2,12 +2,14 @@ "cells": [ { "cell_type": "markdown", - "metadata": {}, + "metadata": { + "id": "rvxgds6SxekA" + }, "source": [ "# Gabo RAG | Daniel Felipe Montenegro\n", - "**'Gabo'** is a **RAG (Retrieval-Augmented Generation)** system designed to enhance the capabilities of **LLMs (Large Language Models)** such as **'Llama 3.1'** or **'Phi 3.5**'. This project honors Colombian author **Gabriel García Márquez** by marking the tenth anniversary of his death, creating a specialized assistant to answer questions about his work, and using new technologies to further reveal his literary legacy.\n", + "**'Gabo'** is a **RAG (Retrieval-Augmented Generation)** system designed to enhance the capabilities of **LLMs (Large Language Models)** such as **'Llama 3.2'** or **'Phi 3.5**'. This project honors Colombian author **Gabriel García Márquez** by marking the tenth anniversary of his death, creating a specialized assistant to answer questions about his work, and using new technologies to further reveal his literary legacy.\n", "\n", - "[**Python Notebook**](https://github.com/dafmontenegro/gabo-rag/blob/master/gabo_rag.ipynb) | [**Webpage**](https://dafmontenegro.com/gabo-rag/) | [**Repository**](https://github.com/dafmontenegro/gabo-rag)\n", + "[**Python Notebook**](https://github.com/dafmontenegro/gabo-rag/blob/master/gabo_rag.ipynb) | [**Webpage**](https://montenegrodanielfelipe.com/gabo-rag/) | [**Repository**](https://github.com/dafmontenegro/gabo-rag)\n", "\n", "- [1. Tools and Technologies](#1-tools-and-technologies)\n", "- [2. How to run Ollama in Google Colab?](#2-how-to-run-ollama-in-google-colab)\n", @@ -32,11 +34,13 @@ }, { "cell_type": "markdown", - "metadata": {}, + "metadata": { + "id": "oD1C9HDGxekF" + }, "source": [ "## 1. Tools and Technologies\n", "\n", - "- [**Ollama**](https://ollama.com/): Running models ([Llama 3.1](https://ollama.com/library/llama3.1) or [Phi 3.5](https://ollama.com/library/phi3.5)) and embeddings ([Nomic](https://ollama.com/library/nomic-embed-text))\n", + "- [**Ollama**](https://ollama.com/): Running models ([Llama 3.2](https://ollama.com/library/llama3.2) or [Phi 3.5](https://ollama.com/library/phi3.5)) and embeddings ([Nomic](https://ollama.com/library/nomic-embed-text))\n", "- [**LangChain**](https://python.langchain.com/docs/introduction/): Framework and web scraping tool\n", "- [**Chroma**](https://docs.trychroma.com/): Vector database\n", "\n", @@ -45,14 +49,18 @@ }, { "cell_type": "markdown", - "metadata": {}, + "metadata": { + "id": "FMv75HvtxekG" + }, "source": [ "## 2. How to run Ollama in Google Colab?" ] }, { "cell_type": "markdown", - "metadata": {}, + "metadata": { + "id": "WPUNiHyCxekH" + }, "source": [ "### 2.1 Ollama Installation\n", "For this, we simply go to the [Ollama downloads page](https://ollama.com/download/linux) and select **Linux**. 
The command is as follows" @@ -60,13 +68,9 @@ }, { "cell_type": "code", - "execution_count": 1, + "execution_count": null, "metadata": { - "colab": { - "base_uri": "https://localhost:8080/" - }, - "id": "Lwzxaz9WN8Sr", - "outputId": "3652cec4-0063-4038-ce16-521349ac35b9" + "id": "Lwzxaz9WN8Sr" }, "outputs": [], "source": [ @@ -75,7 +79,9 @@ }, { "cell_type": "markdown", - "metadata": {}, + "metadata": { + "id": "WFyDUt_nxekL" + }, "source": [ "### 2.2 Run 'ollama serve'\n", "If you run ollama serve, you will encounter the issue where you cannot execute subsequent cells and your script will remain stuck in that cell indefinitely. To resolve this, you simply need to run the following command:" @@ -94,7 +100,9 @@ }, { "cell_type": "markdown", - "metadata": {}, + "metadata": { + "id": "OK-OgAidxekM" + }, "source": [ "After running this command, it is advisable to wait a reasonable amount of time for it to execute before running the next command, so you can add something like:" ] @@ -113,30 +121,30 @@ }, { "cell_type": "markdown", - "metadata": {}, + "metadata": { + "id": "iCg-eX9axekM" + }, "source": [ "### 2.3 Run 'ollama pull '\n", - "For this project we will use [Phi-3.5-mini](https://ollama.com/library/phi3.5) the lightweight **Microsoft** model with high capabilities. This project is also extensible to [Llama 3.1](https://ollama.com/library/llama3.1), you would only have to pull that other model." + "For this project we will use [Llama 3.2](https://ollama.com/library/llama3.2) the most recent release of **Meta** and specifically the **3B parameters** version. This project is also extensible to [Phi-3.5-mini](https://ollama.com/library/phi3.5) (the lightweight **Microsoft** model with high capabilities); you would only have to pull that other model." ] }, { "cell_type": "code", "execution_count": null, "metadata": { - "colab": { - "base_uri": "https://localhost:8080/" - }, - "id": "OXavgAvmO4z1", - "outputId": "691a15cf-0ef1-4152-ef32-2a8b8e73466f" + "id": "OXavgAvmO4z1" }, "outputs": [], "source": [ - "!ollama pull phi3.5" + "!ollama pull llama3.2" ] }, { "cell_type": "markdown", - "metadata": {}, + "metadata": { + "id": "3gLJHKGCxekN" + }, "source": [ "## 3. Exploring LLMs\n", "Now that we have our LLM, it's time to test them with what will be our control question." @@ -156,7 +164,9 @@ }, { "cell_type": "markdown", - "metadata": {}, + "metadata": { + "id": "bMBRTRTGxekN" + }, "source": [ "> 'Gabo' will be designed to function in Spanish, as it was Gabriel García Márquez's native language and his literary work is also in this language.\n", "\n", @@ -167,12 +177,7 @@ "cell_type": "code", "execution_count": null, "metadata": { - "colab": { - "base_uri": "https://localhost:8080/", - "height": 72 - }, - "id": "6Fu72N5yPAOS", - "outputId": "79499be5-6f38-4c81-9433-c5fa464253da" + "id": "6Fu72N5yPAOS" }, "outputs": [], "source": [ @@ -189,7 +194,9 @@ }, { "cell_type": "markdown", - "metadata": {}, + "metadata": { + "id": "O-quS3HQxekN" + }, "source": [ "Before we can invoke the LLM, we need to install LangChain. 
[1]" ] @@ -198,11 +205,7 @@ "cell_type": "code", "execution_count": 7, "metadata": { - "colab": { - "base_uri": "https://localhost:8080/" - }, - "id": "T9jBMTndPEIp", - "outputId": "d76669e9-b2f6-46d5-daad-b3169b6be78f" + "id": "T9jBMTndPEIp" }, "outputs": [], "source": [ @@ -211,57 +214,99 @@ }, { "cell_type": "markdown", - "metadata": {}, + "source": [ + "and LangChain's support to Ollama" + ], + "metadata": { + "id": "ZxKmZgrl9Ok9" + } + }, + { + "cell_type": "code", + "source": [ + "!pip install -qU langchain-ollama" + ], + "metadata": { + "id": "vUrAvsSN3UKz" + }, + "execution_count": 8, + "outputs": [] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "elWsBonGxekN" + }, "source": [ "Now we create the model." ] }, { "cell_type": "code", - "execution_count": 8, + "execution_count": 9, "metadata": { "id": "_SDD9Fq6PGId" }, "outputs": [], "source": [ - "from langchain_community.llms import Ollama\n", + "from langchain_ollama import OllamaLLM\n", "\n", - "llm_phi = Ollama(model=\"phi3.5\")" + "llm_llama = OllamaLLM(model=\"llama3.2\")" ] }, { "cell_type": "markdown", - "metadata": {}, + "metadata": { + "id": "dia9tuMxxekO" + }, "source": [ - "Invoke Phi 3.5" + "Invoke Llama 3.2" ] }, { "cell_type": "code", - "execution_count": 9, + "execution_count": 10, "metadata": { "colab": { "base_uri": "https://localhost:8080/", - "height": 72 + "height": 87 }, "id": "Gp8wJRpjPJY2", - "outputId": "6cf13505-324c-410f-f2d7-8ed6df9e3b99" + "outputId": "2cd9e3af-e621-4dc4-b46f-377e05aafb5c" }, - "outputs": [], + "outputs": [ + { + "output_type": "execute_result", + "data": { + "text/plain": [ + "'No tengo información sobre un cuento llamado \"Algo muy grave va a suceder en este pueblo\" que incluya a una \"señora vieja\". Es posible que el cuento sea de autor desconocido o que no esté ampliamente conocido.\\n\\nSin embargo, puedo sugerirte algunas posibles opciones para encontrar la respuesta a tu pregunta:\\n\\n1. **Buscar en línea**: Puedes buscar el título del cuento en motores de búsqueda como Google para ver si se puede encontrar información sobre él.\\n2. **Consultar una base de datos de literatura**: Si conoces el autor o la fecha de publicación del cuento, puedes consultar bases de datos de literatura en línea, como Goodreads o Literary Maps, para ver si se puede encontrar información sobre él.\\n3. **Preguntar a un experto**: Si eres estudiante de literatura o tienes interés en el tema, puedes preguntar a un experto en la materia o buscar recursos educativos que puedan ayudarte a encontrar la respuesta a tu pregunta.\\n\\nSi tienes más información sobre el cuento, como el autor o la fecha de publicación, estaré encantado de ayudarte a encontrar la respuesta.'" + ], + "application/vnd.google.colaboratory.intrinsic+json": { + "type": "string" + } + }, + "metadata": {}, + "execution_count": 10 + } + ], "source": [ - "llm_phi.invoke(test_message)" + "llm_llama.invoke(test_message)" ] }, { "cell_type": "markdown", - "metadata": {}, + "metadata": { + "id": "9bzWsfsrxekO" + }, "source": [ "> At this stage, the model is not expected to be able to answer the question correctly, and they might even hallucinate when trying to give an answer. To solve this problem, we will start building our **RAG** in the next section." ] }, { "cell_type": "markdown", - "metadata": {}, + "metadata": { + "id": "hhBhALn_xekO" + }, "source": [ "## 4. 
Data Extraction and Preparation\n", "To collect the information that our **RAG** will use, we will perform **Web Scraping** of the section dedicated to [Gabriel Garcia Marquez](https://ciudadseva.com/autor/gabriel-garcia-marquez/) in the **Ciudad Seva web site**." @@ -269,7 +314,9 @@ }, { "cell_type": "markdown", - "metadata": {}, + "metadata": { + "id": "P7A-vRBNxekO" + }, "source": [ "### 4.1 Web Scraping and Chunking\n", "The first step is to install **Beautiful Soup** so that LangChain's **WebBaseLoader** works correctly." @@ -277,7 +324,7 @@ }, { "cell_type": "code", - "execution_count": 10, + "execution_count": 11, "metadata": { "id": "gU3MDwCmPMNF" }, @@ -288,14 +335,16 @@ }, { "cell_type": "markdown", - "metadata": {}, + "metadata": { + "id": "KqKbW_iXxekO" + }, "source": [ "The next step will be to save the list of sources we will extract from the website into a variable." ] }, { "cell_type": "code", - "execution_count": 11, + "execution_count": 12, "metadata": { "id": "_iaDs740PNrl" }, @@ -308,7 +357,9 @@ }, { "cell_type": "markdown", - "metadata": {}, + "metadata": { + "id": "k7iYMW8AxekO" + }, "source": [ "Now we will create a function to collect all the links that lead to the texts. If we look at the HTML structure, we will notice that the information we're looking for is inside an `
<article>` element with the class `status-publish`. Then, we simply extract the `href` attributes from the `
<li>` elements inside the `<ul>` tags." ] }, { "cell_type": "code", "execution_count": null, "metadata": { - "colab": { - "base_uri": "https://localhost:8080/" - }, - "id": "yXzAV9rqPPD9", - "outputId": "9bf9c34f-c805-45b7-a8db-2acb6711c562" + "id": "yXzAV9rqPPD9" }, "outputs": [], "source": [ @@ -335,31 +382,33 @@ }, { "cell_type": "markdown", - "metadata": {}, + "metadata": { + "id": "tVXkLFPGxekP" + }, "source": [ "Let's see how many texts by the writer we can gather." ] }, { "cell_type": "code", - "execution_count": 13, + "execution_count": 14, "metadata": { "colab": { "base_uri": "https://localhost:8080/" }, "id": "gAPmhZpWPRZt", - "outputId": "978b79eb-9343-46f7-c9cf-1e98a84b920e" + "outputId": "c9978f79-64cc-4c80-9d52-16adb8f96cff" }, "outputs": [ { + "output_type": "execute_result", "data": { "text/plain": [ "51" ] }, - "execution_count": 13, "metadata": {}, - "output_type": "execute_result" + "execution_count": 14 } ], "source": [ @@ -373,14 +422,16 @@ }, { "cell_type": "markdown", - "metadata": {}, + "metadata": { + "id": "upah3a1WxekQ" + }, "source": [ "Now that we have the URLs of the texts to feed our **RAG**, we just need to perform web scraping directly from the content of the stories. For that, we will build a function that follows a logic very similar to the previous function, which will initially give us the **raw text**, along with the **reference information** about what we are obtaining (the information found in `<header>
    `)." ] }, { "cell_type": "code", - "execution_count": 14, + "execution_count": 15, "metadata": { "id": "FUI5q2A2Pc7R" }, @@ -396,7 +447,9 @@ }, { "cell_type": "markdown", - "metadata": {}, + "metadata": { + "id": "UdD3qbqAxekQ" + }, "source": [ "There are indeed many ways to perform chunking, several of which are discussed in **\"5 Levels of Text Splitting\"** [2]. The most interesting idea for me about how to split texts, and what I believe fits best in this project, is **Semantic Splitting**. So, following that idea, we will ensure that the function divides all the texts by their periods, thus generating **semantic fragments in Spanish**.\n", "\n", @@ -405,39 +458,19 @@ }, { "cell_type": "markdown", - "metadata": {}, - "source": [ - "### 4.2 Embedding Model: Nomic\n", - "I ran several tests with different **embedding models**, including **LLama 3.1** and **Phi 3.5**, but it wasn't until I used `nomic-embed-text` that I saw significantly better results. So, this is the embedding model we'll use." - ] - }, - { - "cell_type": "code", - "execution_count": 15, "metadata": { - "id": "sDll7cv2SMk4" + "id": "uCot1ZcDxekR" }, - "outputs": [], - "source": [ - "!pip install -qU langchain-ollama" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, "source": [ - "Now let's pull with Ollama from [Nomic's embedding model](https://ollama.com/library/nomic-embed-text)" + "### 4.2 Embedding Model: Nomic\n", + "I ran several tests with different **embedding models**, including **LLama 3.1** and **Phi 3.5**, but it wasn't until I used `nomic-embed-text` that I saw significantly better results. So, this is the embedding model we'll use. Now let's pull with Ollama from [Nomic's embedding model](https://ollama.com/library/nomic-embed-text)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { - "colab": { - "base_uri": "https://localhost:8080/" - }, - "id": "MvixQQrFPiwL", - "outputId": "768e625a-1137-4160-b248-0873d2c33bf8" + "id": "MvixQQrFPiwL" }, "outputs": [], "source": [ @@ -446,7 +479,9 @@ }, { "cell_type": "markdown", - "metadata": {}, + "metadata": { + "id": "lD1i6Vt_xekS" + }, "source": [ "We're going to create our model so we can later use it in **Chroma**, our vector database." ] @@ -466,7 +501,9 @@ }, { "cell_type": "markdown", - "metadata": {}, + "metadata": { + "id": "Npt8b7AmxekX" + }, "source": [ "## 5. Storing in the Vector Database\n", "**Chroma** is our chosen vector database. With the help of our embedding model provided by **Nomic**, we will store all the fragments generated from the texts, so that later we can query them and make them part of our context for each query to the **LLMs**." @@ -474,7 +511,9 @@ }, { "cell_type": "markdown", - "metadata": {}, + "metadata": { + "id": "6CK8SYRXxekX" + }, "source": [ "### 5.1 Making Chroma Persistent\n", "Here we have to think **one step ahead in time**, so we assume that chroma is already persistent, which means that it **exists in a directory**. If we don't do this, what will happen every time we run this **Python Notebook**, is that we will add repeated strings over and over again to the vector database. So it is a good practice to **reset Chroma** and in case it does not exist, it will be created and **simply remain empty**. 
[4]" @@ -484,11 +523,7 @@ "cell_type": "code", "execution_count": 18, "metadata": { - "colab": { - "base_uri": "https://localhost:8080/" - }, - "id": "SGbgWAwsPlPY", - "outputId": "75b30d6c-fb7d-4060-b875-6413e72216a2" + "id": "SGbgWAwsPlPY" }, "outputs": [], "source": [ @@ -497,7 +532,9 @@ }, { "cell_type": "markdown", - "metadata": {}, + "metadata": { + "id": "4jJRo6COxekX" + }, "source": [ "We will create a function that will be specifically in charge of resetting the collection." ] @@ -524,7 +561,9 @@ }, { "cell_type": "markdown", - "metadata": {}, + "metadata": { + "id": "OeAoGhoNxekX" + }, "source": [ "### 5.2 Adding Documents to Chroma\n", "We may think that it is enough to just pass it all the text and it will store it completely, but that approach is inefficient and contradictory to the idea of RAG; that is why a whole section was dedicated to Chunking before." @@ -538,18 +577,18 @@ "base_uri": "https://localhost:8080/" }, "id": "N6pHyfQePp3F", - "outputId": "1e9fe521-1a9e-4009-d03d-42dc4fddf1a2" + "outputId": "d9deb890-84f9-445d-9cff-32e6abf01d21" }, "outputs": [ { + "output_type": "execute_result", "data": { "text/plain": [ "5908" ] }, - "execution_count": 20, "metadata": {}, - "output_type": "execute_result" + "execution_count": 20 } ], "source": [ @@ -565,7 +604,9 @@ }, { "cell_type": "markdown", - "metadata": {}, + "metadata": { + "id": "8l40Rm8uxekY" + }, "source": [ "Let's verify that all fragments were saved correctly in Chroma" ] @@ -578,18 +619,18 @@ "base_uri": "https://localhost:8080/" }, "id": "lYluu0-ZPseN", - "outputId": "31088fd0-8aae-4e01-f5de-aac9ee72f9dc" + "outputId": "1cbb7a25-51e9-45b5-a96b-1afcfbf8d090" }, "outputs": [ { + "output_type": "execute_result", "data": { "text/plain": [ "5908" ] }, - "execution_count": 21, "metadata": {}, - "output_type": "execute_result" + "execution_count": 21 } ], "source": [ @@ -600,14 +641,18 @@ }, { "cell_type": "markdown", - "metadata": {}, + "metadata": { + "id": "bwPPCBmbxekY" + }, "source": [ "> Here we are accessing the persistent data, not the in-memory data." ] }, { "cell_type": "markdown", - "metadata": {}, + "metadata": { + "id": "CueXp3UzxekY" + }, "source": [ "## 6. Use a Vectorstore as a Retriever\n", "A retriever is an **interface** that specializes in retrieving information from an **unstructured query**. Let's test the work we did, we will use the same `test_message` as before and see if the retriever can return the **specific fragment** of the text that has the answer (the one quoted in section [3. Exploring LLMs](#3-exploring-llms))." @@ -621,12 +666,12 @@ "base_uri": "https://localhost:8080/" }, "id": "PTJV3LclPuSd", - "outputId": "ed972ce4-f90a-481a-a06f-c1e62abb26dc" + "outputId": "f09dc22a-068e-4ae5-fdc0-81f654526c8d" }, "outputs": [ { - "name": "stdout", "output_type": "stream", + "name": "stdout", "text": [ "\n", "Fragmento 2/40 de 'Algo muy grave va a suceder en este pueblo [Cuento - Texto completo.] Gabriel García Márquez:\n", @@ -646,7 +691,9 @@ }, { "cell_type": "markdown", - "metadata": {}, + "metadata": { + "id": "34VWapD5xekY" + }, "source": [ "By default `Chroma.as_retriever()` will search for the most similar documents and `search_kwargs={”k“: 1}` indicates that we want to limit the output to **1**. [4]\n", "\n", @@ -655,7 +702,9 @@ }, { "cell_type": "markdown", - "metadata": {}, + "metadata": { + "id": "zQjlQfMbxekZ" + }, "source": [ "## 7. 
RAG (Retrieval-Augmented Generation)\n", "To better integrate our context to the query, we will make use of a **template** that will help us set up the behavior of the **RAG** and give it indications on how to answer." @@ -664,7 +713,9 @@ { "cell_type": "code", "execution_count": 23, - "metadata": {}, + "metadata": { + "id": "t2bQi1vvxekZ" + }, "outputs": [], "source": [ "from langchain_core.prompts import PromptTemplate\n", @@ -686,9 +737,11 @@ }, { "cell_type": "markdown", - "metadata": {}, + "metadata": { + "id": "cY198Ib8xekZ" + }, "source": [ - "**LangChain** tells us how to use `create_stuff_documents_chain()` to integrate **Phi 3.5** and our **custom prompt**. Then we just need to use `create_retrieval_chain()` to automatically pass to the **LLM** our input along with the context and fill it in the template. [5]" + "**LangChain** tells us how to use `create_stuff_documents_chain()` to integrate **Llama 3.2** and our **custom prompt**. Then we just need to use `create_retrieval_chain()` to automatically pass to the **LLM** our input along with the context and fill it in the template. [5]" ] }, { @@ -702,13 +755,15 @@ "from langchain.chains.combine_documents import create_stuff_documents_chain\n", "from langchain.chains import create_retrieval_chain\n", "\n", - "question_answer_chain = create_stuff_documents_chain(llm_phi, custom_rag_prompt)\n", + "question_answer_chain = create_stuff_documents_chain(llm_llama, custom_rag_prompt)\n", "rag_chain = create_retrieval_chain(retriever, question_answer_chain)" ] }, { "cell_type": "markdown", - "metadata": {}, + "metadata": { + "id": "nuesp6DhxekZ" + }, "source": [ "Now let's test with our first control question, which allows us to check if the **LLM** is aware of his or her **new identity.**" ] @@ -718,21 +773,19 @@ "execution_count": 25, "metadata": { "colab": { - "base_uri": "https://localhost:8080/", - "height": 424 + "base_uri": "https://localhost:8080/" }, "id": "M7W_Vu8CP-vU", - "outputId": "d09c97e7-0bdc-4f5f-e6bc-3d20dd7bd5c4" + "outputId": "469e9339-971f-48ef-b1d1-58c08ee9070b" }, "outputs": [ { - "name": "stdout", "output_type": "stream", + "name": "stdout", "text": [ "\n", - "ANSWER: Gabo es mi nombre, un asistente diseñado para proporcionar información sobre el ilustre escritor colombiano Gabriel García Márquez y su extensa obra literaria. Mis respuestas están informadas por textos como los fragmentos del cuento \"En este pueblo no hay ladrones\", donde la simplicidad cotidiana refleja las profundidades que el maestro de Macondo exploró en sus narrativas ricas y complejas.\n", - "\n", - "CONTEXT: Fragmento 457/714 de 'En este pueblo no hay ladrones [Cuento - Texto completo.] Gabriel García Márquez': 'Comieron sin hablar'\n" + "ANSWER: Soy Gabo, un asistente especializado en la obra de Gabriel García Márquez. Fui creado en conmemoración del decimo aniversario de su muerte, como un homenaje a su legado literario y una forma de preservar su memoria para futuras generaciones. Mi nombre es una referencia a Gabriel García Márquez, pero también un apodo que me ha sido otorgado por aquellos que buscan información sobre su vida y obra.\n", + "CONTEXT: Fragmento 62/179 de 'Diecisiete ingleses envenenados [Cuento - Texto completo.] Gabriel García Márquez': 'Un maletero hermoso y amable se echó el baúl al hombro y se hizo cargo de ella'\n" ] } ], @@ -744,7 +797,9 @@ }, { "cell_type": "markdown", - "metadata": {}, + "metadata": { + "id": "bb-JGAWBxeka" + }, "source": [ "Finally let's conclude with the question that **started all this**...." 
] @@ -753,15 +808,19 @@ "cell_type": "code", "execution_count": 26, "metadata": { - "id": "8q7XXiewQCeg" + "id": "8q7XXiewQCeg", + "colab": { + "base_uri": "https://localhost:8080/" + }, + "outputId": "ec075f96-0204-409e-fbd8-534964e2d8a7" }, "outputs": [ { - "name": "stdout", "output_type": "stream", + "name": "stdout", "text": [ "\n", - "ANSWER: La señora vieja del cuento 'Algo muy grave va a suceder en este pueblo' posee dos hijos. Uno de los cuales tiene 17 años y la otra, una niña, es de 14 años. Está representando el estilo realista mágico característico que García Márquez utiliza para tejer personajes complejos dentro del tejido familiar densamente poblado en su narrativa.\n", + "ANSWER: La señora vieja del cuento \"Algo muy grave va a suceder en este pueblo\" tiene dos hijos, un varón de 17 años y una hija de 14 años.\n", "CONTEXT: Fragmento 2/40 de 'Algo muy grave va a suceder en este pueblo [Cuento - Texto completo.] Gabriel García Márquez': 'Imagínese usted un pueblo muy pequeño donde hay una señora vieja que tiene dos hijos, uno de 17 y una hija de 14'\n" ] } @@ -774,7 +833,9 @@ }, { "cell_type": "markdown", - "metadata": {}, + "metadata": { + "id": "meXs07uuxekb" + }, "source": [ "## 8. References\n", "[1] **Ollama. (s. f.). ollama/docs/tutorials/langchainpy.md at main · ollama/ollama. GitHub.** https://github.com/ollama/ollama/blob/main/docs/tutorials/langchainpy.md\n", @@ -814,4 +875,4 @@ }, "nbformat": 4, "nbformat_minor": 0 -} +} \ No newline at end of file