diff --git a/_sources/reading_writing_machines.ipynb b/_sources/reading_writing_machines.ipynb index 7cdd5f6..a5b1efd 100644 --- a/_sources/reading_writing_machines.ipynb +++ b/_sources/reading_writing_machines.ipynb @@ -1 +1 @@ -{"cells": [{"cell_type": "markdown", "id": "8ee233a9-62a7-4f38-80ee-bcd72b368f2f", "metadata": {}, "source": ["# Reading Machines\n", "## Exploring the Linguistic Unconscious of AI\n", "\n", "### Introduction: Two ways of thinking about computation\n", "\n", "The history of computing revolves around efforts to automate the human labor of computation. And in many narratives of this history, the algorithm plays a central role. By _algorithm_, I refer to methods of reducing complex calculations and other operations to explicit formal rules, rules that can be implemented with rigor and precision by purely mechanical or electronic means.\n", "\n", "", "\n", "", "\n", "But as a means of understanding Chat GPT and other forms of [generative AI](https://en.wikipedia.org/wiki/Generative_artificial_intelligence), a consideration of algorithms only gets us so far. In fact, when it comes to the [large language models](https://en.wikipedia.org/wiki/Large_language_model) that have captivated the public imagination, in order to make sense of their \"unreasonable effectiveness,\" we must attend to another strand of computing, one which, though bound up with the first, manifests distinct pressures and concerns. Instead of formal logic and mathematical proof, this strand draws on traditions of thinking about data, randomness, and probability. And instead of the prescription of (computational) actions, it aims at the description and prediction of (non-computational) aspects of the world. \n", "\n", "", "\n", "A key moment in this tradition, in light of later developments, remains Claude Shannon's* work on modeling the statistical structure of printed English ({cite}`shannon_mathematical_1948`). In this interactive document, we will use the [Python programming language](https://www.python.org) to reproduce a couple of the experiments that Shannon* reported in his famous article, in the hopes of pulling back the curtain a bit on what seems to many (and not unreasonably) as evidence of a ghost in the machine. I, for one, do find many of these experiences haunting. But maybe the haunting doesn't happen where we at first assume.\n", "\n", "", "\n", "The material that follows draws on and is inspired by my reading of Lydia Liu's _The Freudian Robot_, one of the few works in the humanities that I'm aware of to deal with Shannon's work in depth. See {cite}`liu_freudian_2010`."]}, {"cell_type": "markdown", "id": "209cb756-c356-4150-adf3-a4a8a2cf0b24", "metadata": {}, "source": ["### Two kinds of coding\n", "\n", "Before we delve into our experiments, let's clarify some terminology. In particular, what do we mean by _code_? \n", "\n", "The demonstration below goes into a little more explicit detail, as far as the mechanics of Python are concerned, than the rest of this document. That's intended to motivate the contrast to follow, between the kind of code we write in Python, and the kind of coding that Shannon's* work deals with. \n", "\n", "#### Programs as code(s)\n", "\n", "We imagine computers as machines that operate on 1's and 0's. 
In fact, the 1's and 0's are themselves an abstraction for human convenience: digital computation happens as a series of electronic pulses: switches that are either \"on\" or \"off.\" (Think of counting to 10 by flipping a light switch on and off 10 times.)\n", "\n", "Every digital representation -- everything that can be computed by a digital computer -- must be encoded, ultimately, in this binary form. \n", "\n", "But to make computers efficient for human use, many additional layers of abstraction have been developed on top of the basic binary layer. By virtue of using computers and smartphones, we are all familiar with the concept of an interface, which instantiates a set of rules prescribing how we are to interact with the device in order to accomplish well-defined tasks. These interactions get encoded down to the level of electronic pulses (and the results of the computation are translated back into the encoding of the interface). \n", "\n", "A programming language is also an interface: a text-based one. It represents a code into which we can translate our instructions for computation, in order for those instructions to be encoded further for processing. \n", "\n", "#### Baby steps in Python\n", "\n", "\n", "Let's start with a single instruction. Run the following line of Python code by clicking the button,. You won't see any output -- that's okay."]}, {"cell_type": "code", "execution_count": null, "id": "fa425114-e402-4761-b30c-c1e1762dd61b", "metadata": {}, "outputs": [], "source": ["answer_to_everything = 42"]}, {"cell_type": "markdown", "id": "c43f26d2-9ed3-49fb-a2c6-9591eb1738da", "metadata": {}, "source": ["In the encoding specified by the Python language, the equals sign (`=`) is an instruction that loosely translates to: \"Store this value (on the right side) somewhere in memory, and give that location in memory the provided name (on the left side).\" The following image presents one way of imagining what happens in response to this code (with the caveat that, ultimately, the letters and numbers are represented by their binary encoding). "]}, {"cell_type": "markdown", "id": "063a8ee0-c7cb-4ce2-b74d-d447fb9b0865", "metadata": {}, "source": []}, {"cell_type": "markdown", "id": "fb21619b-0b45-4159-b520-63c6f4f08952", "metadata": {}, "source": ["By running the previous line of code, we have created a _variable_, which maps the name `answer_to_everything` to the value `42`. We can use the variable to retrieve its value (for use in other parts of our program). Run the code below to see some output."]}, {"cell_type": "code", "execution_count": null, "id": "e41c570a-0627-4a97-9785-b6b5faf94b4b", "metadata": {}, "outputs": [], "source": ["print(answer_to_everything)"]}, {"cell_type": "markdown", "id": "83d96246-c61e-404d-ba0f-f8096b11bf47", "metadata": {}, "source": ["The `print()` _function_ is a command in Python syntax that displays a value on the screen. Python's syntax picks out the following elements:\n", " - the name `print`\n", " - the parentheses that follow it, which enclose the _argument_\n", " - the argument itself, which in this case is a variable name (previously defined)\n", "\n", "These elements are perfectly arbitrary (in the Saussurean sense). This syntax was invented by the designers of the Python language, though they drew on conventions found in other programming languages. 
The point is that nothing about the Python command `print(answer_to_everything)` makes its operation transparent; to know what it does, you have to know the language (or, at least, be familiar with the conventions of programming languages more generally) -- just as when learning to speak a foreign language, you can't deduce much about the meaning of the words from the way they look or sound.\n", "\n", "However, unlike so-called _natural languages_, even minor deviations in syntax will usually cause errors, and errors will usually bring the whole program to a crashing halt.\n", "\n", "", "\n", "Run the code below -- you should see an error message."]}, {"cell_type": "code", "execution_count": null, "id": "2633fb92-6114-4b5c-aea8-571269503f8a", "metadata": {}, "outputs": [], "source": ["print(answer_to_everythin)"]}, {"cell_type": "markdown", "id": "76593f24-7ab4-48dd-a6a6-19d2b2016e13", "metadata": {}, "source": ["A misspelled variable name causes Python to abort its computation. Imagine if conversation ground to a halt whenever one of the parties mispronounced a word or used a malapropism!\n", "\n", "I tend to say that Python is extremely literal. But of course, this is merely an analogy, and a loose one. There is no room for metaphor in programming languages, at least, not as far as the computation itself is concerned. The operation of a language like Python is determined by the algorithms used to implement it. Given the same input and the same conditions of operation, a given Python program should produce the same output every time. (If it does not, that's usually considered a bug.)"]}, {"cell_type": "markdown", "id": "382482b5-5d87-455a-b07d-0d05451e72db", "metadata": {}, "source": ["#### Encoding text\n", "\n", "While _programming languages_ are ways of encoding algorithms, the operation of the resulting _programs_ does depend, in most cases, on more than just the algorithm itself. Programs depend on data. And in order to be used in computation, data must be encoded, too.\n", "\n", "As an engineer at Bell Labs, Claude Shannon* wanted to find -- mathematically -- the most efficient means of encoding data for electronic transmission. Note that this task involves a rather different set of factors from those that influence the design of a programming language.\n", "\n", "The designer of the language has the luxury of insisting on a programmer's fidelity to the specified syntax. In working in Python, we have to write `print(42)`, exactly as written, in order to display the number `42` on the screen. if we forget the parentheses, for instance, the command won't work. But when we talk on the phone (or via Zoom, etc.), it would certainly be a hassle if we had to first translate our words into a strict, fault-intolerant code like that of Python. \n", "\n", "All the same, there is no digital (electronic) representation without encoding. To refer to the difference between these two types of codes, I am drawing a distinction between _algorithms_ and _data_. 
Shannon's* work illustrates the importance of this distinction, which remains relevant to any consideration of machine learning and generative AI."]}, {"cell_type": "markdown", "id": "8d76bcb8-0c51-41cb-9153-8606436c8c9d", "metadata": {}, "source": ["#### Representing text in Python\n", "\n", "Before we turn to Shannon's* experiments with English text, let's look briefly at how Python represents text as data."]}, {"cell_type": "code", "execution_count": null, "id": "23a16da0-d11a-43f3-a179-ede5979f3369", "metadata": {}, "outputs": [], "source": ["a_text = \"Most noble and illustrious drinkers, and you thrice precious pockified blades (for to you, and none else, do I dedicate my writings), Alcibiades, in that dialogue of Plato's, which is entitled The Banquet, whilst he was setting forth the praises of his schoolmaster Socrates (without all question the prince of philosophers), amongst other discourses to that purpose, said that he resembled the Silenes.\""]}, {"cell_type": "markdown", "id": "02ab3533-27f3-4480-bc29-bea8410ba8fd", "metadata": {}, "source": ["Running the code above creates a new variable, `a_text`, and assigns it a _string_ representing the first sentence from Francois Rabelais' early Modern novel, _Gargantua and Pantagruel_. A string is the most basic way in Python of representing text, where \"text\" means anything that is not to be treated purely as a numeric value. \n", "\n", "Anything between quotation marks (either double `\"\"` or single `''`) is a string.\n", "\n", "One problem with strings in Python (and other programming languages) is that they have very little structure. A Python string is a sequence of characters, where a _character_ is a letter of a recognized alphabet, a punctuation mark, a space, etc. Each character is stored in the computer's memory as a numeric code, and from that perspective, all characters are essentially equal. We can access a single character in a string by supplying its position. (Python counts characters in strings from left to right, starting with 0, not 1, for the first character.)"]}, {"cell_type": "code", "execution_count": null, "id": "703efe54-ca3b-488d-b76e-61bba4ddc8fb", "metadata": {}, "outputs": [], "source": ["a_text[5]"]}, {"cell_type": "markdown", "id": "a655e224-65ab-4b8d-bf14-3aed95941064", "metadata": {}, "source": ["We can access a sequence of characters -- here, the characters in positions 10 through 49."]}, {"cell_type": "code", "execution_count": null, "id": "46981216-84ec-4a56-a2d6-7a0edc0cf788", "metadata": {}, "outputs": [], "source": ["a_text[10:50]"]}, {"cell_type": "markdown", "id": "b1ef2a6f-fed0-4929-8df3-41dcfb06427b", "metadata": {}, "source": ["We can even divide the string into pieces, using the occurrences of particular characters. The code below divides our text on the white space, returning a _list_ (another Python construct) of smaller strings."]}, {"cell_type": "code", "execution_count": null, "id": "6d2ff07f-e349-4cc8-bb80-593ad37126dd", "metadata": {"scrolled": true}, "outputs": [], "source": ["a_text.split()"]}, {"cell_type": "markdown", "id": "f0492dc1-df43-4eee-8357-ecac1e6be83b", "metadata": {}, "source": ["The strings in the list above correspond, loosely, to the individual words in the sentence from Rabelais' text. But Python really has no concept of \"word,\" neither in English nor in any other (natural) language. 
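For instance, when we split on white space, the comma stays attached to \"drinkers\": the following line should return the string `drinkers,` (punctuation included), because Python sees only characters, not words."]}, {"cell_type": "code", "execution_count": null, "id": "split-punctuation-sketch", "metadata": {}, "outputs": [], "source": ["# Splitting on white space leaves punctuation attached to the neighboring letters\n", "a_text.split()[4]"]}, {"cell_type": "markdown", "id": "split-punctuation-note", "metadata": {}, "source": ["Deciding what should count as a word -- proper tokenization -- requires additional conventions, which, for the experiments that follow, we can do without.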
"]}, {"cell_type": "markdown", "id": "ce5c70c2-238e-4277-bd02-f17e2d6b3e6b", "metadata": {}, "source": ["### Language & chance\n", "\n", "It's probably fair to say that when Shannon* was developing his mathematical approach to encoding information, the algorithmic ideal dominated computational research in Western Europe and the United States. In previous decades, philosophers like Bertrand Russell and mathematicians like David Hilbert had sought to develop a formal approach to mathematical proof, an approach that, they hoped, would ultimately unify the scientific disciplines. The goal of such research was to identify a core set of axioms, or logical rules, in terms of which all other \"rigorous\" methods of thought could be expressed. In other words, to reduce to zero the uncertainty and ambiguity plaguing natural language as a tool for expression: to make language algorithmic.\n", "\n", "Working within this tradition, Alan Turing had developed his model of what would become the digital computer. \n", "\n", "But can language as humans use it be reduced to such formal rules? On the face of it, it's easy to think not. However, that conclusion presents a problem for computation involving natural language, since the computer is, at bottom, a formal-rule-following machine. Shannon's* work implicitly challenges the assumption that we need to resort to formal rules in order to deal with the uncertainty in language. Instead, he sought mathematical means for _quantifying_ that uncertainty. And as Lydia Liu points out, that effort began with a set of observations about patterns in printed English texts.\n", "\n", ""]}, {"cell_type": "markdown", "id": "0e382de6-5c4c-4372-8965-6d8f7712cc86", "metadata": {}, "source": ["#### The long history of code\n", "\n", "Of course, Shannon's* insights do not begin with Shannon*. A long history predates him of speculation on what we might call the statistical features of language. Speculations of some practical urgency, given the even longer history of cryptographic communication in political, military, and other contexts.\n", "\n", "In the 9th Century CE, the Arab mathematician and philosopher Al-Kindi composed a work on cryptography in which he applied the relative frequency of letters in Arabic to a method for decrypting coded text ({cite}`broemeling_account_2011`). Al-Kindi, alongside his many other accomplishments, composed the earliest surviving analysis of this kind, which is a direct precursor of methods popular in the digital humanities (word frequency analysis), among other many other domains. \n", "\n", "Closer yet to the hearts of digital humanists, the Russian mathematician Andrei Markov, in a 1913 address to the Russian Academy of Sciences, reported on the results of his experiment with Aleksandr Pushkin's _Evegnii Onegin_: a statistical analysis of the occurrences of consonants and vowels in the first two chapters of Pushkin's novel in verse ({cite}`markov_example_2006`). From the perspective of today's large-language models, Markov improved on Al-Kindi's methods by counting not just isolated occurrences of vowels or consonants, but co-occurences: that is, where a vowel follows a consonant, a consonant a vowel, etc. As a means of articulating the structure of a sequential process, Markov's method generalizes into a powerful mathematical tool, to which he lends his name. We will see how Shannon* used [Markov chains](https://en.wikipedia.org/wiki/Markov_chain) shortly. 
"]}, {"cell_type": "markdown", "id": "adb9b846-06b4-4ad9-9aa9-8042402b9192", "metadata": {}, "source": ["#### A spate of tedious counting\n", "\n", "First, however, let's illustrate the more basic method, just to get a feel for its effectiveness.\n", "\n", "We'll take a text of sufficient length. Urquhart's English translation of _Gargantual and Pantagruel_, in the Everyman's Library edition, clocks in at 823 pages; that's a decent sample. If we were following the methods used by Al-Kindi, Markov, or even Shannon* himself, we would proceed as follows:\n", " 1. Make a list of the letters of the alphabet on a sheet of paper.\n", " 2. Go through the text, letter by letter.\n", " 3. Beside each letter on your paper, make one mark each time you encounter that letter in the text.\n", "\n", "Fortunately for us, we can avail ourselves of a computer to do this work. \n", "\n", "In the following sections of Python code, we download the Project Gutenberg edition of Rabelais' novel, saving it to the computer as a text file. We can read the whole file into the computer's memory as a single Python string. Then using a property of Python strings that allows us to _iterate_ over them, we can automate the process of counting up the occurences of each character. "]}, {"cell_type": "code", "execution_count": null, "id": "f41af9d8-6b35-427b-ad0b-14636f6027c1", "metadata": {}, "outputs": [], "source": ["from urllib.request import urlretrieve\n", "urlretrieve(\"https://www.gutenberg.org/cache/epub/1200/pg1200.txt\", \"gargantua.txt\")"]}, {"cell_type": "code", "execution_count": null, "id": "f8299d21-85e0-412b-902e-bd4a3e875301", "metadata": {}, "outputs": [], "source": ["with open('gargantua.txt') as f:\n", " g_text = f.read()"]}, {"cell_type": "markdown", "id": "e5efa9a4-9b35-433f-b4f9-ea3781a41e2f", "metadata": {}, "source": ["Running the code below uses the `len()` function to display the length -- in characters -- of a string. "]}, {"cell_type": "code", "execution_count": null, "id": "e21a11d7-ecf8-47d8-8805-e615901f68b6", "metadata": {}, "outputs": [], "source": ["len(g_text)"]}, {"cell_type": "markdown", "id": "49dc8fc5-ec15-4626-8ddd-a44348c03725", "metadata": {}, "source": ["The Project Gutenberg version of _Gargantua and Pantagruel_ has close to a 2 million characters."]}, {"cell_type": "markdown", "id": "522242ba-9511-4911-9dad-9769fad74386", "metadata": {}, "source": ["As an initial exercise, we can count the frequency with which each character appears. Run the following section of code to create a structure mapping each character to its frequency."]}, {"cell_type": "code", "execution_count": null, "id": "a68be960-bc75-4593-8f69-07c5e98cd318", "metadata": {}, "outputs": [], "source": ["g_characters = {}\n", "for character in g_text:\n", " if character in g_characters:\n", " g_characters[character] += 1\n", " else:\n", " g_characters[character] = 1"]}, {"cell_type": "markdown", "id": "ee699718-3a61-445f-bd61-ef6a7f31b8ea", "metadata": {}, "source": ["Run the code below to reveal the frequencies."]}, {"cell_type": "code", "execution_count": null, "id": "8533ef6e-8dd2-4cb1-81a5-7849b452fff4", "metadata": {}, "outputs": [], "source": ["g_characters"]}, {"cell_type": "markdown", "id": "2d6db4f5-c0ee-4f00-9ed2-25e358ca41b2", "metadata": {}, "source": ["Looking at the contents of `g_characters`, we can see that it consists of more than just the letters in standard [Latin script](https://en.wikipedia.org/wiki/Latin_script). 
There are punctuation marks, numerals, and other symbols, like `\n`, which represents a line break. \n", "\n", "But if we look at the 10 most commonly occurring characters, the list -- with one exception -- aligns well with the [relative frequency of letters in English](https://en.wikipedia.org/wiki/Letter_frequency) as reported from studying large textual corpora. "]}, {"cell_type": "code", "execution_count": null, "id": "f82644ae-d893-4328-859a-34dfadafc2b6", "metadata": {}, "outputs": [], "source": ["sorted(g_characters.items(), key=lambda x: x[1], reverse=True)[:10]"]}, {"cell_type": "markdown", "id": "243b23b0-3fbe-40d7-92f7-815d28fa7a99", "metadata": {}, "source": ["#### Random writing\n", "\n", "At the heart of Shannon's* method lies the notion of _random sampling_. It's perhaps easiest to illustrate this concept before defining it.\n", "\n", "Using more Python code, let's compare what happens when we construct two random samples of the letters of the Latin script, one in which we select each letter with equal probability, and the other in which we weight our selections according to the frequency we have computed above."]}, {"cell_type": "code", "execution_count": null, "id": "cb83d49e-0dd2-4faa-9a25-af3166b6f50c", "metadata": {}, "outputs": [], "source": ["from random import choices\n", "alphabet = \"abcdefghijklmnopqrstuvwxyz\"\n", "print(\"\".join(choices(alphabet, k=50)))"]}, {"cell_type": "markdown", "id": "6f90a735-bd96-40cf-9bf3-c0baa2b95abb", "metadata": {}, "source": ["The code above uses the `choices()` function to create a sample of 50 letters, where each letter is equally likely to appear in our sample. Imagine rolling a 26-sided die, with a different letter on each face, 50 times, writing down the letter that comes up on top each time.\n", "\n", "Now let's run this trial again, this time supplying the observed frequency of the letters in _Gargantua and Pantagruel_ as weights to the sampling. (For simplicity's sake, we first remove everything but the 26 lowercase letters of the Latin script: numbers, punctuation marks, spaces, letters with accent marks, etc.)"]}, {"cell_type": "code", "execution_count": null, "id": "b6d07380-29f0-4150-82eb-4114d209cc9c", "metadata": {}, "outputs": [], "source": ["g_alpha_chars = {}\n", "for c, n in g_characters.items():\n", "    if c in alphabet:\n", "        g_alpha_chars[c] = n\n", "letters = list(g_alpha_chars.keys())\n", "weights = g_alpha_chars.values()\n", "print(''.join(choices(letters, weights, k=50)))"]}, {"cell_type": "markdown", "id": "31c74a8c-264a-4512-b5bf-393b34045eac", "metadata": {}, "source": ["Do you notice any difference between the two results? It depends to some extent on the roll of the dice, since both selections are still random. But you might see _more_ runs of letters in the second that resemble sequences you could expect in English, maybe even a word or two hiding in there."]}, {"cell_type": "markdown", "id": "6ceb4e52-ba65-49e9-a7c2-3d18ce75191b", "metadata": {}, "source": ["#### The difference a space makes\n", "\n", "On Liu's telling, one of Shannon's* key innovations was his realization that in analyzing _printed_ English, the _space between words_ counts as a character. It's the spaces that delimit words in printed text; without them, our analysis fails to account for word boundaries. 
\n", "\n", "Let's say what happens when we include the space character in our frequencies."]}, {"cell_type": "code", "execution_count": null, "id": "6e6589e6-5acd-4a41-b740-b14c59d1332c", "metadata": {}, "outputs": [], "source": ["g_shannon_chars = {}\n", "for c, n in g_characters.items():\n", " if c in alphabet or c == \" \":\n", " g_shannon_chars[c] = n\n", "letters = list(g_shannon_chars.keys())\n", "weights=g_shannon_chars.values()\n", "print(''.join(choices(letters, weights, k=50)))"]}, {"cell_type": "markdown", "id": "9f9e98e6-1f46-40b5-a548-a6182bc06147", "metadata": {}, "source": ["It may not seem like much improvement, but now we're starting to see sequences of recognizable \"word length,\" considering the average lengths of words in English. \n", "\n", "But note that we haven't so far actually tallied anything that would count as a word: we're still operating exclusively at the level of individual characters or letters."]}, {"cell_type": "markdown", "id": "582e9daa-bd74-429c-b87f-3e15dd3382b0", "metadata": {}, "source": ["#### Law-abiding numbers\n", "\n", "To unpack what we're doing a little more: when we make a _weighted_ selection from the letters of the alphabet, using the frequencies we've observed, it's equivalent to drawing letters out of a bag of Scrabble tiles, where different tiles appear in a different amounts. If there are 5 `e`'s in the bag but only 1 `z`, you might draw a `z`, but over time, you're more likely to draw an `e`. And if you make repeated draws, recording the letter you draw each time before putting it back in the bag, your final tally of letters will usually have more `e`'s than `z`'s. \n", "\n", "In probability theory, this expectation is called [the law of large numbers](https://en.wikipedia.org/wiki/Law_of_large_numbers). It describes the fundamental intuition behind the utility of averages, as well as their limitation: sampling better approximates the mathematical average as the samples get larger, but in every case, we're talking about behavior in the aggregate, not the individual case. "]}, {"cell_type": "markdown", "id": "129c012a-9824-481f-9b34-e557f9788f7a", "metadata": {}, "source": ["### Language as a drunken walk\n", "\n", "How effectively can we model natural language using statistical means? It's worth dwelling on the assumptions latent in this question. Parts of speech, word order, syntactic dependencies, etc: none of these classically linguistic entities come up for discussion in Shannon's* article. Nor are there any claims therein about underlying structures of thought that might map onto grammatical or syntactic structures, such as we find in the Chomskian theory of [generative grammar](https://en.wikipedia.org/wiki/Generative_grammar). The latter theory remains squarely within the algorithmic paradigm: the search for formal rules or laws of thought. \n", "\n", "Language, in Shannon's* treatment, resembles a different kind of phenomena: biological populations, financial markets, or the weather. In each of these systems, it is taken as a given that there are simply too many variables at play to arrive at the kind of description that would even remotely resemble the steps of a formally logical proof. Rather, the systems are described, and attempts are made to predict their behavior over time, drawing on observable patterns held to be valid in the aggregate. 
\n", "\n", "Whether the human linguistic faculty is best described in terms of formal, algorithmic rules, or as something else (emotional weather, perhaps), was not a question germane to Shannon's* analysis. Inn the introduction to his 1948 article, he claims that the \"semantic aspects of communication are irrelevant to the engineering problem\" (i.e., the problem of devising efficient means of encoding messages, linguistic or otherwise). These \"semantic aspects,\" excluded from \"the engineering problem,\" return to haunt the scene of generative AI with a vengeance. But in order to set this scene, let's return to Shannon's* experiments.\n", "\n", "Following Andrei Markov, Shannon* modeled printed English as a Markov chain: as a special kind of weighted selection where the weights of the current selection depend _only_ on the immediately previous selection. A Markov chain is often called a _random walk_, though the conventional illustration is of a person who has had a bit too much to drink stumbling about. Observing such a situation, you might not be able to determine where the person is trying to go; all you can predict is that their next position will fall within stumbling distance of where they're standing right now. Or if you prefer a less Rabelaisian metaphor, imagine threading your way among a host of puddles. With each step, you try to keep to dry land, but your path is likely to be anything but linear.\n", "\n", "It turns out that Markov chains can be used to model lots of processes in the physical world. And they can be used to model language, too, as Claude Shannon* showed."]}, {"cell_type": "markdown", "id": "8b2c4e45-e87f-4c24-8f71-739f4b007180", "metadata": {"jp-MarkdownHeadingCollapsed": true}, "source": ["#### More tedious counting\n", "\n", "One way to construct such an analysis is as follows: represent your sample of text as a continuous string of characters. (As we've seen, that's easy to do in Python.) Then \"glue\" it to another string, representing the same text, but with every character shifted to the left by one position. For example, the first several characters of the first sentence from _Gargantua and Pantagruel_ would look like this:\n", "\n", "![The text \"Most noble and illust\" is shown twice, one two consecutive lines, with each letter surrounded by a box. The second line is shifted to the left one character, so that the \"M\" of the first line appears above the \"o\" of the second line, etc.\n](https://gwu-libraries.github.io/engl-6130-dugan/_images/rabelais-1.png)\n", "With the exception of the dangling left-most and right-most characters, you now have a pair of strings that yield, for each position, a pair of characters. In the image below, the first few successive pairs are shown, along with the position of each pair of characters with respect to the \"glued\" strings.\n", "\n", "![A table with the letters \"h,\" a space, \"o,\" \"e,\" and \"i\" along the top (column headers), and \"t,\" space, \"c,\" \"w,\" \"s,\" and \"g\" along the left-hand side (row labels), and numbers in the cells of the table. \n](https://gwu-libraries.github.io/engl-6130-dugan/_images/rabelais-2.png)\n", "These pairs are called bigrams. But in order to construct a Markov chain, we're not just counting bigrams. Rather, we want to create what's called a _transition table_: a table where we can look up a given character -- the letter `e`, say -- and then for any other character that can follow `e`, find the frequency with which it occurs in that position (i.e., following an `e`). 
If a given character never follows another character, its bigram doesn't exist in the table. \n", "\n", "Below are shown some of the most common bigrams in such a transition table created on the basis of _Gargantua and Pantagruel_.\n"]}, {"cell_type": "markdown", "id": "33cbf760-a7c3-4ef1-a9dd-86fdf3860bd7", "metadata": {}, "source": ["#### Preparing the text\n", "\n", "To simplify our analysis, first we'll standardize the source text a bit. Removing punctuation and non-alphabetic characters, removing extra runs of white space and line breaks, and converting everything to lowercase will make patterns in the results easier to see (though it's really sort of an aesthetic choice, and as I've suggested, Shannon's* method doesn't presuppose any essential difference between the letters of words and the punctuation marks that accompany them). \n", "\n", "Run the two code sections below to clean the text of _Gargantua and Pantagruel_."]}, {"cell_type": "code", "execution_count": null, "id": "ff5d5b7e-243a-45e7-bede-40b7a01fbc3b", "metadata": {}, "outputs": [], "source": ["def normalize_text(text):\n", " '''\n", " Reduces the provided string to a string consisting of just alphabetic, lowercase characters from the Latin script and non-contiguous spaces.\n", " '''\n", " text_lower = text.lower()\n", " text_lower = text_lower.replace(\"\\n\", \" \").replace(\"\\t\", \" \")\n", " text_norm = \"\"\n", " for char in text_lower:\n", " if (char in \"abcdefghijklmnopqrstuvwxyz\") or (char == \" \" and text_norm[-1] != \" \"):\n", " text_norm += char\n", " return text_norm"]}, {"cell_type": "code", "execution_count": null, "id": "5618ceb4-371c-481c-908c-60685091d653", "metadata": {}, "outputs": [], "source": ["g_text_norm = normalize_text(g_text)\n", "g_text_norm[:1000]"]}, {"cell_type": "markdown", "id": "78b07409-72c2-4cdd-a092-461331a6eb45", "metadata": {}, "source": ["This method isn't perfect, but we'll trust that any errors -- like the disappearance of accented characters from French proper nouns, etc. -- will get smoothed over in the aggregate. "]}, {"cell_type": "markdown", "id": "06d02ac4-01dd-4ac1-9687-a8f37068c210", "metadata": {}, "source": ["#### Setting the table\n", "\n", "To create our transition table of bigrams, we'll define two new functions in Python. The first function, `create_ngrams`, generalizes a bit from our immediate use case; by setting the parameter called `n` in the function call to a number higher than 2, we can create combinations of three or more successive characters (trigrams, quadgrams, etc.). This feature will be useful a little later.\n", "\n", "Run the code below to define the function."]}, {"cell_type": "code", "execution_count": null, "id": "abd7f99f-dd94-4e1f-9466-a38396127f4b", "metadata": {}, "outputs": [], "source": ["def create_ngrams(text, n=2):\n", " '''\n", " Creates a series of ngrams out of the provided text argument. The argument n determines the size of each ngram; n must be greater than or equal to 2. \n", " Returns a list of ngrams, where each ngram is a Python tuple consisting of n characters.\n", " '''\n", " text_arrays = []\n", " for i in range(n):\n", " last_index = len(text) - (n - i - 1)\n", " text_arrays.append(text[i:last_index])\n", " return list(zip(*text_arrays))"]}, {"cell_type": "markdown", "id": "a1642dce-0701-47ce-b5fd-92850916afe5", "metadata": {}, "source": ["Let's illustrate our function with a small text first. The output is a Python list, which contains a series of additional collections (called tuples) nested within it. 
Each subcollection corresponds to a 2-character window, and the window is moved one character to the right each time. \n", "\n", "This structure will allow us to create our transition table, showing which characters follow which other characters most often. "]}, {"cell_type": "code", "execution_count": null, "id": "73eb3280-68d9-452c-b114-dc2ffb6fe4d8", "metadata": {}, "outputs": [], "source": ["text = 'abcdefghijklmnopqrstuvwxyz'\n", "create_ngrams(text, 2)"]}, {"cell_type": "markdown", "id": "a032fd08-b031-4ff4-aca6-1303413266d4", "metadata": {}, "source": ["Run the code section below to define another function, `create_transition_table`, which does what its name suggests."]}, {"cell_type": "code", "execution_count": null, "id": "e63fcb39-6bda-41a0-8669-08ad9bc7d8c6", "metadata": {}, "outputs": [], "source": ["from collections import Counter\n", "def create_transition_table(ngrams):\n", " '''\n", " Expects as input a list of tuples corresponding to ngrams.\n", " Returns a dictionary of dictionaries, where the keys to the outer dictionary consist of strings corresponding to the first n-1 elements of each ngram.\n", " The values of the outer dictionary are themselves dictionaries, where the keys are the nth elements each ngram, and the values are the frequence of occurrence.\n", " '''\n", " n = len(ngrams[0])\n", " ttable = {}\n", " for ngram in ngrams:\n", " key = \"\".join(ngram[:n-1])\n", " if key not in ttable:\n", " ttable[key] = Counter()\n", " ttable[key][ngram[-1]] += 1\n", " return ttable"]}, {"cell_type": "markdown", "id": "4bfbbbc8-6baa-4d6a-a345-14040db00f9c", "metadata": {}, "source": ["Now run the code below to create the transition table for the bigrams in the alphabet."]}, {"cell_type": "code", "execution_count": null, "id": "59585cd9-c15c-47a6-af5e-b6787848023d", "metadata": {}, "outputs": [], "source": ["create_transition_table(create_ngrams(text, 2))"]}, {"cell_type": "markdown", "id": "11e751a8-d82c-410a-85fa-d92fc1c88bb7", "metadata": {}, "source": ["Here our transition table consists of frequencies that are all 1, because (by definition) each letter occurs only once in the alphabet. The way to read the table, however, is as follows:\n", "> The letter `b` occurs after the letter `a` 1 time in our (alphabet) sample.\n", "> \n", "> The letter `c` occurs after the letter `b` 1 time in our sample.\n", "> \n", "> ...\n", "\n", "Now let's use these functions to create the transition table with bigrams _Gargantua and Pantagruel_."]}, {"cell_type": "code", "execution_count": null, "id": "8e2db7a4-f1ab-4a5f-a3cc-e280e2b8567e", "metadata": {}, "outputs": [], "source": ["g_ttable = create_transition_table(create_ngrams(g_text_norm, 2))"]}, {"cell_type": "markdown", "id": "9a60e6c6-c051-44e6-b681-67ed95f21f03", "metadata": {}, "source": ["Our table will now be significantly bigger. 
But let's use it to see how frequently the letter `e` follows the letter `h` in our text:"]}, {"cell_type": "code", "execution_count": null, "id": "53b29a32-a875-4027-b20b-3a6e53a316e8", "metadata": {}, "outputs": [], "source": ["g_ttable['h']['e']"]}, {"cell_type": "markdown", "id": "bd48bd5c-eaa7-41ca-8cc9-1e6a073a7976", "metadata": {}, "source": ["We can visualize our table fairly easily by using a Python library called [pandas](https://pandas.pydata.org/).\n", "\n", "Run the code below, which may take a moment to finish."]}, {"cell_type": "code", "execution_count": null, "id": "9a341158-1569-4c36-b452-ecf7af97ad64", "metadata": {"scrolled": true}, "outputs": [], "source": ["import pandas as pd\n", "pd.set_option(\"display.precision\", 0)\n", "pd.DataFrame.from_dict(g_ttable, orient='index')"]}, {"cell_type": "markdown", "id": "dd76cd5b-b3c1-41ee-9251-2b46656d0436", "metadata": {}, "source": ["To read the table, select a row for the first letter, and then a column to find the frequency of the column letter appearing after the letter in the row. (In other words, read down to the row, then across to the column.)\n", "\n", "The space character appears as the empty column/row label in this table. "]}, {"cell_type": "markdown", "id": "5a25ad62-75e5-4e48-8128-6be82c2a8571", "metadata": {}, "source": ["### Automatic writing\n", "\n", "In Shannon's* article, these kinds of transition tables are used to demonstrate the idea that English text can be effectively represented as a Markov chain. And to effect the demonstration, Shannon* presents the results of _generating_ text by weighted random sampling from the transition tables. \n", "\n", "To visualize how the weighted sampling works, imagine the following:\n", " 1. You choose a row at random on the transition table above, writing its character down on paper.\n", " 2. The numbers in that row correspond to the observed frequencies of characters following the character corresponding to that row.\n", " 3. You fill a bag with Scrabble tiles, using as many tiles for each character as indicated by the corresponding cell in the selected row. If a cell has `NaN` in it -- the null value -- you don't put any tiles of that character in the bag.\n", " 4. You draw one tile from the bag. You write down the character you just selected. This character indicates the next row on the table.\n", " 5. Using that row, you repeat steps 2 through 4. 
And so on, for however many characters you want to include in your sample.\n", "\n", "Run the code below to define a function that will do this sampling for us."]}, {"cell_type": "code", "execution_count": null, "id": "f257d727-09ad-48c6-8939-de57e8e566a6", "metadata": {}, "outputs": [], "source": ["def create_sample(ttable, length=100):\n", " '''\n", " Using a transition table of ngrams, creates a random sample of the provided length (default is 100 characters).\n", " '''\n", " starting_chars = list(ttable.keys())\n", " first_char = last_char = choices(starting_chars, k=1)[0]\n", " l = len(first_char)\n", " generated_text = first_char\n", " for _ in range(length):\n", " chars = list(ttable[last_char].keys())\n", " weights = list(ttable[last_char].values())\n", " next_char = choices(chars, weights, k=1)[0]\n", " generated_text += next_char\n", " last_char = generated_text[-l:]\n", " return generated_text"]}, {"cell_type": "code", "execution_count": null, "id": "33b14ddc-a0b3-4c86-89eb-d26f933977ff", "metadata": {}, "outputs": [], "source": ["create_sample(g_ttable)"]}, {"cell_type": "markdown", "id": "ccf00920-e81a-45f5-a564-3746066519df", "metadata": {}, "source": ["Run the code above a few times for the full effect. It's still nonsense, but maybe it seems more like recognizable nonsense -- meaning nonsense that a human being who speaks English might make up -- compared with our previous randomly generated examples. If you agree that it's more recognizable, can you pinpoint features or moments that make it so?\n", "\n", "Personally, it reminds me of the outcome of using a Ouija board: recognizable words almost emerging from some sort of pooled subconscious, then sinking back into the murk before we can make any sense out of them. "]}, {"cell_type": "markdown", "id": "b1baf16a-9b19-4f86-93be-eda8d71fcc64", "metadata": {}, "source": ["#### More silly walks\n", "\n", "More adept Ouija-board users can be simulated by increasing the size of our n-grams. As Shannon's* article demonstrates, the approximation to the English lexicon increases by moving from bigrams to trigrams -- such that frequencies are calculated in terms of the occurrence of a given letter immediately after a pair of letters. \n", "\n", "So instead of a table like this:\n", "\n", "![A table with the letters \"h,\" a space, \"o,\" \"e,\" and \"i\" along the top (column headers), and \"t,\" space, \"c,\" \"w,\" \"s,\" and \"g\" along the left-hand side (row labels), and numbers in the cells of the table. \n](https://gwu-libraries.github.io/engl-6130-dugan/_images/bigram-table.png)\n", "\n", "we have this (where the `h`, `b`, and `w` in the row labels are all preceded by the space character):\n", "\n", "![A table with the letters \"e,\" \"a,\" space, \"i,\" \"o\" along the top (column headers), and \"th,\" space \"h\", space \"b\",\" \"er,\" and space \"w\" along the left-hand side (row labels), and numbers in the cells of the table. \n](https://gwu-libraries.github.io/engl-6130-dugan/_images/bigram-table-2.png)\n", "\n", "Note, however, that throughout these experiments, the level of approximation to any particular understanding of \"the English lexicon\" depends on the nature of the data from which we derive our frequencies. Urquhart's translation of Rabelais, dating from the 16th Century, has a rather distinctive vocabulary, as you might expect, even with the modernized spelling and grammar of the Project Gutenberg edition. 
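A quick way to glimpse that vocabulary (a rough sketch, not a rigorous word-frequency analysis) is to tally the most common space-delimited \"words\" in the normalized text:"]}, {"cell_type": "code", "execution_count": null, "id": "vocabulary-glimpse-sketch", "metadata": {}, "outputs": [], "source": ["# A rough glimpse of the source vocabulary: the 20 most common space-delimited tokens\n", "from collections import Counter\n", "Counter(g_text_norm.split()).most_common(20)"]}, {"cell_type": "markdown", "id": "vocabulary-glimpse-note", "metadata": {}, "source": ["The top of the list will be dominated by common function words; Urquhart's more distinctive vocabulary lives further down the distribution.\n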
\n", "\n", "The code below defines some interactive controls to make our experiments easier to manipulate. Run both sections of code to create the controls."]}, {"cell_type": "code", "execution_count": null, "id": "410bc842-9823-4d44-8627-95c56fb40b08", "metadata": {}, "outputs": [], "source": ["import ipywidgets as widgets\n", "from IPython.display import display\n", "\n", "def create_slider(min_value=2, max_value=5):\n", " return widgets.IntSlider(\n", " value=2,\n", " min=min_value,\n", " max=max_value,\n", " description='Set value of n:')\n", " \n", "def create_update_function(text, transition_function, slider):\n", " '''\n", " returns a callback function for use in updating the provided transition table with ngrams from text, given slider.value, as well as an output widget\n", " for displaying the output of the callback\n", " '''\n", " output = widgets.Output()\n", " def on_update(change):\n", " with output:\n", " global ttable\n", " ttable = transition_function(create_ngrams(text, slider.value))\n", " print(f'Updated! Value of n is now {slider.value}.')\n", " return on_update, output\n", "\n", "def create_generate_function(sample_function, slider):\n", " '''\n", " returns a callback function for use in generating new random samples from the provided trasition table.\n", " '''\n", " output = widgets.Output()\n", " def on_generate(change):\n", " with output:\n", " print(f'(n={slider.value}) {sample_function(ttable)}')\n", " return on_generate, output\n", " \n", "def create_button(label, callback):\n", " '''\n", " Creates a new button with the provided label, and sets its click handler to the provided callback function\n", " '''\n", " button = widgets.Button(description=label)\n", " button.on_click(callback)\n", " return button"]}, {"cell_type": "code", "execution_count": null, "id": "d530f07b-8956-4449-939d-5e522a55888f", "metadata": {}, "outputs": [], "source": ["ttable = g_ttable\n", "ngram_slider = create_slider()\n", "update_callback, update_output = create_update_function(g_text_norm, create_transition_table, ngram_slider)\n", "update_button = create_button(\"Update table\", update_callback)\n", "generate_callback, generate_output = create_generate_function(create_sample, ngram_slider)\n", "generate_button = create_button(\"New sample\", generate_callback)\n", "display(ngram_slider, update_button, update_output, generate_button, generate_output)\n"]}, {"cell_type": "markdown", "id": "beef17bd-ebad-4894-aa24-0fe3a9a0f75b", "metadata": {}, "source": ["Use the slider above to change the value of `n`. Click `Update table` to recreate the transition table using the new value of `n`. Then use the `New sample` button to generate a new, random sample of text from the transition table. You can generate as many samples as you like, and you can update the size of the ngrams in between in order to compare samples of different sizes."]}, {"cell_type": "markdown", "id": "f88eb45f-60b7-4ca3-9614-6a539a4a5e51", "metadata": {}, "source": ["What do you notice about the effect of higher values of `n` on the nature of the random samples produced? "]}, {"cell_type": "markdown", "id": "bb74992f-68e6-48e9-8b55-c70bdcb3ef9b", "metadata": {}, "source": ["### A Rabelaisian chatbot\n", "\n", "Following Shannon's* article, we can observe the same phenomena using whole words to create our n-grams. 
I find such examples more compelling, perhaps because I find it easier or more fun to look for the glimmers of sense in random strings of words than in random strings of letters, which may or may not be recognizable words. \n", "\n", "But the underlying procedure is the same. We first create a list of \"words\" out of our normalized text by splitting the latter on the occurrences of white space. As a result, instead of a single string containing the entire text, we'll have a Python list of strings, each of which is a word from the original text.\n", "\n", "Note that this process is not a rigorous way of tokenizing a text. If that is your goal -- to split a text into words, in order to employ word-frequency analysis or similar techniques -- there are very useful [Python libraries](https://spacy.io/) for this task, which use sophisticated tokenizing techniques.\n", "\n", "For purposes of our experiment, however, splitting on white space will suffice."]}, {"cell_type": "code", "execution_count": null, "id": "a802eb03-92e2-422c-922d-ddf456daee07", "metadata": {}, "outputs": [], "source": ["g_text_words = g_text_norm.split()"]}, {"cell_type": "markdown", "id": "32291c6e-625a-4711-b3b1-ad1f3bfedac0", "metadata": {}, "source": ["From here, we can create our ngrams and transition table as before. First, we just need to modify our previous code to put the spaces back (since we took them out in order to create our list of words). \n", "\n", "Run the code sections below to create some new functions, and then to create some more interactive controls for these functions."]}, {"cell_type": "code", "execution_count": null, "id": "827fa854-204d-47e2-ae88-06f0b107c359", "metadata": {}, "outputs": [], "source": ["def create_ttable_words(ngrams):\n", "    '''\n", "    Expects as input a list of tuples corresponding to ngrams.\n", "    Returns a dictionary of dictionaries, where the keys to the outer dictionary consist of tuples corresponding to the first n-1 elements of each ngram.\n", "    The values of the outer dictionary are themselves dictionaries, where the keys are the nth elements of each ngram, and the values are the frequency of occurrence.\n", "    '''\n", "    n = len(ngrams[0])\n", "    ttable = {}\n", "    for ngram in ngrams:\n", "        key = ngram[:n-1]\n", "        if key not in ttable:\n", "            ttable[key] = Counter()\n", "        ttable[key][(ngram[-1],)] += 1\n", "    return ttable\n", "\n", "def create_sample_words(ttable, length=100):\n", "    '''\n", "    Using a transition table of ngrams, creates a random sample of the provided length (default is 100 words).\n", "    '''\n", "    starting_words = list(ttable.keys())\n", "    first_words = last_words = tuple(choices(starting_words, k=1)[0])\n", "    n = len(first_words)\n", "    text = list(first_words)\n", "    for _ in range(length):\n", "        words = list(ttable[last_words].keys())\n", "        weights = list(ttable[last_words].values())\n", "        next_word = choices(words, weights, k=1)[0]\n", "        text.append(next_word[0])\n", "        last_words = tuple(text[-n:])\n", "    return \" \".join(text)"]}, {"cell_type": "code", "execution_count": null, "id": "e308046c-9d65-46ab-8090-98dd805750fd", "metadata": {}, "outputs": [], "source": ["ttable = create_ttable_words(create_ngrams(g_text_words))\n", "ngram_slider_w = create_slider()\n", "update_callback_w, update_output_w = create_update_function(g_text_words, create_ttable_words, ngram_slider_w)\n", "update_button_w = create_button(\"Update table\", update_callback_w)\n", "generate_callback_w, generate_output_w = create_generate_function(create_sample_words, ngram_slider_w)\n", 
"generate_button_w = create_button(\"New sample\", generate_callback_w)\n", "display(ngram_slider_w, update_button_w, update_output_w, generate_button_w, generate_output_w)\n"]}, {"cell_type": "markdown", "id": "80dc6351-f1ba-4e3c-8063-a91d2ab040c4", "metadata": {}, "source": ["Use the slider and buttons above to generate sample text for various values of `n`. Samples are based on n-grams of words from the source text."]}, {"cell_type": "markdown", "id": "f99da874-6728-4368-8908-13d53bb73985", "metadata": {}, "source": ["### How drunken was our walk?\n", "\n", "In his article, Shannon* reports various results of these experiments, using different values for `n` with both letter- and word-frequencies. He includes the following sample, apparently produced at random with word bigrams, though he does not disclose the particular textual sources from which he derived his transition tables:\n", "\n", ">THE HEAD AND IN FRONTAL ATTACK ON AN ENGLISH WRITER THAT THE CHARACTER OF THIS POINT IS THEREFORE ANOTHER METHOD FOR THE LETTERS THAT THE TIME OF WHOEVER TOLD THE PROBLEM FOR AN UNEXPECTED.\n", "\n", "I've always thought that Shannon's* example seems suspiciously fortuitous, given its mention of attacks on English writers and methods for letters, etc. Who knows how many trials he made before he got this result (assuming he didn't fudge anything). All the same, one of the enduring charms of the \"Markov text generator\" is its propensity to produce uncanny stretches of text that, as Shannon* writes, sound \"not at all unreasonable.\" \n", "\n", "A question does arise: how novel are these stretches? In other words, what proportion of the generated sample is unique relative to the source? One way approach to the question is to think in terms of unique n-grams. When using a value of 3 for `n`, by definition every three-word sequence in our generated sample will match some sequence in the source text. But what about sequences of 4 words? Just looking at the samples we've created, it's clear that at least some of these are novel, since some are plainly nonsense and not likely to appear in Rabelais' text. \n", "\n", "We might measure their novelty by creating a lot of samples and then, for each sample, calculating the percentage of 4-word n-grams that are _not_ in the source text. Running this procedure over 1,000 samples, I arrive at an average of 40% -- so a little less than half of all the 4-word sequences across all the samples are sequences that do _not_ appear in Rabelais' text. \n", "\n", "As for what percentage of those constitute phrases that are not \"unreasonable\" as spontaneous English utterances, that's a question that's hard to answer computationally. Obviously, it depends in part on your definition of \"not unreasonable.\" But it's kind of fun to pick out phrases of length `n+1` (or `n+2`, etc.) from your sample and see if they appear in the original. You can do so by running code like the following. Just edit the part between the quotation marks so that they contain a phrase from your sample. If Python returns `True`, the phrase is _not_ in the source."]}, {"cell_type": "code", "execution_count": null, "id": "f3b0a7b6-82e7-4ff7-947c-5c31dec069b4", "metadata": {}, "outputs": [], "source": ["'to do a little untruss' in g_text_norm"]}, {"cell_type": "markdown", "id": "3d3a70cc-1050-4cfd-8937-8cb6923fd982", "metadata": {}, "source": ["### Where lies the labor?\n", "\n", "The code in this notebook implements a kind of algorithm, albeit a simple one. 
A great many procedures, now standard parts of computer applications -- e.g., efficiently sorting a list -- involve more logical complexity. Our Markovian model of Rabelais' novel seems almost _too_ simple to produce the results it does, which is perhaps partly why the results can feel uncanny. \n", "\n", "And while it takes a gargantuan leap to get from our rudimentary text machine to Chat GPT, the large-language model behind the latter is, like ours, a statistical representation of patterns occurring in the textual data on which it is based. The novelty of the latest models derives from their capacity to encode overlapping contexts: to represent how the units that make up text occur in multiple relations to each other: e.g., to capture, mathematically, the fact that a certain word frequently follows another word but often appears in the same sentence or paragraph as a third word, and so on. This complexity of representation, coupled with the sheer size of the data used to train the model, leads to Chat-GPT's uncanny ability to mimic textual genres with a high degree of stylistic fidelity.\n", "\n", "", "\n", "", "\n", "But perhaps we do Rabelais' text a disservice by calling it the \"data\" behind our model. We could, just as reasonably, speak of the text itself as the model -- likewise for the tera- or petabytes of text used to train Chat-GPT and its ilk. On Shannon's* theory, language encodes information. The ultimate aim of the theory is to find the most _efficient_ means of encoding (in order to solve \"the engineering problem\" of modern telecommunications networks); nonetheless, the success of the theory implies that any use of language (any use recognizable as such by users of the language) _already_ encodes information. In other words, the transition probabilities we generated from Rabelais' text are already expressed by Rabelais' text; our transition matrix just encodes that information in a more computationally tractable form. Every text encodes its producer's \"knowledge of the statistics of the language\" (in Shannon's* words). And one might argue that every text encodes its readers' knowledge, too. It's on the basis of such knowledge that we can \"decode\" Rabelais' novel, as well as the stochastic quasi-nonsense we can generate on its basis, which feels, relative to the former, like an excess of sense (an excess over and above Rabelais' already excessive text), spilling over the top.\n", "\n", "#### Further experiments\n", "\n", "To see how differences in the source of the model impact the result, try running our code on different texts. As written, our code only works on plain text files (those with a `.txt` extension). [Project Gutenberg](https://www.gutenberg.org/) is a good source for these -- just make sure that you choose the `Plain Text UTF-8` option for displaying a given text. You can copy the URL for the plain text either from your browser's address bar (if the text opens as a separate tab or page), or by right-clicking on the `Plain Text UTF-8` link and selecting the option `Copy Link`. 
\n", "\n", "The following code creates a text box into which you can paste the link.\n"]}, {"cell_type": "code", "execution_count": null, "id": "ac3eaa96-b68c-4291-8446-5240f02e07d3", "metadata": {}, "outputs": [], "source": ["url_box = widgets.Text(\n", " value='',\n", " placeholder='Type something',\n", " description='URL:',\n", " disabled=False \n", ")"]}, {"cell_type": "markdown", "id": "57e7feb2-ad96-42d0-8e42-ce2e0cd28d2b", "metadata": {}, "source": ["The big block of code below re-uses code from above to download and normalize the text at the provided URL, create a transition table of words from the text, and present options for changing the value of N and for generating new samples. "]}, {"cell_type": "code", "execution_count": null, "id": "816414b3-4d5c-4a0a-af96-a75b346734f4", "metadata": {}, "outputs": [], "source": ["if url_box.value:\n", " text_file, _ = urlretrieve(url_box.value)\n", " with open(text_file) as f:\n", " text = f.read()\n", " norm_text_words = normalize_text(text).split()\n", " ttable = create_ttable_words(create_ngrams(norm_text_words))\n", " ngram_slider_new = create_slider()\n", " update_callback_new, update_output_new = create_update_function(norm_text_words, create_ttable_words, ngram_slider_new)\n", " update_button_new = create_button(\"Update table\", update_callback_new)\n", " generate_callback_new, generate_output_new = create_generate_function(create_sample_words, ngram_slider_new)\n", " generate_button_new = create_button(\"New sample\", generate_callback_new)\n", " display(ngram_slider_new, update_button_new, update_output_new, generate_button_new, generate_output_new)\n", " "]}, {"cell_type": "markdown", "id": "5f809cba-e34c-4426-b7b2-19e426767c05", "metadata": {}, "source": ["### Carnival intelligence?\n", "\n", "Lacan famously said that the unconscious is structured like a language. Whether that's an apt description of the human psyche is at least debatable. But might we say that these models manifest the unconscious structures of language itself? We can catch glimpses of this manifestation in the relatively humble outcome of Shannon's* experiments: in the Markovian leaps that lead us to make _sense_ out of patterned randomness, leaps which, at the same time, reveal the nonsense that riots on the other side of sense. These experiments allow us to wander through spaces of grammatical, lexical, and stylistic possibility -- and the pleasure they offer, for me, lies in their letting us stumble into places where our rule-observant habits might not otherwise let us go. \n", "\n", "What if we were to approach generative AI in the same spirit? Not as the _deus ex machina_ that will save the world (which it almost certainly is not), and not only as a technology that will further alienate and oppress our labor (which it very probably is). But to borrow from Bakhtin, as a carnivalesque mirror of our collective linguistic unconscious: like carnival, offering a sense of freedom from restraint that is, at the same time, the affirmation, by momentary inversion, of the prevailing order of things. 
But also a reminder that language is the repository of an intelligence neither of the human (considered as an isolated being), nor of the machine, but of the collective, and that making sense is always a political act ({cite}`bakhtin_rabelais_1984`)."]}], "metadata": {"kernelspec": {"display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3"}, "language_info": {"codemirror_mode": {"name": "ipython", "version": 3}, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.11.6"}}, "nbformat": 4, "nbformat_minor": 5} \ No newline at end of file +{"cells": [{"cell_type": "markdown", "id": "8ee233a9-62a7-4f38-80ee-bcd72b368f2f", "metadata": {}, "source": ["# Reading Machines\n", "## Exploring the Linguistic Unconscious of AI\n", "\n", "### Introduction: Two ways of thinking about computation\n", "\n", "The history of computing revolves around efforts to automate the human labor of computation. And in many narratives of this history, the algorithm plays a central role. By _algorithm_, I refer to methods of reducing complex calculations and other operations to explicit formal rules, rules that can be implemented with rigor and precision by purely mechanical or electronic means.\n", "\n", "", "\n", "", "\n", "But as a means of understanding Chat GPT and other forms of [generative AI](https://en.wikipedia.org/wiki/Generative_artificial_intelligence), a consideration of algorithms only gets us so far. In fact, when it comes to the [large language models](https://en.wikipedia.org/wiki/Large_language_model) that have captivated the public imagination, in order to make sense of their \"unreasonable effectiveness,\" we must attend to another strand of computing, one which, though bound up with the first, manifests distinct pressures and concerns. Instead of formal logic and mathematical proof, this strand draws on traditions of thinking about data, randomness, and probability. And instead of the prescription of (computational) actions, it aims at the description and prediction of (non-computational) aspects of the world. \n", "\n", "", "\n", "A key moment in this tradition, in light of later developments, remains Claude Shannon's* work on modeling the statistical structure of printed English ({cite}`shannon_mathematical_1948`). In this interactive document, we will use the [Python programming language](https://www.python.org) to reproduce a couple of the experiments that Shannon* reported in his famous article, in the hopes of pulling back the curtain a bit on what seems to many (and not unreasonably) as evidence of a ghost in the machine. I, for one, do find many of these experiences haunting. But maybe the haunting doesn't happen where we at first assume.\n", "\n", "", "\n", "The material that follows draws on and is inspired by my reading of Lydia Liu's _The Freudian Robot_, one of the few works in the humanities that I'm aware of to deal with Shannon's work in depth. See {cite}`liu_freudian_2010`."]}, {"cell_type": "markdown", "id": "209cb756-c356-4150-adf3-a4a8a2cf0b24", "metadata": {}, "source": ["### Two kinds of coding\n", "\n", "Before we delve into our experiments, let's clarify some terminology. In particular, what do we mean by _code_? \n", "\n", "The demonstration below goes into a little more explicit detail, as far as the mechanics of Python are concerned, than the rest of this document. 
That's intended to motivate the contrast to follow, between the kind of code we write in Python, and the kind of coding that Shannon's* work deals with. \n", "\n", "#### Programs as code(s)\n", "\n", "We imagine computers as machines that operate on 1's and 0's. In fact, the 1's and 0's are themselves an abstraction for human convenience: digital computation happens as a series of electronic pulses: switches that are either \"on\" or \"off.\" (Think of counting to 10 by flipping a light switch on and off 10 times.)\n", "\n", "Every digital representation -- everything that can be computed by a digital computer -- must be encoded, ultimately, in this binary form. \n", "\n", "But to make computers efficient for human use, many additional layers of abstraction have been developed on top of the basic binary layer. By virtue of using computers and smartphones, we are all familiar with the concept of an interface, which instantiates a set of rules prescribing how we are to interact with the device in order to accomplish well-defined tasks. These interactions get encoded down to the level of electronic pulses (and the results of the computation are translated back into the encoding of the interface). \n", "\n", "A programming language is also an interface: a text-based one. It represents a code into which we can translate our instructions for computation, in order for those instructions to be encoded further for processing. \n", "\n", "#### Baby steps in Python\n", "\n", "\n", "Let's start with a single instruction. Run the following line of Python code by clicking the button,. You won't see any output -- that's okay."]}, {"cell_type": "code", "execution_count": null, "id": "fa425114-e402-4761-b30c-c1e1762dd61b", "metadata": {}, "outputs": [], "source": ["answer_to_everything = 42"]}, {"cell_type": "markdown", "id": "c43f26d2-9ed3-49fb-a2c6-9591eb1738da", "metadata": {}, "source": ["In the encoding specified by the Python language, the equals sign (`=`) is an instruction that loosely translates to: \"Store this value (on the right side) somewhere in memory, and give that location in memory the provided name (on the left side).\" The following image presents one way of imagining what happens in response to this code (with the caveat that, ultimately, the letters and numbers are represented by their binary encoding). "]}, {"cell_type": "markdown", "id": "063a8ee0-c7cb-4ce2-b74d-d447fb9b0865", "metadata": {}, "source": []}, {"cell_type": "markdown", "id": "fb21619b-0b45-4159-b520-63c6f4f08952", "metadata": {}, "source": ["By running the previous line of code, we have created a _variable_, which maps the name `answer_to_everything` to the value `42`. We can use the variable to retrieve its value (for use in other parts of our program). Run the code below to see some output."]}, {"cell_type": "code", "execution_count": null, "id": "e41c570a-0627-4a97-9785-b6b5faf94b4b", "metadata": {}, "outputs": [], "source": ["print(answer_to_everything)"]}, {"cell_type": "markdown", "id": "83d96246-c61e-404d-ba0f-f8096b11bf47", "metadata": {}, "source": ["The `print()` _function_ is a command in Python syntax that displays a value on the screen. Python's syntax picks out the following elements:\n", " - the name `print`\n", " - the parentheses that follow it, which enclose the _argument_\n", " - the argument itself, which in this case is a variable name (previously defined)\n", "\n", "These elements are perfectly arbitrary (in the Saussurean sense). 
This syntax was invented by the designers of the Python language, though they drew on conventions found in other programming languages. The point is that nothing about the Python command `print(answer_to_everything)` makes its operation transparent; to know what it does, you have to know the language (or, at least, be familiar with the conventions of programming languages more generally) -- just as when learning to speak a foreign language, you can't deduce much about the meaning of the words from the way they look or sound.\n", "\n", "However, unlike so-called _natural languages_, even minor deviations in syntax will usually cause errors, and errors will usually bring the whole program to a crashing halt.\n", "\n", "", "\n", "Run the code below -- you should see an error message."]}, {"cell_type": "code", "execution_count": null, "id": "2633fb92-6114-4b5c-aea8-571269503f8a", "metadata": {}, "outputs": [], "source": ["print(answer_to_everythin)"]}, {"cell_type": "markdown", "id": "76593f24-7ab4-48dd-a6a6-19d2b2016e13", "metadata": {}, "source": ["A misspelled variable name causes Python to abort its computation. Imagine if conversation ground to a halt whenever one of the parties mispronounced a word or used a malapropism!\n", "\n", "I tend to say that Python is extremely literal. But of course, this is merely an analogy, and a loose one. There is no room for metaphor in programming languages, at least, not as far as the computation itself is concerned. The operation of a language like Python is determined by the algorithms used to implement it. Given the same input and the same conditions of operation, a given Python program should produce the same output every time. (If it does not, that's usually considered a bug.)"]}, {"cell_type": "markdown", "id": "382482b5-5d87-455a-b07d-0d05451e72db", "metadata": {}, "source": ["#### Encoding text\n", "\n", "While _programming languages_ are ways of encoding algorithms, the operation of the resulting _programs_ does depend, in most cases, on more than just the algorithm itself. Programs depend on data. And in order to be used in computation, data must be encoded, too.\n", "\n", "As an engineer at Bell Labs, Claude Shannon* wanted to find -- mathematically -- the most efficient means of encoding data for electronic transmission. Note that this task involves a rather different set of factors from those that influence the design of a programming language.\n", "\n", "The designer of the language has the luxury of insisting on a programmer's fidelity to the specified syntax. In working in Python, we have to write `print(42)`, exactly as written, in order to display the number `42` on the screen. if we forget the parentheses, for instance, the command won't work. But when we talk on the phone (or via Zoom, etc.), it would certainly be a hassle if we had to first translate our words into a strict, fault-intolerant code like that of Python. \n", "\n", "All the same, there is no digital (electronic) representation without encoding. To refer to the difference between these two types of codes, I am drawing a distinction between _algorithms_ and _data_. 
Shannon's* work illustrates the importance of this distinction, which remains relevant to any consideration of machine learning and generative AI."]}, {"cell_type": "markdown", "id": "8d76bcb8-0c51-41cb-9153-8606436c8c9d", "metadata": {}, "source": ["#### Representing text in Python\n", "\n", "Before we turn to Shannon's* experiments with English text, let's look briefly at how Python represents text as data."]}, {"cell_type": "code", "execution_count": null, "id": "23a16da0-d11a-43f3-a179-ede5979f3369", "metadata": {}, "outputs": [], "source": ["a_text = \"Most noble and illustrious drinkers, and you thrice precious pockified blades (for to you, and none else, do I dedicate my writings), Alcibiades, in that dialogue of Plato's, which is entitled The Banquet, whilst he was setting forth the praises of his schoolmaster Socrates (without all question the prince of philosophers), amongst other discourses to that purpose, said that he resembled the Silenes.\""]}, {"cell_type": "markdown", "id": "02ab3533-27f3-4480-bc29-bea8410ba8fd", "metadata": {}, "source": ["Running the code above creates a new variable, `a_text`, and assigns it to a _string_ representing the first sentence from Francois Rabelais' early modern novel, _Gargantua and Pantagruel_. A string is the most basic way in Python of representing text, where \"text\" means anything that is not to be treated purely as a numeric value. \n", "\n", "Anything between quotation marks (either double `\"\"` or single `''`) is a string.\n", "\n", "One problem with strings in Python (and other programming languages) is that they have very little structure. A Python string is a sequence of characters, where a _character_ is a letter of a recognized alphabet, a punctuation mark, a space, etc. Each character is stored in the computer's memory as a numeric code, and from that perspective, all characters are essentially equal. We can access a single character in a string by supplying its position. (Python counts characters in strings from left to right, starting with 0, not 1, for the first character.)"]}, {"cell_type": "code", "execution_count": null, "id": "703efe54-ca3b-488d-b76e-61bba4ddc8fb", "metadata": {}, "outputs": [], "source": ["a_text[5]"]}, {"cell_type": "markdown", "id": "a655e224-65ab-4b8d-bf14-3aed95941064", "metadata": {}, "source": ["We can access a sequence of characters -- here, the 11th through the 50th characters."]}, {"cell_type": "code", "execution_count": null, "id": "46981216-84ec-4a56-a2d6-7a0edc0cf788", "metadata": {}, "outputs": [], "source": ["a_text[10:50]"]}, {"cell_type": "markdown", "id": "b1ef2a6f-fed0-4929-8df3-41dcfb06427b", "metadata": {}, "source": ["We can even divide the string into pieces, using the occurrences of particular characters. The code below divides our text on the white space, returning a _list_ (another Python construct) of smaller strings."]}, {"cell_type": "code", "execution_count": null, "id": "6d2ff07f-e349-4cc8-bb80-593ad37126dd", "metadata": {"scrolled": true}, "outputs": [], "source": ["a_text.split()"]}, {"cell_type": "markdown", "id": "f0492dc1-df43-4eee-8357-ecac1e6be83b", "metadata": {}, "source": ["The strings in the list above correspond, loosely, to the individual words in the sentence from Rabelais' text. But Python really has no concept of \"word,\" neither in English, nor any other (natural) language. 
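\n", "\n", "To see how little linguistic structure is really there, here is a small illustrative aside, using only the built-in `ord()` function and the `a_text` variable defined above: each character reduces to a numeric code, and `split()` hands back nothing more than a list of plain strings.\n", "\n", "```python\n", "# Characters are just numbers to Python; a \"word\" is just a substring\n", "for char in \"Most\":\n", "    print(char, ord(char))\n", "\n", "print(a_text.split()[:3])  # the first three \"words\" -- plain strings, nothing more\n", "```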
"]}, {"cell_type": "markdown", "id": "ce5c70c2-238e-4277-bd02-f17e2d6b3e6b", "metadata": {}, "source": ["### Language & chance\n", "\n", "It's probably fair to say that when Shannon* was developing his mathematical approach to encoding information, the algorithmic ideal dominated computational research in Western Europe and the United States. In previous decades, philosophers like Bertrand Russell and mathematicians like David Hilbert had sought to develop a formal approach to mathematical proof, an approach that, they hoped, would ultimately unify the scientific disciplines. The goal of such research was to identify a core set of axioms, or logical rules, in terms of which all other \"rigorous\" methods of thought could be expressed. In other words, to reduce to zero the uncertainty and ambiguity plaguing natural language as a tool for expression: to make language algorithmic.\n", "\n", "Working within this tradition, Alan Turing had developed his model of what would become the digital computer. \n", "\n", "But can language as humans use it be reduced to such formal rules? On the face of it, it's easy to think not. However, that conclusion presents a problem for computation involving natural language, since the computer is, at bottom, a formal-rule-following machine. Shannon's* work implicitly challenges the assumption that we need to resort to formal rules in order to deal with the uncertainty in language. Instead, he sought mathematical means for _quantifying_ that uncertainty. And as Lydia Liu points out, that effort began with a set of observations about patterns in printed English texts.\n", "\n", ""]}, {"cell_type": "markdown", "id": "0e382de6-5c4c-4372-8965-6d8f7712cc86", "metadata": {}, "source": ["#### The long history of code\n", "\n", "Of course, Shannon's* insights do not begin with Shannon*. A long history predates him of speculation on what we might call the statistical features of language. Speculations of some practical urgency, given the even longer history of cryptographic communication in political, military, and other contexts.\n", "\n", "In the 9th Century CE, the Arab mathematician and philosopher Al-Kindi composed a work on cryptography in which he applied the relative frequency of letters in Arabic to a method for decrypting coded text ({cite}`broemeling_account_2011`). Al-Kindi, alongside his many other accomplishments, composed the earliest surviving analysis of this kind, which is a direct precursor of methods popular in the digital humanities (word frequency analysis), among other many other domains. \n", "\n", "Closer yet to the hearts of digital humanists, the Russian mathematician Andrei Markov, in a 1913 address to the Russian Academy of Sciences, reported on the results of his experiment with Aleksandr Pushkin's _Evegnii Onegin_: a statistical analysis of the occurrences of consonants and vowels in the first two chapters of Pushkin's novel in verse ({cite}`markov_example_2006`). From the perspective of today's large-language models, Markov improved on Al-Kindi's methods by counting not just isolated occurrences of vowels or consonants, but co-occurences: that is, where a vowel follows a consonant, a consonant a vowel, etc. As a means of articulating the structure of a sequential process, Markov's method generalizes into a powerful mathematical tool, to which he lends his name. We will see how Shannon* used [Markov chains](https://en.wikipedia.org/wiki/Markov_chain) shortly. 
"]}, {"cell_type": "markdown", "id": "adb9b846-06b4-4ad9-9aa9-8042402b9192", "metadata": {}, "source": ["#### A spate of tedious counting\n", "\n", "First, however, let's illustrate the more basic method, just to get a feel for its effectiveness.\n", "\n", "We'll take a text of sufficient length. Urquhart's English translation of _Gargantual and Pantagruel_, in the Everyman's Library edition, clocks in at 823 pages; that's a decent sample. If we were following the methods used by Al-Kindi, Markov, or even Shannon* himself, we would proceed as follows:\n", " 1. Make a list of the letters of the alphabet on a sheet of paper.\n", " 2. Go through the text, letter by letter.\n", " 3. Beside each letter on your paper, make one mark each time you encounter that letter in the text.\n", "\n", "Fortunately for us, we can avail ourselves of a computer to do this work. \n", "\n", "In the following sections of Python code, we download the Project Gutenberg edition of Rabelais' novel, saving it to the computer as a text file. We can read the whole file into the computer's memory as a single Python string. Then using a property of Python strings that allows us to _iterate_ over them, we can automate the process of counting up the occurences of each character. "]}, {"cell_type": "code", "execution_count": null, "id": "f41af9d8-6b35-427b-ad0b-14636f6027c1", "metadata": {}, "outputs": [], "source": ["from urllib.request import urlretrieve\n", "urlretrieve(\"https://www.gutenberg.org/cache/epub/1200/pg1200.txt\", \"gargantua.txt\")"]}, {"cell_type": "code", "execution_count": null, "id": "f8299d21-85e0-412b-902e-bd4a3e875301", "metadata": {}, "outputs": [], "source": ["with open('gargantua.txt') as f:\n", " g_text = f.read()"]}, {"cell_type": "markdown", "id": "e5efa9a4-9b35-433f-b4f9-ea3781a41e2f", "metadata": {}, "source": ["Running the code below uses the `len()` function to display the length -- in characters -- of a string. "]}, {"cell_type": "code", "execution_count": null, "id": "e21a11d7-ecf8-47d8-8805-e615901f68b6", "metadata": {}, "outputs": [], "source": ["len(g_text)"]}, {"cell_type": "markdown", "id": "49dc8fc5-ec15-4626-8ddd-a44348c03725", "metadata": {}, "source": ["The Project Gutenberg version of _Gargantua and Pantagruel_ has close to a 2 million characters."]}, {"cell_type": "markdown", "id": "522242ba-9511-4911-9dad-9769fad74386", "metadata": {}, "source": ["As an initial exercise, we can count the frequency with which each character appears. Run the following section of code to create a structure mapping each character to its frequency."]}, {"cell_type": "code", "execution_count": null, "id": "a68be960-bc75-4593-8f69-07c5e98cd318", "metadata": {}, "outputs": [], "source": ["g_characters = {}\n", "for character in g_text:\n", " if character in g_characters:\n", " g_characters[character] += 1\n", " else:\n", " g_characters[character] = 1"]}, {"cell_type": "markdown", "id": "ee699718-3a61-445f-bd61-ef6a7f31b8ea", "metadata": {}, "source": ["Run the code below to reveal the frequencies."]}, {"cell_type": "code", "execution_count": null, "id": "8533ef6e-8dd2-4cb1-81a5-7849b452fff4", "metadata": {}, "outputs": [], "source": ["g_characters"]}, {"cell_type": "markdown", "id": "2d6db4f5-c0ee-4f00-9ed2-25e358ca41b2", "metadata": {}, "source": ["Looking at the contents of `g_characters`, we can see that it consists of more than just the letters in standard [Latin script](https://en.wikipedia.org/wiki/Latin_script). 
There are punctuation marks, numerals, and other symbols, like `\\n`, which represents a line break. \n", "\n", "But if we look at the 10 most commonly occurring characters, the list aligns well -- with one exception -- with the [relative frequency of letters in English](https://en.wikipedia.org/wiki/Letter_frequency) as reported from studying large textual corpora. "]}, {"cell_type": "code", "execution_count": null, "id": "f82644ae-d893-4328-859a-34dfadafc2b6", "metadata": {}, "outputs": [], "source": ["sorted(g_characters.items(), key=lambda x: x[1], reverse=True)[:10]"]}, {"cell_type": "markdown", "id": "243b23b0-3fbe-40d7-92f7-815d28fa7a99", "metadata": {}, "source": ["#### Random writing\n", "\n", "At the heart of Shannon's* method lies the notion of _random sampling_. It's perhaps easiest to illustrate this concept before defining it.\n", "\n", "Using more Python code, let's compare what happens when we construct two random samples of the letters of the Latin script, one in which we select each letter with equal probability, and the other in which we weight our selections according to the frequency we have computed above."]}, {"cell_type": "code", "execution_count": null, "id": "cb83d49e-0dd2-4faa-9a25-af3166b6f50c", "metadata": {}, "outputs": [], "source": ["from random import choices\n", "alphabet = \"abcdefghijklmnopqrstuvwxyz\"\n", "print(\"\".join(choices(alphabet, k=50)))"]}, {"cell_type": "markdown", "id": "6f90a735-bd96-40cf-9bf3-c0baa2b95abb", "metadata": {}, "source": ["The code above uses the `choices()` function to create a sample of 50 letters, where each letter is equally likely to appear in our sample. Imagine rolling a 26-sided die, with a different letter on each face, 50 times, writing down the letter that comes up on top each time.\n", "\n", "Now let's run this trial again, this time supplying the observed frequency of the letters in _Gargantua and Pantagruel_ as weights to the sampling. (For simplicity's sake, we first remove everything but the 26 lowercase letters of the Latin script: numbers, punctuation marks, spaces, letters with accent marks, etc.)"]}, {"cell_type": "code", "execution_count": null, "id": "b6d07380-29f0-4150-82eb-4114d209cc9c", "metadata": {}, "outputs": [], "source": ["g_alpha_chars = {}\n", "for c, n in g_characters.items():\n", "    if c in alphabet:\n", "        g_alpha_chars[c] = n\n", "letters = list(g_alpha_chars.keys())\n", "weights = g_alpha_chars.values()\n", "print(''.join(choices(letters, weights, k=50)))"]}, {"cell_type": "markdown", "id": "31c74a8c-264a-4512-b5bf-393b34045eac", "metadata": {}, "source": ["Do you notice any difference between the two results? It depends to some extent on the roll of the dice, since both selections are still random. But you might see _more_ runs of letters in the second that resemble sequences you could expect in English, maybe even a word or two hiding in there."]}, {"cell_type": "markdown", "id": "6ceb4e52-ba65-49e9-a7c2-3d18ce75191b", "metadata": {}, "source": ["#### The difference a space makes\n", "\n", "On Liu's telling, one of Shannon's* key innovations was his realization that in analyzing _printed_ English, the _space between words_ counts as a character. It's the spaces that delimit words in printed text; without them, our analysis fails to account for word boundaries. 
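\n", "\n", "One quick, added check -- using the `g_characters` tally from above -- shows how much weight the space carries: in most English prose it is the single most common character, and it accounts for a substantial share of all the characters in our text.\n", "\n", "```python\n", "# What fraction of all the characters in the text are spaces?\n", "print(g_characters[\" \"] / len(g_text))\n", "```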
\n", "\n", "Let's say what happens when we include the space character in our frequencies."]}, {"cell_type": "code", "execution_count": null, "id": "6e6589e6-5acd-4a41-b740-b14c59d1332c", "metadata": {}, "outputs": [], "source": ["g_shannon_chars = {}\n", "for c, n in g_characters.items():\n", " if c in alphabet or c == \" \":\n", " g_shannon_chars[c] = n\n", "letters = list(g_shannon_chars.keys())\n", "weights=g_shannon_chars.values()\n", "print(''.join(choices(letters, weights, k=50)))"]}, {"cell_type": "markdown", "id": "9f9e98e6-1f46-40b5-a548-a6182bc06147", "metadata": {}, "source": ["It may not seem like much improvement, but now we're starting to see sequences of recognizable \"word length,\" considering the average lengths of words in English. \n", "\n", "But note that we haven't so far actually tallied anything that would count as a word: we're still operating exclusively at the level of individual characters or letters."]}, {"cell_type": "markdown", "id": "582e9daa-bd74-429c-b87f-3e15dd3382b0", "metadata": {}, "source": ["#### Law-abiding numbers\n", "\n", "To unpack what we're doing a little more: when we make a _weighted_ selection from the letters of the alphabet, using the frequencies we've observed, it's equivalent to drawing letters out of a bag of Scrabble tiles, where different tiles appear in a different amounts. If there are 5 `e`'s in the bag but only 1 `z`, you might draw a `z`, but over time, you're more likely to draw an `e`. And if you make repeated draws, recording the letter you draw each time before putting it back in the bag, your final tally of letters will usually have more `e`'s than `z`'s. \n", "\n", "In probability theory, this expectation is called [the law of large numbers](https://en.wikipedia.org/wiki/Law_of_large_numbers). It describes the fundamental intuition behind the utility of averages, as well as their limitation: sampling better approximates the mathematical average as the samples get larger, but in every case, we're talking about behavior in the aggregate, not the individual case. "]}, {"cell_type": "markdown", "id": "129c012a-9824-481f-9b34-e557f9788f7a", "metadata": {}, "source": ["### Language as a drunken walk\n", "\n", "How effectively can we model natural language using statistical means? It's worth dwelling on the assumptions latent in this question. Parts of speech, word order, syntactic dependencies, etc: none of these classically linguistic entities come up for discussion in Shannon's* article. Nor are there any claims therein about underlying structures of thought that might map onto grammatical or syntactic structures, such as we find in the Chomskian theory of [generative grammar](https://en.wikipedia.org/wiki/Generative_grammar). The latter theory remains squarely within the algorithmic paradigm: the search for formal rules or laws of thought. \n", "\n", "Language, in Shannon's* treatment, resembles a different kind of phenomena: biological populations, financial markets, or the weather. In each of these systems, it is taken as a given that there are simply too many variables at play to arrive at the kind of description that would even remotely resemble the steps of a formally logical proof. Rather, the systems are described, and attempts are made to predict their behavior over time, drawing on observable patterns held to be valid in the aggregate. 
\n", "\n", "Whether the human linguistic faculty is best described in terms of formal, algorithmic rules, or as something else (emotional weather, perhaps), was not a question germane to Shannon's* analysis. Inn the introduction to his 1948 article, he claims that the \"semantic aspects of communication are irrelevant to the engineering problem\" (i.e., the problem of devising efficient means of encoding messages, linguistic or otherwise). These \"semantic aspects,\" excluded from \"the engineering problem,\" return to haunt the scene of generative AI with a vengeance. But in order to set this scene, let's return to Shannon's* experiments.\n", "\n", "Following Andrei Markov, Shannon* modeled printed English as a Markov chain: as a special kind of weighted selection where the weights of the current selection depend _only_ on the immediately previous selection. A Markov chain is often called a _random walk_, though the conventional illustration is of a person who has had a bit too much to drink stumbling about. Observing such a situation, you might not be able to determine where the person is trying to go; all you can predict is that their next position will fall within stumbling distance of where they're standing right now. Or if you prefer a less Rabelaisian metaphor, imagine threading your way among a host of puddles. With each step, you try to keep to dry land, but your path is likely to be anything but linear.\n", "\n", "It turns out that Markov chains can be used to model lots of processes in the physical world. And they can be used to model language, too, as Claude Shannon* showed."]}, {"cell_type": "markdown", "id": "8b2c4e45-e87f-4c24-8f71-739f4b007180", "metadata": {"jp-MarkdownHeadingCollapsed": true}, "source": ["#### More tedious counting\n", "\n", "One way to construct such an analysis is as follows: represent your sample of text as a continuous string of characters. (As we've seen, that's easy to do in Python.) Then \"glue\" it to another string, representing the same text, but with every character shifted to the left by one position. For example, the first several characters of the first sentence from _Gargantua and Pantagruel_ would look like this:\n", "\n", "![The text \"Most noble and illust\" is shown twice, one two consecutive lines, with each letter surrounded by a box. The second line is shifted to the left one character, so that the \"M\" of the first line appears above the \"o\" of the second line, etc.\n](https://gwu-libraries.github.io/engl-6130-dugan/_images/rabelais-1.png)\n", "With the exception of the dangling left-most and right-most characters, you now have a pair of strings that yield, for each position, a pair of characters. In the image below, the first few successive pairs are shown, along with the position of each pair of characters with respect to the \"glued\" strings.\n", "\n", "![A table with the letters \"h,\" a space, \"o,\" \"e,\" and \"i\" along the top (column headers), and \"t,\" space, \"c,\" \"w,\" \"s,\" and \"g\" along the left-hand side (row labels), and numbers in the cells of the table. \n](https://gwu-libraries.github.io/engl-6130-dugan/_images/rabelais-2.png)\n", "These pairs are called bigrams. But in order to construct a Markov chain, we're not just counting bigrams. Rather, we want to create what's called a _transition table_: a table where we can look up a given character -- the letter `e`, say -- and then for any other character that can follow `e`, find the frequency with which it occurs in that position (i.e., following an `e`). 
If a given character never follows another character, its bigram doesn't exist in the table. \n", "\n", "Below are shown some of the most common bigrams in such a transition table created on the basis of _Gargantua and Pantagruel_.\n"]}, {"cell_type": "markdown", "id": "33cbf760-a7c3-4ef1-a9dd-86fdf3860bd7", "metadata": {}, "source": ["#### Preparing the text\n", "\n", "To simplify our analysis, first we'll standardize the source text a bit. Removing punctuation and non-alphabetic characters, removing extra runs of white space and line breaks, and converting everything to lowercase will make patterns in the results easier to see (though it's really sort of an aesthetic choice, and as I've suggested, Shannon's* method doesn't presuppose any essential difference between the letters of words and the punctuation marks that accompany them). \n", "\n", "Run the two code sections below to clean the text of _Gargantua and Pantagruel_."]}, {"cell_type": "code", "execution_count": null, "id": "ff5d5b7e-243a-45e7-bede-40b7a01fbc3b", "metadata": {}, "outputs": [], "source": ["def normalize_text(text):\n", " '''\n", " Reduces the provided string to a string consisting of just alphabetic, lowercase characters from the Latin script and non-contiguous spaces.\n", " '''\n", " text_lower = text.lower()\n", " text_lower = text_lower.replace(\"\\n\", \" \").replace(\"\\t\", \" \")\n", " text_norm = \"\"\n", " for char in text_lower:\n", " if (char in \"abcdefghijklmnopqrstuvwxyz\") or (char == \" \" and text_norm[-1] != \" \"):\n", " text_norm += char\n", " return text_norm"]}, {"cell_type": "code", "execution_count": null, "id": "5618ceb4-371c-481c-908c-60685091d653", "metadata": {}, "outputs": [], "source": ["g_text_norm = normalize_text(g_text)\n", "g_text_norm[:1000]"]}, {"cell_type": "markdown", "id": "78b07409-72c2-4cdd-a092-461331a6eb45", "metadata": {}, "source": ["This method isn't perfect, but we'll trust that any errors -- like the disappearance of accented characters from French proper nouns, etc. -- will get smoothed over in the aggregate. "]}, {"cell_type": "markdown", "id": "06d02ac4-01dd-4ac1-9687-a8f37068c210", "metadata": {}, "source": ["#### Setting the table\n", "\n", "To create our transition table of bigrams, we'll define two new functions in Python. The first function, `create_ngrams`, generalizes a bit from our immediate use case; by setting the parameter called `n` in the function call to a number higher than 2, we can create combinations of three or more successive characters (trigrams, quadgrams, etc.). This feature will be useful a little later.\n", "\n", "Run the code below to define the function."]}, {"cell_type": "code", "execution_count": null, "id": "abd7f99f-dd94-4e1f-9466-a38396127f4b", "metadata": {}, "outputs": [], "source": ["def create_ngrams(text, n=2):\n", " '''\n", " Creates a series of ngrams out of the provided text argument. The argument n determines the size of each ngram; n must be greater than or equal to 2. \n", " Returns a list of ngrams, where each ngram is a Python tuple consisting of n characters.\n", " '''\n", " text_arrays = []\n", " for i in range(n):\n", " last_index = len(text) - (n - i - 1)\n", " text_arrays.append(text[i:last_index])\n", " return list(zip(*text_arrays))"]}, {"cell_type": "markdown", "id": "a1642dce-0701-47ce-b5fd-92850916afe5", "metadata": {}, "source": ["Let's illustrate our function with a small text first. The output is a Python list, which contains a series of additional collections (called tuples) nested within it. 
Each subcollection corresponds to a 2-character window, and the window is moved one character to the right each time. \n", "\n", "This structure will allow us to create our transition table, showing which characters follow which other characters most often. "]}, {"cell_type": "code", "execution_count": null, "id": "73eb3280-68d9-452c-b114-dc2ffb6fe4d8", "metadata": {}, "outputs": [], "source": ["text = 'abcdefghijklmnopqrstuvwxyz'\n", "create_ngrams(text, 2)"]}, {"cell_type": "markdown", "id": "a032fd08-b031-4ff4-aca6-1303413266d4", "metadata": {}, "source": ["Run the code section below to define another function, `create_transition_table`, which does what its name suggests."]}, {"cell_type": "code", "execution_count": null, "id": "e63fcb39-6bda-41a0-8669-08ad9bc7d8c6", "metadata": {}, "outputs": [], "source": ["from collections import Counter\n", "def create_transition_table(ngrams):\n", " '''\n", " Expects as input a list of tuples corresponding to ngrams.\n", " Returns a dictionary of dictionaries, where the keys to the outer dictionary consist of strings corresponding to the first n-1 elements of each ngram.\n", " The values of the outer dictionary are themselves dictionaries, where the keys are the nth elements each ngram, and the values are the frequence of occurrence.\n", " '''\n", " n = len(ngrams[0])\n", " ttable = {}\n", " for ngram in ngrams:\n", " key = \"\".join(ngram[:n-1])\n", " if key not in ttable:\n", " ttable[key] = Counter()\n", " ttable[key][ngram[-1]] += 1\n", " return ttable"]}, {"cell_type": "markdown", "id": "4bfbbbc8-6baa-4d6a-a345-14040db00f9c", "metadata": {}, "source": ["Now run the code below to create the transition table for the bigrams in the alphabet."]}, {"cell_type": "code", "execution_count": null, "id": "59585cd9-c15c-47a6-af5e-b6787848023d", "metadata": {}, "outputs": [], "source": ["create_transition_table(create_ngrams(text, 2))"]}, {"cell_type": "markdown", "id": "11e751a8-d82c-410a-85fa-d92fc1c88bb7", "metadata": {}, "source": ["Here our transition table consists of frequencies that are all 1, because (by definition) each letter occurs only once in the alphabet. The way to read the table, however, is as follows:\n", "> The letter `b` occurs after the letter `a` 1 time in our (alphabet) sample.\n", "> \n", "> The letter `c` occurs after the letter `b` 1 time in our sample.\n", "> \n", "> ...\n", "\n", "Now let's use these functions to create the transition table with bigrams _Gargantua and Pantagruel_."]}, {"cell_type": "code", "execution_count": null, "id": "8e2db7a4-f1ab-4a5f-a3cc-e280e2b8567e", "metadata": {}, "outputs": [], "source": ["g_ttable = create_transition_table(create_ngrams(g_text_norm, 2))"]}, {"cell_type": "markdown", "id": "9a60e6c6-c051-44e6-b681-67ed95f21f03", "metadata": {}, "source": ["Our table will now be significantly bigger. 
But let's use it to see how frequently the letter `e` follows the letter `h` in our text:"]}, {"cell_type": "code", "execution_count": null, "id": "53b29a32-a875-4027-b20b-3a6e53a316e8", "metadata": {}, "outputs": [], "source": ["g_ttable['h']['e']"]}, {"cell_type": "markdown", "id": "bd48bd5c-eaa7-41ca-8cc9-1e6a073a7976", "metadata": {}, "source": ["We can visualize our table fairly easily by using a Python library called [pandas](https://pandas.pydata.org/).\n", "\n", "Run the code below, which may take a moment to finish."]}, {"cell_type": "code", "execution_count": null, "id": "9a341158-1569-4c36-b452-ecf7af97ad64", "metadata": {"scrolled": true}, "outputs": [], "source": ["import pandas as pd\n", "pd.set_option(\"display.precision\", 0)\n", "pd.DataFrame.from_dict(g_ttable, orient='index')"]}, {"cell_type": "markdown", "id": "dd76cd5b-b3c1-41ee-9251-2b46656d0436", "metadata": {}, "source": ["To read the table, select a row for the first letter, and then a column to find the frequency of the column letter appearing after the letter in the row. (In other words, find the row first, then read across to the column.)\n", "\n", "The space character appears as the empty column/row label in this table. "]}, {"cell_type": "markdown", "id": "5a25ad62-75e5-4e48-8128-6be82c2a8571", "metadata": {}, "source": ["### Automatic writing\n", "\n", "In Shannon's* article, these kinds of transition tables are used to demonstrate the idea that English text can be effectively represented as a Markov chain. And to effect the demonstration, Shannon* presents the results of _generating_ text by weighted random sampling from the transition tables. \n", "\n", "To visualize how the weighted sampling works, imagine the following:\n", " 1. You choose a row at random on the transition table above, writing its character down on paper.\n", " 2. The numbers in that row correspond to the observed frequencies of characters following the character corresponding to that row.\n", " 3. You fill a bag with Scrabble tiles, using as many tiles for each character as indicated by the corresponding cell in the selected row. If a cell has `NaN` in it -- the null value -- you don't put any tiles of that character in the bag.\n", " 4. You draw one tile from the bag. You write down the character you just selected. This character indicates the next row on the table.\n", " 5. Using that row, you repeat steps 2 through 4. 
And so on, for however many characters you want to include in your sample.\n", "\n", "Run the code below to define a function that will do this sampling for us."]}, {"cell_type": "code", "execution_count": null, "id": "f257d727-09ad-48c6-8939-de57e8e566a6", "metadata": {}, "outputs": [], "source": ["def create_sample(ttable, length=100):\n", "    '''\n", "    Using a transition table of ngrams, creates a random sample of the provided length (default is 100 characters).\n", "    '''\n", "    starting_chars = list(ttable.keys())\n", "    first_char = last_char = choices(starting_chars, k=1)[0]\n", "    l = len(first_char)\n", "    generated_text = first_char\n", "    for _ in range(length):\n", "        chars = list(ttable[last_char].keys())\n", "        weights = list(ttable[last_char].values())\n", "        next_char = choices(chars, weights, k=1)[0]\n", "        generated_text += next_char\n", "        last_char = generated_text[-l:]\n", "    return generated_text"]}, {"cell_type": "code", "execution_count": null, "id": "33b14ddc-a0b3-4c86-89eb-d26f933977ff", "metadata": {}, "outputs": [], "source": ["create_sample(g_ttable)"]}, {"cell_type": "markdown", "id": "ccf00920-e81a-45f5-a564-3746066519df", "metadata": {}, "source": ["Run the code above a few times for the full effect. It's still nonsense, but maybe it seems more like recognizable nonsense -- meaning nonsense that a human being who speaks English might make up -- compared with our previous randomly generated examples. If you agree that it's more recognizable, can you pinpoint features or moments that make it so?\n", "\n", "Personally, it reminds me of the outcome of using a Ouija board: recognizable words almost emerging from some sort of pooled subconscious, then sinking back into the murk before we can make any sense out of them. "]}, {"cell_type": "markdown", "id": "b1baf16a-9b19-4f86-93be-eda8d71fcc64", "metadata": {}, "source": ["#### More silly walks\n", "\n", "More adept Ouija-board users can be simulated by increasing the size of our n-grams. As Shannon's* article demonstrates, the approximation to the English lexicon increases by moving from bigrams to trigrams -- such that frequencies are calculated in terms of the occurrence of a given letter immediately after a pair of letters. \n", "\n", "So instead of a table like this:\n", "\n", "![A table with the letters \"h,\" a space, \"o,\" \"e,\" and \"i\" along the top (column headers), and \"t,\" space, \"c,\" \"w,\" \"s,\" and \"g\" along the left-hand side (row labels), and numbers in the cells of the table. \n](https://gwu-libraries.github.io/engl-6130-dugan/_images/bigram-table.png)\n", "\n", "we have this (where the `h`, `b`, and `w` in the row labels are all preceded by the space character):\n", "\n", "![A table with the letters \"e,\" \"a,\" space, \"i,\" \"o\" along the top (column headers), and \"th,\" space \"h\", space \"b\",\" \"er,\" and space \"w\" along the left-hand side (row labels), and numbers in the cells of the table. \n](https://gwu-libraries.github.io/engl-6130-dugan/_images/bigram-table-2.png)\n", "\n", "Note, however, that throughout these experiments, the level of approximation to any particular understanding of \"the English lexicon\" depends on the nature of the data from which we derive our frequencies. Urquhart's translation of Rabelais, dating from the 17th Century, has a rather distinctive vocabulary, as you might expect, even with the modernized spelling and grammar of the Project Gutenberg edition. 
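\n", "\n", "If you want to peek at a larger table directly, before wiring up any interactive controls, the added sketch below builds a trigram table with the functions already defined and inspects one row of it (the name `g_ttable_3` is introduced here just for the example):\n", "\n", "```python\n", "# Build a trigram (n=3) transition table, inspect one row, and generate a sample\n", "g_ttable_3 = create_transition_table(create_ngrams(g_text_norm, 3))\n", "print(g_ttable_3[\"th\"].most_common(5))  # the most frequent characters to follow \"th\"\n", "print(create_sample(g_ttable_3))\n", "```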
\n", "\n", "The code below defines some interactive controls to make our experiments easier to manipulate. Run both sections of code to create the controls."]}, {"cell_type": "code", "execution_count": null, "id": "410bc842-9823-4d44-8627-95c56fb40b08", "metadata": {}, "outputs": [], "source": ["import ipywidgets as widgets\n", "from IPython.display import display\n", "\n", "def create_slider(min_value=2, max_value=5):\n", " return widgets.IntSlider(\n", " value=2,\n", " min=min_value,\n", " max=max_value,\n", " description='Set value of n:')\n", " \n", "def create_update_function(text, transition_function, slider):\n", " '''\n", " returns a callback function for use in updating the provided transition table with ngrams from text, given slider.value, as well as an output widget\n", " for displaying the output of the callback\n", " '''\n", " output = widgets.Output()\n", " def on_update(change):\n", " with output:\n", " global ttable\n", " ttable = transition_function(create_ngrams(text, slider.value))\n", " print(f'Updated! Value of n is now {slider.value}.')\n", " return on_update, output\n", "\n", "def create_generate_function(sample_function, slider):\n", " '''\n", " returns a callback function for use in generating new random samples from the provided trasition table.\n", " '''\n", " output = widgets.Output()\n", " def on_generate(change):\n", " with output:\n", " print(f'(n={slider.value}) {sample_function(ttable)}')\n", " return on_generate, output\n", " \n", "def create_button(label, callback):\n", " '''\n", " Creates a new button with the provided label, and sets its click handler to the provided callback function\n", " '''\n", " button = widgets.Button(description=label)\n", " button.on_click(callback)\n", " return button"]}, {"cell_type": "code", "execution_count": null, "id": "d530f07b-8956-4449-939d-5e522a55888f", "metadata": {}, "outputs": [], "source": ["ttable = g_ttable\n", "ngram_slider = create_slider()\n", "update_callback, update_output = create_update_function(g_text_norm, create_transition_table, ngram_slider)\n", "update_button = create_button(\"Update table\", update_callback)\n", "generate_callback, generate_output = create_generate_function(create_sample, ngram_slider)\n", "generate_button = create_button(\"New sample\", generate_callback)\n", "display(ngram_slider, update_button, update_output, generate_button, generate_output)\n"]}, {"cell_type": "markdown", "id": "beef17bd-ebad-4894-aa24-0fe3a9a0f75b", "metadata": {}, "source": ["Use the slider above to change the value of `n`. Click `Update table` to recreate the transition table using the new value of `n`. Then use the `New sample` button to generate a new, random sample of text from the transition table. You can generate as many samples as you like, and you can update the size of the ngrams in between in order to compare samples of different sizes."]}, {"cell_type": "markdown", "id": "f88eb45f-60b7-4ca3-9614-6a539a4a5e51", "metadata": {}, "source": ["What do you notice about the effect of higher values of `n` on the nature of the random samples produced? "]}, {"cell_type": "markdown", "id": "bb74992f-68e6-48e9-8b55-c70bdcb3ef9b", "metadata": {}, "source": ["### A Rabelaisian chatbot\n", "\n", "Following Shannon's* article, we can observe the same phenomena using whole words to create our n-grams. 
I find such examples more compelling, perhaps because I find it easier or more fun to look for the glimmers of sense in random strings of words than in random strings of letters, which may or may not be recognizable words. \n", "\n", "But the underlying procedure is the same. We first create a list of \"words\" out of our normalized text by splitting the latter on the occurrences of white space. As a result, instead of a single string containing the entire text, we'll have a Python list of strings, each of which is a word from the original text.\n", "\n", "Note that this process is not a rigorous way of tokenizing a text. If that is your goal -- to split a text into words, in order to employ word-frequency analysis or similar techniques -- there are very useful [Python libraries](https://spacy.io/) for this task, which use sophisticated tokenizing techniques.\n", "\n", "For purposes of our experiment, however, splitting on white space will suffice."]}, {"cell_type": "code", "execution_count": null, "id": "a802eb03-92e2-422c-922d-ddf456daee07", "metadata": {}, "outputs": [], "source": ["g_text_words = g_text_norm.split()"]}, {"cell_type": "markdown", "id": "32291c6e-625a-4711-b3b1-ad1f3bfedac0", "metadata": {}, "source": ["From here, we can create our ngrams and transition table as before. First, we just need to modify our previous code to put the spaces back (since we took them out in order to create our list of words). \n", "\n", "Run the code sections below to create some new functions, and then to create some more HTML controls for these functions."]}, {"cell_type": "code", "execution_count": null, "id": "827fa854-204d-47e2-ae88-06f0b107c359", "metadata": {}, "outputs": [], "source": ["def create_ttable_words(ngrams):\n", "    '''\n", "    Expects as input a list of tuples corresponding to ngrams.\n", "    Returns a dictionary of dictionaries, where the keys to the outer dictionary consist of tuples corresponding to the first n-1 elements of each ngram.\n", "    The values of the outer dictionary are themselves dictionaries, where the keys are the nth elements of each ngram, and the values are the frequency of occurrence.\n", "    '''\n", "    n = len(ngrams[0])\n", "    ttable = {}\n", "    for ngram in ngrams:\n", "        key = ngram[:n-1]\n", "        if key not in ttable:\n", "            ttable[key] = Counter()\n", "        ttable[key][(ngram[-1],)] += 1\n", "    return ttable\n", "\n", "def create_sample_words(ttable, length=100):\n", "    '''\n", "    Using a transition table of ngrams, creates a random sample of the provided length (default is 100 words).\n", "    '''\n", "    starting_words = list(ttable.keys())\n", "    first_words = last_words = tuple(choices(starting_words, k=1)[0])\n", "    n = len(first_words)\n", "    text = list(first_words)\n", "    for _ in range(length):\n", "        words = list(ttable[last_words].keys())\n", "        weights = list(ttable[last_words].values())\n", "        next_word = choices(words, weights, k=1)[0]\n", "        text.append(next_word[0])\n", "        last_words = tuple(text[-n:])\n", "    return \" \".join(text)"]}, {"cell_type": "code", "execution_count": null, "id": "e308046c-9d65-46ab-8090-98dd805750fd", "metadata": {}, "outputs": [], "source": ["ttable = create_ttable_words(create_ngrams(g_text_words))\n", "ngram_slider_w = create_slider()\n", "update_callback_w, update_output_w = create_update_function(g_text_words, create_ttable_words, ngram_slider_w)\n", "update_button_w = create_button(\"Update table\", update_callback_w)\n", "generate_callback_w, generate_output_w = create_generate_function(create_sample_words, ngram_slider_w)\n",
"generate_button_w = create_button(\"New sample\", generate_callback_w)\n", "display(ngram_slider_w, update_button_w, update_output_w, generate_button_w, generate_output_w)\n"]}, {"cell_type": "markdown", "id": "80dc6351-f1ba-4e3c-8063-a91d2ab040c4", "metadata": {}, "source": ["Use the slider and buttons above to generate sample text for various values of `n`. Samples are based on n-grams of words from the source text."]}, {"cell_type": "markdown", "id": "f99da874-6728-4368-8908-13d53bb73985", "metadata": {}, "source": ["### How drunken was our walk?\n", "\n", "In his article, Shannon* reports various results of these experiments, using different values for `n` with both letter- and word-frequencies. He includes the following sample, apparently produced at random with word bigrams, though he does not disclose the particular textual sources from which he derived his transition tables:\n", "\n", ">THE HEAD AND IN FRONTAL ATTACK ON AN ENGLISH WRITER THAT THE CHARACTER OF THIS POINT IS THEREFORE ANOTHER METHOD FOR THE LETTERS THAT THE TIME OF WHOEVER TOLD THE PROBLEM FOR AN UNEXPECTED.\n", "\n", "I've always thought that Shannon's* example seems suspiciously fortuitous, given its mention of attacks on English writers and methods for letters, etc. Who knows how many trials he made before he got this result (assuming he didn't fudge anything). All the same, one of the enduring charms of the \"Markov text generator\" is its propensity to produce uncanny stretches of text that, as Shannon* writes, sound \"not at all unreasonable.\" \n", "\n", "A question does arise: how novel are these stretches? In other words, what proportion of the generated sample is unique relative to the source? One way approach to the question is to think in terms of unique n-grams. When using a value of 3 for `n`, by definition every three-word sequence in our generated sample will match some sequence in the source text. But what about sequences of 4 words? Just looking at the samples we've created, it's clear that at least some of these are novel, since some are plainly nonsense and not likely to appear in Rabelais' text. \n", "\n", "We might measure their novelty by creating a lot of samples and then, for each sample, calculating the percentage of 4-word n-grams that are _not_ in the source text. Running this procedure over 1,000 samples, I arrive at an average of 40% -- so a little less than half of all the 4-word sequences across all the samples are sequences that do _not_ appear in Rabelais' text. \n", "\n", "As for what percentage of those constitute phrases that are not \"unreasonable\" as spontaneous English utterances, that's a question that's hard to answer computationally. Obviously, it depends in part on your definition of \"not unreasonable.\" But it's kind of fun to pick out phrases of length `n+1` (or `n+2`, etc.) from your sample and see if they appear in the original. You can do so by running code like the following. Just edit the part between the quotation marks so that they contain a phrase from your sample. If Python returns `True`, the phrase is _not_ in the source."]}, {"cell_type": "code", "execution_count": null, "id": "f3b0a7b6-82e7-4ff7-947c-5c31dec069b4", "metadata": {}, "outputs": [], "source": ["'to do a little untruss' in g_text_norm"]}, {"cell_type": "markdown", "id": "3d3a70cc-1050-4cfd-8937-8cb6923fd982", "metadata": {}, "source": ["### Where lies the labor?\n", "\n", "The code in this notebook implements a kind of algorithm, albeit a simple one. 
A great many procedures, now standard parts of computer applications -- e.g., efficiently sorting a list -- involve more logical complexity. Our Markovian model of Rabelais' novel seems almost _too_ simple to produce the results it does, which is perhaps partly why the results can feel uncanny. \n", "\n", "And while it takes a gargantuan leap to get from our rudimentary text machine to Chat GPT, the large-language model behind the latter is, like ours, a statistical representation of patterns occurring in the textual data on which it is based. The novelty of the latest models derives from their capacity to encode overlapping contexts: to represent how the units that make up text occur in multiple relations to each other: e.g., to capture, mathematically, the fact that a certain word frequently follows another word but also often appears in the same sentence or paragraph as a third word, and so on. This complexity of representation, coupled with the sheer size of the data used to train the model, leads to Chat-GPT's uncanny ability to mimic textual genres with a high degree of stylistic fidelity.\n", "\n", "", "\n", "", "\n", "But perhaps we do Rabelais' text a disservice by calling it the \"data\" behind our model. We could, just as reasonably, speak of the text itself as the model -- likewise for the tera- or petabytes of text used to train Chat-GPT and its ilk. On Shannon's* theory, language encodes information. The ultimate aim of the theory is to find the most _efficient_ means of encoding (in order to solve \"the engineering problem\" of modern telecommunications networks); nonetheless, the success of the theory implies that any use of language (any use recognizable as such by users of the language) _already_ encodes information. In other words, the transition probabilities we generated from Rabelais' text are already expressed by Rabelais' text; our transition matrix just encodes that information in a more computationally tractable form. Every text encodes its producer's \"knowledge of the statistics of the language\" (in Shannon's* words). And one might argue that every text encodes its readers' knowledge, too. It's on the basis of such knowledge that we can \"decode\" Rabelais' novel, as well as the stochastic quasi-nonsense we can generate on its basis, which feels, relative to the former, like an excess of sense (an excess over and above Rabelais' already excessive text), spilling over the top.\n", "\n", "#### Further experiments\n", "\n", "To see how differences in the source of the model impact the result, try running our code on different texts. As written, our code only works on plain text format (files with a `.txt` extension). [Project Gutenberg](https://www.gutenberg.org/) is a good source for these -- just make sure that you choose the `Plain Text UTF-8` option for displaying a given text. You can copy the URL for the plain text either from your browser's address bar (if the text opens as a separate tab or page), or by right-clicking on the `Plain Text UTF-8` link and selecting the option `Copy Link`. 
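\n", "\n", "For instance, the link we used earlier in this notebook for _Gargantua and Pantagruel_ shows the typical form of a Project Gutenberg plain-text URL, so the link you paste should look something like this (the name `example_url` is just illustrative):\n", "\n", "```python\n", "# An example of the expected URL format (the same file we downloaded above)\n", "example_url = \"https://www.gutenberg.org/cache/epub/1200/pg1200.txt\"\n", "```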
\n", "\n", "The following code creates a text box into which you can paste the link.\n"]}, {"cell_type": "code", "execution_count": null, "id": "ac3eaa96-b68c-4291-8446-5240f02e07d3", "metadata": {}, "outputs": [], "source": ["url_box = widgets.Text(\n", " value='',\n", " placeholder='Type something',\n", " description='URL:',\n", " disabled=False \n", ")\n", "display(url_box)"]}, {"cell_type": "markdown", "id": "57e7feb2-ad96-42d0-8e42-ce2e0cd28d2b", "metadata": {}, "source": ["The big block of code below re-uses code from above to download and normalize the text at the provided URL, create a transition table of words from the text, and present options for changing the value of N and for generating new samples. "]}, {"cell_type": "code", "execution_count": null, "id": "816414b3-4d5c-4a0a-af96-a75b346734f4", "metadata": {}, "outputs": [], "source": ["if url_box.value:\n", " text_file, _ = urlretrieve(url_box.value)\n", " with open(text_file) as f:\n", " text = f.read()\n", " norm_text_words = normalize_text(text).split()\n", " ttable = create_ttable_words(create_ngrams(norm_text_words))\n", " ngram_slider_new = create_slider()\n", " update_callback_new, update_output_new = create_update_function(norm_text_words, create_ttable_words, ngram_slider_new)\n", " update_button_new = create_button(\"Update table\", update_callback_new)\n", " generate_callback_new, generate_output_new = create_generate_function(create_sample_words, ngram_slider_new)\n", " generate_button_new = create_button(\"New sample\", generate_callback_new)\n", " display(ngram_slider_new, update_button_new, update_output_new, generate_button_new, generate_output_new)\n", " "]}, {"cell_type": "markdown", "id": "5f809cba-e34c-4426-b7b2-19e426767c05", "metadata": {}, "source": ["### Carnival intelligence?\n", "\n", "Lacan famously said that the unconscious is structured like a language. Whether that's an apt description of the human psyche is at least debatable. But might we say that these models manifest the unconscious structures of language itself? We can catch glimpses of this manifestation in the relatively humble outcome of Shannon's* experiments: in the Markovian leaps that lead us to make _sense_ out of patterned randomness, leaps which, at the same time, reveal the nonsense that riots on the other side of sense. These experiments allow us to wander through spaces of grammatical, lexical, and stylistic possibility -- and the pleasure they offer, for me, lies in their letting us stumble into places where our rule-observant habits might not otherwise let us go. \n", "\n", "What if we were to approach generative AI in the same spirit? Not as the _deus ex machina_ that will save the world (which it almost certainly is not), and not only as a technology that will further alienate and oppress our labor (which it very probably is). But to borrow from Bakhtin, as a carnivalesque mirror of our collective linguistic unconscious: like carnival, offering a sense of freedom from restraint that is, at the same time, the affirmation, by momentary inversion, of the prevailing order of things. 
But also a reminder that language is the repository of an intelligence neither of the human (considered as an isolated being), nor of the machine, but of the collective, and that making sense is always a political act ({cite}`bakhtin_rabelais_1984`)."]}], "metadata": {"kernelspec": {"display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3"}, "language_info": {"codemirror_mode": {"name": "ipython", "version": 3}, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.11.6"}}, "nbformat": 4, "nbformat_minor": 5} \ No newline at end of file