diff --git a/slides/0000-00-00/Chapter 0 - Introduction.md b/slides/0000-00-00/Chapter 0 - Introduction.md new file mode 100644 index 00000000000..7892707c18a --- /dev/null +++ b/slides/0000-00-00/Chapter 0 - Introduction.md @@ -0,0 +1,101 @@ + + +# Large Language Models: The Digital Grimoires of the 21st Century +Ing. Flavio Cordari + +-- + +## Agenda +- Introduction +- Deep Learning +- Natural Language Processing +- Large Language Models +- LLMs Horizons + +-- + +## Why Large Language Models? + +-- + +LLMs represent a significant leap in artificial intelligence and natural language processing capabilities. Their ability to understand, generate, and interact using human-like language has opened up new possibilities in AI, from creating more intuitive user interfaces to generating content and even coding. + +-- + +## Why Grimoires? + +-- + +![[grimoire.webp]] + + +Notes: +The analogy here is that just as grimoires were the repositories of arcane knowledge and power in their time, LLMs are the contemporary digital equivalents, holding vast amounts of human knowledge. However, instead of spells and magical rites, LLMs contain the collective textual data of humanity, capable of generating insights, answers, and even creating new content based on this data. + +-- + +## The "Imitation Game" + +-- + +The Turing Test was designed to assess a machine’s ability to exhibit intelligent verbal behavior comparable to that of a human. Turing proposed that a human evaluator would engage in natural language conversations with both a human and a machine; if the evaluator could not reliably distinguish between them, the machine would have demonstrated its capacity to imitate human verbal behavior faithfully. + +-- + +> ChatGPT-4 exhibits behavioral and personality traits that are statistically indistinguishable from a random human from tens of thousands of human subjects from more than 50 countries.
+ + [A Turing test of whether AI chatbots are behaviorally similar to humans](https://www.pnas.org/doi/10.1073/pnas.2313925121) + + -- + ## Characteristica Universalis and Calculus Ratiocinator + + This concept envisioned a universal language or symbolism that could represent all human knowledge in a formal, logical system. Leibniz imagined this as a means to encode ideas, arguments, and principles in a way that they could be analyzed and manipulated logically. The ultimate goal was to reduce reasoning to a form of computation, where arguments could be settled with the same certainty as mathematical equations. + + Notes: + Gottfried Wilhelm Leibniz (1646–1716) was a German polymath and philosopher who made significant contributions across a wide range of academic fields, including mathematics, logic, philosophy, ethics, theology, law, and history. He is perhaps best known for his development of calculus independently of Sir Isaac Newton, which led to a notorious dispute over priority. Beyond his advancements in mathematics, Leibniz's work in philosophy is also highly regarded, particularly his ideas on metaphysics, the problem of evil, and his optimistic belief that we live in the best of all possible worlds. + + -- + + An LLM can be seen as a partial realization of Leibniz's vision. It processes natural language (a form of universal language) to understand, generate, and manipulate information. Though not the purely symbolic system Leibniz envisioned, natural language processing (NLP) technologies achieve a similar end: encoding and reasoning about human knowledge. + + -- + + ## Compression is Comprehension + + -- + + In information theory, compression is about representing information in a way that reduces redundancy without losing the essence of the original data. This is done through various algorithms that identify patterns and represent them more efficiently.
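To make the information-theoretic point concrete, here is a minimal sketch using Python's standard `zlib` (an illustration added for these slides, not taken from the linked notebooks): text full of repeated patterns compresses to a small fraction of its size, while patternless random bytes barely compress at all.

```python
import os
import zlib

# Highly redundant text: a repeated pattern that a compressor can exploit.
redundant = b"the cat sat on the mat. " * 100

# Incompressible input of the same length: random bytes have no patterns.
random_bytes = os.urandom(len(redundant))

ratio_redundant = len(zlib.compress(redundant)) / len(redundant)
ratio_random = len(zlib.compress(random_bytes)) / len(random_bytes)

print(f"redundant: {ratio_redundant:.2f}")  # far below 1.0
print(f"random:    {ratio_random:.2f}")     # close to (or even above) 1.0
```

The compressor "comprehends" the repetition in the first input; with the random input there is nothing to understand, so there is nothing to compress.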
+ + -- + + In the context of cognitive science, our brains understand and learn about the world by compressing sensory inputs and experiences into models, schemas, or concepts that are simpler than the sum total of possible data. This process allows us to make sense of complex environments and predict future events based on past experiences. + + -- + + As early as 1969, the neuroscientist Horace Barlow wrote that the operations involved in the compression of information: + + > “… have a rather fascinating similarity to the task of answering an intelligence test, finding an appropriate scientific concept, or other exercises in the use of inductive reasoning. Thus, compression of information may lead one towards understanding something about the organization of memory and intelligence, as well as pattern recognition and discrimination.” + + -- + + LLM training involves a lossy compression of its textual datasets; despite this loss, the resulting models can still generate coherent text. + + -- + + ## What about consciousness? + + -- + + There are reported cases of individuals who believe that ChatGPT is conscious. As reported by The New York Times on 23 July 2022, Google fired engineer Blake Lemoine for claiming that Google’s Language Model for Dialogue Applications (LaMDA) was sentient (i.e., experiencing sensations, perceptions, and other subjective experiences). + + -- + + ## Consciousness vs Intelligence + + -- + + According to Daniel Kahneman, humans possess two complementary cognitive systems: “System 1”, which involves rapid, intuitive, automatic, and non-conscious information processing; and “System 2”, which encompasses slower, reflective, conscious reasoning and decision-making. + + -- + + The fast neural network computation performed by LLMs, resulting in convincing dialogues, aligns with the fast thinking associated with “System 1”.
According to Kahneman’s description, being on the “System 1” level means that LLMs lack consciousness, which, in this context, is characteristic of “System 2”. + + + + diff --git a/slides/0000-00-00/Chapter 1 - Deep Learning.md b/slides/0000-00-00/Chapter 1 - Deep Learning.md new file mode 100644 index 00000000000..44a9a4554c1 --- /dev/null +++ b/slides/0000-00-00/Chapter 1 - Deep Learning.md @@ -0,0 +1,4 @@ +# DL - Deep Learning + +-- + diff --git a/slides/0000-00-00/Chapter 2 - Natural Language Processing.md b/slides/0000-00-00/Chapter 2 - Natural Language Processing.md new file mode 100644 index 00000000000..193c720095e --- /dev/null +++ b/slides/0000-00-00/Chapter 2 - Natural Language Processing.md @@ -0,0 +1,3 @@ +# NLP - Natural Language Processing + +-- diff --git a/slides/0000-00-00/Chapter 3 - Large Language Models.md b/slides/0000-00-00/Chapter 3 - Large Language Models.md new file mode 100644 index 00000000000..7e6ac81076a --- /dev/null +++ b/slides/0000-00-00/Chapter 3 - Large Language Models.md @@ -0,0 +1,116 @@ +# LLMs - Large Language Models + +-- + +## What is a Language Model? + +-- + +A language model is a statistical and computational tool that enables a computer to understand, interpret, and generate human language based on the likelihood of occurrence of words and sequences of words. + +-- + +**Statistical Language Models:** These earlier models rely on the statistical properties of language, using the probabilities of sequences of words (n-grams) to predict the likelihood of the next word in a sequence. + +[Bigrams Example](https://colab.research.google.com/drive/1ikJuNYOOliuy8tTl9csKuWDlVdHJhVQg?usp=sharing) + +-- + +**Neural Language Models:** These models use **neural networks** to predict the likelihood of a sequence of words, learning and representing language in high-dimensional spaces. 
+ + [Simplified NLM Example](https://colab.research.google.com/drive/1ON9CO6LUtX1mbDmYIq3Pt5mSqoxzGxPr?usp=sharing) + + -- + + ## What is a *Large* Language Model? + + -- + + A Large Language Model is a Neural Language Model +- which is trained on very large datasets +- whose underlying neural network has billions of parameters + + Notes: + A large language model is a type of artificial intelligence algorithm designed to understand, generate, and work with human language in a way that mimics human-like understanding and production. These models are "large" both in terms of the size of the neural network architecture they are based on and the amount of data they are trained on. + + -- + + ## Modern Large Language Model Architectures + + -- + + ## Transformer-Based Models + + - **BERT (Bidirectional Encoder Representations from Transformers)** + - **GPT (Generative Pre-trained Transformer) Series** + - **T5 (Text-to-Text Transfer Transformer)** + + -- + + ## Attention Is All You Need + + ![[1706.03762.pdf]] + + Notes: + Developed by Google, BERT was one of the first transformer-based models to use bidirectional training to understand the context of words in a sentence. It significantly improved performance on NLP tasks such as question answering and language inference. + + OpenAI's GPT series, including GPT-3 and its successors, is known for its generative capabilities, enabling the models to produce human-like text. These models are pre-trained on diverse internet text and fine-tuned for specific tasks, showcasing remarkable language understanding and creativity. + + Developed by Google, T5 approaches NLP tasks by converting all text-based language problems into a unified text-to-text format, allowing it to perform a wide range of tasks, from translation to summarization, with the same model architecture.
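Going back to the Statistical Language Models slide: the bigram idea can be sketched in a few lines of plain Python (a toy illustration with a made-up corpus, not the linked Colab notebook). Counting adjacent word pairs gives a maximum-likelihood estimate of the next-word distribution.

```python
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word (bigram counts).
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def next_word_probs(word):
    """Maximum-likelihood estimate of P(next word | current word)."""
    counts = bigrams[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

print(next_word_probs("the"))  # {'cat': 0.5, 'mat': 0.25, 'fish': 0.25}
```

A neural language model replaces these lookup tables with a learned network, but the objective, predicting the next token from the preceding context, is the same.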
+ +-- + +### Sparse Models + +- **Mixture of Experts (MoE)** + +### Hybrid Models + +- **ERNIE (Enhanced Representation through kNowledge Integration)** + +Notes: +The MoE architecture involves a set of expert models (typically, neural networks) where each expert is trained on a subset of the data. A gating mechanism decides which expert to use for a given input. This approach allows for more scalable and efficient training on large datasets. + +Developed by Baidu, ERNIE is designed to better understand the syntax and semantic information in a language by integrating knowledge graphs with text, leading to improved performance on NLP tasks that require world knowledge and reasoning. + + +-- + +## Real World Examples + +-- + +Large language models (LLMs) can also be categorized based on their availability as either open source, where the model architecture and weights are publicly accessible, or closed source, where the model details are proprietary and access is restricted. + +-- + +- Closed source + - OpenAI's GPT-3 / GPT-4 + - Google's BERT models + - ... + +-- + +- Open source + - [OpenAI's GPT-2](https://github.com/openai/gpt-2) + - [Hugging Face’s Transformers](https://huggingface.co/) (repository of open source models) + - ... + +-- + +- Mixed open/closed source + - [Meta's LLaMA](https://github.com/Meta-Llama/llama) + - the company has provided some level of access to the research community but still maintains control over the distribution and usage of the model. 
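The gating idea from the Mixture of Experts notes above can be sketched as a toy (the two experts and the hand-written, length-based gate are invented for this example; real MoE layers use learned neural gates that route individual tokens between trained expert networks):

```python
import math

# Two stand-in "experts": trivial functions instead of trained networks.
experts = {
    "short_text": lambda x: f"short-expert({x})",
    "long_text":  lambda x: f"long-expert({x})",
}

def gate(x):
    """Toy gate: a softmax over hand-written scores based on input length."""
    scores = {"short_text": -len(x), "long_text": len(x) - 5}
    z = sum(math.exp(s) for s in scores.values())
    return {name: math.exp(s) / z for name, s in scores.items()}

def moe_forward(x):
    # Top-1 routing: send the input only to the highest-weighted expert,
    # so each input pays the cost of one expert, not all of them.
    weights = gate(x)
    best = max(weights, key=weights.get)
    return experts[best](x)

print(moe_forward("hi"))  # routed to the short-text expert
```

This sparsity is what makes MoE scalable: compute grows with the number of experts actually activated per input, not with the total number of experts.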
+ + -- + + - [Navigating the World of Large Language Models](https://www.bentoml.com/blog/navigating-the-world-of-large-language-models) + + -- + + ## LLaMA 2 + + ![[10000000_662098952474184_2584067087619170692_n.pdf]] + + -- + diff --git a/slides/0000-00-00/Chapter 4 - LLMs Horizons.md b/slides/0000-00-00/Chapter 4 - LLMs Horizons.md new file mode 100644 index 00000000000..740c403bd6d --- /dev/null +++ b/slides/0000-00-00/Chapter 4 - LLMs Horizons.md @@ -0,0 +1,13 @@ +# LLMs Horizons + + -- + + ## Tool Use + + -- + + ## LLMs OS + + -- + + ## LLMs Security \ No newline at end of file diff --git a/slides/0000-00-00/assets/test.jpg b/slides/0000-00-00/assets/test.jpg deleted file mode 100644 index e69de29bb2d..00000000000 diff --git a/slides/0000-00-00/assets/venn-0.png b/slides/0000-00-00/assets/venn-0.png new file mode 100644 index 00000000000..fdae7cb72f2 Binary files /dev/null and b/slides/0000-00-00/assets/venn-0.png differ diff --git a/slides/0000-00-00/assets/venn-1.png b/slides/0000-00-00/assets/venn-1.png new file mode 100644 index 00000000000..14526627a22 Binary files /dev/null and b/slides/0000-00-00/assets/venn-1.png differ diff --git a/slides/0000-00-00/assets/venn-10.png b/slides/0000-00-00/assets/venn-10.png new file mode 100644 index 00000000000..8929abe72c3 Binary files /dev/null and b/slides/0000-00-00/assets/venn-10.png differ diff --git a/slides/0000-00-00/assets/venn-2.png b/slides/0000-00-00/assets/venn-2.png new file mode 100644 index 00000000000..31d28f23a3a Binary files /dev/null and b/slides/0000-00-00/assets/venn-2.png differ diff --git a/slides/0000-00-00/assets/venn-3.png b/slides/0000-00-00/assets/venn-3.png new file mode 100644 index 00000000000..cb66f3527f6 Binary files /dev/null and b/slides/0000-00-00/assets/venn-3.png differ diff --git a/slides/0000-00-00/assets/venn-4.png b/slides/0000-00-00/assets/venn-4.png new file mode 100644 index 00000000000..097dfb750ba Binary files /dev/null and b/slides/0000-00-00/assets/venn-4.png differ diff --git 
a/slides/0000-00-00/assets/venn-5.png b/slides/0000-00-00/assets/venn-5.png new file mode 100644 index 00000000000..d849ce491ac Binary files /dev/null and b/slides/0000-00-00/assets/venn-5.png differ diff --git a/slides/0000-00-00/assets/venn-6.png b/slides/0000-00-00/assets/venn-6.png new file mode 100644 index 00000000000..8dde45220e2 Binary files /dev/null and b/slides/0000-00-00/assets/venn-6.png differ diff --git a/slides/0000-00-00/assets/venn-7.png b/slides/0000-00-00/assets/venn-7.png new file mode 100644 index 00000000000..9a661b1e04e Binary files /dev/null and b/slides/0000-00-00/assets/venn-7.png differ diff --git a/slides/0000-00-00/assets/venn-8.png b/slides/0000-00-00/assets/venn-8.png new file mode 100644 index 00000000000..3b4e21b8c47 Binary files /dev/null and b/slides/0000-00-00/assets/venn-8.png differ diff --git a/slides/0000-00-00/assets/venn-9.png b/slides/0000-00-00/assets/venn-9.png new file mode 100644 index 00000000000..d8c3e2a89b2 Binary files /dev/null and b/slides/0000-00-00/assets/venn-9.png differ diff --git a/slides/0000-00-00/config.yml b/slides/0000-00-00/config.yml index be576b67ba9..782d476722d 100644 --- a/slides/0000-00-00/config.yml +++ b/slides/0000-00-00/config.yml @@ -1,3 +1,6 @@ title: "Hello World!" theme: "cloudogu" +slides: 'C:\Users\bitwise\projects\presentations\slides\0000-00-00\slides.html' +width: 1920 +height: 1080 show_notes_for_printing: false \ No newline at end of file diff --git a/slides/0000-00-00/slides.html b/slides/0000-00-00/slides.html new file mode 100644 index 00000000000..659902f869c --- /dev/null +++ b/slides/0000-00-00/slides.html @@ -0,0 +1,4 @@ +
+
+
+
diff --git a/slides/0000-00-00/slides.md b/slides/0000-00-00/slides.md index a9d2dc69763..a9b25b85045 100644 --- a/slides/0000-00-00/slides.md +++ b/slides/0000-00-00/slides.md @@ -1,24 +1,55 @@ -# Markdown Demo + + + --- -## External 1.1 +## Natural Language Processing and Machine Learning relationship + +![](assets/venn-0.png) -Content 1.1 +NLP and ML are overlapping subfields of Artificial Intelligence; the Venn diagram shows how the three fields relate. -Note: This will only appear in the speaker notes window. +Note: +This will only appear in the speaker notes window. -- -## External 1.2 +## Natural Language Processing and Machine Learning relationship -Content 1.2 +![](assets/venn-1.png) ---- +Note: +This will only appear in the speaker notes window. + -- + + ## Artificial Intelligence + + is a part of the greater field of Computer Science that __enables computers to solve problems__ previously handled by biological systems. AI has many applications in today's society. NLP and ML are both parts of AI. + + -- + + ## Natural Language Processing + is a form of AI that gives machines the ability not just to read, but to __understand and interpret human language__. With NLP, machines can make sense of written or spoken text and perform tasks including speech recognition, sentiment analysis, and automatic text summarization. -## External 2 + -- -Content 2.1 + ## Machine Learning + is an application of AI that provides systems the ability to __automatically learn and improve from experience__ without being explicitly programmed. Machine Learning can be used to help solve AI problems and to improve NLP by automating processes and delivering accurate responses.
+ +-- + +## How Natural Language Processing can be applied + +- Translation +- Speech recognition +- Sentiment Analysis +- Chatbots +- Question-Answer Systems +- --- @@ -36,13 +67,19 @@ Content 3.2 ## External 3.3 (Image) -![External Image](https://s3.amazonaws.com/static.slid.es/logo/v2/slides-symbol-512x512.png) +![](https://s3.amazonaws.com/static.slid.es/logo/v2/slides-symbol-512x512.png) -- -## External 3.4 (Math) -`\[ J(\theta_0,\theta_1) = \sum_{i=0} \]` + +## The Lorenz Equations + +`\[\begin{aligned} +\dot{x} &= \sigma(y-x) \\ +\dot{y} &= \rho x - y - xz \\ +\dot{z} &= -\beta z + xy +\end{aligned} \]` --- diff --git a/slides/index-template.html b/slides/index-template.html index 52db8e7534d..47c1cd3ffc2 100644 --- a/slides/index-template.html +++ b/slides/index-template.html @@ -6,12 +6,21 @@ <%= title %> - - - + + + + + + + + + + + +
@@ -22,7 +31,7 @@ <% if (locals.slides) { %> <%- include(slides) %> <% } else { %> -
+
<% } %>
@@ -31,9 +40,11 @@ - + + + <% if (locals.scripts) { %> <%- include(scripts) %> @@ -58,6 +69,21 @@ // Use the default (h.v), because the printed version will always have this anyway slideNumber: 'true', showNotes: showNotes, + customcontrols: { + controls: [ + { icon: '', + title: 'Toggle chalkboard (B)', + action: 'RevealChalkboard.toggleChalkboard();' + }, + { icon: '', + title: 'Toggle notes canvas (C)', + action: 'RevealChalkboard.toggleNotesCanvas();' + }, + ] + }, + chalkboard: { + // add configuration here + }, // Learn about plugins: https://revealjs.com/plugins/ plugins: [ // Interpret Markdown in
elements @@ -68,6 +94,9 @@ RevealNotes, // Zoom in and out with Ctrl+Alt+click RevealZoom, + RevealMath, + RevealChalkboard, + RevealCustomControls, // Search slides with Ctrl+Shift+f RevealSearch, <% if (locals.plugins) { %>