
Commit

Rebuilt site
David Evans committed Nov 20, 2023
1 parent 12628a2 commit 46d3d20
Showing 18 changed files with 432 additions and 383 deletions.
324 changes: 60 additions & 264 deletions index.html

Large diffs are not rendered by default.

51 changes: 33 additions & 18 deletions index.xml

Large diffs are not rendered by default.

52 changes: 27 additions & 25 deletions post/index.html
@@ -83,6 +83,21 @@
<div class="row">


<h2><a href="/week12/">Week 12: Regulating Dangerous Technologies</a></h2>
<div class="post-metadata">
<span class="post-date">
<time datetime="2023-11-20 00:00:00 &#43;0000 UTC" itemprop="datePublished">20 November 2023</time>
</span>

</div>


The slides are here: Regulating Dangerous Technologies. (The posted slides include some that I didn&rsquo;t present in class but that you might find interesting, including excerpts from a talk I gave in 2018 on Mutually Assured Destruction and the Impending AI Apocalypse.)
Since one of the groups made an analogy to tobacco products, I will also take the liberty of pointing to a talk I gave at Google that makes a similar analogy: The Dragon in the Room.
<p class="text-right"><a href="/week12/">Read More…</a></p>



<h2><a href="/week11/">Week 11: Watermarking on Generative Models</a></h2>
<div class="post-metadata">
<span class="post-date">
@@ -109,7 +124,8 @@ <h2><a href="/week10/">Week 10: Data Selection for LLMs</a></h2>


(see bottom for assigned readings and questions)
Presenting Team: Haolin Liu, Xueren Ge, Ji Hyun Kim, Stephanie Schoch Blogging Team: Aparna Kishore, Elena Long, Erzhen Hu, Jingping Wan
Presenting Team: Haolin Liu, Xueren Ge, Ji Hyun Kim, Stephanie Schoch
Blogging Team: Aparna Kishore, Elena Long, Erzhen Hu, Jingping Wan
Monday, 30 October: Data Selection for Fine-tuning LLMs Question: Would more models help? We&rsquo;ve discussed so many risks and issues of GenAI so far, and one issue is that it can be difficult for us to come up with possible solutions to these problems.
<p class="text-right"><a href="/week10/">Read More…</a></p>

@@ -127,7 +143,7 @@ <h2><a href="/week9/">Week 9: Interpretability</a></h2>
(see bottom for assigned readings and questions)
Presenting Team: Anshuman Suri, Jacob Christopher, Kasra Lekan, Kaylee Liu, My Dinh
Blogging Team: Hamza Khalid, Liu Zhe, Peng Wang, Sikun Guo, Yinhan He, Zhepei Wei
Monday, 23 October: Interpretability: Overview, Limitations, &amp; Challenges Definition of Interpretability Interpretability in the context of artificial intelligence (AI) and machine learning refers to the extent to which a model&rsquo;s decisions, predictions, or internal workings can be understood and explained by humans.
<p class="text-right"><a href="/week9/">Read More…</a></p>


@@ -159,8 +175,10 @@ <h2><a href="/week7/">Week 7: GANs and DeepFakes</a></h2>


(see bottom for assigned readings and questions)
Presenting Team: Aparna Kishore, Elena Long, Erzhen Hu, Jingping Wan Blogging Team: Haochen Liu, Haolin Liu, Ji Hyun Kim, Stephanie Schoch, Xueren Ge Monday, 9 October: Generative Adversarial Networks and DeepFakes Today's topic is how to utilize generative adversarial networks to create fake images and how to identify the images generated by these models.
Generative Adversarial Network (GAN) is a revolutionary deep learning framework that pits two neural networks against each other in a creative showdown.
Presenting Team: Aparna Kishore, Elena Long, Erzhen Hu, Jingping Wan
Blogging Team: Haochen Liu, Haolin Liu, Ji Hyun Kim, Stephanie Schoch, Xueren Ge
Monday, 9 October: Generative Adversarial Networks and DeepFakes Today's topic is how to utilize generative adversarial networks to create fake images and how to identify the images generated by these models.
Generative Adversarial Network (GAN) is a revolutionary deep learning framework that pits two neural networks against each other in a creative showdown.
<p class="text-right"><a href="/week7/">Read More…</a></p>


@@ -177,7 +195,7 @@ <h2><a href="/week5/">Week 5: Hallucination</a></h2>
(see bottom for assigned readings and questions)
Hallucination (Week 5) Presenting Team: Liu Zhe, Peng Wang, Sikun Guo, Yinhan He, Zhepei Wei
Blogging Team: Anshuman Suri, Jacob Christopher, Kasra Lekan, Kaylee Liu, My Dinh
Wednesday, September 27th: Intro to Hallucination People Hallucinate Too Hallucination Definition There are three types of hallucinations according to the “Siren's Song in the AI Ocean” paper: Input-conflict: This subcategory of hallucinations deviates from user input. Context-conflict: Context-conflict hallucinations occur when a model generates contradicting information within a response.
Wednesday, September 27th: Intro to Hallucination People Hallucinate Too Hallucination Definition There are three types of hallucinations according to the “Siren's Song in the AI Ocean” paper: Input-conflict: This subcategory of hallucinations deviates from user input.
<p class="text-right"><a href="/week5/">Read More…</a></p>


@@ -209,8 +227,9 @@ <h2><a href="/week3/">Week 3: Prompting and Bias</a></h2>


(see bottom for assigned readings and questions)
Prompt Engineering (Week 3) Presenting Team: Haolin Liu, Xueren Ge, Ji Hyun Kim, Stephanie Schoch Blogging Team: Aparna Kishore, Erzhen Hu, Elena Long, Jingping Wan
(Monday, 09/11/2023) Prompt Engineering Warm-up questions What is Prompt Engineering? How is prompt-based learning different from traditional supervised learning? In-context learning and different types of prompts What is the difference between prompts and fine-tuning? When is the best to use prompts vs fine-tuning?
Prompt Engineering (Week 3) Presenting Team: Haolin Liu, Xueren Ge, Ji Hyun Kim, Stephanie Schoch
Blogging Team: Aparna Kishore, Erzhen Hu, Elena Long, Jingping Wan
(Monday, 09/11/2023) Prompt Engineering Warm-up questions What is Prompt Engineering? How is prompt-based learning different from traditional supervised learning? In-context learning and different types of prompts What is the difference between prompts and fine-tuning? When is the best to use prompts vs fine-tuning?
<p class="text-right"><a href="/week3/">Read More…</a></p>


@@ -225,28 +244,11 @@ <h2><a href="/week2/">Week 2: Alignment</a></h2>


(see bottom for assigned readings and questions)
Table of Contents (Monday, 09/04/2023) Introduction to Alignment Introduction to AI Alignment and Failure Cases Discussion Questions The Alignment Problem from a Deep Learning Perspective Group of RL-based methods Group of LLM-based methods Group of Other ML methods (Wednesday, 09/06/2023) Alignment Challenges and Solutions Opening Discussion Introduction to Red-Teaming In-class Activity (5 groups) How to use Red-Teaming? Alignment Solutions LLM Jailbreaking - Introduction LLM Jailbreaking - Demo Observations Potential Improvement Ideas Closing Remarks (by Prof.
Table of Contents (Monday, 09/04/2023) Introduction to Alignment Introduction to AI Alignment and Failure Cases Discussion Questions The Alignment Problem from a Deep Learning Perspective Group of RL-based methods Group of LLM-based methods Group of Other ML methods (Wednesday, 09/06/2023) Alignment Challenges and Solutions Opening Discussion Introduction to Red-Teaming In-class Activity (5 groups) How to use Red-Teaming?
<p class="text-right"><a href="/week2/">Read More…</a></p>



<h2><a href="/week1/">Week 1: Introduction</a></h2>
<div class="post-metadata">
<span class="post-date">
<time datetime="2023-09-03 00:00:00 &#43;0000 UTC" itemprop="datePublished">3 September 2023</time>
</span>

</div>


(see bottom for assigned readings and questions)
Attention, Transformers, and BERT Monday, 28 August
Transformers are a class of deep learning models that have revolutionized the field of natural language processing (NLP) and various other domains. The concept of transformers originated as an attempt to address the limitations of traditional recurrent neural networks (RNNs) in sequential data processing. Here&rsquo;s an overview of transformers&rsquo; evolution and significance.
Background and Origin RNNs were one of the earliest models used for sequence-based tasks in machine learning.
<p class="text-right"><a href="/week1/">Read More…</a></p>



<div class="row">
<div class="column small-12">
<ul class="pagination" role="navigation" aria-label="Pagination">

