Rebuilt site
evansuva committed Nov 18, 2023
1 parent 1077eab commit 12628a2
Showing 17 changed files with 151 additions and 165 deletions.
Binary file added images/week11/Day2/Slide7.png
96 changes: 46 additions & 50 deletions index.html

Large diffs are not rendered by default.

41 changes: 18 additions & 23 deletions index.xml

Large diffs are not rendered by default.

24 changes: 10 additions & 14 deletions post/index.html
@@ -94,7 +94,7 @@ <h2><a href="/week11/">Week 11: Watermarking on Generative Models</a></h2>

Presenting Team: Tseganesh Beyene Kebede, Zihan Guan, Xindi Guo, Mengxuan Hu
Blogging Team: Ajwa Shahid, Caroline Gihlstorf, Changhong Yang, Hyeongjin Kim, Sarah Boyce
- Monday, November 6: Watermarking LLM Output Recent instances of AI-generated text passing for human text and the writing of students being misattributed to AI suggest the need for a tool to distinguish between human-written and AI-generated text. The presenters also noted that the increase in the amount of AI-generated text online is a risk for training future LLMs on this data.
+ Monday, November 6: Watermarking LLM Outputs Recent instances of AI-generated text passing for human text and the writing of students being misattributed to AI suggest the need for a tool to distinguish between human-written and AI-generated text. The presenters also noted that the increase in the amount of AI-generated text online is a risk for training future LLMs on this data.
<p class="text-right"><a href="/week11/">Read More…</a></p>


@@ -109,8 +109,7 @@ <h2><a href="/week10/">Week 10: Data Selection for LLMs</a></h2>


(see bottom for assigned readings and questions)
- Presenting Team: Haolin Liu, Xueren Ge, Ji Hyun Kim, Stephanie Schoch
- Blogging Team: Aparna Kishore, Elena Long, Erzhen Hu, Jingping Wan
+ Presenting Team: Haolin Liu, Xueren Ge, Ji Hyun Kim, Stephanie Schoch Blogging Team: Aparna Kishore, Elena Long, Erzhen Hu, Jingping Wan
Monday, 30 October: Data Selection for Fine-tuning LLMs Question: Would more models help? We&rsquo;ve discussed so many risks and issues of GenAI so far and one question is that it can be difficult for us to come up with a possible solution to these problems.
<p class="text-right"><a href="/week10/">Read More…</a></p>

@@ -128,7 +127,7 @@ <h2><a href="/week9/">Week 9: Interpretability</a></h2>
(see bottom for assigned readings and questions)
Presenting Team: Anshuman Suri, Jacob Christopher, Kasra Lekan, Kaylee Liu, My Dinh
Blogging Team: Hamza Khalid, Liu Zhe, Peng Wang, Sikun Guo, Yinhan He, Zhepei Wei
- Monday, 23 October: Interpretability: Overview, Limitations, &amp; Challenges Definition of Interpretability Interpretability in the context of artificial intelligence (AI) and machine learning refers to the extent to which a model&rsquo;s decisions, predictions, or internal workings can be understood and explained by humans.
+ Monday, 23 October: Interpretability: Overview, Limitations, &amp; Challenges Definition of Interpretability Interpretability in the context of artificial intelligence (AI) and machine learning refers to the extent to which a model&rsquo;s decisions, predictions, or internal workings can be understood and explained by humans.
<p class="text-right"><a href="/week9/">Read More…</a></p>


@@ -160,10 +159,8 @@ <h2><a href="/week7/">Week 7: GANs and DeepFakes</a></h2>


(see bottom for assigned readings and questions)
- Presenting Team: Aparna Kishore, Elena Long, Erzhen Hu, Jingping Wan
- Blogging Team: Haochen Liu, Haolin Liu, Ji Hyun Kim, Stephanie Schoch, Xueren Ge
- Monday, 9 October: Generative Adversarial Networks and DeepFakes Today's topic is how to utilize generative adversarial networks to create fake images and how to identify the images generated by these models.
- Generative Adversarial Network (GAN) is a revolutionary deep learning framework that pits two neural networks against each other in a creative showdown.
+ Presenting Team: Aparna Kishore, Elena Long, Erzhen Hu, Jingping Wan Blogging Team: Haochen Liu, Haolin Liu, Ji Hyun Kim, Stephanie Schoch, Xueren Ge Monday, 9 October: Generative Adversarial Networks and DeepFakes Today's topic is how to utilize generative adversarial networks to create fake images and how to identify the images generated by these models.
+ Generative Adversarial Network (GAN) is a revolutionary deep learning framework that pits two neural networks against each other in a creative showdown.
<p class="text-right"><a href="/week7/">Read More…</a></p>


@@ -180,7 +177,7 @@ <h2><a href="/week5/">Week 5: Hallucination</a></h2>
(see bottom for assigned readings and questions)
Hallucination (Week 5) Presenting Team: Liu Zhe, Peng Wang, Sikun Guo, Yinhan He, Zhepei Wei
Blogging Team: Anshuman Suri, Jacob Christopher, Kasra Lekan, Kaylee Liu, My Dinh
- Wednesday, September 27th: Intro to Hallucination People Hallucinate Too Hallucination Definition There are three types of hallucinations according to the “Siren's Song in the AI Ocean” paper: Input-conflict: This subcategory of hallucinations deviates from user input.
+ Wednesday, September 27th: Intro to Hallucination People Hallucinate Too Hallucination Definition There are three types of hallucinations according to the “Siren's Song in the AI Ocean” paper: Input-conflict: This subcategory of hallucinations deviates from user input. Context-conflict: Context-conflict hallucinations occur when a model generates contradicting information within a response.
<p class="text-right"><a href="/week5/">Read More…</a></p>


@@ -212,9 +209,8 @@ <h2><a href="/week3/">Week 3: Prompting and Bias</a></h2>


(see bottom for assigned readings and questions)
- Prompt Engineering (Week 3) Presenting Team: Haolin Liu, Xueren Ge, Ji Hyun Kim, Stephanie Schoch
- Blogging Team: Aparna Kishore, Erzhen Hu, Elena Long, Jingping Wan
- (Monday, 09/11/2023) Prompt Engineering Warm-up questions What is Prompt Engineering? How is prompt-based learning different from traditional supervised learning? In-context learning and different types of prompts What is the difference between prompts and fine-tuning? When is the best to use prompts vs fine-tuning?
+ Prompt Engineering (Week 3) Presenting Team: Haolin Liu, Xueren Ge, Ji Hyun Kim, Stephanie Schoch Blogging Team: Aparna Kishore, Erzhen Hu, Elena Long, Jingping Wan
+ (Monday, 09/11/2023) Prompt Engineering Warm-up questions What is Prompt Engineering? How is prompt-based learning different from traditional supervised learning? In-context learning and different types of prompts What is the difference between prompts and fine-tuning? When is the best to use prompts vs fine-tuning?
<p class="text-right"><a href="/week3/">Read More…</a></p>


@@ -229,7 +225,7 @@ <h2><a href="/week2/">Week 2: Alignment</a></h2>


(see bottom for assigned readings and questions)
- Table of Contents (Monday, 09/04/2023) Introduction to Alignment Introduction to AI Alignment and Failure Cases Discussion Questions The Alignment Problem from a Deep Learning Perspective Group of RL-based methods Group of LLM-based methods Group of Other ML methods (Wednesday, 09/06/2023) Alignment Challenges and Solutions Opening Discussion Introduction to Red-Teaming In-class Activity (5 groups) How to use Red-Teaming?
+ Table of Contents (Monday, 09/04/2023) Introduction to Alignment Introduction to AI Alignment and Failure Cases Discussion Questions The Alignment Problem from a Deep Learning Perspective Group of RL-based methods Group of LLM-based methods Group of Other ML methods (Wednesday, 09/06/2023) Alignment Challenges and Solutions Opening Discussion Introduction to Red-Teaming In-class Activity (5 groups) How to use Red-Teaming? Alignment Solutions LLM Jailbreaking - Introduction LLM Jailbreaking - Demo Observations Potential Improvement Ideas Closing Remarks (by Prof.
<p class="text-right"><a href="/week2/">Read More…</a></p>


@@ -245,7 +241,7 @@ <h2><a href="/week1/">Week 1: Introduction</a></h2>

(see bottom for assigned readings and questions)
Attention, Transformers, and BERT Monday, 28 August
- Transformers1 are a class of deep learning models that have revolutionized the field of natural language processing (NLP) and various other domains. The concept of transformers originated as an attempt to address the limitations of traditional recurrent neural networks (RNNs) in sequential data processing. Here&rsquo;s an overview of transformers' evolution and significance.
+ Transformers1 are a class of deep learning models that have revolutionized the field of natural language processing (NLP) and various other domains. The concept of transformers originated as an attempt to address the limitations of traditional recurrent neural networks (RNNs) in sequential data processing. Here&rsquo;s an overview of transformers&rsquo; evolution and significance.
Background and Origin RNNs2 were one of the earliest models used for sequence-based tasks in machine learning.
<p class="text-right"><a href="/week1/">Read More…</a></p>

