EzgiKorkmaz/AI-Safety
AAAI 2025 Tutorial

AI Safety: From Reinforcement Learning to Foundation Models

From learning to make sequential decisions from raw high-dimensional data to interacting with humans solely by learning a model of probability distributions over tokens, i.e., large language models, the machine learning field is making immense progress toward intelligent agents that make important decisions for humanity in everyday life. Advances in reinforcement learning further fuel research on foundation models that aim to build large language agents that can reason and are responsible, aligned, unbiased, and robust. While these models are already being deployed in high-stakes decision making with societal impact, concerns about their reliability, robustness, and safety remain an open problem.

This tutorial will introduce a principled analysis of current learning paradigms with respect to responsible, robust, and safe machine learning, and will further reveal how and why these paradigms fall short of providing safety, robustness, and generalization.

Tutorial Website: https://sites.google.com/view/aisafety-aaai2025

Organizer: Ezgi Korkmaz