Principles of AI: LLMs (UPenn, Stat 9911, Spring 2025)

Instructor: Edgar Dobriban

This course explores Large Language Models (LLMs), from the basics to cutting-edge research: LLM architectures, training paradigms (pre- and post-training, alignment), test-time computation, reasoning, safety and robustness (jailbreaking, oversight, uncertainty), representations, and interpretability (circuits).

Reference Materials

Lectures

Lecture  Topic
01       Motivation and context
02       AI: goals and definitions; the role of LLMs
03       LLM architectures: attention and transformers (sketched below)
04       Insight into transformer architectures
05       Position encoding (sketched below)
06       Specific LLM families: GPT, Llama, DeepSeek, LLM360
07       Training LLMs: pre- and post-training, supervised fine-tuning, learning from preferences (PPO, DPO, GRPO)
08       Test-time computation: sampling, prompting, reasoning (sketched below)
09       Empirical behaviors: scaling laws, emergence, memorization, super-phenomena
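
As a taste of lecture 03, here is a minimal NumPy sketch of scaled dot-product attention, the core operation of the transformer. This is an illustration only, not course code; the function name and toy shapes are our own, and masking and multiple heads are omitted for clarity.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V.
    Single head, no masking, for clarity."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                      # (n_q, n_k) similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)       # row-wise softmax
    return weights @ V                                   # weighted average of values

# Toy example: 4 query tokens attending over 4 key/value tokens, d_k = 8.
rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((4, 8)) for _ in range(3))
print(scaled_dot_product_attention(Q, K, V).shape)       # (4, 8)
```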
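For lecture 05, a sketch of the fixed sinusoidal position encodings of Vaswani et al. (2017), one of several schemes covered. Again illustrative: the helper name and toy sizes are hypothetical.

```python
import numpy as np

def sinusoidal_positions(n_pos, d_model):
    """PE[p, 2i] = sin(p / 10000^(2i/d)), PE[p, 2i+1] = cos(p / 10000^(2i/d)).
    Assumes d_model is even."""
    pos = np.arange(n_pos)[:, None]                 # (n_pos, 1) positions
    i = np.arange(0, d_model, 2)[None, :]           # even embedding dimensions
    angles = pos / np.power(10000.0, i / d_model)   # (n_pos, d_model/2)
    pe = np.zeros((n_pos, d_model))
    pe[:, 0::2] = np.sin(angles)                    # even dims get sines
    pe[:, 1::2] = np.cos(angles)                    # odd dims get cosines
    return pe

print(sinusoidal_positions(16, 8).shape)            # (16, 8)
```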
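For lecture 08, a sketch of temperature sampling, the simplest knob in test-time computation: logits are divided by a temperature before the softmax, so low temperatures approach greedy decoding and high temperatures increase diversity. Illustrative only; names and the toy vocabulary are our own.

```python
import numpy as np

def sample_next_token(logits, temperature=1.0, rng=None):
    """Sample a token id from logits after temperature scaling."""
    rng = rng or np.random.default_rng()
    z = np.asarray(logits, dtype=float) / max(temperature, 1e-8)
    z -= z.max()                                    # numerical stability
    p = np.exp(z) / np.exp(z).sum()                 # softmax over the vocabulary
    return rng.choice(len(p), p=p)

# Toy vocabulary of 5 tokens; low temperature concentrates mass on the argmax.
logits = [2.0, 1.0, 0.5, 0.1, -1.0]
print(sample_next_token(logits, temperature=0.7))
```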

Student Presentations

Additional Resources

Links to other courses

Videos

  • Andrej Karpathy's Neural Networks: Zero to Hero video lectures. A 100% code-based, hands-on tutorial on implementing basic autodiff, neural nets, language models, and a GPT-2 mini (124M parameters).

Key papers

Tutorials, books and book chapters

Workshops, conferences

NeurIPS, ICML, ICLR
