Lectures


Lecture 1/3 “Self-supervised Learning 1”

Abstract TBA

Lecture 2/3 “Self-supervised Learning 2”

Abstract TBA

Lecture 3/3 “Vision-Language Learning”

Abstract TBA



Lecture 1/3 “The AI-driven Hospital of the Future”

Abstract TBA

Lecture 2/3 “Foundations of Attention Mechanisms and Transformers”

Abstract TBA

Lecture 3/3 “AI Safety: Challenges and Solutions”

Abstract TBA



Lecture 1/4 “Transformers Part 1”

Abstract TBA

Lecture 2/4 “Transformers Part 2”

Abstract TBA

Lecture 3/4 “Vision and VLMs Part 1”

Abstract TBA

Lecture 4/4 “Vision and VLMs Part 2”

Abstract TBA



Lecture 1/3 “Foundation Models for Earth Systems”

Abstract TBA

Lecture 2/3 “Foundation Models for Earth Systems”

Abstract TBA

Lecture 3/3 “Foundation Models for Earth Systems”

Abstract TBA



Lecture 1/3 “Learning Low-Dimensional Linear and Independent Structures”

Abstract TBA

  • Lecture 1: “History and Principles of Intelligence” (Yi Ma)
  • Lecture 2: “Learning Low-Dimensional Linear and Independent Structures” (Sam Buchanan)
  • Lecture 3: “Pursuing General Low-Dimensional Structures via Denoising” (Druv Pai)
  • Lecture 4: “Pursuing General Low-Dimensional Structures via Compression” (Yi Ma)
  • Lecture 5: “Deep Representations via Unrolled Optimization” (Sam Buchanan)
  • Lecture 6: “White-Box Deep Network Architectures via Compression and Optimization” (Druv Pai)
  • Lecture 7: “Consistent Representations via Autoencoding” (Sam Buchanan)
  • Lecture 8: “Self-Consistent Representations via Closed-Loop Transcription” (Druv Pai)
  • Lecture 9: “Future Directions for Machine Intelligence” (Yi Ma)
Lecture 2/3 “Deep Representations via Unrolled Optimization”

Abstract TBA

Lecture 3/3 “Consistent Representations via Autoencoding”

Abstract TBA



Lecture 1/3 “Foundations of GNN Expressiveness”

Abstract TBA

Lecture 2/3 “Beyond Standard GNNs: Increasing Expressiveness”

Abstract TBA

Lecture 3/3 “Expressiveness of GNNs in Practice”

Abstract TBA



Lecture 1/3 “From Large Language Models to Reasoning Models”

Abstract TBA

Lecture 2/3 “Multi-Agent Systems”

Abstract TBA

Lecture 3/3 “Applications of Foundation Models”

Abstract TBA



Lecture 1/3 “From GANs to Diffusion Models for Image Synthesis”

Abstract TBA

Lecture 2/3 “Control and Guidance in Diffusion Models”

Abstract TBA

Lecture 3/3 “Conditional Generation of Multimodal Data”

Abstract TBA



Lecture 1/3 “Introduction to Graph Neural Networks”

Abstract TBA

Lecture 2/3 “Graph Neural Networks for Physical Simulation”

Abstract TBA

Lecture 3/3 “Compositional World Models”

Abstract TBA



Lecture 1/3 “History and Principles of Intelligence”

Abstract TBA

  • Lecture 1: “History and Principles of Intelligence” (Yi Ma)
  • Lecture 2: “Learning Low-Dimensional Linear and Independent Structures” (Sam Buchanan)
  • Lecture 3: “Pursuing General Low-Dimensional Structures via Denoising” (Druv Pai)
  • Lecture 4: “Pursuing General Low-Dimensional Structures via Compression” (Yi Ma)
  • Lecture 5: “Deep Representations via Unrolled Optimization” (Sam Buchanan)
  • Lecture 6: “White-Box Deep Network Architectures via Compression and Optimization” (Druv Pai)
  • Lecture 7: “Consistent Representations via Autoencoding” (Sam Buchanan)
  • Lecture 8: “Self-Consistent Representations via Closed-Loop Transcription” (Druv Pai)
  • Lecture 9: “Future Directions for Machine Intelligence” (Yi Ma)
Lecture 2/3 “Pursuing General Low-Dimensional Structures via Compression”

Abstract TBA

Lecture 3/3 “Future Directions for Machine Intelligence”

Abstract TBA



Lecture TBA

Abstract TBA



Lecture 1/3 “Detecting AI-Generated Text”

1. Detecting AI-generated text
2. SynthID: a watermark for LLM-generated text
3. Neural text generation: history to present

Lecture 2/3 “SynthID: a Watermark for LLM-Generated Text”


Sumanth Dathathri, Abigail See, …, Demis Hassabis & Pushmeet Kohli, “Scalable watermarking for identifying large language model outputs”, Nature 634, 818–823 (2024).

https://www.nature.com/articles/s41586-024-08025-4
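The generic idea behind generative-text watermark detection can be sketched in a few lines. The sketch below is a hypothetical illustration of the simpler “green-list” family of schemes, not SynthID’s actual tournament-sampling algorithm: during generation, a keyed pseudorandom function of the context secretly biases token choices, and the detector measures how strongly a candidate text exhibits that bias.

```python
import hashlib
import math

def greenlist_zscore(tokens, key, vocab_size=50_257, green_ratio=0.5):
    """Score a token-id sequence for a keyed 'green-list' watermark.

    Illustrative only: SynthID itself uses tournament sampling.
    Each step's 'green' subset of the vocabulary is derived from a
    keyed hash of the previous token; watermarked generation would
    over-represent green tokens, and the detector quantifies that.
    """
    hits = 0
    n = len(tokens) - 1
    for prev, tok in zip(tokens, tokens[1:]):
        # Keyed PRF of the previous token defines this step's green set.
        h = hashlib.sha256(f"{key}:{prev}".encode()).digest()
        offset = int.from_bytes(h[:8], "big")
        if (tok + offset) % vocab_size < green_ratio * vocab_size:
            hits += 1
    # z-score of the green count against the unwatermarked null rate.
    z = (hits - green_ratio * n) / math.sqrt(green_ratio * (1 - green_ratio) * n)
    return hits / n, z
```

On unwatermarked text the green fraction stays near `green_ratio` and the z-score near zero; a detector flags text whose z-score exceeds a chosen threshold.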

Lecture 3/3 “Neural Text Generation: History to Present”




Lecture 1/3 “Thinking about thinking: Metacognitive Capabilities of LLMs”

Metacognitive knowledge refers to humans’ intuitive knowledge of their own thinking and reasoning processes. Today’s best LLMs clearly possess some reasoning processes. The paper gives evidence that they also have metacognitive knowledge, including the ability to name the skills and procedures to apply for a given task. We explore this primarily in the context of math reasoning, developing a prompt-guided interaction procedure that gets a powerful LLM to assign sensible skill labels to math questions and then perform semantic clustering to obtain coarser families of skill labels. These coarse skill labels appear interpretable to humans.

To validate that these skill labels are meaningful and relevant to the LLM’s reasoning processes, we perform the following experiments. (a) We ask GPT-4 to assign skill labels to training questions in the math datasets GSM8K and MATH. (b) When using an LLM to solve a test question, we present it with the full list of skill labels and ask it to identify the skill needed; it is then shown randomly selected exemplar solved questions associated with that skill label. This improves accuracy on GSM8K and MATH for several strong LLMs, including code-assisted models. The methodology is domain-agnostic, even though this article applies it to math problems.
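The test-time half of this pipeline can be sketched as follows. Everything here is hypothetical for illustration: the skill names and exemplars are made up, and in the actual procedure the LLM itself selects the skill from the full label list, whereas this sketch takes the skill as an argument to stay self-contained.

```python
import random

# Hypothetical exemplar store from stage (a): training questions were
# skill-labeled by a strong LLM, then labels were clustered into
# coarser families. Contents are invented for illustration.
SKILL_EXEMPLARS = {
    "linear_equations": [
        "Q: Solve 2x + 3 = 11.\nA: 2x = 8, so x = 4.",
        "Q: Solve 5y - 5 = 10.\nA: 5y = 15, so y = 3.",
    ],
    "ratio_and_proportion": [
        "Q: If 3 pens cost $6, what do 7 pens cost?\nA: $2 each, so $14.",
    ],
}

def skill_conditioned_prompt(question, skill, k=2, rng=random):
    """Build the stage-(b) few-shot prompt for one test question.

    Randomly selected solved exemplars sharing the predicted skill
    label are prepended before the new question.
    """
    pool = SKILL_EXEMPLARS[skill]
    shots = rng.sample(pool, k=min(k, len(pool)))
    return "\n\n".join(shots + [f"Q: {question}\nA:"])
```

The resulting prompt is then sent to the solver LLM; the claimed accuracy gains come from the exemplars being skill-matched rather than drawn uniformly from the training set.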



Lecture 2/3 “Thinking about thinking: Metacognitive Capabilities of LLMs”

Abstract (see Lecture 1/3).

Lecture 3/3 “Thinking about thinking: Metacognitive Capabilities of LLMs”

Abstract (see Lecture 1/3).



Tutorial 1/3

Abstract TBA

Tutorial 2/3

Abstract TBA

Tutorial 3/3

Abstract TBA





Tutorials


Tutorial 1/3 “Pursuing General Low-Dimensional Structures via Denoising”

Abstract TBA

  • Lecture 1: “History and Principles of Intelligence” (Yi Ma)
  • Lecture 2: “Learning Low-Dimensional Linear and Independent Structures” (Sam Buchanan)
  • Lecture 3: “Pursuing General Low-Dimensional Structures via Denoising” (Druv Pai)
  • Lecture 4: “Pursuing General Low-Dimensional Structures via Compression” (Yi Ma)
  • Lecture 5: “Deep Representations via Unrolled Optimization” (Sam Buchanan)
  • Lecture 6: “White-Box Deep Network Architectures via Compression and Optimization” (Druv Pai)
  • Lecture 7: “Consistent Representations via Autoencoding” (Sam Buchanan)
  • Lecture 8: “Self-Consistent Representations via Closed-Loop Transcription” (Druv Pai)
  • Lecture 9: “Future Directions for Machine Intelligence” (Yi Ma)
Tutorial 2/3 “White-Box Deep Network Architectures via Compression and Optimization”

Abstract TBA

Tutorial 3/3 “Self-Consistent Representations via Closed-Loop Transcription”

Abstract TBA
