
Learning Deep Low-Dimensional Models from High-Dimensional Data: From Theory to Practice

ICCV 2025 Tutorial

Date: TBD (full-day tutorial)

Location: TBD

(Figure: representation learning illustration)

Overview

Over the past decade, the advent of machine learning and large-scale computing has immeasurably changed the ways we process, interpret, and predict from data in imaging and computer vision. The “traditional” approach to algorithm design, built around parametric models for specific structures of signals and measurements—say, sparse and low-rank models—and the associated optimization toolkit, is now significantly enriched by data-driven, learning-based techniques, in which large-scale networks are pre-trained and then adapted to a variety of specific tasks. Nevertheless, the successes of both the modern data-driven and the classic model-based paradigms rely crucially on correctly identifying the low-dimensional structures present in real-world data, to the extent that we see the roles of learning and compression in data processing algorithms—whether explicit or implicit, as with deep networks—as inextricably linked.

As such, this is a timely tutorial that uniquely bridges low-dimensional models with deep learning in imaging and vision. The tutorial will show how:

  1. Low-dimensional models and principles provide a valuable lens for formulating problems and understanding the behavior of modern deep models in imaging and computer vision; and how
  2. Ideas from low-dimensional models can provide valuable guidance for designing new parameter-efficient, robust, and interpretable deep learning models for computer vision problems in practice.

We will begin by introducing fundamental low-dimensional models (e.g., basic sparse and low-rank models) with motivating computer vision applications. Based on these developments, we will discuss strong conceptual, algorithmic, and theoretical connections between low-dimensional structures and deep models, providing new perspectives to understand state-of-the-art deep models in terms of learned representations and generative models. Finally, we will demonstrate that these connections can lead to new principles for designing deep networks and learning low-dimensional structures in computer vision, with both clear interpretability and practical benefits. We will conclude with a panel discussion with expert researchers from academia and industry on what role low-dimensional models can and should play in our current age of generative AI and foundation models for computer vision.

Speakers

Yi Ma

UC Berkeley
HKU IDS

Qing Qu

UMichigan

Liyue Shen

UMichigan

Atlas Wang

UT Austin

Zhihui Zhu

Ohio State

Panelists

Berivan Isik

Google

Vladimir Pavlovic

NSF & Rutgers University

Schedule

The tutorial will take place at ICCV 2025.

Lecture | Speaker | Time

Session I: Introduction of Basic Low-Dimensional Models in Vision
Introduction of Basic Low-Dimensional Models (Lecture Abstract) | Yi Ma | 9:00-9:45 am
Introduction of Low-Dimensional Models in Deep Learning (Lecture Abstract) | Atlas Wang | 9:45-10:30 am

Session II: Understanding Low-Dimensional Structures in Representation Learning
Understanding Low-Dimensional Representations in the Last Layer (Lecture Abstract) | Zhihui Zhu | 10:45-11:30 am
Understanding Low-Dimensional Representations in Intermediate Layers (Lecture Abstract) | Qing Qu | 11:30 am-12:00 pm

Session III: Understanding Low-Dimensional Structures in Diffusion Generative Models
Low-Dimensional Models for Understanding Generalization in Diffusion Models (Lecture Abstract) | Qing Qu | 1:15-2:00 pm
Exploring Low-Dimensional Structures for Controlling Diffusion Models (Lecture Abstract) | Liyue Shen | 2:00-2:45 pm

Session IV: Designing Deep Networks for Pursuing Low-Dimensional Structures
ReduNet: A White-Box Deep Network from the Principle of Maximizing Rate Reduction (Lecture Abstract) | Yi Ma | 3:00-3:45 pm
White-Box Transformers via Sparse Rate Reduction (Lecture Abstract) | Sam Buchanan | 3:45-4:30 pm

Session V: Panel Discussion (led by Liyue Shen) | 4:45-5:45 pm

Materials

Materials for the tutorial will be made available closer to the conference date.