CVPR 2026 Tutorial

The Road to Convergence:
Evolution of Unified Multimodal Models

A comprehensive tutorial on the architecture design, representation learning, training dynamics, and evaluation of unified multimodal models that integrate understanding and generation within a single framework.

Conference CVPR 2026
Duration Half-Day (~3.5 hrs)
Format Hybrid (In-person + Zoom)
Audience 100–300 Attendees
Three Central Questions
We structure this tutorial around three fundamental questions that define the design space of unified multimodal models (UMMs).
🏗

How to Model?

A systematic taxonomy of UMM architectures — External Expert Integration, Modular Joint Modeling, and End-to-End Unified Modeling — with trade-off analysis between autoregressive, diffusion, and hybrid approaches.

🧩

How to Represent?

The "Unified Tokenizer" debate: continuous representations (e.g., CLIP) vs. discrete tokens (e.g., VQ-VAE), and hybrid encoding strategies balancing semantic understanding with generative fidelity.

How to Train?

The full training lifecycle — from constructing interleaved image-text data to unified pre-training objectives and advanced post-training alignment methods such as DPO and GRPO.

Tutorial Outline
A structured half-day journey from foundational motivations to advanced architectures and practical training recipes.
Session 1 · 30 min

Introduction & Motivation

Tracing the evolution of multimodal AI from isolated expertise to Unified Multimodal Models. We introduce the core motivations driving unification — particularly the mutual reinforcement between understanding and generation — and provide a rigorous definition of UMMs.

Session 2 · 45 min

Modeling Architectures

A systematic taxonomy including External Expert Integration, Modular Joint Modeling, and End-to-End Unified Modeling. Deep dive into trade-offs between autoregressive, diffusion, and emerging AR-Diffusion hybrid approaches.
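To make the trade-off concrete, the sketch below (a generic, illustrative PyTorch snippet, not code from the tutorial materials; all names are ours) contrasts the two core training objectives: next-token prediction for autoregressive models and noise prediction for diffusion models. AR-Diffusion hybrids combine both within one backbone.

```python
# Minimal, hypothetical sketch of the two objectives (PyTorch-style).
import torch
import torch.nn.functional as F

def autoregressive_loss(logits, target_tokens):
    """Next-token prediction over discrete tokens (text or visual codes).

    logits: (batch, seq_len, vocab_size); target_tokens: (batch, seq_len)
    """
    return F.cross_entropy(logits.reshape(-1, logits.size(-1)),
                           target_tokens.reshape(-1))

def diffusion_loss(denoiser, x0, timesteps, alphas_cumprod):
    """Noise-prediction objective over continuous image latents.

    denoiser: callable predicting the injected noise; x0: clean latents (batch, ...);
    timesteps: (batch,) sampled steps; alphas_cumprod: (num_steps,) noise schedule.
    """
    noise = torch.randn_like(x0)
    a = alphas_cumprod[timesteps].view(-1, *([1] * (x0.dim() - 1)))
    x_t = a.sqrt() * x0 + (1.0 - a).sqrt() * noise        # forward noising
    return F.mse_loss(denoiser(x_t, timesteps), noise)    # predict that noise
```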

Session 3 · 45 min

The Unified Tokenizer Challenge

A comparison of continuous representations and discrete tokenization schemes, followed by a review of encoding/decoding strategies and state-of-the-art hybrid approaches (cascade and dual-branch designs) that bridge semantic richness and generative fidelity.
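As a concrete illustration of the discrete side of this debate, the snippet below sketches a generic VQ-VAE-style quantization step (an illustrative sketch with our own names, not any specific model's implementation); the closing comment notes how the continuous, CLIP-style path differs.

```python
# Hypothetical sketch of the discrete-vs-continuous representation choice.
import torch

def quantize(features, codebook):
    """Discrete path (VQ-VAE style): snap each feature to its nearest codebook entry.

    features: (batch, num_patches, dim) continuous encoder outputs
    codebook: (codebook_size, dim) learned embedding table
    Returns token ids usable by an autoregressive generator, plus the quantized vectors.
    """
    flat = features.reshape(-1, features.size(-1))               # (batch*num_patches, dim)
    dists = torch.cdist(flat, codebook)                          # L2 distance to every code
    token_ids = dists.argmin(dim=-1).view(features.shape[:-1])   # (batch, num_patches)
    quantized = codebook[token_ids]                              # (batch, num_patches, dim)
    return token_ids, quantized

# Continuous path (CLIP-style): keep `features` as-is and project them into the
# language model's embedding space; semantics are preserved, but generation then
# needs a separate decoder (e.g., a diffusion head) to map back to pixels.
```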

Session 4 · 45 min

Training Recipes & Data

Constructing high-quality modality-interleaved datasets, unified pre-training objectives, and advanced post-training alignment methods including preference-based approaches such as DPO and GRPO.
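As one concrete example, the preference-based DPO objective covered in this session fits in a few lines; the snippet below is a generic formulation with illustrative names, not the exact recipe presented in the tutorial.

```python
# Minimal, hypothetical sketch of the Direct Preference Optimization (DPO) loss.
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps, policy_rejected_logps,
             ref_chosen_logps, ref_rejected_logps, beta=0.1):
    """Each argument is a (batch,) tensor of summed log-probs of a response
    (e.g., an image-token sequence) under the policy or the frozen reference model."""
    chosen_margin = policy_chosen_logps - ref_chosen_logps
    rejected_margin = policy_rejected_logps - ref_rejected_logps
    # Push the preferred response's margin above the rejected one's.
    return -F.logsigmoid(beta * (chosen_margin - rejected_margin)).mean()
```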

Session 5 · 30 min

Evaluation, Applications & Future Directions

Reviewing existing benchmarks for standardized evaluation, discussing real-world applications in robotics and autonomous driving, and highlighting open challenges including scalable unified tokenizers and unified world models.

Session 6 · 15 min

Unified Codebase & Integration

A practical walkthrough of our unified multimodal codebase, explaining how core components — tokenizers, multimodal encoders, and generative backbones — are organized and connected in practice.
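The skeleton below is a hypothetical sketch (module and class names are illustrative, not the released codebase layout) of how these three components could be composed.

```python
# Illustrative composition only; not the actual repository structure.
import torch
import torch.nn as nn

class UnifiedMultimodalModel(nn.Module):
    def __init__(self, image_tokenizer, text_embedder, backbone, image_decoder):
        super().__init__()
        self.image_tokenizer = image_tokenizer  # discrete (VQ) or continuous visual encoder
        self.text_embedder = text_embedder      # text token embedding layer
        self.backbone = backbone                # shared autoregressive / diffusion core
        self.image_decoder = image_decoder      # maps generated visual tokens back to pixels

    def forward(self, text_tokens, images=None):
        parts = [self.text_embedder(text_tokens)]
        if images is not None:                               # understanding path
            parts.append(self.image_tokenizer(images))
        hidden = self.backbone(torch.cat(parts, dim=1))      # joint sequence modeling
        return hidden  # for generation, pass the visual slice of `hidden` to image_decoder
```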

Meet the Team
Our tutorial is led by researchers from academia and industry with extensive experience in multimodal foundation models.

Jindong Wang

Assistant Professor, William & Mary
Presenter
Faculty member of the Future of Life Institute and former Senior Researcher at Microsoft Research Asia. Author of 60+ papers with 23,000+ citations (h-index 54), listed among the World's Top 2% Highly Cited Scientists. Extensive tutorial experience at IJCAI, WSDM, KDD, AAAI, and CVPR.
jd92.wang

Hao Chen

Research Scientist, Google DeepMind
Presenter
Ph.D. from Carnegie Mellon University (advised by Prof. Bhiksha Raj). Research on data-centric learning for reliable foundation models, including pre-training data imperfections, catastrophic inheritance, and multimodal generalization. Published at NeurIPS, ICML, and ICLR.

Jiakui Hu

Ph.D. Student, Peking University
Presenter
Research on unified models, computational imaging, and inductive biases in visual foundation models. First-author papers at ICCV, CVPR, ICLR, ICML, and AAAI. Reviewer for major conferences and journals.

Zhaolong Su

Ph.D. Student, William & Mary
Code Preparation
Conducts research on unified multimodal training and foundation models in the Department of Data Science.

Sharon Li

Associate Professor, UW–Madison
Advisor
Research on reliable and safe AI systems. Recipient of an Alfred P. Sloan Research Fellowship and the MIT Technology Review Innovators Under 35 award. Ph.D. from Cornell University, postdoc at Stanford University.
Key Topics
From architectural paradigms to real-world deployment, the tutorial covers the full spectrum of unified multimodal model research.
01

Evolution of Multimodal Models

From isolated multimodal understanding or generation systems to unified multimodal foundation models capable of handling both tasks simultaneously.

02

Modeling Paradigms for UMMs

A taxonomy of architectures including External Expert Integration, Modular Joint Modeling, and End-to-End Unified Modeling, with comparisons between autoregressive, diffusion, and hybrid approaches.

03

Unified Tokenizer & Representation Design

Continuous versus discrete representations, their advantages and limitations, and emerging hybrid encoding strategies that balance semantic understanding and generative fidelity.

04

Training Lifecycle & Alignment

Construction of modality-interleaved datasets, unified pre-training objectives, and post-training alignment methods such as DPO and GRPO.

05

Benchmarks, Applications & Open Challenges

Evaluation protocols, real-world applications in robotics and autonomous driving, and future directions such as scalable unified tokenizers and unified world models.

Selected Publications
Representative publications by the organizers and foundational research in unified multimodal models.
Jiakui Hu, et al. Unified Multimodal Understanding and Generation Models: Advances, Challenges, and Opportunities. Survey
Jindong Wang, Hao Chen, et al. On Fairness of Unified Multimodal Large Language Models for Image Generation. NeurIPS 2025
Jindong Wang, Hao Chen, et al. Is Your (Reasoning) Multimodal Language Model Vulnerable toward Distractions? AAAI 2026
Hao Chen, et al. ImageFolder: Autoregressive Image Generation with Folded Tokens. ICLR 2025
Hao Chen, et al. Masked Autoencoders Are Effective Tokenizers for Diffusion Models. ICML 2025
Sharon Li, et al. Understanding Multimodal LLMs Under Distribution Shifts: An Information-Theoretic Approach. ICML 2025 Oral
Jindong Wang, et al. Open-Vocabulary Calibration for Vision–Language Models. ICML 2024
Zhaolong Su, Hao Chen, Jindong Wang, et al. UniGame: Turning a Unified Multimodal Model Into Its Own Adversary. Preprint
Materials & Resources
We are committed to open science and ensuring reproducibility. All materials will be publicly available.
📑

Slides

All presentation slides will be made publicly available on this website following the event.

Coming Soon
📚

Bibliography

An annotated compilation of all references discussed in the tutorial as a comprehensive reading list.

Coming Soon
💻

Codebase

Open-source unified multimodal codebase with annotated pointers to models (e.g., Emu, Janus) and datasets.

Coming Soon