Ideas in Inference-time Scaling can Benefit Generative Pre-training Algorithms
Abstract
Recent years have seen significant advancements in foundation models through generative pre-training, yet algorithmic innovation in this space has largely stagnated around autoregressive models for discrete signals and diffusion models for continuous signals. This stagnation creates a bottleneck that prevents us from fully unlocking the potential of rich multi-modal data, which in turn limits progress toward multi-modal intelligence. We argue that an inference-first perspective, which prioritizes scaling efficiency during inference time across sequence length and refinement steps, can inspire novel generative pre-training algorithms. Using Inductive Moment Matching (IMM) as a concrete example, we demonstrate how addressing limitations in diffusion models' inference process through targeted modifications yields a stable, single-stage algorithm that achieves superior sample quality with over an order of magnitude greater inference efficiency.
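To make the inference-efficiency claim concrete, below is a minimal sketch of a sampler in which the number of refinement steps is a free inference-time parameter. The function `sample` and the stand-in model `f` are hypothetical illustrations, assuming a network f(x_t, t, s) that maps a noisy sample at time t directly to an estimate at an earlier time s; this sketches the general interface such few-step samplers expose, not the paper's actual IMM implementation.

```python
# Illustrative sketch only: a generic few-step sampler where the number of
# refinement steps is chosen at inference time. The model `f` is a stand-in,
# not the IMM architecture from the paper.
import torch

def sample(f, shape, num_steps, device="cpu"):
    """Draw a sample by iteratively refining noise over `num_steps` steps.

    f(x_t, t, s) is assumed to map a noisy sample x_t at time t directly to
    an estimate at an earlier time s -- the property that lets one trained
    model be run with either few or many refinement steps.
    """
    x = torch.randn(shape, device=device)  # start from pure noise (t = 1)
    times = torch.linspace(1.0, 0.0, num_steps + 1, device=device)
    for t, s in zip(times[:-1], times[1:]):
        x = f(x, t, s)  # jump from time t to the earlier time s
    return x

# Dummy "model" that shrinks the sample toward zero, just so the loop runs.
f = lambda x, t, s: x * (s / t.clamp(min=1e-6))

x_fast = sample(f, (4, 3, 32, 32), num_steps=2)   # few-step inference
x_slow = sample(f, (4, 3, 32, 32), num_steps=50)  # many-step refinement
```

The design point is that `num_steps` never appears in training: under this interface, the trade-off between sample quality and inference cost is deferred entirely to inference time.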
Community
This position paper accompanies Inductive Moment Matching. We explain the high-level motivation for designing new types of generative models that scale efficiently at inference time, discuss the inference-first perspective on designing new generative paradigms, and highlight shortcomings of existing methods that deserve wider attention. We hope this work opens up a broader design space for future generative models.
The following related papers were recommended by the Semantic Scholar API:
- Inference-Time Scaling for Diffusion Models beyond Scaling Denoising Steps (2025)
- Enabling Autoregressive Models to Fill In Masked Tokens (2025)
- Mask-Enhanced Autoregressive Prediction: Pay Less Attention to Learn More (2025)
- SDE Matching: Scalable and Simulation-Free Training of Latent Stochastic Differential Equations (2025)
- Speculative Decoding and Beyond: An In-Depth Survey of Techniques (2025)
- Generalized Interpolating Discrete Diffusion (2025)
- Inference-Time Alignment in Diffusion Models with Reward-Guided Generation: Tutorial and Review (2025)