This paper presents a flexible deep-learning framework for video super-resolution that operates across different spatial and temporal upscaling factors without retraining from scratch. A single neural network architecture handles multiple super-resolution scales by adapting just three hyperparameters (noise schedule, context length, and mass conservation), eliminating the need to train a separate model for each upscaling factor.
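The idea of one architecture parameterized per scale can be sketched as a small configuration rule. This is a hypothetical illustration, not the paper's implementation: the field names, the scaling rules, and the `config_for_scale` function are all assumptions introduced here for clarity.

```python
from dataclasses import dataclass


@dataclass
class ScaleConfig:
    """The three hyperparameters adapted per upscaling factor (illustrative names)."""
    noise_steps: int      # length of the noise schedule
    context_frames: int   # temporal context window fed to the network
    conserve_mass: bool   # whether outputs are renormalized to conserve mass


def config_for_scale(factor: int) -> ScaleConfig:
    """Hypothetical adaptation rule: larger upscaling factors get a longer
    noise schedule and a wider temporal context; mass conservation stays on."""
    return ScaleConfig(
        noise_steps=50 * factor,
        context_frames=2 * factor + 1,
        conserve_mass=True,
    )


# One architecture, many scales: only the lightweight config changes,
# so no separate model is trained per upscaling factor.
configs = {f: config_for_scale(f) for f in (2, 4, 8)}
```

Under this sketch, switching from 2x to 8x upscaling is a config swap rather than a retraining run, which is the practical payoff the framework claims.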