Video Enhancement – Super Resolution

See the Intelligent Super-Resolution image process in action

Click the orange play button to see how you can enhance low-resolution footage with Visionular's Video Enhancement feature, Super Resolution.

More about Video Enhancement

Encoding video involves more than the encoder itself; analyzing and preparing your source content is equally important. Visionular has developed a suite of AI-driven image enhancement processes that intelligently apply the right optimization before the encoder goes to work. Perhaps most importantly, our AI communicates with the encoder, so the encoder knows how to handle each video in light of the pre-encode optimizations.

Super Resolution

Non-HDR devices can now receive HDR content delivered in an SDR package, extending support to more devices. Slower networks aren't an obstacle either: you can deliver HDR-like footage over connections that can't sustain the high bitrates traditionally required for HDR. User-generated content (UGC), by its nature, varies widely in quality, making encoding and color matching a painstaking process; scene identification and pre-processing are now widely used to address that variance.


In the past, if your source video was lower resolution, it was more or less left in the dark ages of standard definition. You had to rely on the HD and UHD screens it was viewed on to blindly scale it up, and live with whatever happened to your image.

Whether your content is vintage film, evergreen content, user-generated video, or RTC video captured with a webcam, you want it to stand out and look its best. Visionular's super-resolution technology intelligently analyzes the source and outputs a higher-resolution file that retains sharp details, displays depth, and reduces noise, regardless of the screen you watch it on.

Behind The Curtain

Intelligent Super-Resolution technology

Intelligent Super-Resolution technology upscales the video, adding lines of detail (resolution) to the picture while optimizing a perceptual loss. Our approach uses a generative adversarial network whose objective is closer to the human visual system (HVS), which improves the subjective viewing experience.
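To ground what "adding lines of detail" means, here is a minimal baseline: classical bilinear upscaling, which maps each output pixel back to fractional source coordinates and blends the four nearest source pixels. This is not Visionular's network; a learned GAN generator replaces exactly this kind of fixed interpolation filter with learned filters trained under a perceptual/adversarial loss. The function name and the toy 4×4 image are illustrative assumptions.

```python
import numpy as np

def bilinear_upscale(img: np.ndarray, scale: int = 2) -> np.ndarray:
    """Upscale a 2-D grayscale image by `scale` using bilinear interpolation.

    Illustrative baseline only: a super-resolution GAN learns content-aware
    filters instead of this fixed blend, recovering sharper detail.
    """
    h, w = img.shape
    new_h, new_w = h * scale, w * scale
    # Fractional source coordinates for every output row/column.
    ys = np.linspace(0, h - 1, new_h)
    xs = np.linspace(0, w - 1, new_w)
    y0, x0 = np.floor(ys).astype(int), np.floor(xs).astype(int)
    y1, x1 = np.minimum(y0 + 1, h - 1), np.minimum(x0 + 1, w - 1)
    wy = (ys - y0)[:, None]  # vertical blend weights
    wx = (xs - x0)[None, :]  # horizontal blend weights
    top = img[np.ix_(y0, x0)] * (1 - wx) + img[np.ix_(y0, x1)] * wx
    bot = img[np.ix_(y1, x0)] * (1 - wx) + img[np.ix_(y1, x1)] * wx
    return top * (1 - wy) + bot * wy

low = np.arange(16, dtype=float).reshape(4, 4)   # toy low-res frame
high = bilinear_upscale(low, scale=2)
print(high.shape)  # (8, 8)
```

Because interpolation can only blend existing pixels, it tends to look soft; the GAN's perceptual objective is what lets the upscaled output hallucinate plausible fine texture instead.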

We train the generative adversarial networks on millions of high-definition images and video sets, with extended datasets covering a variety of content types and resolutions to meet the super-resolution requirements of a multitude of scenarios.

The traditional generative adversarial network structure is complex and computationally intensive, which greatly limits processing speed. Our Intelligent Super-Resolution technology uses pruning and distillation to optimize and slim down the traditional generative adversarial network, improving computational efficiency.
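Visionular's exact pruning and distillation pipeline isn't public, but the pruning half can be sketched generically. A common technique is magnitude pruning: zero out the fraction of weights with the smallest absolute values, shrinking the effective network so inference runs faster. The function below is a hypothetical illustration on a random weight matrix, not Visionular's implementation.

```python
import numpy as np

def magnitude_prune(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out the smallest-magnitude fraction `sparsity` of the weights.

    Generic magnitude-pruning sketch; real pipelines prune structurally
    (whole channels/filters) and fine-tune afterwards to recover quality.
    """
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)
    if k == 0:
        return weights.copy()
    # k-th smallest magnitude becomes the pruning threshold.
    threshold = np.partition(flat, k - 1)[k - 1]
    pruned = weights.copy()
    pruned[np.abs(pruned) <= threshold] = 0.0
    return pruned

rng = np.random.default_rng(0)
w = rng.normal(size=(64, 64))          # stand-in for one layer's weights
pruned = magnitude_prune(w, sparsity=0.5)
print(float(np.mean(pruned == 0)))     # fraction of zeroed weights, ~0.5
```

Distillation would complement this: a small "student" generator is trained to match the outputs of the large pruned "teacher", preserving perceptual quality at a fraction of the compute.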