The method, known as sub-pixel alignment, has been used in astrophotography for decades. Imagine an idealized scenario in which a ray of light strikes the exact center of a pixel: that pixel receives 100% of the light’s intensity, while the surrounding pixels receive none. Now consider the same ray hitting the border between two pixels: each pixel receives 50% of the intensity. In a sense, this is the reverse of anti-aliasing.
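This intensity-splitting idea can be sketched in a few lines. The snippet below is a toy 1-D model (not any camera's actual sensor response): a unit-intensity point source at a sub-pixel position is distributed between the two nearest pixels by linear weighting.

```python
import numpy as np

def splat_point(x, n_pixels=5):
    """Distribute a unit-intensity point source at sub-pixel
    position x across a 1-D row of pixels (linear weighting).
    Pixel i is assumed to be centered at coordinate i."""
    row = np.zeros(n_pixels)
    i = int(np.floor(x))   # nearest pixel to the left
    frac = x - i           # sub-pixel offset in [0, 1)
    row[i] += 1.0 - frac   # the closer pixel gets more of the light
    if frac > 0:
        row[i + 1] += frac
    return row

print(splat_point(2.0))  # dead center of pixel 2 -> [0, 0, 1, 0, 0]
print(splat_point(2.5))  # border of pixels 2 and 3 -> [0, 0, 0.5, 0.5, 0]
```

The recorded values thus encode where the source sits *within* a pixel, which is exactly the information multi-frame alignment exploits.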
Next, imagine capturing 20 images of the same object, with slight camera movements between each shot. Because each frame samples the scene at slightly different sub-pixel positions, combining them lets you reconstruct a super-resolution image, recovering detail that would be lost in any single exposure. In practice, multiple slightly shifted frames can yield roughly 1.5–2× the sensor’s native resolution.
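A minimal shift-and-add reconstruction can illustrate the principle. This is a synthetic 1-D sketch, not a production super-resolution pipeline: a fine "scene" is captured by a coarse sensor (4 fine samples per pixel) at four quarter-pixel offsets, and the registered frames are averaged on the fine grid.

```python
import numpy as np

# "Scene": a fine 1-D signal (400 samples) the sensor cannot fully resolve.
scene = np.sin(np.linspace(0, 8 * np.pi, 400))

def capture(shift):
    """Simulate one low-res exposure: displace the scene by `shift`
    fine samples (a quarter pixel each), then bin 4 samples per pixel."""
    shifted = np.roll(scene, shift)
    return shifted.reshape(-1, 4).mean(axis=1)  # 100-pixel sensor

# Four frames, each displaced by an extra quarter-pixel.
frames = [capture(s) for s in range(4)]

# Shift-and-add: upsample each frame to the fine grid, undo its
# known shift, and average — the sub-pixel offsets fill in detail
# between the coarse samples.
recon = np.zeros_like(scene)
for s, frame in enumerate(frames):
    up = np.repeat(frame, 4)     # nearest-neighbor upsample
    recon += np.roll(up, -s)     # register back onto the scene grid
recon /= len(frames)

single = np.repeat(frames[0], 4)  # one frame, naively upsampled
err_single = np.abs(single - scene).mean()
err_multi = np.abs(recon - scene).mean()
print(f"single-frame error: {err_single:.4f}")
print(f"multi-frame error:  {err_multi:.4f}")
```

The multi-frame reconstruction tracks the scene noticeably better than the single upsampled frame; real pipelines must also *estimate* the shifts by registration, which this sketch assumes are known.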
The biggest advantage of sub-pixel alignment is that it effectively increases the resolution and detail of an image beyond the native pixel grid. Fine textures and edges smaller than one pixel in a single frame can be reconstructed from multiple slightly shifted frames. Stacking also reduces random noise (by roughly √N for N averaged frames), which makes these subtle details visible because the underlying signal is no longer masked by noise.
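The noise-reduction side of stacking is easy to demonstrate with synthetic data. In this sketch (all numbers are illustrative assumptions, not measurements from a real sensor), a faint constant signal sits well below the per-frame noise floor, and averaging 400 frames shrinks the noise by about √400 = 20×:

```python
import numpy as np

rng = np.random.default_rng(42)

signal = 0.1        # faint detail, well below the noise floor
noise_sigma = 1.0   # per-frame sensor noise
n_frames = 400

# Each frame: the same tiny signal buried in independent random noise.
frames = signal + rng.normal(0.0, noise_sigma, size=(n_frames, 1000))

single_std = frames[0].std()    # noise in one frame, ~1.0
stacked = frames.mean(axis=0)   # average all frames
stacked_std = stacked.std()     # ~noise_sigma / sqrt(n_frames)

print(f"single-frame noise: {single_std:.3f}")
print(f"stacked noise:      {stacked_std:.3f}")
```

After stacking, the 0.1 signal stands well above the ~0.05 residual noise, whereas in any single frame it was invisible, which is why stacked images reveal detail that no individual exposure shows.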

