Press release

The image

World's Largest Panoramic Image

About the team

Marek Rzewuski

Skilled programmer with a passion for computer graphics and photography. He runs his own business, creating web pages with a focus on speed, security, and clean design. Common tasks: coding custom solutions, hosting and operations, performance optimization, photography, simple video work, and customer support. Some experience in PCB design and programming with LoRa/Wi-Fi/BLE.

August Rzewuski

Aircraft technician apprentice. Passionate about drones, planes, computers, and photography, he spends his spare time as a horse-racing jockey, helping to train horses. Curious, solution-oriented, and always ready to help, he is the kind of colleague you can openly discuss ideas with to find smarter solutions together.

Marek and August Rzewuski

Holmenkollen360.com crew.
Do you need a world record? Hire us!

Graphics that emphasize the size

9000 x 4320 desktop

This panorama was edited on a 55″ 8K display, but even that was too small, so the desktop had to be extended to 9000 x 4320. Note that each of the tiny thumbnails represents a 180MP image. This part is one of the four parts that make up the whole panorama.

9000 x 4320 desktop size

About the technology

Photographing distant objects with an 800mm lens is difficult due to atmospheric disturbances,
which distort images beyond a few kilometers. Straight features such as building corners, lamp posts,
and signs no longer appear straight and look distorted. Because of this, it is challenging to maintain
image quality even with a high-resolution lens. An f-stop of f/11 was chosen after careful
consideration: when using the 2x extender with the 400mm f/2.8 lens, the combination has to be
stopped down to at least f/8 to get sharp images (the optics are designed this way). f/11 was chosen
to gain some extra depth of field, even though this means losing some resolution on very distant
objects. To achieve the best results, many approaches were tested and a combination of algorithms
was employed:

  • Rough image alignment using image features
  • Sub-pixel alignment using Enhanced Correlation Coefficient (ECC)
  • Farnebäck Optical Flow
  • EDSR TensorFlow (Enhanced Deep Super-Resolution)

Applying a median to sub-pixel-aligned images effectively reduces noise and preserves fine details,
even recovering straight lines in distant objects. A more advanced approach used here involves
creating a master image using the median, and then using an optical flow algorithm to align the source
images to the master. This method restores details in trees and moving objects and improves
resolution in distant areas.
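
The noise-suppression step behind the master image can be sketched in a few lines of NumPy. This is a minimal illustration, not the production code: the synthetic scene, noise level, and frame count are invented for the demo (18 frames, matching the 20-shot bursts minus the two least sharp).

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "true" scene: a smooth gradient with one sharp edge.
truth = np.linspace(0.0, 1.0, 256).reshape(1, -1).repeat(64, axis=0)
truth[:, 128:] += 0.5

# Simulate a burst of 18 already-aligned frames corrupted by sensor noise.
frames = [truth + rng.normal(0.0, 0.1, truth.shape) for _ in range(18)]

# The per-pixel median across the burst suppresses noise and becomes
# the "master" image later used as the optical-flow reference.
master = np.median(frames, axis=0)

single_err = np.abs(frames[0] - truth).mean()
master_err = np.abs(master - truth).mean()
```

Unlike a mean, the median also rejects transient outliers (birds, cars, flickering lights), which is why it preserves straight edges so well.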

Further refinement is achieved using EDSR in TensorFlow, which enhances the quality of extremely
distant objects. This approach helps distinguish texture from noise, resulting in a more visually
pleasing image in areas affected by strong atmospheric disturbances, and it has no visible
downsides on close objects where the image is full of detail. A key part of sub-pixel alignment is
applying an unsharp mask at the end to recover fine details.
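
An unsharp mask adds back the difference between the image and a blurred copy of itself, boosting edge contrast. A minimal pure-NumPy sketch, using a simple box blur for brevity (the actual blur kernel and strength used in the pipeline are not specified in this text):

```python
import numpy as np

def unsharp_mask(img, radius=2, amount=1.0):
    """Classic unsharp mask: sharp = img + amount * (img - blurred)."""
    k = 2 * radius + 1
    kernel = np.ones(k) / k
    # Separable box blur: filter rows, then columns.
    blurred = np.apply_along_axis(
        lambda r: np.convolve(r, kernel, mode="same"), 1, img)
    blurred = np.apply_along_axis(
        lambda c: np.convolve(c, kernel, mode="same"), 0, blurred)
    return img + amount * (img - blurred)

# A soft ramp edge becomes steeper (higher local contrast) after sharpening.
edge = np.tile(np.concatenate([np.zeros(16), np.linspace(0, 1, 8), np.ones(16)]),
               (8, 1))
sharp = unsharp_mask(edge, radius=2, amount=1.5)
```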

The pipeline was developed with the goal of enhancing resolution and producing a visually pleasing
image. The process can be summarized by the following points:

  • Group the images into 4 large sub-panoramas based on optimal suitability (north, south, east,
    west).
  • Divide the images into groups, with each group dedicated to upscaling and creating a 180MP
    super-resolution image. Each group was a burst of 20 images captured within about one
    second.
  • Correct colors and remove haze for improved clarity.
  • Develop RAW files into 16-bit TIFF format.
  • Sort the images within each group by sharpness, discarding the two least sharp images.
  • Mask out low-contrast areas so they are not used for feature detection or ECC.
  • Identify features for a rough alignment of the images.
  • Use the Enhanced Correlation Coefficient (ECC) method to align the images with sub-pixel
    precision.
  • Compute the median and enhance sharpness to create a master image for optical flow
    correction.
  • Apply optical flow correction to all images in the group, using the master image as the
    reference, to correct atmospheric turbulence.
  • Upscale the images by 4x using TensorFlow EDSR (Enhanced Deep Super-Resolution).
  • Compute the median of the upscaled images.
  • Apply an unsharp mask to restore fine details.
  • Merge four panoramas into one single large 360° panorama.
  • Merge in the sky and render the final panorama.

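
The align-and-stack core of the steps above can be sketched as follows. This toy version estimates only integer-pixel translations via FFT phase correlation, whereas the real pipeline uses feature matching plus ECC for sub-pixel precision; the scene, shifts, and noise here are synthetic:

```python
import numpy as np

def estimate_shift(ref, img):
    """Integer-pixel translation via FFT phase correlation (the real
    pipeline refines alignment to sub-pixel precision with ECC)."""
    spec = np.fft.fft2(ref) * np.conj(np.fft.fft2(img))
    corr = np.fft.ifft2(spec / (np.abs(spec) + 1e-12)).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = ref.shape
    # Wrap large indices to negative shifts.
    return (dy - h if dy > h // 2 else dy), (dx - w if dx > w // 2 else dx)

def stack_burst(frames):
    """Align every frame to the first one, then median-stack."""
    ref = frames[0]
    aligned = [np.roll(f, estimate_shift(ref, f), axis=(0, 1)) for f in frames]
    return np.median(aligned, axis=0)

# Synthetic burst: shifted, noisy copies of one random textured scene.
rng = np.random.default_rng(1)
scene = rng.random((64, 64))
burst = [np.roll(scene, (s, -s), axis=(0, 1)) + rng.normal(0, 0.05, scene.shape)
         for s in range(6)]
stacked = stack_burst(burst)
```

The stacked result is both registered and considerably less noisy than any single frame, which is the precondition for the EDSR upscale and unsharp-mask steps that follow.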
To genuinely increase resolution, you must introduce extra data from somewhere. One approach is
to use multiple images of the same scene, slightly shifted or dithered, and combine them through
techniques like multi-frame super-resolution or Drizzle integration. These methods exploit sub-pixel
differences between frames to reconstruct details that a single image could not provide. Another
approach is to use prior knowledge encoded in algorithms. For example, AI-based upscaling (such
as convolutional neural networks or GANs) is trained on vast datasets of high- and low-resolution
image pairs. The AI learns statistical patterns of textures, edges, and shapes, and can “hallucinate”
plausible fine details when enlarging an image.
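
A tiny 1-D illustration of the multi-frame idea: two exposures of the same scene, dithered by one fine-grid sample, together carry detail that neither frame can represent alone. (The sub-pixel offsets here are known exactly, a luxury real data never grants; this is the principle behind Drizzle-style reconstruction, not the full algorithm.)

```python
import numpy as np

# High-resolution 1-D "scene" with detail near the low-res sampling limit.
hi = np.sin(2 * np.pi * np.arange(64) / 5.0)

# Two dithered exposures, each downsampled 2x.
frame_a = hi[0::2]
frame_b = hi[1::2]

# Multi-frame reconstruction: knowing the offsets, interleave the samples
# back onto the fine grid.
recon = np.empty_like(hi)
recon[0::2] = frame_a
recon[1::2] = frame_b

# Single-frame "upscaling" can only interpolate between known samples.
upsampled = np.interp(np.arange(64), np.arange(0, 64, 2), frame_a)
```

The interleaved reconstruction matches the original exactly, while interpolating a single frame leaves a clear residual error: the second frame contributed genuinely new data.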

There is solid math behind why a single image cannot yield more real detail without extra
information. In signal processing terms, a low-resolution image y is typically modeled as a blurred,
downsampled version of an unknown high-resolution scene x: y = DHx + n, where H is the optical
blur, D is downsampling, and n is noise. Many different high-resolution images x can produce exactly
the same y after blur and downsampling. That means the inverse problem is non-unique: from one y
alone, you cannot uniquely recover the lost high-frequency content. Interpolation (bilinear, bicubic,
Lanczos) can only estimate values consistent with y; it cannot determine which of the many
possible x’s is the “true” one.
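
This non-uniqueness is easy to demonstrate numerically. With a toy blur H and 2x downsampling D (both invented for the demo), two genuinely different high-resolution signals map to the same observation y, because the blur has a non-trivial null space:

```python
import numpy as np

def blur(x):
    """H: circular [1, 2, 1]/4 blur, a toy model of the optical blur."""
    return (np.roll(x, 1) + 2 * x + np.roll(x, -1)) / 4.0

def downsample(x):
    """D: keep every second sample."""
    return x[::2]

rng = np.random.default_rng(2)
x1 = rng.random(32)
# An alternating +1/-1 pattern lies in the null space of this blur:
# (v[i-1] + 2*v[i] + v[i+1]) / 4 == 0 when v alternates sign.
v = np.tile([1.0, -1.0], 16)
x2 = x1 + 0.3 * v          # a genuinely different high-res scene

y1 = downsample(blur(x1))
y2 = downsample(blur(x2))
# y1 and y2 agree to floating-point precision: the observation cannot
# distinguish x1 from x2, so inverting y alone is ill-posed.
```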

Short videos demonstrating the technology

  • Unstabilised: Please note that the movement is dampened by the VR (vibration reduction) system. The VR system resets itself between shots, which is why the image moves so much.
  • Registered: Pictures are aligned on top of each other. Please note the wobbling caused by atmospheric disturbance.
  • Optical flow: The algorithm adjusts the pixels toward the average of all images.
  • Stacked: The images are median-stacked, and the extra resolution is restored.
  • Sharpened: Sharpening has been applied, and the image is ready.

Sailboats

Urban intersection

One exposure vs. sub-pixel alignment and processing.

This section presents before-and-after comparisons. The “before” images show a single exposure exactly as captured by the camera. Due to atmospheric disturbance, the image appears wobbly and distorted, which is clearly visible in the animation. The “after” images show the result after the full workflow has been applied, revealing the final detail, clarity, improved image quality, and higher resolution of the finished panorama.

Sailboats

One exposure

20 exposures, after processing

1 cm details at 8 km — why this shouldn’t be possible (but looks like it is)

Please bear in mind that the boats are ~8 km away, and the wires on the sailboats are ≈1 cm thick. Seeing 1 cm details at 8 km should impress you, because it should be impossible. So what's going on?

The left image is a single exposure. The right image is the final result after processing, upscaling, and sharpening. With an 800 mm f/11 lens on a 45 MP full-frame camera at 8 km, the theoretical diffraction-limited resolution is only about 7–10 cm.

Stacking and upscaling seem to reveal 1 cm “wires,” but that's an optical illusion. The camera isn't resolving the wire's true thickness; it's detecting tiny contrast changes. Even far below the resolution limit, a thin wire can block or scatter a little light and register as a one-pixel-wide dark or bright line. Sub-pixel alignment and upscaling enhance these edges, making the wire appear visible even though its width isn't actually resolved. A 1 cm object might cover ~25% of a pixel, yet repeated frames, careful alignment, and sharpening can pull out that sub-pixel contrast; this image is a good example.

Why 1 cm “appears” visible at 8 km (short explanation):

  • Diffraction limit (~7–10 cm at f/11, 800 mm): Sets the smallest resolvable detail, larger than 1 cm.
  • Sub-pixel edge detection: Features smaller than a pixel still shift pixel intensities; edges are detectable even when widths aren’t resolved.
  • Stacking (higher SNR) + super-resolution: Multiple frames let you align at sub-pixel precision, average out noise, and reconstruct sharper edges.
  • Sharpening/deconvolution: Boosts contrast at those edges, making thin lines look distinct without truly measuring their thickness.
  • Result: You're seeing enhanced edge contrast from an object below the resolution limit, not a faithful 1 cm measurement at 8 km.
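
The numbers above are straightforward to reproduce with the Rayleigh criterion and simple geometry. A back-of-the-envelope check (the pixel pitch is an assumed value for a typical 45 MP full-frame sensor):

```python
wavelength = 550e-9     # green light, m
focal = 0.8             # 800 mm lens, m
f_number = 11.0
distance = 8000.0       # distance to the boats, m
pixel_pitch = 4.4e-6    # assumed ~45 MP full-frame pixel size, m

aperture = focal / f_number                    # ~72.7 mm entrance pupil
theta = 1.22 * wavelength / aperture           # Rayleigh criterion, radians
diffraction_limit = theta * distance           # smallest resolvable detail, m

ground_pixel = pixel_pitch / focal * distance  # one pixel's footprint, m
wire_coverage = 0.01 / ground_pixel            # 1 cm wire vs. one pixel
print(f"diffraction limit ≈ {diffraction_limit * 100:.1f} cm, "
      f"pixel footprint ≈ {ground_pixel * 100:.1f} cm, "
      f"1 cm wire covers ≈ {wire_coverage:.0%} of a pixel")
```

With these assumptions the diffraction limit lands around 7 cm and a 1 cm wire covers roughly a quarter of a pixel, consistent with the figures quoted above.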

Urban intersection

One exposure

20 exposures, after processing

Objects within the theoretical resolving limit.

The image on the left is a single exposure. The image on the right is the final image after processing,
upscaling, and sharpening. The distance to the signs is about 5 km. The letters on the signs are large
enough for the camera to resolve, and we see improved image quality everywhere in the image, as expected.

Do you have any questions?

Please feel free to ask about anything.