
Announcing high-quality stitching for Jump



We announced Jump in 2015 to simplify VR video production, from capture to playback. High-quality VR cameras make capture easier, and Jump Assembler makes automated stitching faster, more accessible, and more affordable for VR creators. Using sophisticated computer vision algorithms and the computing power of Google's data centers, Jump Assembler produces clean, realistic stitching, resulting in immersive 3D 360° video.

Stitching, then and now

Today, we’re introducing an option in Jump Assembler to use a new, high-quality stitching algorithm based on multi-view stereo. This algorithm produces the same seamless 3D panoramas as our standard algorithm (which will continue to be available), but it leaves fewer artifacts in scenes with complex layers and repeated patterns. It also produces depth maps with much cleaner object boundaries, which is useful for VFX.

Let’s first take a look at how our standard algorithm works. It’s based on the concept of optical flow, which matches pixels in one image to those in another. When matched, you can tell how pixels “moved” or “flowed” from one image to the next. And once every pixel is matched, you can interpolate the in-between views by shifting the pixels part of the way. This means that you can “fill in the gaps” between the cameras on the rig, so that, when stitched together, the result is a seamless, coherent 360° panorama.

Optical-flow based view interpolation. Left: image from the left camera. Center: images interpolated between the cameras. Right: image from the right camera.
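
To make the idea concrete, here is a minimal sketch of flow-based view interpolation, not Jump Assembler's actual implementation. It assumes two overlapping frames from adjacent cameras on the rig (the filenames below are placeholders) and uses OpenCV's Farneback optical flow purely for illustration.

```python
# A rough sketch of optical-flow based view interpolation (illustrative only).
import cv2
import numpy as np

def interpolate_view(left_img, right_img, t):
    """Render a virtual view a fraction t (0..1) of the way from left to right."""
    # Dense flow: how each pixel of the left image "moved" to match the right image.
    flow = cv2.calcOpticalFlowFarneback(
        left_img, right_img, None,
        pyr_scale=0.5, levels=3, winsize=15,
        iterations=3, poly_n=5, poly_sigma=1.2, flags=0)

    h, w = left_img.shape[:2]
    grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))

    # Approximate backward warp: sample the left image part of the way along the
    # flow vectors to "fill in the gap" between the two physical cameras.
    map_x = (grid_x - t * flow[..., 0]).astype(np.float32)
    map_y = (grid_y - t * flow[..., 1]).astype(np.float32)
    return cv2.remap(left_img, map_x, map_y, interpolation=cv2.INTER_LINEAR)

# Example: render the halfway view between two neighbouring cameras on the rig.
left = cv2.imread("cam_03.png", cv2.IMREAD_GRAYSCALE)   # placeholder filenames
right = cv2.imread("cam_04.png", cv2.IMREAD_GRAYSCALE)
middle = interpolate_view(left, right, t=0.5)
cv2.imwrite("cam_03_04_mid.png", middle)
```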

Using depth for better stitches

Our new, high-quality stitching algorithm uses multi-view stereo to render the imagery. The big difference? This approach can find matches in several images at the same time. The standard optical flow algorithm only uses one pair of images at a time, even though other cameras on the rig may also see the same objects.

The new multi-view stereo algorithm instead computes the depth of each pixel (i.e., the distance to the 3D point seen at that pixel), and any camera on the rig that sees that 3D point can help establish its depth, making the matching process more reliable.
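
Here is a simplified plane-sweep sketch of that idea, meant to illustrate the concept rather than reproduce Jump Assembler: for each candidate depth along a pixel's ray, project the resulting 3D point into every other camera and keep the depth whose colors agree best across views. The camera parameters and helper names below are assumptions for the example.

```python
# Simplified multi-view stereo depth estimation (illustrative plane sweep).
import numpy as np

def backproject(K, R, t, pixel, depth):
    """3D world point at `depth` along the ray through `pixel` of the reference camera."""
    x = np.linalg.inv(K) @ np.array([pixel[0], pixel[1], 1.0])
    X_cam = x * depth                  # point in reference-camera coordinates
    return R.T @ (X_cam - t)           # point in world coordinates

def project(K, R, t, X):
    """Project a world point into a camera; returns (u, v) pixel coordinates."""
    x = K @ (R @ X + t)
    return x[:2] / x[2]

def estimate_depth(ref_pixel, ref_color, cams, images, depth_candidates):
    """Pick the depth whose reprojected colors agree best across all cameras
    that see the point -- more views make the match more reliable."""
    best_depth, best_cost = None, np.inf
    K0, R0, t0 = cams[0]               # reference camera
    for depth in depth_candidates:
        X = backproject(K0, R0, t0, ref_pixel, depth)
        samples = [ref_color]
        for (K, R, t), img in zip(cams[1:], images[1:]):
            u, v = project(K, R, t, X)
            if 0 <= u < img.shape[1] and 0 <= v < img.shape[0]:
                samples.append(img[int(v), int(u)])
        # Photometric consistency: low variance across views = plausible depth.
        cost = np.var(np.array(samples, dtype=np.float64), axis=0).sum()
        if len(samples) > 1 and cost < best_cost:
            best_depth, best_cost = depth, cost
    return best_depth
```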

Standard-quality stitching (left): note the artifacts around the right pole. High-quality stitching (right): the artifacts are removed by the high-quality algorithm.

Standard-quality depth map (left): note the blurry edges. High-quality depth map (right): more detail and sharper edges.

The new approach also helps resolve a key challenge for any stitching algorithm: occlusion. That is, handling objects that are visible in one image but not in another. Multi-view stereo stitching is better at dealing with occlusion because if an object is hidden in one image, the algorithm can use an image from any of the surrounding cameras on the rig to determine the correct depth of that point. This helps reduce stitching artifacts and produce depth maps with clean object boundaries.
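
One common way to make this robust, sketched below under the assumption of per-camera matching costs like those in the earlier example, is to ignore the worst-matching views when aggregating, so a camera in which the point is hidden can't spoil the estimate. This is a generic heuristic, not necessarily the exact rule Jump Assembler uses.

```python
# Occlusion-robust cost aggregation (generic heuristic, illustrative only).
import numpy as np

def aggregate_costs(per_view_costs, keep_best_k=3):
    """per_view_costs: matching error of one depth hypothesis in each camera that
    could see the point. Occluded cameras tend to produce large errors, so we
    average only the `keep_best_k` smallest values."""
    costs = np.sort(np.asarray(per_view_costs, dtype=np.float64))
    return costs[:keep_best_k].mean()

# Example: the point is hidden in one of four neighbouring cameras, producing an
# outlier cost; dropping it keeps the correct depth hypothesis competitive.
print(aggregate_costs([0.02, 0.03, 0.05, 0.90], keep_best_k=3))  # ~0.033
```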

If you’re a VR filmmaker and want to try this new algorithm for yourself, select “high quality” in the stitching quality dropdown in Jump Manager for your next stitch!
