Michael Mao

HUSTL

Hyperlapse processed with HUSTL

Links: GitHub Repo, Project Writeup
Members: Michael Mao, Jiaju Ma, James Li
Duration: Spring 2019

Overview

Making good hyperlapse videos is difficult. It usually requires photographers to take photos at consistent distance intervals while keeping the camera height and lens direction steady, or to mount their camera on an expensive steadicam or gimbal. To make creating high-quality hyperlapses easier, we present HUSTL (Hyper-Ultra-Super-Time-Lapse), an open-source three-stage software pipeline based on state-of-the-art academic research. Our tool requires only rough hand-held hyperlapse frames and produces a smoothed hyperlapse video.

This is a class project for Brown University’s CS 1430 Computer Vision class taught by Prof. James Tompkin.

Methodology

There are 3 major challenges to creating a high-quality hyperlapse video:

  1. Some frames can be sub-optimal and should be skipped
  2. Frames may differ in white balance and lighting conditions, and some frames may be over-exposed or under-exposed
  3. Frames need to be aligned to ensure that the camera movement looks smooth

To deal with these three major issues, we propose a 3-step pipeline: frame selection [1], color matching [2], and video stabilization [3].
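As a rough illustration of the frame-selection stage (a minimal sketch, not the implementation from the paper we follow), frame selection can be posed as a dynamic-programming shortest-path problem: given a precomputed pairwise cost `cost[i, j]` measuring how poorly frame `j` aligns after frame `i` (a hypothetical input here, e.g. feature-reprojection error), pick a path of frames that minimizes total alignment cost plus a penalty for deviating from a target skip rate:

```python
import numpy as np

def select_frames(cost, target_step=8, speed_weight=1.0, max_skip=16):
    """Pick a subset of frames minimizing transition cost via dynamic programming.

    cost[i, j] is an assumed alignment cost for jumping from frame i to
    frame j; lower means the two frames stitch together more smoothly.
    A quadratic speed penalty keeps the chosen jumps near target_step.
    """
    n = cost.shape[0]
    best = np.full(n, np.inf)          # best[j]: minimal cost of a path ending at j
    prev = np.full(n, -1, dtype=int)   # back-pointers for path recovery
    best[0] = 0.0
    for j in range(1, n):
        for i in range(max(0, j - max_skip), j):
            c = best[i] + cost[i, j] + speed_weight * (j - i - target_step) ** 2
            if c < best[j]:
                best[j] = c
                prev[j] = i
    # Trace the optimal path back from the last frame
    path = [n - 1]
    while prev[path[-1]] != -1:
        path.append(prev[path[-1]])
    return path[::-1]
```

With a zero cost matrix the selector simply spaces frames as close to `target_step` as the video length allows; with real alignment costs it trades off speed regularity against visual smoothness.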

For more details, please refer to the project writeup.

HUSTL pipeline


Demonstrations

Faunce Arch

Baseline

HUSTL

Soldiers Memorial Gate

Baseline

HUSTL

Areas Needing Improvements

As shown in the demo, even though the camera movements have been smoothed, there are still a few issues degrading the hyperlapse:

  1. Objects can appear to be warped unnaturally, especially in image corners and especially when the warping needed is significant.
  2. HUSTL struggles in narrow environments (like hallways) where objects move out of frame relatively quickly compared to an open environment.
  3. HUSTL may cause unnatural lighting effects and unintended brightness changes when transitioning between well-lit and dimly-lit environments.

We believe a better video stabilization algorithm (functioning like the “Warp Stabilizer” in Adobe Premiere) may be better equipped to resolve the first two issues. A potential method to resolve the third issue may look like:

  1. Project the color space to a logarithmic space to reduce the risk of information loss when adjusting brightness
  2. Adjust the images so that the average brightness is consistent across all frames
  3. Run the images through the HUSTL color consistency pipeline
  4. Project the logarithmic space back to the normal linear color space
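The log-space brightness step above can be sketched as follows (a minimal illustration under our own assumptions, not part of the HUSTL pipeline: float images in [0, 1], and matching only the global mean log-brightness across frames):

```python
import numpy as np

def normalize_brightness(frames, eps=1e-6):
    """Equalize average brightness across frames in logarithmic space.

    frames: list of float images with values in [0, 1]. Working in log
    space turns a multiplicative exposure gain into an additive shift,
    which compresses highlights and reduces clipping compared to
    scaling pixel values directly.
    """
    logs = [np.log(f + eps) for f in frames]            # step 1: project to log space
    target = np.mean([l.mean() for l in logs])          # shared mean log-brightness
    out = []
    for l in logs:
        shifted = l + (target - l.mean())               # step 2: additive shift in log space
        out.append(np.clip(np.exp(shifted) - eps, 0.0, 1.0))  # step 4: back to linear space
    return out
```

Step 3 (running the shifted frames through the color consistency pipeline) would slot in between the shift and the projection back to linear space.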

  1. Neel Joshi, Wolf Kienzle, Mike Toelle, Matt Uyttendaele, and Michael F. Cohen. Real-Time Hyperlapse Creation via Optimal Frame Selection. ACM Transactions on Graphics, 2015.

  2. Jaesik Park, Yu-Wing Tai, Sudipta Sinha, and In So Kweon. Efficient and Robust Color Consistency for Community Photo Collections. Computer Vision and Pattern Recognition (CVPR), 2016.

  3. Shuaicheng Liu, Lu Yuan, Ping Tan, and Jian Sun. Bundled Camera Paths for Video Stabilization. ACM Transactions on Graphics, 2013.