Contributions in Computational Video
As mentioned in the introduction,
there are three steps in video stabilization and rolling shutter
rectification.
We work primarily on the first two steps: motion estimation and motion smoothing.
We have developed an online algorithm for camera and gyroscope calibration
without any prior knowledge of the devices.
The camera motion can be reliably obtained from the gyroscope once
calibration and synchronization have been performed.
For offline motion smoothing, we exploit the manifold structure of the
sequence of 3D rotation matrices.
Then we formulate the offline motion smoothing problem as a constrained
manifold regression problem.
The formulated problem is solved efficiently by an extension of
the two-metric gradient projection method.
For online motion smoothing, we develop a constrained multiple-model
estimation algorithm that adaptively smooths the camera motion while
guaranteeing that no black borders intrude into the stabilized video frames.
The primary contributions of our research can be summarized as follows.
- Online Camera-Gyroscope Auto-Calibration for Cellphones:
In this contribution, we develop an online method that estimates all of
the necessary parameters while a user is capturing video.
This algorithm is based on an implicit extended Kalman filter (EKF).
Each video frame provides a view of the 3D scene and triggers the update of
the EKF through multiple-view geometry.
By extending the recently proposed multiple-view coplanarity constraint
on camera rotation to rolling shutter cameras, we propose a novel
implicit measurement that involves only camera rotation but works for
any camera translation, including zero translation (pure rotation).
The implicit measurements can be used effectively in the EKF to update
the state estimate.
This algorithm is able to estimate the needed calibration and synchronization
parameters online under all kinds of camera motion, and can be embedded
in video stabilization for fast camera motion estimation using gyroscopes.
Both Monte Carlo simulations and cellphone experiments show that this
online calibration and synchronization method converges quickly to the
ground-truth values.
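As a rough illustration of how an implicit measurement enters an EKF, the sketch below runs a standard implicit-EKF update on a made-up scalar constraint h(x, z) = 0. This is not the coplanarity constraint from the dissertation; the constraint, function names, and noise levels are all hypothetical, chosen only to show the residual and Jacobian bookkeeping.

```python
import numpy as np

def implicit_ekf_update(x, P, z, Rz, h, Hx, Hz):
    """One EKF update for an implicit measurement h(x, z) = 0.

    The residual is -h at the current estimate, and the measurement
    noise covariance Rz is mapped into the innovation through dh/dz.
    """
    r = -h(x, z)                       # scalar residual of the constraint
    H = Hx(x, z)                       # dh/dx, shape (1, n)
    J = Hz(x, z)                       # dh/dz, shape (1, m)
    S = H @ P @ H.T + J @ Rz @ J.T     # innovation covariance, (1, 1)
    K = P @ H.T @ np.linalg.inv(S)     # Kalman gain, (n, 1)
    x = x + (K * r).ravel()
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

# Made-up implicit constraint for the demo: z[0]*x[0] + z[1]*x[1] = 1.
h  = lambda x, z: z[0] * x[0] + z[1] * x[1] - 1.0
Hx = lambda x, z: np.array([[z[0], z[1]]])
Hz = lambda x, z: np.array([[x[0], x[1]]])

rng = np.random.default_rng(0)
truth = np.array([0.5, 2.0])           # parameters the filter should recover
x, P = np.zeros(2), np.eye(2)
Rz = (0.01 ** 2) * np.eye(2)
for _ in range(300):
    z1 = rng.uniform(0.5, 2.0)
    z2 = (1.0 - truth[0] * z1) / truth[1]
    z = np.array([z1, z2]) + 0.01 * rng.standard_normal(2)
    x, P = implicit_ekf_update(x, P, z, Rz, h, Hx, Hz)
```

Because h involves only a relation between state and measurement, there is no explicit predicted measurement; the same structure accommodates constraints, like coplanarity, that hold for any camera translation.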
- Constrained 3D Rotation Smoothing via Global Manifold Regression:
In this contribution, we present a novel offline motion smoothing
algorithm for video stabilization.
We use a pure 3D rotation motion model with known camera projection parameters.
We directly smooth the sequence of camera rotation matrices for the video
frames by exploiting the Riemannian geometry on a manifold.
We consider the entire set of sequences of rotation matrices as a Riemannian
manifold.
This allows formulation of the offline motion smoothing problem globally
as a regression problem on the manifold based on geodesic distance.
We introduce a geodesic-convex constraint on the manifold to approximate
the black-border constraint, so that the entire motion smoothing problem
remains geodesic-convex on the manifold.
To solve the formulated constrained smoothing problem on the manifold,
we compute the gradient and Hessian of the objective function using
Riemannian geometry, and then extend the two-metric projection algorithm in
Euclidean space to non-linear manifolds.
The geodesic-distance-based smoothness metric better exploits the manifold
structure of sequences of rotation matrices.
The geodesic-convex constraints effectively guarantee that no black borders
intrude into the stabilized frames.
The proposed manifold optimization algorithm finds the globally optimal
solution in only a few iterations.
Experimental results show that the proposed motion smoothing method outperforms
state-of-the-art methods by generating more stable videos with less distortion.
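The geodesic machinery can be sketched minimally as Riemannian gradient descent on SO(3): each smoothed rotation is pulled toward its raw rotation and its neighbors along geodesics, via the exponential and logarithm maps. This toy version omits the geodesic-convex black-border constraint and the two-metric projection method of the dissertation, and all parameter values are illustrative.

```python
import numpy as np

def hat(w):
    return np.array([[0., -w[2], w[1]],
                     [w[2], 0., -w[0]],
                     [-w[1], w[0], 0.]])

def exp_so3(w):  # Rodrigues' formula: axis-angle vector -> rotation matrix
    th = np.linalg.norm(w)
    if th < 1e-12:
        return np.eye(3)
    K = hat(w / th)
    return np.eye(3) + np.sin(th) * K + (1.0 - np.cos(th)) * (K @ K)

def log_so3(R):  # rotation matrix -> axis-angle vector
    c = np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0)
    th = np.arccos(c)
    if th < 1e-8:
        return np.zeros(3)
    return th / (2.0 * np.sin(th)) * np.array(
        [R[2, 1] - R[1, 2], R[0, 2] - R[2, 0], R[1, 0] - R[0, 1]])

def roughness(seq):  # sum of squared geodesic distances between neighbors
    return sum(np.linalg.norm(log_so3(seq[i].T @ seq[i + 1])) ** 2
               for i in range(len(seq) - 1))

def smooth_rotations(raw, lam=5.0, step=0.05, iters=200):
    # Minimize  sum_i d(S_i, R_i)^2 + lam * sum_i d(S_i, S_{i+1})^2
    # by moving each S_i a small step along the Riemannian gradient.
    S = [R.copy() for R in raw]
    for _ in range(iters):
        S = [S[i] @ exp_so3(step * (
                log_so3(S[i].T @ raw[i])
                + (lam * log_so3(S[i].T @ S[i - 1]) if i > 0 else 0)
                + (lam * log_so3(S[i].T @ S[i + 1]) if i < len(S) - 1 else 0)))
             for i in range(len(S))]
    return S

# Jittered rotations about the z axis stand in for a shaky camera path.
rng = np.random.default_rng(1)
raw = [exp_so3(np.array([0.0, 0.0, 0.02 * i]) + 0.05 * rng.standard_normal(3))
       for i in range(30)]
smoothed = smooth_rotations(raw)
```

Because every update multiplies by a matrix exponential, the smoothed matrices stay on SO(3) by construction, which is the point of working on the manifold rather than smoothing matrix entries in Euclidean space.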
- Real-time Motion Smoothing via Constrained Multiple-Model Estimation:
In this contribution, we develop a real-time motion smoothing method.
This method is motivated by Kalman filtering-based motion smoothing with
a constant velocity model.
We use estimate projection to ensure that the smoothed motion satisfies
black-border constraints, which are modeled exactly as linear inequalities
for general 2D motion models.
Then we combine the estimate projection with Bayesian multiple-model estimation
to achieve adaptive smoothing in a probabilistic way.
Experimental results show that the proposed algorithm better smooths the
camera motion and stabilizes videos in real time.
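A minimal sketch of the idea, assuming a 1D translation model: a constant-velocity Kalman filter followed by estimate projection onto a pair of linear inequalities that keep the smoothed path within a crop margin of the raw path. The margin, noise levels, and path model below are made up for illustration, and this single-model version omits the Bayesian multiple-model combination.

```python
import numpy as np

def project_estimate(x, P, A, b):
    # Estimate projection: pull x back onto {x : A x <= b} in the
    # P^{-1}-weighted norm, one violated half-space at a time (exact for
    # the one-sided bounds used below, since both cannot be violated).
    for a, bi in zip(A, b):
        s = a @ x - bi
        if s > 0:
            x = x - (P @ a) * s / (a @ P @ a)
    return x

# Constant-velocity model on the virtual camera position; the black-border
# constraint is modeled as |smoothed - raw| <= c, i.e. two linear
# inequalities per frame.
F = np.array([[1.0, 1.0], [0.0, 1.0]])          # [position, velocity]
H = np.array([[1.0, 0.0]])
Q = 1e-4 * np.array([[0.25, 0.5], [0.5, 1.0]])  # small process noise
Rm = 1.0                                        # measurement noise
c = 2.0                                         # allowed deviation (crop margin)

rng = np.random.default_rng(2)
t = np.arange(200)
raw = 0.05 * t + 0.3 * rng.standard_normal(200)  # drifting, jittery path

x, P = np.array([raw[0], 0.0]), np.eye(2)
smoothed = []
for z in raw:
    x, P = F @ x, F @ P @ F.T + Q                # predict
    S = H @ P @ H.T + Rm
    K = (P @ H.T) / S
    x = x + (K * (z - H @ x)).ravel()            # update
    P = (np.eye(2) - K @ H) @ P
    A = np.array([[1.0, 0.0], [-1.0, 0.0]])
    b = np.array([z + c, -(z - c)])
    x = project_estimate(x, P, A, b)             # enforce black-border bound
    smoothed.append(x[0])
smoothed = np.array(smoothed)
```

The projection step guarantees the constraint exactly at every frame, while the constant-velocity model supplies the smoothing; in the full method, several models with different noise levels would be combined probabilistically to adapt the degree of smoothing.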
Note: The above description came from the 2014 PhD Dissertation by
Dr. Chao Jia at The University of Texas at Austin.
Contributions in Computational Photography
For digital still cameras, we have automated selected rules of
photographic composition to help amateur photographers take
better pictures. The automated methods rely on an auto-focus
filter, software-controlled shutter aperture, and image processing
algorithms to detect and segment the main subject of the photograph.
The image processing algorithms are low-complexity and amenable
to fixed-point implementation:
- Main subject detection and segmentation
- Photographic composition rules using main subject segmentation
- Reposition main subject according to rule of thirds
- Blur background, e.g. for a motion shot
- Reduce merging of background objects and main subject
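As an example of what the rule-of-thirds repositioning step might compute, here is a small sketch that places a detected main-subject center on the nearest rule-of-thirds power point of a fixed-size crop window. The function and its signature are hypothetical, not the actual implementation, and real code would also weigh segmentation quality and merging with background objects.

```python
def thirds_crop(img_w, img_h, subj_cx, subj_cy, crop_w, crop_h):
    """Return the (x0, y0) origin of a crop_w x crop_h crop window that
    puts the subject center nearest to a rule-of-thirds power point,
    clamped so the crop stays inside the image."""
    best = None
    for fx in (1 / 3, 2 / 3):
        for fy in (1 / 3, 2 / 3):
            # Crop origin that would put the subject at (fx, fy) of the crop.
            x0 = min(max(subj_cx - fx * crop_w, 0), img_w - crop_w)
            y0 = min(max(subj_cy - fy * crop_h, 0), img_h - crop_h)
            # Residual distance from the power point after clamping.
            d = ((subj_cx - (x0 + fx * crop_w)) ** 2
                 + (subj_cy - (y0 + fy * crop_h)) ** 2)
            if best is None or d < best[0]:
                best = (d, int(round(x0)), int(round(y0)))
    return best[1], best[2]

# A centered subject in a 1920x1080 frame, repositioned via a 1280x720 crop.
x0, y0 = thirds_crop(1920, 1080, 960, 540, 1280, 720)
```

The arithmetic here is integer-friendly, consistent with the stated goal of low-complexity, fixed-point implementation.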
In conducting this research, we have kept in mind from the beginning
that the algorithms we develop will ultimately be implemented on a
fixed-point digital signal processor. We are currently mapping the
above algorithms onto fixed-point digital signal processors.
Mail comments about this page to
bevans@ece.utexas.edu.