CSE 557 - Final Project

Morphing

by: Jed Liu and Alex Lindblad


 

 

Introduction

For the final project, we chose to implement the morphing algorithm described in [BN92].  This algorithm allows us to morph still images as well as video files, as seen in the Artifacts section below.

Morphing is the process of transforming one image into another.  It involves deforming the two images to take on the same shape and then blending the two deformed images into one.  Producing quality morphs has its difficulties, which arise from the inherent differences between the two images being morphed.

Mr. Bean

The following sections describe the implementation details of the algorithm, the user interface, a few humorous artifacts, and a short conclusion.

Algorithm

The morphing algorithm used is that described in [BN92].  A brief overview of the algorithm is given here.  For complete details, the reader is referred to [BN92].

Still Images

The main algorithm for morphing still images takes as input an image and a set of feature line pairs, and produces a distortion of the input image according to the given line pairs. (The term line is used to mean a directed line segment.) Each frame in the final morph is produced by distorting the source and target images towards a common set of feature lines, and cross-dissolving the two resulting images according to the morph progression.
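
As a rough illustration of this per-frame structure, here is a Python/NumPy sketch (not the code of our implementation; interpolate_lines and warp_image are stand-ins for the routines described in the following paragraphs and are passed in rather than assumed to exist under these names):

import numpy as np

def morph_frame(src_img, dst_img, src_lines, dst_lines, t,
                interpolate_lines, warp_image):
    # t is the morph progression: 0 = pure source, 1 = pure target.
    # Feature lines for this frame, partway between source and target.
    lines_t = interpolate_lines(src_lines, dst_lines, t)

    # Distort both input images toward the common set of feature lines.
    warped_src = np.asarray(warp_image(src_img, src_lines, lines_t), float)
    warped_dst = np.asarray(warp_image(dst_img, dst_lines, lines_t), float)

    # Cross-dissolve according to the morph progression.
    return (1.0 - t) * warped_src + t * warped_dst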

The user specifies pairs of feature lines for the source and target images. For each frame in the final morph, these line pairs are linearly interpolated to produce the set of feature lines for that frame. The easiest way to interpolate a pair of lines is to simply interpolate the lines' end points. While this is easy to implement, it is often not a natural interpolation. Instead, this implementation interpolates the lines' midpoints, orientation, and lengths.
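
For concreteness, a sketch of this interpolation in Python/NumPy (illustrative only; the names and signature are ours):

import numpy as np

def interpolate_line(p_src, q_src, p_dst, q_dst, t):
    # Interpolate the directed segment (P, Q) by blending its midpoint,
    # orientation, and length instead of its endpoints.
    p_src, q_src = np.asarray(p_src, float), np.asarray(q_src, float)
    p_dst, q_dst = np.asarray(p_dst, float), np.asarray(q_dst, float)

    mid = (1 - t) * (p_src + q_src) / 2 + t * (p_dst + q_dst) / 2
    length = (1 - t) * np.linalg.norm(q_src - p_src) + t * np.linalg.norm(q_dst - p_dst)

    d_src, d_dst = q_src - p_src, q_dst - p_dst
    ang_src = np.arctan2(d_src[1], d_src[0])
    ang_dst = np.arctan2(d_dst[1], d_dst[0])
    # Interpolate along the shorter angular path so the line does not
    # spin the long way around.
    d_ang = np.arctan2(np.sin(ang_dst - ang_src), np.cos(ang_dst - ang_src))
    angle = ang_src + t * d_ang

    half = 0.5 * length * np.array([np.cos(angle), np.sin(angle)])
    return mid - half, mid + half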

Given an input image and a pair of feature lines, the algorithm determines, for each pixel in the output image, the pixel in the input image from which to sample. Intuitively, this pixel is determined by mapping from a coordinate system defined by one feature line into a coordinate system defined by the other. The details can be found in [BN92].
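
For a single line pair, the mapping amounts to expressing the output pixel in (u, v) coordinates relative to its feature line and re-expressing those coordinates against the corresponding line in the input image, as in [BN92]. A NumPy sketch (variable names are ours):

import numpy as np

def perp(v):
    # Perpendicular of a 2D vector (90-degree rotation).
    return np.array([-v[1], v[0]])

def source_point(x, p_dst, q_dst, p_src, q_src):
    # Map output pixel X to the input pixel to sample from, given the
    # line P->Q in the output image and the corresponding line P'->Q'
    # in the input image.
    x = np.asarray(x, float)
    p_dst, q_dst = np.asarray(p_dst, float), np.asarray(q_dst, float)
    p_src, q_src = np.asarray(p_src, float), np.asarray(q_src, float)

    d_dst, d_src = q_dst - p_dst, q_src - p_src

    # u runs along the line (0 at P, 1 at Q); v is the signed
    # perpendicular distance from the line, in pixels.
    u = np.dot(x - p_dst, d_dst) / np.dot(d_dst, d_dst)
    v = np.dot(x - p_dst, perp(d_dst)) / np.linalg.norm(d_dst)

    # The same (u, v), re-expressed against the input-image line.
    return p_src + u * d_src + v * perp(d_src) / np.linalg.norm(d_src)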

Each pair of lines defines a separate distortion for the input image; as such, for each pixel, each line pair gives a (possibly) different pixel in the input image. These are combined in a weighted average to produce the actual pixel from which to sample.
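
In [BN92] the weight of each line pair depends on the line's length and on the pixel's distance to the line, controlled by parameters a, b, and p. A sketch of the combination, reusing perp and source_point from the previous sketch (the default parameter values here are plausible starting points, not the ones we tuned):

import numpy as np

def weighted_source_point(x, dst_lines, src_lines, a=0.1, b=2.0, p=0.5):
    # dst_lines and src_lines are matching lists of (P, Q) segments.
    x = np.asarray(x, float)
    total_disp = np.zeros(2)
    total_weight = 0.0

    for (p_dst, q_dst), (p_src, q_src) in zip(dst_lines, src_lines):
        xi = source_point(x, p_dst, q_dst, p_src, q_src)

        d_dst = np.asarray(q_dst, float) - np.asarray(p_dst, float)
        u = np.dot(x - p_dst, d_dst) / np.dot(d_dst, d_dst)
        # Distance from X to the segment: perpendicular distance when the
        # projection falls on the segment, distance to an endpoint otherwise.
        if u < 0:
            dist = np.linalg.norm(x - p_dst)
        elif u > 1:
            dist = np.linalg.norm(x - q_dst)
        else:
            dist = abs(np.dot(x - p_dst, perp(d_dst))) / np.linalg.norm(d_dst)

        # Longer lines and nearby lines get more influence.
        weight = (np.linalg.norm(d_dst) ** p / (a + dist)) ** b

        total_disp += weight * (xi - x)
        total_weight += weight

    # Weighted average of the displacements suggested by each line pair.
    return x + total_disp / total_weight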

Video

To perform a morph between two videos, a keyframing system is used for the feature lines. The feature lines are linearly interpolated between key frames in the fashion described above to produce a set of feature lines for each frame of each input video. Each frame in the output video is produced by morphing between the two corresponding frames in the input videos.
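
A sketch of the per-frame lookup for one line, assuming keyframes are stored as a sorted list of (frame number, (P, Q)) pairs and reusing the interpolate_line helper sketched in the still-images section (again illustrative, not our actual code):

import bisect

def line_at_frame(keyframes, frame):
    # keyframes: sorted list of (frame_number, (P, Q)) pairs for one line.
    frames = [f for f, _ in keyframes]
    i = bisect.bisect_right(frames, frame)
    if i == 0:
        return keyframes[0][1]      # before the first keyframe
    if i == len(keyframes):
        return keyframes[-1][1]     # after the last keyframe

    (f0, (p0, q0)), (f1, (p1, q1)) = keyframes[i - 1], keyframes[i]
    t = (frame - f0) / (f1 - f0)
    # Interpolate midpoint, angle, and length between the two nearest
    # keyframes, exactly as in the still-image case.
    return interpolate_line(p0, q0, p1, q1, t)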

User Interface

The user interface is designed to allow the user to easily generate and manipulate the control lines in each of the original images.  These lines are drawn by pressing the left mouse button and dragging out a line segment.  This is repeated for as many lines as the user deems necessary.

Once both images are populated with control lines, the user must link pairs of lines between the images. This is accomplished by right-clicking on a line segment in the first image and the corresponding line segment in the second image.  To aid the user in this process, lines are highlighted in red when selected.  Once the user has selected the two lines, the "Add Line Pair" button can be used to commit those two lines as corresponding lines.  This process is repeated until all lines have been committed.  Lines can be removed by right-clicking on them while holding down the Control key.

During video morphing, the user may define an initial set of lines that corresponds to a control frame for keyframing.  The user can then step through the frames and adjust the line segments to follow the moving image.  When a line is changed, the current frame becomes a control frame for that line in the keyframing process.  As the videos progress, each line's midpoint, angle, and length are linearly interpolated between the two closest control frames.

To advance the morph, the user can use the "Morphing Progression" counter; the value shown is the current frame of the morph.  For still image morphing the user can specify how long the morph should take, and for video morphing the length of the morph is defined by the length of the input videos.

 

Artifacts

Below we have included a couple of videos of still image morphing, as well as a video of video morphing.

Much to his dismay, here is Brian turning into Mr. Bean.

Who do you think this Terminator will become?

Our fearless authors showing us their primal sides.

Jed, Alex, and "Jex"

 

Conclusions

This project proved to be interesting and very rewarding because the output can be quite humorous.  The algorithm described above does a nice job of taking two images, no matter how different, and producing a morphed image. The algorithm was fairly straightforward to implement, while the user interface took a little more creative thinking. As the artifacts above show, any number of interesting morphs can be created with our application.

 

References

[BN92] Thaddeus Beier and Shawn Neely.  Feature-Based Image Metamorphosis. Computer Graphics, 26(2):25-42, July 1992.