You will add the functionality to a skeleton version of the Impressionist program, which we will provide. The purpose of this project is to give you experience working with image manipulation, OpenGL primitives, user-interface design, and image processing.
Description
Impressionist is an interactive program that creates pictures that look like impressionistic paintings. It is based on a paper and a program by Paul Haeberli. Here is a copy of his paper "Paint by Numbers".
To create an impressionistic picture, the user loads an existing image and paints a sequence of "brush strokes" onto a blank pixel canvas. These brush strokes pick up color from the original image, giving the look of a painting. See some examples of what you can do with Impressionist, from previous quarters.
This is motivated by impressionist paintings that artists have been making for centuries. To help inspire you when creating new brush types, or when creating your artifacts, here are the Google Images "Impressionist Art" results.
Implement the features described below. The skeleton code has comments marked with // REQUIREMENT denoting
the locations at which you probably want to add some code.
Implement 5 different brush types:
single line, scattered lines, scattered points, (filled) circles, and scattered (filled) circles.
See the sample solution for an example of each brush's appearance.
Note that scattered brushes should sample from each location they color individually,
not just use a single color for each splotch.
Note: The sample solution includes Radius and Density sliders, but these are not required. Implementing them, however, would earn a whistle.
WARNING: Do not implement drawing lines using GL_LINES and glLineWidth. glLineWidth is not guaranteed to work on all computers, and it's possible it won't work on our grading machines. Use an alternative method, such as drawing a thin rectangle (quad) oriented along the line, as sketched below.
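For reference, here is a minimal sketch of that quad approach. The function name and coordinate conventions are illustrative, not part of the skeleton; it assumes a current GL context whose projection maps window coordinates to pixels.

    #include <GL/gl.h>
    #include <cmath>

    // Draw a line of the given width from (x1,y1) to (x2,y2) as a filled quad,
    // avoiding glLineWidth entirely.
    void drawThickLine(float x1, float y1, float x2, float y2, float width)
    {
        float dx = x2 - x1, dy = y2 - y1;
        float len = std::sqrt(dx * dx + dy * dy);
        if (len == 0.0f) return;
        // Unit normal to the line, scaled to half the desired width.
        float nx = -dy / len * width * 0.5f;
        float ny =  dx / len * width * 0.5f;
        glBegin(GL_QUADS);
            glVertex2f(x1 + nx, y1 + ny);
            glVertex2f(x2 + nx, y2 + ny);
            glVertex2f(x2 - nx, y2 - ny);
            glVertex2f(x1 - nx, y1 - ny);
        glEnd();
    }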
Add sliders to control various line brush attributes.
You need to include sliders for the thickness (width) and orientation (angle) of line brushes,
in addition to the existing brush size slider. You will also implement controlling the length and orientation
with the mouse, described in the requirement below.
Add the ability to control the brush direction.
The line brush orientation should be controlled in four different ways:
using a slider value (see above), using the right mouse button to drag out a direction line,
using the direction of the cursor movement, and using directions that are perpendicular to the gradient of the image.
Note that for the right mouse input, the red line is used to visually set both the size and orientation of the brush, and the size and orientation attributes should
update on mouse up. Refer to the sample solution for desired behavior.
Use the Sobel operator to determine the gradient (apply it to the Y channel).
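One possible way to compute that gradient is a straightforward 3x3 Sobel convolution on the luma channel. In this sketch, luminanceAt() is a hypothetical stand-in for however your code exposes the Y value of the original image (with clamping at the borders):

    #include <cmath>

    // 3x3 Sobel kernels for the horizontal and vertical derivatives.
    static const int sobelX[3][3] = { {-1, 0, 1}, {-2, 0, 2}, {-1, 0, 1} };
    static const int sobelY[3][3] = { {-1,-2,-1}, { 0, 0, 0}, { 1, 2, 1} };

    // Assumed helper: returns the luma value 0.299*R + 0.587*G + 0.114*B at (x, y),
    // clamping coordinates at the image borders.
    extern float luminanceAt(int x, int y);

    // Returns a brush angle (radians) perpendicular to the image gradient at (x, y).
    float gradientBrushAngle(int x, int y)
    {
        float gx = 0.0f, gy = 0.0f;
        for (int j = -1; j <= 1; ++j)
            for (int i = -1; i <= 1; ++i) {
                float v = luminanceAt(x + i, y + j);
                gx += sobelX[j + 1][i + 1] * v;
                gy += sobelY[j + 1][i + 1] * v;
            }
        // atan2 gives the gradient direction; rotate 90 degrees to follow the edge.
        return std::atan2(gy, gx) + 3.14159265f / 2.0f;
    }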
The immediate brush directions obtained from mouse movement can be quite noisy,
and it is nicer if they are smoothed out with respect to previous brush directions.
You will need to handle this issue. Note that, if you remember the brush directions as a list of angles,
you cannot just average the angles. A simple average doesn't account for the fact that two brush angles,
such as 359 degrees and 0 degrees, may be closer than their numeric values imply.
We recommend using the atan2(y,x) function instead of atan(y/x),
because it handles the case where x=0. Note that its return value is between -pi and pi.
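One simple way to smooth the mouse-derived directions without the wrap-around problem is to average unit vectors rather than raw angles, then recover the angle with atan2. This is only a sketch; the history length is an arbitrary choice:

    #include <cmath>
    #include <deque>

    // Keeps a short history of recent stroke directions and returns their
    // average, computed on unit vectors so that 359 and 0 degrees average
    // to roughly 359.5 rather than 179.5.
    class DirectionSmoother {
    public:
        float addAngle(float radians)
        {
            history.push_back(radians);
            if (history.size() > 8) history.pop_front();    // arbitrary window size
            float sx = 0.0f, sy = 0.0f;
            for (float a : history) { sx += std::cos(a); sy += std::sin(a); }
            return std::atan2(sy, sx);                       // result in (-pi, pi]
        }
    private:
        std::deque<float> history;
    };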
Allow the user to change the opacity (alpha value) of the brush stroke.
An alpha value slider should be added to the controls window.
Make sure that you only blend the RGB channels and not the Alpha channel when drawing the brushes (see glBlendFuncSeparate, in PaintView::PrepareBrush).
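The relevant GL state might look roughly like the following sketch (the function name is illustrative; the skeleton's hook for this is PaintView::PrepareBrush, and the alpha value would come from your new slider):

    #include <GL/gl.h>

    // Blend incoming RGB against the canvas using the brush alpha, but leave the
    // destination alpha untouched so the canvas stays fully opaque.
    // glBlendFuncSeparate requires OpenGL 1.4 or newer.
    void prepareBrushBlending()
    {
        glEnable(GL_BLEND);
        glBlendFuncSeparate(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA,   // RGB channels
                            GL_ZERO,      GL_ONE);                  // alpha channel
        // Each brush then supplies the sampled colour plus the slider's alpha,
        // e.g. glColor4f(r, g, b, sliderAlpha), before emitting its geometry.
    }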
Implement the filter kernel.
The skeleton code already provides the user interface.
You should be able to specify any 5x5 filter by typing in the filter coefficients,
a scale factor by which each filter coefficient is divided,
and an offset that is added to each pixel value before it is displayed.
The filter is applied to the entire right side image.
You will need to implement a method for handling boundary pixels,
when part of the filter kernel goes off the edge of the image.
You must do something "smarter" than assuming that the image is black beyond its boundary (i.e., do something other than zero-padding).
Implement the "normalize" checkbox that will automatically divide by the sum of the weights when the user wishes it.
The filter kernel dialog allows users to enter values such that the resulting pixel value falls outside the range [0, 255], or such that the kernel values are divided by 0 (for example, a scale factor of zero).
You will need to do something "reasonable" to handle these cases.
The filter is applied to the painted image (instead of the original image).
Also, the filter kernel dialog is a modal dialog.
You must first close the filter kernel dialog before you can continue painting on the "Paintview" canvas.
Tip: Use Edit->Copy Reference to copy the original image onto the right side canvas for easier testing of your kernel.
Tip: Use a filter kernel with a 1 in the top right corner and 0 everywhere else to test that you have the orientation correct.
Which direction should the image shift with this filter?
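For reference, a sketch of one way the per-pixel filter computation could look, using edge clamping as the "smarter" boundary rule and clamping the result to [0, 255]. The function and parameter names are illustrative; when the normalize box is checked, the scale argument would instead be the sum of the kernel weights (guarding against a zero sum):

    #include <algorithm>

    // Applies a 5x5 kernel at (x, y) of one colour channel.  Boundary pixels are
    // handled by clamping coordinates to the image edge, one acceptable
    // alternative to zero-padding.  'scale' and 'offset' come from the dialog.
    unsigned char filterPixel(const unsigned char* channel, int width, int height,
                              int x, int y, const double kernel[5][5],
                              double scale, double offset)
    {
        if (scale == 0.0) scale = 1.0;          // something "reasonable" for divide-by-zero
        double sum = 0.0;
        for (int j = -2; j <= 2; ++j)
            for (int i = -2; i <= 2; ++i) {
                int sx = std::min(std::max(x + i, 0), width  - 1);  // clamp to edge
                int sy = std::min(std::max(y + j, 0), height - 1);
                sum += kernel[j + 2][i + 2] * channel[sy * width + sx];
            }
        double value = sum / scale + offset;
        return static_cast<unsigned char>(std::min(std::max(value, 0.0), 255.0));
    }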
Implement the mean bilateral filter. You should implement this just as a specialized mean filter, where you simply average values that are within the domain/range widths of each image pixel. Use the Euclidean distance between colors in RGB space. For extra credit (below), you may implement a Gaussian bilateral filter.
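A sketch of the mean bilateral computation for a single pixel, under the assumptions above; pixelAt() is a hypothetical accessor for the image with border clamping:

    #include <cmath>

    struct RGB { double r, g, b; };

    // Assumed helper: returns the colour at (x, y), clamping at the borders.
    extern RGB pixelAt(int x, int y);

    // Mean bilateral filter for one pixel: average only those neighbours that lie
    // within 'domainWidth' pixels spatially AND within 'rangeWidth' of the centre
    // colour, measured as Euclidean distance in RGB.
    RGB meanBilateral(int x, int y, int domainWidth, double rangeWidth)
    {
        RGB centre = pixelAt(x, y);
        RGB sum = {0, 0, 0};
        int count = 0;
        for (int j = -domainWidth; j <= domainWidth; ++j)
            for (int i = -domainWidth; i <= domainWidth; ++i) {
                RGB p = pixelAt(x + i, y + j);
                double d = std::sqrt((p.r - centre.r) * (p.r - centre.r) +
                                     (p.g - centre.g) * (p.g - centre.g) +
                                     (p.b - centre.b) * (p.b - centre.b));
                if (d <= rangeWidth) { sum.r += p.r; sum.g += p.g; sum.b += p.b; ++count; }
            }
        return { sum.r / count, sum.g / count, sum.b / count };
    }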
Skeleton Program
The skeleton program we provide does very little. It allows you to load the original image (which can be in BMP, PNG, or JPEG format), and save the painted version. Brush selection is done via a drop down list on a separate window called up via the "Brushes" menu. There is one brush implemented (points) and a slider for controlling the brush size.
The skeleton comprises the following classes. The following descriptions should give you a good idea of the purpose of each of the classes and provide some insight into where to add things when extending the skeleton code.
forms
This is where the user interface for the Impressionist project is defined. Add new widgets to your UI here.
mainwindow
Acts as the bridge between the views, filters, and brushes, and the UI.
brush
This is a base class for each of the paint brushes.
It defines the functionality that the brushes should have.
Your new brushes should inherit from this class.
The color that your brush paints with is also set here.
paintview
This maintains both the original view (left view) and the paint view (right view) of the input images and handles events related to the paint view.
filter
Applies filters to the paint view.
pointbrush
This is the implementation of Point Brush. It is a kind of Brush. All your brush implementations will look like this with different GL calls.
Turn-in Information
Please follow the general instructions here. More details below:
Artifact Submission
For the Impressionist artifact, you will create an impressionistic painting from an image of your choice. Please turn in both the original and the impressionized version in JPEG or PNG format.
Bells and Whistles
Bells and whistles are extra extensions that are not required, and will be worth extra credit. You are also encouraged to come up with your own extensions for the project. Run your ideas by the TAs or Instructor, and we'll let you know if you'll be awarded extra credit for them. If you do decide to do something out of the ordinary (that is not listed here), be sure to mention it in a readme.txt when you submit the project.
To give your paintings more variety, add some additional brush types to the program.
These brush strokes should be substantially different from those you are required to implement.
You will get one whistle for each new brush (within reason).
Implement Radius and Density sliders for finer control of the scatter brushes, as seen in the sample solution.
When using your program, you currently can't see what part of the original image you're painting.
Extend the program so that when the cursor is in the painting window,
a marker appears on the original image showing where you're painting.
Sometimes it is useful to use the contents of the painting window as the original image.
Add a control to swap the contents of the painting window and the contents of the original image window.
When performing the bilateral filter, we measured the range distance as the Euclidean distance between
colors in RGB space. However, the distance between two colors in RGB space doesn't accurately represent what
humans perceive the difference between them to be. In your bilateral filters, use a perceptually based color
distance, such as Euclidean distance in the CIE LAB
color space (i.e. CIE76 Delta E*), or one of the more recent color distance functions
(Wikipedia).
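If you go this route, a rough sRGB-to-L*a*b* conversion followed by a Euclidean distance is enough for CIE76 Delta E*. The following sketch assumes a D65 white point and 0-255 inputs:

    #include <cmath>

    struct Lab { double L, a, b; };

    // Nonlinearity used in the XYZ -> Lab step.
    static double labF(double t)
    {
        return t > 0.008856 ? std::cbrt(t) : 7.787 * t + 16.0 / 116.0;
    }

    Lab rgbToLab(double r, double g, double b)
    {
        // Linearize sRGB.
        auto lin = [](double c) {
            c /= 255.0;
            return c > 0.04045 ? std::pow((c + 0.055) / 1.055, 2.4) : c / 12.92;
        };
        double R = lin(r), G = lin(g), B = lin(b);
        // sRGB -> XYZ, then normalize by the D65 white point.
        double X = (0.4124 * R + 0.3576 * G + 0.1805 * B) / 0.95047;
        double Y = (0.2126 * R + 0.7152 * G + 0.0722 * B) / 1.00000;
        double Z = (0.0193 * R + 0.1192 * G + 0.9505 * B) / 1.08883;
        double fx = labF(X), fy = labF(Y), fz = labF(Z);
        return { 116.0 * fy - 16.0, 500.0 * (fx - fy), 200.0 * (fy - fz) };
    }

    // CIE76 Delta E*: plain Euclidean distance in L*a*b*.
    double deltaE76(const Lab& p, const Lab& q)
    {
        return std::sqrt((p.L - q.L) * (p.L - q.L) +
                         (p.a - q.a) * (p.a - q.a) +
                         (p.b - q.b) * (p.b - q.b));
    }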
Add controls that allow you to manipulate the color of the image.
For example, you could implement independent scaling of the red, green, and blue channels.
Design a brush that selectively applies one or more filters from your filter kernel.
This might require some UI changes to your filter kernel UI.
Note: you must take into account the brush size.
Example.
Add an undo feature with at least one level of undo so that you can try a brush and decide to undo its effect on the canvas.
This comes in very handy for experimenting with brush and filtering effects.
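A single level of undo can be as simple as snapshotting the canvas buffer on mouse-down and copying it back on Undo. A sketch, where the buffer layout is whatever your PaintView already uses:

    #include <cstring>
    #include <vector>

    // One level of undo: snapshot the canvas pixels before each stroke begins,
    // and copy the snapshot back when Undo is chosen.
    class UndoBuffer {
    public:
        void snapshot(const unsigned char* canvas, size_t bytes)
        {
            saved.assign(canvas, canvas + bytes);
        }
        bool restore(unsigned char* canvas, size_t bytes)
        {
            if (saved.size() != bytes) return false;   // no usable snapshot yet
            std::memcpy(canvas, saved.data(), bytes);
            return true;                               // caller should trigger a redraw
        }
    private:
        std::vector<unsigned char> saved;
    };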
Add the ability to dissolve one image into another.
A different solution to the problem of not being able to see where you're painting is to show a dimmed version of the painting on the canvas.
Add a slider that allows the user to fade in or fade out the original image beneath the user's brush strokes on the canvas.
(Beware, this bell and whistle is more difficult than it looks).
Add a "mural" effect to your Impressionist by implementing the ability to load in different images while preserving what has been drawn on the canvas.
Add a "New Mural Image" or "Change Mural Image" to the controls window that allows the user to change images.
The user may then load an image, draw in what he / she prefers on the canvas,
and then load a different image and continue drawing on the canvas; thus, a "mural" effect.
Example.
To make your painting more interesting, add "alpha-mapped" brush strokes.
In other words, allow the user to load a bitmap representing a brush stroke.
This bitmap would contain an alpha value at each position.
Then when this brush is used to draw, a single color would be selected from the image,
all pixels in the brush bitmap would be set to this RGB color (without changing the alpha value),
and this partially transparent bitmap would be painted on the canvas.
A new color would be used each time the brush is drawn.
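One way to stamp such a brush with fixed-function OpenGL is to upload the loaded bitmap as a GL_ALPHA texture and modulate it with the sampled colour. This sketch assumes the default GL_MODULATE texture environment and the same blending setup as your other brushes; names are illustrative:

    #include <GL/gl.h>

    // Stamp an alpha-mapped brush: every texel shares one RGB colour sampled from
    // the source image, while the loaded bitmap supplies the per-pixel alpha
    // (with GL_ALPHA + GL_MODULATE, fragment RGB comes from glColor and fragment
    // alpha is the texture's alpha).
    void stampAlphaBrush(GLuint alphaTex, float cx, float cy, float size,
                         float r, float g, float b)
    {
        glEnable(GL_TEXTURE_2D);
        glBindTexture(GL_TEXTURE_2D, alphaTex);
        glColor3f(r, g, b);                       // one colour per stamp
        float h = size * 0.5f;
        glBegin(GL_QUADS);
            glTexCoord2f(0, 0); glVertex2f(cx - h, cy - h);
            glTexCoord2f(1, 0); glVertex2f(cx + h, cy - h);
            glTexCoord2f(1, 1); glVertex2f(cx + h, cy + h);
            glTexCoord2f(0, 1); glVertex2f(cx - h, cy + h);
        glEnd();
        glDisable(GL_TEXTURE_2D);
    }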
It can be time-consuming to paint an image manually.
Add a feature so that a whole painting can be created automatically.
The user should only have to specify a brush type, size, and angle to use.
Then the program should automatically paint brush strokes over the entire image,
using a randomized brush order and varying the brush attributes slightly as it goes (to increase realism).
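A sketch of one possible auto-paint driver: visit the canvas on a coarse grid in random order and jitter the attributes at each stop. applyBrush() is a hypothetical stand-in for however your brushes are invoked:

    #include <algorithm>
    #include <random>
    #include <utility>
    #include <vector>

    // Assumed entry point: applies the current brush type at (x, y).
    extern void applyBrush(int x, int y, int size, float angleDegrees);

    void autoPaint(int width, int height, int spacing, int baseSize, float baseAngle)
    {
        std::vector<std::pair<int, int>> points;
        for (int y = 0; y < height; y += spacing)
            for (int x = 0; x < width; x += spacing)
                points.push_back({x, y});

        std::mt19937 rng(std::random_device{}());
        std::shuffle(points.begin(), points.end(), rng);   // randomized brush order

        std::uniform_int_distribution<int>    dSize(-baseSize / 4, baseSize / 4);
        std::uniform_real_distribution<float> dAngle(-15.0f, 15.0f);
        for (auto& p : points)                              // slight attribute variation
            applyBrush(p.first, p.second,
                       std::max(1, baseSize + dSize(rng)),
                       baseAngle + dAngle(rng));
    }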
At times, you may want the brush strokes to follow the gradient of a different image than the base image.
Add a button(s) that will cause the direction of brush strokes to be automatically determined from a user specified image.
The "accuracy" of the painting can also be improved by clipping long brush strokes to edges in the image.
Allow the user to load a black-and-white image that represents the edges in the picture.
Then add a checkbox so that the user can turn on edge-clipping,
which will automatically clip brush strokes at edges in the image.
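One way to clip a line stroke is to march outward from its centre along the stroke direction and stop at the first edge pixel. In this sketch, isEdge() is a hypothetical lookup into the loaded edge image (assumed to handle out-of-bounds coordinates):

    #include <cmath>

    // Assumed helper: true if (x, y) is an edge pixel in the loaded edge image.
    extern bool isEdge(int x, int y);

    // Returns the clipped half-length of a stroke centred at (cx, cy) with the
    // given direction (radians), never exceeding halfLength.
    int clipHalfLength(int cx, int cy, float angle, int halfLength)
    {
        float dx = std::cos(angle), dy = std::sin(angle);
        for (int t = 1; t <= halfLength; ++t) {
            if (isEdge(static_cast<int>(cx + t * dx), static_cast<int>(cy + t * dy)))
                return t;           // stop at the first edge crossing
        }
        return halfLength;
    }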
x2
Use the image processing techniques described in class to automatically find the edges in the base image.
Once you have found the edges, add a button to the user interface that will allow the user to select whether or not the brush strokes should be clipped to the edges in the picture.
Design a brush that can be used to stretch and pull the image as if it were rubber.
Example.
x3
Implement "animated" brush strokes that make the image appear to move in interesting ways.
For example, you could paint moving ripples over a picture of a lake, or rustling motions onto grass or trees.
Credit will vary depending on the success of your method.
x3
Given a source image, construct a new image that is really a mosaic of small (thumbnail) images.
To do this, you need to partition the original into tiles and find new thumbnails that are reasonable matches to the tiles.
Then draw the new image by substituting the thumbnails for the tiles. See, for example,
Adam Finkelstein's Web Gothic.
Mosaic sample solution (stand-alone command line executable).
Credit will vary depending on the success of your method.
To get full credit, you must perform some sort of edge detection to accurately determine which thumbnails to use,
and you must use the original color of the selected thumbnails.
Example 1.
Example 2.
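A reasonable starting point for the tile-to-thumbnail matching is nearest average colour; edge-based matching (required for full credit) can then refine the choice. A sketch with illustrative names:

    #include <cstddef>
    #include <vector>

    struct Color { double r, g, b; };

    // Pick the thumbnail whose average colour is closest to a tile's average colour.
    std::size_t bestThumbnail(const Color& tileAvg, const std::vector<Color>& thumbAvgs)
    {
        std::size_t best = 0;
        double bestDist = 1e30;
        for (std::size_t i = 0; i < thumbAvgs.size(); ++i) {
            double dr = thumbAvgs[i].r - tileAvg.r;
            double dg = thumbAvgs[i].g - tileAvg.g;
            double db = thumbAvgs[i].b - tileAvg.b;
            double d = dr * dr + dg * dg + db * db;   // squared RGB distance
            if (d < bestDist) { bestDist = d; best = i; }
        }
        return best;
    }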
Monster Bells
Disclaimer: please consult the course staff before spending any serious time on these. These are all quite difficult (I would say monstrous) and may qualify as impossible to finish in the given time. But they're cool.
Impressionist Video
Implement a method to automatically create non-photorealistic video. One very simple method (that would not get a monster bell) would be to run auto-paint on each frame of a sequence. For credit, your technique should exhibit temporal coherence.
Other artistic methods, such as charcoal sketch, often de-emphasize the background (or leave it out altogether). When processing still images, it is practically impossible to distinguish the subject from the background without any human assistance; however, in a video stream, it may be possible to exploit movement to segment the image. For additional credit, implement a method that effectively exploits this to generate a convincing non-photorealistic version of live video.
For even more extra credit (and probably a conference paper) do all of this in real-time on a consumer PC.
Image collages
Image mosaics are often pieced together by stitching together a bunch of tiny rectangular images. Although this produces a cool effect, it looks computer generated. Implement a method to build collages, given a sample set of images. The primary difference is that the shapes need not be rectangular and that they can also overlap. A while back, some graduate students here implemented a method to do this, ultimately resulting in building a face with pictures of fruit.
Another approach is to note that, when humans build collages, we usually clip shapes out of images (cutting out a picture of a red car and pasting it in as someone's upper lip, for instance). Given a set of data images, we wish to automatically build a collage of some input image, given that we can cut simple shapes from the data images. If you've seen The Truman Show, you may remember that Truman puts together a picture of a woman's face using magazine clippings. This took him a while.
Okay, I completely made up that term. In artistic animations, the movement is often not completely realistic. One technique that has been used for some advertisements and music videos involves sampling the video at a very slow frame rate (say, two frames per second) and then filling in the discarded frames using morphing. You may want to use optical flow (see CSE490CV) to assist with the morph. You may also want to split up the image, morphing different regions and varying the frame rate according to how much movement there is. After you perform this pass, try running your Impressionist program on each frame, using the morph and the optical flow to guide the direction of temporally coherent brush strokes.
For even more extra credit (and probably a conference paper) do all of this in real-time on a consumer PC.