
CSE 142 Project #3: Digital Cameras and Image Manipulation

Part A Due With Homework 5: Sunday, March 9 at 9:00 PM

Project Code Due: Wednesday, March 12 at 9:00 PM

Individual Written Reports Due: Friday, March 14 at 9:00 PM

NEW!! Images for the expansion filter (thanks to Mack Williams for these images): Mixed Colors, Colors

Directions: You should complete all design work, specification, coding in Java, testing of code, and code documentation with your assigned partner. Please work with your partner on all code components at the same computer, using the pair-programming style described in the Williams/Kessler paper and demonstrated in lecture. When working at the computer, you should normally switch roles every 10 or 15 minutes: one person is the driver (the person controlling the keyboard and mouse) while the other is the navigator. The navigator should help the driver by pointing out flaws in the specification, code, and documents; the navigator also keeps track of your goals and helps the team accomplish the steps needed to reach them. We ask that you work on projects in this manner so that you learn to work with another student in the course and to assist one another. Each member brings strengths to the team; we hope you learn to combine your strengths with your partner's to complete this project.

When you are finished with the project, you will turn in one set of Java files for the team. Each member should be satisfied with the content of these files. You may submit project files more than once (You are encouraged to submit files early and often!). We will grade the final set of files you submit electronically.

Once your team is finished with the Java files, each member of the team will write a project report individually and submit this by Friday, March 14. Details about what should be included in project reports can be found on this web page. Please read through the project report guidelines as you work on your project; the guidelines will help you take notes about the project as you complete the steps. Use this project report turnin page to submit your report.

Grading guidelines: Your team will receive the same grade on the Java programming part of the project. Your code will be graded on a scale from 0 to 4 in two categories:

  • Design, specification, implementation, code style, and test cases used (code quality)
  • Functionality of code and adherence to project specifications (code operation)

You will receive an individual grade based on your written report. Your written report will be graded on a scale from 0 to 4 in two categories:

  • Technical content of the written report
  • Writing style

The project is worth a total of 16 points. You can find more details about how this project will be evaluated in this grading guide.

Project Specification:

The DigiCam company has hired you to design algorithms for their digital cameras. Your task is to experiment with algorithms that can be embedded in a digital camera. You will also create algorithms to alter and manipulate digital pictures. The first algorithm that you implement will be the internal algorithm for creating a full-color picture based on light intensity levels the sensors in the camera record when a picture is taken. Below you will find more information about how digital cameras work and the Java classes that comprise the starter code. Throughout this project, feel free to use your own images.

How Color Images are Represented in a Camera or Computer: A picture is a 2-dimensional array of pixels, the individual dots that make up the image. Each image has a specific resolution: the width and height, in pixels, of the grid that makes up the image. Each pixel has three components describing the intensity of red, green, and blue (the primary colors) that combine to produce the color of that particular pixel. In a pixel, the red, green, and blue values range from 0 (none) to 255 (full intensity). A black pixel, for example, has values red=0, green=0, and blue=0, while a white pixel has values red=255, green=255, and blue=255. Values between 0 and 255 for these components produce different colors and intensities. If the red, green, and blue values are the same, the resulting color is a shade of gray. As another example, if there is no green and equal amounts of red and blue, the resulting color is a shade of purple.

How Digital Cameras Work: Digital cameras contain a grid of sensors to record the light intensity at the different parts of the picture. Cheaper cameras (i.e., the consumer ones, which you will be modeling in your project) use color filters over the sensors, so that each sensor captures a single color for a single pixel in the image. So instead of recording a complete color image, the pixels in the image generated by the camera hardware each have only a single color value, either red, green, or blue. Software is used to generate the remaining colors in each pixel.

We'll assume the cameras have a Bayer filter pattern in front of the sensors. Because the human eye is most sensitive to green, the pattern consists of roughly half green sensors, a quarter blue sensors, and a quarter red sensors: rows of alternating green and red filters are interleaved with rows of alternating blue and green filters.

Once the camera captures the intensities for the single color channels, the algorithm embedded in the camera must interpolate the other two channels to create the complete color image. These algorithms are called demosaicing algorithms, since they convert the mosaic of separate colors into an image of true colors. Your main task in this project is to implement such an algorithm.

References about Digital Cameras: http://electronics.howstuffworks.com/digital-camera5.htm, http://www.shortcourses.com/choosing/how/03.htm

Description of Starter Code:

Seven files comprise the starter code: six classes and one interface. You will need to modify only one of them, and you will create your own classes for the different tasks outlined in this project. The files that you should not modify are marked below.

SnapShop: Do not modify this class. This class creates the graphical user interface for the application and performs the loading operation for images. It controls the buttons that are part of the user interface; when a button is selected, the corresponding filter method in a class implementing the Filter interface is called. This class provides both the user view (the application window) and the engine that carries out the actions done by the user. You do not need to look at or understand this class file.     SnapShop.java

PixelImage: Do not modify this class. This class keeps track of the current test image (the image displayed on the left in the application). When creating a new filter class, you will want to get the data (2-Dimensional array of Pixels) and modify the data. Important methods in this class are getWidth(), getHeight(), getData(), and setData(Pixel[][] data). The data is stored in a 2-Dimensional array of Pixel objects where the first dimension corresponds to the rows and the second dimension corresponds to the columns.  PixelImage.java

Pixel: Do not modify this class. This class represents Pixel objects. Each Pixel has four instance variables: color is the color channel that was sensed by the camera; red is the value of the red channel; green is the value of the green channel; and blue is the value of the blue channel. Do not create new Pixel objects when modifying the image data; instead, modify the instance variables of the Pixel objects. Notice that the instance variables are declared public, which means that you can access the instance variable directly. If you have a Pixel named currentPixel, then you can access the red channel by currentPixel.red in your code.  Pixel.java (updated version with toString method posted 2/28. Either version will work properly.)
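
For instance, a fragment like the following (the variable names are ours, not part of the starter code) reads and writes channels directly; data is the array returned by PixelImage's getData method:

    // data comes from PixelImage.getData(); row and col are loop indices.
    Pixel currentPixel = data[row][col];
    int r = currentPixel.red;      // read the red channel
    currentPixel.green = 255;      // set the green channel to full intensity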

Filter: Do not modify this interface. Filter is simply an interface that all filter classes must implement. All classes implementing the Filter interface must have a method called filter that does not return anything and takes as a parameter a PixelImage. Filter.java
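
Given that description, the interface amounts to a single method; this sketch uses our own parameter name, so check Filter.java for the original:

    // The Filter interface as described above: one void method taking a PixelImage.
    public interface Filter {
        void filter(PixelImage theImage);
    }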

DigitalCameraFilter: Do not modify this class. This class performs the conversion from a full-color image file to the corresponding data that would be gathered by the sensors in a camera. Because we're not working with real digital cameras, this class performs the inverse of demosaicing: it converts a full-color image into the single-channel sensor data a camera would record. DigitalCameraFilter.java

FlipHorizontalFilter: Do not modify this class. This class serves as an example of a class that implements the Filter interface. The filter method in this class flips the image across the vertical midline. The filter method is called when the associated button is pressed on the application window. FlipHorizontalFilter.java

SnapShopConfiguration: You may modify this class. This class configures the buttons for the application and has the main method (which simply creates a new SnapShop object). When you create filters, add the filter to the SnapShop passed to the configure method. Look at the configure method in this class. SnapShopConfiguration.java
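
For example, registering a new filter inside configure might look like the following sketch; the addFilter method name and button label here are assumptions on our part, so mirror whatever call the starter code's configure method actually makes for FlipHorizontalFilter:

    // Inside SnapShopConfiguration.configure (theShop is the SnapShop parameter):
    theShop.addFilter(new FlipHorizontalFilter(), "Flip Horizontal");
    theShop.addFilter(new MyDemosaicFilter(), "Demosaic");  // hypothetical filter class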

Image Files: billg.jpg, diana.jpg, tammyInOslo.jpg (you may need to right-click or option-click these images to get a "save as..." dialog box instead of having them displayed directly by your web browser.) [You may use your own images for this project. Any .jpg image should work fine. Be sure to use images that are at most the size of the image in diana.jpg; otherwise, it may take a long time for your filters to process these images.]

Project Part A

1. Your first task is to familiarize yourself with the existing application. After downloading and compiling the Java files, run the application by typing the following in the interactions window:

java SnapShopConfiguration

You should see a dialog box pop up. Click on the "load new file" button. This brings up a file browser window. Select one of the .jpg files that you downloaded for the project. You should see the original image on the right and a grainy image on the left. The grainy image is the data collected from the red, green, and blue color filters before a demosaicing algorithm transforms the image to full color (which should look like the one you see on the right). There are two filters already in place. The Digital Camera Filter, executed when its button is pushed, creates the mosaic of separate colors; this filter also gets executed (on the left image) when you load a new image. The Flip Horizontal filter flips the image across the vertical center when its button is pushed.

Note: You can specify a default image in the SnapShopConfiguration class in the configure method. You can change the default file with the setDefaultFilename method, which takes a String as a parameter. The file loader expects the default file name to be fully specified -- that is, you should include the full path listing the directories. For example, you might have c:\directory\image.jpg as the full path name. To specify this as a string, you need to write "c:\\directory\\image.jpg" [note the two \s instead of a single \]. Setting the default filename will save you time browsing for the file each time you want to load the default image.
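
For example, assuming the call is made on the SnapShop object passed to configure (here named theShop):

    // Set a default image so you need not browse for it on every run.
    // Note the doubled backslashes in the Windows path.
    theShop.setDefaultFilename("c:\\directory\\image.jpg");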

2. Once you've gotten a feel for how the application works, you should create a simple demosaicing algorithm. Create a new Java class that implements the Filter interface. Every class that implements the Filter interface must implement the method filter (see the Filter interface). To add a corresponding button for your filter in the application, add a line to the configure method in the SnapShopConfiguration class. The data for the image is represented as a 2-dimensional array of Pixel objects where the first dimension represents the row and the second dimension represents the column. Your filter class should transform the image data by modifying two of the three color instance variables (red, green, and blue) of each Pixel object. Each Pixel object has a single accurate color channel (one that you should not modify) and an instance variable named color that is set to the accurate color channel. For example, say a pixel was produced by a red filter; then its green and blue channels will be equal to 0 before the demosaicing filter is executed. Your job is to calculate, for each Pixel in the image, the values of the two color channels that did not get sensed as the camera took the picture. To perform the calculation, look at the pixel's surrounding neighbors, find the ones that sensed the color you are trying to calculate, and average these values. Remember, color values go from 0 to 255. Also, remember not to go out of bounds in the 2-dimensional array of Pixel objects. For example, for a pixel whose accurate color channel is blue (that is, a blue filter was over its sensor when the picture was taken), you would average nearby red-sensing neighbors to fill in the red channel and nearby green-sensing neighbors to fill in the green channel.
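
A minimal sketch of such a filter follows. It assumes the Pixel class exposes constants Pixel.RED, Pixel.GREEN, and Pixel.BLUE that the color field can be compared against; check Pixel.java for the actual representation and adjust accordingly.

    // A minimal demosaicing sketch: for each pixel, fill in the two channels
    // its sensor did not record by averaging the immediate neighbors that
    // did record them.
    public class SimpleDemosaicFilter implements Filter {

        public void filter(PixelImage pi) {
            Pixel[][] data = pi.getData();
            for (int row = 0; row < pi.getHeight(); row++) {
                for (int col = 0; col < pi.getWidth(); col++) {
                    Pixel p = data[row][col];
                    // Estimate only the channels the sensor did not record;
                    // the sensed channel (p.color) is left untouched.
                    if (p.color != Pixel.RED)   p.red   = neighborAverage(data, row, col, Pixel.RED);
                    if (p.color != Pixel.GREEN) p.green = neighborAverage(data, row, col, Pixel.GREEN);
                    if (p.color != Pixel.BLUE)  p.blue  = neighborAverage(data, row, col, Pixel.BLUE);
                }
            }
            pi.setData(data);
        }

        // Average the requested channel over the (up to 8) immediate neighbors
        // whose sensors recorded that color, staying inside the array bounds.
        private int neighborAverage(Pixel[][] data, int row, int col, int wanted) {
            int sum = 0;
            int count = 0;
            for (int r = row - 1; r <= row + 1; r++) {
                for (int c = col - 1; c <= col + 1; c++) {
                    if (r < 0 || r >= data.length || c < 0 || c >= data[r].length) {
                        continue;  // neighbor is outside the image
                    }
                    if (r == row && c == col) {
                        continue;  // skip the pixel itself
                    }
                    Pixel n = data[r][c];
                    if (n.color == wanted) {
                        // A neighbor's sensed channel is never modified by this
                        // filter, so reading it mid-pass is safe.
                        if (wanted == Pixel.RED)        sum += n.red;
                        else if (wanted == Pixel.GREEN) sum += n.green;
                        else                            sum += n.blue;
                        count++;
                    }
                }
            }
            return count == 0 ? 0 : sum / count;
        }
    }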

3. Test your filter, making sure that it does, in fact, set the color channels for the Pixel objects appropriately. You can load an image, apply your filter, and see what happens to the image.

4. Write another class for a filter that flips the image vertically across the horizontal midline by reflecting the values across this midline. An image with a person's head on the top should have an image of the head on the bottom after this filter is applied to the image. See the FlipHorizontalFilter class for an example of how to flip the picture horizontally across the vertical midline. Test your filter to be sure that it works as expected.
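
One way to sketch the vertical flip, since rows live in the first dimension of the data array, is to swap whole rows of Pixel references:

    // Flips the image top-to-bottom by swapping rows of Pixel references.
    public class FlipVerticalFilter implements Filter {
        public void filter(PixelImage pi) {
            Pixel[][] data = pi.getData();
            int height = pi.getHeight();
            for (int row = 0; row < height / 2; row++) {
                Pixel[] temp = data[row];
                data[row] = data[height - 1 - row];
                data[height - 1 - row] = temp;
            }
            pi.setData(data);
        }
    }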

5. Turn in three Java files for Part A. One should be your class that applies the demosaicing algorithm and one should be the class that flips the image vertically. Also turn in your SnapShopConfiguration.java file. Include these files with your solution to homework 5 on the homework 5 turnin page. Your classes will be compiled with the original, unmodified starter files containing the rest of the SnapShop code, so be sure your code works correctly with the original files.

Project Part B

1. Find out what happens when sensors in a digital camera are broken. Write a class that implements the Filter interface that breaks a few sensors. You can simulate breaking sensors by setting the red, green, and blue color channels in a Pixel object to 0. Feel free to break as many sensors as you want. [Try breaking sensors that filter different colors of light. What happens to broken red sensors? How about broken green sensors? And blue?] After you break the sensors, apply your demosaicing filter to the image. [The order should be load an image, break a few sensors, and then apply the demosaicing filter.] What happens? (Include answers to questions in your written report.)
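
A sketch of one way to do this; which pixels get "broken" is entirely up to you, and the diagonal spacing here is arbitrary:

    // Simulates broken sensors by zeroing all three channels of a few pixels.
    public class BrokenSensorFilter implements Filter {
        public void filter(PixelImage pi) {
            Pixel[][] data = pi.getData();
            int limit = Math.min(pi.getHeight(), pi.getWidth());
            for (int i = 0; i < limit; i += 25) {  // every 25th pixel down the diagonal
                data[i][i].red = 0;
                data[i][i].green = 0;
                data[i][i].blue = 0;
            }
            pi.setData(data);
        }
    }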

2. Write a class that implements the Filter interface that expands the range of color channels used. After the demosaicing algorithm is applied, figure out the minimum and maximum values for each color channel. Let's say you find that the minimum value is 10 and the maximum value is 100 for the blue channel. Take this range and expand it to fill the range from 0 to 255. Thus, 10 would map to 0 and 100 would map to 255. You may expand the range by linearly scaling the values, or you may decide to do something more complicated based on the distribution of color values. Be sure you include your transformation description in English in your specifications. Apply this type of mapping to all three color channels in your image. [After loading the image, the order of filters should be first demosaicing and then this expansion filter.]
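
For the linear version, the mapping for one channel value is newValue = (value - min) * 255 / (max - min); a helper for your filter class might look like this sketch (guarding against a flat channel where max equals min):

    // Linearly rescale one channel value from [min, max] to [0, 255].
    // min and max are the smallest and largest values of that channel
    // found anywhere in the image after demosaicing.
    private int expand(int value, int min, int max) {
        if (max == min) {
            return value;  // flat channel: no range to stretch
        }
        return (value - min) * 255 / (max - min);
    }

With min = 10 and max = 100, this maps 10 to 0 and 100 to 255, as in the example above.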

3. Write a new demosaicing class that utilizes more than just the immediate neighboring pixels. Try averaging values over two levels of neighboring pixels, where level 1 comprises the pixels adjacent to the pixel for which you are calculating and level 2 comprises the pixels adjacent to the level-1 pixels. You may want to weight the level-1 pixels more than the level-2 pixels when averaging, as in the sketch below. Compare your two demosaicing algorithms. What happens as you use a larger neighborhood for interpolating color channels? What happens if sensors are broken and this new filter is used?
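
One illustrative weighting, and by no means the only reasonable one, counts each level-1 neighbor twice and each level-2 neighbor once:

    // Weighted average of matching neighbors from two rings: sum1/count1 come
    // from level-1 (distance-1) neighbors, which count double; sum2/count2
    // come from level-2 (distance-2) neighbors.
    private int weightedAverage(int sum1, int count1, int sum2, int count2) {
        int totalWeight = 2 * count1 + count2;
        if (totalWeight == 0) {
            return 0;  // no matching neighbors at either level
        }
        return (2 * sum1 + sum2) / totalWeight;
    }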

When you are finished, turn in all your classes representing Filters (including those from Part A). Also, turn in the class SnapShopConfiguration.java. You and your partner should turn in one set of files. Use this turnin form to submit your work.

Additional Enrichment

[If you and your partner have finished parts A and B, feel free to work on the following tasks. You need not work with your partner on this section if he/she does not want to complete the additional enrichment.]

  • You are free to manipulate the color values for each pixel in the image. Try to make a photograph look more like a watercolor painting by altering the colors. You may need to experiment with color combinations to get the desired colors. Implement a filter for transforming the picture into an image more like a watercolor painting.
  • One important task in the computer science field of vision/graphics is recognizing objects in a picture. An important algorithm for helping recognize objects is an edge detector, which finds the edges of objects in a picture. You can find edges in color pictures by looking for contrasting colors. Write a filter that outlines the edges of objects in the picture (a sketch of one simple approach appears after this list).
  • Write an original image manipulation filter of your choice.
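
For the edge detector, one simple approach (a sketch under our own assumptions, with an arbitrary threshold you would want to tune) is to compare each pixel's brightness to that of its right and lower neighbors:

    // A very simple edge-detection sketch: mark a pixel black when its
    // brightness differs from its right or lower neighbor by more than a
    // threshold, white otherwise.
    public class EdgeFilter implements Filter {
        private static final int THRESHOLD = 30;  // arbitrary starting point

        public void filter(PixelImage pi) {
            Pixel[][] data = pi.getData();
            int height = pi.getHeight();
            int width = pi.getWidth();
            // Compute brightness first so later writes don't affect comparisons.
            int[][] bright = new int[height][width];
            for (int r = 0; r < height; r++) {
                for (int c = 0; c < width; c++) {
                    Pixel p = data[r][c];
                    bright[r][c] = (p.red + p.green + p.blue) / 3;
                }
            }
            for (int r = 0; r < height; r++) {
                for (int c = 0; c < width; c++) {
                    boolean edge =
                        (r + 1 < height && Math.abs(bright[r][c] - bright[r + 1][c]) > THRESHOLD)
                        || (c + 1 < width && Math.abs(bright[r][c] - bright[r][c + 1]) > THRESHOLD);
                    int v = edge ? 0 : 255;  // edges black on a white background
                    data[r][c].red = v;
                    data[r][c].green = v;
                    data[r][c].blue = v;
                }
            }
            pi.setData(data);
        }
    }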

