To implement some of the requirements of this project, you will need to know a little about image processing. More specifically, this document will cover some of the aspects of edge detection and image gradients.

Often in image processing, we apply an operation to just a small region of an image at
a time. One of the most common techniques for applying such operations is something
called a filter. Different image filters are distinguished by their *kernel*
and their *support*. Basically, you can think of a filter kernel as a grid of values
that we overlay on top of the pixels of an image. Here is an example of a kernel:

    0 | 1 | 1 | 1 | 0
    1 | 1 | 2 | 1 | 1
    1 | 2 | 3 | 2 | 1
    1 | 1 | 2 | 1 | 1
    0 | 1 | 1 | 1 | 0

We apply an operation to a pixel in the image as follows. First, imagine placing the center of the kernel on top of the pixel of interest. Multiply the value at each position in the kernel by the color of the pixel underneath that position. Then, add up all these products, and divide by the sum of the values in the kernel (if it's nonzero).
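As a concrete sketch of the procedure above, here is one way it might look in Python. The function name `apply_kernel_at` and the border policy (skipping kernel positions that fall outside the image) are our own choices for illustration, not part of the project specification:

```python
import numpy as np

def apply_kernel_at(image, kernel, row, col):
    """Apply a filter kernel centered on pixel (row, col) of a grayscale image.

    Kernel positions that fall outside the image are skipped (one simple
    border policy; clamping or mirroring the edge pixels is also common).
    """
    kh, kw = kernel.shape
    half_h, half_w = kh // 2, kw // 2
    total, weight = 0.0, 0.0
    for i in range(kh):
        for j in range(kw):
            r = row + i - half_h
            c = col + j - half_w
            if 0 <= r < image.shape[0] and 0 <= c < image.shape[1]:
                # Multiply each kernel value by the pixel underneath it.
                total += kernel[i, j] * image[r, c]
                weight += kernel[i, j]
    # Divide by the sum of the kernel values used, if it's nonzero.
    return total / weight if weight != 0 else total
```

Note that applying this to a uniform region returns the region's color unchanged, which is a quick sanity check for any blurring kernel.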

We can do many interesting things by changing the values in the kernel. The rest of this section will explain how kernels and other image manipulation methods are used to find edges and gradients in an image.

The first step in finding edges and gradients in an image is to blur the image. (Note: for finding edges and image gradients, we use a grayscale version of the original image.) Blurring reduces the "noise" (randomness) in the picture, and allows us to find edges and determine image gradients more accurately.

To get a grayscale version of the image, we take a weighted average of the red, green, and blue channels, using the following weights:

Intensity = 0.299R + 0.587G + 0.114B

Why do we use these weights? Well, the human eye is better at detecting some colors than others. It picks up green the best, followed by red then blue. We're interested in producing an image that represents what the eye thinks the intensity of light is at each pixel, and this weighting achieves that effect.
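In code, the conversion is a one-liner. Here is a sketch (the function name is our own):

```python
def to_grayscale(r, g, b):
    """Convert an RGB pixel to a perceptual intensity value,
    weighting green most heavily and blue least."""
    return 0.299 * r + 0.587 * g + 0.114 * b
```

Since the weights sum to 1.0, a pure white pixel (255, 255, 255) maps to intensity 255, and the grayscale image spans the same range as the original channels.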

There are several choices for the filter kernel to use for blurring the image. We will present three of them here.

__Box Filter:__ The first, and the simplest, is the "box filter". The kernel for the box filter is filled with all 1s.

__Bartlett Filter:__ Somewhat more complicated is the Bartlett filter, in which pixels closer to the center are weighted more heavily than those further away. For the Bartlett filter, imagine placing a cone on top of the kernel, and using the height of the cone at each location to determine that position's relative weight.

__Gaussian Filter:__ As with the Bartlett filter, pixels near the center are weighted more heavily in the Gaussian filter. However, instead of a cone placed over the kernel, imagine a 3-dimensional bell curve on top of it.

In general, which filter you should use depends on the image you're manipulating and what you want to accomplish. The box filter is the easiest to implement, but the Bartlett and Gaussian filters often produce "better" blurring results.
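One way to generate these three kernels is sketched below. The function names, the use of an outer product of "tent" weights as a discrete stand-in for the Bartlett cone, and the `sigma` parameter for the Gaussian are all our own illustrative choices:

```python
import numpy as np

def box_kernel(size):
    # All entries equal; normalization happens when the filter is applied.
    return np.ones((size, size))

def bartlett_kernel(size):
    # Tent weights in 1-D, e.g. [1, 2, 3, 2, 1] for size 5; their outer
    # product gives a pyramid that approximates the cone (assumes odd size).
    half = (size + 1) // 2
    tent = np.concatenate([np.arange(1, half + 1),
                           np.arange(half - 1, 0, -1)]).astype(float)
    return np.outer(tent, tent)

def gaussian_kernel(size, sigma=1.0):
    # Sample a 2-D bell curve centered on the middle of the kernel.
    half = size // 2
    ax = np.arange(-half, half + 1)
    xx, yy = np.meshgrid(ax, ax)
    return np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
```

In all three cases the kernel values need not sum to 1, because the filtering step divides by the sum of the kernel values anyway.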

Once you have blurred the grayscale version of the original image, you are ready to do edge detection. The method of edge detection we present here is known as Sobel edge detection.

To do Sobel edge detection, we once again use filters. This time we make two passes over the image, using a different kernel each time. Here are the filter kernels that are used.

First kernel:

     1 |  2 |  1
     0 |  0 |  0
    -1 | -2 | -1

Second kernel:

     1 |  0 | -1
     2 |  0 | -2
     1 |  0 | -1

By using both these filter kernels on each pixel in the image, we get two new values at each pixel location (call them temp1 and temp2). We then produce a single value at each location using this formula: sqrt( temp1*temp1 + temp2*temp2 ). (You may notice a striking similarity to the distance formula used in mathematics.) Finally, we say that if this single value is above a certain threshold (usually specified by the user), then there is an edge at that location.
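The two passes and the combining formula can be sketched as follows for a single 3x3 neighborhood. Here we simply multiply each kernel value by the pixel beneath it and sum (note that each Sobel kernel's values sum to zero, so no division is needed); the names `SOBEL_X`, `SOBEL_Y`, `temp1`, and `temp2` follow the text, but the rest is our own illustration:

```python
import math
import numpy as np

# The two Sobel kernels from the text.
SOBEL_Y = np.array([[ 1,  2,  1],
                    [ 0,  0,  0],
                    [-1, -2, -1]], dtype=float)
SOBEL_X = np.array([[1, 0, -1],
                    [2, 0, -2],
                    [1, 0, -1]], dtype=float)

def sobel_magnitude(patch):
    """Edge strength at the center of a 3x3 grayscale patch."""
    temp1 = float(np.sum(SOBEL_X * patch))
    temp2 = float(np.sum(SOBEL_Y * patch))
    return math.sqrt(temp1 * temp1 + temp2 * temp2)

def is_edge(patch, threshold):
    # There is an edge here if the combined value exceeds the
    # (usually user-specified) threshold.
    return sobel_magnitude(patch) > threshold
```

For example, a patch with a sharp vertical step in brightness produces a large `temp1` and a magnitude well above zero, while a uniform patch produces a magnitude of exactly zero.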

Given a blurred image, we can also produce the gradients for an image. To understand intuitively what the image gradient is, recall what a gradient is in mathematics --- the direction of maximum rate of change. For a surface (such as an array of pixels), the gradient is the direction in which the color changes the fastest (and perpendicular to the gradient, the color changes the slowest --- i.e., it stays close to the same color).

Why is this important to us (and important for the Impressionist program, in particular)? Well, if we know (at each pixel) the direction in which the color is changing the least, we can orient our brush strokes in that direction to make the painting look more realistic. (Because if we paint a stroke in a certain direction, the color of the paint stays the same during that stroke!)

So how do we implement this? If your answer contained the word "filters," you're on the right track. We will actually produce two gradients, the x-gradient (telling how fast the color changes along each row), and the y-gradient (telling how fast the color changes along each column). Here are the kernels that are used.

x-gradient filter kernel:

    -1 | 1

y-gradient filter kernel:

     1
    -1

Using these kernels gives us two separate images, one of which tells us how fast the color changes (at each pixel) in the x-direction, and one telling us how fast it changes in the y-direction. Finally, we use this information to determine the direction of the maximum rate of change in color, and we orient our brush strokes perpendicular to that direction.
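A sketch of this final step is below. A two-element kernel has no true center, so we take the difference toward the next pixel (one common convention); the function names and the assumption that row indices grow downward are our own:

```python
import math
import numpy as np

def gradients(image, row, col):
    """Apply the [-1, 1] and [1, -1]^T kernels at one pixel of a
    grayscale image (a convention sketch; the kernels have no center)."""
    gx = float(image[row, col + 1] - image[row, col])  # x-gradient
    gy = float(image[row, col] - image[row + 1, col])  # y-gradient (rows grow downward)
    return gx, gy

def stroke_angle(gx, gy):
    # The gradient points along the maximum rate of change in color;
    # brush strokes go perpendicular to it, where color changes least.
    return math.atan2(gy, gx) + math.pi / 2.0
```

For instance, at a pixel where brightness increases only to the right, the gradient points along the x-axis, so the stroke is oriented vertically.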