CSE 477 -- Video Imaged Spatial Positioning Project

DESIGN ISSUES

Understanding the details of the specifications for the parts we intend to use is the primary design issue we currently face. On a theoretical level, almost all of the design issues have been worked out. Over the last two weeks, however, we have spent the majority of our time reading documentation on how to interface the individual components: the cameras, the XS-40, and the Atmel microcontroller. Beyond that, three secondary issues regarding specific points of our overall design remain to be worked out.

One open question is how to locate the laser marker in the visual data. Our original plan was simply to look for the brightest point in the pixel data and use its coordinates as the location of the laser marker. After seeing the VGA display of the camera output, we immediately realized that such a simple scheme would not work.

First, the images are speckled with noise that manifests as scattered points of saturation (100% brightness). This is a problem because no pixel can be brighter than a saturated one, and every frame contains many saturated pixels due to this noise. If the hardware simply looks for the brightest point, the resulting pixel coordinates are useless: there is no way to tell whether that point is the laser marker or an artifact of the noise. To resolve this problem, we plan to average intensity over three frames instead of using a single frame. Since saturated noise pixels appear at random locations from frame to frame, while the laser marker stays in essentially the same place, averaging should push the noise pixels below the intensity of the marker.
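As a minimal sketch of this averaging step (written in C with an assumed frame size and layout; the actual logic will live in hardware on the XS-40, and all names here are hypothetical):

    #include <stdint.h>

    #define WIDTH  320          /* assumed frame dimensions */
    #define HEIGHT 240

    typedef struct {
        uint8_t pixel[HEIGHT][WIDTH];   /* 8-bit intensity, 255 = saturated */
    } frame_t;

    /* Average three frames and return the coordinates of the brightest
     * averaged pixel. A pixel saturated by noise in only one of the
     * three frames averages to at most 255/3 = 85, while the laser
     * marker, which stays put across frames, remains near full
     * intensity. */
    static void find_marker(const frame_t *f0, const frame_t *f1,
                            const frame_t *f2, int *mx, int *my)
    {
        uint16_t best = 0;
        *mx = 0;
        *my = 0;
        for (int y = 0; y < HEIGHT; y++) {
            for (int x = 0; x < WIDTH; x++) {
                uint16_t avg = (f0->pixel[y][x] + f1->pixel[y][x]
                                + f2->pixel[y][x]) / 3;
                if (avg > best) {
                    best = avg;
                    *mx = x;
                    *my = y;
                }
            }
        }
    }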

Second, we found that the inexpensive camera we are currently using saturates very easily. Any light source appears in the image as a region of saturation, as does any white surface reflecting even a moderate light source. Again, these saturated regions mean that we cannot find the laser marker simply by looking for the brightest point in a single frame.

Lowering the camera's exposure does reduce the noise (the randomly saturated pixels) in the image, but it also makes the laser marker harder to see. We hope that by using a much stronger laser together with a reduced exposure time, we can make the marker appear as a uniquely bright dot in the image. Ideally, we would set the exposure time low enough that reflective objects no longer saturate the image data. In practice, however, preventing this undesired saturation requires an exposure time so low that even the laser marker is significantly dimmed. Rather than trying to remove the regions of saturation caused by light sources, we will therefore impose a usage constraint: VISPS must not have any substantial light source in its field of vision. For most indoor settings, we feel this constraint is reasonable.
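To make the "uniquely bright dot" criterion concrete, a simple acceptance test might require the peak averaged intensity to clear a brightness threshold before coordinates are reported. The threshold value here is an assumption that would need tuning against the real camera and laser:

    /* Hypothetical validity check on the peak averaged intensity.
     * MARKER_THRESHOLD is illustrative, not a measured value; a frame
     * with no marker visible should fail this test rather than yield
     * a bogus coordinate. */
    #define MARKER_THRESHOLD 200

    static int marker_is_valid(uint16_t peak_intensity)
    {
        return peak_intensity >= MARKER_THRESHOLD;
    }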

Another design issue we face is whether the XS-40 board has enough pins to receive data from both cameras and to interface with the Atmel microcontroller for outputting the angle data; these operations require 24 pins in total. As mentioned above, our understanding of the XS-40 board is not yet complete. In particular, we need to learn how to use I/O pins that are normally mapped to signal pads for other purposes. If the board turns out not to have enough unused pins to handle both cameras and the Atmel microcontroller, we will need one XS-40 per camera.
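For illustration only, one way the 24 pins might break down (only the total of 24 comes from our design; the per-interface split below is assumed):

    /* Hypothetical pin budget for a single XS-40. Only the total of
     * 24 is from our design; the split is assumed for illustration. */
    #define CAMERA_DATA_PINS  8   /* assumed 8-bit pixel bus per camera */
    #define CAMERA_SYNC_PINS  2   /* assumed sync lines per camera      */
    #define NUM_CAMERAS       2
    #define ATMEL_IFACE_PINS  4   /* assumed lines to the Atmel         */

    #define TOTAL_PINS (NUM_CAMERAS * (CAMERA_DATA_PINS + CAMERA_SYNC_PINS) \
                        + ATMEL_IFACE_PINS)   /* = 2 * 10 + 4 = 24 */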

We also still face the question of how accurate our X, Y, Z coordinate outputs will be even if every component of the system works perfectly. There are several places where significant random error can enter the system, and because the output of one component is the input of the next, and the result of one computation is sometimes used in computing another value, these errors compound multiplicatively. Multiplicative error across several stages is likely to accumulate into something substantial, so the system as a whole may produce X, Y, Z coordinates that are not very accurate.
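As a back-of-the-envelope illustration of why the error compounds (the error magnitudes below are assumed, not measured): if stage $i$ contributes a small relative error $\varepsilon_i$ and each stage's output feeds multiplicatively into the next, then

    $$(1 + \varepsilon_{\text{total}}) \;=\; \prod_{i=1}^{n} (1 + \varepsilon_i) \;\approx\; 1 + \sum_{i=1}^{n} \varepsilon_i \quad \text{for small } \varepsilon_i,$$

so, for example, four stages each contributing 2% relative error would combine to roughly 8% relative error in the final X, Y, Z coordinates.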