CSE 477 -- Video Imaged Spatial Positioning Project


DESIGN ANALYSIS

There are three major factors that will determine the success of the project. These factors are:

  1. Digital image consistency
  2. System accuracy
  3. Data collection speed versus coordinate calculations

In the following sections we will discuss each area in more detail.

Digital Image Consistency

For digital image consistency, we are concerned that the images from the two cameras may not be synchronized. This can occur if the cameras capture images at slightly different rates while the laser marker is moving rapidly. In that case, one camera may stream an image in which the laser marker is in the top left corner while the other camera streams an image in which the marker is in the lower right corner. To handle this situation, the image processor will compare the row coordinates of the pixels that define the marker in each image. If the row coordinates are within six pixels of each other, we will keep the coordinates the image processor finds; otherwise we will simply discard them. We can do this because the cameras are mounted on a level plane so that the centers of both cameras lie on the same horizontal plane. Ideally, then, the laser marker will appear at the same vertical position in both images.
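Below is a minimal sketch in C of this consistency check. The six-pixel threshold comes from the design above; the function and constant names (marker_rows_consistent, ROW_TOLERANCE) are illustrative, not part of the actual image processor interface.

    /* Sketch of the row-consistency check.  The six-pixel threshold
       comes from the design discussion; everything else here is an
       illustrative name, not the real image processor interface. */
    #include <stdlib.h>   /* abs() */

    #define ROW_TOLERANCE 6

    /* Returns 1 if the marker rows from the two cameras agree closely
       enough to keep the coordinate set, 0 if it should be discarded. */
    int marker_rows_consistent(int row_cam1, int row_cam2)
    {
        return abs(row_cam1 - row_cam2) <= ROW_TOLERANCE;
    }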

There is another consistency issue: what if the laser marker is visible in the image from one camera but not in the image from the other? This condition will occur because of the cameras' limited viewing angles: the marker can be placed so that camera 1 sees it but camera 2 does not. To handle this situation, the image processor will not notify the data correlator that new data is available, and whenever no new data is available, the data correlator will output nothing. This is a reasonable solution and is within our operating parameters.
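A sketch of the correlator's gating behavior follows. It assumes the image processor hands over a validity flag along with the coordinates, which matches the notification scheme just described; the structure layout and names are assumptions, not the actual interface.

    /* Sketch of the data correlator's gating logic.  The pixel_pair_t
       layout and the valid flag are assumptions about the hand-off. */
    typedef struct {
        int valid;        /* set only when BOTH cameras see the marker */
        int row1, col1;   /* marker pixel coordinates from camera 1    */
        int row2, col2;   /* marker pixel coordinates from camera 2    */
    } pixel_pair_t;

    void correlator_step(const pixel_pair_t *p)
    {
        if (!p->valid)
            return;       /* no new data: output nothing */
        /* ...compute and output the spatial coordinates... */
    }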

System Accuracy

The primary source of error is the low resolution of the cameras. Each pixel maps a small area of the scene. As shown in Figure 12, an object far away can move a few inches and still map to the same pixel. The distance the object can move without this change in position being detected is limited by the resolution of the cameras.


Figure 12 - Pixel variance versus depth

The system therefore has an inherent source of error. To address it, we limit the maximum distance an object can be from VISPS, which bounds this systematic error.
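To make the error concrete, the following sketch estimates the width of scene a single pixel covers at a given depth. The field of view and resolution used here are assumed values for illustration, not measured specifications of our cameras.

    /* Rough per-pixel footprint at depth d: a marker can move this far
       laterally and still land on the same pixel.  HFOV_DEG and
       H_PIXELS are assumed values, not the real camera specs. */
    #include <math.h>
    #include <stdio.h>

    #define HFOV_DEG  45.0     /* assumed horizontal field of view */
    #define H_PIXELS  320.0    /* assumed horizontal resolution    */
    #define PI        3.14159265358979

    double pixel_footprint(double depth)
    {
        double half_fov = (HFOV_DEG / 2.0) * (PI / 180.0);
        return 2.0 * depth * tan(half_fov) / H_PIXELS;
    }

    int main(void)
    {
        /* At 10 ft the footprint is ~0.026 ft (about a third of an
           inch); it grows linearly with depth, so 40 ft gives 4x. */
        printf("footprint at 10 ft: %.3f ft\n", pixel_footprint(10.0));
        printf("footprint at 40 ft: %.3f ft\n", pixel_footprint(40.0));
        return 0;
    }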

Aside from systematic error, there is a potential loss of precision in carrying out the trigonometric calculations. To address this problem, we plan to emulate floating-point operations in software on the Atmel mC. Using floating-point emulation we can maintain the required precision; the tradeoff is reduced throughput on the Atmel mC. In the next section we will argue that this loss of throughput will not affect the system as a whole.
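As an illustration, here is one common formulation of the depth calculation written with C floats; on the Atmel mC the compiler implements these float operations with emulated (software) floating-point routines. The baseline and focal-length constants are assumed calibration values, and the disparity relation shown is the standard stereo formula, which may differ in detail from our exact trigonometry.

    /* Depth from horizontal disparity, in emulated floating point.
       BASELINE and FOCAL_PIXELS are assumed calibration constants. */
    #define BASELINE      1.0f    /* camera separation, assumed units */
    #define FOCAL_PIXELS  400.0f  /* focal length in pixels, assumed  */

    float marker_depth(float col_cam1, float col_cam2)
    {
        float disparity = col_cam1 - col_cam2;
        if (disparity <= 0.0f)
            return -1.0f;         /* degenerate pair: no valid depth */
        return (BASELINE * FOCAL_PIXELS) / disparity;
    }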

So by limiting the distance between the marker and the cameras and by using floating-point emulation, we improve the accuracy of VISPS.

Rate of Data Collection Versus Data Output

In VISPS, we want to process every valid set of pixel coordinates from the image processor subsystem and generate the corresponding spatial coordinates. The question we must answer is whether the image processor produces valid pixel coordinates faster than the Atmel mC can consume them. If it does, the Atmel mC will not be able to keep up; we will lose pixel coordinates and fail to meet our goal. We argue that, in the current design, we will be able to process every valid set of pixel coordinates.

Currently the cameras are capable of only 30 frames per second, so each camera outputs a frame every 1/30th of a second. The image processor averages each three consecutive frames to determine the pixel location of the marker, so if there is no error, the fastest it can produce pixel coordinates is one set every 1/10th of a second. If the Atmel mC runs at 24 MHz and pixel coordinates arrive every 1/10th of a second, the Atmel mC has 2.4 million cycles to process each set. In those 2.4 million cycles, we need to grab data from the image processor, process the data, and output the generated coordinates. We believe that even with floating-point emulation we have enough cycles to complete all of the mathematical operations. As a rough estimate, we believe the code will contain fewer than 400 lines. If every floating-point operation translates to about 50 integer instructions, 400 lines of code result in roughly 20,000 instructions in the worst case. Executing 20,000 instructions within a 2.4-million-cycle budget leaves ample headroom, so the Atmel mC will have calculated and sent the spatial coordinates through the RS-232 output line before the next set of pixel coordinates arrives.
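The budget arithmetic, written out (all figures come from the estimates above):

    /* Cycle-budget check: 24 MHz clock, one coordinate set every
       1/10 s, ~400 lines of floating-point math at ~50 integer
       instructions per line (worst case). */
    #include <stdio.h>

    int main(void)
    {
        long cycles_per_set   = 24000000L / 10;  /* 2,400,000 cycles    */
        long worst_case_instr = 400L * 50;       /* 20,000 instructions */

        /* Even at several cycles per instruction, ~120x headroom. */
        printf("%ldx headroom\n", cycles_per_set / worst_case_instr);
        return 0;
    }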

Using these estimates, we have shown that the Atmel mC will complete its computation before the next set of pixel coordinates arrives. Thus we can be confident that we will process every valid set of pixel coordinates.