CSE 477 -- Video Imaged Spatial Positioning Project

Implementation Plans

To build our system, we have identified four distinct subsystems that will need to be created. The subsystems are as follows:
  1. Image Capturing
  2. Image Processing
  3. Data Correlation
  4. Presenting the Data

Image Capturing

In our project, we want to generate the three-dimensional coordinate of a user-defined point in space in relation to our cameras. To achieve this, we plan to use two color cameras that produce high-resolution digital images with more than 256 color shades. The cameras will be mounted adjacent to each other on a movable platform, with a fixed distance between them. In addition, we will provide the user with a special colored marker that is distinguishable from the surrounding environment.

For the image capture to succeed, the user places the marker at the location in the environment where a spatial coordinate is wanted. Each camera then takes a digital image of the environment and streams the pixel representation of the image to an attached microprocessor for processing.
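
As a rough illustration (not a committed design), the sketch below shows one way the microprocessor could consume the incoming pixel stream: it scans each pixel as it arrives and accumulates the centroid of pixels whose color is close to the marker's. The read_pixel() routine, the image dimensions, the marker color, and the tolerance are all assumptions made for the sake of the example; the real interface and color format will depend on the cameras we select.

    #include <stdint.h>
    #include <stdlib.h>

    /* Hypothetical camera interface: returns the next pixel of the
     * streamed frame, in row-major order, as 8-bit R, G, B values. */
    extern void read_pixel(uint8_t *r, uint8_t *g, uint8_t *b);

    #define IMG_WIDTH   640
    #define IMG_HEIGHT  480

    /* Assumed marker color (bright red) and matching tolerance; both
     * would be calibrated against the actual marker. */
    #define MARKER_R   255
    #define MARKER_G     0
    #define MARKER_B     0
    #define TOLERANCE   40

    /* Returns 1 if the pixel color is within TOLERANCE of the marker color. */
    static int is_marker_pixel(uint8_t r, uint8_t g, uint8_t b)
    {
        return abs((int)r - MARKER_R) < TOLERANCE &&
               abs((int)g - MARKER_G) < TOLERANCE &&
               abs((int)b - MARKER_B) < TOLERANCE;
    }

    /* Scan one streamed frame and accumulate the centroid of all
     * marker-colored pixels.  Returns 1 if the marker was found. */
    int find_marker(int *marker_x, int *marker_y)
    {
        long sum_x = 0, sum_y = 0, count = 0;

        for (int y = 0; y < IMG_HEIGHT; y++) {
            for (int x = 0; x < IMG_WIDTH; x++) {
                uint8_t r, g, b;
                read_pixel(&r, &g, &b);
                if (is_marker_pixel(r, g, b)) {
                    sum_x += x;
                    sum_y += y;
                    count++;
                }
            }
        }
        if (count == 0)
            return 0;            /* marker not visible in this frame */
        *marker_x = (int)(sum_x / count);
        *marker_y = (int)(sum_y / count);
        return 1;
    }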


Image Processing

For image processing, our goal is to generate two angles for each camera: a horizontal angle alpha and a vertical angle beta. These angles are shown in Figure 1.


Figure 1: Overhead view of the angles

Obtaining the two angles is done as follows. Each camera is connected to a microprocessor. As a camera streams its digital image to its microprocessor, the microprocessor attempts to find the pixels that correspond to the marker. Once it locates those pixels, the microprocessor references them as an x and y distance from the center of the image. (The pixel that corresponds to the center of the image is known in advance, since the camera produces images of a fixed pixel width and height.) Then, from the pixel's position relative to the center, each microprocessor determines the angles for its image and feeds these angles to the Data Correlation subsystem.
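
To make the pixel-to-angle step concrete, the following sketch converts the marker's pixel position into alpha and beta using the standard pinhole relation, in which the angle is the arctangent of the pixel offset from the image center divided by the focal length expressed in pixels. The image size and fields of view used here are placeholders; the real values will come from calibrating the cameras we choose.

    #include <math.h>

    #ifndef M_PI
    #define M_PI 3.14159265358979323846
    #endif

    #define IMG_WIDTH   640          /* image size in pixels (assumed)     */
    #define IMG_HEIGHT  480
    #define HFOV_DEG    45.0         /* horizontal field of view (assumed) */
    #define VFOV_DEG    34.0         /* vertical field of view (assumed)   */

    /* Focal length in pixel units, derived from the field of view:
     *   f = (size / 2) / tan(FOV / 2)                                     */
    static double focal_px(double size_px, double fov_deg)
    {
        return (size_px / 2.0) / tan(fov_deg * M_PI / 360.0);
    }

    /* Convert the marker's pixel position into the horizontal angle alpha
     * and vertical angle beta, measured from the camera's optical axis.   */
    void pixel_to_angles(int marker_x, int marker_y,
                         double *alpha_deg, double *beta_deg)
    {
        double dx = marker_x - IMG_WIDTH / 2.0;    /* offset from center     */
        double dy = IMG_HEIGHT / 2.0 - marker_y;   /* image y grows downward */

        *alpha_deg = atan(dx / focal_px(IMG_WIDTH,  HFOV_DEG)) * 180.0 / M_PI;
        *beta_deg  = atan(dy / focal_px(IMG_HEIGHT, VFOV_DEG)) * 180.0 / M_PI;
    }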


Data Correlation

For data correlation, we determine the three-dimensional coordinate of the marker in relation to the two cameras using the four angles obtained from the Image Processing subsystem. To correlate the data, we plan to use an XSV-300 circuit board with a Field Programmable Gate Array (FPGA) chip. The FPGA will be programmed with the necessary logic to handle the calculations.
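
The calculation the FPGA will perform is ordinary triangulation. As a minimal sketch, assume the two cameras have parallel optical axes and are separated by a known baseline B along the horizontal axis, with the coordinate frame centered on the left camera. The C fragment below shows the arithmetic; the eventual implementation would be expressed in the FPGA's logic rather than in C, and the baseline value is a placeholder.

    #include <math.h>

    #ifndef M_PI
    #define M_PI 3.14159265358979323846
    #endif

    #define BASELINE_CM  30.0   /* assumed fixed distance between the cameras */

    /* Triangulate the marker position from the two horizontal angles and the
     * left camera's vertical angle, all in degrees.  The frame is centered on
     * the left camera: x to the right, y up, z straight out toward the scene.
     *
     *   left camera:   x     = z * tan(alpha_left)
     *   right camera:  x - B = z * tan(alpha_right)
     *   =>             z     = B / (tan(alpha_left) - tan(alpha_right))      */
    int triangulate(double alpha_left_deg, double alpha_right_deg,
                    double beta_deg, double *x, double *y, double *z)
    {
        double al = alpha_left_deg  * M_PI / 180.0;
        double ar = alpha_right_deg * M_PI / 180.0;
        double b  = beta_deg        * M_PI / 180.0;
        double denom = tan(al) - tan(ar);

        if (fabs(denom) < 1e-9)
            return 0;           /* rays are parallel: no valid intersection */

        *z = BASELINE_CM / denom;
        *x = *z * tan(al);
        *y = *z * tan(b);       /* simplification: beta taken in the y-z plane */
        return 1;
    }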


Presenting the Data

After we have calculated the three-dimensional coordinate, we want to present it to the user of our system. To achieve this, we will transmit the coordinate over an RF link to a PDA carried by the user. The PDA will then display the coordinate in an application preloaded onto it.
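
To give a feel for what the RF link needs to carry, the sketch below packs one coordinate into a small, hypothetical packet: a start byte, the x, y, and z values as signed 16-bit millimeters, and a simple XOR checksum. The actual packet format will be dictated by the RF transceiver and the PDA application we end up using.

    #include <stdint.h>
    #include <stddef.h>

    #define PKT_START  0xA5
    #define PKT_SIZE   8   /* start byte + 3 * 2 coordinate bytes + checksum */

    /* Pack a coordinate (given in centimeters) into an 8-byte packet:
     * start byte, then x, y, z as little-endian signed 16-bit millimeters,
     * then an XOR checksum of the first seven bytes. */
    void pack_coordinate(double x_cm, double y_cm, double z_cm,
                         uint8_t buf[PKT_SIZE])
    {
        int16_t coords_mm[3] = {
            (int16_t)(x_cm * 10.0),
            (int16_t)(y_cm * 10.0),
            (int16_t)(z_cm * 10.0),
        };

        buf[0] = PKT_START;
        for (size_t i = 0; i < 3; i++) {
            buf[1 + 2 * i] = (uint8_t)(coords_mm[i] & 0xFF);          /* low byte  */
            buf[2 + 2 * i] = (uint8_t)((coords_mm[i] >> 8) & 0xFF);   /* high byte */
        }

        uint8_t sum = 0;
        for (size_t i = 0; i < PKT_SIZE - 1; i++)
            sum ^= buf[i];
        buf[PKT_SIZE - 1] = sum;
    }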