CSE 571 - Project

Project Teams

Below is a list of the project teams and their members. Each team's blog should have the project name, a list of team members, a short paragraph explaining the project, a specific goal for the midterm evaluation (11/20), and regular updates on progress.

Team Name | Member 1 | Member 2 | Member 3 | Project/Blog
The Lone Man | Justin Huang | - | - | Robot localization based on visual place landmarks
Binary Soul | Vivek Paramasivam | Leah Perlmutter | - | Bouncing a balloon using the PR2
Wired | Rajalakshmi Nandakumar | Donny Huang | - | Fitgigabit - a wearable device to track body pose
Wigwam | Patrick Lancaster | James Youngquist | - | Balancing a ball on a Stewart platform with a PR2 using RL
Big Feet | Ryan Drapeau | Aaron Nech | Sonja Khan | DQN on a Mario/NES simulator
SEAL Team Stats | Alec Zimmer | Annelise Wagner | - | RGB-D mapping using a Kinect
TA | Tianyi Zhou | Angli Liu | - | NN model for predicting relative motion between two similar RGB-D frames
Targetsss | Tzu-Lin Ong | Dinesh Ravichandran | - | GP model for predicting object pose from images & control
Meep | Kendall Lowrey | Visak Kumar | - | DART + MuJoCo integration for object tracking
NERF | Aaron Walsman | - | - | Deformable hand model tracking
PACT | Jonathan Lee | William Howerton | Josue Calderon | Tracking a fast bouncing ball with RGB-D cameras and Kalman filters
Last On The List | Saghar Hosseini | Eric Schoof | Atiye Alaeddini | Multi-robot localization on mobile robots with noisy IR sensors using Kalman filters

Project Ideas

We would like people to work in teams of two (possibly three). If your project is closely related to your research, you may also do a single-person project. Teams must meet with Arun and Dieter before finalizing their project ideas.

Below are some ideas. None of them is precisely defined, but they should help stimulate thinking about robotics projects in a variety of areas:

Tracking / Filtering:

  • Use a depth/color camera to detect and track a fast-moving object, such as a ball bouncing off the ground or a wall. This would be a good test for a combination of particle and Kalman filters (see the sketch after this list).
  • Localization with depth camera against a given floor plan: We have existing laser maps / floor plans of the Allen Center. Implementing localization with a depth camera (with a particle filter, say) in such a map would be interesting.
  • Localize a color camera in a 3D map built with a depth camera. Here, the map provides 3D and color information that can be used to track the position of a color-only camera (Kinect Fusion, RGBD-SLAM).
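
A minimal sketch of the Kalman filter half of the tracking idea, assuming a constant-velocity motion model, a 30 Hz camera, and illustrative (untuned) noise values. A bouncing ball violates the constant-velocity assumption at impact, which is exactly where a particle filter or a switching model becomes useful.

```python
# Minimal constant-velocity Kalman filter for tracking a ball in 2D.
# State x = [px, py, vx, vy]; measurements are noisy positions from a
# depth/color camera. All matrices and noise values here are illustrative.
import numpy as np

dt = 1.0 / 30.0  # assumed camera frame rate

F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1,  0],
              [0, 0, 0,  1]], dtype=float)  # constant-velocity model
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)   # we only observe position
Q = 1e-3 * np.eye(4)   # process noise (tune for your setup)
R = 1e-2 * np.eye(2)   # measurement noise (tune for your camera)

def kf_step(x, P, z):
    """One predict/update cycle given a position measurement z = [px, py]."""
    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Update
    y = z - H @ x                      # innovation
    S = H @ P @ H.T + R                # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)     # Kalman gain
    x = x + K @ y
    P = (np.eye(4) - K @ H) @ P
    return x, P

# Toy usage with synthetic measurements
x = np.array([0.0, 1.0, 0.5, 0.0])
P = np.eye(4)
for t in range(5):
    z = x[:2] + 0.1 * np.random.randn(2)  # fake noisy detection
    x, P = kf_step(x, P, z)
print(x)
```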

Mapping, Exploration:

  • Create an exploration strategy for finding new views of unseen areas of a 3D map (see the frontier sketch below). Investigate reconstruction and rendering techniques for such 3D maps. Given an existing 3D map, figure out how to do probabilistic localization and filtering.
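
Frontier-based exploration is one common strategy for this. The sketch below operates on a 2D occupancy grid as a simplified stand-in for a 3D map; the cell encoding (0 = free, 1 = occupied, -1 = unknown) is an assumption for illustration.

```python
# Frontier detection on a 2D occupancy grid: a frontier is a free cell
# adjacent to an unknown cell, and driving toward the nearest frontier
# is a simple strategy for uncovering unseen parts of the map.
import numpy as np

def find_frontiers(grid):
    frontiers = []
    rows, cols = grid.shape
    for r in range(rows):
        for c in range(cols):
            if grid[r, c] != 0:
                continue  # only free cells can be frontiers
            # Check 4-connected neighbors for unknown space.
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if 0 <= nr < rows and 0 <= nc < cols and grid[nr, nc] == -1:
                    frontiers.append((r, c))
                    break
    return frontiers

grid = np.array([[0, 0, -1],
                 [0, 1, -1],
                 [0, 0,  0]])
print(find_frontiers(grid))  # free cells bordering unknown space
```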

Human-Robot Interaction:

  • There are many ways to explore human-robot interaction. Implement person-following on the TurtleBot. Using either the Kinect on the TurtleBot, or a second Kinect with a view of the person, enable a person to point at objects or locations that the robot should then move towards. This involves perception and planning.
  • Implement visual servoing (Video) on a manipulator (Baxter or in simulation). Similar to person-following, the robot could track the person's hand and follow it with its end-effector as the person moves. A simpler task is to have a colored object moving in the scene that the robot follows (see the sketch after this list). This involves perception and some minor planning.
  • Implement a simple human-robot handover system (Paper) on a manipulator. The robot has to track the person, potentially predict where the person will move to and plan accordingly to receive the object from the person's hand while ensuring that there are no collisions and the motion is smooth. This involves perception and planning.
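
For the visual servoing idea, the simplest controller is proportional feedback on the pixel error of a tracked target. This sketch assumes a 640x480 image and a hypothetical tracker output; a real system would pair it with, say, OpenCV color segmentation and the robot's velocity interface, and a full treatment would use the image Jacobian.

```python
# Image-based visual servoing in its simplest proportional form: command
# an end-effector velocity proportional to the pixel error between the
# tracked target (e.g. a colored object or a hand) and the image center.
import numpy as np

GAIN = 0.002                              # pixels -> m/s; tune on the robot
IMAGE_CENTER = np.array([320.0, 240.0])   # assumed 640x480 camera

def servo_command(target_px):
    """Return an (x, y) end-effector velocity from a target pixel location."""
    error = target_px - IMAGE_CENTER
    # Negative feedback: move so the target drifts toward the image center.
    # This drops depth and rotation terms of the image Jacobian, which is
    # often adequate for slow hand-following.
    return -GAIN * error

print(servo_command(np.array([400.0, 200.0])))  # e.g. [-0.16  0.08]
```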

Manipulation:

  • Implement a simple pick and place system on a manipulator. The robot can pick up an object from a table and place it at a different location. This involves some minor perception for object detection, grasp planning and motion planning.
  • Create a benchmark of motion planning algorithms on the manipulator (mainly in simulation). There are many newer motion planning algorithms (BIT*, RRT*, TrajOpt, CHOMP, etc.) that can be implemented, and multiple simulation and planning frameworks (OpenRAVE, OMPL) that can be used. A minimal RRT sketch follows this list.
  • Implement a fast local collision avoidance system (in simulation) (Paper) that reacts to moving objects and people in the scene. There are multiple methods to speed up the computations (including GPU based techniques).
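
As a baseline for the planning benchmark, here is a minimal 2D RRT. The point robot, workspace bounds, and disk obstacle are illustrative assumptions; a manipulator version would sample in joint space and check collisions against the environment (e.g. through OpenRAVE).

```python
# A minimal 2D RRT: repeatedly sample the space, extend the nearest tree
# node one step toward the sample, and stop when the tree reaches the goal.
import math, random

def collision_free(p):
    # Hypothetical obstacle: a disk of radius 1 at (5, 5).
    return math.hypot(p[0] - 5, p[1] - 5) > 1.0

def rrt(start, goal, iters=2000, step=0.5, goal_tol=0.5):
    nodes = [start]
    parent = {0: None}
    for _ in range(iters):
        sample = (random.uniform(0, 10), random.uniform(0, 10))
        # Extend the nearest node one step toward the sample.
        i = min(range(len(nodes)), key=lambda k: math.dist(nodes[k], sample))
        near = nodes[i]
        d = math.dist(near, sample)
        if d == 0:
            continue
        new = (near[0] + step * (sample[0] - near[0]) / d,
               near[1] + step * (sample[1] - near[1]) / d)
        if not collision_free(new):
            continue
        parent[len(nodes)] = i
        nodes.append(new)
        if math.dist(new, goal) < goal_tol:
            # Walk back up the tree to recover the path.
            path, k = [], len(nodes) - 1
            while k is not None:
                path.append(nodes[k])
                k = parent[k]
            return path[::-1]
    return None  # no path found within the iteration budget

print(rrt((0.0, 0.0), (9.0, 9.0)))
```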

Reinforcement Learning:

  • There has been a lot of new work on reinforcement learning, applied to Atari games (DQN) and some robot tasks (GPS). Implement the Deep Q-Network (DQN) for a simple reinforcement learning task (a simple task/implementation can be found here). You could also look at some of the Atari games or other similar game-type worlds. A sketch of the underlying Q-learning update follows this list.
  • PILCO is a recent reinforcement learning algorithm that has been successful on many robotic tasks. Implement it, or a similar algorithm, on a simple cartpole, double pendulum, or other robotic task. You may be able to reuse the GP dynamics model from homework 1.
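
To make the DQN idea concrete, the sketch below shows the Q-learning update at its core, in tabular form on a toy chain world (the environment and hyperparameters are made up for illustration). DQN keeps this same update but replaces the table with a neural network and adds experience replay and a target network.

```python
# Tabular Q-learning on a tiny chain world: move left/right along five
# states, reward 1 for reaching the right end.
import random

N_STATES, ACTIONS = 5, (0, 1)   # toy 5-state chain; actions: left/right
ALPHA, GAMMA, EPS = 0.1, 0.95, 0.1
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(s, a):
    """Toy dynamics: reward 1 for reaching the right end, else 0."""
    s2 = max(0, min(N_STATES - 1, s + (1 if a == 1 else -1)))
    return s2, 1.0 if s2 == N_STATES - 1 else 0.0

for episode in range(200):
    s = 0
    for t in range(20):
        # Epsilon-greedy action selection.
        a = random.choice(ACTIONS) if random.random() < EPS \
            else max(ACTIONS, key=lambda b: Q[(s, b)])
        s2, r = step(s, a)
        # Q-learning: bootstrap from the best action in the next state.
        target = r + GAMMA * max(Q[(s2, b)] for b in ACTIONS)
        Q[(s, a)] += ALPHA * (target - Q[(s, a)])
        s = s2

print(max(ACTIONS, key=lambda b: Q[(0, b)]))  # should learn to go right (1)
```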

Robots/Hardware:

We have two TurtleBots and one larger mobile platform (DUB-E). We also have access to a Baxter robot for manipulation-related projects. For perception-related work, we have multiple Kinects and other depth/RGB cameras.