3D position of an object on a table

This 3D point recipe is perfect for special occasions such as demos or testing. The preparation is really easy and the result is tasty. This is the gourmet version of computing the 3D position of an object on a table using structured light sensors. You will be the best host when you receive friends at your table.

Ingredients

  • Structured light sensor (Asus Xtion, Kinect, …)
  • Dense point cloud software such as OpenNI
  • The PCL library

Get the dense point cloud from the driver software and then, using RANSAC, compute the most probable plane: that will be the table.
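
A minimal sketch of this step with PCL, close to the planar segmentation tutorial cited as source below; the cloud is assumed to already be filled by the sensor driver, and the 1 cm distance threshold is only an illustrative value:

```cpp
#include <pcl/point_types.h>
#include <pcl/ModelCoefficients.h>
#include <pcl/sample_consensus/method_types.h>
#include <pcl/sample_consensus/model_types.h>
#include <pcl/segmentation/sac_segmentation.h>

// Find the dominant plane (the table) in 'cloud' with RANSAC.
// Returns the indices of the inlier points; the plane equation goes into 'coefficients'.
pcl::PointIndices::Ptr findTablePlane(
    const pcl::PointCloud<pcl::PointXYZ>::ConstPtr& cloud,
    pcl::ModelCoefficients::Ptr& coefficients)
{
  pcl::PointIndices::Ptr inliers(new pcl::PointIndices);
  coefficients.reset(new pcl::ModelCoefficients);

  pcl::SACSegmentation<pcl::PointXYZ> seg;
  seg.setOptimizeCoefficients(true);       // refine the plane model on the inliers
  seg.setModelType(pcl::SACMODEL_PLANE);   // look for a plane
  seg.setMethodType(pcl::SAC_RANSAC);      // fit it with RANSAC
  seg.setDistanceThreshold(0.01);          // 1 cm tolerance to the plane (illustrative)
  seg.setInputCloud(cloud);
  seg.segment(*inliers, *coefficients);    // inliers = points lying on the table
  return inliers;
}
```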

Tip: You can even make the execution faster by discretizing the dense cloud into voxels.
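
One common way to do that discretization is PCL's voxel grid filter; here is a small sketch, with an illustrative 1 cm leaf size:

```cpp
#include <pcl/point_types.h>
#include <pcl/filters/voxel_grid.h>

// Downsample the dense cloud: keep roughly one point per voxel.
pcl::PointCloud<pcl::PointXYZ>::Ptr downsample(
    const pcl::PointCloud<pcl::PointXYZ>::ConstPtr& cloud)
{
  pcl::PointCloud<pcl::PointXYZ>::Ptr downsampled(new pcl::PointCloud<pcl::PointXYZ>);
  pcl::VoxelGrid<pcl::PointXYZ> voxel;
  voxel.setInputCloud(cloud);
  voxel.setLeafSize(0.01f, 0.01f, 0.01f);   // 1 cm voxels (illustrative)
  voxel.filter(*downsampled);
  return downsampled;
}
```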

Once you have the points that belong to the table, remove them from the point cloud. You can repeat this as many times as you want to remove all the planes in the scene.
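
One way to do that removal in PCL is ExtractIndices with the negative flag, reusing the inliers found by RANSAC; a sketch:

```cpp
#include <pcl/point_types.h>
#include <pcl/filters/extract_indices.h>

// Remove the plane inliers found by RANSAC, keeping everything else.
pcl::PointCloud<pcl::PointXYZ>::Ptr removePlane(
    const pcl::PointCloud<pcl::PointXYZ>::ConstPtr& cloud,
    const pcl::PointIndices::Ptr& plane_inliers)
{
  pcl::PointCloud<pcl::PointXYZ>::Ptr remaining(new pcl::PointCloud<pcl::PointXYZ>);
  pcl::ExtractIndices<pcl::PointXYZ> extract;
  extract.setInputCloud(cloud);
  extract.setIndices(plane_inliers);
  extract.setNegative(true);                // keep the points NOT on the plane
  extract.filter(*remaining);
  return remaining;
}
```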

Afterwards, segment the remaining points into independent clusters. Every connected set of points is a potential object.
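
Euclidean cluster extraction is one way to cook this step; the tolerance and cluster sizes below are illustrative and depend on your scene:

```cpp
#include <vector>
#include <pcl/point_types.h>
#include <pcl/search/kdtree.h>
#include <pcl/segmentation/extract_clusters.h>

// Split the remaining points into connected clusters; each one is a candidate object.
std::vector<pcl::PointIndices> clusterObjects(
    const pcl::PointCloud<pcl::PointXYZ>::ConstPtr& cloud)
{
  pcl::search::KdTree<pcl::PointXYZ>::Ptr tree(new pcl::search::KdTree<pcl::PointXYZ>);
  tree->setInputCloud(cloud);

  std::vector<pcl::PointIndices> clusters;
  pcl::EuclideanClusterExtraction<pcl::PointXYZ> ec;
  ec.setClusterTolerance(0.02);   // 2 cm gap allowed inside one object (illustrative)
  ec.setMinClusterSize(100);      // drop tiny blobs of noise
  ec.setMaxClusterSize(25000);
  ec.setSearchMethod(tree);
  ec.setInputCloud(cloud);
  ec.extract(clusters);           // one PointIndices per potential object
  return clusters;
}
```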

By computing the centroid of each cluster we get its 3D position in space, expressed in the camera reference frame.
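
A small sketch of that computation, applied to one of the clusters from the previous step:

```cpp
#include <pcl/point_types.h>
#include <pcl/common/centroid.h>

// 3D position (centroid) of one cluster, in the camera reference frame.
Eigen::Vector4f objectPosition(const pcl::PointCloud<pcl::PointXYZ>& cloud,
                               const pcl::PointIndices& cluster)
{
  Eigen::Vector4f centroid;
  pcl::compute3DCentroid(cloud, cluster, centroid);   // (x, y, z, 1)
  return centroid;
}
```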

The only thing missing is to transform the centroid of each object into the world reference frame.
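
A sketch of that last transformation; camera_to_world is a placeholder here, and in a real setup it comes from your extrinsic calibration or the robot's forward kinematics (e.g. via tf if you use ROS):

```cpp
#include <Eigen/Geometry>

// Bring a centroid from the camera frame into the world frame.
Eigen::Vector3f toWorldFrame(const Eigen::Vector4f& centroid_in_camera,
                             const Eigen::Affine3f& camera_to_world)
{
  Eigen::Vector3f point_in_camera = centroid_in_camera.head<3>();
  return camera_to_world * point_in_camera;   // rotation + translation applied
}
```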

Et voilà, everything is ready; now any robot like me can reach the object.

by Orbhe

src: http://pointclouds.org/documentation/tutorials/planar_segmentation.php
