Working with perception using MoveIt! and Gazebo
Until now, we have worked in MoveIt! with the arm alone. In this section, we will see how to interface 3D vision-sensor data with MoveIt!. The sensor can either be simulated in Gazebo, or you can directly interface a Red-Green-Blue-Depth (RGB-D) sensor, such as the Kinect or Intel RealSense, using the openni_launch package. Here, we will work with the Gazebo simulation. We will feed the sensor data into MoveIt! so that it can build a map of the environment surrounding the robot. The following command launches the robot arm and the Asus Xtion Pro simulation in Gazebo, in a world containing obstacles:
roslaunch seven_dof_arm_gazebo seven_dof_arm_obstacle_world.launch
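MoveIt! consumes the sensor's point cloud through its occupancy map monitor, which is configured in a YAML file loaded by the MoveIt! sensor manager (commonly named sensors_3d.yaml and kept in the package's config folder). The following is a minimal sketch of such a configuration; the point cloud topic name and the numeric values here are illustrative assumptions, not values taken from this package:

sensors:
  - sensor_plugin: occupancy_map_monitor/PointCloudOctomapUpdater
    point_cloud_topic: /rgbd_camera/depth/points   # assumed topic from the simulated Xtion
    max_range: 5.0           # ignore points farther than 5 m from the sensor
    point_subsample: 1       # process every point; raise this to subsample the cloud
    padding_offset: 0.1      # extra padding (m) when filtering out the robot's own body
    padding_scale: 1.0       # scale applied to robot links for self-filtering
    filtered_cloud_topic: filtered_cloud   # debug topic with the robot points removed

When this file is loaded, together with the octomap_frame and octomap_resolution parameters, MoveIt! builds an octomap of the surroundings and treats the occupied voxels as obstacles during motion planning.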
This command starts the Gazebo scene with the arm's joint controllers and the Gazebo plugin for the 3D vision sensor. We can add a grasp table and grasp objects to the simulation, as shown in the following screenshot, simply by clicking and dragging them into the workspace. We can create any kind of table...
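For reference, the 3D vision sensor in the simulation is typically attached to the robot model through the gazebo_ros_openni_kinect plugin, declared inside a <gazebo> tag in the URDF/Xacro description. The sketch below shows the general shape of such a declaration; the link, camera, and frame names are illustrative assumptions rather than the exact identifiers used in this package:

<gazebo reference="rgbd_camera_link">  <!-- assumed name of the sensor link -->
  <sensor type="depth" name="rgbd_camera">
    <update_rate>20.0</update_rate>
    <camera>
      <horizontal_fov>1.047</horizontal_fov>  <!-- ~60 degrees, typical for the Xtion -->
      <image>
        <width>640</width>
        <height>480</height>
        <format>R8G8B8</format>
      </image>
      <clip>
        <near>0.05</near>
        <far>8.0</far>
      </clip>
    </camera>
    <plugin name="rgbd_camera_controller" filename="libgazebo_ros_openni_kinect.so">
      <cameraName>rgbd_camera</cameraName>
      <imageTopicName>rgb/image_raw</imageTopicName>
      <depthImageTopicName>depth/image_raw</depthImageTopicName>
      <pointCloudTopicName>depth/points</pointCloudTopicName>
      <frameName>rgbd_camera_optical_frame</frameName>  <!-- assumed optical frame -->
      <pointCloudCutoff>0.4</pointCloudCutoff>  <!-- discard points closer than 0.4 m -->
    </plugin>
  </sensor>
</gazebo>

The point cloud published on the resulting depth/points topic is what the MoveIt! occupancy map monitor subscribes to in the configuration sketched earlier.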