Commit 2a7b927: MoveIt deep grasping tutorial (#521)

Tutorial showing how to use GPD and Dex-Net within the MoveIt Task Constructor.

Lines changed: 125 additions & 0 deletions
@@ -0,0 +1,125 @@

MoveIt Deep Grasps
==================

This tutorial demonstrates how to use `Grasp Pose Detection (GPD) <https://github.com/atenpas/gpd>`_ and
`Dex-Net <https://berkeleyautomation.github.io/dex-net/>`_ within the MoveIt Task Constructor.

GPD (left) and Dex-Net (right) were used to generate the grasp pose to pick up the cylinder.

|gif1| |gif2|

.. |gif1| image:: mtc_gpd_panda.gif
    :width: 250pt

.. |gif2| image:: mtc_gqcnn_panda.gif
    :width: 250pt

Getting Started
---------------
If you haven't already done so, make sure you've completed the steps in `Getting Started <../getting_started/getting_started.html>`_.
It is also worthwhile to complete the steps in `MoveIt Task Constructor <../moveit_task_constructor/moveit_task_constructor_tutorial.html>`_.

The demos require additional dependencies, so the deep grasping packages are located in their own repository. Please see
`Deep Grasp Demo <https://github.com/PickNikRobotics/deep_grasp_demo>`_, which contains detailed instructions for
installation, running the demos, simulating depth sensors, and tips for improving performance.

The demos allow you to visualize the results in RViz and, if desired, use Gazebo.


Conceptual Overview
-------------------
The MoveIt Task Constructor contains a ``DeepGraspPose`` generator stage. This stage does not directly contain
the implementation of either GPD or Dex-Net; instead, it communicates with them through ROS action messages.
The ``DeepGraspPose`` stage contains an action client that communicates with an action server, implemented in
both the ``moveit_task_constructor_gpd`` and ``moveit_task_constructor_dexnet`` packages. The action server sends the grasp
candidates, along with their associated costs, back to the action client as feedback.

The relevant fields of the message can be seen in ``moveit_task_constructor_msgs/action/SampleGraspPoses.action``.
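
To make this contract concrete, the sketch below shows how a grasp sampling action server might publish candidates back to the client. This is a minimal illustration, not the packages' actual code; the feedback field names ``grasp_candidates`` and ``costs`` are assumed from the description above, so check the ``.action`` file for the authoritative definition.

.. code-block:: c++

    #include <actionlib/server/simple_action_server.h>
    #include <geometry_msgs/PoseStamped.h>
    #include <moveit_task_constructor_msgs/SampleGraspPosesAction.h>

    using Server = actionlib::SimpleActionServer<moveit_task_constructor_msgs::SampleGraspPosesAction>;

    // Minimal sketch of an execute callback: sample grasps, then return the
    // candidates and their costs to the DeepGraspPose stage as action feedback.
    void executeCallback(const moveit_task_constructor_msgs::SampleGraspPosesGoalConstPtr& /*goal*/, Server* server)
    {
      moveit_task_constructor_msgs::SampleGraspPosesFeedback feedback;  // field names assumed, see above

      geometry_msgs::PoseStamped grasp;
      grasp.header.frame_id = "object";  // hypothetical frame, for illustration only
      feedback.grasp_candidates.push_back(grasp);
      feedback.costs.push_back(0.1);     // cost associated with this candidate

      server->publishFeedback(feedback);  // the stage's action client receives this
      server->setSucceeded();
    }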

Using the ``DeepGraspPose`` stage is easy. Add the stage below to the current task; the full implementation can be seen in `Deep Grasp Task <https://github.com/PickNikRobotics/deep_grasp_demo/blob/master/deep_grasp_task/src/deep_pick_place_task.cpp#L207>`_.

.. code-block:: c++

    auto stage = std::make_unique<stages::DeepGraspPose<moveit_task_constructor_msgs::SampleGraspPosesAction>>(
        action_name, "generate grasp pose");

The template parameter is the action message type. Specify the ``action_name``, which is the namespace for communication between
the server and the client. Optionally, timeouts for grasp sampling and for the server connection can be supplied; by default both are
unlimited.
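
For example, the timeouts can be passed explicitly. This is a sketch assuming they are the third and fourth constructor arguments, given in seconds; check the stage header for the exact signature.

.. code-block:: c++

    // Assumed signature: timeouts in seconds, 0 meaning unlimited.
    auto stage = std::make_unique<stages::DeepGraspPose<moveit_task_constructor_msgs::SampleGraspPosesAction>>(
        action_name, "generate grasp pose", /*goal_timeout=*/5.0, /*server_timeout=*/10.0);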

Grasp Pose Detection
--------------------
GPD samples grasp candidates from a point cloud and uses a CNN to classify whether each candidate will be successful. In the demo, the table plane is automatically segmented from the point cloud. This is
useful because GPD will otherwise sample grasp candidates around that plane.

The ``workspace`` and ``num_samples`` parameters in `gpd_config.yaml <https://github.com/PickNikRobotics/deep_grasp_demo/blob/master/moveit_task_constructor_gpd/config/gpd_config.yaml>`_ can be tuned to improve performance.
The first specifies the volume of a cube, centered at the origin of the point cloud frame, in which to search for grasp candidates. The second
specifies the number of samples taken from the cloud to detect grasp candidates.
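
For reference, the two parameters look roughly like this in the configuration file (illustrative values, assuming GPD's six-value min/max-per-axis workspace layout, not the repository defaults): ::

    workspace: [-0.5, 0.5, -0.5, 0.5, -0.5, 0.5]  # search cube: [min_x, max_x, min_y, max_y, min_z, max_z]
    num_samples: 500                              # number of point cloud samples for candidate detection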

Dex-Net
-------
64+
Dex-Net will sample grasp candidates from images. A color and depth image must be supplied. Dex-Net uses a grasp quality
65+
convolutional neural network (GQ-CNN) to predict the probability a grasp candidate will be successful. The GQ-CNN was trained
66+
on images using a downward facing camera. Therefore, the network is sensitive to the camera view point and will perform best
67+
when the camera is facing down.
68+
69+
Set the ``deterministic`` parameter to 0 in `dex-net_4.0_pj.yaml <https://github.com/BerkeleyAutomation/gqcnn/blob/master/cfg/examples/replication/dex-net_4.0_pj.yaml#L11>`_ for nondeterministic grasp sampling.
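
That is, in the configuration file (shown in isolation; the surrounding keys are omitted): ::

    deterministic: 0  # 0 enables stochastic grasp sampling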

Running the Demos
-----------------
The point cloud and images for the demo are provided, but you can optionally
use sensor data from a simulated depth camera in Gazebo.

Because of the sensitivity to the camera viewpoint, it is recommended to use the provided images of the cylinder for the Dex-Net demo.

The `Camera View Point <https://github.com/PickNikRobotics/deep_grasp_demo#Camera-View-Point>`_ section shows
how to move the camera to different positions; depending on the object, this can improve performance.

The `Depth Sensor Data <https://github.com/PickNikRobotics/deep_grasp_demo#Depth-Sensor-Data>`_ section shows
how to collect data using the simulated depth camera.

Fake Controllers
^^^^^^^^^^^^^^^^

First, launch the basic environment: ::

    roslaunch moveit_task_constructor_demo demo.launch

Next, launch either the GPD or Dex-Net demo: ::

    roslaunch moveit_task_constructor_gpd gpd_demo.launch
    roslaunch moveit_task_constructor_dexnet dexnet_demo.launch

The results should appear similar to the two animations at the top of the tutorial.

Gazebo
^^^^^^
101+
Make sure you complete the `deep grasp demo install guide <https://github.com/PickNikRobotics/deep_grasp_demo#Install>`_ for Gazebo support.
102+
103+
The `load_cloud` argument in `gpd_demo.launch` and the `load_images` argument in `dexnet_demo.launch` specifies
104+
whether or not to load the sensor data from a file. Set either one of these arguments to false to use the simulated depth camera.
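
For example, using the standard roslaunch argument syntax: ::

    roslaunch moveit_task_constructor_gpd gpd_demo.launch load_cloud:=false
    roslaunch moveit_task_constructor_dexnet dexnet_demo.launch load_images:=false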

First, launch the Gazebo environment: ::

    roslaunch deep_grasp_task gazebo_pick_place.launch

Next, launch either the GPD or Dex-Net demo: ::

    roslaunch moveit_task_constructor_gpd gpd_demo.launch
    roslaunch moveit_task_constructor_dexnet dexnet_demo.launch

The animations below demonstrate the capabilities of Dex-Net for grasp pose generation using the simulated depth camera in Gazebo.
You may notice that GPD can successfully pick up the cylinder; however, the algorithm will struggle with more complicated objects
such as the bar clamp (seen on the right). Experiment with the ``workspace`` and ``num_samples`` parameters to see if you can generate a successful grasp using GPD.

|gif3| |gif4|

.. |gif3| image:: gqcnn_cylinder_gazebo.gif
    :width: 250pt

.. |gif4| image:: gqcnn_barclamp_gazebo.gif
    :width: 250pt

index.rst

Lines changed: 1 addition & 0 deletions
@@ -47,6 +47,7 @@ Building more complex applications with MoveIt often requires developers to dig
    doc/pick_place/pick_place_tutorial
    doc/moveit_grasps/moveit_grasps_tutorial
    doc/moveit_task_constructor/moveit_task_constructor_tutorial
+   doc/moveit_deep_grasps/moveit_deep_grasps_tutorial
    doc/subframes/subframes_tutorial
    doc/moveit_cpp/moveitcpp_tutorial
