
# Bottleneck Guided Rapidly Exploring Random Tree Star

The aim is to address two problems: the lack of a 'one size fits all' heuristic across different motion planning problems, and the inherent 'narrow passage' problem of uniform-sampling-based planners.

The motion planner is implemented in C++, while the 3D CNN is implemented in Python using TensorFlow. The 3D CNN is a modified form of the VoxNet architecture.

Modifications:

- Multi-input, single-output CNN
  - First input: voxelized environment
  - Second input: start and end points selected by the user in RViz
  - Output: relevant bottleneck points
- No softmax non-linearity in the last layer, since ours is a prediction task rather than classification
- Nine units in the last dense layer, predicting three sets of (x, y, z) coordinates of relevant bottleneck points
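A minimal Keras sketch of such a multi-input, single-output network follows. The grid resolution, filter counts, and dense-layer sizes here are illustrative assumptions, not the exact VoxNet-derived architecture used in this work; only the two-input/one-output shape and the linear 9-unit head match the description above.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, Model

# Hypothetical shapes: a 32x32x32 occupancy grid and a 6-vector
# holding the start (x, y, z) and end (x, y, z) points.
voxels = layers.Input(shape=(32, 32, 32, 1), name="voxel_grid")
points = layers.Input(shape=(6,), name="start_end_points")

# Small VoxNet-style 3D convolutional trunk (sizes are assumptions).
x = layers.Conv3D(32, 5, strides=2, activation="relu")(voxels)
x = layers.Conv3D(32, 3, activation="relu")(x)
x = layers.MaxPool3D(2)(x)
x = layers.Flatten()(x)

# Fuse the voxel features with the user-selected start/end points.
x = layers.Concatenate()([x, points])
x = layers.Dense(128, activation="relu")(x)

# Nine linear outputs: three (x, y, z) bottleneck points, no softmax.
out = layers.Dense(9, activation=None, name="bottlenecks")(x)

model = Model(inputs=[voxels, points], outputs=out)
pred = model.predict([np.zeros((1, 32, 32, 32, 1)), np.zeros((1, 6))])
print(pred.shape)  # (1, 9)
```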

A video demo of this work is available at https://www.youtube.com/watch?v=28sW9mFheOY&feature=youtu.be

## Training

- Since data was limited for our prediction task, we used transfer learning from a VoxNet pre-trained on the Sydney Urban Objects Dataset.
- Our manually labelled dataset, in the `labelled_data` folder, consists of 200 samples collected with a 10% goal-biased RRT* on different hand-engineered environments. The inputs are the voxelized environments plus the start and end points (1D arrays of 3D points selected in RViz); the outputs are 1D arrays of three bottleneck points each.
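A 10% goal bias means the sampler returns the goal itself with probability 0.1 and otherwise draws a uniform random point from the workspace. A minimal sketch of that sampling rule (the workspace bounds and seed below are arbitrary assumptions for illustration):

```python
import random

def sample_point(goal, bounds, rng, goal_bias=0.1):
    """Return the goal with probability `goal_bias`, otherwise a
    uniform sample from the axis-aligned workspace `bounds`."""
    if rng.random() < goal_bias:
        return goal
    return tuple(rng.uniform(lo, hi) for lo, hi in bounds)

rng = random.Random(42)
bounds = [(-1.0, 1.0)] * 3        # hypothetical manipulator workspace
goal = (0.5, 0.5, 0.5)

samples = [sample_point(goal, bounds, rng) for _ in range(10000)]
frac = sum(s == goal for s in samples) / len(samples)
print(frac)  # close to 0.1
```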

## RRT* Implementation Details

- **FCL**: collision checking against the environment
- **Octomap**: data structure for the 3D environment
- **Mersenne Twister PRNG**: sampling random points in the workspace of the manipulator
- **OpenRAVE**: kinematic reachability module used to find the sampling domain of the planner
- **libNABO**: library for nearest-neighbour searches, used to find multiple parent candidates
- **H5Easy**: to convert the HDF5 file created by OpenRAVE using C++ vectors
- **moveit_cartesian_plan_plugin**: for taking user input for 3D start and end points in RViz
- **abb_driver**: ROS-Industrial ABB package for downloading joint-space trajectories to the IRC5 controller
- **iai_kinect2**: ROS package used to acquire point-cloud data from the Xbox One Kinect v2
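The nearest-neighbour search feeds the RRT* parent-selection step: among the k nearest tree nodes, the new sample is connected to the one that minimises cost-to-come plus edge length. A simplified Python sketch of that step (collision checks and the actual libNABO kd-tree are omitted; the brute-force search below is a stand-in):

```python
import math

def choose_parent(new_pt, nodes, costs, k=3):
    """Among the k nearest existing nodes, pick the parent index that
    minimises cost-to-come plus edge length (collision checks omitted)."""
    dist = lambda a, b: math.dist(a, b)
    # Brute-force kNN; the real planner uses a kd-tree (libNABO) here.
    nearest = sorted(range(len(nodes)), key=lambda i: dist(nodes[i], new_pt))[:k]
    return min(nearest, key=lambda i: costs[i] + dist(nodes[i], new_pt))

nodes = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 2.0, 0.0)]
costs = [0.0, 5.0, 2.0]           # cost-to-come of each tree node
parent = choose_parent((0.5, 0.5, 0.0), nodes, costs)
print(parent)  # 0: same edge length as node 1, but zero cost-to-come
```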

## Instructions

- The robot, the motion-planning markers, and the Octomap environment can be visualized in RViz, and `moveit_cartesian_plan_plugin` can be used to select the start and end points, by running:
  `roslaunch abb_1410_moveit_config moveit_planning_execution.launch`
- The main ROS package is called `abb_manipulator`, since all experiments were conducted on the ABB 1410 welding robot. All the packages listed under RRT* Implementation Details appear as dependencies in its `CMakeLists.txt` and must be compiled in your workspace.
- The bottleneck-guided RRT* and other RRT variants can be found in the `abb_manipulator/src` folder and can be executed with:
  `rosrun abb_manipulator rrt_star_milestones`

Note that the motion planner will not start searching until it has received the start and end points from RViz, the Octomap environment, and the milestone points from the 3D CNN.
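That wait-for-all-inputs behaviour can be sketched as a small readiness gate; the class and topic names below are hypothetical stand-ins for the actual ROS subscriber callbacks:

```python
class PlannerGate:
    """Collects the three required inputs and reports when planning
    may begin (a stand-in for the planner's ROS subscriber callbacks)."""

    REQUIRED = {"start_end", "octomap", "milestones"}

    def __init__(self):
        self.received = {}

    def on_message(self, topic, data):
        # Each callback stores its payload under the topic name.
        self.received[topic] = data

    def ready(self):
        # Planning starts only once every required input has arrived.
        return self.REQUIRED <= self.received.keys()

gate = PlannerGate()
gate.on_message("start_end", ((0, 0, 0), (1, 1, 1)))
print(gate.ready())   # False: octomap and milestones still missing
gate.on_message("octomap", {})
gate.on_message("milestones", [(0.5, 0.5, 0.5)])
print(gate.ready())   # True: search may begin
```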

- To enable data acquisition from the Kinect (this code was not written by us):
  `roslaunch kinect2_bridge kinect2_bridge.launch`
- Preprocessing of the point cloud and self-identification:
  `rosrun kinect2_viewer abb_sense`
- Voxelization of the processed point cloud, prediction of bottleneck points using our pre-trained multi-input, single-output CNN model, and publishing of these points to the motion planner:
  `rosrun abb_manipulator rrt_multiinput_prediction.py`
- Cartesian-space to joint-space conversion and execution of the generated trajectory on the IRC5 controller and ABB 1410 robot:
  `roslaunch abb_driver robot_interface.launch robot_ip:=ROBOT_IP`
  `rosrun abb_manipulator cart_to_joint`
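The voxelization step bins each point of the processed cloud into a fixed occupancy grid. A simplified sketch of that preprocessing (the grid size and workspace bounds are assumptions; the actual script may differ):

```python
import numpy as np

def voxelize(points, bounds_min, bounds_max, grid=32):
    """Bin a point cloud into a binary occupancy grid — a simplified
    stand-in for the preprocessing done before CNN prediction."""
    pts = np.asarray(points, dtype=float)
    # Scale workspace coordinates into grid-cell indices.
    scale = grid / (np.asarray(bounds_max) - np.asarray(bounds_min))
    idx = ((pts - np.asarray(bounds_min)) * scale).astype(int)
    idx = np.clip(idx, 0, grid - 1)   # clamp points on the far boundary
    occ = np.zeros((grid, grid, grid), dtype=np.uint8)
    occ[idx[:, 0], idx[:, 1], idx[:, 2]] = 1
    return occ

cloud = [(0.1, 0.1, 0.1), (0.9, 0.9, 0.9)]
occ = voxelize(cloud, (0.0, 0.0, 0.0), (1.0, 1.0, 1.0))
print(occ.sum())  # 2 occupied voxels
```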

## Comparison of traditional fixed-heuristic RRT planners

## Performance of Bottleneck Guided RRT*

## References