Mekhatronika, Avtomatizatsiya, Upravlenie

Comparative Evaluation of Approaches for Determination of Grasp Points on Objects, Manipulated by Robotic Systems

https://doi.org/10.17587/mau.22.83-93

Abstract

This paper presents a comparative evaluation of recent methods for determining grasp points on objects manipulated in a scene, where the process is aided by computer vision. The grasping methods considered here are combined with depth-map reconstruction based on the ResNet-50 neural network model, which made it possible to dispense with dedicated depth sensors during the experiments. The study examines how the probability of a successful grasp depends on the objects being manipulated: the success probabilities, averaged over different object types, were 0.690, 0.741, and 0.613 for GPD, 6-DOF GraspNet, and VPG, respectively. The paper also examines how the probability of a successful grasp depends on object size, on the distance between the capturing camera and the target objects in the scene, on the illumination level, and on the angle of scene inspection along the vertical axis. For the considered methods (GPD, 6-DOF GraspNet, VPG), the type-averaged probability of a successful grasp was found to increase non-linearly with the illumination level of the scene, whereas its dependence on all the other parameters proved non-linear and non-monotonic. The paper identifies the ranges of the considered scene parameters that yield the highest grasp success probabilities for these approaches. According to the experimental evaluation, 6-DOF GraspNet performed best for the vast majority of the considered scene parameters and is therefore the preferable solution to the grasp point problem among methods that reconstruct depth maps without dedicated equipment.
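The type-averaged success probabilities quoted above can be illustrated with a minimal sketch: for each method, the empirical success rate is computed per object type and then averaged across types. The object names and trial counts below are hypothetical placeholders, not the paper's data.

```python
# Sketch of type-averaged grasp success probability, as reported in the
# abstract. All trial counts here are invented for illustration only.

def grasp_success_probability(successes: int, attempts: int) -> float:
    """Empirical success probability for a single object type."""
    if attempts <= 0:
        raise ValueError("attempts must be positive")
    return successes / attempts

def type_averaged_probability(trials: dict) -> float:
    """Average the per-object-type success probabilities for one method.

    trials: maps object type -> (successes, attempts).
    """
    probs = [grasp_success_probability(s, a) for s, a in trials.values()]
    return sum(probs) / len(probs)

# Hypothetical trials for one grasping method over three object types.
trials = {
    "cube":     (8, 10),
    "cylinder": (7, 10),
    "bottle":   (6, 10),
}
print(round(type_averaged_probability(trials), 3))  # 0.7
```

Averaging per-type rates (rather than pooling all trials) weights each object type equally, which matches the abstract's description of probabilities "averaged over different types of objects".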

About the Authors

R. N. Iakovlev
St. Petersburg Federal Research Center of the Russian Academy of Sciences (SPC RAS), St. Petersburg Institute for Informatics and Automation of the Russian Academy of Sciences
Russian Federation

Junior Researcher

St. Petersburg, 199178



J. I. Rubtsova
St. Petersburg Federal Research Center of the Russian Academy of Sciences (SPC RAS), St. Petersburg Institute for Informatics and Automation of the Russian Academy of Sciences
Russian Federation
St. Petersburg, 199178


A. A. Erashov
St. Petersburg Federal Research Center of the Russian Academy of Sciences (SPC RAS), St. Petersburg Institute for Informatics and Automation of the Russian Academy of Sciences
Russian Federation
St. Petersburg, 199178


References

1. Cutkosky M. R., Howe R. D. Human grasp choice and robotic grasp analysis, Dextrous Robot Hands, Springer, New York, NY, 1990, pp. 5—31.

2. Morales A., Asfour T., Azad P., Knoop S., Dillmann R. Integrated grasp planning and visual object localization for a humanoid robot with five-fingered hands, 2006 IEEE/RSJ International Conference on Intelligent Robots and Systems, IEEE, 2006, pp. 5663—5668.

3. Leskov A. G., Illarionov V. V., Kalevatykh I. A., Moroshkin S. D., Bazhinova K. V., Feoktistova E. V. Hardware-software complex for solving the task of automatic capture of the object with manipulators, Inzhenernyj Zhurnal: Nauka i Innovacii, 2015, vol. 1, no. 37 (in Russian).

4. Fernald F. G. Analysis of atmospheric lidar observations: some comments, Applied Optics, 1984, vol. 23, no. 5, pp. 652—653.

5. Zhang Z. Microsoft kinect sensor and its effect, IEEE MultiMedia, 2012, vol. 19, no. 2, pp. 4—10.

6. Ronzhin A., Saveliev A., Basov O., Solyonyj S. Conceptual model of cyberphysical environment based on collaborative work of distributed means and mobile robots, International Conference on Interactive Collaborative Robotics, Springer, Cham, 2016, pp. 32—39.

7. Vatamaniuk I. V., Yakovlev R. N. Algorithmic model of a distributed corporate notification system in context of a corporate cyber-physical system, MOIT, vol. 7, no. 4, pp. 32—33 (in Russian).

8. Levonevskiy D. K. Architecture of a cloud system for distributing multimedia content in cyber-physical systems, MOIT, vol. 7, no. 4, pp. 16—17 (in Russian).

9. ten Pas A., Gualtieri M., Saenko K., Platt R. Grasp pose detection in point clouds, The International Journal of Robotics Research, 2017, vol. 36, no. 13—14, pp. 1455—1473.

10. Mousavian A., Eppner C., Fox D. 6-dof graspnet: Variational grasp generation for object manipulation, Proceedings of the IEEE International Conference on Computer Vision, 2019.

11. Zeng A., Song S., Welker S., Lee J., Rodriguez A. Learning synergies between pushing and grasping with self-supervised deep reinforcement learning, 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), IEEE, 2018, pp. 4238—4245.

12. Li B. 3d fully convolutional network for vehicle detection in point cloud, 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), IEEE, 2017, pp. 1513—1518.

13. Liang H., Ma X., Li S., Görner M., Tang S., Fang B., Sun F., Zhang J. Pointnetgpd: Detecting grasp configurations from point sets, 2019 International Conference on Robotics and Automation (ICRA), IEEE, 2019, pp. 3629—3635.

14. Qi C. R., Su H., Mo K., Guibas L. J. Pointnet: Deep learning on point sets for 3d classification and segmentation, Proceedings of the IEEE conference on computer vision and pattern recognition, 2017, pp. 652—660.

15. Calli B., Singh A., Walsman A., Srinivasa S., Abbeel P., Dollar A. M. The ycb object and model set: Towards common benchmarks for manipulation research, 2015 international conference on advanced robotics (ICAR), IEEE, 2015, pp. 510—517.

16. Shao Q., Hu J., Wang W., Fang Y., Liu W., Qi J., Ma J. Suction Grasp Region Prediction using Self-supervised Learning for Object Picking in Dense Clutter, 2019 IEEE 5th International Conference on Mechatronics System and Robots (ICMSR), IEEE, 2019, pp. 7—12.

17. Szegedy C., Ioffe S., Vanhoucke V., Alemi A. A. Inception-v4, inception-resnet and the impact of residual connections on learning, Thirty-first AAAI conference on artificial intelligence, 2017.

18. Liu M., Salzmann M., He X. Discrete-continuous depth estimation from a single image, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2014, pp. 716—723.

19. Liu F., Shen C., Lin G. Deep convolutional neural fields for depth estimation from a single image, Proceedings of the IEEE conference on computer vision and pattern recognition, 2015, pp. 5162—5170.

20. Eigen D., Puhrsch C., Fergus R. Depth map prediction from a single image using a multi-scale deep network, Advances in Neural Information Processing Systems, 2014, pp. 2366—2374.

21. Zhu J., Ma R. Real-time depth estimation from 2D images, 2016.

22. Simonyan K., Zisserman A. Very deep convolutional networks for large-scale image recognition, arXiv preprint arXiv:1409.1556, 2014.

23. Eigen D., Fergus R. Predicting depth, surface normals and semantic labels with a common multi-scale convolutional architecture, Proceedings of the IEEE international conference on computer vision, 2015, pp. 2650—2658.

24. Geiger A., Lenz P., Stiller C., Urtasun R. Vision meets robotics: The kitti dataset, The International Journal of Robotics Research, 2013, vol. 32, no. 11, pp. 1231—1237.

25. Laina I., Rupprecht C., Belagiannis V., Tombari F., Navab N. Deeper depth prediction with fully convolutional residual networks, 2016 Fourth international conference on 3D vision (3DV), IEEE, 2016, pp. 239—248.

26. He K., Zhang X., Ren S., Sun J. Deep residual learning for image recognition, Proceedings of the IEEE conference on computer vision and pattern recognition, 2016, pp. 770—778.

27. Iandola F. N., Han S., Moskewicz M. W., Ashraf K., Dally W. J., Keutzer K. SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5 MB model size, arXiv preprint arXiv:1602.07360, 2016.

28. Koenig N., Howard A. Design and use paradigms for gazebo, an open-source multi-robot simulator, 2004 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) (IEEE Cat. No. 04CH37566), IEEE, 2004, vol. 3, pp. 2149—2154.

29. GOST R 55710-2013. Lighting of workplaces inside buildings. Norms and methods of measurements, 2013 (in Russian).

30. Funabashi H., Horie M., Kubota T., Takeda Y. Development of spatial parallel manipulators with six degrees of freedom, JSME International Journal, Ser. 3, Vibration, Control Engineering, Engineering for Industry, 1991, vol. 34, no. 3, pp. 382—387.


For citation:


Iakovlev R.N., Rubtsova J.I., Erashov A.A. Comparative Evaluation of Approaches for Determination of Grasp Points on Objects, Manipulated by Robotic Systems. Mekhatronika, Avtomatizatsiya, Upravlenie. 2021;22(2):83-93. (In Russ.) https://doi.org/10.17587/mau.22.83-93


This work is licensed under a Creative Commons Attribution 4.0 License.


ISSN 1684-6427 (Print)
ISSN 2619-1253 (Online)