Mekhatronika, Avtomatizatsiya, Upravlenie

A Method of Mobile Robot Position Estimation Correction Using Visual Location of Natural Landmarks

https://doi.org/10.17587/mau.18.752-758

Abstract

The article presents an approach that uses the visual location of an arbitrary number of known natural landmarks to correct the position estimate of a mobile robot. The approach is based on the Extended Kalman Filter, which performs a position prediction step and a correction step driven by the visual location. The visual location is performed with a calibrated camera (or a set of cameras) mounted on the mobile robot. The task of mobile robot navigation under poorly determined conditions is often solved with a set of expensive LIDARs, while other sensors commonly used in mobile robots, such as GNSS receivers and odometers, are usually not precise enough (for example, when maneuvering at intersections). Recent research in computer vision allows the creation of much less expensive systems based on image analysis from one or several cameras. Used together with other sensors, such a system can significantly increase navigation precision and stability. Known visual navigation approaches such as visual SLAM and visual odometry are widely used, but they are often not precise enough, especially when the camera motion is mostly rotational. A set of one or more natural visual landmarks can be located automatically in real time using the mobile robot's cameras. Existing methods provide only partial solutions, estimating position or direction from three or more landmarks. The proposed approach uses the Extended Kalman Filter for efficient fusion of odometer and GNSS data with the visual location of one or more landmarks. Since the landmarks are processed one at a time and independently, the number of landmarks and cameras, their types, and their positions are arbitrary. The method works with a single camera and a single landmark as well as with a round-view camera system and many landmarks. Experiments with autonomous driving through different intersections showed that a horizontal position accuracy of 10 cm and a heading accuracy of 1° or better is achievable.
The method has been developed to improve the navigation of unmanned vehicles at intersections but can be applied to the navigation of various ground, space, marine, and underwater mobile robots.
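The prediction/correction scheme described in the abstract can be illustrated with a minimal Extended Kalman Filter sketch. This is not the authors' exact formulation: the planar state [x, y, θ], the odometry motion model, the range-bearing landmark measurement, and all function names below are illustrative assumptions. The key property from the abstract is reflected in the structure: each known landmark is fused independently via its own correction step, so any number of landmarks (and cameras) can be processed.

```python
import numpy as np

def predict(x, P, v, w, dt, Q):
    """Prediction step from odometry: linear speed v, angular speed w.

    State x = [x, y, theta]; P is the 3x3 state covariance; Q is process noise.
    """
    th = x[2]
    x_new = x + np.array([v * np.cos(th) * dt, v * np.sin(th) * dt, w * dt])
    # Jacobian of the motion model with respect to the state
    F = np.array([[1.0, 0.0, -v * np.sin(th) * dt],
                  [0.0, 1.0,  v * np.cos(th) * dt],
                  [0.0, 0.0,  1.0]])
    return x_new, F @ P @ F.T + Q

def correct_landmark(x, P, z, lm, R):
    """Correction step for ONE known landmark at position lm = (lx, ly).

    z = [range, bearing] measured by the camera system; R is measurement noise.
    Landmarks are fused one at a time, so this can be called repeatedly.
    """
    dx, dy = lm[0] - x[0], lm[1] - x[1]
    r = np.hypot(dx, dy)
    h = np.array([r, np.arctan2(dy, dx) - x[2]])       # predicted measurement
    # Jacobian of the measurement model with respect to the state
    H = np.array([[-dx / r,      -dy / r,       0.0],
                  [ dy / r**2,   -dx / r**2,   -1.0]])
    y = z - h
    y[1] = (y[1] + np.pi) % (2 * np.pi) - np.pi        # wrap bearing residual
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)                      # Kalman gain
    return x + K @ y, (np.eye(3) - K @ H) @ P
```

A GNSS fix would enter the same way as another correction step with a direct position measurement (H selecting x and y); the filter thus fuses odometry, GNSS, and any number of visual landmark observations in a uniform manner.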

About the Authors

D. N. Stepanov
Institute for Robotics and Technical Cybernetics
Russian Federation


E. Yu. Smirnova
Institute for Robotics and Technical Cybernetics
Russian Federation


For citations:


Stepanov D.N., Smirnova E.Yu. A Method of Mobile Robot Position Estimation Correction Using Visual Location of Natural Landmarks. Mekhatronika, Avtomatizatsiya, Upravlenie. 2017;18(11):752-758. (In Russ.) https://doi.org/10.17587/mau.18.752-758


This work is licensed under a Creative Commons Attribution 4.0 License.


ISSN 1684-6427 (Print)
ISSN 2619-1253 (Online)