Abstract
The paper presents the possibilities of teaching a robot controller to perform autonomous segregation of objects that differ in features identifiable by a vision system. Objects can be arranged freely on the robot scene, possibly covered by others. In the learning phase, a robot operator demonstrates the segregation method by moving subsequent objects held in the hand, e.g. a red object to container A, a green object to container B, etc. After recognizing the idea of the segregation with the vision system, the robot system continues this work autonomously until all identified objects have been removed from the robot scene. There are no restrictions on the dimensions, shapes, or placement of the containers collecting the segregated objects. The developed algorithms were verified on a test bench equipped with two modern KUKA LBR iiwa 14 R820 robots.
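The learning-from-demonstration idea described in the abstract can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: it assumes the vision system reports each object as a (name, feature) pair and that each demonstration reveals one feature-to-container assignment; all function and variable names here are hypothetical.

```python
def learn_rule(demonstrations):
    """Infer a feature -> container mapping from operator demonstrations.

    Each demonstration is a (feature, container) pair, e.g. the operator
    moving a red object into container A yields ("red", "A").
    """
    rule = {}
    for feature, container in demonstrations:
        rule[feature] = container  # a later demo for the same feature overwrites
    return rule


def segregate(objects, rule):
    """Assign each remaining object to a container using the learned rule.

    Objects whose feature was never demonstrated map to None (left in place).
    """
    return {name: rule.get(feature) for name, feature in objects}


# Example: two demonstrations (red -> A, green -> B), then autonomous sorting.
demos = [("red", "A"), ("green", "B")]
rule = learn_rule(demos)
plan = segregate([("obj1", "red"), ("obj2", "green"), ("obj3", "red")], rule)
# plan: {"obj1": "A", "obj2": "B", "obj3": "A"}
```

In the paper's setup the features come from the vision system and the containers are located on the scene at run time; the sketch above only captures the rule-inference step, not perception or motion planning.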
Pages
603–615
Physical description
Bibliography: 13 items; figures, tables, formulas
Authors
author
- Institute of Automatic Control, Lodz University of Technology, Stefanowskiego 18/22, 90-924 Łódź
author
- Institute of Automatic Control, Lodz University of Technology, Stefanowskiego 18/22, 90-924 Łódź
author
- Institute of Automatic Control, Lodz University of Technology, Stefanowskiego 18/22, 90-924 Łódź
author
- Institute of Automatic Control, Lodz University of Technology, Stefanowskiego 18/22, 90-924 Łódź
Bibliography
- [1] S. Chernova and A. L. Thomaz: Robot Learning from Human Teachers. Synthesis Lectures on Artificial Intelligence and Machine Learning. Morgan & Claypool Publishers, 2014.
- [2] M. Fiala: ARTag, a fiducial marker system using digital techniques. 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition. San Diego, California USA, 20–25 June 2005, Vol. 2, 590–596.
- [3] G. Granosik et al.: Kube – platforma robotyczna dla badań naukowych i prac wdrożeniowych. In: Postępy robotyki, tom I. Red. K. Tchoń, C. Zieliński. Warszawa, Oficyna Wydawnicza Politechniki Warszawskiej 2016, 224–234 (in Polish).
- [4] S. A. Green et al.: Human-Robot Collaboration: A literature review and augmented reality approach in design. Int. J. of Advanced Robotic Systems, 5(1), (2008), 1–18.
- [5] Y. Jiang, C. Zhang and M. A. Saxena: Learning to place new objects. 2012 ICRA Conference, 3088–3095.
- [6] H. Kjellström, J. Romero and D. Kragic: Visual object-action recognition: Inferring object affordances from human demonstration. Computer Vision and Image Understanding, 115 (2011), 81–90.
- [7] H. S. Koppula, R. Gupta and A. Saxena: Learning human activities and object affordances from RGB-D videos. International Journal of Robotics Research (2013), 951–970.
- [8] M. Kyrarini et al.: Robot learning of object manipulation task actions from human demonstrations. Mechanical Engineering, 15 (2017), 217–229.
- [9] Q. Li, M. Wang and W. Gu: Computer vision based system for apple surface defect detection. Computers and Electronics in Agriculture, 36 (2002), 215–223.
- [10] J. Patel, Y. A. Choudhary and G. M. Bone: Fault tolerant robot programming by demonstration of sorting tasks with industrial objects. 2017 IEEE International Symposium on Robotics and Intelligent Sensors (IRIS), 278–283.
- [11] Seema, A. Kumar and G. S. Gill: Automatic fruit grading and classification system using computer vision: A review. 2015 Second International Conference on Advances in Computing and Communication Engineering (2015), 598–603.
- [12] S. Wang et al.: An application of vision technology on intelligent sorting system by delta robot. 2017 IEEE 19th International Conference on e-Health Networking, Applications and Services (Healthcom), 1–6.
- [13] https://www.kuka.com/en-us/technologies/human-robot-collaboration.
Notes
PL
Record developed under agreement 509/P-DUN/2018 from the funds of the Ministry of Science and Higher Education (MNiSW) allocated to activities popularizing science (2019).
Document type
Bibliography
YADDA identifier
bwmeta1.element.baztech-fd4d1c4a-3ba8-4167-a467-96792dde298b