Robotic product-based manipulation in simulated environment
Abstract
Before deploying algorithms in industrial settings, it is essential to validate them in virtual environments to anticipate real-world performance, identify potential limitations, and guide necessary optimizations. This study presents the development and integration of artificial intelligence algorithms for detecting labels and container formats of cleaning products using computer vision, enabling robotic manipulation via a UR5 arm. Label identification is performed using the speeded-up robust features (SURF) algorithm, ensuring robustness to scale and orientation changes. For container recognition, multiple methods were explored: edge detection using Sobel and Canny filters, Hopfield networks trained on filtered images, 2D cross-correlation, and finally, a you only look once (YOLO) deep learning model. Among these, the custom-trained YOLO detector provided the highest accuracy. For robotic control, smooth joint trajectories were computed using polynomial interpolation, allowing the UR5 robot to execute pick-and-place operations. The entire process was validated in the CoppeliaSim simulation environment, where the robot successfully identified, classified, and manipulated products, demonstrating the feasibility of the proposed pipeline for future applications in semi-structured industrial contexts.
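The abstract does not give implementation details, but the SURF-based label identification it describes can be illustrated with a short Python/OpenCV sketch. This is only an assumption of how such a matcher might look: the file names, Hessian threshold, ratio-test value, and match count are hypothetical, and SURF requires an OpenCV contrib build with the non-free modules enabled.

```python
import cv2
import numpy as np

# Hypothetical file names for a stored label template and a camera frame.
template = cv2.imread("label_template.png", cv2.IMREAD_GRAYSCALE)
scene = cv2.imread("camera_frame.png", cv2.IMREAD_GRAYSCALE)

# SURF lives in the contrib "nonfree" module; the threshold is illustrative.
surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
kp_t, des_t = surf.detectAndCompute(template, None)
kp_s, des_s = surf.detectAndCompute(scene, None)

# Match descriptors and keep pairs that pass Lowe's ratio test.
matcher = cv2.BFMatcher(cv2.NORM_L2)
matches = matcher.knnMatch(des_t, des_s, k=2)
good = []
for pair in matches:
    if len(pair) == 2 and pair[0].distance < 0.7 * pair[1].distance:
        good.append(pair[0])

# Enough consistent matches -> estimate a homography to locate the label,
# which is what gives the method its robustness to scale and orientation.
if len(good) >= 10:
    src = np.float32([kp_t[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp_s[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    print("Label detected, inliers:", int(mask.sum()))
else:
    print("Label not found")
```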
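Similarly, the smooth joint trajectories mentioned for the UR5 can be sketched with a simple polynomial time-scaling. The cubic profile, motion duration, and joint configurations below are assumptions for illustration; the paper's actual polynomial order and waypoints are not stated here.

```python
import numpy as np

def cubic_joint_trajectory(q0, qf, T, steps=100):
    """Cubic polynomial from q0 to qf over T seconds with zero start/end
    joint velocities; returns a (steps, n_joints) array of configurations."""
    q0, qf = np.asarray(q0, float), np.asarray(qf, float)
    t = np.linspace(0.0, T, steps)
    s = 3 * (t / T) ** 2 - 2 * (t / T) ** 3   # smooth 0 -> 1 time scaling
    return q0 + np.outer(s, qf - q0)

# Illustrative start and goal configurations for the six UR5 joints (rad).
q_start = [0.0, -1.57, 1.57, -1.57, -1.57, 0.0]
q_goal  = [0.5, -1.20, 1.00, -1.40, -1.57, 0.3]
trajectory = cubic_joint_trajectory(q_start, q_goal, T=3.0)
# Each row could then be streamed to the simulated UR5 in CoppeliaSim.
```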
Keywords
Artificial intelligence; Computer vision; Deep learning; Industry 5.0; Machine learning
Full Text: PDF
DOI: http://doi.org/10.11591/ijece.v15i6.pp5894-5903
Copyright (c) 2025 Juan Camilo Guacheta-Alba, Anny Astrid Espitia-Cubillos, Robinson Jimenez-Moreno

This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.
International Journal of Electrical and Computer Engineering (IJECE)
p-ISSN 2088-8708, e-ISSN 2722-2578
This journal is published by the Institute of Advanced Engineering and Science (IAES).