Abstract
While traditional grasp-detection methods rely on depth sensors, the current trend favors cost-effective RGB images, despite their lack of depth cues. This paper introduces an approach to detecting grasp poses from a single RGB image. To this end, we propose a modular learning network that combines grasp detection with semantic segmentation, tailored for robots equipped with parallel-plate grippers. Our network not only identifies graspable objects but also fuses prior grasp analyses with semantic segmentation, thereby improving grasp-detection precision. Notably, our design is robust, handling blurred and noisy images effectively. Key contributions include a trainable network for grasp detection from RGB images, a modular design that simplifies practical grasp implementation, and an architecture robust to common image distortions. We demonstrate the feasibility and accuracy of the proposed approach through practical experiments and evaluations.
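The abstract targets parallel-plate grippers detected from a single RGB image. A parameterization widely used for this setting is the 5-parameter oriented "grasp rectangle" (center, in-plane angle, opening width, jaw length); the sketch below is a hypothetical illustration of that representation, not the paper's actual output format, which is not specified here.

```python
import math
from dataclasses import dataclass

@dataclass
class GraspRect:
    """Hypothetical 5D grasp rectangle for a parallel-plate gripper."""
    x: float       # grasp center, image x (pixels)
    y: float       # grasp center, image y (pixels)
    theta: float   # gripper orientation in the image plane (radians)
    width: float   # gripper opening width (pixels)
    height: float  # jaw (plate) length (pixels)

    def corners(self):
        """Return the 4 corner points of the oriented rectangle,
        counter-clockwise starting from the top-left corner."""
        c, s = math.cos(self.theta), math.sin(self.theta)
        hw, hh = self.width / 2.0, self.height / 2.0
        return [
            (self.x + dx * c - dy * s, self.y + dx * s + dy * c)
            for dx, dy in ((-hw, -hh), (hw, -hh), (hw, hh), (-hw, hh))
        ]

# Example: an axis-aligned grasp centered at (100, 50)
g = GraspRect(x=100.0, y=50.0, theta=0.0, width=40.0, height=20.0)
print(g.corners())  # → [(80.0, 40.0), (120.0, 40.0), (120.0, 60.0), (80.0, 60.0)]
```

Rectangle-based detectors are typically scored by angle difference and rectangle IoU against ground truth, which is why the corner geometry above is the usual working representation.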
| Original language | English |
| --- | --- |
| Title of host publication | 2024 7th International Conference on Electronics and Electrical Engineering Technology (EEET) |
| Publisher | IEEE |
| Pages | 59-64 |
| Number of pages | 6 |
| ISBN (Print) | 9798331527877 |
| DOIs | |
| Publication status | Published - 6 Dec 2024 |
| Event | 2024 7th International Conference on Electronics and Electrical Engineering Technology (EEET) - Malacca, Malaysia. Duration: 6 Dec 2024 → 8 Dec 2024 |
Conference
| Conference | 2024 7th International Conference on Electronics and Electrical Engineering Technology (EEET) |
| --- | --- |
| Country/Territory | Malaysia |
| City | Malacca |
| Period | 6/12/24 → 8/12/24 |
User-Defined Keywords
- grasp detection
- modular learning network
- antinoise
- robotics