Abstract
Humans perceive an interaction force through the kinesthetic or tactile sense; when viewing an image alone, they estimate the interaction force on the basis of pseudo-haptics. The interaction force of a robot is traditionally measured using a contact-type tactile sensor or a force/torque (F/T) sensor. In this work, we propose a method for estimating the interaction force between a robot and objects during grasping and picking that is based on images, without an F/T sensor or a tactile sensor. For undeformable objects, more precise force estimation was achieved by simultaneously using RGB and depth images, the robot position, and the electrical current. We propose a deep neural network that combines DenseNet with a Transformer encoder/decoder to predict the interaction force. We verified the proposed network on a database we generated, which records interactions with 41 objects, and we additionally compared the results while varying the network's inputs. Our model can estimate the interaction force from various input modalities, both for objects seen during training and for unseen objects. The results clearly indicate that the proposed method outperforms the other models compared, estimating the interaction force with less than 3% error.
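The paper itself specifies the exact architecture; the sketch below is only a minimal PyTorch illustration of the kind of design the abstract describes: DenseNet backbones encoding RGB and depth frames, fused with a small robot-state vector (position and motor current), feeding a Transformer encoder/decoder that regresses a force value. All layer widths, the `state_dim` size, the single-query decoder, and the fusion-by-addition scheme are illustrative assumptions, not the authors' exact design.

```python
import torch
import torch.nn as nn
from torchvision.models import densenet121

class ForceEstimator(nn.Module):
    """Hypothetical multimodal force regressor (DenseNet + Transformer)."""

    def __init__(self, d_model=256, state_dim=8):
        super().__init__()
        # Separate DenseNet feature extractors for RGB (3-ch) and depth (1-ch).
        self.rgb_encoder = densenet121(weights=None).features
        self.depth_encoder = densenet121(weights=None).features
        # Swap the depth backbone's first conv to accept a single channel.
        self.depth_encoder[0] = nn.Conv2d(
            1, 64, kernel_size=7, stride=2, padding=3, bias=False)
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.rgb_proj = nn.Linear(1024, d_model)
        self.depth_proj = nn.Linear(1024, d_model)
        # Robot position + electrical current as a small state vector.
        self.state_proj = nn.Linear(state_dim, d_model)
        self.transformer = nn.Transformer(
            d_model=d_model, nhead=8,
            num_encoder_layers=2, num_decoder_layers=2,
            batch_first=True)
        # One learned decoder query that attends over the input sequence.
        self.query = nn.Parameter(torch.randn(1, 1, d_model))
        self.head = nn.Linear(d_model, 1)  # scalar force estimate

    def forward(self, rgb, depth, state):
        # rgb: (B, T, 3, H, W), depth: (B, T, 1, H, W), state: (B, T, state_dim)
        B, T = rgb.shape[:2]
        r = self.pool(self.rgb_encoder(rgb.flatten(0, 1))).flatten(1)
        d = self.pool(self.depth_encoder(depth.flatten(0, 1))).flatten(1)
        # Fuse modalities per time step by summing projected embeddings.
        tokens = (self.rgb_proj(r) + self.depth_proj(d)).view(B, T, -1)
        tokens = tokens + self.state_proj(state)
        out = self.transformer(tokens, self.query.expand(B, -1, -1))
        return self.head(out).squeeze(-1)  # (B, 1) force per sequence
```

Under these assumptions, the model would be trained by regressing against F/T-sensor readings recorded while building the database, e.g. with an L1 or MSE loss over each grasp-and-pick sequence; ablating the depth, position, or current inputs corresponds to the input-variation comparison the abstract mentions.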
| Original language | English |
| --- | --- |
| Article number | 118441 |
| Journal | Expert Systems with Applications |
| Volume | 211 |
| DOIs | |
| State | Published - Jan 2023 |
Keywords
- Force estimation
- Interaction force
- Machine learning
- Robot grasping