Scientists in Singapore have developed an AI system that recognizes hand gestures by combining skin-like electronics with computer vision.
Over the last decade, AI gesture recognition has become a valuable development across many fields, finding adoption in high-precision surgical robots, health monitoring equipment, and gaming systems.
This technology was initially visual-only and has since been improved by integrating inputs from wearable sensors, an approach called 'data fusion'. The wearable sensors recreate the skin's sensing ability, known as 'somatosensation'.
Nevertheless, the precision of gesture recognition is hindered by the low quality of data coming from wearable sensors, a problem caused by poor contact with the user, visual occlusion, and poor lighting. Further challenges arise from mismatches between the visual and sensory datasets, leading to less efficient and slower response times.
To address these challenges, scientists in Singapore created a 'bioinspired' data fusion system that uses skin-like stretchable strain sensors made from single-walled carbon nanotubes, together with an AI approach that resembles the way skin sensing and vision are handled together in the brain.
The scientists developed their bio-inspired AI system by combining three neural network approaches in one system: a 'convolutional neural network', a machine learning method for early visual processing; a multilayer neural network for early somatosensory information processing; and a 'sparse neural network' that 'fuses' the visual and somatosensory information together.
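To make the three-branch idea concrete, here is a minimal, purely illustrative sketch of that architecture in NumPy. All shapes, weights, and the 30% sparsity level are assumptions for demonstration; this is not the authors' actual model, just the pattern of a convolutional branch for vision, a small multilayer perceptron for sensor signals, and a sparsely connected fusion layer.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

def conv2d_features(image, kernels):
    """Toy 'early visual processing': valid 2-D convolution + ReLU,
    then global average pooling to one feature per kernel."""
    h, w = image.shape
    kh, kw = kernels.shape[1:]
    feats = []
    for k in kernels:
        out = np.zeros((h - kh + 1, w - kw + 1))
        for i in range(out.shape[0]):
            for j in range(out.shape[1]):
                out[i, j] = np.sum(image[i:i + kh, j:j + kw] * k)
        feats.append(relu(out).mean())
    return np.array(feats)

def mlp_features(signal, w1, w2):
    """Toy 'early somatosensory processing': two-layer perceptron."""
    return relu(w2 @ relu(w1 @ signal))

def sparse_fusion(visual, somato, w_fused, mask):
    """Fusion layer whose weight matrix is masked to be sparse,
    mimicking sparse somatosensory-visual connections."""
    combined = np.concatenate([visual, somato])
    return relu((w_fused * mask) @ combined)

# Hypothetical inputs: an 8x8 camera patch and a 5-channel strain signal.
image = rng.random((8, 8))
strain = rng.random(5)

kernels = rng.standard_normal((4, 3, 3))          # 4 visual filters
w1 = rng.standard_normal((6, 5))                  # somatosensory MLP layer 1
w2 = rng.standard_normal((4, 6))                  # somatosensory MLP layer 2
w_fused = rng.standard_normal((3, 8))             # fuses 4 visual + 4 somato features
mask = (rng.random((3, 8)) < 0.3).astype(float)   # keep ~30% of connections

v = conv2d_features(image, kernels)   # visual branch
s = mlp_features(strain, w1, w2)      # somatosensory branch
fused = sparse_fusion(v, s, w_fused, mask)
print(fused.shape)
```

The point of the sketch is the data flow: each modality is processed separately before a single sparse layer combines them, rather than concatenating raw camera and sensor data up front.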
The data fusion architecture has unique bioinspired features: the human-made system resembles the somatosensory-visual fusion hierarchy in the brain, and its stretchable strain sensors attach comfortably to human skin. Together, these features make gesture recognition more accurate and efficient than existing methods.
High recognition accuracy even in poor environmental conditions
The scientists tested their bio-inspired AI system with a robot that was guided through a maze via hand gestures. The bio-inspired system guided the robot through the maze with zero recognition errors, compared with six errors made by a visual-only recognition system.
The high accuracy was also maintained under poor conditions, including noise and unfavorable lighting. These findings bring us another step closer to a smarter, more machine-supported world, and raise the hope that one day we could reliably and precisely control our surroundings with a gesture.
Source: Science Daily
About us: TMA Solutions was established in 1997 to provide quality software outsourcing services to leading companies worldwide. We are one of the largest software outsourcing companies in Vietnam, with 2,500 engineers.
Visit us at https://www.tmasolutions.com/