Robots are becoming increasingly involved in our everyday lives, assisting with everything from manufacturing and logistics to health care and housework. Yet they still face significant hurdles. Here are two ways teams at Khalifa University are improving the technology.
ENHANCED PERCEPTION
Accurately recognizing and dividing up objects in a robot’s environment is a task made challenging by occlusions (blockages), complex shapes and ever-changing backgrounds. This stands in the way of robots fully grasping the world around them. The technical term for this daunting task is “panoptic segmentation” — dividing an image into foreground objects and background regions simultaneously. Improving a robot’s perception of its environment would let it handle complex tasks more efficiently.
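To make the idea concrete, here is a minimal Python sketch, illustrative only and not the KU team’s code, that fuses a semantic class map and an instance map into a single panoptic map. It uses the common class_id * 1000 + instance_id label encoding; the toy 4x4 scene is an assumption for demonstration.

import numpy as np

def merge_panoptic(semantic, instance, stuff_ids):
    # semantic: HxW array of class ids; instance: HxW array of object ids.
    # "Stuff" classes are background regions; "thing" classes get one
    # segment per object, encoded as class_id * 1000 + instance_id.
    panoptic = np.zeros_like(semantic, dtype=np.int64)
    for cls in np.unique(semantic):
        mask = semantic == cls
        if int(cls) in stuff_ids:
            panoptic[mask] = int(cls) * 1000  # one segment per background class
        else:
            for inst in np.unique(instance[mask]):
                panoptic[mask & (instance == inst)] = int(cls) * 1000 + int(inst)
    return panoptic

# Toy scene: class 0 = floor ("stuff"), class 1 = graspable objects ("things")
semantic = np.array([[0, 0, 1, 1],
                     [0, 1, 1, 0],
                     [1, 1, 0, 0],
                     [0, 0, 0, 0]])
instance = np.array([[0, 0, 1, 1],
                     [0, 1, 1, 0],
                     [2, 2, 0, 0],
                     [0, 0, 0, 0]])
print(merge_panoptic(semantic, instance, stuff_ids={0}))

Every pixel ends up with exactly one label, which is what lets a robot treat the scene as a complete map of objects and background rather than a set of disconnected detections.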
However, this problem isn’t easy to solve. Cluttered scenes, object variability, objects that block vision, motion blur and the low temporal resolution of traditional cameras all make it a tough nut to crack. Added to this, high latency — or delays — in processing sensor data can slow response times and reduce task accuracy.
Recent approaches to object segmentation built on cutting-edge graph neural networks have limitations of their own: they add computational demands in a setting where both panoptic segmentation and grasp planning must happen quickly and efficiently. More sophisticated algorithms and techniques that can grapple with the real world’s unpredictability are needed.
Yahya Zweiri, director of the KU Advanced Research and Innovation Center, and his team developed a method to overcome these challenges using a graph mixer neural network (GMNN). Specifically designed for event-based panoptic segmentation, a GMNN preserves the asynchronous nature of event streams, making use of spatiotemporal correlations to make sense of the scene. The KU researchers developed their solution with researchers from London’s Kingston University.
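The paper’s architecture is not reproduced in this article, but the core idea of keeping events asynchronous can be sketched. In the toy Python example below, each event-camera reading (x, y, timestamp, polarity) becomes a graph node, and edges connect events that are close in both space and time; the thresholds r_xy and r_t are illustrative assumptions, not values from the paper.

import numpy as np

def events_to_graph(events, r_xy=3.0, r_t=0.005):
    # events: (N, 4) array of (x, y, t, polarity) from an event camera.
    # Returns (i, j) edges linking spatiotemporal neighbors. This O(N^2)
    # loop is for clarity; real pipelines use spatial hashing or k-d trees.
    xy, t = events[:, :2], events[:, 2]
    edges = []
    for i in range(len(events)):
        near_space = np.linalg.norm(xy - xy[i], axis=1) < r_xy
        near_time = np.abs(t - t[i]) < r_t
        for j in np.nonzero(near_space & near_time)[0]:
            if j > i:
                edges.append((i, int(j)))
    return edges

# Five synthetic events: x, y, timestamp (s), polarity (+1/-1)
events = np.array([[10.0, 12.0, 0.001,  1],
                   [11.0, 12.0, 0.002,  1],
                   [11.0, 13.0, 0.003, -1],
                   [40.0, 40.0, 0.002,  1],
                   [10.0, 11.0, 0.009,  1]])
print(events_to_graph(events))  # [(0, 1), (0, 2), (1, 2)]

A graph mixer network then mixes features along these spatiotemporal edges instead of first collapsing the event stream into frames, which is how the asynchronous nature of the data is preserved.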
Their results were showcased at the 2023 IEEE Conference on Computer Vision and Pattern Recognition, one of the most prestigious conferences in the field of computer vision. They were awarded best paper by a committee that included experts from Meta, Intel and leading U.S. universities.
“GMNN has proven its worth, achieving top performance on the ESD (event-based segmentation dataset), a collection of robotic grasping scenes captured with an event camera positioned next to a robotic arm’s gripper,” Zweiri says. “This data contained a wide range of conditions: variations in clutter size, arm speed, motion direction, distance between the object and camera, and lighting conditions. GMNN not only achieves superior results in terms of its mean intersection over union (a key metric for segmentation accuracy) and pixel accuracy, but it also marks significant strides in computational efficiency compared with existing methods.”
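Mean intersection over union, the accuracy metric Zweiri cites, is simple to state in code. The sketch below implements the standard per-class definition; it is not the ESD evaluation script.

import numpy as np

def mean_iou(pred, target, num_classes):
    # For each class: IoU = |pred AND target| / |pred OR target|.
    # Classes absent from both maps are skipped so they don't skew the mean.
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, target == c).sum()
        union = np.logical_or(pred == c, target == c).sum()
        if union > 0:
            ious.append(inter / union)
    return float(np.mean(ious))

pred   = np.array([[0, 0, 1], [1, 1, 2]])
target = np.array([[0, 0, 1], [1, 2, 2]])
print(round(mean_iou(pred, target, num_classes=3), 3))  # 0.722

Pixel accuracy, the other metric mentioned, is simply the fraction of pixels whose predicted label matches the ground truth.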
This model lays the groundwork for a future where robots can perceive and interact with their environment as efficiently as possible, opening up a world of potential applications across industries.
DRILLING INTO GREATER PRECISION
Robotic drilling systems play a crucial role in such industries as manufacturing, construction and resource extraction. Achieving precise positioning of these drilling systems is essential to ensure accuracy, efficiency and safety in drilling operations. To address this challenge, researchers have been exploring advanced control techniques that can improve the positioning accuracy of robotic drilling systems.
One such technique that has shown promising results is neuromorphic vision-based control. By leveraging the principles of neuromorphic engineering and incorporating vision-based sensing capabilities, this approach offers a novel solution for enhancing the precision of robotic drilling.
Zweiri and his team, along with Dewald Swart at Strata Manufacturing, developed a neuromorphic visual controller approach for precise robotic machining.
“The automation of cyber-physical manufacturing processes is a critical aspect of the fourth industrial revolution (4IR),” says Abdulla Ayyad, a researcher on the team. “Between 2008 and 2018, the number of industrial robots shipped annually more than tripled, and by 2024, more than 500,000 industrial robots are expected to ship each year.
“The UAE specifically is aiming to become a global hub in 4IR technology, and our work is aligned directly with this vision to support solutions for increased efficiency, productivity and safety.”
“The manufacturing industry is currently witnessing a paradigm shift with the unprecedented adoption of industrial robots, and machine vision is a key perception technology that enables these robots to perform precise operations in unstructured environments,” Zweiri says.
“Neuromorphic vision is a recent technology with the potential to address the challenges of conventional vision with its high temporal resolution, low latency and wide dynamic range. For the first time, we propose a novel neuromorphic vision-based controller for robotic machining applications to enable faster and more reliable operation, and present a complete robotic system capable of performing drilling tasks with sub-millimeter accuracy.”
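The controller’s internals are not spelled out in this article, but sub-millimeter positioning of this kind typically comes from closing a feedback loop around the vision estimate. The Python sketch below is a hypothetical proportional visual-servoing loop: estimate_offset() and move_by() are stand-ins for the neuromorphic vision pipeline and the robot interface, and the gain, tolerance and noise level are assumed values.

import numpy as np

tip = np.array([5.0, -3.0])    # simulated drill-tip position (mm)
TARGET = np.array([0.0, 0.0])  # target hole position (mm)

def estimate_offset():
    # Vision stand-in: tip-to-hole offset with 0.02 mm measurement noise.
    return tip - TARGET + np.random.normal(scale=0.02, size=2)

def move_by(delta):
    # Robot stand-in: apply a relative move to the tool tip.
    global tip
    tip = tip + delta

def servo_to_hole(gain=0.5, tol_mm=0.05, max_steps=200):
    # Proportional visual servoing: measure the offset, step against it,
    # stop once the measured error is inside the sub-millimeter tolerance.
    for step in range(max_steps):
        err = estimate_offset()
        if np.linalg.norm(err) < tol_mm:
            return step
        move_by(-gain * err)
    raise RuntimeError("did not converge")

print("converged after", servo_to_hole(), "steps")

The lower the sensing latency, the higher the gain such a loop can run without becoming unstable, which is part of why the microsecond-scale response of event cameras suits fast, precise machining.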
Automating certain manufacturing processes means greater performance, productivity, efficiency and safety, and drilling is one of the processes most ripe for automation. It is widespread, especially in the automotive and aerospace industries, where high precision is essential because the quality of drilling correlates with the performance and fatigue life of the end products.