The technology will likely have use in industrial automation and robotics applications that rely on computer vision.
On the exterior of the International Space Station, NASA maintains a robotic grappling arm designed to haul in satellites for repair or to assist approaching spacecraft. Equipped with artificial intelligence and a camera to help guide operators, the system is capable of the pinpoint accuracy needed to guide objects hurtling through near-Earth orbit at 17,500 miles per hour.
The system supports “the optical seeking and ranging of a target satellite using LiDAR,” according to NASA. “Upon approach, the tumble rate of the target satellite is measured and matched by the approaching spacecraft. As rendezvous occurs the spacecraft deploys a robotic grappling arm or berthing pins to provide a secure attachment to the satellite.”
Now, through its technology-transfer program, NASA said it is opening up the computer vision system to commercial applications. The technology, developed at the NASA Johnson Space Center, employs computer vision software that guides operators aboard the ISS.
The goal of this computer vision software "is to take the guesswork out of grapple operations aboard the ISS by providing a robotic arm operator with real-time pose estimation of the grapple fixtures relative to the robotic arm's end effectors," according to NASA.
The system also includes an object-identification capability that can detect physical defects on its targets. The technology was created to aid operators aboard the International Space Station, who originally relied on flight controllers on the ground at Mission Control for guidance.
This is the latest in a line of NASA technology-transfer innovations that have included water-purification filters, the computer mouse, ear thermometers, scratch-resistant lenses, and solar cells.
The computer vision software analyzes the live feed from the robotic arm's camera and provides the operator with the commands required for an ideal grasp operation. A machine learning component monitors the feed, identifies target fixtures, and automatically sequences the proper camera and target parameters.
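In outline, that guidance loop turns a relative pose estimate into corrective motion commands for the operator. The sketch below is purely illustrative: the class, function, and the simple proportional-command rule are assumptions for the example, not NASA's actual software.

```python
# Hypothetical sketch of the grasp-guidance step: convert a pose estimate
# of the grapple fixture (relative to the end effector) into corrections.
from dataclasses import dataclass


@dataclass
class Pose:
    """Fixture pose relative to the end effector (metres, radians)."""
    x: float
    y: float
    z: float
    yaw: float


def alignment_commands(pose: Pose, gain: float = 0.5) -> dict:
    """A simple proportional rule: command a fraction of the remaining
    offset on each axis so the arm converges on the fixture."""
    return {
        "translate_x": -gain * pose.x,
        "translate_y": -gain * pose.y,
        "translate_z": -gain * pose.z,
        "rotate_yaw": -gain * pose.yaw,
    }


# Example: fixture estimated 4 cm to the right and 10 cm ahead.
cmds = alignment_commands(Pose(x=0.04, y=0.0, z=0.10, yaw=0.02))
```

In a real system the loop would re-run on every camera frame, so each command only needs to shrink the residual offset rather than close it in one move.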
"The software uses computer vision algorithms to determine alignment solutions between the position of the camera eyepoint with the position of the end effector, as the borescope camera sensors are typically located several centimeters from their respective end effector grasping mechanisms," according to NASA.
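The correction NASA describes amounts to re-expressing what the camera sees in the end effector's frame of reference. A minimal sketch, assuming a known rigid camera-to-end-effector transform (the offset values here are invented for illustration):

```python
# Illustrative eyepoint offset correction: the camera and the grasping
# mechanism are a few centimetres apart, so a target position measured
# in the camera frame must be mapped into the end-effector frame.
import numpy as np


def make_transform(rotation: np.ndarray, translation: np.ndarray) -> np.ndarray:
    """Build a 4x4 homogeneous transform from a 3x3 rotation and an offset."""
    T = np.eye(4)
    T[:3, :3] = rotation
    T[:3, 3] = translation
    return T


# Assumed mounting offset of the borescope camera (5 cm up, 3 cm forward).
T_effector_camera = make_transform(np.eye(3), np.array([0.0, 0.05, 0.03]))

# Target fixture as seen from the camera, 0.8 m straight ahead
# (homogeneous coordinates: x, y, z, 1).
target_in_camera = np.array([0.0, 0.0, 0.8, 1.0])

# Same target, now expressed relative to the grasping mechanism.
target_in_effector = T_effector_camera @ target_in_camera
```

With the target in the end-effector frame, the alignment solution drives the grasping mechanism, not the camera eyepoint, onto the fixture.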
The software's machine learning component uses a trained region-based convolutional neural network (R-CNN) that analyzes the live camera feed to identify ISS fixture targets a robotic arm operator can interact with on orbit.
"Industrial automation and robotics applications that rely on computer vision solutions may find value in this software's capabilities," NASA added. These include robotic assembly and manufacturing, inspection, and automated local-area transport. Other applications may include machine-guided efficiency, trainable object recognition, and real-time tracking. The system may also support robotic vision systems for hazardous environments such as mining, power generation, marine operations, and reconnaissance.