Air-Guardian is a system developed by researchers at the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL) at the Massachusetts Institute of Technology. The technology supports pilots during critical moments, when they must process large amounts of information from multiple screens simultaneously, acting as a proactive co-pilot that combines human and machine action through attention-based understanding.
In humans, attention is monitored through eye tracking; in machines, neural networks produce “saliency maps” that indicate where the system's focus is directed. These maps act as visual guides that highlight key regions within an image, helping to interpret and decode the behavior of complex algorithms.
Air-Guardian uses these attention signals to detect early signs of potential danger, which means it can act at any moment, not only after a safety breach has occurred, as traditional autopilot systems do.
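The article does not publish the control logic, but the idea of shifting authority when human and machine attention diverge can be illustrated with a toy sketch. Everything here is a hypothetical simplification: the function name, the total-variation divergence measure, and the threshold are illustrative assumptions, not the published method.

```python
import numpy as np

def attention_guided_authority(human_map, machine_map, threshold=0.5):
    """Toy illustration (not Air-Guardian's actual algorithm): give the
    machine proportionally more control authority as the pilot's
    attention map diverges from the machine's attention map."""
    # Normalize both saliency maps into probability distributions.
    h = human_map / human_map.sum()
    m = machine_map / machine_map.sum()
    # Total-variation distance between the two attention maps, in [0, 1].
    divergence = 0.5 * np.abs(h - m).sum()
    # Authority ramps up with divergence, capped at full handover.
    return min(1.0, divergence / threshold)

# Example: pilot fixated on one corner, system attending elsewhere.
human = np.zeros((4, 4)); human[0, 0] = 1.0
machine = np.zeros((4, 4)); machine[3, 3] = 1.0
print(attention_guided_authority(human, machine))  # -> 1.0 (full handover)
```

The key property this sketch captures is that intervention is driven by a continuous attention signal rather than a discrete safety-breach event, so the guardian can act preemptively.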
“An interesting feature of our method is its differentiability,” says Lianhao Yin, an MIT CSAIL postdoc and lead author of a new paper on Air-Guardian.
“Our cooperative layer and the entire end-to-end process can be trained. We specifically chose the causal continuous-depth neural network model because of its dynamic attention-mapping properties. Another unique aspect is adaptability: the Air-Guardian system isn't rigid; it can be adjusted based on the demands of the situation, ensuring a balanced partnership between human and machine,” Yin explains.
Air-Guardian's main strength lies in its core technology: an optimization-based cooperative layer that merges the visual attention of both human and machine, together with closed-form continuous-time (CfC) neural networks, known for their ability to analyze cause-and-effect relationships. The system scans incoming images for important information.
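To make the CfC idea concrete, here is a heavily simplified single-cell sketch. The class name, weight shapes, and initialization are illustrative assumptions; a real CfC network uses learned backbone layers and a closed-form solution to the underlying ODE, but the core mechanism, a time-dependent gate interpolating between two candidate states, looks roughly like this:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class TinyCfCCell:
    """Minimal, hypothetical sketch of a closed-form continuous-time
    (CfC) style cell: three small linear heads f, g, h produce a gate
    and two candidate states; elapsed time t modulates the gate."""

    def __init__(self, in_dim, hidden, seed=0):
        rng = np.random.default_rng(seed)
        d = in_dim + hidden
        self.Wf = rng.normal(0.0, 0.1, (d, hidden))
        self.Wg = rng.normal(0.0, 0.1, (d, hidden))
        self.Wh = rng.normal(0.0, 0.1, (d, hidden))

    def step(self, x, state, t):
        z = np.concatenate([x, state])
        f = z @ self.Wf              # gate logits
        g = np.tanh(z @ self.Wg)     # candidate state 1
        h = np.tanh(z @ self.Wh)     # candidate state 2
        # Time-dependent gate: output drifts from g toward h as t grows.
        gate = sigmoid(-f * t)
        return gate * g + (1.0 - gate) * h
```

Because the gate depends continuously on time, the cell's response varies smoothly between observations, which is the property that makes this family of models attractive for dynamic attention mapping.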
Complementing this is an implementation of the VisualBackProp algorithm, which identifies the system's focal points within an image, ensuring a clear understanding of its attention maps.
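The essence of VisualBackProp (Bojarski et al.) is to average each convolutional layer's feature maps and then walk from the deepest layer back to the input, upsampling and multiplying so that only regions active at every depth survive. The sketch below uses nearest-neighbor upsampling via `np.kron` for brevity; the original method uses deconvolution, and the layer shapes here are assumed to be integer multiples of each other.

```python
import numpy as np

def visual_backprop(feature_maps):
    """Sketch of the VisualBackProp idea. `feature_maps` is a list of
    (channels, height, width) activation arrays, ordered shallow to
    deep, with each layer's spatial size an integer multiple of the
    next deeper layer's size."""
    # Average the channels at each layer -> one 2D map per layer.
    averaged = [fm.mean(axis=0) for fm in feature_maps]
    mask = averaged[-1]
    for layer in reversed(averaged[:-1]):
        # Upsample the running mask to the shallower layer's resolution.
        ry = layer.shape[0] // mask.shape[0]
        rx = layer.shape[1] // mask.shape[1]
        mask = np.kron(mask, np.ones((ry, rx)))
        # Keep only regions that are active at both depths.
        mask = mask * layer
    # Normalize to [0, 1] for display as a saliency map.
    mask = mask - mask.min()
    if mask.max() > 0:
        mask = mask / mask.max()
    return mask
```

The resulting map plays the role of the “saliency map” described earlier: a single image highlighting which input regions drove the network's output.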
During field testing, both the pilot and the system made decisions using the same unedited images while heading toward the selected waypoint.
Air-Guardian’s success was measured by the cumulative reward accrued during flight and by how directly the aircraft reached the waypoint. The guardian reduced the risk level of flights and increased the success rate of navigation to target points.
For the system to be widely adopted in the future, the human-machine interface must be refined. Feedback from the researchers who led the tests suggests that an indicator, such as a bar, could more intuitively signal when the guardian system is taking control.
“The Air-Guardian system highlights the synergy between human expertise and machine learning, furthering the goal of using machine learning to assist pilots in challenging scenarios and reduce operational errors,” says Daniela Rus, director of CSAIL and senior author of the paper.
“One of the most interesting outcomes of using a visual attention measure in this work is the potential to allow for earlier interventions and increased interpretability by human pilots,” says Stephanie Gil, assistant professor of computer science at Harvard University.
This research was partially funded by the U.S. Air Force Research Laboratory, the USAF Artificial Intelligence Accelerator, the Boeing Company, and the Office of Naval Research.