
The Future Of MRO Is Human-Machine Teaming

Lessons from intelligence analysts in exploiting AI for maintenance.

The buzzwords Big Data, Machine Learning, Natural Language Processing, Analytics, Cognitive Computing and many others are tumbling from vendors’ brochures into shop maintenance procedures. Each Artificial Intelligence (AI) technique has to be considered on its own merits, of course. But it is also time to take a more holistic look at what is happening to aircraft maintenance in the age of AI.

The U.S. intelligence community is now deep into its own AI revolution, and a recent conference co-sponsored by the Intelligence and National Security Alliance reviewed some lessons learned in what analysts call “the human-machine team.”

The first and most important point is that “AI is a tool to assist human beings in making better decisions,” stressed Rob High, chief technology officer for IBM’s high-powered Watson system. Experiments have shown that properly trained computer systems can do some things, like playing chess or Jeopardy, better than humans. But the same kind of experiments show that, at really demanding practical tasks, humans assisted by machines beat machines alone every time.

Stacey Dixon, who directs advanced research for the Director of National Intelligence, agrees. “Hybrid forecasting, with both people and machines, shows big gains. Machines are good at certain things, people at other things, so you need teaming.”
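One simple way to picture hybrid forecasting is pooling a human estimate with a machine estimate for the same event. The sketch below is purely illustrative, not anything the conference speakers described: the function name, weights and numbers are all invented for the example.

```python
# Hypothetical sketch: pool a human analyst's probability estimate with a
# machine model's, weighted by how much trust each source has earned.

def hybrid_forecast(human_p: float, machine_p: float,
                    human_weight: float = 0.5) -> float:
    """Blend two probability estimates for the same event."""
    machine_weight = 1.0 - human_weight
    return human_weight * human_p + machine_weight * machine_p

# A mechanic estimates a 70% chance a part fails before the next check;
# a model trained on fleet data says 40%. Weighting the model slightly
# higher (it has more fleet history) yields the pooled forecast.
pooled = hybrid_forecast(human_p=0.7, machine_p=0.4, human_weight=0.4)
print(round(pooled, 2))  # 0.52
```

In practice the weights themselves would be learned from each forecaster's track record; the point of the teaming research is that the blend outperforms either input alone.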

That rule is likely to apply in aircraft maintenance, whether in an office where a computer helps a senior engineer predict and plan for a possible unscheduled maintenance event, or on the tarmac, where a mobile device advises a mechanic on how to troubleshoot an event that has just occurred.

Dixon is now investigating neuroscience to help forecasting, “trying to reverse engineer the algorithms of the brain” — for instance, figuring out how a human can see a picture of a giraffe once and recognize the animal ever afterward.

David Honey, a senior scientist for the Director of National Intelligence, says his challenge is tapping massive amounts of data from sensors, putting it in context and making analytic judgments. That should sound familiar to an aircraft maintenance IT unit. Honey says the big gain is using machines to review the huge data volumes and select the instances worthy of human assessment. “It frees the analysts up from the drudgery of reviewing raw data.”
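The triage step Honey describes can be sketched very simply: the machine scans the bulk readings and surfaces only the outliers for a human to assess. This is a hedged illustration, not anyone's actual system; the z-score threshold, field names and temperature data are invented.

```python
# Illustrative sketch: let the machine review raw sensor data and flag
# only the unusual readings for human assessment.

def flag_for_review(readings, mean, std, z_threshold=3.0):
    """Return (index, value) pairs whose z-score exceeds the threshold."""
    flagged = []
    for i, value in enumerate(readings):
        z = abs(value - mean) / std
        if z > z_threshold:
            flagged.append((i, value))
    return flagged

egt = [612, 608, 615, 610, 702, 611, 609]  # exhaust-gas temps, deg C
hits = flag_for_review(egt, mean=611.0, std=4.0)
print(hits)  # [(4, 702)] -- one reading out of seven goes to a human
```

Real fleet-monitoring systems use far richer models than a fixed threshold, but the division of labor is the same: the machine reads everything, the human judges the exceptions.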

Michael Wolmetz, a senior scientist with Johns Hopkins Applied Physics Laboratory, sees great potential in neural networks for understanding many complicated prediction problems. High said Watson uses neural networks for classification and recognition problems, but true AI needs many different techniques according to the problem being addressed. “You can’t just stick Machine Learning out there, you need multiple methods.”

Wolmetz said one big AI challenge now is moving quickly from research to deployment when many problems do not have plentiful data sets yet. Much more data may be needed to train algorithms to be accurate enough to use. A related challenge is that AI techniques that rely on human language, either spoken or written, have a problem with what are called low-resource languages, for example African languages for which not enough data has been accumulated.

But Wolmetz sees progress on another front, what he calls active learning. This occurs when the machine’s algorithms are trained by feedback from their human masters to improve capabilities.
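The core move in active learning is that the machine chooses which examples to put in front of its human trainers: typically the ones it is least sure about, so each human label does the most good. The sketch below is a minimal illustration of that selection step; the function name and confidence values are invented for the example.

```python
# Minimal active-learning sketch: rank predictions by uncertainty and
# send the least confident ones to a human for labeling.

def most_uncertain(probabilities, k=2):
    """Indices of the k predictions closest to 0.5 (least confident)."""
    ranked = sorted(range(len(probabilities)),
                    key=lambda i: abs(probabilities[i] - 0.5))
    return ranked[:k]

model_confidence = [0.95, 0.51, 0.10, 0.48, 0.99]
ask_human = most_uncertain(model_confidence)
print(sorted(ask_human))  # [1, 3]; these cases go to the human to label
```

The human's answers are then folded back into the training set, which is exactly the feedback loop Wolmetz describes: the masters improve the machine by labeling the cases the machine itself nominates.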

A general problem common to both intelligence and aircraft maintenance use of AI is how to build enough trust in AI predictions to prompt people to make important decisions based on them. “Humans earn credibility by explaining how they got results, but people don’t know why software predicts what it does,” Honey said.

Called explainability, this can be a hard problem. Humans do not always know all the steps even in their own thinking, much less those of a computer algorithm. Often people express their confidence in their own predictions with a probability estimate or even a tone in their voice. Honey said one way of gaining trust is not explaining the logic used, but showing the evidence that was used. For computers, that could mean showing the data that was used to train the algorithms.
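Honey's "show the evidence" idea can be made concrete: alongside a prediction, return the historical training cases most similar to the current input, so a human can inspect what the prediction rests on. This is a hedged sketch under invented assumptions; the Euclidean distance metric and the (temperature, vibration) data are illustrative, not from any real system.

```python
# Hedged sketch of evidence-based trust: surface the training examples
# most similar to the case being predicted, for human inspection.

def nearest_evidence(query, training_data, k=2):
    """Return the k training rows closest to the query (Euclidean)."""
    def dist(row):
        return sum((a - b) ** 2 for a, b in zip(query, row)) ** 0.5
    return sorted(training_data, key=dist)[:k]

# Past cases as (temperature, vibration) pairs; values are made up.
history = [(610.0, 30.0), (700.0, 95.0), (615.0, 32.0), (690.0, 90.0)]
evidence = nearest_evidence((612.0, 31.0), history)
print(evidence)  # the two most similar past cases, shown to the human
```

Nothing here explains the algorithm's internal logic, which is Honey's point: showing the data a prediction leans on can earn trust even when the reasoning itself stays opaque.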

And knowing the training data helps in another way. It could reveal any bias in training that could tilt resulting predictions toward errors in certain directions.

But Wolmetz cautioned that there can be a tradeoff between trust in AI predictions and AI performance. “If you must understand how it works, you may not get the performance you want.” And Dixon agreed, noting she had learned to trust navigation programs, although she did not necessarily understand everything about how they worked.

But Honey believes the next five years will see great advances in being able to explain better to decision-makers what AI is actually doing – how the machine is ‘thinking’ – so they can have confidence in predictions.

Another implementation lesson: High said IBM learned from its work for doctors that AI has to be integrated into workflows with the least disruption possible. Even when Watson improved predictions, doctors resented any unnecessary time taken away from seeing patients. 

Intelligence analysts have one problem not faced by AI users in aircraft maintenance: the probability that an adversary is trying to deceive them with false data. But most of the other challenges in intelligence use of AI have their counterparts in engineering departments and hangars. The human-machine team has a lot to learn from both members, and the humans in different sectors have a lot to teach each other about working with their very helpful machines.
