Bayesian networks research helping intelligence analysts get it right
Kevin Korb
Intelligence agencies aren’t always intelligent. American intelligence agents failed to predict the Arab Spring, for instance, and the collapse of the Soviet Union. They wrongly advised politicians that Iraq was harbouring weapons of mass destruction, and didn’t prevent the 9/11 attacks.
Since those attacks, Western citizens have submitted to greater surveillance to help intelligence agencies do their job. But intelligence agents still have to assess the information gathered, and to do this effectively, high-level reasoning tools are required. To help build better tools, the Office of the Director of National Intelligence established the Intelligence Advanced Research Projects Activity (IARPA) in 2006. The projects it has funded include quantum computing and artificial intelligence research.
Artificial intelligence is also a component of a Monash research project that will receive up to $US14 million from IARPA to develop a system to help analysts extract the most useful advice from a “crowd”.
Monash is one of four universities developing techniques for an IARPA program called CREATE (Crowdsourcing Evidence, Argumentation, Thinking and Evaluation). In this instance, crowdsourcing doesn’t mean fundraising, but “a way of combining the labour of many diverse individuals to create a single product”, explains CREATE’s program director, Dr Steven Rieber. The technique can assist analysts by giving them access to different, competing points of view. Crowdsourcing also elicits the “wisdom of crowds”. “When you combine a number of individual, discrete judgments, often the accuracy of the combined judgment is greater than the individual ones,” Dr Rieber says.
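As a toy illustration of that effect, the short Python snippet below (with invented numbers, not data from the CREATE program) averages five hypothetical analysts’ probability estimates for an event that does in fact occur, and compares forecast quality using the Brier score, the squared error of a probability forecast:

```python
# Hypothetical illustration of the "wisdom of crowds": five analysts give
# independent probability estimates for the same event, which (in this toy
# example) actually occurs, so the ideal forecast would have been 1.0.
estimates = [0.55, 0.70, 0.80, 0.45, 0.65]   # individual probability judgments
outcome = 1                                   # 1 = the event happened

def brier_score(p, outcome):
    """Squared error of a probability forecast; lower is better."""
    return (p - outcome) ** 2

combined = sum(estimates) / len(estimates)    # simple unweighted average

individual = [brier_score(p, outcome) for p in estimates]
print(f"combined estimate {combined:.2f}, Brier score {brier_score(combined, outcome):.3f}")
print(f"mean individual Brier score {sum(individual) / len(individual):.3f}")
# The averaged judgment (0.63) scores about 0.14, while the individual
# judgments average about 0.15, so the combined forecast beats the typical one.
```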
The chief investigator for the Monash project is Dr Kevin Korb, a Reader in the Department of Information Technology. He and Monash colleagues Professor Ann Nicholson, Erik Nyberg and Professor Ingrid Zukerman are renowned for their research on causal Bayesian networks, a type of mathematical model well suited to assessing systems with many variables that relate to each other probabilistically. “The vast majority of things of interest to us are related probabilistically,” says Dr Korb. “Geo-politics, of course, which is why IARPA is interested. But also things like environmental science. Medicine. The opportunities are fantastic.”
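For readers unfamiliar with the formalism, the sketch below shows the basic idea using the open-source pgmpy library (the structure and probabilities are made up for illustration; this is not one of the Monash team’s models). A causal network is specified by its graph and conditional probability tables, and the software then answers probabilistic queries, such as how likely rain is once wet grass has been observed:

```python
from pgmpy.models import BayesianNetwork
from pgmpy.factors.discrete import TabularCPD
from pgmpy.inference import VariableElimination

# Structure: Rain and Sprinkler are both causes of WetGrass.
model = BayesianNetwork([("Rain", "WetGrass"), ("Sprinkler", "WetGrass")])

# Conditional probability tables (state 0 = no, state 1 = yes); numbers are invented.
cpd_rain = TabularCPD("Rain", 2, [[0.8], [0.2]])
cpd_sprinkler = TabularCPD("Sprinkler", 2, [[0.6], [0.4]])
cpd_wet = TabularCPD(
    "WetGrass", 2,
    # Columns follow the parent combinations (Rain, Sprinkler) = (0,0), (0,1), (1,0), (1,1).
    [[1.0, 0.1, 0.2, 0.01],   # P(WetGrass = no  | parents)
     [0.0, 0.9, 0.8, 0.99]],  # P(WetGrass = yes | parents)
    evidence=["Rain", "Sprinkler"], evidence_card=[2, 2],
)
model.add_cpds(cpd_rain, cpd_sprinkler, cpd_wet)
assert model.check_model()

# Query: having observed wet grass, how likely is it that it rained?
print(VariableElimination(model).query(["Rain"], evidence={"WetGrass": 1}))
```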
Consulting the crowd
The Monash approach also relies on a “crowd” of 12 intelligence analysts who will be consulted according to the principles of the Delphi system. Delphi is an early version of crowdsourcing developed by the RAND Corporation in the 1950s, originally to make forecasts about the impact of technology on warfare. The method involves consulting a group of experts by questionnaire – they reply anonymously, and their responses are collated as a statistical representation. The anonymity, and the use of an independent moderator who collates the responses, are designed to eliminate “groupthink”. Monash’s project updates Delphi by automating much of that process and applying it to Bayesian network-assisted analysis of problems.
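One Delphi round can be pictured with a short sketch. Everything below is hypothetical (the analyst labels and numbers are invented): anonymous answers to a single forecasting question are collated into the statistical summary that the moderator, rather than any named respondent, feeds back to the panel.

```python
import statistics

def summarise_round(responses):
    """Collate anonymous numeric answers into the moderator's feedback."""
    values = sorted(responses.values())
    return {
        "median": statistics.median(values),
        "low_quartile": values[len(values) // 4],
        "high_quartile": values[(3 * len(values)) // 4],
    }

# Question: "What is the probability (%) that event X occurs within 12 months?"
round_1 = {"analyst_01": 30, "analyst_02": 55, "analyst_03": 40,
           "analyst_04": 70, "analyst_05": 45}

print(summarise_round(round_1))
# -> {'median': 45, 'low_quartile': 40, 'high_quartile': 55}
# The panel sees only this summary, answers again, and the rounds repeat
# until the estimates settle, which is the "groupthink"-resistant core of Delphi.
```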
Other universities working on separate proposals for the IARPA program are the University of Melbourne, Syracuse University in New York, and George Mason University in Virginia.
“The rules of the game that IARPA set up involve meeting a semi-objective kind of criterion at the end of each phase, so that is for the first two phases – there are three,” says Dr Korb. “Anyone who meets those criteria then passes into the next phase with a further amount of funding.”
He says predictive modelling – in which the participants are asked about the likelihood of an event happening in the future – only forms a part of the task. IARPA doesn’t want pure prediction, he says, but explanatory modelling. “You can’t very well explain stuff if you can’t predict it at all,” he says, “so the two aren’t unrelated, but there are bots that predict things that can’t explain a damned thing. IARPA aren’t interested in that.”
The method will be tested by volunteers – most likely US university graduates – acting as Delphi groups who will work on problems set up by IARPA, says Dr Korb.
“They’ll be given a problem description, like a scenario, including available evidence, maybe some probabilities that are relevant, or statistics, and they’ll be asked some questions,” he says.
“So the initial round is just coming to grips with the problem. They’ll engage through the facilitator about what questions need to be answered, or what are the most important variables to consider, and then step by step they build up a Bayesian net model to portray the situation. And once they’ve come to some agreement about what the model is, they each write their own solutions, give them to their facilitator, who collates them, takes out the best elements and gives them back to the analysts for argument or discussion. And hopefully they can converge on the right solution, which gets turned into a report that the facilitator submits to IARPA.”
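In outline, that workflow is a loop: independent answers, collation by the facilitator, feedback, and another round until the group converges. The sketch below is a deliberately simplified, runnable caricature of that loop in which each answer is reduced to a single probability estimate; the analyst behaviour, the median collation rule and the convergence test are all invented for illustration, and in BARD the rounds revolve around a shared Bayesian network model rather than a lone number.

```python
import statistics

def collate(solutions):
    """Facilitator step: combine the analysts' individual estimates (here, a median)."""
    return statistics.median(solutions)

def converged(solutions, tolerance=0.05):
    """Stop once the panel's answers have pulled close together."""
    return max(solutions) - min(solutions) <= tolerance

# Each "analyst" is modelled as a function mapping the previous group feedback
# to a revised probability estimate; the behaviour is invented for illustration.
analysts = [
    lambda feedback: 0.3 if feedback is None else (0.3 + feedback) / 2,
    lambda feedback: 0.6 if feedback is None else (0.6 + feedback) / 2,
    lambda feedback: 0.5 if feedback is None else (0.5 + feedback) / 2,
]

feedback = None
for round_number in range(1, 6):
    solutions = [analyst(feedback) for analyst in analysts]
    feedback = collate(solutions)
    print(f"round {round_number}: estimates {solutions} -> collated {feedback:.3f}")
    if converged(solutions):
        break
# The converged, collated answer is what would be written up and submitted as the report.
```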
Automating processes
Dr Korb says the IARPA funding will allow the Monash research group to automate complex processes that had previously been done manually. “The biggest constraint on the wider application of Bayesian net tools in the community is the difficulty that non-technical people have in using the tools that exist,” he says. “Until they’ve gone through some fairly extensive training, people have a lot of trouble using even a basic Bayesian net tool.
“The algorithms we’re developing are mostly to do with the human-computer interface. Getting nice screens and graphics and things like that, so humans can easily understand what’s expected of them, and then we pass model information along to the Bayesian net tool underneath to do the actual probability computations.”
The Monash project is called BARD (Bayesian ARgumentation via Delphi). The Monash team is collaborating with Delphi experts from the University of Strathclyde, and with experts in causal reasoning from Birkbeck College London and University College London.
The artificial intelligence tools the Monash team is developing can make “decision-making by humans a hell of a lot easier”, Dr Korb says. “I don’t see them as replacing decision-making by humans in any short term, but assisting them – as in intelligent decision support. They’re useful for humans to organise their own thinking.”
About the Authors
Kevin Korb
Kevin specialises in the theory and practice of causal discovery of Bayesian networks (aka data mining with Bayesian networks), machine learning, evaluation theory, the philosophy of scientific method, and informal logic.