Guest Blog by Erik Schlicht, founder of the Computational Cognition Group, LLC

DESIGNING EFFECTIVE DECISION SUPPORT

Challenges and Solutions


Biases in human decision-making touch almost every aspect of our lives. Although humans are known to be ‘predictably irrational’, the decisions we make still shape outcomes in several key sectors, from defense and intelligence, to medicine and finance, all the way to aerospace and transportation. This blog discusses an approach to improving human decision-making by designing decision-support (DS) systems that are safe, adaptive and actionable.

WHY SHOULD WE CARE ABOUT HUMAN PERFORMANCE?

As members of the machine learning community, you should care about the limitations inherent to human performance because they may be affecting:

Your Data: we are now aware that algorithms sometimes exhibit prediction biases because they were trained on biased (human) data.

Your Users: humans will likely be interfacing with the technology you develop, so understanding how they consume and process information is essential.

Your Life: as a human, you make decisions every day based on uncertain information. Most of us (despite our best intentions) exhibit biases that adversely impact our own performance.

The goal of some practitioners of AI and machine learning is to eventually remove the human ‘from the loop’. However, humans will likely be responsible for making mission-critical decisions for the foreseeable future, even in the most heavily automated industries. Therefore, bolstering human decision-making could improve operational performance across most of these critical sectors.

BETTER INFORMATION FOR BETTER DECISIONS

One approach to improving human decision-making across operational settings stems from a simple overarching goal: provide people with only the information that is relevant to their current operational context. This goal translates into a requirement that decision-support systems adaptively adjust the information they present based on whether it improves operational performance.

Satisfying this simple system requirement necessitates that several key challenges be solved:

Information Selection: how do we decide the type and amount of information to provide to the user?

CAUTION: never rely on user-preference data alone. Relying on user preferences often perpetuates (and even amplifies) user decision biases (e.g., think Facebook news feeds). A couple of reasons why this is true for decision support:

Attribution Errors: humans often mistakenly identify irrelevant information as the cause of operational success.

Implicit Biases: humans report ignoring information that the data clearly show impacted their performance.

Defining Operational ‘Context’: when/how do we adaptively change the information displayed to the user?

Information Validity: how do we evaluate whether the information provided by our decision-support system results in real-world improvements?

A solution that overcomes these challenges leverages a data pipeline that combines computational games, used to build causal (i.e., Bayesian) models of human performance, with multifidelity methods, used to ensure that models trained through computational gaming predict real-world decision-making.

The next few sections explain this process and detail how it overcomes many challenges inherent to designing effective decision-support systems.

COMPUTATIONAL GAMES FOR LEARNING CAUSAL MODELS

This data pipeline relies on computational games to gain insight into the information and context needed to design an effective decision-support system. More specifically, computational games are utilized to learn causal (i.e., Bayesian) models of human perception-action cycles for the operational task.

Bayesian models are used because they provide several key properties that are required to design an effective decision-support system:

Captures PACs: perception-action cycles (PACs) provide a robust and succinct representation of human performance across operational settings and give insight into how information is turned into actions and decisions. Bayesian networks can quantitatively capture PACs, so they provide a rigorous representation of human performance across several operational settings. Other methods, such as deep learning, do not capture this process (with its corresponding uncertainty) and offer no straightforward principle for how information is acquired and fused to form beliefs.

Easy to Interpret: when designing decision-support systems, it is important that the model be easy to interpret. This promotes user confidence in the recommendations provided by the system, in addition to allowing algorithm developers to understand WHY the algorithm is making a certain recommendation. Other approaches, such as deep learning, have difficulty achieving this property, since they are notoriously complex and opaque.

‘What if’ Scenarios: when designing decision-support systems, it is advantageous to estimate how providing better situational awareness (e.g., reducing the uncertainty associated with the human’s estimate of the operational state) impacts overall performance. With Bayesian models, this is easily accomplished, and it can save the designer considerable time and money that would otherwise be spent pursuing technology capabilities that offer little benefit to the end user (see the sketch below).
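To make the ‘what if’ idea concrete, here is a minimal sketch using the open-source pgmpy library (class names vary slightly across pgmpy releases). The toy perception-action cycle, variable names, and all probabilities below are illustrative assumptions, not a model from an actual study: a hidden operational State drives the displayed Info, the operator’s Action follows the display, and the Outcome depends on whether the action matches the true state.

```python
# Minimal 'what if' sketch with pgmpy (pip install pgmpy).
# All structure and numbers below are illustrative placeholders.
from pgmpy.models import BayesianNetwork
from pgmpy.factors.discrete import TabularCPD
from pgmpy.inference import VariableElimination

def build_pac_model(p_display):
    """p_display: probability the display (Info) reflects the true State."""
    model = BayesianNetwork([("State", "Info"), ("Info", "Action"),
                             ("State", "Outcome"), ("Action", "Outcome")])
    model.add_cpds(
        TabularCPD("State", 2, [[0.5], [0.5]]),
        # Display fidelity: how often Info matches the hidden State.
        TabularCPD("Info", 2, [[p_display, 1 - p_display],
                               [1 - p_display, p_display]],
                   evidence=["State"], evidence_card=[2]),
        # The operator acts on what the display shows (slightly noisy).
        TabularCPD("Action", 2, [[0.9, 0.1], [0.1, 0.9]],
                   evidence=["Info"], evidence_card=[2]),
        # The outcome is good when the Action matches the true State.
        TabularCPD("Outcome", 2,
                   [[0.95, 0.2, 0.2, 0.95],   # P(good outcome)
                    [0.05, 0.8, 0.8, 0.05]],  # P(bad outcome)
                   evidence=["State", "Action"], evidence_card=[2, 2]),
    )
    model.check_model()
    return model

# 'What if' we improved situational awareness from 70% to 95% display fidelity?
for p in (0.70, 0.95):
    infer = VariableElimination(build_pac_model(p))
    print(p, infer.query(["Outcome"]))
```

Comparing the two queries shows how much the probability of a good outcome improves when display fidelity rises, which is exactly the kind of estimate that can justify (or rule out) investing in a new sensor or display before it is built.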

To learn and parameterize our Bayesian model, we need to produce data that are useful for learning the causal structure of the directed acyclic graph (DAG). Computational games are an extremely useful tool for producing such data. They are designed to be a reduced version of the real-life operational task: essentially, they put the player at key decision points while still attempting to capture all the important operational components, such as information uncertainty and operational goals.

In this respect, they allow us to produce a great deal of useful data in a relatively inexpensive manner. Moreover, the low-fidelity (LoFi) data produced from these games can be used to inform answers to two of the key challenges defined above:

Information Selection: using the computational game, we can produce data that allow us to estimate the information that promotes the best overall operational performance of the player. The game also provides insight into the information that minimizes the uncertainty associated with the player’s ability to infer hidden operational states. Both provide rigorous metrics for recommending information to use in real-life operations.

Defining Operational ‘Context’: a computational gaming environment allows us to produce data useful for learning the structure of the operational DAG. This capability provides a rigorous method for defining operational context: each ‘context’ corresponds to a new DAG structure (i.e., a new mapping between important (hidden) operational states and information). A sketch of both ideas follows this list.
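The sketch below assumes pgmpy and scikit-learn are available and that game logs arrive as a pandas DataFrame with one row per logged decision point; all column names are hypothetical placeholders, not fields from a real study.

```python
# Sketch: scoring displayed information and learning per-context DAGs
# from (discrete-valued) game logs.
import pandas as pd
from pgmpy.estimators import HillClimbSearch, BicScore
from sklearn.metrics import mutual_info_score

def rank_information(game_logs: pd.DataFrame, info_cols, state_col="hidden_state"):
    """Information Selection: rank displayed variables by how much they
    reduce uncertainty about the hidden operational state (mutual information)."""
    scores = {col: mutual_info_score(game_logs[col], game_logs[state_col])
              for col in info_cols}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

def learn_context_dag(game_logs: pd.DataFrame):
    """Defining Context: the highest-scoring DAG for one slice of operations."""
    search = HillClimbSearch(game_logs)
    return search.estimate(scoring_method=BicScore(game_logs)).edges()

# One DAG per candidate context: a materially different structure signals a
# distinct operational context where the display should adapt, e.g.:
# for phase, logs in game_logs.groupby("mission_phase"):
#     print(phase, learn_context_dag(logs.drop(columns="mission_phase")))
```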

IMPORTANCE OF MULTIFIDELITY METHODS

Although computational games provide a robust method for developing models of human performance, they cannot be used independently, due to a couple of fundamental issues that may lead models trained through these methods to poorly predict real-world decision-making.

Simulation Factors: humans may perform differently in the computational gaming environment than in the operational setting, due to differences between the real and simulated worlds. This would cause models trained in gaming environments to predict real-world performance poorly.

Participant Factors: although a computational game makes it possible to collect a great deal of inexpensive data (e.g., via distributed collection with novice participants), it may lead to models that predict real-world operational performance poorly, due to differences between novice and expert participants (e.g., differences in training).

This may lead some to argue for relying exclusively on real-world (i.e., high-fidelity, or HiFi) data to train decision-support models. In some cases this may be possible, and it is encouraged if enough data are available. However, in some operational settings, real-world data either don’t exist or have limited availability, and a limited amount of HiFi data can cause models to predict poorly, since there isn’t enough data to sufficiently train and evaluate them.

Moreover, if your goal is to evaluate human-technology interactions (HTIs), collecting real-world data for each technology concept is expensive, since it requires the technology to be developed and deployed before the data become available. Ideally, you’d like to be able to evaluate concepts before production, as doing so would save a great deal of resources.

Multifidelity methods attempt to quantitatively overcome the limitations of relying exclusively on LoFi or HiFi data. More specifically, they provide a mechanism for training accurate models inexpensively. They also enable HTI models to be developed before the technology has been deployed, which can provide a great deal of insight into the effectiveness of different technology concepts.

Multifidelity methods accomplish this lofty goal by learning transformations between LoFi and HiFi parameters. For example, some of our early work on multifidelity simulation for aerospace accomplished this by estimating a joint distribution over low- and high-fidelity model parameters. In a transportation study we performed, we simply used a regression model that learned the mapping between baseline-technology and candidate-technology performance. In general, the approach for learning transformations between data fidelities changes as a function of the features and data available; a minimal sketch of the regression-style approach appears below.
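This sketch illustrates the regression flavor of the idea under strong simplifying assumptions: a small set of participants is measured in both fidelities, synthetic numbers stand in for real scores, and a plain linear model (via scikit-learn) plays the role of the learned transformation.

```python
# Sketch: learn a LoFi -> HiFi transformation from a small paired sample,
# then use it to calibrate abundant, cheap LoFi data. All data are synthetic.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Plentiful LoFi (game) scores, cheap to collect.
lofi_scores = rng.uniform(0, 1, size=(200, 1))

# A few participants are also measured in the real world (HiFi);
# the linear-plus-noise relationship here is an illustrative assumption.
paired_lofi = lofi_scores[:20]
paired_hifi = 0.8 * paired_lofi + 0.1 + rng.normal(0, 0.02, size=(20, 1))

# Learn the transformation from the small paired sample ...
mapping = LinearRegression().fit(paired_lofi, paired_hifi)

# ... then correct the abundant LoFi data so it predicts HiFi performance.
predicted_hifi = mapping.predict(lofi_scores)
print("calibrated mean performance:", predicted_hifi.mean())
```

The same pattern generalizes: the linear model can be swapped for a Gaussian process or a joint Bayesian model over parameters when the LoFi-HiFi relationship is nonlinear or when parameter uncertainty must be propagated.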

INFORMING AND EVALUATING DECISION-SUPPORT

Overall, this article proposed a novel methodology for developing effective decision-support technology, one that can overcome many of the challenges commonly encountered during the design process. Using the data pipeline described above, we can make principled recommendations about the type and amount of information to present, in addition to when that information should be adjusted during the decision-making process.

If these recommendations are followed, the result should be decision-support technology that is safe, adaptive and actionable, and that improves operational decision-making. Moreover, the multifidelity methods discussed here allow models to be trained relatively inexpensively and enable technology to be evaluated at the concept stage of development.

Finally, it is critical that these recommendations be evaluated in a rigorous manner. For example, it would be useful to compare real-world performance under the prescribed information against meaningful control conditions (e.g., the conditions described above), as in the sketch below.
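One minimal version of such an evaluation, sketched below with synthetic placeholder numbers, simply compares mean task performance between the prescribed-information condition and a control condition with a two-sample test (via SciPy); a real evaluation would add effect sizes, power analysis, and corrections for multiple comparisons.

```python
# Sketch: comparing performance under prescribed information against a
# control condition. The synthetic scores are placeholders for real
# operational measurements.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
prescribed = rng.normal(0.78, 0.08, size=40)  # model-recommended display
control = rng.normal(0.70, 0.08, size=40)     # baseline display

# Welch's t-test: no equal-variance assumption between conditions.
t_stat, p_value = stats.ttest_ind(prescribed, control, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```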

To be clear, the process described here will take a commitment from the industry in which it will be used, as it requires time and money to establish the method. However, once in place, it is capable of producing useful recommendations that should lead to more efficient operations, saving expense in the long term.

Please contact me with any questions; I am always happy to discuss research.

AUTHOR BIO

Erik J Schlicht, PhD, is the founder of the Computational Cognition Group, LLC. He conducted research at Harvard University, MIT, Caltech and the University of Minnesota, where he used quantitative approaches to study decision-making under uncertainty and risk.

Visit his professional web page for further information about his research experience and a sample of his publications.