One of our Program Committee members, Reshama Shaikh, recently interviewed Erik Schlicht, PhD, the founder of the Computational Cognition Group, LLC. Prior to leaving academia to found C2-g, Dr. Schlicht’s research utilized quantitative methods to investigate human decision making under uncertainty and risk. He leveraged methods from AI, machine learning and cognitive science to understand real-world decision making.
RS) Tell us briefly about yourself and your work.
ES) I received my PhD in Cognitive and Brain Sciences (with a minor in Human Factors) from the University of Minnesota. My thesis utilized Bayesian decision-theoretic methods to predict natural human sensorimotor control. I was a postdoctoral researcher between Harvard and Caltech and conducted behavioral- and neuro-economic experiments using a simplified poker task that I developed.
After my academic training, I moved on to applied research at both MIT Lincoln Laboratory and the University of Minnesota. In these positions, I utilized multifidelity methods for both aerospace and transportation studies, respectively.
RS) For those of us who are unfamiliar, what are low-, high-, and multi-fidelity methods?
ES) The applied research I was involved with attempted to predict operational decision-making, defined as an expert making a real-world decision in a risky and uncertain context. Predicting expert behavior with machine learning requires an adequate amount of high-fidelity (real-world, expert) data to train and evaluate the model. However, in these operational contexts there is seldom enough high-fidelity data available to train the model adequately.
Researchers can leverage data lower on the fidelity spectrum to collect lots of inexpensive data. For example, data can be collected from novices participating in a distributed simulation that approximates the operational context of interest. The risk of utilizing low-fidelity data is that it may not be useful for predicting expert behavior, due to both simulation and participant factors.
Therefore, multifidelity methods attempt to leverage the strengths of each data source, while overcoming the limitations, allowing for low-fidelity data to be used to train models that accurately predict expert behavior. This also enables accurate models of expert behavior to be developed inexpensively.
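One common way to realize this idea is an additive-correction scheme: fit a surrogate on the abundant low-fidelity data, then learn a small discrepancy model from the scarce high-fidelity data. The sketch below is purely illustrative (the toy task, polynomial surrogate, and all names are my assumptions, not Dr. Schlicht's specific models):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setting: many cheap low-fidelity (novice/simulator) samples,
# few expensive high-fidelity (expert/real-world) samples of the same task.
x_lo = rng.uniform(0, 1, 200)
y_lo = np.sin(2 * np.pi * x_lo)                # low-fidelity response
x_hi = rng.uniform(0, 1, 8)
y_hi = np.sin(2 * np.pi * x_hi) + 0.3 * x_hi   # "expert" response differs slightly

# Step 1: fit a surrogate on the abundant low-fidelity data.
lo_model = np.poly1d(np.polyfit(x_lo, y_lo, deg=5))

# Step 2: fit a simple additive correction on the scarce high-fidelity data,
# so the expert model = low-fidelity surrogate + learned discrepancy.
resid = y_hi - lo_model(x_hi)
delta = np.poly1d(np.polyfit(x_hi, resid, deg=1))

def expert_model(x):
    return lo_model(x) + delta(x)

# Compare squared error against the true expert response on held-out points:
# the corrected model should beat the uncorrected low-fidelity surrogate.
x_test = np.linspace(0, 1, 50)
y_true = np.sin(2 * np.pi * x_test) + 0.3 * x_test
err_mf = np.mean((expert_model(x_test) - y_true) ** 2)
err_lo = np.mean((lo_model(x_test) - y_true) ** 2)
print(err_mf < err_lo)
```

The design choice here is that the discrepancy between fidelity levels is assumed to be simpler (here, linear) than the behavior itself, so it can be learned from only a handful of expensive expert samples.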
RS) Can you give an example of how this method is used currently?
You investigate human decision making under uncertainty and risk (pilots, drivers, dating, marketing?). What are effective and popular applications?
ES) In general, I have found multifidelity approaches to be extremely useful for understanding human-technology interactions. For example, we leveraged multifidelity approaches to understand the safety associated with UAS (unmanned aerial system) operators during self-separation scenarios, in addition to exploring the risk involved with providing drivers with in-vehicle signage.
To extend this example, assume we want to evaluate the safety associated with unmanned aircraft (say, drones) in the NAS (national airspace system). Suppose two aircraft are on a trajectory to collide. How do we simulate these instances so that operator behavior is reflected correctly, and encounters are represented accurately? Multifidelity methods can be used to infer operator utility weights from low-fidelity data, enabling accurate prediction of their decisions and producing valid safety estimates.
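One way to picture the utility-weight inference mentioned above is a toy maximum-likelihood fit to simulated choice data. Everything below (the softmax choice rule, the two features, the ground-truth weight) is an illustrative assumption, not Dr. Schlicht's actual model:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical encounter features: each sample gives the safety benefit and
# the route-deviation cost of maneuvering versus staying on course.
safety_gain = rng.uniform(0, 1, 300)
deviation_cost = rng.uniform(0, 1, 300)

TRUE_W = 0.7  # assumed ground-truth weight the operator places on safety

def p_maneuver(w, s, c, beta=5.0):
    """Softmax probability of maneuvering, given utility U = w*s - (1-w)*c."""
    u = w * s - (1 - w) * c
    return 1.0 / (1.0 + np.exp(-beta * u))

# Simulated (low-fidelity) choice data generated from the true weight.
choices = rng.random(300) < p_maneuver(TRUE_W, safety_gain, deviation_cost)

# Maximum-likelihood estimate of the utility weight via a simple grid search.
def log_lik(w):
    p = p_maneuver(w, safety_gain, deviation_cost)
    return np.sum(np.where(choices, np.log(p), np.log(1 - p)))

grid = np.linspace(0.01, 0.99, 99)
w_hat = grid[np.argmax([log_lik(w) for w in grid])]
print(round(w_hat, 2))  # should land near TRUE_W
```

Once a weight like `w_hat` is recovered, the fitted choice model can be run inside an encounter simulation to predict operator decisions and estimate safety outcomes.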
The benefit of using this approach for understanding human-technology interactions is that you’re able to gather low-fidelity data on different technology concepts and only produce the one that is estimated to be safest and/or lead to improved performance in the real-world setting. In the absence of multifidelity approaches, technologies have to be manufactured and deployed before data are available regarding their effectiveness, which obviously leads to greater cost (time and money).
RS) What excites you most about the direction of this research? Where do you see it going in 5 or 10 years?
ES) This area is relatively new, so there hasn’t been a ton of research of which I’m aware. Judea Pearl had won the 2011 Turing Award, and he gave a keynote speech at UAI-2012, where I first presented our multifidelity work. His lecture was on the concept of metasynthesis and how he was excited to see where this area leads. During our poster session, his student commented that he believed this multifidelity work to be a specific example of metasynthesis. I took that as a cue to keep pursuing this line of research, so I will utilize the multifidelity method as opportunities arise, and I hope many others do the same.
I see this making an impact in domains where human-technology interactions directly affect outcomes (e.g., healthcare and operational decision-making). Overall, this could lead to technology being developed because it improves system performance and safety, rather than for more superficial reasons. So, I’m excited to see where this method can contribute.
RS) What tools do you use in your research? Platform (AWS?), Python, R? How large are your datasets? Algorithms?
ES) I primarily use MATLAB for rapid algorithm development and then deploy the code in whatever native language is necessary. Since I am leveraging multifidelity data, we tend to span the full spectrum of data sizes and algorithm complexity.
RS) Are there ways for other researchers and students to participate in this research?
ES) I would recommend students read the limited literature in the area and see if they can improve on the methods developed to date (either in theory or application). I am sure most of the researchers in the area would be happy to help, so just contact them about collaboration opportunities.
Dr. Schlicht is the Founder of the Computational Cognition Group (C2-g), LLC. His research utilizes quantitative methods to investigate human decision making under uncertainty and risk. He leverages techniques from AI, machine learning, and computational cognitive science to understand real-world decision making. This expertise has been used to innovate across many different data-driven domains. Here’s a link to his webpage: http://schlicht.org
Reshama Shaikh is a data scientist/statistician and MBA with skills in Python, R and SAS. She worked for over 10 years as a biostatistician in the pharmaceutical industry. She is also an organizer of the meetup group NYU Women in Machine Learning and Data Science http://wimlds.org/chapters/about-nyc/. She received her M.S. in statistics from Rutgers University and her M.B.A. from NYU Stern School of Business. Twitter: @reshamas