Mental Models of AI-Based Systems: User predictions and explanations of image classification results
N. Bos, K. Glasgow, J. Gersh, I. Harbison, C.L. Paul.
Human Factors and Ergonomics Society Annual Meeting,
2019.
conference, workshop
HCI, Human Factors, AI, ML, CNN, Image Classification, Machine Learning, Artificial Intelligence, Human Machine Teaming, Explainable AI, XAI, Mental Model
Humans should be able to work more effectively with artificial intelligence-based systems when they can predict likely failures and form useful mental models of how the systems work. We conducted a study of humans' mental models of artificial intelligence systems, focusing on a high-performing image classifier. Participants viewed individual labeled images in one of three general classes and then tried to predict whether the classifier would label each one correctly. Participants began performing this task at levels well above chance, 69% correct. However, after 137 trials with feedback, their performance improved by only a small but statistically significant amount, to 73%. Analysis of these results and participants' comments indicated that humans were using their own perceptions of the images as first-approximation proxies. 'Projecting' human characteristics onto a computer might be considered a cognitive bias, but in this task the strategy seemed to yield good initial results. This might be called effective anthropomorphism. Participants used this strategy both implicitly and explicitly. The paper includes discussion of why this strategy might have worked better than alternatives, why further learning was quite difficult, and which assumptions about similarities between human perception and image classification systems may in fact be correct.
Enhancing Deep Learning with Visual Interactions
E. Krokos, H.C. Chen, J. Chang, B. Nebesh, C. L. Paul, K. Whitley, and A. Varshney.
ACM Transactions on Interactive Intelligent Systems,
9,
2019.
(Article #5)
journal
machine learning, deep learning, visualization, interaction
Deep learning has emerged as a powerful tool for feature-driven labeling of datasets. However, for it to be effective, it requires a large and finely-labeled training dataset. Precisely labeling a large training dataset is expensive, time-consuming, and error-prone. In this paper, we present a visually-driven deep learning approach that starts with a coarsely-labeled training dataset and iteratively refines the labeling through intuitive interactions that leverage the latent structures of the dataset. Our approach can be used to (a) alleviate the burden of intensive manual labeling that captures the fine nuances in a high-dimensional dataset through simple visual interactions, (b) replace a complicated (and therefore difficult to design) labeling algorithm with a simpler (but coarse) labeling algorithm supplemented by user interaction to refine the labeling, or (c) use low-dimensional features (such as RGB colors) for coarse labeling and turn to higher-dimensional latent structures, progressively revealed by deep learning, for fine labeling. We validate our approach through use cases on three high-dimensional datasets and a user study.
Hacking Stressed: Frustration, burnout, and the pursuit of happiness
C.L. Paul.
THOTCON,
Presentation,
2019.
(25 minute talk)
Cyber, HCI, Security, Stress, Hacking, Human Factors
Anyone in this business knows how fun and exciting hacking can be, but also the emotional and physical toll it can take. Mental health is a longstanding dirty secret in the infosec community, and we are just now learning how to talk about it. The wear and tear of everyday stress combined with the 'always on' aspect of an operational environment creates a perfect storm for burning out. While stress can have a negative impact on job performance, my primary concern is the health and safety of infosec professionals themselves. Not only does stress have short-term effects on cognitive abilities and performance, but recurrent acute stress can have long-term effects on health (mental and physical) and contribute to burnout and turnover. There are many sources of stress in infosec operations, some of which can be managed while others are simply the nature of the job. Activities that require long periods of vigilance and creativity deplete cognitive resources and increase fatigue. Some of these activities have unpredictable results that can increase frustration. Other times, external factors unrelated to the activity itself may introduce new sources of stress that are not normally present. A certain level of stress is to be expected in these operations because they are considerably difficult, have a high risk vs. reward trade-off, and require a significant amount of knowledge and skill. But how much stress can you take on and still be a happy hacker? In this talk I will discuss why infosec is so stressful, how this stress affects you and your network, and some things you can do about it. I will also discuss lessons learned from my research on tactical cyber operations, which examined fatigue, frustration, and cognitive workload in operators.
Do we need 'Teaming' to Team with a Machine?
C. Haimson, C.L. Paul, S. Joseph, R. Rohrer, B. Nebesh.
HCI International,
2019.
conference, workshop
HMT, HCI, AI, ML, Artificial Intelligence
What does it mean for humans and machines to work together effectively on complex analytic tasks? Is human teaming the right analogue for this kind of human-machine interaction? In this paper, we consider behaviors that would allow next-generation machine analytic assistants (MAAs) to provide context-sensitive, proactive support for human analytic work - e.g., awareness and understanding of a user's current goals and activities, the ability to generate flexible responses to abstractly-formulated needs, and the capacity to learn from and adapt to changing circumstances. We suggest these behaviors will require processes of coordination and communication that are similar to but at least partially distinguishable from those observed in human teams. We also caution against over-reliance on human teaming constructs and instead advocate for research that clarifies the functions these processes serve in enabling joint activity and determines the best way to execute them in specific contexts.
Making Sense of Darknet Markets: Automatic Inference of Semantic Classifications from Unconventional Multimedia Datasets
A. Berman, C.L. Paul.
International Conference on HCI for Cybersecurity, Privacy, and Trust,
2019.
conference, workshop
HCI, Cyber, Machine Learning, ML, AI, CNN
Darknet Markets are a hotbed of illicit trade and are difficult for law enforcement to monitor and analyze. Topic Modeling has been a popular method for semantically analyzing market listings, but it lacks the ability to infer the information-rich visual semantics of images embedded within these listings. In this paper, we present a relatively fast method that uses unsupervised and self-supervised machine learning to infer image semantics from large, unstructured multimedia corpora, and we demonstrate how it may aid analysts in investigating the content of Darknet Markets.