Bruce Schneier Visits the BYU Neurosecurity Lab

We recently had the pleasure of hosting author and security thought leader Bruce Schneier at the Neurosecurity Lab. We know Bruce from presenting at the Workshop on Security and Human Behavior (2014, 2015, and 2016), which he co-chairs. Bruce has also featured our work on his blog, Schneier on Security.

We gave Bruce a tour of the MRI Facility:

As part of the tour, we scanned Bruce’s brain in the MRI scanner:

Best of all, Bruce gave a fantastic lecture to our students on security and the Internet of Things:

Thanks, Bruce, for visiting us at BYU!

On the Top of the World (Y Mountain)

The Neurosecurity Lab hiked to the top of Y Mountain, an 8,572 ft (2,613 m) mountain named for the 380 ft (116 m) “Y” insignia representing BYU.

It was a beautiful, clear fall morning. Below are some pictures we took.

On top of Y Mountain, overlooking BYU campus and Provo, UT.

The trail leading from the Y to the top of Y Mountain.

A panorama of Rock Canyon from the north summit.

Jeff, Brock, Bonnie, and Dan near the cliffs of the north summit.

After returning to the Y Mountain trailhead.

What the Neurosecurity Lab Has Been Up To this Summer

With summer now officially over, it’s a good time to recap what the Neurosecurity Lab has been up to. We’ve been very busy, with a major publication, presentations on three continents, and our ongoing research.

Study on Interruptions and Security Messages Published in ISR

Our study, “More Harm than Good? How Messages that Interrupt Can Make Us Vulnerable,” was published online at Information Systems Research, one of the top two journals in the field of Information Systems. This article received press coverage in a number of outlets, including:

Workshop on Security and Human Behavior (SHB)

Bonnie and Tony presented at the Workshop on Security and Human Behavior, held at Harvard Law School. Bruce Schneier describes the workshop this way:

SHB is a small invitational gathering of people studying various aspects of the human side of security. The fifty or so people in the room include psychologists, economists, computer security researchers, sociologists, political scientists, philosophers, neuroscientists, lawyers, anthropologists, business school professors, and a smattering of others. It’s not just an interdisciplinary event; most of the people here are individually interdisciplinary.

These are the most intellectually stimulating two days of my year; this year someone called it “Bruce’s brain in conference form.”

Bonnie and Tony participated in a panel on security decision making with Anupam Datta, Robin Dillon-Merrill, Serge Egelman, and Angela Sasse.

Ross Anderson summarized Bonnie’s presentation this way:

The last session of Tuesday was started by Bonnie Anderson from the neurosecurity lab at BYU. Mostly we tune out security warnings because we’re busy; if warnings could avoid dual-task interference they’d be more effective and less annoying. An overload in the medial temporal lobe (MTL) is responsible, and she’s been working with the Chrome security team to investigate bad times to interrupt a user (so: not during a video but after, not while typing, while switching between domains of different types …). Now she has an eye tracker that can be used with fMRI and is testing polymorphic warnings, jiggling warnings and much more. This demonstrated that polymorphic warnings are more resistant to habituation, and that eye tracking studies give much the same results as fMRI.

And here’s Ross’ summary of Tony’s presentation:

Tony Vance studies habituation in communication. It’s not the same as fatigue, as polymorphic stimuli still work, but rather a means of saving effort. However people habituate to whole classes of UI designs; and notifications are becoming pervasive and desensitising in general. It’s not just habituation you have to forestall, but generalisation too. It’s good that the Chrome team worry about their warning design, but not sufficient; their good design can be impacted by others’ poor design (or downright mimicry). Tony has been using eye tracking and fMRI to explore all this.

Interdisciplinary Symposium on Decision Neuroscience

Brock and Dan presented a poster at the Interdisciplinary Symposium on Decision Neuroscience (ISDN) at Temple University in Philadelphia.

Besides presenting, Dan and Brock went to a Phillies game and had a great time. Brock caught a ball and gave it to Dan for his birthday. Best. PhD adviser. Ever.

Gmunden Retreat on NeuroIS

Bonnie and Tony participated in the Gmunden Retreat on NeuroIS, held at Schloss Ort castle in Gmunden, Austria. This was their commute to work during the conference:

The Gmunden Retreat focuses on neuroscience applications to information systems research. Tony presented on generalization to security messages.

Tony, Bonnie, and colleague Adriane Randolph

European Conference on Information Systems

Bonnie and Tony attended the European Conference on Information Systems, held at Boğaziçi University in Istanbul, Turkey.

The Bosphorus, looking toward the European side of Istanbul

Bonnie presented on our security message interruptions paper, described at the top of this post.

Symposium on Usable Privacy and Security

Finally, Bonnie presented at the USENIX Symposium on Usable Privacy and Security (SOUPS) in Denver, Colorado. She presented on generalization to security messages.

Dual-task Interference Study Published in Information Systems Research

Update 10/23/2017: This paper received the “Best Published Paper Award” for all papers published in Information Systems Research in 2016. See story here.

Our study, “More Harm than Good? How Messages that Interrupt Can Make Us Vulnerable,” has been accepted for the special issue on “Ubiquitous IT and Digital Vulnerabilities” at Information Systems Research, one of the premier journals in the field of information systems.

In the article, we examine how security messages are affected by dual-task interference (DTI), a cognitive limitation in which even simple tasks cannot be performed simultaneously (i.e., multitasking) without significant performance loss. We demonstrated this in two experiments: one using fMRI and another using users’ responses to the Chrome Cleanup Tool (CCT), a security message in Google Chrome.

In the News

Study Summary

First, we used fMRI to show how DTI occurs in the brain when a simple memory task is interrupted with a security message. We found that neural activity in the bilateral medial temporal lobe (MTL) was substantially reduced when a security message interrupted a user performing a simple memory task (a high-DTI condition), compared to when a user responded to the security message by itself (Figure 1). This suggests that DTI inhibits one's ability to use the MTL to retrieve from long-term memory the information necessary to respond to permission warnings.

Figure 1. Increased activity in the medial temporal lobe (MTL) in response to the Warning-Only condition compared to the High-DTI condition, in which the warning interrupted a memory task. Warm colors indicate increased blood flow.

Further, we showed that the change in activation in the MTL significantly predicted users' disregard of the security message, which we define as behaving against the security message's recommended course of action.

Interestingly, we found that if we finessed the timing of the security message so that it was displayed between memory tasks (a low-DTI condition), participants showed more activation in the MTL than in the high-DTI condition. In addition, participants in the low-DTI condition disregarded the security message significantly less often than those in the high-DTI condition (8.8% vs. 22.92%).

Amazon Mechanical Turk Experiment using the Chrome Cleanup Tool

Next, applying the findings of our fMRI experiment, we performed a practical experiment that examined how DTI impacts users' responses to the Chrome Cleanup Tool (CCT), a security message in Google Chrome for Windows (Figure 2). The CCT detects if malware has tampered with the host computer and manipulated the browser or other Internet settings (Google 2015). When a problem is detected, the CCT displays a message to the user asking for permission to remove the unwanted software and restore Chrome's original settings. Although the CCT message is important, it does not require immediate attention and, therefore, can be delayed.

Figure 2. Google Chrome Cleanup Tool (CCT) message.

We collaborated with a team of Google Chrome security engineers who develop the CCT to identify low-DTI times to display security messages during the browsing experience, in contrast to high-DTI times when the user would likely be cognitively engaged in another task. These times were selected according to (1) DTI theory and the fMRI results of Experiment 1, (2) input from Google engineers on moments that occur frequently and generalize across a wide variety of web-based activities and users, and (3) an assessment of the feasibility of implementation in a web browser.

The low- and high-DTI conditions were:

Low-DTI:

  1. At the beginning of the first task.
  2. After the video.
  3. After interacting with a website.
  4. Waiting for a file to process.
  5. Waiting for a page to load.


High-DTI:

  1. In the middle of watching a video.
  2. In the middle of typing.
  3. In the middle of transferring a confirmation code.
  4. In the middle of the movement to close the web page.


We tested each of these conditions in connection with an online video categorization task on Amazon Mechanical Turk. A total of 856 Turkers participated.
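To make the deferral idea concrete, below is a minimal TypeScript sketch (illustrative only, not the actual Chrome Cleanup Tool implementation or our experimental code) of how a browser client might hold back a non-urgent message until one of the low-DTI moments listed above, such as after a video ends or after the page finishes loading. All names in the sketch are hypothetical.

```typescript
// Hypothetical sketch: defer a non-urgent security message to a low-DTI moment.
type ShowMessage = () => void;

class DeferredMessage {
  private shown = false;

  constructor(private show: ShowMessage) {}

  // Call whenever a low-DTI moment is observed (video ended, page loaded, etc.).
  // The message is displayed at most once, at the first such moment.
  notifyLowDtiMoment(): void {
    if (!this.shown) {
      this.shown = true;
      this.show();
    }
  }
}

// Wire the deferred message up to a couple of the low-DTI moments listed above.
function scheduleAtLowDtiMoments(message: DeferredMessage): void {
  // After a video finishes playing.
  document.querySelectorAll("video").forEach((video) =>
    video.addEventListener("ended", () => message.notifyLowDtiMoment())
  );
  // After the page has finished loading.
  window.addEventListener("load", () => message.notifyLowDtiMoment());
}

// Usage (illustrative): show a placeholder prompt at the first low-DTI moment.
const cleanupPrompt = new DeferredMessage(() =>
  console.log("Display the (hypothetical) cleanup prompt now.")
);
scheduleAtLowDtiMoments(cleanupPrompt);
```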

The results were dramatic. Finessing the timing of when the CCT was displayed reduced the rate at which users disregarded it from 80% at high-DTI times to 36% at low-DTI times (see Table 1 below).

Table 1. Percentage of Security Message Disregard for high- and low-DTI experimental conditions.
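For readers who want to sanity-check a gap of this size, the small helper below computes a standard two-proportion z statistic. The group sizes in the example call are hypothetical placeholders, not the actual per-condition cell counts from the experiment.

```typescript
// Standard two-proportion z-test (textbook formula; not code from the paper).
// k1/n1 and k2/n2 are disregard counts and group sizes for two conditions.
function proportionZTest(k1: number, n1: number, k2: number, n2: number): number {
  const p1 = k1 / n1;
  const p2 = k2 / n2;
  const pooled = (k1 + k2) / (n1 + n2);
  const se = Math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2));
  return (p1 - p2) / se; // compare |z| against the normal distribution
}

// Illustrative only: 80% vs. 36% disregard with made-up group sizes of 100 each.
const z = proportionZTest(80, 100, 36, 100);
console.log(`z = ${z.toFixed(2)}`); // a large |z| suggests the gap is unlikely to be chance
```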

Finally, we show how mouse-cursor tracking and psychometric measures can be used to validate low-DTI times for displaying security messages in other software applications and contexts.
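As a rough illustration of the cursor-tracking idea (the paper describes our actual instrumentation and psychometric measures), the sketch below flags a candidate low-DTI moment when the cursor has been active recently but then goes idle. The idle threshold and the callback are hypothetical, not values from the study.

```typescript
// Rough illustration: flag a candidate low-DTI moment when the cursor has been
// active recently but has now been idle for a short window. The idleMs threshold
// and the callback are hypothetical, not values from the paper.
function watchForIdleCursor(onLowDti: () => void, idleMs = 2000): void {
  let lastMove = Date.now();
  let sawActivity = false;

  document.addEventListener("mousemove", () => {
    lastMove = Date.now();
    sawActivity = true;
  });

  const timer = setInterval(() => {
    if (sawActivity && Date.now() - lastMove >= idleMs) {
      clearInterval(timer); // report only the first qualifying moment
      onLowDti();
    }
  }, 250);
}

// Usage (illustrative): log when a candidate low-DTI moment is detected.
watchForIdleCursor(() => console.log("Cursor idle: candidate low-DTI moment"));
```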

Together, our findings show that the timing of when security messages are displayed makes a substantial difference in how users respond to them. Many security messages are urgent and cannot be delayed (e.g., browser malware warnings). However, for security messages that are not tied to an immediate threat (like the CCT), choosing display times that respect users' limited cognitive resources can significantly improve their effectiveness.

Acknowledgements:

We thank Elisabeth Morant, Adrienne Porter Felt, and Robert Shield of Google, Inc. for their collaboration on the Google Chrome Cleanup Tool experiment.

From the abstract:

System-generated alerts are ubiquitous in personal computing and, with the proliferation of mobile devices, daily activity. While these interruptions provide timely information, research shows they come at a high cost in terms of increased stress and decreased productivity. This is due to dual-task interference (DTI), a cognitive limitation in which even simple tasks cannot be simultaneously performed without significant performance loss. Although previous research has examined how DTI impacts the performance of a primary task (the task that was interrupted), no research has examined the effect of DTI on the interrupting task. This is an important gap because in many contexts, failing to heed an alert—the interruption itself—can introduce critical vulnerabilities.

Using security messages as our context, we address this gap by using functional magnetic resonance imaging (fMRI) to explore how (1) DTI occurs in the brain in response to interruptive alerts, (2) DTI influences security message disregard, and (3) the effects of DTI can be mitigated by finessing the timing of the interruption. We show that neural activation is substantially reduced under a condition of high DTI, and the degree of reduction in turn significantly predicts security message disregard. Interestingly, we show that when a message immediately follows a primary task, neural activity in the medial temporal lobe is comparable to when attending to the message is the only task.

Further, we apply these findings in an online behavioral experiment in the context of a web-browser warning. We demonstrate a practical way to mitigate the DTI effect by presenting the warning at low-DTI times, and show how mouse cursor-tracking and psychometric measures can be used to validate low-DTI times in other contexts.

Our findings suggest that although alerts are pervasive in personal computing, they should be bounded in their presentation. The timing of interruptions strongly influences the occurrence of DTI in the brain, which in turn substantially impacts alert disregard. This paper provides a theoretically-grounded, cost-effective approach to reduce the effects of DTI for a wide variety of interruptive messages that are important but do not require immediate attention.

Article Download

Download a PDF of the article here.

Interview on BYU Radio

Bonnie Anderson was interviewed on BYU Radio’s Top of Mind With Julie Rose program about the Neurosecurity Lab’s research. They discussed specifically why and how we are doing our research, what our findings are, and how we are working with Google and others to implement improved security message design.

From Julie Rose’s introduction:

Improving our security online is a $67-billion-a-year business. It’s huge. And yet, what’s your instinct when you’re surfing the web and a little window pops up warning that you could be at risk? Most of us hit ignore and move on. We, the human users, are the weak link in internet security. But it’s not all our fault. Studies conducted in the neurosecurity lab here at BYU show our biology deserves some of the blame, too.

You can listen to their 17-minute conversation here.