On the vulnerability of Facebook users to social botnets

How likely is a Facebook user to accept a friendship request from a stranger (albeit a pretty or handsome one)? How strongly do the chances of acceptance correlate with the “promiscuity” of the user in terms of FB friends? Can such requests be automated? What can an adversary gain from befriending users?

These and other questions were investigated in a project led by my Ph.D. student Yazan Boshmaf. Preliminary results of this ongoing project will be presented in December at ACSAC. Yazan and Ildar Muslukhov have done cool stuff automating a small but potent “social botnet” that used various heuristics to pass its “bot” profiles off as “real people”, evade FB detection, become friends with hundreds of profiles, and collect the information those “victims” shared with friends only.
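
To make the mechanics concrete, here is a minimal, self-contained sketch (in Python) of the kind of automation loop such a socialbot might run. All names, numbers, and heuristics below are illustrative assumptions on my part, not the actual system described in the paper, which is considerably more elaborate (particularly in how it evades detection):

```python
import random
import time

class BotProfile:
    """A fake profile posing as a real (and attractive) person."""
    def __init__(self, name):
        self.name = name
        self.friends = []

def send_friend_request(bot, target):
    # Stand-in for driving the site's UI to send a friendship request;
    # the 20% acceptance rate is an arbitrary placeholder.
    print(f"{bot.name} -> friendship request to {target}")
    return random.random() < 0.2

def harvest_profile(bot, friend):
    # Stand-in for collecting the friends-only information a victim shares.
    print(f"{bot.name} collects friends-only data of {friend}")

def run_bot(bot, candidates, max_requests=25):
    # Pace and randomize requests so the activity looks human and stays
    # under automated rate-limiting thresholds.
    for target in random.sample(candidates, min(max_requests, len(candidates))):
        if send_friend_request(bot, target):
            bot.friends.append(target)
        time.sleep(random.uniform(0.1, 0.5))  # much longer delays in practice
    for friend in bot.friends:
        harvest_profile(bot, friend)

run_bot(BotProfile("Alice Smith"), [f"user{i}" for i in range(100)])
```

One design point the sketch omits: since friendship requests are more likely to be accepted when the target already shares mutual friends with the bot, a real botmaster would feed earlier acceptances back into target selection.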

The most interesting questions, of why FB users accept friendship requests from strangers and of how technology can help users make informed choices, remain open.

You don’t have to wait until December or come to hot sunny Florida to find out more about this work. Just read the full paper.

Towards Usable Web Single Sign-On

OpenID is an open and promising Web single sign-on (SSO) solution. The research led by my Ph.D. student San-Tsai Sun investigates the challenges and concerns web users face when using OpenID for authentication, and identifies changes to the login flow that could improve users’ experience and adoption incentives. We found that our participants had several behaviors, concerns, and misconceptions that hinder OpenID adoption:

  • their existing password management strategies reduce the perceived usefulness of SSO;
  • many (26%) expressed concerns about single-point-of-failure issues;
  • most (71%) held the incorrect belief that their OpenID credentials are given to the content providers;
  • half were unable to distinguish a fake Google login form, even when prompted;
  • many (40%) were hesitant to consent to the release of their personal profile information;
  • many (36%) expressed concern about using SSO on websites that hold valuable personal information or, conversely, are not trustworthy.

We also found that, with improved affordances and privacy controls, more than 60% of study participants would use Web SSO solutions on the websites they trust.

The paper has been recently presented at SOUPS.

The Lab Study Troubles

Can users’ real behavior, when it comes to security decisions, be observed in lab studies? A recent paper from my research group sheds light on this question.

Initially, our goal was quite different. We replicated and extended a 2008 study conducted at CMU that investigated the effectiveness of SSL warnings. To achieve better ecological validity, we adjusted the experimental design by allowing participants to use their web browser of choice and by recruiting a more representative user sample.

In the end, we found what we were not looking for. During our study, we observed a strong disparity between our participants’ actions during the laboratory tasks and their self-reported “would be” actions during similar tasks in their everyday computing practices. Our participants attributed this disparity to the laboratory environment and the security it offered. In a paper recently presented at SOUPS, we discuss our results and how the changes we introduced to the initial study design may have affected them. We also discuss the challenges of observing natural behavior in a study environment, as well as the challenges of replicating previous studies given the rapid changes in web technology. Finally, we propose alternatives to traditional laboratory study methodologies that the usable security research community can consider when investigating research questions that involve sensitive data and where trust may influence behavior.

See more details in the paper.

Can Metaphors of Physical Security Work for Computers?

There is evidence that the communication of security risks to home computer users has been unsuccessful. Prior research has found that users do not heed risk communications and that they do not read security warning texts, or simply ignore them. Risk communication should convey the basic facts relevant to the warning recipient’s decision. In the warning science literature, one successful technique for characterizing and designing risk communication is the mental models approach, a decision-analytic framework in which the design of risk communication is based on the recipients’ mental model(s). The goal of the framework is to help people make decisions by providing risk communication that improves the recipients’ mental models in one of three ways: (1) adding missing knowledge, (2) restructuring the person’s knowledge when it is inappropriately focused (i.e., too general or too narrow), and (3) removing misconceptions.

The mental models approach has been successfully applied in such areas as medical and environmental risk communications, but not in computer security. Risk communications in computer security have been based on experts’ mental models, which are not good models for typical users. An expert’s mental model of security is different from that of a non-expert. This difference could lead to ineffective risk communications to non-experts. Similarly, Asgharpour et al. (2007) proposed that risk communication methods such as security warnings should be designed based on non-expert mental models and metaphors from the real world, emphasizing that:
“the purpose of risk communication is not conveying the perfect truth, but rather prompting the users to take an appropriate action to defend their system against a certain threat. While mitigation of a risk requires knowledge of the general nature of the risk, efficacy of the risk communication requires communication that is aligned with the mental model of the target group.”

While employing a mental models approach had been previously proposed for computer security warnings, it had not been evaluated. The goal of the research led by my Master’s student Fahimeh Raja was to do exactly that. This work was recently presented at SOUPS.

In this paper, we present our iterative design of a firewall warning using a physical security metaphor, along with a study of the approach’s effectiveness. The warnings visualize the functionality of a personal firewall through the physical metaphor of a firewall: a fireproof wall that “separates the parts of a building most likely to have a fire from the rest of a structure”. The goals of our study were to determine how understandable the warnings are for our participants and how well they convey the risks and encourage safe behavior. We used an open-ended test to evaluate the initial clarity of the warnings, and Likert-type scales, followed by an interview, to evaluate participants’ risk perceptions. We treated the self-reported likelihood of choosing an action as the intention to perform that action.

We compared our warnings with warnings based on those from the Comodo personal firewall. The Comodo firewall is the most popular personal firewall and ranks at the top of online reviews, not only for its protection but also for its “warning features that make it easy for novices to understand how to respond to those warnings”. Our results show that our proposed warnings facilitate comprehension of the warning information.

They also communicated the risk better: with our warnings, participants made better estimates of the level of hazard, the likelihood of damage or loss, and the severity of potential damage or loss. Participants could also better describe the potential consequences of their intended actions. More importantly, our warnings increased the likelihood of safe behavior in response to the warnings. These findings suggest that our use of a physical security metaphor altered the participants’ mental model(s) of the functionality of a personal firewall as it relates to their security and risk. Our warnings were also preferred by the majority of participants.

See more details in the paper.

Heuristics for Evaluating IT Security Management Tools

The usability of IT security management (ITSM) tools is hard to evaluate by regular methods, making heuristic evaluation attractive. However, standard usability heuristics (e.g., Nielsen’s) are hard to apply, as IT security management occurs within a complex and collaborative context that involves diverse stakeholders. In a joint project with CA Technologies, my Ph.D. student Pooya Jaferian has proposed a set of ITSM usability heuristics that are based on activity theory, are supported by prior research, and take into account the complex and cooperative nature of security management. The paper reporting the evaluation of the heuristics received the Best Paper Award at SOUPS ’11.

In a between-subjects study, we compared the use of the ITSM heuristics and Nielsen’s heuristics for evaluating a commercial identity management system. Participants who used the ITSM set found more problems categorized as severe than those who used Nielsen’s. Because evaluators identified different types of problems with the two sets, we recommend employing both the ITSM and Nielsen’s heuristics when evaluating ITSM tools.

See more details of the study and the results in the paper.

Have users signed up?

I participated in a panel, “Password Managers, Single Sign-On, Federated ID: Have users signed up?”, at the Workshop on The Future of User Authentication and Authorization on the Web: Challenges in Current Practice, New Threats, and Research Directions, which was co-located with the conference on Financial Cryptography and Data Security. In my panel presentation, I showed the most recent results of an evaluation, conducted in my lab, of participants’ experience with OpenID authentication, which shed some light on why users have not signed up, at least for OpenID. The apparent reluctance of end users to employ OpenID, despite there being over one billion OpenID-enabled accounts, results from technical, business, and human factors. This short presentation was devoted to the usability factors.

Is OpenID too Open? Technical, Business, and Human Issues That Get in the Way of OpenID and Ways of Addressing Them

The web is essential for business and personal activities well beyond information retrieval, such as online banking, financial transactions, and payment authorization, but reliable user authentication remains a challenge. OpenID is a mainstream Web single sign-on (SSO) solution intended for Internet-scale adoption. There are currently over one billion OpenID-enabled user accounts provided by major content-hosting and service providers (CSPs), e.g., Yahoo!, Google, and Facebook, but only a few relying parties allow users to use their OpenID credentials for SSO. Why is that? I presented at Eurecom an overview of OpenID and then discussed the weaknesses of (1) the protocol and its implementations, (2) the business model behind it, and (3) the user interface. The talk concluded with a discussion of a proposal for addressing some of OpenID’s issues.
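
For readers who have not seen the protocol’s moving parts, here is a minimal sketch (in Python, with purely illustrative URLs and names) of the first step of an OpenID 2.0 login: the relying party redirecting the user’s browser to the identity provider with a checkid_setup request. The parameter names follow the OpenID 2.0 specification; a real relying party would also perform discovery on the user’s identifier and verify the signed response when the user returns:

```python
from urllib.parse import urlencode

OPENID_NS = "http://specs.openid.net/auth/2.0"

def build_auth_redirect(op_endpoint, claimed_id, return_to, realm):
    """Build the URL to which the relying party redirects the user's browser."""
    params = {
        "openid.ns": OPENID_NS,
        "openid.mode": "checkid_setup",  # interactive, browser-based authentication
        "openid.claimed_id": claimed_id,
        "openid.identity": claimed_id,
        "openid.return_to": return_to,   # where the provider sends the user back
        "openid.realm": realm,           # the relying party's "trust root" shown to the user
    }
    return op_endpoint + "?" + urlencode(params)

# Illustrative values only; real endpoints are found via discovery.
print(build_auth_redirect(
    op_endpoint="https://openid-provider.example.com/auth",
    claimed_id="https://alice.example.org/",
    return_to="https://rp.example.com/openid/return",
    realm="https://rp.example.com/",
))
```

Notably, the browser redirect at the heart of this flow is exactly what enables the user-interface weaknesses discussed above: the user lands on a login page whose authenticity they must judge on their own, which is why a fake provider login form is so hard to spot.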

See presentation slides for more details.

CHI Work in Progress to Feature LERSSE Research

This year in Vancouver, the Work-in-Progress posters session of the SIGCHI Conference will feature three research projects by my graduate students.

San-Tsai Sun and his teammates will present the results of their investigation into the challenges web users face when using OpenID for authentication. They also designed a phishing-resistant, privacy-preserving browser add-on that provides a consistent and intuitive single sign-on user experience for average web users: OpenID-Enabled Browser: Towards Usable and Secure Web Single Sign-On.

Pooya Jaferian and Andreas Sotirakopoulos will present Heuristics for Evaluating IT Security Management Tools. The usability of IT security management (ITSM) tools is hard to evaluate by regular methods, making heuristic evaluation attractive. However, ITSM occurs within a complex and collaborative context that involves diverse stakeholders; this makes standard usability heuristics difficult to apply. We propose a set of ITSM usability heuristics that are based on activity theory and supported by prior research. We performed a study to compare the use of the ITSM heuristics to Nielsen’s heuristics for the evaluation of a commercial identity management system. Our preliminary results show that our new ITSM heuristics performed well in finding usability problems. However, we need to perform the study with more participants and perform more detailed analysis to precisely show the differences in applying the ITSM heuristics as compared to Nielsen’s heuristics.

Fahimeh Raja will present her research on Promoting A Physical Security Mental Model For Personal Firewall Warnings. We used an iterative process to design personal firewall warnings in which the functionality of a firewall is visualized based on a physical security mental model. We performed a study to determine the degree to which our proposed warnings are understandable for our participants, and the degree to which they convey the risks and encourage safe behavior as compared to warnings based on those from a popular personal firewall. Initial results show that our warnings facilitate the comprehension of warning information, better communicate risk, and increase the likelihood of safe behavior. Moreover, they provided participants with a better understanding of both the functionality of a personal firewall and the consequences of their actions.

My former postdoc Kirstie Hawkey has been involved in all of the above projects.

Undergrad Security Course Features Cool Projects

Students in my undergraduate computer security course completed several excellent projects. You can watch video clips of the projects or read their reports.

http://www.youtube.com/view_play_list?p=ABEF30FCC4453A52

I would particularly like to mention the following projects:

Great job, guys!

Lessons learned from studying users’ mental models of security

Over the past three years at LERSSE, we have done several studies that furthered our understanding of users’ mental models when it comes to security. A mental model is “an abstraction of system’s architecture and software structures that is simple enough for non-technical users to grasp. . . It provides an integrated package of knowledge that allows the user to predict what the system will do if certain commands are executed, to predict the state of the system after the commands have been executed, to plan methods for novel tasks, and to deal with odd error situations” (Card and Moran, 1986). Adequate mental models of security controls are critical for computer users to avoid dangerous errors. Yet, security controls and their interfaces are hard to design in a way that helps users develop and maintain adequate mental models.

Findings from our projects led us to the following lessons:

  • users develop and maintain their mental models (mostly) through UI
  • users’ mental models are quite adaptive, changing sometimes as quickly as the system interface
  • “automating away” security can lead to inadequate mental models and dangerous errors
  • adequacy of mental models, not just UIs, has to be tested
  • security UIs must be consistent and users need to be made aware of the consistency if they are expected to notice inconsistencies
  • combining UIs for existing and new security functions can lead to unexpected mental models

You can find more details in the talk I gave recently at Microsoft Research on users’ mental models of security. I discussed the projects in which we either intentionally studied users’ mental models of security controls or ended up stumbling upon them (or parts of them) by accident. Specifically, I focused on our studies of the Vista personal firewall, the UAC prompt, and web authentication with OpenID.