News

The Devil is in the (Implementation) Details

It’s hard to get a security protocol right. It seems even harder to get its implementations right, even more so when millions use it on a daily basis. LERSSE’s San-Tsai Sun will present at ACM CCS this October several critical vulnerabilities he has uncovered in implementations of OAuth 2.0, as used by Facebook, Microsoft, Google, and many other identity providers (IdPs) and relying parties (RPs). These vulnerabilities allow an attacker to gain unauthorized access to the victim user’s profile and social graph, and to impersonate the victim on the RP website. Closer examination reveals that these vulnerabilities are caused by a set of design decisions that trade security for implementation simplicity. To improve the security of OAuth 2.0 SSO systems in real-world settings, we suggest simple and practical improvements to the design and implementation of IdPs and RPs that can be adopted gradually by individual sites.
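To give a flavor of the kind of RP-side hardening involved, here is a minimal sketch (my illustration, not code from the paper; the IdP endpoint, client ID, and redirect URI are placeholders) of a relying party binding each OAuth 2.0 authorization request to the browser session via the optional state parameter and rejecting callbacks that don’t carry it back:

```python
# Hypothetical sketch of an OAuth 2.0 RP using the "state" parameter to bind
# the authorization flow to the browser session. Endpoint URLs, client_id,
# and redirect_uri are placeholders, not any real provider's values.

import secrets
from urllib.parse import urlencode

from flask import Flask, abort, redirect, request, session

app = Flask(__name__)
app.secret_key = secrets.token_bytes(32)  # needed for session cookies

AUTHZ_ENDPOINT = "https://idp.example.com/oauth/authorize"  # placeholder
CLIENT_ID = "example-rp-client-id"                          # placeholder
REDIRECT_URI = "https://rp.example.com/callback"            # placeholder


@app.route("/login")
def login():
    # Fresh, unguessable state value stored in this user's session.
    state = secrets.token_urlsafe(32)
    session["oauth_state"] = state
    params = urlencode({
        "response_type": "code",
        "client_id": CLIENT_ID,
        "redirect_uri": REDIRECT_URI,
        "state": state,
    })
    return redirect(f"{AUTHZ_ENDPOINT}?{params}")


@app.route("/callback")
def callback():
    # Reject any callback whose state does not match the value issued to
    # this browser session; this blocks CSRF-style code/token injection.
    expected = session.pop("oauth_state", None)
    received = request.args.get("state", "")
    if expected is None or not secrets.compare_digest(expected, received):
        abort(403)
    authz_code = request.args.get("code")
    # ... exchange authz_code for an access token server-to-server over TLS ...
    return "Signed in"
```

Because a check like this is local to the RP, it can be rolled out by individual sites without coordinating with the IdP, which is in the spirit of the gradual adoption the paragraph above mentions.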

See the paper for details.

Systematically breaking and fixing OpenID security

Do you use OpenID? I bet you do, even if you don’t know it. OpenID 2.0 is a user-centric Web single sign-on protocol with over one billion OpenID-enabled user accounts and tens of thousands of supporting websites. Clearly, the security of this protocol is critical! Yet, so far its security has been analyzed only in a partial and ad-hoc manner. LERSSE Ph.D. candidate San-Tsai Sun performed a systematic analysis of the protocol using both formal model checking and an empirical evaluation of 132 popular websites that support OpenID. Our formal analysis revealed that the protocol does not guarantee the authenticity and integrity of the authentication request, and that it lacks contextual bindings among the protocol messages and the browser. The results of our empirical evaluation suggest that many OpenID-enabled websites are vulnerable to a series of cross-site request forgery (CSRF) attacks that either allow an attacker to stealthily force a victim user to sign into the OpenID-supporting website and launch subsequent CSRF attacks (81%), or force a victim to sign in as the attacker in order to spoof the victim’s personal information (77%). With additional capabilities (e.g., controlling a wireless access point), the adversary can impersonate the victim on 80% of the evaluated websites, and manipulate the victim’s profile attributes by forging the extension parameters on 45% of those sites. Based on the insights from this analysis, we propose and evaluate a simple and scalable mitigation technique for OpenID-enabled websites, and an alternative man-in-the-middle defense mechanism for deployments of OpenID without SSL.
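To make the “contextual binding” idea concrete, here is a minimal sketch (an illustration under assumed details, not necessarily the mitigation evaluated in the paper) of an RP binding the OpenID return_to URL to the browser session, so that an assertion delivered to a different browser session, for example via CSRF, is rejected:

```python
# Hypothetical sketch: bind the OpenID response to the browser session by
# embedding a session-bound nonce in return_to and verifying it on return.
# The RP secret and return URL below are placeholders.

import hashlib
import hmac
import secrets
from urllib.parse import parse_qs, urlencode, urlparse

RP_SECRET = secrets.token_bytes(32)  # per-deployment secret (placeholder)
RETURN_BASE = "https://rp.example.com/openid/return"  # placeholder


def bound_return_to(session_id: str) -> str:
    # The nonce is an HMAC over the session identifier, so it cannot be
    # guessed or replayed against a different browser session.
    nonce = hmac.new(RP_SECRET, session_id.encode(), hashlib.sha256).hexdigest()
    return f"{RETURN_BASE}?{urlencode({'rp_nonce': nonce})}"


def verify_return_to(return_to_url: str, session_id: str) -> bool:
    # Recompute the expected nonce for the current session and compare it
    # in constant time before accepting the positive assertion.
    expected = hmac.new(RP_SECRET, session_id.encode(), hashlib.sha256).hexdigest()
    received = parse_qs(urlparse(return_to_url).query).get("rp_nonce", [""])[0]
    return hmac.compare_digest(expected, received)


# Usage: the RP places bound_return_to(session_id) in the authentication
# request's openid.return_to field, and calls verify_return_to() on the URL
# the browser comes back to before signing the user in.
```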
Read more in the paper.

On vulnerability of Facebook users to social botnets

How likely is a Facebook user to accept a friendship request from a stranger (albeit a pretty/handsome one)? How much do such chances correlate with the “promiscuity” of the user in terms of FB friends? Can such requests be automated? What can an adversary gain from befriending users?
These and other questions were investigated in a project led by my Ph.D. student Yazan Boshmaf. Preliminary results of this ongoing project will be presented in December at ACSAC. Yazan and Ildar Muslukhov have done cool stuff with automating a small but potent “social botnet” that used various heuristics to pose its “bot” profiles as “real people,” evade FB detection, become friends with hundreds of profiles, and collect the information those “victims” shared with friends only.
The most interesting questions, namely why FB users accept friendship requests from strangers and how technology can help users make informed choices, remain open.
You don’t have to wait until December or travel to hot, sunny Florida to find out more about this work. Just read the full paper.

Towards Usable Web Single Sign-On

OpenID is an open and promising Web single sign-on (SSO) solution. The research led by my Ph.D. student San-Tsai Sun investigates the challenges and concerns web users face when using OpenID for authentication, and identifies what changes in the login flow could improve the users’ experience and adoption incentives. We found our participants had several behaviors, concerns, and misconceptions that hinder the OpenID adoption process: (1) their existing password management strategies reduce the perceived usefulness of SSO; (2) many (26%) expressed concerns with single-point-of-failure related issues; (3) most (71%) held the incorrect belief that the OpenID credentials are being given to the content providers; (4) half exhibited an inability to distinguish a fake Google login form, even when prompted; (5) many (40%) were hesitant to consent to the release of their personal profile information; and (6) many (36%) expressed concern with the use of SSO on websites that contain valuable personal information or, conversely, are not trustworthy. We also found that with an improved affordance and privacy control, more than 60% of study participants would use Web SSO solutions on the websites they trust.

The paper has been recently presented at SOUPS.

The Lab Study Troubles

Can real behavior of users, when it comes to security decisions, be observed in lab studies? A recent paper from my research group sheds light on this question.

Initially, our goal was quite different. We replicated and extended a 2008 study conducted at CMU that investigated the effectiveness of SSL warnings. To achieve better ecological validity, we adjusted the experimental design: we allowed participants to use their web browser of choice and recruited a more representative user sample.
In the end, we found what we were not looking for. During our study we observed a strong disparity between our participants’ actions during the laboratory tasks and their self-reported “would be” actions during similar tasks in everyday computing practices. Our participants attributed this disparity to the laboratory environment and the security it offered. In a paper recently presented at SOUPS we discuss our results and how the changes we introduced to the initial study design may have affected them. We also discuss the challenges of observing natural behavior in a study environment, as well as the challenges of replicating previous studies given the rapid changes in web technology. Finally, we propose alternatives to traditional laboratory study methodologies that the usable security research community can consider when investigating research questions involving sensitive data, where trust may influence behavior.
See more details in the paper.