The number of smartphone users worldwide was expected to surpass 2 billion in 2016. To protect personal and other sensitive information from unauthorized access, some smartphone users lock their phones. Others, however, do not, putting at risk the data and online services accessible through their devices. The risks emanate both from device thieves and from people in the users’ social circles, so-called social insiders. In 2014, 2.1 million Americans (under 2%) had their phones stolen.
While the threat that social insiders pose to smartphone users has been under-appreciated by the research community, there is a growing body of evidence that it cannot be ignored any longer. A recent privacy-preserving survey suggests that 20% of US adults snooped on at least one other person’s phone during the year preceding the study.
In this talk, I present LERSSE research on unauthorized physical access to smartphones. In particular, I discuss users’ concerns about unauthorized access to their devices and their use of locking mechanisms and of the devices themselves, and I examine the differences that recent advances in smartphone locking make.
See presentation slides and the corresponding papers for more details:
Current smartphone operating systems regulate application permissions by prompting users on an ask-on-first-use basis. Prior research has shown that this method is ineffective because it fails to account for context: the circumstances under which an application first requests access to data may be vastly different from the circumstances under which it subsequently requests access. LERSSE’s Primal is leading a research collaboration with UC Berkeley, in which a longitudinal 131-person field study was performed to analyze the contextuality behind users’ privacy decisions to regulate access to sensitive resources. We built a classifier that makes privacy decisions on the user’s behalf by detecting when context has changed and, when necessary, inferring privacy preferences based on the user’s past decisions and behavior. Our goal is to automatically grant appropriate resource requests without further user intervention, deny inappropriate requests, and prompt the user only when the system is uncertain of the user’s preferences. We show that our approach can accurately predict users’ privacy decisions 96.8% of the time, a four-fold reduction in error rate compared to current systems.
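The grant/deny/prompt policy described above can be sketched in a few lines. This is an illustrative toy, not the study’s actual classifier: the context features and the confidence threshold are hypothetical stand-ins, and the real system infers preferences from richer behavioral signals.

```python
def decide(history, app, permission, context, threshold=0.8):
    """Return 'grant', 'deny', or 'prompt' for a resource request.

    history maps (app, permission, context) -> list of past bool
    decisions (True = user allowed). When past behavior in this exact
    context is consistent enough, act automatically; otherwise ask.
    All names here are hypothetical, for illustration only.
    """
    past = history.get((app, permission, context), [])
    if not past:
        return "prompt"              # no evidence yet: ask the user
    allow_rate = sum(past) / len(past)
    if allow_rate >= threshold:
        return "grant"               # user consistently allowed this
    if allow_rate <= 1 - threshold:
        return "deny"                # user consistently denied this
    return "prompt"                  # mixed signals: uncertain

# Hypothetical decision history: the same app and permission, but in
# two different contexts (app visible vs. running in the background).
history = {
    ("maps", "location", "foreground"): [True, True, True],
    ("maps", "location", "background"): [False, False],
}
print(decide(history, "maps", "location", "foreground"))  # grant
print(decide(history, "maps", "location", "background"))  # deny
print(decide(history, "game", "contacts", "foreground"))  # prompt
```

The point of the sketch is the asymmetry with ask-on-first-use: the decision key includes the context, so the same app/permission pair can be auto-granted in one context and auto-denied in another, with prompts reserved for genuinely uncertain cases.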
This paper reports on why people use, do not use, or have stopped using mobile tap-and-pay in stores. The results of our online survey with 349 Apple Pay and 511 Android Pay participants suggest that the top reason for using mobile tap-and-pay is usability. Surprisingly, for non-users of Apple Pay, security was the biggest concern. A common security misconception among the non-users who stated security as their biggest concern was the belief that storing card information on their phones is less secure than physically carrying cards in their wallets. Our security knowledge questions revealed that such participants lack knowledge of the security mechanisms used to protect card information. We also found a positive correlation between participants’ familiarity with the security of mobile tap-and-pay and their adoption rate, suggesting that participants who are more knowledgeable about the security protections in place are more likely to be using the technology.
Facebook accounts are secured against unauthorized access through passwords and device-level security. These, however, may not be sufficient to prevent social insider attacks, in which attackers know their victims and gain access to a victim’s account by interacting directly with the victim’s device. To characterize these attacks, we ran two MTurk studies. In the first study (n = 1,308), using the list experiment method, we estimated that 24% of participants had perpetrated social insider attacks and that 21% had been victims (and knew about it). In the second study (n = 45), participants wrote stories detailing personal experiences with such attacks. Using thematic analysis, we typified attacks around five motivations (fun, curiosity, jealousy, animosity, and utility), and explored the dimensions associated with each type. Our combined findings indicate that social insider attacks are common, often have serious emotional consequences, and have no simple mitigation.
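The list experiment method mentioned above estimates the prevalence of a sensitive behavior without any respondent having to admit it directly: a control group reports how many items on a list of innocuous statements apply to them, a treatment group gets the same list plus the sensitive item, and the difference in mean counts estimates prevalence. A minimal sketch of the estimator, with made-up counts rather than data from the study:

```python
def list_experiment_estimate(control_counts, treatment_counts):
    """Estimate prevalence as mean(treatment) - mean(control).

    control_counts: per-respondent counts over N innocuous items.
    treatment_counts: counts over the same N items plus the
    sensitive item. Respondents never reveal which items applied,
    only how many, which preserves their anonymity.
    """
    mean = lambda xs: sum(xs) / len(xs)
    return mean(treatment_counts) - mean(control_counts)

control = [2, 1, 3, 2, 2]     # hypothetical responses, 4-item list
treatment = [3, 2, 3, 2, 3]   # same list + the sensitive item
print(round(list_experiment_estimate(control, treatment), 2))  # 0.6
```

With random assignment to groups, the innocuous items contribute the same expected count to both means, so the difference isolates the fraction of the treatment group to whom the sensitive item applies.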
A common security practice used to deal with a password breach is locking user accounts and sending out an email telling users that they need to reset their password to unlock their account. This paper evaluates the effectiveness of this security practice based on the password reset email that LinkedIn sent out around May 2016, through an online survey conducted with 249 LinkedIn users who received that email. Our evaluation shows that only about 46% of the participants reset their passwords.
The mean time to reset a password was 26.3 days, revealing that a significant proportion of the participants reset their passwords weeks, or even months, after first receiving the email. Our findings suggest that more effective persuasive measures are needed to convince users to reset their passwords in a timely manner and to further reduce the risks associated with delayed password resets.
The orthodox paradigm for defending against automated social-engineering attacks in large-scale socio-technical systems is reactive and victim-agnostic. Defenses generally focus on identifying the attacks or attackers (e.g., phishing emails, social-bot infiltrations, malware offered for download). To change the status quo, we propose in our paper presented at NSPW ’16 to identify, even if imperfectly, the vulnerable user population, that is, the users who are likely to fall victim to such attacks. Once identified, information about the vulnerable population can be used in two ways. First, the vulnerable population can be influenced by the defender through several means, including education, specialized user experiences, extra protection layers, and watchdogs. In the same vein, information about the vulnerable population can be used to fine-tune and reprioritize defense mechanisms to offer differentiated protection, possibly at the cost of additional friction generated by the defense mechanism. Second, information about the user population can be used to identify an attack (or compromised users) based on differences between the general and the vulnerable populations.
Our SOUPS ’16 paper on the prevalence of snooping on mobile phones has received a Distinguished Paper Award. The paper reports a series of quantitative studies that allowed a more accurate measurement of this phenomenon. The study was led by our collaborators at the University of Lisbon and was inspired by our previous study presented at Mobile CHI ’13.
Through an anonymity-preserving survey experiment, we quantify the pervasiveness of snooping attacks, defined as “looking through someone else’s phone without their permission.” We estimated the 1-year prevalence to be 31% in an online participant pool. Weighted to the U.S. population, the data indicates that 1 in 5 adults snooped on at least one other person’s phone, just in the year before the survey was conducted. We found snooping attacks to be especially prevalent among young people, and among those who are themselves smartphone users. In a follow-up study, we found that, among smartphone users, depth of adoption, like age, also predicts the probability of engaging in snooping attacks. In particular, the more people use their devices for personal purposes, the more likely they are to snoop on others, possibly because they become aware of the sensitive information that is kept, and how to access it. These findings suggest that, all else remaining equal, the prevalence of snooping attacks may grow as more people adopt smartphones, and motivate further effort into improving defenses.
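The gap between the 31% estimate in the online pool and the 1-in-5 figure for the U.S. population comes from reweighting: online panels over-represent some demographic groups, so stratum-level estimates are recombined using the strata’s shares of the target population. A minimal post-stratification sketch, with hypothetical shares and rates rather than the study’s data:

```python
def weighted_prevalence(strata):
    """Post-stratification: combine per-stratum rates using each
    stratum's share of the target population.

    strata: list of (population_share, observed_rate) pairs, one
    per demographic stratum; shares must sum to 1.
    """
    assert abs(sum(share for share, _ in strata) - 1.0) < 1e-9
    return sum(share * rate for share, rate in strata)

# Hypothetical age strata: younger respondents snoop more but are
# over-represented in online panels, so reweighting by population
# shares pulls the raw online estimate down.
strata = [
    (0.30, 0.40),  # 18-34: 30% of population, 40% snooped
    (0.45, 0.15),  # 35-54: 45% of population, 15% snooped
    (0.25, 0.05),  # 55+:   25% of population,  5% snooped
]
print(round(weighted_prevalence(strata), 3))  # 0.2
```

The same mechanics generalize to weighting on several demographics at once (age, gender, region), with one stratum per combination of categories.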
Motivated by the benefits, people have used a variety of web-based services to share health information (HI) online. Among these services, Facebook, which enjoys the largest population of active subscribers, has become a common place for sharing various types of HI.
At the same time, Facebook has been shown to be vulnerable to various attacks, resulting in unintended information disclosure, privacy invasion, and information misuse. As such, Facebook users face a dilemma between benefiting from HI sharing and risking their privacy. In this SOUPS ’16 paper, we report our investigation of HI sharing practices, preferences, and risk perceptions among US Facebook users. We interviewed 21 participants with chronic health conditions to identify the key factors that influence users’ motivation to share HI on Facebook. We then conducted an online survey with 492 Facebook users in order to validate, refine, and extend our findings. While some factors related to sharing HI had been identified in the literature, we provide a deeper understanding of the main factors that influenced users’ motivation to share HI on Facebook. The results suggest that the benefits gained from prior HI sharing experiences, and users’ overall attitudes toward privacy, correlate with their motivation to disclose HI. Furthermore, we identify other factors, specifically users’ perceived health and the audience of the shared HI, that appear to be linked with users’ motivation to share HI. Finally, we suggest design improvements, such as anonymous identities as well as search and recommendation features, for facilitating HI sharing on Facebook and similar sites.
This paper reports on the design and development of a mobile game prototype that serves as an educational tool helping computer users protect themselves against phishing attacks.
The elements of a game design framework for avoiding phishing attacks were used to address the game design issues. Our mobile game design aimed to enhance users’ avoidance behaviour by motivating them to protect themselves against phishing threats. A think-aloud study was conducted, along with a pre- and post-test, to assess the game design framework through the developed mobile game prototype. The study results showed a significant improvement in participants’ phishing avoidance behaviour in their post-test assessment. Furthermore, the study findings suggest that the threat perception, safeguard effectiveness, self-efficacy, perceived severity, and perceived susceptibility elements positively impact threat avoidance behaviour, whereas safeguard cost had a negative impact on it.