
USEC 2014

Workshop on Usable Security
23rd February 2014
Co-located with NDSS
http://www.usecap.org/usec14.html


Program


8:45 am - 9:00 am -- Opening Remarks
Welcome Message from the chairs, David Wagner and Matthew Smith.
9:00 am - 10:00 am -- Keynote
Speaker: Serge Egelman, UC Berkeley.

When Everyone's A Cyborg: Privacy and Security in The Age of Wearable Computing

While the "wearable computer" started as an experimental research prototype in the late 1960s, the recent demand for devices like Google Glass, smart watches, and wearable fitness monitors suggests that wearable computers may soon become as ubiquitous as cellphones. These devices offer many benefits to end-users in terms of real-time access to information and the augmentation of human memory, but they are also likely to introduce new and complex privacy and security problems. In this talk, I will discuss how wearable computing will pose several unique challenges and opportunities for usable security researchers. The continuous capture of audio and video will be a critical enabler of many use cases, while also opening up new attack vectors and concerns about user privacy. Thus, we find ourselves at the ideal time to be experimenting on these devices: their widespread adoption is imminent, yet there is still ample opportunity for platforms to integrate research findings.
10:00 am - 10:30 am -- Break
10:30 am - 12:10 pm -- Developer USEC and Public Policy

Shubham Jain (Rutgers University), Janne Lindqvist (Rutgers University)

Abstract: There have been many proposals and developments to improve smartphone users' location privacy with respect to mobile applications. These include user-centric application permission models and disclosures. However, little attention has been paid to how application developers could build privacy-preserving apps. In this paper, we present a laboratory study (N=25) to understand developers' behavior towards enhanced APIs, which are a privacy-preserving modification of the existing Android Location API. In contrast to the existing methods, the studied API facilitates acquiring coarse location information without accessing the geocoordinates. Our results indicate that by offering a redesigned API, programmers can be nudged into making choices that help to preserve the privacy of their users.

Rebecca Balebako (Carnegie Mellon University), Abigail Marsh (Carnegie Mellon University), Jialiu Lin (Carnegie Mellon University), Jason Hong (Carnegie Mellon University), Lorrie Faith Cranor (Carnegie Mellon University)

Abstract: Smartphone app developers have to make many privacy-related decisions about what data to collect about end-users, and how that data is used. We explore how app developers make decisions about privacy and security. Additionally, we examine whether any privacy and security behaviors are related to characteristics of the app development companies. We conduct a series of interviews with 13 app developers to obtain rich qualitative information about privacy and security decision-making. We use an online survey of 228 app developers to quantify behaviors and test our hypotheses about the relationship between privacy and security behaviors and company characteristics. We find that smaller companies are less likely to demonstrate positive privacy and security behaviors. Additionally, although third-party tools for ads and analytics are pervasive, developers are not aware of the data collected by these tools. We suggest tools and opportunities to reduce the barriers for app developers to implement privacy and security best practices.

Rebecca Balebako (Carnegie Mellon University), Rich Shay (Carnegie Mellon University), Lorrie Faith Cranor (Carnegie Mellon University)

Abstract: In this paper, we present a case study of applying usable privacy methodologies to inform debate regarding a multi-stakeholder public policy decision. In particular, the National Telecommunications and Information Administration (NTIA) relied on a multi-stakeholder process to define a set of categories for short-form privacy notices on mobile devices. These notices are intended for use in a United States national code of conduct to assist mobile device users in making decisions regarding data collection. We describe, specifically, a 791-participant online study to determine whether users consistently understand these proposed categories and their definitions. We found that many users did not understand the terms in our usability study. The heart of our contribution, however, is a case study of our participation in this process as academic usable privacy and security experts, and a presentation of lessons learned regarding the application of usable privacy and security methodology to public policy discussion. We believe this work is valuable to usable privacy and security researchers wishing to affect public policy.

Iacovos Kirlappos (University College London), Simon Parkin (University College London), M Angela Sasse (University College London)

Abstract: Over the past decade, security researchers and practitioners have tried to understand why employees do not comply with organizational security policies and mechanisms. Past research has treated compliance as a binary decision: people comply, or they do not. From our analysis of 118 in-depth interviews with individuals (employees in a large multinational organization) about security non-compliance, a third response emerges: shadow security. This describes the instances where security-conscious employees who think they cannot comply with the prescribed security policy create a more fitting alternative to the policies and mechanisms created by the organization's official security staff. These workarounds are usually not visible to official security and higher management – hence 'shadow security'. They may not be as secure as the 'official' policy would be in theory, but they reflect the best compromise staff can find between getting the job done and managing the risks to the assets they understand. We conclude that rather than trying to 'stamp out' shadow security practices, organizations should learn from them: they provide a starting point for 'workable' security: solutions that offer effective security and fit with the organization's business, rather than impede it.

Emiliano De Cristofaro (University College London), Honglu Du (PARC), Julien Freudiger (PARC), Greg Norcie (Indiana University)

Abstract: Two-factor authentication (2F) aims to enhance resilience of password-based authentication by requiring users to provide an additional authentication factor, e.g., a code generated by a security token. It also introduces non-negligible costs for service providers and requires users to carry out additional actions during the authentication process. In this paper, we present an exploratory comparative study of the usability of 2F technologies. First, we conduct a pre-study interview to identify popular technologies as well as contexts and motivations in which they are used. We then present the results of a quantitative study based on a survey completed by 219 Mechanical Turk users, aiming to measure the usability of three popular 2F solutions: codes generated by security tokens, one-time PINs received via email or SMS, and dedicated smartphone apps (e.g., Google Authenticator). We record contexts and motivations, and study their impact on perceived usability. We find that 2F technologies are overall perceived as usable, regardless of motivation and/or context of use. We also present an exploratory factor analysis, highlighting that three metrics -- ease-of-use, required cognitive efforts, and trustworthiness -- are enough to capture key factors affecting 2F usability.

12:10 pm - 1:10 pm -- Lunch Break
1:10 pm - 2:30 pm -- Access Control and Authentication

Mainack Mondal (MPI-SWS), Peter Druschel (MPI-SWS), Krishna P. Gummadi (MPI-SWS), Alan Mislove (Northeastern University)

Abstract: We posit that access control, the dominant model for modeling and managing privacy in today's online world, is fundamentally inadequate. First, with access control, users must a priori specify precisely who can or cannot access information by enumerating users, groups, or roles—a task that is difficult to get right. Second, access control fails to separate who can access information from who actually does, because it ignores the difficulty of finding information. Third, access control does not capture if and how a person who has access to some information redistributes that information. Fourth, access control fails to account for information that can be inferred from other, public information. We present exposure as an alternate model for information privacy; exposure captures the set of people expected to learn an item of information eventually. We believe the model takes an important step towards enabling users to model and control their privacy effectively.

Ero Balsa (KU Leuven), Laura Brandimarte (Carnegie Mellon University), Alessandro Acquisti (Carnegie Mellon University), Claudia Diaz (KU Leuven), Seda Gürses (New York University)

Abstract: Cryptographic access control tools for online social networks (CACTOS) allow users to enforce their privacy settings online without relying on the social network provider or any other third party. Many such tools have been proposed in the literature, and some have been implemented and are currently publicly available, yet they have seen poor adoption or no adoption at all. In this paper we investigate which obstacles may be hindering the adoption of these tools. To this end, we perform a user study to ask users about key issues related to the desirability and general perception of CACTOS. Our results suggest that, even if social network users would be potentially interested in these tools, several issues would effectively obstruct their adoption. Participants in our study perceived CACTOS as a disproportionate means to protect their privacy online. This in turn may have been motivated by the explicit use of cryptography, or by the fact that users do not actually share on social networks the type of information they would feel the need to encrypt. Moreover, we point out several key elements to consider for improving the usability of CACTOS.

Manar Mohamed (University of Alabama at Birmingham), Song Gao (University of Alabama at Birmingham), Nitesh Saxena (University of Alabama at Birmingham), Chengcui Zhang (University of Alabama at Birmingham)

Abstract: CAPTCHAs are a widely deployed mechanism to distinguish a legitimate human user from a computerized program trying to abuse online services. Attackers, however, have devised a clever and economical way to bypass the security provided by CAPTCHAs: simply relaying CAPTCHA challenges to remote human solvers. Most existing varieties of CAPTCHAs are completely vulnerable to such relay attacks, which are routinely executed in the wild.

Dynamic Cognitive Game (DCG) CAPTCHAs are an emerging CAPTCHA category that requires the user to play a simple moving-object matching game. Due to the dynamic and interactive nature of the underlying games, DCG CAPTCHAs may offer resistance to relay attacks. In this paper, we focus on a streaming-based DCG CAPTCHA relay attack whereby the game frames and responses are simply streamed between the attacker and a human solver. We present a mechanism for detecting such streaming-based CAPTCHA farming based on real-time game statistics, such as play duration, mouse clicks, and incorrect drags, fed to machine learning detection algorithms. To demonstrate the feasibility of our detection mechanism, we report on a three-dimensional study measuring: (1) the performance of legitimate DCG CAPTCHA users, (2) the performance of remote human solvers in a DCG CAPTCHA streaming attack, and (3) the performance of gameplay behavioral features and machine learning classifiers in distinguishing human solvers in a streaming attack from legitimate users. Our results show that it is possible to detect the streaming-based relay attack against many instances of DCG CAPTCHAs with high overall accuracy (low false negatives and false positives). Broadly, DCG CAPTCHAs appear to be one of the first CAPTCHA schemes that enable reliable detection of relay attacks.
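
To make the detection pipeline concrete, here is a minimal sketch in Python of the kind of classifier the abstract describes. The feature distributions, the synthetic data, and the choice of a random forest are our own assumptions for illustration; the paper's actual features and models may differ.

```python
# Sketch only: classify gameplay sessions as legitimate vs. relayed.
# Features follow the abstract (play duration, mouse clicks, incorrect
# drags); the synthetic data and the random-forest model are assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical sessions: [play_duration_s, mouse_clicks, incorrect_drags].
# Relayed sessions plausibly take longer (streaming latency) and show
# more incorrect drags than local, legitimate play.
legit = rng.normal([9.0, 4.0, 0.5], [2.0, 1.5, 0.7], size=(500, 3))
relay = rng.normal([16.0, 6.0, 2.5], [4.0, 2.0, 1.5], size=(500, 3))

X = np.vstack([legit, relay])
y = np.array([0] * 500 + [1] * 500)  # 0 = legitimate, 1 = relay

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)

tn, fp, fn, tp = confusion_matrix(y_te, clf.predict(X_te)).ravel()
print(f"false positives: {fp}, false negatives: {fn}")
```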

Huiqing Fu (Rutgers University), Yulong Yang (Rutgers University), Nileema Shingte (Rutgers University), Janne Lindqvist (Rutgers University), Marco Gruteser (Rutgers University)

Abstract: Smartphone users are increasingly using apps that can access their location, and often these accesses occur without users' knowledge and consent. For example, recent research has shown that installation-time capability disclosures are ineffective in informing people about their apps' location access. In this paper, we present a four-week field study (N=22) on run-time location access disclosures. Towards this end, we implemented a novel method to disclose location accesses by location-enabled apps on participants' smartphones. In particular, the method did not need any changes to participants' phones beyond installing our study app. We randomly divided our participants into two groups: a Disclosure group (N=13), who received our disclosures, and a No Disclosure group (N=9), who received no disclosures from us. Our results confirm that the Android platform's location access disclosure method does not inform participants effectively. Almost all participants pointed out that their location was accessed by several apps they would not have expected to access their location, and several apps accessed their location more frequently than they expected. We conclude that our participants appreciated the transparency brought by our run-time disclosures and that, because of the disclosures, most of them took action to manage their apps' location access.

2:30 pm - 3:00 pm -- Break
3:00 pm - 4:00 pm -- Privacy in Health, Life and Death

Carsten Grimm (Carleton University), Sonia Chiasson (Carleton University)

Abstract: When we die, we leave imprints of our online lives behind. What should be the fate of these digital footprints after our death? Using a crowdsourced online survey with 400 participants from four countries, we investigate how users want their digital footprint handled after their death, how they would like to communicate these preferences, and whom they would entrust with carrying out this part of their will.

We poll users' sentiments towards an online service curating digital footprints. We let users comment on design questions regarding this service posed by Locasto, Massimi, and De Pasquale (NSPW 2011). Interestingly, responses across countries and religions were similar. The vast majority of participants had never considered the fate of their digital footprint. When faced with the choice, our participants request a non-profit service primarily for deleting their accounts upon receiving a death certificate.

Emiliano De Cristofaro (University College London)

Abstract: Progress in Whole Genome Sequencing (WGS) will soon allow a large number of individuals to have access to their fully sequenced genome. This represents a historical breakthrough, enabling important medical and societal progress. At the same time, however, the very same progress amplifies a number of ethical and privacy concerns stemming from the unprecedented sensitivity of genomic information.

This paper presents an exploratory ethnographic study of users' perception of privacy and ethical issues with WGS as well as their attitude toward different WGS programs. We report on a series of semi-structured interviews, involving 16 participants, and analyze the results both quantitatively and qualitatively. Our analysis shows that users exhibit common trust concerns and fear of discrimination, and demand to retain strict control over their genetic information. Finally, we highlight the need for further research in the area and follow-up studies that build on our initial findings.

Tehila Minkus (NYU Polytechnic School of Engineering), Nasir Memon (NYU Polytechnic School of Engineering)

Abstract: As social interactions increasingly move to Facebook, the privacy options offered have come under scrutiny. Users find the interface confusing, and the impact of the individual settings on a user's overall privacy is difficult to determine. This creates difficulties for both users and researchers: users cannot gauge the privacy of their respective configurations, and researchers cannot easily compare the degree of privacy encapsulated in different users' choices. In this work, we suggest a novel and holistic measure for Facebook privacy settings. Based on a survey of 189 Facebook users, we derive weights that combine the different options into one numerical measure of privacy. This serves as a building block for measuring and comparing Facebook users' privacy choices, enabling new inferences and insights.
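
As a rough illustration of the kind of weighted measure described above, here is a minimal sketch in Python. The setting names, audience levels, and weights are invented for illustration; the paper derives its actual weights from the survey, and those values are not reproduced here.

```python
# Sketch of a weighted privacy score over a user's settings.
# Setting names, audience scores, and weights are invented; the paper
# derives its weights from a survey of 189 Facebook users.

# Per-setting "openness" on a 0 (only me) to 1 (public) scale.
AUDIENCE = {"only_me": 0.0, "friends": 0.33,
            "friends_of_friends": 0.66, "public": 1.0}

# Hypothetical importance weights (higher = more privacy-sensitive).
WEIGHTS = {"posts_visibility": 3.0, "friend_list_visibility": 2.0,
           "search_engine_indexing": 2.5, "tag_review": 1.5}

def privacy_score(settings):
    """Return a 0-100 score for a settings dict; higher is more private."""
    total = sum(WEIGHTS.values())
    exposure = sum(WEIGHTS[k] * AUDIENCE[v] for k, v in settings.items())
    return round(100 * (1 - exposure / total), 1)

print(privacy_score({"posts_visibility": "friends",
                     "friend_list_visibility": "public",
                     "search_engine_indexing": "only_me",
                     "tag_review": "friends"}))  # prints 61.3
```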

4:00 pm - 4:10 pm -- Mini-Break
4:10 pm - 5:10 pm -- E-Voting and Anonymity

Greg Norcie (Indiana University), Jim Blythe (Information Sciences Institute), Kelly Caine (Clemson University), L Jean Camp (Indiana University)

Abstract: Tor is an anonymity network used by whistleblowers, journalists, and anyone else wishing to communicate anonymously. Adoption of anonymity tools like Tor can be hindered by a lack of usability. Tor's anonymity can be measured as 1/n, with n being the number of Tor users. Therefore, enhancing Tor's usability and making it easier for more people to successfully use Tor increases security for all Tor users. In our first study, we identified stop-points that acted as barriers to using the Tor Browser Bundle (TBB). We suggested changes based on our analysis of these stop-points. In our second study, we tested whether the changes we recommended to the Tor Project were effective in increasing usability, and found a numerical decrease in stop-points across the board. Based on these studies, we suggest design heuristics for improving the usability of anonymity systems, thereby helping non-experts utilize anonymity software.

Jurlind Budurushi (TU Darmstadt / CASED), Marcel Woide (TU Darmstadt / CASED), Melanie Volkamer (TU Darmstadt / CASED)

Abstract: Most electronic voting systems in use today provide printouts, so-called voter-verifiable paper audit trails (VVPATs). Voters are supposed to verify these before putting them into the ballot box, in order to detect election fraud. A number of studies have shown that voters are unlikely to do so when using current systems. Thus, it is very likely that a malicious electronic voting system could print the wrong candidates without being detected. We introduce precautionary behavior by providing voters with "just in time" instructions, while ensuring that these instructions cannot be manipulated by a malicious electronic voting system. Our approach is evaluated in a user study, which shows a highly significant increase in the number of voters who verify, as they found the manipulations we introduced in the printouts for the study.

M. Maina Olembo (CASED, TU Darmstadt), Karen Renaud (School of Computing Science, University of Glasgow), Steffen Bartsch (CASED, TU Darmstadt), Melanie Volkamer (CASED, TU Darmstadt)

Abstract: There is increasing interest in verifiable Internet voting systems that enable voters to verify the integrity of their vote on the voting platform prior to casting it, and any interested party to verify the integrity of the election results. The ease with which a vote can be verified plays a key role. Empowering individual voters to act as interested yet objective verifiers increases the probability of fraud detection. Verifying constitutes additional effort, something humans resist unless the benefits are compelling enough. Thus, what is the best way to provide such motivation? We report on a survey, distributed to 123 respondents, in which we explore the effects of three types of motivating messages on voters' intention to verify a vote, using a smartphone app. The motivating messages were intended to increase the intention to verify a vote. Our findings have persuaded us that further research on the use of motivating messages in the context of verifiable voting is warranted.


Call For Papers

Many aspects of information security combine technical and human factors. If a highly secure system is unusable, users will try to circumvent the system or move entirely to less secure but more usable systems. Problems with usability are a major contributor to many high-profile security failures today.

However, usable security is not well-aligned with traditional usability for three reasons. First, security is rarely the desired goal of the individual. In fact, security is usually orthogonal and often in opposition to the actual goal. Second, security information is about risk and threats. Such communication is often unwelcome. Increasing unwelcome interaction is not a goal of usable design. Third, since individuals must trust their machines to implement their desired tasks, risk communication itself may undermine the value of the networked interaction. For the individual, discrete technical problems are all understood under the rubric of online security (e.g., privacy from third parties' use of personally identifiable information, malware). A broader conception of both security and usability is therefore needed for usable security.

The Workshop on Usable Security invites submissions on all aspects of human factors and usability in the context of security. USEC'14 aims to bring together researchers already engaged in this interdisciplinary effort with other computer science researchers in areas such as visualization, artificial intelligence, and theoretical computer science, as well as researchers from other domains such as economics or psychology.

We invite authors to submit original papers describing research or experience in all areas of usable privacy and security. We particularly encourage collaborative research from authors in multiple fields. Topics include, but are not limited to:

  • Evaluation of usability issues of existing security & privacy models or technology
  • Design and evaluation of new security & privacy models or technology
  • Impact of organizational policy or procurement decisions
  • Lessons learned from designing, deploying, managing or evaluating security & privacy technologies
  • Foundations of usable security & privacy
  • Methodology for usable security & privacy research
  • Ethical, psychological, sociological and economic aspects of security & privacy technologies



Details

New at USEC'14

  • Reports of replicating previously published studies and experiments
  • Reports of failed or negative usable security studies or experiments, with a focus on the lessons learned from such experience
  • Reports on deploying usable security & privacy technology in industry

It is the aim of USEC to increase the scientific quality of usable security and privacy research. To this end, we encourage the use of replication studies to validate research findings. This important and often very insightful branch of research is sorely underrepresented in usable security and privacy research to date. Papers in these categories should be clearly marked as such; they will not be judged against regular submissions on novelty, but rather on scientific quality and value to the community. Please contact the chairs in advance of submitting such work.

Submissions and Important Dates

Papers should be written in English. Papers must be between 8 and 10 pages total (including references and appendices). Papers must be formatted for US letter size (not A4) paper in a two-column layout, with columns no more than 9.25 in. high and 3.5 in. wide. The text must be in Times font, 10-point or larger, with 11-point or larger line spacing. Authors are encouraged to use the IEEE conference proceedings templates found at http://www.computer.org/portal/web/cscps/formatting.
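
For authors working in LaTeX, a preamble along the following lines should satisfy the requirements above. This is a minimal sketch assuming the IEEEtran class distributed via the template page linked above; consult the downloaded template for the exact class options it prescribes.

```latex
% Sketch of a preamble matching the stated requirements (assumes the
% IEEEtran class from the IEEE template page linked above).
\documentclass[conference,10pt,letterpaper]{IEEEtran}
\usepackage{times}  % Times text font, as required

\begin{document}
\title{Your USEC'14 Submission}
\author{\IEEEauthorblockN{Author Name}
        \IEEEauthorblockA{Institution}}
\maketitle

\begin{abstract}
Abstract text goes here.
\end{abstract}

\end{document}
```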

Submissions may be blinded at the discretion of the authors; however, this is not a requirement.

We also invite short papers of up to 6 pages covering work in progress, novel or provocative ideas, replication studies, or failed experiments. These will be selected based on their potential to spark interesting discussions during the workshop.

Submission site: https://usec2014.cs.berkeley.edu/

Important Dates

Submission deadline: 13th of December 2013, 23:59 PST (extended from the 6th of November 2013)

Notification: 25th of January 2014 (extended from the 5th of January)

Camera ready: 30th of January 2014 (extended from the 15th of January)

Venue

NDSS Symposium 2014

The 2014 Network and Distributed System Security (NDSS) Symposium will take place February 23-26 at the Catamaran Resort Hotel and Spa in San Diego, California.

Registration

Registration is now open at: https://www.internetsociety.org/events/ndss-symposium-2014/ndss-2014-registration-information

Steering Committee

Jean Camp, Indiana University
Jim Blythe, University of Southern California
Angela Sasse, University College London

Chairs

Matthew Smith, Leibniz University Hannover
David Wagner, UC Berkeley

Replication-Study Track Chair

Marian Harbach, Leibniz University Hannover

Keynote Speaker

Serge Egelman, UC Berkeley

Program Committee

Alessandro Acquisti, CMU Heinz College
Andrew A. Adams, Meiji University, Tokyo
Ross Anderson, University of Cambridge
Pamela Briggs, Northumbria University
Dirk Balfanz, Google
Lorrie Faith Cranor, CMU
Sunny Consolvo, Google
Alexander De Luca, LMU Munich
Serge Egelman, UC Berkeley
Sascha Fahl, Leibniz University Hannover
Neil Gandal, Tel Aviv University
Peter Gutmann, University of Auckland
Seda Gürses, New York University
Tiffany Hyun-Jin Kim, CMU
Maritza Johnson, Facebook
Yoshi Kohno, University of Washington
Sameer Patil, Helsinki Institute for Information Technology
Andrew Patrick, Carleton University
Rob Reeder, Google
Hovav Shacham, UC San Diego
Sara Sinclair, Google
Douglas Stebila, Queensland University of Technology
Kami Vaniea, Michigan State University
Eugene Y. Vasserman, Kansas State University
Rick Wash, Michigan State University