User-controllable learning of security and privacy policies

@inproceedings{kelley2008user,
  title={User-controllable learning of security and privacy policies},
  author={Patrick Gage Kelley and Paul Hankes Drielsma and Norman M. Sadeh and Lorrie Faith Cranor},
  booktitle={AISec '08},
  year={2008}
}
Studies have shown that users have great difficulty specifying their security and privacy policies in a variety of application domains. While machine learning techniques have successfully been used to refine models of user preferences, such as in recommender systems, they are generally configured as "black boxes" that take control over the entire policy and severely restrict the ways in which the user can manipulate it. This article presents an alternative approach, referred to as user-controllable policy learning.


User-Controllable Learning of Location Privacy Policies With Gaussian Mixture Models
Presents a user-controllable method based on multivariate Gaussian mixtures, modified so that the underlying policy evolves through incremental and therefore human-understandable changes as new data arrives.
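The core idea above, restricting a learned policy model to small, incremental updates so each change stays auditable by the user, can be illustrated with a minimal sketch. This is not the paper's actual algorithm: it uses a single Gaussian component, a hypothetical `IncrementalGaussianPolicy` class, and a small learning rate standing in for the paper's constrained mixture updates.

```python
import numpy as np

class IncrementalGaussianPolicy:
    """Toy sketch: one Gaussian summarizing contexts (e.g. request hour,
    weekday flag) in which a user granted access. Each new observation
    nudges the mean/covariance by a small learning rate, so the policy
    evolves in small, human-auditable steps instead of being refit
    wholesale."""

    def __init__(self, dim, lr=0.05):
        self.mean = np.zeros(dim)
        self.cov = np.eye(dim)
        self.lr = lr      # small => incremental, explainable changes
        self.n = 0

    def update(self, x):
        x = np.asarray(x, dtype=float)
        if self.n == 0:
            self.mean = x.copy()          # anchor on the first observation
        delta = x - self.mean
        self.mean += self.lr * delta      # nudge the mean toward x
        self.cov += self.lr * (np.outer(delta, delta) - self.cov)
        self.n += 1

    def allow_score(self, x):
        # Unnormalized Gaussian density: higher => the context resembles
        # contexts where the user previously granted access.
        d = np.asarray(x, dtype=float) - self.mean
        return float(np.exp(-0.5 * d @ np.linalg.solve(self.cov, d)))

policy = IncrementalGaussianPolicy(dim=2)
for hour in [11.5, 12.0, 12.5, 12.0, 11.8]:   # lunchtime grants on weekdays
    policy.update([hour, 1.0])

# Lunchtime on a weekday scores far higher than 3 a.m. on a weekend.
print(policy.allow_score([12.0, 1.0]) > policy.allow_score([3.0, 0.0]))  # True
```

Because each `update` moves the parameters only fractionally, a user interface could display the before/after policy side by side after every batch of new data, which is the kind of transparency the paper's restricted-evolution constraint is designed to enable.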
Understandable Learning of Privacy Preferences Through Default Personas and Suggestions
Reports research aimed at reducing user burden through the development of two types of user-oriented machine learning techniques, suggesting that both can significantly help users converge toward their desired privacy settings.
User-Tailored Privacy
This chapter covers the concept of tailoring the privacy of an information system to each individual user. It discusses practical problems that may arise when collecting data to determine a user's privacy preferences, techniques for modeling those preferences, and a number of adaptation strategies for tailoring the system's privacy practices, settings, or interfaces to the user's modeled preferences.
The Effects of Nudging a Privacy Setting Suggestion Algorithm's Outputs on User Acceptability
An experiment with a suggestion system for 80 privacy settings is reported, and users are shown to be highly accepting of suggestions, even where the suggestions are random (though less so than for nudged suggestions).
Towards security policy decisions based on context profiling
Argues that a simple measure such as the "familiarity" of a device and/or context can be computed and used to infer appropriate policy settings, and reports on experience with context observations collected from the devices of two testers over a period of time.
Personalised Privacy by Default Preferences - Experiment and Analysis
Presents a novel mechanism that provides individuals with personalised privacy-by-default settings when they register with a new system or service, using a machine learning approach that requires a minimal number of questions at registration and sets up privacy settings matching the user's privacy preferences for a particular service.
Default Privacy Setting Prediction by Grouping User's Attributes and Settings Preferences
Results show that while models built on users' privacy preferences improved the accuracy of the scheme, grouping users by attributes had no impact on accuracy; services using the prediction engine could therefore minimize the collection of user attributes and base predictions solely on users' privacy preferences.
Understanding and capturing people's mobile app privacy preferences
This thesis combines static code analysis, crowdsourcing and machine learning techniques to elicit people's mobile app privacy preferences, and introduces a crowdsourcing methodology to collect people's privacy preferences when it comes to granting permissions to mobile apps for different purposes.
Policy by Example: An Approach for Security Policy Specification
This paper proposes the approach of Policy by Example (PyBE) for specifying user-specific security policies, and demonstrates that PyBE correctly predicts policies with 76% accuracy across all users, a significant improvement over naive approaches.
Capturing social networking privacy preferences: can default policies help alleviate tradeoffs between expressiveness and user burden?
It is suggested that providing users with a small number of canonical default policies to choose from can help reduce user burden when it comes to customizing the rich privacy settings they seem to require.


Toward harnessing user feedback for machine learning
The results show that user feedback has the potential to significantly improve machine learning systems, but that learning algorithms need to be extended in several ways to be able to assimilate this feedback.
Understanding and Capturing People’s Privacy Policies in a People Finder Application
The authors describe an anonymous and privacy-sensitive approach to collecting sensed data in location-based applications.
Improving user-interface dependability through mitigation of human error
Explaining collaborative filtering recommendations
This paper presents experimental evidence that shows that providing explanations can improve the acceptance of ACF systems, and presents a model for explanations based on the user's conceptual model of the recommendation process.
Designing example-critiquing interaction
The essential design question in example critiquing is what examples to show users in order to best help them locate their most preferred solution and this paper analyzes this question based on two requirements.
Toward the next generation of recommender systems: a survey of the state-of-the-art and possible extensions
This paper presents an overview of the field of recommender systems and describes the current generation of recommendation methods, which are usually classified into three main categories: content-based, collaborative, and hybrid recommendation approaches.
Preference-based Search using Example-Critiquing with Suggestions
We consider interactive tools that help users search for their most preferred item in a large collection of options. In particular, we examine example-critiquing, a technique for enabling users to incrementally refine their preferences by critiquing candidate options.
Privacy in the United States: Some Implications for Design
Privacy is a socially gifted commodity. It comes in many forms, granted to or withheld from us by many types of people. These especially include the professionals who design objects, environments, and information systems.
Representation of electronic mail filtering profiles: a user study
Reports a usability study investigating what types of profiles people would be willing to use to filter mail, and how a variety of approaches can learn a profile of a user's interests.
Getting to know you: learning new user preferences in recommender systems
Six techniques that collaborative filtering recommender systems can use to learn about new users are studied, showing that the choice of learning technique significantly affects the user experience, in both the user effort and the accuracy of the resulting predictions.