User:Sidstamm/Notes July 2014 SOUPS


These are not the greatest notes, but the main takeaways are covered.

My high-level thoughts:

  • People can't agree on how to best segment users based on privacy needs/preferences/etc.
  • Social network privacy controls are hard to use
  • Psychologists are just starting to get into this space
  • Security and privacy are still not usable.

I was unable to attend all sessions, so there are some papers presented at SOUPS I did not summarize below.

Main conference site: http://cups.cs.cmu.edu/soups/2014/

Day 1: Workshop (Privacy Personas and Segmentation)

tl;dr: People can't agree on best ways to segment large populations by privacy posture or needs.

Urban & Hoofnagle: The Privacy Pragmatic as Vulnerable.

This work critiqued Alan Westin's segmentation (Fundamentalists/Pragmatists/Unconcerned) and suggested people's concerns are related to how informed they are. "Perhaps underinformed individuals are vulnerable, since many privacy Pragmatists are underinformed."

The authors claimed to find some logical flaws in Westin's segmentation. Pragmatists are simply those who are neither Fundamentalists nor Unconcerned; the segment is really a catch-all. And there's a gap between what consumers understand about data flows and what they want (their preferences).

The authors:

  1. Tested how informed each of Westin's segments was, and found Fundamentalists were significantly more informed about privacy risks
  2. Found that all groups reject information-intensive business models
  3. Reduced the segmentation to two segments, resilient and vulnerable, where Fundamentalists are resilient and everyone else is "vulnerable".

See their short paper for 10 suggestions on how to improve the segmentation. But better segmentation may be hard. Is it too hard? When is it useful?

They recommend Jennifer King's paper on this subject, as she ran many statistical tests on the authors' data (also in this workshop).

Sören Preibusch: Managing Diversity in Privacy Preferences: How to Construct a Privacy Typology.

The author here wants to reduce the complexity of people's privacy preferences -- make it easier for them to choose by kick-starting with typing.

Too often there are privacy/functionality trade-offs. People will often prefer functionality over privacy.

A good typology has (1) reliability and (2) predictive power. It means that people are classified into a type correctly and the type strongly indicates the individual's preferences.

The author clustered some survey respondents into a strangely arbitrary number of clusters. He also tried Factor Analysis to partition based on activities.

Nothing was reliable and predictive. No solution presented. Suggested a typology would be useful but only if done right with strong, reproducible science.

Author asserts that privacy preferences are strongly related to personality traits, which leads him to typing since personality typing is reliable and predictive.

Pam Wisniewski: Profiling Facebook Users' Privacy Behaviors

Pam defines privacy as how we manage social interactions. There are lots of things you can do on Facebook to manage boundaries and interactions. Nobody has yet analyzed disclosure decisions (what you share and when) in relation to people's privacy settings.

This author studied whether and how frequently people used each privacy behavior (such as managing news feeds), then classified users based on the results using Confirmatory Factor Analysis.

Next, she ran a few mixed factor analyses (MFAs) and chose six classes based on the statistics. The largest group in this clustering was "Privacy Balancers" (much like the Pragmatists in Westin's segments). http://usabart.nl/chart/
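
A minimal sketch of this kind of behavior-based user classification, using k-means as a stand-in for the factor-analysis methods she actually used; the data and feature set here are fabricated:

  import numpy as np
  from sklearn.cluster import KMeans
  from sklearn.preprocessing import StandardScaler

  # Hypothetical survey matrix: one row per user, one column per privacy
  # behavior (frequency of untagging, pruning the news feed, etc.),
  # coded on a 1-5 scale.
  rng = np.random.default_rng(0)
  X = rng.integers(1, 6, size=(500, 11)).astype(float)

  # Standardize, then cluster into six classes as in the talk.
  labels = KMeans(n_clusters=6, n_init=10, random_state=0).fit_predict(
      StandardScaler().fit_transform(X))

  # The largest cluster would play the role of the "Privacy Balancers".
  print(np.bincount(labels))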

Takeaways:

  • Privacy strategies extend beyond disclosure decisions
  • Studying feature awareness vs privacy behavior
  • Decisions depend on more than awareness level

Kovila Coopamootoo: An approach to modeling privacy concerns and behavior via mental models

An attitude is an evaluation of something that changes your behavior. (I lost the train of thought during slide flipping.)

Mental Models are maps of cognition made up of cognitive associations. Time + Context dependent. Can help with prediction and facilitate interaction with computer systems.

But mental models are not directly accessible, so the authors asked people indirect questions to extract their attitudes towards social networks and things like IP addresses and bank accounts.

Mental models could help validate or identify segments -- new way to segment the population.

Lynne Coventry: Perceptions and Actions

This was a talk by a psychologist. Much of it went over my head.

  • theory: behavior is based on rational choice
  • theory: behavior is planned
  • theory: behavior is "coping" with perceived threats
  • theory: behavior is learned
  • theory: change is a process.

Lynn wants to know what are the environmental, social and personal influencers on privacy decisions. Can we influence behavior based on product design?

The paper included a survey to determine risk groups based on behaviors.

Takeaway: while people say they intend to behave cautiously, the authors did not observe a corresponding difference in behavior.

Lydia Kraus: Privacy and Security Knowledge for influencing mobile protection behavior.

Premise: People lack understanding of mobile device security.

Two questions:

  1. Is knowledge related to concerns?
  2. Does knowledge lead to behavior change?

The authors studied smartphone (Android) users:

  • 11 questions based on recommendations from various web sites
  • coded answers for correctness and calculated a knowledge score by summing the results (sketched below)
  • measured Global Info Privacy Concern by asking questions about privacy
  • measured behavior based on questions about behavior
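
A tiny sketch of that scoring-and-correlation step, with fabricated 0/1 answer codings and a made-up 7-point concern score:

  import numpy as np
  from scipy.stats import pearsonr

  rng = np.random.default_rng(0)
  # Hypothetical coding: 1 = correct, 0 = incorrect, for 11 questions.
  answers = rng.integers(0, 2, size=(200, 11))
  knowledge = answers.sum(axis=1)          # per-participant score
  concern = rng.integers(1, 8, size=200)   # 7-point concern scale

  # Question 1 from the talk: is knowledge related to concern?
  r, p = pearsonr(knowledge, concern)
  print(f"r = {r:.2f}, p = {p:.3f}")       # they found no correlation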

Takeaways:

  • Knowledge and Concern are not correlated
  • Behavior is influenced by both knowledge and concern.

(The author described limits to their methods, including a biased sample and biased questions.)

Bart Knijnenburg: Information Disclosure Profiles for Segmentation and Recommendation.

Transparency and control are intended to empower BUT:

  • Simple notices are useless and detailed ones are too complex
  • Informing users can make them more wrong
  • People want control but eschew the hassle
  • Decision bias.

Many people lack the resources to navigate the privacy space. Privacy nudges are promising, but what is the right direction? Need to move beyond one-size-fits-all.

Idea: use a recommendation system (like Netflix) to find out what determines user choices.

Disclosure behaviors are multidimensional; profile users based on behavior, not attitude.

The authors clustered users based on types of disclosures (actions). Used Mixed Factor Analysis.

Idea: Privacy adaptation procedure: (1) predict behaviors, (2) provide tailored support when the prediction is uncertain.
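
A rough sketch of what such a procedure could look like, assuming a probabilistic classifier and a made-up confidence threshold (none of this is from the paper):

  import numpy as np
  from sklearn.linear_model import LogisticRegression

  # Hypothetical training data: features describing a disclosure
  # situation, labels for whether the user shared the item.
  rng = np.random.default_rng(0)
  X, y = rng.normal(size=(300, 4)), rng.integers(0, 2, size=300)
  model = LogisticRegression().fit(X, y)

  def adapt(features, threshold=0.75):
      # (1) Predict the likely disclosure behavior.
      proba = model.predict_proba([features])[0]
      if proba.max() >= threshold:
          return "apply the predicted setting as a default"
      # (2) Prediction is uncertain: give tailored support instead.
      return "ask the user / show a tailored explanation"

  print(adapt(rng.normal(size=4)))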

Maija Poikela: Locate! When users expose location.

What influences location disclosure?

  • Who is requesting
  • What is the reason
  • Who I am

Previous studies have been hypothetical; these authors wanted real data. The authors developed an app called Locate!. Participants would receive messages requesting their location, and the user could allow, allow a "blurred" location, deny, or "cheat" with a fake location. Users could also set a context like work or home to describe where they were.

Participants chose 6 names from their address books. The study spoofed requests from these people with reasons ("Where are you, need to see you ASAP at work", etc.). Defaults (location, fuzz, etc.) were randomized to identify deliberate actions in responses. A short questionnaire after each disclosure asked why they disclosed what they did.
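
A minimal sketch of the randomized-defaults trick; the option names follow the talk, everything else is invented:

  import random

  # Response options from the study: allow, blurred, deny, or a fake
  # ("cheat") location.
  OPTIONS = ["allow", "blurred", "deny", "fake"]

  def build_prompt(requester, reason):
      # Randomizing which option is preselected lets the researchers
      # tell deliberate choices apart from accepted defaults.
      return {
          "requester": requester,
          "reason": reason,
          "default": random.choice(OPTIONS),
          "options": OPTIONS,
      }

  print(build_prompt("Alice", "Where are you, need to see you ASAP at work"))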

Authors are working on a new questionnaire.

Results: participants did not share more accurately with people they felt closer to; the opposite was true. The authors also asked whether people used mobile protection tools against threats; those who said "yes" did not share their precise location as often.

Subjects' higher education did not affect willingness to disclose. "Who" has no effect, nor does the reason for disclosure. Subject or context does have an effect.

Takeaway: This study indicates context of location disclosures has no effect, but they need more data.

Sebastian Schnorf : A Comparison of Six Sample Providers Regarding Online Privacy Benchmarks.

UX research at Google.

Fielded a set of questions to different survey platforms, including mail and phone surveys.

Takeaway: It's hard to get secretive people into a privacy-focused survey. Random samples are better quality because of this.

Marc Busch: Is This Information Too Personal? Relationship between privacy concerns and personality.

Personality matters because it may influence the design of a system. As in other talks, "one-size-fits-all" privacy fails.

Recent studies are specific and narrow.

Takeaway: Various personality traits explain only 3.8% of the variance in privacy concerns.

Janine Spears: I have nothing to hide, thus nothing to fear.

What about the person who has no privacy concern?

D. Solove makes a case for why privacy matters even if you have nothing to hide; the "nothing to hide" stance reflects a myopic view that privacy equals secrecy. These people trust data collectors blindly and are unaware of the extent of tracking.

Instead, shift discussions to the implications of over-disclosure (when data is not suppressed). What are the long-term implications of inadvertent shares? How do you educate the user? How do nudges work?

  • How does this persona type affect others around them with different types?
  • How quickly do this person's views change, e.g., when there suddenly is something to hide?

To illustrate why privacy matters, start with a zip code and shopping list and then ask:

  1. What inferences can be made from this info?
  2. What are implications of these inferences?

Also, "Do you wear clothes?"

Keynote: Chris Soghoian -- Sharing the blame for the NSA's dragnet surveillance program

This talk was about government spying on people/suspects. People don't buy products expecting to be "attacked" by the government (I don't buy a laptop thinking I'll be raided by the FBI in the future).

Phones.

The Supreme Court recently ruled that we have a reasonable expectation of privacy on phones and other portable digital devices.

At the US border, authorities can inspect and image any of your devices (but not make you enter your password).

Mobile developers don't advertise security as a selling point. It's hard to weigh the security benefits of various apps when the developers say nothing about how they secure things.

Apple did describe how their security works on iOS; with a PIN set, your device is strongly encrypted.
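
As a loose illustration only (this is not Apple's actual scheme, which entangles the passcode with a device-unique hardware key), here is the general shape of deriving an encryption key from a short PIN:

  import hashlib, os

  pin = "4821"              # hypothetical user PIN
  salt = os.urandom(16)     # per-device salt, stored locally
  # Many slow iterations make brute-forcing the short PIN expensive.
  key = hashlib.pbkdf2_hmac("sha256", pin.encode(), salt, 100_000)
  print(key.hex())          # 256-bit key usable for disk encryption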

Apple and Google also have mechanisms to bypass any encryption given a warrant.

Desktop.

Windows limits which consumer versions (Home/Pro/Ultimate) get disk encryption. Windows 8.1 has it for all versions, but in the past the option was not packaged with Home. Apple offers it to all Mac OS X users. Defaults and incentives are not there to benefit the majority of people. This is default security for the rich.

We know how to fix this, but security isn't reaching poorer users.

Tech can protect us when the law can't. So we should have protection tech.

Mail.

GPG is not usable. Glenn Greenwald couldn't use it when he needed to protect a source.

Nothing has changed since "Why Johnny Can't Encrypt."

What about email subjects and attachment names? PGP doesn't help obfuscate these.

Existing tools do not suit the needs of non-technical users. The market forces are against default/easy-to-use crypto:

  • Data loss concerns
  • Business model (data mining companies)
  • Government and law enforcement pressure
  • Lack of market power in the orgs that want to make change

Warnings And Decisions

Stefan Korff: Too much choice

Decision making: Assessment -> Planning -> Action -> Evaluation

Question: how do people feel after disclosure? (Evaluation.) The hypothesis is that people don't get that far when playing around in social networks.

The authors study how the number and structure of options affect individuals' attitudes towards situations and how satisfied they are with their decisions.

There's a point where people experience a "too much choice" effect and don't feel as comfortable with their choices because there were too many options.

The authors:

  1. made a list of types of info to share (picture, address, name, friend count, etc.)
  2. grouped them by sensitivity type
  3. studied differences with two variables (see the sketch after this list):
    • number of groups (size-one groups vs size-n/2 groups)
    • composition of groups (homogeneous/same type of data vs heterogeneous/mixed data types in groups)
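
A small sketch of how those two conditions could be generated; the item names and sensitivity types here are invented:

  import random

  # Hypothetical item pool, tagged with a sensitivity type.
  ITEMS = [("picture", "identity"), ("name", "identity"),
           ("address", "contact"), ("phone", "contact"),
           ("friend count", "social"), ("wall posts", "social")]

  def make_groups(heterogeneous, n_groups):
      items = ITEMS[:]
      if heterogeneous:
          random.shuffle(items)            # mix sensitivity types
      else:
          items.sort(key=lambda i: i[1])   # keep types together
      size = len(items) // n_groups
      return [items[k * size:(k + 1) * size] for k in range(n_groups)]

  print(make_groups(heterogeneous=True, n_groups=3))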

The hypotheses were that:

  1. more options (more groups) would lead to less satisfaction
  2. heterogeneous grouping (different types of things together) would lead to less satisfaction

They conducted a survey to measure satisfaction after users were presented with these interfaces.

Turns out hypothesis 1 was confirmed (more options = less satisfaction) but hypothesis 2 was disproved (homogeneous groups were no better than heterogeneous groups).

According to Stefan: This suggests that clinical depression in industrialized society is linked to "too much choice".

Rick Wash: How automatic software updates introduce security problems

"Whenever possible, secure system designers should find ways of keeping humans out of the loop" (Lorrie Cranor)

Examples: Windows Update with default auto-updating (XP SP2 and later)

The researchers did a survey then examined computer logs. They matched the log data with the survey and interview data.

People misunderstand updates. Two thirds did not know whether auto-updates were on (or thought they knew but were wrong). Many thought Windows Update was just advising them that updates were available, not that they were being installed.

When we remove people from decision making, people's misunderstanding of what happens increases. This is because we tend to remove the easy decisions and leave the hard ones (like "do you want to accept this cert?"). Thus, amateur security gets harder, and people end up either wrong or experts.

Saranga Komanduri: Revisiting popup fatigue

Are attention attractors subject to habituation? (Attention attractors help people identify and focus on the important bits of a warning/dialog, such as a pulsing arrow, forcing them to interact, etc.)

There's lots of literature that observes habituation especially with security warnings.

Goal: good attractors should not be subject to habituation and should not lose effectiveness after many exposures.

They did a Mechanical Turk study.

Results: Some attractors (the interactive ones like "type this" or "highlight this") were not subject to habituation.

Mobile Security and Privacy

Hui Xu: Towards continuous and passive authentication via touch biometrics

They used statistical pattern recognition for touch-based authentication.

This is more than pattern locks, passwords, PINs, swipes, etc.; they were looking for patterns of use during regular mobile device activity. Touch data comes from many operations and many apps.

They did a 21-day study examining the universality, collectability, distinctness and permanence of touch biometrics.

Permanence was low: matching is not stable over time, since people change their use patterns, have sore hands, change moods, etc.

To get around the permanence problem, they continued training the model while it was in use, which was more stable.
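
A minimal sketch of that continued-training idea, using an incrementally trainable classifier; the features and data are made up, and the paper's actual model may differ:

  import numpy as np
  from sklearn.linear_model import SGDClassifier

  rng = np.random.default_rng(0)
  # Hypothetical touch features: pressure, area, speed, duration.
  clf = SGDClassifier(random_state=0)
  X0, y0 = rng.normal(size=(100, 4)), rng.integers(0, 2, size=100)
  clf.partial_fit(X0, y0, classes=[0, 1])   # initial enrollment

  # Keep updating with each new batch of touches during normal use;
  # this is how the authors coped with low permanence.
  X_new, y_new = rng.normal(size=(20, 4)), rng.integers(0, 2, size=20)
  clf.partial_fit(X_new, y_new)
  print(clf.predict(X_new[:3]))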

Jialiu Lin: Modeling users' mobile app privacy preferences

Similar to the PPS workshop (day 1, see above).

They downloaded 108,000 apps and reverse-engineered them to see what permissions they actually used (versus the ones they asked for in the store). They also identified *why* the permissions were used. It turns out most permission uses are for third-party libraries:

  • Targeted ads
  • Social networks
  • Mobile OS analytics

They asked for users' comfort with sets of (app name, permission, purpose). Users are generally fine when location is used for an app's central purpose, but not for those third-party libraries.
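
A toy version of that attribution step: map the package prefix of the code calling a permission-protected API to a known library. The mapping below is illustrative, not their actual data:

  # Hypothetical library-to-purpose mapping.
  THIRD_PARTY = {
      "com.admob":    "targeted ads",
      "com.facebook": "social network",
      "com.flurry":   "mobile OS analytics",
  }

  def purpose(caller_package):
      # Attribute the permission use to whoever's code made the call.
      for prefix, why in THIRD_PARTY.items():
          if caller_package.startswith(prefix):
              return why
      return "app's own functionality"

  print(purpose("com.admob.ads.Tracker"))   # -> "targeted ads"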

Emanuel von Zezschwitz: Smartphone unlocking behavior and risk perception

Shoulder Surfing!

How, why, when do people use lock screens? Is Shoulder Surfing a problem?

They did a survey and a field study. The survey results are in the paper; the field study was discussed in the talk (and is also in the paper).

They logged data for 27 days. Monitored activation/unlock behaviors. (Activation is interaction without unlocking.)

Asked a brief question after each unlock.

Average session: 70s (activations) or 104s (unlocks). Average phone usage was 43 hours over the 27 days, of which 1.2 hours were spent unlocking.

Most unlocks happen in "private" contexts. The proportion of sensitive data is low for most users, and their worry is low.

Shoulder surfing:

  • Possible in 17% of scenarios
  • Likely in 41% and critical in 19% of cases
  • Mostly happens in private contexts and involves people the phone's owner knows.

Takeaways:

  • We should use context info to decide if locking/unlocking is necessary.
  • Locks should be at the app level to reduce frequency of unlocking
  • Shoulder surfing risks are perceived as low by users.

Authentication

Taiabul Haque: Applying psychometrics to measure user comfort when constructing a strong password

Psychometrics: ability, mood, etc. The reliability and validity of these measures are key. The goal is for people to be comfortable constructing a good password.

Not sure of the takeaways for this talk. Turns out Apple iOS is most "comfortable" for users.

Elizabeth Stobert: The password life cycle, user behavior in managing passwords

People have coping strategies for handling so many passwords:

  • Password managers
  • Password reuse

These authors interviewed 27 people and did qualitative analysis using grounded theory (GT). They found 66 patterns in people's responses, such as "records passwords as backup strategy", and used GT to identify connections between the patterns.

  • Some people used frequency-based passwords (PW_A for frequently used sites, PW_B for others)
  • Many people had a main password
  • People write their passwords down
  • People use passwords for a very long time.

Rationing: common theme is that people spend more effort creating good passwords for important accounts and reduce their willingness to spend effort on other sites.

Users do not differentiate easily between different scenarios that call for different passwords. (Surprise, they suck at threat modeling.)

Social nets and access control

Mainack Mondal: Social ACLs

Partition sharing into two groups:

  • Public (picture of a field of daffodils)
  • Private (beer hoarding picture)

This work addresses only private sharing.

Right now, users are assumed to employ SACLs (social access control lists) manually to control access. Other work in this area proposes lots of things, like automatic group detection.

So far, there's no validation that people actually use the SACLs they create or if they do, how.

The authors created an app called Friendlist Manager and got consent to access users' data. 67% of users used at least one SACL.

But most SACLs don't correlate with the auto-created groups they tried.
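
One simple way to measure that (non-)correlation is Jaccard overlap between each SACL's member set and each auto-detected group; the names here are invented:

  # Jaccard overlap of two friend sets: 1.0 = identical, 0.0 = disjoint.
  def jaccard(a, b):
      return len(a & b) / len(a | b) if a | b else 0.0

  sacl = {"ann", "bob", "cat"}                       # a hand-built SACL
  auto = [{"ann", "bob", "dan", "eve"}, {"cat", "fay"}]
  best = max(jaccard(sacl, g) for g in auto)
  print(f"best overlap: {best:.2f}")   # low value = poor correlation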

Hootan Rashtian: To befriend or not? A friend request acceptance model on Facebook

These authors want to limit large-scale infiltration attacks (getting into people's networks when you're not supposed to).

First, they wanted to understand why people are willing to accept requests.

  • Mainly based on picture, name and knowing the person in real life.
  • This suggests UI changes for presenting friend requests to people. Emphasize the things that are most useful.

Final Panel

I didn't take notes on this but there are a few things I remember:

  • Software has trouble determining intent, so it gives up and has to ask a user. Can we determine intent so we don't need to ask?
  • According to a panelist, MS SmartScreen is collecting certs