Showing posts with label Privacy.

Friday, November 16, 2012

Utah Is Key to Reforming Digital Privacy Law

CDT is joining a politically diverse coalition to launch VanishingRights.com, a campaign to update the Electronic Communications Privacy Act (ECPA).

ECPA was passed in 1986. It sets out rules for when the government can access our digital information – and it’s woefully out of date.

Under ECPA, the government contends that stored email, other information in the cloud, and location data generated by mobile devices do not receive full Fourth Amendment protection – meaning the government can access them without a warrant. Postal mail and phone calls, on the other hand, receive full Fourth Amendment protection. The distinction makes no sense.

On November 15, the Senate Judiciary Committee could take a big step toward bringing privacy law into the 21st Century when it votes on ECPA reform.

This is a critical vote. Bipartisan support in the Judiciary Committee is needed to advance ECPA reform in the Senate. The best hope for Republican support on the Judiciary Committee likely rests with Utah Senators Orrin Hatch and Mike Lee. However, the Senators need to hear from their constituents – especially from those in the Utah tech community who are affected by this outdated law.

VanishingRights.com provides phone numbers for Members of the Judiciary Committee. If you care about digital privacy rights, please take a few moments before November 15 to call and ask your Senators to support strong ECPA reform and to oppose any weakening amendments.



Monday, November 12, 2012

Proposals to Children's Privacy Rule Pose Real Problems for Free Expression and Innovation

The FTC is proposing changes to the Children’s Online Privacy Protection Act (COPPA) rule that would increase uncertainty for website operators and app developers and could bring a whole new set of sites and services into COPPA’s scope. COPPA requires operators of websites and online services that are targeted to children, or that know a particular user is a child under the age of 13, to obtain verified parental consent before collecting the child’s personal information.

A lot has changed about the collection and use of personal information online since COPPA was enacted in 1998, and the FTC started the current Rule review process in 2010. CDT weighed in on previous rounds of comments, recognizing the need to bring COPPA up to date but cautioning the FTC that changes to COPPA’s age limit or the range of sites it covers would have severe consequences for minors’ and adults’ First Amendment rights.

The FTC has been a strong voice in keeping COPPA focused on children under 13, but, as we discussed in Ars Technica last week, several of its most recent proposals introduce vagueness and uncertainty into COPPA’s scope, which could have real impacts on online innovation and free expression. CDT, joined by the American Library Association, filed comments yesterday that discuss how.

How do plugin providers know where on the web their code is installed?

Some plugins require websites that want to use their code to sign up and receive permission (for example, through a mechanism like an API key). Many others – such as YouTube's embeddable video player – don't: anyone can embed a YouTube video by copying a bit of code and pasting it into their web page.

When you visit a website and a plugin is activated by your browser, the browser sends the plugin's provider the embedding site's URL – and nothing more – in what's known as a "referrer header." A referrer that points to, say, "moshimonsters.com" doesn't tell the plugin operator whether the site serves up animated cartoon characters or the characteristics of classic cars. For that kind of data, plugin operators have to look elsewhere, such as to analytics providers.

However, an analytics provider may only provide an estimate of a site's typical user. For example, Quantcast provides this type of estimated data for moshimonsters.com. Such estimates may not be precise enough to measure an audience "disproportionately" made of kids, as contemplated by the FTC under the proposed update to the COPPA rule.
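To make concrete how little the referrer header itself reveals, here is a minimal sketch in Swift (the URL shown is a hypothetical example, not real traffic); the host name is essentially all a plugin operator learns about the embedding page:

    import Foundation

    // The only page information a plugin provider automatically receives is the
    // embedding page's URL, carried in the HTTP "Referer" header. Its value is
    // just a string like this one (hypothetical example):
    let referer = "http://www.moshimonsters.com/some-page"

    if let url = URL(string: referer), let host = url.host {
        // The plugin operator can see the host name...
        print("Embedding site: \(host)")   // "www.moshimonsters.com"
        // ...but nothing in the header says whether that site serves cartoon
        // characters for kids or classic-car listings for adults.
    }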

One of the biggest changes the FTC proposes is to require the operator of a third-party plugin – meaning analytics providers, advertising networks, social widgets, and any other third-party code running on a given site – to obtain verified parental consent if it “knows or has reason to know” it is collecting information through a site directed to children. This provision is vague: operators are given no clear guidance on what type of notice would be sufficient to trigger it, only a warning that they will not be able to ignore “credible information”. But even if the FTC proposed a thoroughly clear notice-and-action regime for plugin operators, it would still be unfair to put these obligations on third parties that aren’t targeting children themselves and can’t control whether child-oriented first parties use their code. [See the sidebar above for more.]

Fear of these consequences could prevent plugin services from sharing their code with other sites and services, since it’s not clear that there’s anything the plugin developer could do to avoid incurring COPPA obligations. CDT argues that the responsibility for complying with COPPA should lie with the first-party operators who have the direct relationship with users, except in rare circumstances when a plugin purposefully targets children or has actual knowledge that it’s collecting children’s information.

The issue arises only because of another of the Commission’s proposed changes: adding “IP address or other persistent identifier” to COPPA’s definition of personal information, except where that information is collected to support internal operations. In the broader consumer privacy context, CDT has argued that pseudonymous identifiers can act as “personally identifiable information” in certain circumstances. In the COPPA context, however, we have consistently raised the concern that wholesale treatment of IP addresses as COPPA-covered “personal information” would have the unintended consequence that sites directed to children could not comply with the law, because they would necessarily collect an IP address before having any opportunity to obtain parental consent.

The FTC’s proposal to provide exemptions for uses of persistent identifiers to “support the internal operations of a site or service” is a good approach, and these exemptions should be clearly and specifically stated by the FTC. We have advocated for exemptions for operations such as content delivery, site analytics, contextual advertising, identity transaction, and fraud prevention in our work on a universal Do Not Track tool.

But plugins themselves must have the exemption, too – running on the first-party website, they will be “collecting” personal information under COPPA through no effort of their own, as IP addresses are automatically transmitted to them. Without an exemption for the basic, functional uses that plugins make of IP addresses and other persistent identifiers, it will be difficult for either the first-party operators or the plugin operators themselves to understand when the use of such information is exempted and when it is not. Children’s sites should be able to do contextual advertising and analytics via plugins. The effect of COPPA should not be to make children’s sites shoddy and impoverished, but a failure to extend the exemption to those plugin operators would make it difficult indeed for plugins to comply.

We asked the Commission for another key clarification, regarding the liability it envisions for platform operators – general-audience services, like the Apple App Store, that can host a wide range of content, including content aimed at children. Fundamentally, we think responsibility under COPPA should lie with the entity making the decision to collect data from children, be it the first-party children’s site choosing to use plugins and ad networks, or the app developer choosing to make apps for children.

Another recent proposal would muddy the understanding of which sites and services the FTC considers to be “directed to children.” The FTC proposes expanding the definition of “directed to children” to include sites and services that are “likely to attract an audience that includes a disproportionately large percentage of children under 13 as compared to the percentage of such children in the general population.” Yet the FTC gives no sense of what would count as “disproportionate”, and does not adequately address its own previous acknowledgement that demographic data “is neither available for all websites and online services, nor is it sufficiently reliable, to adopt it as a per se legal standard”. It would be exceptionally complicated for site operators to gauge what proportions of their audience fall into precise age categories. And attempts to get more information about site demographics would just result in more tracking and data collection from all users.

This “disproportionate” standard would blur the line that the FTC has drawn over the years between sites intentionally, actively aiming for an audience of children and the rest of the Internet. The FTC’s current test for “directed to children” involves a number of variables – including whether the site has cartoon characters or celebrities that appeal to children, uses language pitched at a young audience, or deals in subject matter designed for children – that, taken as a whole, identify sites that appeal to children and likely don’t appeal to anyone else. Shifting away from that standard toward one that pulls in sites aimed at a general audience that happen to appeal to children as well as teens or adults would radically upset the balance that COPPA has thus far achieved.

The FTC goes on to propose that sites that may fall into the “disproportionate” category could be saved from liability if their operators ask all users for age information prior to collecting any personal information. Implementing age-screening technology would place financial and resource burdens on operators. Still, given the potential breadth of the “disproportionate” standard and operators’ general inability to determine whether they meet it, this age-screening carve-out would likely look like the least bad option to operators trying to know where they stand under the law.

But the decade-long litigation over the Child Online Protection Act (the confusingly similarly named COPA) established that federal laws burdening operators’ ability to provide constitutionally protected material are suspect under the First Amendment. Further, requiring the provision of personal information prior to accessing protected speech is a violation of users’ First Amendment right to access information anonymously. The FTC is putting COPPA on a dangerous path by introducing even a soft version of an age-verification mandate into the Rule.



Wednesday, October 31, 2012

Laptop Spying Case Indicates More Aggressive FTC Stance on Privacy

The Federal Trade Commission announced late last month that it had settled a landmark case with seven rent-to-own companies and a software design firm over alleged consumer spying via laptop webcams, screenshots, and keystroke monitoring. The settlement is important because it marks the most expansive use yet of the FTC’s “unfairness” authority to pursue privacy violations. With privacy legislation stalled in Congress for the near term, this action could signal more aggressive FTC enforcement under its existing authority to rein in dubious privacy practices.

According to the complaint, the software company, DesignerWare, provided software to rent-to-own franchises that rented laptops to consumers. The software was designed to allow franchises to shut off computers remotely if the rental contract had been breached – for example, if customers failed to make timely payments or if they stopped communicating with the franchise.

DesignerWare’s programs, however, were capable of much more than just remotely deactivating computers. Via Detective Mode, a special add-on feature, rent-to-own franchises could track a computer’s physical location, create fake software registration windows to gather information, log keystrokes, take screenshots, and even spy on consumers via the laptop’s webcam. In some instances, Detective Mode-enabled webcams took pictures of children, naked people, and people having sex. As a result, the FTC charged DesignerWare and the rent-to-own companies with violations of the FTC Act.

Nearly every other developed country has instituted robust privacy protections that follow the Fair Information Practice Principles (FIPPs). In the US, by contrast, the FTC can only use the FTC Act of 1914, which established the agency and gave it the power to regulate unfair and deceptive acts or practices in commerce. In recent years, the FTC has relied upon its “deception” authority more than “unfairness” to pursue privacy violations, as in the recent Myspace case. There, Myspace claimed in its privacy policy that it would not share users' personally identifiable information (PII) without first giving notice to and obtaining consent from users. The FTC alleged, however, that Myspace gave third-party advertisers access to Friend IDs, which allowed advertisers, at a minimum, to learn the full names of individual users – in violation of the privacy policy.

Most privacy cases rely on these types of “gotcha” scenarios, where a company misrepresents some aspect of its practices and can then be charged with acting deceptively. Structurally problematic website practices are less frequently the subject of FTC Act enforcement, in part because they are harder to discover. In addition, institutionalized practices might not be deceptive so much as confusing or obscure to users. At first glance, unfairness seems a stronger fit for privacy cases in which users may not be aware of undisclosed practices that collect and use their data. However, the unfairness enforcement power requires a three-part analysis set out in the FTC Act: an unfair act or practice must (1) cause or be likely to cause substantial injury to consumers, (2) that is not reasonably avoidable by consumers themselves, and (3) that is not outweighed by countervailing benefits to consumers or to competition. While public policy considerations can play a role in this analysis, they cannot be the primary justification for an unfairness claim.

However, the unfairness prong has been applied in data security cases. Several high-profile actions, including those against Reed Elsevier, BJ’s Wholesale Club, and Wyndham Hotels, have alleged weak or ineffective security protecting user data. In these cases, companies were responsible for user PII, including names, credit card numbers, Social Security numbers, addresses, purchase histories, and dates of birth, but failed to implement adequate security measures such as anonymization, encryption, and user verification. As a result of these lax practices, consumers were exposed to the possibility of identity theft or other fraud – a very real injury that could not be offset by any possible benefit.

Privacy practices, unlike security measures, are more difficult to evaluate under the unfairness test. Under the third prong of the balancing test, companies that engage in bad privacy practices can point to a corresponding consumer benefit, making an unfairness claim harder to sustain. If a company has a policy that might expose a consumer to harm, it can often assert countervailing benefits that cut against an unfairness claim. For example, targeting users with ads based on their preferences and personal characteristics might implicate the unfairness prong, but a company could assert that targeted ads actually benefit users by surfacing products that are particularly appealing to them. Because the unfairness test has a built-in escape hatch for defendants, it can be a challenge for the FTC to successfully litigate unfairness claims.

It can also be difficult to determine what kind of harm is sufficient for the unfairness standard. CDT has suggested that the types of harm that result from privacy violations should be interpreted broadly, including data breaches, obstacles to innovation, dangers from government access, and encroachments upon individual liberty. In our prior commentary, we have argued for the FTC’s adoption of FIPPs in its understanding of consumer harm under the unfairness prong.

The Commission has recently indicated that it might expand its conception of what constitutes unfairness in the privacy context. For example, its high-profile settlement with Facebook included unfairness claims in addition to deception claims. But as the first major case relying upon unfairness concerning the dissemination of consumer PII, the settlement marks a major step forward in ensuring that the government protects user privacy.

In its complaint against DesignerWare, the Commission made its strongest statement yet that poor privacy practices are governed by its unfairness authority, indicating that it considers harm sufficiently likely when personal, financial, and confidential information is disclosed to third parties. In the complaint, the Commission confidently alleged that DesignerWare’s software caused substantial harm:

Because of DesignerWare’s intrusions, consumers are at risk of harm from the exposure of personal, financial account access, and medical information to strangers. Consumers are harmed by DesignerWare’s unwarranted invasion into their homes and lives and its capture of the private details of individual and family life, including, for example, images of visitors, children, family interactions, partially undressed individuals, and couples engaged in intimate activities. Sharing these images with third parties can cause consumers financial and physical injury and impair their peaceful enjoyment of their homes. Consumers cannot reasonably avoid these injuries because PC Rental Agent is invisible to them. The harm caused by respondents’ unauthorized collection and disclosure of confidential consumer information is not outweighed by countervailing benefits to consumers or to competition.

However, the drawbacks of unfairness – its multifaceted balancing test and the lack of clarity over what is fair and what isn’t – demonstrate why CDT has long argued for a substantive baseline consumer privacy law that protects users and encourages innovation. A baseline consumer privacy law would provide clear guidance to companies and define acceptable practices, while clearly and forcefully demonstrating to consumers that regulators are committed to protecting user privacy and promoting fair practices. In the interim, however, the FTC’s renewed commitment to using unfairness to protect consumers is welcome.



Apple iOS 6 and Privacy

When iOS 6 was released last week, the "big news" was Apple's decision to drop Google Maps. In the uproar that followed, iOS 6's privacy features received little fanfare, despite undergoing a major overhaul. Many changes CDT has advocated for—including giving users more control over tracking and increasing the visibility of and options in the privacy settings—have been adopted in the new version.

In Settings, Apple has created a new Privacy tab. It contains the familiar Location Services controls, allowing users to determine which apps have access to the device’s location. The Privacy tab also lists a number of other types of data – including Contacts, Calendars, Reminders, Photos, and Bluetooth – that now require an explicit request to the user before an app can access them. (Android, by contrast, lists all the information and services an app can access at installation time; those permissions can't be changed later without a manual app update and a new permissions notice to the user.)
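As an illustration of this per-data-type consent model, here is a minimal Swift sketch of an app requesting Contacts access; it uses the modern Contacts framework for readability, whereas the iOS 6-era equivalent was the AddressBook C API:

    import Contacts

    // Before reading any contact data, the app must ask the system, which in
    // turn prompts the user with an explicit permission dialog.
    let store = CNContactStore()
    store.requestAccess(for: .contacts) { granted, error in
        if granted {
            print("User allowed access to Contacts")
        } else {
            print("Access denied: \(error?.localizedDescription ?? "user declined")")
        }
    }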

Apple has also allowed users to limit advertising tracking, via Settings > General > About > Advertising > Limit Ad Tracking. When users enable this setting, they set a “flag” that tells apps they don't want to be tracked, much in the same spirit as the W3C’s Do Not Track efforts. It’s unclear why this setting lives outside the Privacy settings, buried deep in the General settings, but its existence and functionality are welcome.

Apple has incorporated three new identifiers to take the place of the much-maligned and unchangeable UDID: iOS 6 now makes available a vendor-specific identifier, identifierForVendor, that can be used by app developers to recognize a device across their apps; a second identifier for advertising purposes, advertisingIdentifier, that can be used by third-party ad networks to identify a device for advertising purposes; and a third application identifier, UUID, that is a more accessible way for applications to create identifiers specific to that application. These three IDs may sound similar but the details are quite different: The vendor identifier is cleared when the user uninstalls the last app on their phone by a given vendor; the advertising identifier persists until the device is completely reset; the application identifier persists only if the application saves it, and then only until that application is uninstalled. Each of these new identifiers is preferable to the UDID, which cannot be modified.
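For developers, a rough sketch of how each identifier is obtained looks like the following (modern Swift shown for readability; the iOS 6-era calls were the Objective-C equivalents of these same APIs):

    import UIKit
    import AdSupport

    // 1. Vendor identifier: shared by all apps from the same vendor on a device;
    //    cleared once the user uninstalls that vendor's last app.
    let vendorID = UIDevice.current.identifierForVendor?.uuidString

    // 2. Advertising identifier: one value for the whole device, readable by any
    //    ad network; it persists until the device is completely reset.
    let adID = ASIdentifierManager.shared().advertisingIdentifier.uuidString

    // 3. Application-created UUID: random each time it is generated; it persists
    //    only if the app saves it (e.g., in UserDefaults) and disappears when the
    //    app is uninstalled.
    let appScopedID = UUID().uuidString

    print(vendorID ?? "n/a", adID, appScopedID)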

How could these identifiers be used by apps? If a single app needs to store a lightweight device-specific identifier, it would choose the UUID (which, unlike the UDID, has a time-based element, so two UUIDs created at different times will be completely different). If an app provider needs an identifier that persists across all of its apps, it would choose the identifierForVendor – for example, allowing a family of privacy-sensitive apps like Blendr/Grindr to offer “no personal information required” logins, where account information is tied to a device instead of personal information like an email address and name. Finally, for advertising purposes, the advertisingIdentifier can be used to deliver, measure, and target advertisements. With the advertisingIdentifier, ad networks embedded in apps from different vendors will be able to track users across all the apps on a device in which they are installed – for example, a user’s love of wine in a wine cellar app could be leveraged to offer a discount on wine paraphernalia in a shopping app. This identifier is universal, making it easier for ad networks to trade and sell information about users (compared to the cookie-based model on the web, where each ad network has a different identifier for a user that only it can read). Arguably, Apple should have tried to replicate that advertiser-specific model for mobile, or at least made the identifier easier to reset.

However, the “Limit Ad Tracking” setting mitigates the persistence of the advertisingIdentifier, since app developers must check whether the user has enabled the preference before they read or use the advertisingIdentifier in their code. If “Limit Ad Tracking” is set, advertisers and ad networks may use the identifier only for a limited set of exempted purposes: “frequency capping, conversion events, estimating the number of unique users, security and fraud detection, and debugging.” CDT has long advocated for exactly this balance between user preferences and limited operational uses. It is an important and subtle balance. In negotiating the meaning of “Do Not Track” at the World Wide Web Consortium, we have argued that other uses like “market research” and “product improvement” could tip the scales too far: while those uses don’t directly affect the user’s experience, they wouldn’t be expected by users who enable the Limit Ad Tracking preference, and they allow data collection of indeterminate scope and extent, potentially acting as exceptions that swallow the rule. The balance Apple strikes here on permitted uses is a careful and appropriate one between honoring users’ desire to limit advertising tracking and preserving a baseline of accepted uses that keep the app ecosystem healthy. Furthermore, because Apple must approve iOS apps, developers must respect the user’s Limit Ad Tracking choice or face rejection or removal. This is in sharp contrast to Do Not Track, which requires affirmative representations and agreement from advertising networks to have any weight.
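In practice, an ad SDK honoring the setting might do something like this sketch (again in modern Swift; isAdvertisingTrackingEnabled was the relevant AdSupport flag until its deprecation many versions later):

    import AdSupport

    let manager = ASIdentifierManager.shared()

    if manager.isAdvertisingTrackingEnabled {
        // The user has not limited tracking: the identifier may be used to
        // deliver, measure, and target advertising.
        let trackingID = manager.advertisingIdentifier.uuidString
        print("Tracking permitted; advertising identifier: \(trackingID)")
    } else {
        // Limit Ad Tracking is on: restrict use to the exempted purposes only --
        // frequency capping, conversion events, unique-user estimation,
        // security and fraud detection, and debugging.
        print("Limit Ad Tracking enabled; exempted uses only")
    }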

Finally, iOS 6 fixes some 200 security vulnerabilities across the operating system. Some are serious, ranging from bypassing the PIN-enabled lock screen, to viewing pictures taken on the device without entering a PIN, to running arbitrary code on the device by loading a malicious image file. Unfortunately, iOS 6 is only available for the iPhone 3GS and later, the fourth-generation iPod Touch and later, and the iPad 2 and later, meaning the many older devices still in use will remain subject to these problems.

CDT applauds Apple’s decision to incorporate these substantial pro-privacy elements into iOS 6, allowing users to finely control how their data gets shared with specific apps, and to more easily express a desire not to be tracked by marketers. We hope that this effort encourages mobile OS vendors to continue to iterate and compete on built-in privacy controls. For years, CDT has periodically published a report comparing the privacy settings for the major browser vendors. We are now in the process of evaluating the major mobile OS platforms in terms of comparative privacy features. Stay tuned!



Sunday, October 28, 2012

Proposals to Children's Privacy Rule Pose Real Problems for Free Expression and Innovation

The FTC is proposing changes to the Children’s Online Privacy Protection Act (COPPA) rule that will increase uncertainty for website operators and app developers and could bring a whole new set of sites and services into COPPA’s scope. COPPA requires operators of websites and online services that are targeted to children, or who know a particular user is a child under the age of 13, to obtain verified parental consent before collecting the child’s personal information.

A lot has changed about the collection and use of personal information online since COPPA was enacted in 1998, and the FTC started the current Rule review process in 2010. CDT weighed in on previous rounds of comments, recognizing the need to bring COPPA up to date but cautioning the FTC that changes to COPPA’s age limit or the range of sites it covers would have severe consequences for minors’ and adults’ First Amendment rights.

The FTC has been a strong voice in keeping COPPA focused on children under 13, but, as we discussed in Ars Technica last week, several of their most recent proposals introduce vagueness and uncertainty into COPPA’s scope, which could have real impacts on online innovation and free expression. CDT, joined by the American Library Association, filed comments yesterday that discuss how.

How do plugin providers know where on the web their code is installed?

Some plugins require websites that want to use their code to sign up and receive permission (e.g., through a mechanism like an API key).  There are many examples of plugins--such as those used by YouTube--that don't. Anyone can embed a YouTube video by copying a bit of code and pasting it into their web page.

When you visit a website and a plugin is activated by your browser, the originator of that plugin is sent a bit of code containing the site's URL--and nothing more--by what's known as a "referrer header." A referrer that points to say, "moshimonsters.com," doesn't tell the plugin operator whether the site serves up animated cartoon characters or the characteristics of classic cars. For that kind of data, plugin operators have to look elsewhere, such as analytics providers.

However, an analytics provider may only provide an estimate of a site's typical user. For example, Quantcast provides this type of estimated data for moshimonsters.com. Such estimates may not be precise enough to measure an audience "disproportionately" made of kids, as contemplated by the FTC under the proposed update to the COPPA rule.

One of the biggest changes the FTC proposes is to require the operator of a third-party plugin – meaning analytics providers, advertising networks, social widgets, and any other third-party code running on a given site – to obtain verified parental consent if it “knows or has reason to know” it is collecting information through a site directed to children. This provision is vague: operators are not given clear guidance on what type of notice would be sufficient to trigger this provision, but are just warned that they will not be able to ignore “credible information”. But even if the FTC proposed a thoroughly clear notice-and-action regime for plugin operators, it would still be unfair to put these obligations on third-parties who aren’t targeting children themselves and can’t control whether child-oriented first-parties use their code. [See sidebar for more.]

Fear of these consequences could prevent plugin services from sharing their code with other sites and services, since it’s not clear that there’s anything the plugin developer could do to avoid incurring COPPA obligations. CDT argues that the responsibility for complying with COPPA should lie with the first-party operators who have the direct relationship with users, except in rare circumstances when a plugin purposefully targets children or has actual knowledge that it’s collecting children’s information.

The issue only arises because of another of the Commission’s proposed changes: adding “IP address or other persistent identifier” to COPPA’s definition of personal information, except in cases where this information is collected for support of internal operations. In the broader consumer privacy context, CDT has argued for recognizing the ability for pseudonymous identifiers to act as “personally identifiable information” in certain circumstances. In the COPPA context, we have persistently raised the issue that wholesale coverage of IP address as COPPA-covered “personal information” would lead to the unintended consequence that sites directed to children could not comply with the law, because they would necessarily collect IP address before having an opportunity to obtain parental consent.

The FTC’s proposal to provide exemptions for uses of persistent identifiers to “support the internal operations of a site or service” is a good approach, and these exemptions should be clearly and specifically stated by the FTC. We have advocated for exemptions for operations such as content delivery, site analytics, contextual advertising, identity transaction, and fraud prevention in our work on a universal Do Not Track tool.

But plugins themselves must have the exemption, too – running on the first-party website, they will be “collecting” personal information under COPPA through no effort of their own, as IP addresses are automatically transmitted to them. Without an exemption for the basic, functional uses that plugins make of IP address and other persistent identifiers, it will be difficult for either the first-party operators or the plugin operators themselves to understand when the use of such information is exempted and when it is not. Children’s sites should be able to do contextual advertising and analytics via the use of plugins. The effect of COPPA should not be to make children’s sites shoddy and impoverished, but a failure to extend the exemption to those plugin operators would make it difficult indeed for plugins to comply.

We asked the Commission for another key point of clarification, regarding the liability they envision for platform operators – those services, like the Apple AppStore, that act as general-audience services that could support development of a wide range of content, including content aimed at children. Fundamentally, we think responsibility under COPPA should lie with the entity making the decision to collect data from children, be it the first-party children’s site that’s choosing to use plugins and ad networks, or the app developer who is choosing to make apps for children.

Another recent proposal would muddy the understanding of which sites and services the FTC considers to be “directed to children.” The FTC proposes expanding the definition of “directed to children” to include sites and services that are “likely to attract an audience that includes a disproportionately large percentage of children under 13 as compared to the percentage of such children in the general population.” Yet the FTC gives no sense of what would count as “disproportionate”, and does not adequately address its own previous acknowledgement that demographic data “is neither available for all websites and online services, nor is it sufficiently reliable, to adopt it as a per se legal standard”. It would be exceptionally complicated for site operators to gauge what proportions of their audience fall into precise age categories. And attempts to get more information about site demographics would just result in more tracking and data collection from all users.

This “disproportionate” standard would blur the line that the FTC has drawn over the years between sites intentionally, actively aiming for an audience of children and the rest of the Internet. The FTC’s current test for “directed to children” involves a number of variables – including whether the site has cartoon characters or celebrities that appeal to children, uses language pitched at a young audience, or deals in subject matter designed for children – that, taken as a whole, identify sites that appeal to children and likely don’t appeal to anyone else. Shifting away from that standard toward one that pulls in sites aimed at a general audience that happen to appeal to children as well as teens or adults would radically upset the balance that COPPA has thus far achieved.

The FTC goes on to propose that sites who may fall into the “disproportionate” category could be saved from liability if their operators ask for age information from all users prior to collecting any personal information. Implementing age-screening technology would place financial and resource burdens on operators. However, between the potential breadth of the “disproportionate” standard and operators’ general inability to determine whether they meet the standard, this age-screening carve-out for liability would likely seem the least of three evils to operators attempting to know where they stand under the law.

But the decade-long litigation over the Child Online Protection Act (the confusingly similarly named COPA) established that federal laws burdening operators’ ability to provide constitutionally protected material are suspect under the First Amendment. Further, requiring the provision of personal information prior to accessing protected speech is a violation of users’ First Amendment right to access information anonymously. The FTC is putting COPPA on a dangerous path by introducing even a soft version of an age-verification mandate into the Rule.


View the original article here

Saturday, October 13, 2012

Laptop Spying Case Indicates More Aggressive FTC Stance on Privacy

The Federal Trade Commission announced late last month that it had settled a landmark case with seven rent-to-own companies and a software design firm for alleged consumer spying via laptop webcams, screenshots, and keystroke monitoring. This settlement is important because it marks the most expansive use by the FTC of its “unfairness” authority to pursue privacy violations. As privacy legislation has stalled in Congress in the short term, this latest action could signal more aggressive FTC action under its existing authority to reign in dubious privacy practices.

According to the complaint, the software company, DesignerWare, provided software to rent-to-own franchises that rented laptops to consumers. The software was designed to allow franchises to shut off computers remotely if the rental contract had been breached – for example, if customers failed to make timely payments or if they stopped communicating with the franchise.

DesignerWare’s programs, however, were capable of much more than just remotely deactivating computers. Via Detective Mode, a special add-on feature, rent-to-own franchises could track a computer’s physical location, create fake software registration windows to gather information, log keystrokes, take screenshots, and even spy on consumers via the laptop’s webcam. In some instances, Detective Mode-enabled webcams took pictures of children, naked people, and people having sex. As a result, the FTC charged DesignerWare and the rent-to-own companies with violations of the FTC Act.

Nearly every other developed country has instituted robust privacy protections that follows the Fair Information Practice Principles (FIPPs). In the US, by contrast, the FTC can only use the FTC Act of 1914, which established the agency and gave it the power to regulate unfair and deceptive acts or practices in commerce. In recent years, the FTC has relied upon its “deceptiveness” authority more than “unfairness” in order to pursue privacy violations, as in the recent MySpace case. In that case, Myspace claimed in its privacy policy that it would not share users' personally identifiable information (PII) without first requiring notice and consent from users. However, the FTC alleged that Myspace gave third-party advertisers access to Friend IDs, which allowed advertisers at a minimum to learn the full names of individual users, which violated the terms of the privacy policy.

Most privacy cases rely on these types of “gotcha” scenarios, where a company mistakenly represents some aspect of their practices and then can be charged with acting deceptively. Structurally problematic website practices are less frequently the subject of FTC Act enforcement cases, in part because they are harder to discover. In addition, institutionalized practices might not necessarily be deceptive, but rather confusing or obscure to users. At first glance, unfairness seems a stronger fit for privacy cases in which users may not be aware of undisclosed practices that collect and use their data. However, the unfairness enforcement power requires a three-part analysis as set out by the FTC Act. Unfair acts or practices must cause (or be likely to cause) (1) substantial injury to consumers (2) that cannot be reasonably avoidable, (3) and are not offset by benefits to consumers. While public policy considerations can play a role in this analysis, they cannot be the primary justification for an unfairness claim.

However, the unfairness prong has been applied in data security cases. Several high profile actions, including those against Reed Elsevier, BJ’s Wholesale Club, and Wyndham Hotels, have alleged weak or ineffective security systems protecting user data. In these cases, companies were responsible for user PII, including names, credit card numbers, Social Security numbers, addresses, purchase histories, and dates of birth. But these companies failed to enact adequate security methods, including anonymization, encryption, and user verification. As a result of these lax security procedures, consumers were exposed to the possibility of identity theft or other fraudulent activities – a very real injury that could not be offset by any possible benefit.

Privacy practices, unlike security measures, are more difficult to evaluate under the unfairness test. Under the third prong of the unfairness balancing test, companies that engage in bad privacy practices can point to a corresponding consumer benefit, making an unfairness claim unsuitable. If a company has a policy that might expose a consumer to harm, that company can often assert that there are countervailing benefits that point against an unfairness claim. For example, targeting users with ads based on their preferences and personal characteristics might implicate the unfairness prong, but a company could assert that targeted ads are actually beneficial to users, because they provide information about products that are particularly appealing to individual users. Because the unfairness test has a built in escape hatch for defendants, it can be a challenge for the FTC to successfully litigate unfairness claims.

It can also be difficult to determine what kind of harm is sufficient for the unfairness standard. CDT has suggested that the types of harm that result from privacy violations should be interpreted broadly, including data breaches, obstacles to innovation, dangers from government access, and encroachments upon individual liberty. In our prior commentary, we have argued for the FTC’s adoption of FIPPs in its understanding of consumer harm under the unfairness prong.

The Commission has recently indicated that it might expand its conception of what constitutes unfairness in the privacy context. For example, its high profile settlement with Facebook included unfairness claims in addition to deception claims. But, as the first major case relying upon unfairness concerning the dissemination of consumer PII, the settlement indicates a major step forward in ensuring that the government protects user privacy.

In its complaint against Designware, the Commission made its strongest statement that poor privacy practices are governed by its unfairness authority, indicating that it considers harm to be sufficiently likely as a result of disclosing personal, financial, and confidential information to third parties. In its complaint, the Commission confidently alleged that DesignerWare’s software caused substantial harm:

Because of DesignerWare’s intrusions, consumers are at risk of harm from the exposure of personal, financial account access, and medical information to strangers. Consumers are harmed by DesignerWare’s unwarranted invasion into their homes and lives and its capture of the private details of individual and family life, including, for example, images of visitors, children, family interactions, partially undressed individuals, and couples engaged in intimate activities. Sharing these images with third parties can cause consumers financial and physical injury and impair their peaceful enjoyment of their homes. Consumers cannot reasonably avoid these injuries because PC Rental Agent is invisible to them. The harm caused by respondents’ unauthorized collection and disclosure of confidential consumer information is not outweighed by countervailing benefits to consumers or to competition.

However, the drawbacks of unfairness – its multifaceted balancing test and lack of clarity over what is fair and what isn’t – demonstrates why CDT has long argued for a substantive baseline consumer privacy law protecting users and encouraging new innovations. A baseline consumer privacy law would provide clear guidance to companies and define acceptable practices, as well as clearly and forcefully demonstrate to consumers that regulators are committed to protecting user privacy and promoting fair practices. In the interim, however, the FTC’s renewed commitment to using unfairness to protect consumers is welcome.


View the original article here

Tuesday, October 9, 2012

Apple iOS 6 and Privacy

When iOS 6 was released last week, the "big news" was Apple's decision to drop Google Maps. In the uproar that followed, iOS 6's privacy features received little fanfare, despite undergoing a major overhaul. Many changes CDT has advocated for—including giving users more control over tracking and increasing the visibility of and options in the privacy settings—have been adopted in the new version.

In Settings, Apple has created a new Privacy tab (see the images below). It contains the familiar Location Services tab, allowing users to determine which apps have access to the device’s location. The Privacy tab also lists a number of other types of data that will now require explicit requests to the user for data sharing, including Contacts, Calendars, Reminders, Photos, and Bluetooth. (Android, by contrast, lists all information and services that an app can access during installation, although they can't be changed later without a manual app update and a permissions notice to the user.)

Apple has also allowed users to limit advertising tracking, via Settings > General > About > Advertising > Limit Ad Tracking. When users enable this setting, they are setting a “flag” that tells apps they don't want to be tracked, much in the same spirit of the W3C’s Do Not Track efforts. It’s unclear why this setting is located outside of the Privacy settings and deep in the General settings, but its existence and functionality are welcome.

Apple has incorporated three new identifiers to take the place of the much-maligned and unchangeable UDID: iOS 6 now makes available a vendor-specific identifier, identifierForVendor, that can be used by app developers to recognize a device across their apps; a second identifier for advertising purposes, advertisingIdentifier, that can be used by third-party ad networks to identify a device for advertising purposes; and a third application identifier, UUID, that is a more accessible way for applications to create identifiers specific to that application. These three IDs may sound similar but the details are quite different: The vendor identifier is cleared when the user uninstalls the last app on their phone by a given vendor; the advertising identifier persists until the device is completely reset; the application identifier persists only if the application saves it, and then only until that application is uninstalled. Each of these new identifiers is preferable to the UDID, which cannot be modified.

How could these identifiers be used by apps? If a single app needs to store a lightweight device-specific identifier, they would choose the UUID (the UUID is quite different from the UDID; the UUID has a time-based element which means that two UUIDs created at different times will be completely different). If an app provider needs an identifier that persists across each of their apps, they would choose the identifierForVendor, which can be used across all the apps for a given vendor; for example, allowing a family of privacy-sensitive apps like Blendr/Grindr to offer “no personal information required” logins across each app where account information is tied to a device instead of personal information like an email account and name. Finally, for advertising purposes the advertisingIdentifier can be used to deliver, measure, and target advertisements to users. With the advertisingIdentifier, Ad networks installed in apps from different vendors will be able to track users across all the apps on which they are installed on the device - for example, a user’s love of wine in a wine cellar app could be leveraged to offer a discount on wine paraphernalia in a shopping app. This identifier is universal, making it easier for ad networks to trade and sell information about users (compared to the cookie-based model on the web, where each ad network has a different identifier for a user that only it can read). Arguably, Apple should have tried to replicate an advertiser-specific identifier for mobile, or at least made the identifier easier to reset.

However, the “Limit Ad Tracking” setting ameliorates the persistence of the advertisingIdentifier as app developers will have to check if the user has enabled the preference before they read or use the advertisingIdentifier in their code. If “Limit Ad Tracking” is set, advertisers and ad networks are only allowed to use the identifier for a limited set of exempted uses: “frequency capping, conversion events, estimating the number of unique users, security and fraud detection, and debugging.” CDT has long advocated for exactly this balance between user preferences and limited operational uses. This is an important and subtle balance. In negotiating the meaning of “Do Not Track” in the World Wide Web Consortium, we have argued that other uses like “market research” and “product improvement” could tip the scales too far; while these uses don’t directly impact the user’s experience, they wouldn’t be expected by users who enable the Limit Ad Tracking preference and these uses allow data collection of indeterminate scope and extent, potentially acting as exceptions that swallow the rule. The balance struck by Apple here in terms of permitted uses is a careful and appropriate one between honoring users’ desires to limit advertising tracking and ensuring a baseline level of accepted uses that promote a healthy app ecosystem. Furthermore, because Apple must approve iOS apps, they must respect the user’s choice for Limited Ad Tracking or face rejection or removal. This is in sharp contrast to Do Not Track, which requires affirmative representations and agreement from advertising networks to have any weight.

Finally, iOS 6 fixes some 200 critical vulnerabilities across the entire operating system. Some of these vulnerabilities are serious: from allowing bypass of the PIN-enabled lock screen to viewing pictures taken on the device without entering a PIN to running arbitrary code on the device by loading a malicious image file. Unfortunately, iOS 6 is only available for iPhone 3GS and later, iPod Touch 4 or later and iPad 2 or later, meaning the large quantity of older devices will still be subject to many of these potential problems.

CDT applauds Apple’s decision to incorporate these substantial pro-privacy elements into iOS 6, allowing users to finely control how their data gets shared with specific apps, and to more easily express a desire not to be tracked by marketers. We hope that this effort encourages mobile OS vendors to continue to iterate and compete on built-in privacy controls. For years, CDT has periodically published a report comparing the privacy settings for the major browser vendors. We are now in the process of evaluating the major mobile OS platforms in terms of comparative privacy features. Stay tuned!


View the original article here

Friday, October 5, 2012

Proposals to Children's Privacy Rule Pose Real Problems for Free Expression and Innovation

The FTC is proposing changes to the Children’s Online Privacy Protection Act (COPPA) rule that will increase uncertainty for website operators and app developers and could bring a whole new set of sites and services into COPPA’s scope. COPPA requires operators of websites and online services that are targeted to children, or who know a particular user is a child under the age of 13, to obtain verified parental consent before collecting the child’s personal information.

A lot has changed about the collection and use of personal information online since COPPA was enacted in 1998, and the FTC started the current Rule review process in 2010. CDT weighed in on previous rounds of comments, recognizing the need to bring COPPA up to date but cautioning the FTC that changes to COPPA’s age limit or the range of sites it covers would have severe consequences for minors’ and adults’ First Amendment rights.

The FTC has been a strong voice in keeping COPPA focused on children under 13, but, as we discussed in Ars Technica last week, several of their most recent proposals introduce vagueness and uncertainty into COPPA’s scope, which could have real impacts on online innovation and free expression. CDT, joined by the American Library Association, filed comments yesterday that discuss how.

How do plugin providers know where on the web their code is installed?

Some plugins require websites that want to use their code to sign up and receive permission (e.g., through a mechanism like an API key).  There are many examples of plugins--such as those used by YouTube--that don't. Anyone can embed a YouTube video by copying a bit of code and pasting it into their web page.

When you visit a website and a plugin is activated by your browser, the originator of that plugin is sent a bit of code containing the site's URL--and nothing more--by what's known as a "referrer header." A referrer that points to say, "moshimonsters.com," doesn't tell the plugin operator whether the site serves up animated cartoon characters or the characteristics of classic cars. For that kind of data, plugin operators have to look elsewhere, such as analytics providers.

However, an analytics provider may only provide an estimate of a site's typical user. For example, Quantcast provides this type of estimated data for moshimonsters.com. Such estimates may not be precise enough to measure an audience "disproportionately" made of kids, as contemplated by the FTC under the proposed update to the COPPA rule.

One of the biggest changes the FTC proposes is to require the operator of a third-party plugin – meaning analytics providers, advertising networks, social widgets, and any other third-party code running on a given site – to obtain verified parental consent if it “knows or has reason to know” it is collecting information through a site directed to children. This provision is vague: operators are not given clear guidance on what type of notice would be sufficient to trigger this provision, but are just warned that they will not be able to ignore “credible information”. But even if the FTC proposed a thoroughly clear notice-and-action regime for plugin operators, it would still be unfair to put these obligations on third-parties who aren’t targeting children themselves and can’t control whether child-oriented first-parties use their code. [See sidebar for more.]

Fear of these consequences could prevent plugin services from sharing their code with other sites and services, since it’s not clear that there’s anything the plugin developer could do to avoid incurring COPPA obligations. CDT argues that the responsibility for complying with COPPA should lie with the first-party operators who have the direct relationship with users, except in rare circumstances when a plugin purposefully targets children or has actual knowledge that it’s collecting children’s information.

The issue only arises because of another of the Commission’s proposed changes: adding “IP address or other persistent identifier” to COPPA’s definition of personal information, except in cases where this information is collected for support of internal operations. In the broader consumer privacy context, CDT has argued for recognizing the ability for pseudonymous identifiers to act as “personally identifiable information” in certain circumstances. In the COPPA context, we have persistently raised the issue that wholesale coverage of IP address as COPPA-covered “personal information” would lead to the unintended consequence that sites directed to children could not comply with the law, because they would necessarily collect IP address before having an opportunity to obtain parental consent.

The FTC’s proposal to provide exemptions for uses of persistent identifiers to “support the internal operations of a site or service” is a good approach, and these exemptions should be clearly and specifically stated by the FTC. We have advocated for exemptions for operations such as content delivery, site analytics, contextual advertising, identity transaction, and fraud prevention in our work on a universal Do Not Track tool.

But plugins themselves must have the exemption, too – running on the first-party website, they will be “collecting” personal information under COPPA through no effort of their own, as IP addresses are automatically transmitted to them. Without an exemption for the basic, functional uses that plugins make of IP address and other persistent identifiers, it will be difficult for either the first-party operators or the plugin operators themselves to understand when the use of such information is exempted and when it is not. Children’s sites should be able to do contextual advertising and analytics via the use of plugins. The effect of COPPA should not be to make children’s sites shoddy and impoverished, but a failure to extend the exemption to those plugin operators would make it difficult indeed for plugins to comply.

We also asked the Commission for another key clarification, regarding the liability it envisions for platform operators – general-audience services, like the Apple AppStore, that support development of a wide range of content, including content aimed at children. Fundamentally, we think responsibility under COPPA should lie with the entity that decides to collect data from children, be it the first-party children’s site choosing to use plugins and ad networks, or the app developer choosing to make apps for children.

Another recent proposal would muddy the understanding of which sites and services the FTC considers to be “directed to children.” The FTC proposes expanding the definition of “directed to children” to include sites and services that are “likely to attract an audience that includes a disproportionately large percentage of children under 13 as compared to the percentage of such children in the general population.” Yet the FTC gives no sense of what would count as “disproportionate”, and does not adequately address its own previous acknowledgement that demographic data “is neither available for all websites and online services, nor is it sufficiently reliable, to adopt it as a per se legal standard”. It would be exceptionally complicated for site operators to gauge what proportions of their audience fall into precise age categories. And attempts to get more information about site demographics would just result in more tracking and data collection from all users.

This “disproportionate” standard would blur the line that the FTC has drawn over the years between sites intentionally, actively aiming for an audience of children and the rest of the Internet. The FTC’s current test for “directed to children” involves a number of variables – including whether the site has cartoon characters or celebrities that appeal to children, uses language pitched at a young audience, or deals in subject matter designed for children – that, taken as a whole, identify sites that appeal to children and likely don’t appeal to anyone else. Shifting away from that standard toward one that pulls in sites aimed at a general audience that happen to appeal to children as well as teens or adults would radically upset the balance that COPPA has thus far achieved.

The FTC goes on to propose that sites that may fall into the “disproportionate” category could be saved from liability if their operators ask all users for age information before collecting any personal information. Implementing age-screening technology would impose financial and resource burdens on operators. But between the potential breadth of the “disproportionate” standard and operators’ general inability to determine whether they meet it, this age-screening carve-out would likely seem the least of the available evils to operators trying to know where they stand under the law.
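For illustration only, a neutral age screen along these lines might look like the sketch below. The Rule does not prescribe a mechanism, so the function names, the consent flags, and the example date are hypothetical.

# Minimal sketch of a neutral age gate: ask for a birth date before collecting
# anything else, and route under-13 visitors to a parental-consent flow.
from datetime import date

def is_under_13(birth_date, today=None):
    """Return True if the person has not yet turned 13 as of `today`."""
    today = today or date.today()
    age = today.year - birth_date.year - (
        (today.month, today.day) < (birth_date.month, birth_date.day))
    return age < 13

def age_gate(birth_date, today=None):
    if is_under_13(birth_date, today):
        # COPPA path: collect no personal information until verifiable
        # parental consent has been obtained.
        return "require_parental_consent"
    return "no_coppa_consent_needed"

# Example: as of November 2012, a visitor reporting a June 2003 birth date
# is nine years old, so the parental-consent branch applies.
print(age_gate(date(2003, 6, 1), today=date(2012, 11, 12)))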

But the decade-long litigation over the Child Online Protection Act (the confusingly similarly named COPA) established that federal laws burdening operators’ ability to provide constitutionally protected material are suspect under the First Amendment. Further, requiring the provision of personal information prior to accessing protected speech is a violation of users’ First Amendment right to access information anonymously. The FTC is putting COPPA on a dangerous path by introducing even a soft version of an age-verification mandate into the Rule.


View the original article here

Thursday, September 27, 2012

Announcing a New Forum to Discuss Privacy

In order to support NTIA’s multistakeholder convening around mobile privacy, CDT is setting up an online forum for people to present and discuss ideas related to that effort. Starting today, anyone can go to www.privacymsh.org to contribute by posting to a community message board, suggesting text to a wiki, or signing up for a public email discussion list.

Setting up this site has been a collaborative effort. Ross Schulman from CCIA, Nick Doty from Berkeley, and Cyrus Nemati from CDT worked together with me to create privacymsh.org (and all four of us will be administrators on the forum). We decided that having some sort of open forum for discussion might be useful to advance the dialogue during the interims between NTIA meetings (and potentially during the meetings themselves). We are committed to trying to make this collaborative approach to privacy work, and we hope that this site can help all voices be heard as they communicate ideas for promoting mobile privacy (as well as whatever other topics NTIA might tackle).

These tools are very much a work in progress; the bare-bones look of the site may change, and the group may eventually decide that something else would work better. We’re not sure whether people will find the message board or the mailing list more effective for generating discussion. On the one hand, email is an effective way to keep people constantly up to speed on the state of the discussion. On the other hand, those of us involved in the email-intensive W3C Do Not Track policy process weren’t sure that people would want every discussion point pushed to their inbox. (In any event, the message board can be configured to send you email notifications when people respond to your points.) We encourage people to experiment to see what’s most effective — these forums are designed to be iterative.

CDT wants to see the NTIA process deliver strong, flexible, and consistent privacy protections for consumers. We hope these discussion tools promote an open and productive dialogue among advocates, industry, and regulators.


View the original article here

Oversight of Government Privacy, Security Rules for Health Data Questioned

Oversight and accountability for following federal privacy and security rules are critical if the public is going to trust that the next generation of electronic health care providers, insurers, and billing services can protect the privacy of their medical information. A recent report by the Government Accountability Office questions whether sufficient work is being done to build that public trust.

The GAO report says the Department of Health and Human Services has failed to issue new rules for protecting personal health information and lacks a long-term plan for ensuring that those new rules are being followed.  The HHS Office for Civil Rights (OCR), which is responsible for overseeing these efforts, acknowledged these concerns but noted that rules are winding their way through government channels and that they have "taken the necessary first steps towards establishing a sustainable" oversight program.   

The report's two main concerns are: (1) the urgent need for guidance on de-identification methods, and (2) lack of a long-term plan for auditing covered entities and business associates for compliance with federal privacy and security rules (specifically, HIPAA and HITECH).

De-Identification Guidance

De-identification is a tool that enables health data to be used for a broad range of purposes while minimizing the risks to individual privacy. Under HIPAA, there are two methods for de-identifying health data. The first is the safe harbor method, which requires the removal of 18 specific categories of identifiers, such as names, addresses, dates of birth or of health care services, and other unique identifiers. The second is the expert determination method, under which an expert certifies that the data, in the hands of the intended recipient, poses a very small risk of re-identification. The safe harbor method is static: it presumes that removing the 18 categories of identifiers translates into a very low risk of re-identification in all circumstances.
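As a rough illustration of the safe harbor idea, the sketch below drops direct-identifier fields from a record before it is shared. The field names are hypothetical and the list is far shorter than the 18 categories HIPAA actually enumerates, so treat it as a conceptual sketch rather than a compliant implementation.

# Simplified sketch: strip records of direct identifiers before sharing.
# The real safe harbor covers 18 categories (including full dates tied to the
# individual and small geographic units); this abbreviated list is illustrative.

DIRECT_IDENTIFIER_FIELDS = {
    "name", "street_address", "email", "phone", "ssn",
    "medical_record_number", "birth_date", "ip_address",
}

def safe_harbor_strip(record):
    """Return a copy of the record with direct-identifier fields removed."""
    return {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIER_FIELDS}

patient = {
    "name": "Jane Doe",
    "birth_date": "1971-04-02",
    "zip3": "841",              # first three ZIP digits are generally permitted,
                                # with exceptions for sparsely populated areas
    "diagnosis_code": "E11.9",
}
print(safe_harbor_strip(patient))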

In HITECH, Congress directed HHS to complete a study of the HIPAA de-identification standard by February 2010.  Though covered entities rely more on the safe harbor method because it is easier to understand and more accessible, OCR aimed to produce guidance that would "clarify guidelines for conducting the expert determination method of de-identification to reduce entities reliance on the Safe Harbor method," according to the report.  Two years later and notwithstanding its good intentions, OCR has not released this guidance.  

CDT has met with industry and consumer stakeholders about how to improve federal policy regarding de-identified health data since 2009. CDT also recently published an article in JAMIA proposing a number of policies to strengthen HIPAA de-identification standards and ensure accountability for unauthorized re-identification.  

The OCR should issue the required guidance on de-identification without further delay and continue seeking public feedback on how to build trust in uses of de-identified data. Foot-dragging on this issue risks impeding progress on the ability to monitor the public's health in ways that go far beyond mere notification and routine reporting of symptoms and diagnoses. With those capabilities in place, public health officials could move beyond traditional outbreak detection and response, enabling earlier disease detection and a more active role in monitoring health issues ranging from cancer screening to adult immunizations to HIV.

Ensuring Compliance

Routine audits help ensure that covered entities and business associates comply with HIPAA and HITECH regulations. Audits also give OCR important information about how entities covered by HIPAA and HITECH are implementing critically important privacy and security protections, and can surface issues that need further regulatory guidance, helping OCR better determine when penalties for noncompliance are warranted.

HITECH directed HHS to audit entities covered by HIPAA for compliance with HIPAA and new HITECH requirements; OCR officials began those audits earlier this year. The report states that OCR has no plan to sustain these audits beyond 2012; the report also notes that HHS does not have a defined plan for including HIPAA business associates in its audits. HHS responded that OCR plans to review the pilot audit program at the end of this year and move forward with an audit program after that step is complete.

If the public is to trust that the privacy of their health information is well protected, it must know where that information is going and how it's being used. The report highlights the importance of audits as an effective mechanism for accountability. CDT is encouraged by the progress OCR has made to date in its pilot audit program, and we are pleased to see HHS commit to learning from the pilots and to developing and implementing a sustained plan for auditing compliance with federal privacy and security regulations.


View the original article here

Benefits of Streamlining CA State and Federal Health Privacy Laws Stalled

An initiative aimed at making California's health privacy laws easier to understand and more closely aligned with federal standards has stalled. A year into this harmonization of state and federal standards, the program needs focus, lacks adequate transparency, and isn't providing enough opportunity for public input. CDT believes industry and consumers could benefit from the effort, but changes are needed to make the initiative a success.

The harmonization effort is aimed at eliminating conflicts, confusion and inconsistencies between the primary health privacy laws at the state and federal level. An advisory group, the Privacy and Security Steering Team (PSST), will provide its harmonizing recommendations to the agency that oversees California's health privacy laws. The agency will give the recommendations to the state legislature as a proposed amendment to the state's primary health privacy law, which, if adopted, could lead to significant changes.

Consumers Union (CU) and CDT recently issued a joint letter endorsing efforts to make health privacy and security policy in California more protective for consumers and less burdensome to industry. Success here is critical, the letter says, "to securing public trust in the use of [health information technology] to improve individual and population health."

However, both organizations expressed concerns about the lack of focus and transparency of the effort to date. CU and CDT specifically called on the PSST to release work product from the law harmonization deliberation process to include:

detailed explanations of what legal standards each recommendation would specifically change;
precisely how those legal standards would be changed;
and the justification or rationale behind each recommendation.

To better focus the project, CU and CDT also call on the PSST to consider addressing areas or issues lacking legal standards or safeguards for personal health information, or areas where current policies are not well understood or insufficiently enforced. Such policy gaps allow for the use and transfer of personal health information in ways that could undermine public trust, creating an environment where individuals do not feel safe or confident utilizing HIT tools.

CDT recently became a member of the PSST and is committed to helping reach the goal of building trust in the use of HIT by making California health privacy law clearer and more comprehensive.


View the original article here

Sunday, September 23, 2012

Cybersecurity Amendments Would Modernize 25-Year-Old Privacy Law

[Editor's Note: This is one in a series of blog posts from CDT on the Cybersecurity Act, S. 3414, a bill co-sponsored by Senators Lieberman and Collins that is slated to be considered on the Senate floor soon.]

Two amendments to the Senate cybersecurity bill now being debated would require government agents to get a warrant before reading a person's email or secretly tracking someone through their mobile phone. The amendments, if adopted, would be a huge privacy gain and would address a long-standing civil liberties goal: modernizing the Electronic Communications Privacy Act, the 25-year-old law setting rules for when government agents can access our electronic communications and other private data.

The amendments, one from Senator Leahy and another from Senator Wyden, would implement reforms sought by a diverse coalition from across the political spectrum. Supporters include AT&T, Google, the ACLU, Americans for Tax Reform, EFF, and IBM, among others.

Including these reforms in the Cybersecurity Act is appropriate: the information sharing, monitoring, and countermeasures provisions of the bill all effectively amend ECPA and the Wiretap Act, permitting companies to share user information notwithstanding privacy protections in those laws. Congress should strengthen the underlying laws to counterbalance these changes.

ECPA Reform Is Long Overdue

The amendments respond to the dramatic technological changes in the 25 years since ECPA became law. Digital communications services are now ubiquitous in modern life. The government has a huge appetite for the data generated when we use the Internet and our mobile phones.  Last year, government agencies made over 1.3 million demands for text messages, location data and other information about mobile subscribers alone.

ECPA was forward-looking when it was adopted. But court decisions interpreting this now-outdated law have created a crazy patchwork of rules for government collection of communications and location data. This lack of clarity serves no one. It confuses users and law enforcement, as well as the companies in the middle that must respond to government demands while protecting their users. One federal appeals court has held part of the statute unconstitutional.

Leahy Amendment

The Leahy amendment would require government agents to get a search warrant, based on probable cause, before accessing the content of users' private communications or documents stored "in the cloud," except in some circumstances.

Americans today routinely use some sort of electronic communication for confidential correspondence ranging from business deals to personal letters. Most people save their emails indefinitely, with much of the data stored on the computers of communications service providers. Tens if not hundreds of millions of people store calendars, draft documents, private photos and videos online. Senator Leahy's amendment would eliminate the outdated rule in ECPA that permits the government to read someone's stored documents and email without a warrant.

The Leahy amendment also would cure a constitutional defect in ECPA.  In December 2010, the Sixth Circuit ruled in U.S. v. Warshak that the provision of ECPA allowing the government to access email over 180 days old with a subpoena is unconstitutional.  In response, many providers – including providers in other court circuits – now require a warrant before granting law enforcement access to communications content. By requiring warrants for content, Senator Leahy's amendment would make the law clearly constitutional and put companies and prosecutors back on firm legal footing.

The amendment also modifies the Video Privacy Protection Act to make it easier for online video services to get consent from consumers to share data about movie rentals. A similar tweak was adopted last year in the House of Representatives.

A diverse coalition of groups and companies supports the Leahy amendment.

Wyden Amendment

The cell phones that we carry with us all the time are tracking devices. Even when no call is being made, mobile devices placed in pockets, purses and on night stands constantly signal their location to service providers. The government is increasingly collecting location data from service providers in order to track citizens. GPS is only a part of this invasive surveillance:  data indicating which cell towers a device is near at any given time can be readily available to the government.

Senator Wyden's amendment would require a warrant if the government wants to track someone using that person's mobile phone, except in emergencies or when a person calls 911. The amendment mirrors the GPS Act, introduced last year in both the House and the Senate by a bipartisan group of lawmakers. Under Wyden's amendment, the government would need a court-approved warrant, based on probable cause, to obtain information about a person's location that is generated by use of a mobile device such as a cell phone. Like Senator Leahy's amendment, Wyden's location amendment would replace complex and constitutionally suspect rules with a clear warrant requirement.

Senator Wyden's location tracking amendment would also implement a reform supported by companies, trade associations, and groups from across the political spectrum. 


View the original article here