Sunday, September 30, 2012

Hands-on review: Updated: BlackBerry 10

BlackBerry 10 is still heavily under development and still quite some way from being a finished product, but we've had some hands-on time with an early release to get a feel for some of the new features.

Update: We've checked out an almost-final version of the user interface, which is pretty close to perfection, according to RIM: "we think we've nailed the user experience going forward," Vivek Bhardwaj told TechRadar - but we'll let you be the judge by checking out our findings below.

Delayed until early 2013, the first BB10 devices should land in January – although we're yet to see final devices running the new OS.

Although the demos at BlackBerry Jam are being done on an updated version of the BlackBerry Dev Alpha device that RIM is handing out to developers, we saw the near-final version of BlackBerry 10 running on early versions of the upcoming BlackBerry 10 devices in London recently (although we can't share more details about those handsets than we've already told you).

BB10 sees the implementation of a whole new user interface, with RIM doing away with the familiar BlackBerry system we're all used to, in favour of something which resembles the likes of Android and iOS, although with its own unique features.


With BlackBerry 10, RIM has merged homescreens, widgets, app lists and a unified inbox into one slick interface, offering up an easy-to-navigate user experience.

The main homescreen comprises four widgets – technically mini-applications – which expand to fill the screen when tapped.

Scroll down and you'll notice that this main display actually holds eight mini apps in total – displayed in order of use, allowing you to jump quickly between your recent applications.

Open up an application which isn't in top spot, or a completely new one from the app list, and when you exit it you'll notice that it now occupies the first, top left spot on the homescreen.


Swiping from left to right will bring you to the app home screen, with 16 apps on the screen at any one time, and you can access more by sliding up and down – the whole thing very similar to Windows Phone's Start screen UI.


At the bottom of both the homescreen and app list you'll notice a shortcut bar, with quick links to the phone, search and camera applications – allowing you to quickly jump to these regularly used features.

Unfortunately these features had not been implemented on the version of BB10 we were using, so we'll have to wait and see how well they work.

The lock screen shows notifications for alarms and unread messages on the left plus your upcoming meetings as well as the date and time, with a button to launch the camera straight from the lock screen to grab a quick snap.

You unlock the phone by sliding your thumb up the screen, and you can start the slide from anywhere on screen. Rather than needing to start at the bottom, the screen starts to draw in around where you slide, so if you just want a quick peek at the information in one area of the screen, you can drag to show it and then let go (more on that in 'Peek' mode below).

Return to the home screen and then sweep in the opposite direction and you'll be greeted by the unified inbox, which pulls in all your messaging and social network notifications into one easy to access location.


And when we say all, we mean all, as the unified inbox can deal with multiple email accounts, text messages, BBM, call history, third-party messaging apps such as WhatsApp and a whole host of social networks including Facebook, Twitter and LinkedIn.

Of course, with so many accounts feeding into the handset, the more popular among us will quickly be inundated with notifications from various channels. However, there's an easy way to check where your new messages are coming from without clogging up the notification bar at the top of the screen.


RIM has developed the "peek" function: drag slightly from right to left and a slender column is revealed on the right side of the display, with new message icons and counters for your respective accounts.

The reason for this is so you can quickly see which account has received a new message and jump directly to it if required, whether you're on the homescreen or within another application.

Wherever you are, you can drag up on the screen to see notifications down the left-hand side of the screen. Pull up and slide across and you see the details of the new messages (from the unified inbox, so you get email, texts, BBM and social network updates or other alerts all together).

The BlackBerry Hub will be the brain centre of the device: a one-stop shop to access email, Gmail, Twitter, Facebook, BBM, text messages and more.

The further you slide your finger across, the more the top layer slides out of the way and the more of the layer underneath you see, so you can peek at new mail to see if it's important enough to read straight away and then go back to what you're doing, without ever actually switching out of the current application.


It's hard to explain (and we can't show you on video yet), but suffice to say other gestures will change what you're peeking at, swapping to a different email account or even the calendar in the message centre (by picking the icon, or by pulling far enough across to see the account name first).

This method works much better when you get your head around all the gestures - although it may prove to be overly complex for some users.

You can jump straight into the clients for your cloud storage, like Box, Dropbox and SkyDrive. Start in the file manager and pull the screen back to see storage on your device or pick Dropbox instead. You can open, edit and manage files in cloud services as if they were on the phone.

The same level of integration applies to the new Remember note-taking app, which pulls in notebooks from Evernote or OneNote and tasks from Outlook accounts. Swipe sideways in the HTML5 browser and you see the list of favourites.


It sounds a little complicated, but once you have the hang of it, you can navigate around your information going straight to what you want without going back to the home screen and into different applications each time.

Having the 'peek' idea work the same way in so many applications helps you get used to it as well (though we don't yet know how well third-party applications will be able to do the same thing).

When you do get into a message or an appointment, you can see more information about the people involved in a way that will be familiar to BlackBerry PlayBook OS 2.0 users: you can see who you know in common, what messages you've exchanged and recent social network updates.

It's a new look for the 'flow' between different apps and information sources that BlackBerry has always been good at, but with a fresh, modern feel on a much larger screen – and a similar gesture shows you a pane of the apps that are currently running.

If you drag down on the screen you see Personal and Work buttons that let you switch between the two BlackBerry Balance modes.

In Personal, you can install any apps you want, send any email, save any file and so on, working in a partition that's encrypted for privacy but not locked down in any way.

BB10 in work and personal modes

If you use your BlackBerry for work, though, you'll also have a Work partition that's encrypted too, but completely separate, and it can be locked down if that's what the company wants.

Drag down on the screen, pick Work mode and all your personal apps disappear – so you can't accidentally copy a work file into your personal cloud storage account.

Then there's Cascades, a new navigation system cooked up by RIM especially for BB10, allowing for quick multitasking from within applications.


The example we've seen is in the messaging app – open an email and it will display full screen, but drag your finger from left to right and the message will slide with you, revealing the inbox below.

This means if you get a new message in the middle of reading an email, you can check who it's from without having to close the application – similar to the notification bar on Android and also now iOS.


If you were to open an attachment from the email, a PDF document in the case of our demo, pulling to the side to view the cascade will show the app's layers stacked up – a more visual paper trail, if you will.

It's certainly an intuitive feature that we found to work smoothly on the development handset – but it will be interesting to see how this feature is embedded into other applications and if it will have the same pleasing results.


There's a different version of BlackBerry App World where your company can offer specific work apps – like an app that uses the NFC chip in your BlackBerry to unlock the door to the office.

RIM is hoping the Balance modes will keep companies happy with security but also keep users happy: the security team at work can wipe all the company information off your device if they want, but that won't delete your photos.

They can't even see what files you have on your phone when they're managing it, because your personal partition is encrypted.


As more of us take our own phones to work, this is a much more sophisticated way of protecting both the company and the user's personal files than other smartphones offer – but again, it's a little on the complicated side, and relies on your company having the appropriate BlackBerry management software.

The last feature available for us to play with on this early version of BB10 was RIM's new full-touchscreen QWERTY keyboard.

BlackBerry handsets are famous for their physical keyboards, and RIM is keen to bring this typing experience to its BB10 touchscreen smartphones with its own offering.

Visually the keyboard looks similar to the stock Android offering, but each row of keys is separated with a silver line – which is supposed to reflect the metal strips between buttons on the Bold range, such as the Bold 9790 and Bold 9900.


Next word prediction, auto-correct and spell check are all common features on smartphones today and RIM has spent some time developing its own system to offer an efficient typing experience.

It sees next-word suggestions appear above the character the word begins with, and if it's the word you want to use, you just need to swipe up over the word and it will be added to your sentence.

As with many offerings these days, the keyboard will learn your style of writing, meaning it will be able to suggest better words the more you use your phone.


We found the keyboard to be fairly accurate and relatively well spaced, but for those of you used to the physical buttons of a traditional BlackBerry it will take some getting used to.

Although the operating system is still very much in its early stages of development, we must say we were impressed with how smooth and slick the interface felt under our fingers – seamlessly zipping around without fuss.

RIM assured us that this smooth experience would still be present in the final product, thanks to the clever integration of the HTML5 system, which optimises the performance of the software. We certainly hope they're right.

Find out more information on BlackBerry 10, including its release date, upcoming devices and the camera function with our BB10: what you need to know article.



Review: Toshiba Qosmio X870

Spearheading the Ivy Bridge refresh for Toshiba is the 3D-toting gaming giant known as the Qosmio X870 ($1,899).

From the black and chrome-red chassis to the bleeding edge components inside, this is a laptop designed specifically to take on the 3D-ready Samsung Series 7 Gamer and the might of the Alienware M17X.

Its mission is simple: to cater to power-hungry gamers looking for the best machine to handle 2012's latest titles.

We use the word giant with good reason; the X870's chassis is a huge 418 x 272 x 44mm and weighs a hefty 3.4kg.

The design follows a head-turning trend with a black plastic body trimmed with shiny red chrome.


In a world of sleek, silver machines, like the new MacBook Air or the Dell XPS 14, the Qosmio embraces the outlandish design we've come to expect from the gaming laptop stable.

We have no doubt that the Qosmio X870 could run any game under the sun thanks to the combined power of the Intel Core i7-3610QM CPU, clocked at 2.3GHz and the Nvidia GeForce GTX 670M graphics card.

Meanwhile all the extra features we would expect from a laptop at this price are here in force.

The Toshiba Qosmio X870 boasts four USB 3.0 ports, a Blu-ray drive, HDD protection, harman/kardon speakers, a backlit keyboard and, of course, that all-important 3D screen.

And yet, for all its dominance when it comes to performance, there are a couple of slight hiccups that cause us to question whether this is The Greatest Gaming Machine Ever™ or just a very, very good challenger to the all-conquering Alienware series.

There's no compromise on specifications here.

Toshiba has kitted the Qosmio X870 out with an Intel Ivy Bridge Core i7-3610QM CPU with a base speed of 2.3GHz that can Turbo Boost up to 3.3GHz.

On top of that, the Nvidia GeForce GTX 670M graphics card gives you 3GB of dedicated video memory, working alongside the integrated Intel HD Graphics 4000 chip.

Moreover, there's a massive 16GB of RAM keeping the operation as smooth as possible.

And, because the native resolution is a pin-sharp 1,920 x 1,080, you'll be able to enjoy all your gaming in Full 1080p High Definition. The way that nature intended.

Alongside the Full HD resolution, the 17.3-inch screen on the Toshiba Qosmio X870 is also extremely bright.


This is a good thing - as the bundled Nvidia active shutter glasses that you'll need for 3D will shade the display dramatically.

The stereoscopic 3D works well and the Nvidia Control Panel will give you a list of compatible games and how well those titles work with the 3D technology.

The only drawback is that the thick plastic glasses look like something Michael Caine would pull on back in his Harry Palmer days.

We know that 3D glasses can be lightweight and look good, because we've seen Samsung do it with the Series 7 Gamer.

Don't worry about the space required for high definition or 3D media, the Qosmio X870 has a terabyte of hard-drive space, giving you plenty of digital real estate for media and games.

Toshiba has also built in HDD protection: when vibration or a drop is detected, the X870 parks the drive's heads to keep your valuable data safely protected.

All four USB ports are the faster USB 3.0 format which gives you data transfer speeds of up to 5Gbps. So, should you decide to invest in some external storage or a few extra peripherals to accessorise the Qosmio X870, you won't be left waiting around.

The rest of the connections read as standard, HDMI and VGA for connecting extra monitors and a Gigabit Ethernet port in case you don't want to use the 802.11n wireless connection.

The optical drive on the Qosmio X870 is a Blu-ray drive, letting this laptop double as an extra Blu-ray player for your home, and you also get the benefit of sleep-and-charge.

When the Qosmio X870 is in sleep mode or even shut down completely, you can use it to charge your smartphone or tablet.

Cinebench 10: 21,197
3D Mark '06: 10,772
Battery Eater '05: 41 minutes

The build quality of a gaming machine is vital – the laptop needs to stand up to some serious hammering during that final boss battle or tense shootout.

Happily, the Qosmio X870 stood up well to our repeated pokes and prods.

There's no getting away from the design though, and we feel it will divide opinion.

On one hand, it's not as outrageous as, say, the Alienware or MSI gaming laptops, but it's also not as conservative as the likes of the Medion Erazer X6819 or the Samsung Series 7 Gamer.

Strangely, for a machine with as much chassis real estate as the Qosmio X870, the travel on the keys is relatively shallow.

We'd expect keys similar to the HP Envy series, with a reassuring depth that could take a pounding during those aforementioned boss battles.

Instead, it's almost as if the keyboard was taken from a space-conscious Ultrabook.

Despite the lack of depth, the Qosmio X870 was comfortable to type on and there's a good amount of space between the keys.

Admittedly, the blood-red backlight behind the keys does look great.

You're also given a dedicated numeric keypad on the right-hand side, although we don't expect you'll be spending a lot of time editing spreadsheets with this machine.

Media playback is complemented not just by the excellent screen, but also by the integrated harman/kardon speakers placed on either side of the chassis.

You get an excellent level of volume from these speakers and the sound has plenty of depth.


It would have been great to see a subwoofer included on the underside of the Qosmio X870, similar to the one found on the HP Envy 17 3D, to add extra bass to the audio.

We take no joy in pointing out flaws on a $1,899 machine, but unfortunately they exist. The DC power cord doesn't seem to fit snugly into the Qosmio's 19V jack, giving you the worrying feeling it could fall out at any moment.

This is bad, because the Qosmio X870 lasted only 41 minutes in our battery test – possibly the worst score we've ever recorded.

While we don't expect to carry the Qosmio X870 around all day, a little extra juice would be a benefit.



Review: ASRock Z77E-ITX

For
- Good performance
- Z77 chipset
- Full roster of connections

Against
- Only one PCIe slot (understandable though)
- Performance not best in class
- Slightly limited overclocking

By Dave James, from PCFormat Issue 271, September 29th 2012

There is something rather exquisite about the recent glut of small-form factor motherboards. The mini-ITX form factor has been around for a while, but it wasn't until the Sandy Bridge crowd tipped up that we started to see some lovely-looking, teeny-tiny motherboards worth a damn.

With Ivy Bridge and the Z77 platform though things have matured even further. Last month we checked out a pair of beauties from Asus, and here we've got the ASRock competition.

Like the Asus P8Z77-I Deluxe, you're looking at a full Z77 motherboard with discrete graphics capabilities and all the overclocking finesse you'd expect of the top Intel chipset.

Where the H77 chipset is rather cut down and lacks real OC support, this Z77E-ITX is fully featured - including some features that you won't even get with full-size ATX boards at this price.

The packed back-plate should give you some idea of how feature-rich the Z77E-ITX is. With three video outputs it's equally at home running off the processor graphics inside the Ivy Bridge CPUs.

There's HDMI, DisplayPort and full DVI too. You've also got four USB 3.0 ports on the back with a single header inside for those chassis with front-mounted USB 3.0 ports. And as well as a pair of SATA 6Gbps and a pair of SATA 3Gbps sockets on the top of the board, on the flipside there's also an mSATA connector if you wanted to add in a PCB-mounted SSD for some space saving.

Unlike on some boards, that mSATA port isn't already taken up by Wi-Fi, because there's a second mini PCIe connector next to the DIMM slots, fully laden with a Wi-Fi module. But how does it perform?

Without the Deluxe name or premium price-tag of the Asus board we reviewed a while ago we weren't expecting a huge amount out of the wee ASRock, but it can hold its head up.

Turbo-ing at 3.7GHz it's no slouch, but it is slower than the pricier Asus. It also doesn't have the same overclocking chops without the extra power components of the competition. Still, it managed to hit a stable 4.5GHz and had enough juice to keep the discrete GPU flinging polygons around without any trouble.

CPU rendering performance
Cinebench R11.5: Index score: Higher is better
ASROCK Z77E-ITX: 7.49
ASUS P8Z77-I DELUXE: 7.9
ASUS P8H77-I: 7.39

CPU encoding performance
X264 v4.0: Frames per second: Higher is better
ASROCK Z77E-ITX: 41.1
ASUS P8Z77-I DELUXE: 43.6
ASUS P8H77-I: 39.0

Gaming performance
Batman: AC: Frames per second: Higher is better
ASROCK Z77E-ITX: 181
ASUS P8Z77-I DELUXE: 184
ASUS P8H77-I: 167

For just over £100, though, it's a great little board. It holds its own against the top mini-ITX Asus board and even against a lot of the full-size competitors.

Performance aside, there's not really anything you're missing out on should you opt to buy this cheaper mini-ITX board. It's got every modern computing feature you could possibly want and would make the basis for an excellent little gaming machine.

Thanks to some impressive design elements, and the amount of motherboard real-estate freed up by components moving onto the CPU itself, we're now starting to see boards in the mini-ITX form factor that are almost impossible to distinguish in performance or feature-set from their full size compadres.

This ASRock Z77E-ITX board is another, relatively cheap, example of this.

Review: Lomography Fisheye No 2

Lomography originally introduced a Fisheye camera back in 2005, and its Fisheye No 2 brings with it a number of upgrades, most notably a bulb mode – which enables you to capture longer exposures – and an 'MX' switch, which enables you to set multiple exposures on the same section of film.

Like its predecessor, the Lomo Fisheye No 2 features an almost 180 degree field of view, and takes 35mm film. This is still relatively easy to come by in many supermarkets, chemists and so on. Lomography also produces its own range of films, which we used during this test. Processing on the high street is also relatively easy to find, but again you can use Lomography's own lab, which we used here.

For those not in the know, Lomography is the company that has brought back many incarnations of analogue photography. It is proving extremely popular with its unique designs in many different styles.


Lomography cameras are well known for their erratic behaviour. This can include light leaks, ghosting, flare and other unusual properties, which most users believe to be part of the charm.

The Lomography Fisheye No 2 camera - priced at around £79 in the UK and $75 in the US - comes with an optical viewfinder that can be attached to the camera's hotshoe, enabling you to more accurately judge composition than you could with its predecessor, which didn't come bundled with the accessory.

A fixed aperture of f/8 is available on the camera, while shutter speeds are limited to 1/100 second in standard, or as long as you need in Bulb mode. The approximate focal length of the lens is 10mm.

As with most Lomo cameras, the Fisheye No 2 is not for shy and retiring types.


Available in a variety of fun designs - including Python, Faded Denim, Vibrant Orange and others - probably the most striking aspect of the camera is the bulbous fisheye lens on the front of it.

Again like many Lomography cameras, the Fisheye No 2 is very light, because it's constructed from plastic. However, it also feels relatively robust and able to be chucked into a bag ready to be taken anywhere.

Controls on the camera are few and far between, leaving you free to concentrate on composition. The only switches you'll find on the Lomography Fisheye No 2 are those to go from the standard shutter speed of 1/100 second to Bulb mode. Handily, there's also a Lock mode, which stops you accidentally switching into Bulb mode.


On the back of the camera there's also the "MX" switch, which stands for multiple exposure. Like many other Lomography cameras, the Fisheye No 2 is capable of creating unlimited multiple exposures on one frame of film, enabling you to use some fun, creative, but unpredictable effects.

The camera comes with a rubber lens cap that can be attached to protect the fisheye optic. However, on our review model at least, this didn't fit very snugly, and fell off at almost every opportunity – especially when the camera was floating around in a handbag.

A circular viewfinder on the top of the camera is designed to give you a rough guide as to how the composition of the image will turn out. Although you have to remember that it won't be exactly as you see through the viewfinder, thanks to parallax error, it's a useful addition to the original Fisheye camera.


A dial at the back of the camera is provided for winding on the film after each frame is taken. Although this can be a little frustrating for those used to automatic (and of course digital) cameras, it does at least help prevent wasted shots, because the next image can't be taken until the film is wound on.

Once the film is used up, you will need to rewind the film using the gear at the top of the camera. The gear can be a little fiddly to use, so you may find this takes longer than anticipated.

Speaking of the film, loading it is pretty easy, especially if you have worked with 35mm film cameras in the past. If not, it's pretty quick to learn, and you can insert a new film in under a minute.



Thursday, September 27, 2012

CDT Supports Brazil's "Bill of Rights" for Internet Users

A modified version of this post originally appeared on Global Voices Advocacy.

Tomorrow, a special committee in Brazil's Congress will vote on the Marco Civil da Internet, a "bill of rights" for Internet users. If passed, the law would represent a paramount advance in the country's digital policymaking agenda.

The Marco Civil da Internet, or Civil Regulatory Framework for the Internet, establishes a clear set of rights and responsibilities for users, sets strong net neutrality principles, and shields Internet intermediaries from liability for illegal content posted by users. Pedro Paranaguá, an Internet policy advisor for Brazil's House of Representatives, has a detailed archive of the law's legislative history on his blog.

Unlike Internet-related laws addressing piracy or copyright infringement, the Marco Civil is not a criminal law, but a civil one. Rather than framing digital policy as a matter of criminal violations, it puts forth a clear set of rights for users and aims to balance these with the interests of online companies and law enforcement. The Marco Civil is also strategically deft in this regard: by establishing user rights and responsibilities forthright, the law aims to guarantee that these interests will be protected if laws addressing online crime and copyright infringement are introduced in the future.

The Marco Civil is also unique in that it was developed in a highly participatory style. Lawmakers were not the only entities involved in drafting the law: academic experts, civil society groups, and Internet users had a critical role in developing its text as well. Lawmakers partnered with scholars at Fundação Getulio Vargas (FGV), the country's leading social science research institution, to draft the preliminary text for the law. It was then posted for an open online consultation in which all Brazilians were invited to comment and make suggestions for the bill through Cultura Digital, a website created by Brazil's Ministry of Culture. The process reflected a potent vision for Internet policymaking: one in which all individuals who hold a stake in the social and technological power and functioning of the Internet can have a say in how it is governed.

Over the past decade, Brazil has pioneered a digital policymaking approach that many countries have looked to as a model for promoting innovation and openness online. During the administration of Luiz Inácio “Lula” da Silva, Minister of Culture and acclaimed musician Gilberto Gil developed a policy agenda that focused on increasing Internet access and digital education for all Brazilians.

Advocates are urging Brazil's Congress to vote in favor of the Marco Civil, the passage of which would make Brazil a global and regional leader for progressive Internet policy and a model that many countries may look to as they develop their digital agendas. This week, CDT joined international partners at FGV, Derechos Digitales in Chile, India's Centre for Internet and Society, and Consumers International by signing a letter in support of the bill that will be presented to Congress prior to tomorrow's vote.

Brazil-based groups including the Centro da Tecnologia e Sociedade [pt] at FGV; Mega Não [pt], an online advocacy initiative promoting Internet openness; and MegaSim [pt], a blog that promotes progressive cultural policy for the digital age all offer more information about the law and its development.



Announcing a New Forum to Discuss Privacy

In order to support NTIA’s multistakeholder convening around mobile privacy, CDT is setting up an online forum for people to present and discuss ideas related to that effort. Starting today, anyone can go to www.privacymsh.org to contribute by posting to a community message board, suggesting text to a wiki, or signing up for a public email discussion list.

Setting up this site has been a collaborative effort. Ross Schulman from CCIA, Nick Doty from Berkeley, and Cyrus Nemati from CDT all worked together to create privacymsh.org (and all four of us will be administrators on the forum). We decided that having some sort of open forum for discussion might be useful to advance the dialogue during the interims between NTIA meetings (and potentially during the meetings themselves). We are committed to trying to make this collaborative approach to privacy work, and we hope that this site can help all voices be heard as they communicate ideas for promoting mobile privacy (as well as whatever other topics NTIA might tackle).

These tools are very much a work-in-progress; the bare-bones look to the site may change, and the group may eventually decide that something else might work better. We’re not sure whether people will find the message board or the mailing list more effective for generating discussion. On the one hand, emails are an effective way to keep people constantly up to speed on the state of the discussion. However, for those of us involved in the email-intensive W3C Do Not Track policy process, we weren’t sure that people would want to have every discussion point pushed to their inbox. (In any event, the message board is configurable to send email notifications to you when people respond to your points.) We encourage people to experiment to see what’s most effective — these forums are designed to be iterative.

CDT wants to see the NTIA process deliver strong, flexible, and consistent privacy protections for consumers. We hope these discussion tools promote an open and productive dialogue among advocates, industry, and regulators.



Oversight of Government Privacy, Security Rules for Health Data Questioned

Oversight and accountability for following federal privacy and security rules is critical if the public is going to trust that the next generation of electronic health care providers, insurers, and billing services can protect the privacy of their medical information.  A recent report by the Government Accountability Office questions whether sufficient work is being done to build that public trust.

The GAO report says the Department of Health and Human Services has failed to issue new rules for protecting personal health information and lacks a long-term plan for ensuring that those new rules are being followed.  The HHS Office for Civil Rights (OCR), which is responsible for overseeing these efforts, acknowledged these concerns but noted that rules are winding their way through government channels and that they have "taken the necessary first steps towards establishing a sustainable" oversight program.   

The report's two main concerns are: (1) the urgent need for guidance on de-identification methods, and (2) lack of a long-term plan for auditing covered entities and business associates for compliance with federal privacy and security rules (specifically, HIPAA and HITECH).

De-Identification Guidance

De-identification is a tool that enables health data to be used for a broad range of purposes while minimizing the risks to individual privacy. Under HIPAA, there are two methods for de-identifying health data. The first is the safe harbor method, which requires only the removal of 18 specific categories of identifiers, such as names, addresses, dates of birth or of health care services, and other unique identifiers. The second is the expert determination method, under which an expert certifies that the data, in the hands of the intended recipient, poses a very small risk of re-identification. The safe harbor method is static: it presumes that removing the 18 categories of identifiers translates into very low re-identification risk in all circumstances.
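As a rough illustration of how the safe harbor method works in practice, here is a minimal sketch; it is not a compliance tool, the field names are hypothetical, and only a handful of the 18 identifier categories are shown:

```python
# Illustrative sketch of safe-harbor-style de-identification.
# The real HIPAA safe harbor method removes 18 specific categories
# of identifiers; this toy example names only a few (hypothetical
# field names chosen for demonstration).

SAFE_HARBOR_FIELDS = {
    "name", "address", "birth_date", "service_date",
    "phone", "email", "ssn", "medical_record_number",
}

def deidentify(record: dict) -> dict:
    """Return a copy of the record with identifier fields removed."""
    return {k: v for k, v in record.items() if k not in SAFE_HARBOR_FIELDS}

record = {
    "name": "Jane Doe",
    "birth_date": "1970-04-01",
    "diagnosis": "J45.909",   # clinical data is retained
    "ssn": "000-00-0000",
}
print(deidentify(record))  # {'diagnosis': 'J45.909'}
```

The static nature of the method is visible here: the same fixed field list is removed regardless of context, which is exactly why critics argue it may leave too much, or strip too much, in any given circumstance.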

In HITECH, Congress directed HHS to complete a study of the HIPAA de-identification standard by February 2010. Though covered entities rely more on the safe harbor method because it is easier to understand and more accessible, OCR aimed to produce guidance that would "clarify guidelines for conducting the expert determination method of de-identification to reduce entities' reliance on the Safe Harbor method," according to the report. More than two years later, and notwithstanding its good intentions, OCR has not released this guidance.

CDT has met with industry and consumer stakeholders about how to improve federal policy regarding de-identified health data since 2009. CDT also recently published an article in JAMIA proposing a number of policies to strengthen HIPAA de-identification standards and ensure accountability for unauthorized re-identification.  

OCR should issue the required guidance on de-identification without further delay and continue seeking public feedback on how to build trust in uses of de-identified data. Foot-dragging on this issue risks impeding progress in monitoring the public's health in ways that go far beyond routine reporting of symptoms and diagnoses. With such capabilities in place, public health officials could move beyond traditional outbreak detection and response toward earlier disease detection and a more active role in monitoring issues ranging from cancer screening to adult immunizations to HIV.

Ensuring Compliance

Routine audits help ensure that covered entities and business associates comply with HIPAA and HITECH regulations. Audits also give OCR important information about how covered entities are implementing critical privacy and security protections; they can surface issues needing further regulatory guidance and help OCR better determine when penalties for noncompliance are warranted.

HITECH directed HHS to audit entities covered by HIPAA for compliance with HIPAA and new HITECH requirements; OCR officials began those audits earlier this year. The report states that OCR has no plan to sustain these audits beyond 2012; the report also notes that HHS does not have a defined plan for including HIPAA business associates in its audits. HHS responded that OCR plans to review the pilot audit program at the end of this year and move forward with an audit program after that step is complete.

If people are to trust that the privacy of their health information is well protected, they must know where that information goes and how it is used. The report highlights audits as an effective mechanism for accountability. CDT is encouraged by the progress OCR has made to date in its pilot audit program, and we are pleased to see HHS commit to learning from the pilots and to developing and implementing a sustained plan for auditing compliance with federal privacy and security regulations.



Better Policies for De-Identified Health Data

The staggering amount of personal health data now being collected for treatment or billing purposes has a life beyond the doctor's clipboard. That data is stripped of personally identifying information ("de-identified") and re-used in ways that are vital for medical breakthroughs, improving patient care, and predicting public health trends. And it is just as valuable when used for targeted marketing campaigns or for eliminating inefficiencies in the healthcare industry.

HIPAA restricts uses of identifiable health information for secondary purposes; but information that is de-identified per HIPAA standards is largely not subject to federal regulation.  As a result, de-identified health data is in high demand.

The HIPAA de-identification standards were controversial when introduced in 2000. The reason: no record of personal information can be de-identified to the point where there is no risk of re-identification. The Department of Health and Human Services acknowledged this risk when approving the standard, but said at the time that it was comfortable with "a reasonable balance between the risk of identification and the usefulness of the information."

Time has not erased the initial concerns about the de-identification standards. Those concerns appear to be on the rise and fall into three categories:  1) sufficiency of the methods used for de-identification; 2) lack of accountability for unauthorized or inappropriate re-identification; and 3) disapproval of certain uses of de-identified data.  

In 2009, CDT began exploring concerns about HIPAA de-identification. In October 2011, we held a workshop for about 50 academic, industry, and consumer stakeholders to discuss policy ideas for addressing de-identified data concerns. A paper based on the findings of that workshop will be published by the Journal of the American Medical Informatics Association. (An online version of the paper was published on June 26, 2012.)

The paper includes more details on the following policy options for addressing concerns about de-identified health data:

•    Prohibiting by law or contract the unauthorized re-identification of de-identified data;
•    Ensuring strong, dependable de-identification methods through consistent review of the safe harbor methodology and objective vetting of statistical approaches;
•    Requiring reasonable security safeguards for de-identified data (today no such safeguards are required); and
•    Providing greater transparency to the public regarding uses of de-identified data.

CDT believes these policy ideas merit greater discussion.  De-identification should remain an important tool for protecting privacy while preserving the availability of data for uses critical to advancing a more effective and efficient healthcare system. 



Benefits of Streamlining CA State and Federal Health Privacy Laws Stalled

An initiative aimed at making California's health privacy laws easier to understand and better aligned with federal standards has stalled. A year in, the effort needs focus, lacks adequate transparency, and is not providing enough opportunity for public input. CDT believes industry and consumers could benefit from the effort, but changes are needed to make it a success.

The harmonization effort is aimed at eliminating conflicts, confusion and inconsistencies between the primary health privacy laws at the state and federal level. An advisory group, the Privacy and Security Steering Team (PSST), will provide its harmonizing recommendations to the agency that oversees California's health privacy laws.  The agency will give the recommendations to the state legislature as a proposed amendment to the state's primary health law, which, if adopted, could lead to significant changes.

Consumers Union (CU) and CDT recently issued a joint letter endorsing efforts to make health privacy and security policy in California more protective of consumers and less burdensome to industry. Success here is critical, the letter says, "to securing public trust in the use of [health information technology] to improve individual and population health."

However, both organizations expressed concerns about the lack of focus and transparency of the effort to date. CU and CDT specifically called on the PSST to release work product from the law harmonization deliberation process to include:

•    detailed explanations of what legal standards each recommendation would specifically change;
•    precisely how the legal standards will be changed; and
•    a justification or the rationale behind each recommendation.

To better focus the project, CU and CDT also call on the PSST to consider addressing areas or issues lacking legal standards or safeguards for personal health information, or areas where current policies are not well understood or insufficiently enforced. Such policy gaps allow for the use and transfer of personal health information in ways that could undermine public trust, creating an environment where individuals do not feel safe or confident utilizing HIT tools.

CDT recently became a member of the PSST and is committed to helping reach the goal of building trust in the use of HIT by making California health privacy law clearer and more comprehensive.



The Limits of Free Expression: Defamation in the Internet Age

The right to freedom of expression protects individuals as they seek and share information, engage in debate, and voice criticism—but free expression is not without limits.

As the Internet has expanded, courts have grappled with the challenge of protecting free expression while upholding other rights, such as privacy and reputation, which are also enshrined in international human rights doctrine.

Defamation law protects privacy and reputation. If a citizen journalist publishes an article that falsely accuses an individual of wrongdoing, that individual can sue under defamation law, forcing the journalist to retract the false statement. However, the picture becomes more complicated if, for example, a citizen journalist accuses a government official of corruption and the truthfulness of the allegation is unknown. This scenario requires courts to balance a citizen’s right to free expression against the right to reputation of the government official. Government officials should be subject to a higher degree of scrutiny and criticism than an ordinary citizen.

Unfortunately, defamation law has been used in some countries by the rich and powerful not merely to defend privacy and reputation, but also to quash legitimate speech, including criticism of government officials and comment on matters of public interest.

Today, CDT is releasing a paper that describes how the framework provided by international human rights principles should be applied to limit such abuses of defamation law.  It discusses, for example, the practice of charging defamation as a criminal offense, which human rights bodies have consistently condemned. While some countries have de-criminalized defamation, others have refused to do so; Russia recently re-criminalized it.  

The paper also examines the practice of "libel tourism," wherein wealthy individuals take advantage of loose jurisdictional rules to sue journalists and others in countries with rules that tend to favor defamation plaintiffs. This practice is facing possible reform in England, which had been a “defamation forum of choice” for movie stars and oligarchs alike. When American movie actress Cameron Diaz, a US resident, wanted to file suit against a US-based tabloid, The National Enquirer, she took her case to England, where it was accepted on the grounds that defamatory statements appearing on the tabloid’s website could be read online in the UK.

Aggressive application of defamation law not only limits the speech of defendants in specific cases; it also has a chilling effect on other users, who may choose not to express themselves for fear of facing expensive litigation.

Human rights instruments implicitly endorse defamation laws by recognizing rights to reputation and privacy. However, if not carefully applied, defamation laws can have a chilling effect on speech, endangering the rights of individuals engaging in expression and of those entitled to seek and receive information, opinions, and ideas. The paper we release today, “Defamation in the Internet Age: Protecting Reputation without Infringing Free Expression,” aims to explore the tensions between these rights using examples from a diverse range of jurisdictions around the world and to suggest how the balance should be struck.



Wednesday, September 26, 2012

'OpenStand' Underscores Commitment to Voluntary Internet Standards

Recent proposals from several countries urging the mandatory adoption of technical standards are dangerous and misguided.

Underscoring that view is today's launch of "OpenStand," an initiative supporting a commitment to open, voluntary technical standards for the Internet.  CDT welcomes the OpenStand paradigm.  Today we also released a paper detailing how technical standardization works and why proposals for the mandatory use of Internet standards developed in the International Telecommunication Union (ITU) are cause for grave concern.

Our digital world turns on technical standards. Emails composed on a Microsoft Windows computer can be easily read on an Apple laptop or iPhone. Websites created by an incredible diversity of companies and organizations – Twitter.com, Wikipedia.org, BBC.co.uk, and millions more – are easily viewed in web browsers made by Google or Mozilla. This ability to communicate between technologies developed by different companies exists because standards provide the language that allows computers and software to talk to each other.

OpenStand is the product of five of the world's leading technical Internet organizations -- IEEE, the Internet Architecture Board (IAB), the Internet Engineering Task Force (IETF), the Internet Society, and the World Wide Web Consortium (W3C).  These organizations have produced many of the most fundamental standards on which all Internet communications rely, including Internet Protocol (IP), HTTP, and HTML. OpenStand is a set of principles built on a model of open processes that supports transparency, consensus, and the participation of all interested parties.

While the standards organizations making today's announcement have been operating under these principles for many years, OpenStand demonstrates a continued commitment by these groups to the voluntary, bottom-up processes that have made existing standards the foundation of the Internet's success as a platform for communications and commerce.

Unfortunately, the OpenStand paradigm is under serious threat. In December, the ITU will convene the World Conference on International Telecommunications (WCIT), a meeting of the world's governments to decide whether and how the ITU should regulate the Internet. In advance of that meeting, several countries have proposed that the technical standards the ITU produces – known as "ITU-T Recommendations" – become mandatory for Internet technology companies and network operators to build into their products. Russia and a number of Middle Eastern countries are among the primary proponents.

If adopted, these proposals would jeopardize the Internet's core principles of openness and free expression, threaten the growth and stability of the network, and sap the Internet's economic vitality. Having governments – the only formal decision-making members of the ITU – decide which standards technology companies must build into their products would upend the existing process of technological development on the Internet. Those with the most intimate knowledge of technology would be cut out of the loop for technological decision making, replacing them with government officials who do not write software, run networks, or build computers.

Making ITU-T Recommendations mandatory, while all other standards remain voluntary, would skew technology development in favor of largely unused specifications of questionable technical merit. They "have long ceased to have relevance," as one industry expert has explained.

Having the ITU-T Recommendations become mandatory could also cause the ITU to become a magnet for standardization proposals that undermine freedom of expression, privacy, and other civil liberties. Knowing that ITU standards would become mandatory, some governments may step up their efforts to have standards adopted that would increase network-based surveillance capability, create backdoors in existing encryption systems, embed identity information in all communications, or introduce other functionality that would threaten the Internet's ability to support free expression and private communication.

Because the ITU standardization process is generally opaque to civil society, the ability for civil society advocates to challenge such proposals and have a real impact on their outcome would be extremely limited.

Today's announcement of support for the OpenStand paradigm provides an important counterweight to mandatory standards proposals, but there is more work to be done. The paper we released today provides details about how technical standardization works and the danger of mandatory ITU standards. Those concerned about these proposals should take action:

•    Express your support for the OpenStand paradigm. Join CDT and other concerned Internet users in publicly affirming your support for the paradigm.

•    Press national governments to oppose mandatory ITU standards.  Civil society, Internet users, and other parties concerned about the future of the Internet should explain to their national ITU delegations that mandatory standards proposals would represent a major departure from the existing paradigm of Internet standardization and that these proposals would endanger the future of the Internet as an open, innovative platform.

•    Voice your concern about mandatory application of ITU-T Recommendations on the public comment page for the WCIT.  Oppose proposals to make ITU-T Recommendations mandatory by registering your comments here.



Will the White House Executive Order on Cybersecurity Look Like CISPA?

White House officials have signaled recently that the President may issue an executive order on cybersecurity to do by administrative fiat some of what Congress has not (yet?) done through legislation. Key Senators have called for the White House to act.

I haven't seen the draft executive order described in this Open Congress blog post or in this Washington Post story.

But it's important to keep in mind the three worst parts of CISPA from a privacy perspective: (i) it drove a bulldozer through all of the privacy statutes by authorizing ISPs to share customer communications information "notwithstanding any law"; (ii) it empowered companies to share those communications directly with the super-secret military intelligence agency, the NSA; and (iii) it allowed the NSA to use the information it received for any national security purpose.

An executive order from the White House couldn't do the first of these and, given the Administration's position on cybersecurity, probably wouldn't do the other two. It can't drive a bulldozer through the privacy laws because it would need a statutory exception to those laws in order to start the bulldozer. It probably won't do the latter two because the Administration both proposed its own contrary legislation in May 2011 and endorsed the contrary position in the Lieberman-Collins bill.

An executive order on cybersecurity could make some needed changes that are entirely within the control of government. It could, for example, encourage intelligence agencies to declassify more cyber threat signatures and share them with the private sector, and share more classified threat signatures with cleared network operators. It could require agencies to report when they receive cybersecurity disclosures under existing law from companies in the private sector, and make public the extent of such disclosures.

I don't know what to expect in an Executive Order on cybersecurity, and I don't know whether it will be good or bad for privacy and innovation, but don't expect the White House to attempt to enact a CISPA-like, privacy-invading cybersecurity program through executive order. After all, the White House threatened to veto CISPA, in very strong language, in large part on privacy grounds.



House Extends Warrantless Surveillance Law

Over the objections of an array of privacy groups, the House voted to extend the law permitting the government to eavesdrop on international communications--such as email and phone calls--between U.S. citizens and individuals "reasonably believed to be" foreigners living outside the U.S.

The law in question is the FISA Amendments Act, which gives the government broad surveillance powers "conducted without meaningful judicial authorization and without probable cause," according to a letter from the privacy groups.

CDT opposes reauthorization because safeguards to protect the privacy of Americans' communications have not been included in the House legislation.  The Senate is expected to take up the legislation, and some safeguards, this fall.  



CDT Weighs in on Copyright Enforcement Strategy

The Administration's Intellectual Property Enforcement Coordinator (IPEC) is expected to release its new "Joint Strategic Plan" by the end of this year.  Responding to the IPEC's request for comments from the public to assist with developing the new plan, CDT has submitted its recommendations.

The plan faces a substantial challenge in the wake of the bruising battle and public uprising over PIPA and SOPA:  namely, the widespread public perception that the Federal Government's approach to copyright serves a narrow set of corporate interests and ignores important competing values. This colors the debate over copyright policy and, ultimately, threatens to further erode public respect for copyright itself.  That's a risk that copyright holders and enforcers need to take seriously, because dwindling respect for copyright can fuel high levels of infringement, creating a vicious cycle.

What can the Federal Government do about this challenge?  Well, at a minimum, it can ensure that its approach to copyright enforcement and policy is forthright, fair, and respectful of other interests.  As we explain in our comments, that means taking care to fully assess collateral impacts; establishing guidelines and procedures to minimize the risk of collateral damage, especially with respect to domain name seizures; allowing much greater transparency in trade negotiations over copyright; and supporting affirmative initiatives or reforms that focus on the copyright regime from the point of view of Internet users or other stakeholders, rather than just the major copyright industries.

Our comments also recommend some core principles:  target enforcement carefully on true bad actors; don't call for new network-policing roles for Internet intermediaries; focus on effective and efficient use of existing legal tools, rather than calling for new ones; and set realistic goals.

Finally, our comments discuss the advantages and risks of trying to reduce copyright infringement through voluntary, collaborative efforts between copyright holders and other parties in the Internet ecosystem.  Actions that focus on educating users about copyright pose limited risks, since they generally won't cause significant harm even if applied in an overbroad or imprecise manner.  Actions that put private parties in the quasi-judicial role of imposing concrete sanctions are much more problematic, particularly when they are the product of an industry-wide or multi-party framework that arguably is a stand-in for government.  CDT recommends distinguishing between different kinds of voluntary action and emphasizing the importance of broad stakeholder participation and procedural safeguards.

We'll see how our recommendations fare.  Whether or not they find their way into the written strategy, however, we think our principles and recommendations have a key role to play in enabling copyright policy to chart a sound course that the public can accept and respect.



Shielding the Messengers: CDT Comments on Notice-and-Action

This post is part of our ‘Shielding the Messengers’ series, which examines issues related to intermediary liability protections, both in the U.S. and globally. Without these protections, the Internet as we know it today–a platform where diverse content and free expression thrive–would not exist.

Any guidelines to harmonize "notice and action" policies for content hosts must focus on maintaining strong liability protections and providing effective safeguards against abuse. That was the message CDT reiterated to the European Commission last week in our response to its public consultation on the issue. These comments (and appendix) are the latest in a series of contributions CDT has made since the Commission first picked up the issue in February, when CDT offered a set of principles to guide the inquiry.

Unlike in the US - where the Digital Millennium Copyright Act lays out a specific notice-and-takedown procedure that hosts must follow to be shielded from copyright liability - the E-Commerce Directive (ECD) that guides EU states' intermediary liability protections offers only a higher-level framework. It covers all content, and has been implemented in a wide variety of ways in different countries, some adopting formal notice-and-takedown systems, others not. The Commission is considering issuing guidelines to help harmonize the processes across the EU.

CDT's comments start from the proposition that liability protection should be available to the full range of content hosts that are relevant on today's Internet, and that any notice-and-takedown system needs to target illegal content with specificity and care. We stress that protection should be unequivocally extended to "active hosts" and that so-called notice-and-staydown obligations are inconsistent with the ECD's prohibition on general monitoring obligations. And CDT believes that private notice-and-takedown should apply only in areas where unlawful conduct is straightforward to assess. Allowing notice-and-takedown for defamation and other content whose legal assessment requires difficult factual and legal determinations leaves far too much opportunity for abuse.

A major focus of our comments is what steps can be taken to prevent abuse of notice-and-takedown where it is implemented. Abuse and mistakes under the DMCA and the threat they pose to online free expression have been well documented by CDT and other advocates. To prevent actions that result in the takedown of lawful material, we recommended a combination of strict requirements for notices, transparency requirements to expose abuses, and strong appeal and counter-noticing procedures - including the availability of meaningful penalties for those who send abusive, misleading, or negligent notices.

Lastly, the comments urge the Commission to consider "actions" other than takedown. Although the questionnaire focused on takedown, it is just one among a wide range of actions that can help address illegal content. Notice-forwarding by access providers, for example, can alert users of the allegations being made and the possibility of legal action against them - without the risk that lawful content will come down by mistake before a user has the chance to respond or a court has the chance to intervene.



A Few Concrete Recommendations for TPP

It's hard to offer input on proposals you haven't been allowed to see, but lots of advocates tried their best at Sunday's Trans-Pacific Partnership (TPP) stakeholders' forum. 

I was there for CDT, setting out a few concrete recommendations based on previous leaks of draft text for TPP's intellectual property chapter. Of course, as we noted in our comments to the Intellectual Property Enforcement Coordinator last month, there's a real problem when stakeholders' ability to offer meaningful comments depends almost entirely on leaks.

Confidential negotiations may be the norm for trade policy in general, but they are ill-suited to broad policymaking in an area like intellectual property, with so many diverse stakeholders.



It Takes a Village to Defend a Network

Defending networks from malicious hacking exploits depends in large part on the voluntary, cooperative efforts of network operators, device makers, and Internet users.

Today the Broadband Internet Technical Advisory Group (BITAG) -- a group of technical experts dedicated to building consensus about broadband network management -- has released a series of targeted, balanced recommendations to help stifle an emerging type of network attack. That attack has been used in recent years by the hacker collective Anonymous (among others) to swamp web sites with traffic, knocking them offline.

The attack, described below, exploits two Internet vulnerabilities: the failure of some network operators to apply recommended protections that prevent users from impersonating (“spoofing”) other users’ IP addresses, and the lack of adequate authentication in certain home router software that implements the Simple Network Management Protocol (“SNMP”).

The attack begins with an army of zombie computers (a “botnet”) that the attacker can control. The attacker instructs the computers in the botnet to send traffic to users whose home routers may contain the SNMP vulnerability. That traffic is sent with a spoofed return address to make it look as if it came from the web site that is the intended victim (say, www.example.com). When the users’ home routers respond, their responses flood www.example.com, taking it offline.

BITAG recommends a set of highly targeted actions that network operators, device makers, and end users can take, together and separately, to help prevent this kind of attack in the future while having minimal effects on legitimate uses of the network. The set of suggestions reflects just the kind of focused, balanced, user-empowering response to network management and security issues that we would hope to see out of voluntary forums like BITAG. The recommendations fall into four categories:

•    Secure SNMP – or leave it turned off in the first place. Many home networking devices are shipped with an insecure version of SNMP turned on by default, even though it sees little use among residential end users. BITAG makes a number of recommendations to encourage the use of secure versions of SNMP, to discourage insecure SNMP from being on by default, and to allow users to turn off SNMP themselves.

•    Prevent address spoofing. BITAG suggests that network operators take reasonable steps to prevent address spoofing on their networks – a well-understood best practice in the engineering community.

•    Filter or block SNMP traffic if necessary, but do so in a targeted, transparent, user-friendly way. Some network operators may feel the need to simply block SNMP traffic (in the middle of an attack, or perhaps on a more persistent basis) in a similar fashion to how some operators already block certain network ports used to send spam. BITAG recommends a number of strategies for limiting the collateral damage from such filtering/blocking and for ensuring that users understand what is happening and how to have SNMP re-enabled if they wish.

•    Share attack information. When done with an eye towards safeguarding customer privacy, network operators and attack victims can help mitigate attacks by sharing attack traffic information with each other, other network operators, security researchers and product vendors, and device makers. BITAG suggests a limited set of specific information that may be useful for sharing.
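The anti-spoofing recommendation is, in essence, ingress filtering: an operator drops any packet whose source address does not belong to the address block assigned to the customer port it arrived on, so spoofed traffic never leaves the access network. A minimal sketch of that check (the port names and prefix assignments are hypothetical):

```python
# Illustrative sketch of ingress filtering against address spoofing.
# The port-to-prefix assignments below are hypothetical examples.
from ipaddress import ip_address, ip_network

PORT_PREFIXES = {
    "port-1": ip_network("203.0.113.0/28"),     # customer 1's assigned block
    "port-2": ip_network("198.51.100.64/28"),   # customer 2's assigned block
}

def permit(port: str, src: str) -> bool:
    """Accept a packet only if its source address falls within
    the prefix assigned to the port it arrived on."""
    prefix = PORT_PREFIXES.get(port)
    return prefix is not None and ip_address(src) in prefix

print(permit("port-1", "203.0.113.5"))    # True: source matches the port's prefix
print(permit("port-1", "198.51.100.70"))  # False: spoofed source, packet dropped
```

A packet claiming the victim's address as its source would fail this check at the first hop, which is why the attack described above depends on operators who have not deployed such filtering.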

BITAG’s work shows that while the debate about legislating for cybersecurity rages on, experts from across the Internet industry and the public interest community are working together to defend against the latest network attacks while ensuring minimal impact on legitimate network use.



Tuesday, September 25, 2012

Reconsidering Location Tracking

Yesterday, we joined the ACLU, EFF and EPIC in calling on the 6th U.S. Circuit Court of Appeals to rehear U.S. v. Skinner, the GPS cell phone location tracking case. A panel of the 6th Circuit ruled that tracking a cell phone's location by repeatedly "pinging" the phone over a three-day period did not require a warrant. The amicus brief we filed yesterday asked the full Sixth Circuit to consider this issue in light of the concurring opinions filed by five justices in U.S. v. Jones, the U.S. Supreme Court case that came down earlier this year.

We also pointed out that the panel's legal conclusion was based on a material misunderstanding:  that cell phones normally "give off" GPS location information.  Instead, mobile providers have to take a special step -- sending a signal to the phone to direct it to produce the GPS data.  Unless they take that step, there is no location data at the provider for the government to seize.  As a result, the court should not have analyzed the case under the third party records doctrine, which says a person has no Fourth Amendment interest in information shared with a third party.



The Rise of the Internet Defense League

Turning the force of a historical moment into a mere historical footnote takes little more than squandering momentum. A loose coalition of Internet companies, advocacy groups and individuals is working to ensure that doesn't happen in the wake of SOPA/PIPA. Enter the Internet Defense League (IDL), which officially launched today.

The IDL, of which CDT is a member, fits the vision that this new movement should think and act like an Internet start-up.  That vision was offered last month by CDT President Leslie Harris during a keynote speech at the Personal Democracy Forum.  Harris noted that the nascent movement is seeking to define itself and cultivate the relationships needed to sustain its efforts:

"We need to give ourselves the space to innovate, experiment and evolve. We have to figure out how to meld together our skills and strategies in the service of our common goal. We need to form and test new partnerships, build our collective knowledge and deepen our trust in each other."

While the idea of the IDL was germinating, another effort sprang up, spun from the energy created by the SOPA victory: The Declaration for Internet Freedom. In a CDT blog post, Kevin Bankston, director of CDT's Free Expression Project, said of the Declaration:

[T]he five core principles… are consistent with the values that CDT has promoted for nearly twenty years in its ongoing mission to 'keep the Internet open, innovative, and free.' The Declaration celebrates and seeks to protect the core features of the Internet that have made it such a powerful global platform for free expression and innovation, the same features we recently outlined in the wake of the SOPA debate in our paper “What Every Policy Maker Should Know About the Internet”: open, decentralized, and interoperable, with no gatekeepers.

The Declaration is meant only as a compass point -- its language is not set in stone and debate is encouraged.  It is the defense of principles like those in the Declaration that forms the foundation of the IDL.  The IDL describes itself as "a network of people and sites who use their massive combined reach to defend the open internet and make it better. Because it can sound the alarm quickly to millions of users, people are calling it 'a bat-signal for the Internet.'"

Not every member will sign on to all the actions that flow from the League.  The IDL says that its members will choose "on a case-by-case basis" what actions they will participate in.  And that's how it should be.  

No one involved in this burgeoning net freedom movement should claim that there are no rough edges, nor that the path to efforts such as the creation of the IDL or the drafting of the Declaration is frictionless. The movement is still in "beta mode." In the process, new relationships will be formed, strategies will be tightened, new muscles flexed, and the adrenaline of advocacy will be channeled into a skill set that's ready and willing to step up and defend the Internet whenever that call goes out.



Doubling Down with Double-Speak: ETNO Responds to Critics

When is a request for regulatory intervention not a request for regulatory intervention? Just ask the European Telecommunications Network Operators’ Association. Last week, ETNO published a response to critics of its controversial proposal to amend the International Telecommunication Regulations (ITRs), attempting to stress that the association was not asking for governments to weigh in on commercial Internet interconnection agreements.

The problem is, that’s exactly what ETNO is asking for. The ITRs are a global treaty that delineates the regulatory authority of the ITU. ETNO’s proposed amendment – roundly criticized by CDT, the Internet Society, and others when it was issued in June – calls for extending the ITRs’ reach beyond traditional telcos to cover a potentially wide range of entities from across the Internet. The proposal expressly calls for using the ITRs to establish a new interconnection system that would differ in key ways from the commercial interconnection agreements that have been privately negotiated to date. ETNO would have the ITRs specify that network operators should structure their interconnection agreements to include end-to-end Quality of Service (in other words, pay-for-priority) and the principle of “sending party network pays” (in other words, pay-for-delivery). As we wrote in June, acceptance of these proposals would mark a dramatic shift in the market for interconnection, upending basic principles of Internet neutrality and resulting in higher costs for end-users and potentially decreased access to information in developing economies.

ETNO attempts to frame its proposal as one that would simply remove barriers to the types of deals its members want to make with content providers. But neither the original proposal nor the recent paper points out exactly what those barriers are. ETNO cites the rise of large content providers, increases in Internet video traffic, and the proliferation of connected devices to argue that the market is changing, but presents no evidence that the market for interconnection isn’t functioning properly – and certainly no evidence that the ITRs themselves stand in the way of anything ETNO members want to do. It seems more likely that ETNO members are trying to use the authority of the ITU to force more favorable agreements than they could negotiate absent regulatory intervention.

Indeed, some of the “evidence” cited in the paper actually cuts against ETNO’s proposal. The paper cites the growth of content-delivery networks as evidence of growing demand for higher quality of service.  But as we noted in our June critique, ETNO’s proposal would most likely reverse the trend toward CDNs and efficient content-localization. “Sending party network pays” would create strong incentives for last-mile networks to opt against local content caching in order to collect compensation from sender networks forced to re-send the same content again and again. 
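The caching incentive can be made concrete with a little arithmetic. The figures below are illustrative assumptions, not numbers taken from ETNO's paper or our June critique:

```python
# Hypothetical figures: under "sending party network pays," a last-mile
# network collects a fee each time a sender's network delivers content
# across the interconnect. Serving repeat requests from a local cache
# therefore forgoes that per-delivery revenue.
requests = 1_000           # repeat requests for the same popular video
fee_per_delivery = 0.02    # assumed per-delivery fee, in dollars

revenue_if_cached = 1 * fee_per_delivery              # delivered once, then served locally
revenue_if_not_cached = requests * fee_per_delivery   # re-sent (and billed) every time

print(revenue_if_cached, revenue_if_not_cached)  # 0.02 20.0
```

On these assumptions, the last-mile network earns a thousand times more by refusing to cache, which is precisely the perverse incentive described above.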

ETNO additionally claims that a system where content providers have to pay distant networks for access to customers will “enrich the Internet ecosystem” and increase content diversity. The more likely result, however, is that large entrenched players with the ability to pay will do so, cementing their dominance and raising huge new barriers to potential competitors. The Internet’s power to drive innovation comes from its low barriers to entry – once you plug in, you and your service can reach everyone else on the network of networks. While there are some disparities (not every service can be fully deployed on a global CDN on day one, for example), a system that effectively forced new entrants to “pay to play” would dramatically upset that competitive environment.

Nobody disputes that telcos need to generate revenues sufficient to sustain investment. But it’s just not true that, as ETNO asserts, “Over the Top players . . . are not contributing to network investment.” Providers of popular online content and services already invest huge amounts of money in their own networks, bandwidth, and CDN deployment in order to stay competitive as traffic grows; they interconnect with telcos via freely negotiated agreements; and they fuel customer demand for broadband Internet access. European member states are meeting this week to discuss regional proposals to amend the ITRs. Yesterday, the chair of ETNO tweeted that the ETNO proposal would be discussed at the next regional meeting in October. Attendees of that meeting – and ITU member states more broadly – should reject the idea of trading away the unified, global nature of the Internet in hopes of generating some incremental revenue for established telcos.



Keep Partisan Politics Out of Internet Policy

Is there anything to be gained by injecting the strangling kudzu of partisan politics into Internet policy? I strongly doubt it, and that's why I am frankly mystified by the bombastic and highly partisan July 26 opinion piece in the Atlantic by Fred Campbell of the Competitive Enterprise Institute (CEI), calling on conservatives to join the fight for Internet freedom. His hypothesis seems to be that progressives are "winning" in their efforts to subvert the open Internet and deliver it into the clutches of "government control." Really?

There has always been a strong bipartisan consensus in favor of a lightweight policy approach to the Internet. The key policy decisions that made the U.S. Internet an engine of innovation and democracy have almost always been made on a bipartisan basis. It was the bipartisan duo of then-Representatives Wyden and Cox who, more than 15 years ago, drafted the seminal law (now known as Section 230) that enabled Internet innovation to flourish by establishing strong liability protections for the Internet's intermediaries. And earlier this year, it was Representative Issa and Senator Wyden – backed by a strong bipartisan coalition – who led the successful opposition to the Stop Online Piracy Act (SOPA). There is no basis to suggest that this longstanding consensus to keep the Internet above the political fray has been lost.

That is not to say that there are no disagreements. In a community where there are more opinions than there are issues, robust debate is the norm. But disagreements rarely break neatly along partisan lines. Let's face it: the issues have become far more complex since the days when the Internet ran on top of a regulated, "common carriage" phone network. In today's environment of unregulated broadband, ubiquitous mobile connectivity, and truly global reach, anyone who thinks there are easy fixes for policy challenges isn't thinking very deeply.

What seems to have sent Campbell over the partisan edge are the "Open Internet rules" commonly known as net neutrality, which he sees as a precursor to a government takeover of the Internet. Here is where we cannot paper over disagreements. Those of us who believe the rules are necessary want to ensure that large network operators do not use their position to exercise "gatekeeper" control. Those who oppose such rules insist that centralized gatekeeping by governments, not companies, is the only real threat. It's a fair debate, but to suggest that where one stands on the issue reflects diametrically opposed agendas—regarding the general relationship of government to the Internet—is to fundamentally misunderstand the nature of the debate.

What I can't work out is why anyone who truly cares about the open Internet would pick the warm afterglow of the anti-SOPA campaign to launch a highly inflammatory attack on longstanding allies.

I get the desire to get conservatives more engaged in the policy debates.  The Campbell piece was timed to build on the so-called Paul "manifesto" that set out Libertarian principles for the Internet, which can be boiled down to "no regulation ever." It too sought to rile up the right by shoving the rest of us into a commune somewhere for "Internet collectivists." But is playing the partisanship card really the only way to get the political right more engaged in the issues? I hope not.

We have done pretty well in working out a policy path over the years without putting partisanship first. Defense of the open Internet needs conservatives as well as progressives, but not if the only lens is partisan politics.

One thing is sure: the grassroots groups that organized Internet users to protest against SOPA did not for a New York minute see the bill in partisan terms. They viewed SOPA as an existential threat to the future of the open Internet and responded accordingly. Our common interest in preserving the Internet for innovation and freedom is not well served by forcing the issue into the partisan muck.

Campbell doesn't really know what to do with the SOPA campaign, so he dismisses it as an aberration – a one-night stand likely to bring remorse at first light. But the day after brought new energy, not remorse; it brought resolve to take the SOPA moment and grow it into a sustained movement for the open Internet. Rather than celebrating SOPA's long tail, Campbell insists on seeing deep political cracks and progressive taint in the effort, calling out efforts like the Declaration of Internet Freedom and the Internet Defense League for particular scorn.

In order to pigeonhole the Declaration of Internet Freedom as a "progressive" endeavor, he must ignore the broad, big-tent, post-ideological principles that it articulates – "don't censor the internet," "protect privacy" and "protect the freedom to innovate" – and misapprehend the document's basic purpose.  As I've written before, the goal of the declaration isn't to define "Internet freedom" for all times and for all people, but to jumpstart a broader conversation about what Internet freedom means, a conversation that will include people and organizations from every part of the political spectrum and every part of the globe.  It is not a policy document but an organizing tool, meant to unite those who care about preserving the economically and politically liberating power of the Internet, regardless of political party or geography.

The principles articulated in the Declaration that we signed are broad enough – and intentionally so – to be acceptable to progressives and free marketers, conservatives and liberals, Democrats and Republicans.  We're seeking to provide a rallying point for a wide range of Internet freedom supporters, even though we will sometimes disagree on more specific policy prescriptions. And in that respect, the declaration has been successful: certainly, any document that both Ron Wyden and Darrell Issa can sign has the power to bridge partisan divides and bring together a wide variety of voices and perspectives.

I am not sure why the Atlantic would publish such an inflammatory piece, but it was deliciously ill-timed to appear right before the now-derailed Senate cybersecurity bill was headed to the floor. As Campbell spun conspiracy theories, Sens. Franken and Paul came together to draft a critical amendment to the cybersecurity bill to strike language that gave companies new authority to monitor and possibly block our private communications. Groups across the political spectrum, from TechFreedom to the ACLU and, yes, CEI, Campbell's own organization, strongly supported the amendment.  At the same time, a politically diverse coalition of organizations, companies and trade associations came together to urge the Senate to take up an amendment offered by Senator Leahy to require a warrant for government access to digital content. The Leahy amendment was the product of years of work by the Digital Due Process coalition, which has been working to reform government access laws for the Internet. And yes, CEI and many other conservative groups are in DDP and on the letter along with CDT, EFF and ACLU.

And of course the most critical "big tent" effort currently under way is the upcoming battle to prevent the International Telecommunication Union (ITU) from claiming new authority over Internet governance. Here too, I can find little light between us.

The point here is not to pretend that sharp differences don't exist. It is simply to ask what is to be gained by urging the Internet freedom community—all committed to openness, innovation and freedom—to retreat to warring ideological camps. We've tasted victory; we know what can be accomplished if partisanship is set aside. There is an Internet to defend. We should get on with it.



Why Fibbing About Your Age Is Relevant to the Cybersecurity Bill

[Editor's Note: This is one in a series of blog posts from CDT on the Cybersecurity Act, S. 3414, a bill co-sponsored by Senators Lieberman and Collins that is slated to be considered on the Senate floor soon.]

Congress is about to decide whether it is a crime to violate the terms of service governing your use of Gmail, Facebook, Hulu, or any other online service.

One of the amendments to the Cybersecurity Act that the Senate is likely to take up this week would substantially increase already severe penalties for violations of the Computer Fraud and Abuse Act (CFAA), an important law designed to prevent malicious computer activity, such as hacking.  The amendment would eliminate provisions setting lower sentences for first time offenders, establish mandatory minimum sentences for many offenders, make computer crimes "racketeering" predicates, and subject homes to civil asset forfeiture for computer crimes committed inside.  The problem is, there is widespread agreement that the statute is already overly broad, sweeping in common online conduct, and the Department of Justice has interpreted it in a way that turns many – maybe most – Internet users into potential criminals.

A fix has been proposed, but the Justice Department is opposing it.  The DOJ wants all the enhanced penalties, without narrowing the scope of the bill to focus on true hacking.

The CFAA makes it a crime to use a computer "in excess" of "authorization." This has been read to mean that it is illegal to use a computer in a manner that violates contractual agreements.  People regularly use websites with broad and ambiguous "Terms of Service" prohibitions, and violations of terms of service are commonplace.  For example, Gmail's Terms of Service bar users younger than age 13, but there is little doubt that thousands of pre-teens lie about their age so they can use Gmail.  Under the reading of the Justice Department, they are all criminals and should be subject to the law's harsh penalties.

As another example, the 150 million users of Facebook in the U.S. agree to a Statement of Rights and Responsibilities that bans:

•    Accessing someone else's Facebook account, even with their permission
•    Sharing your Facebook password, or letting anyone else access your account
•    Posting any false personal information on Facebook
•    Using Facebook "to do anything malicious"
•    Using Facebook "to do anything misleading"

Any of these actions would constitute a computer use that is in excess of authorization.  As such, in the view of the Department of Justice, each action is a candidate to be prosecuted as a federal crime punishable by a fine, asset forfeiture, or prison time.

Fortunately, lawmakers are attempting to correct this problem and ensure that Americans cannot be charged with a felony for actions that merely violate a website's Terms of Service.  In September, the Senate Judiciary Committee unanimously adopted an amendment by Senators Grassley (R-IA), Franken (D-MN) and Lee (R-UT) to fix the statute so that most terms of service violations are not CFAA crimes.  Organizations and individuals from across the philosophical spectrum endorsed their amendment.

The Grassley/Franken/Lee language has been incorporated into the larger CFAA amendment mentioned earlier, which Senator Patrick Leahy has proposed to the Cybersecurity Act, soon to be taken up by the Senate.

Weighing in on the issue is a group of individuals and organizations from across the philosophical spectrum; CDT is among that group. The group sent a letter today to Senate leadership highlighting the flaws noted earlier and asking that, should the Leahy CFAA amendment come to a vote, it include the Grassley/Franken/Lee provisions, which they called "an important step forward for security and civil liberties."

However, the Justice Department is trying to strip out the common-sense amendment of Senators Grassley, Franken and Lee.  The CFAA is an important law, but Congress should make sure that it does not criminalize fibbing about your age on the Internet.



Tracking Big Foot: Why GPS Location Requires a Warrant

In a case that raises as many questions as the average sighting of Big Foot, a panel of the Sixth Circuit Court of Appeals ruled earlier this week that law enforcement officers didn't need a warrant to obtain GPS location information generated by a suspect's cell phone.

The court’s analysis has been roundly criticized as legally incorrect, lazy, shallow, and vague. I’d like to focus on one aspect of the case that the court missed: the Department of Justice recommends that police obtain warrants in the scenario presented by this case, it does so for good reason, and the government had sufficient facts to obtain the warrant that the Department of Justice recommends investigators seek.

In this case, U.S. v. Skinner, law enforcement officers obtained an order that allowed them to monitor for 60 days the location of a pre-paid cell phone they had good cause to believe was being used by "Big Foot," the nickname given to a trucker eventually identified as Melvin Skinner, who they alleged was transporting marijuana.  They obtained a court order under which the provider, Sprint/Nextel, acting at the behest of law enforcement, pinged the phone repeatedly so it would reveal its location over a three-day period, and eventually activated the phone’s GPS functionality to locate the phone’s GPS coordinates.   (Sprint/Nextel recently developed a web portal through which law enforcement can do this automatically for the duration of the court authorization, without contacting the provider each time officers ping the phone.)

The court found that there was "… no Fourth Amendment violation because Skinner did not have a reasonable expectation of privacy in the data given off by his voluntarily procured … cell phone."  But, as Jennifer Granick points out, cell phones don’t normally "give off" the kind of GPS location data that law enforcement used to locate Skinner.  Unless the user is employing location services – and Skinner wasn’t – the GPS location data has to be created.  In this case, the provider, under court order, remotely activated the GPS function of Skinner's phone so the police could track him.

There's a critical difference between GPS location information and the cell tower location information a mobile phone creates during normal use.  The GPS data in this case was created at the request of law enforcement for tracking purposes, not through the normal use of the mobile phone. The GPS data doesn’t even exist until the provider prompts the device to deliver its GPS location to the provider so law enforcement can access it.  In contrast, providers maintain cell tower location information for business reasons.  Because providers do not normally maintain GPS location information, and because it was not voluntarily conveyed to the provider, it is not a "business record" and does not fit into the third party records doctrine, which says that a person has no Fourth Amendment interest in information that is voluntarily revealed to, and held by, a third party.  While the third party doctrine should probably be re-examined, for now we have to live with it.  But it does not reach GPS data created by providers at the behest of law enforcement; for that data, we retain our Fourth Amendment rights against warrantless GPS tracking.

Blind Eye to Justice

Apparently recognizing that GPS is different, the Justice Department recommends that prosecutors obtain a warrant to get GPS location information from mobile communications service providers.  For example, in this PowerPoint presentation, the Associate Director of the Justice Department's Office of Enforcement Operations recommends that prosecutors use search warrants to get prospective GPS location information (referred to as "lat/long data," or latitudinal and longitudinal data) for constitutional, not statutory, reasons, and because "anything less presents significant risks of suppression."  In addition, a Justice Department Associate Deputy Attorney General testified in April of last year that when the government seeks to compel disclosure of prospective GPS coordinates generated by cell phones, it relies on a warrant.

The Sixth Circuit missed this point entirely.  It blithely rejected Skinner's Fourth Amendment claims and implicitly bought into the government's argument that orders under the Stored Communications Act provision at 18 USC 2703(d) can be used to obtain prospective location information that has never been stored.  It did not consider whether the information sought was within the third party records doctrine and it cited no statutory authority for the proposition that the government can compel a provider to create the GPS information for the government to seize.  

Perhaps most ironically, it seems pretty clear that the government had facts establishing probable cause and could have obtained a warrant if it had applied for one.  As the concurring opinion in Skinner noted, law enforcement officials were watching the drug operation for months, had recorded conversations about an upcoming drug run, learned that the courier was carrying a particular phone that they could track, and that a half ton of marijuana was in transit.  

A warrant requirement for location information, as advocated by the Digital Due Process coalition, would still mean a drug courier like Skinner would get caught.  If followed, a statutory warrant requirement decreases the chances a criminal would elude jail because the seized evidence would not be at risk of suppression, as it is now for Big Foot if he appeals this decision. 



Shielding the Messengers: (Court-Ordered) Notice-and-Takedown, the Chilean Approach

This post is part of our ‘Shielding the Messengers’ series, which examines issues related to intermediary liability protections, both in the U.S. and globally. Without these protections, the Internet as we know it today–a platform where diverse content and free expression thrive–would not exist.

In 2010, Chile updated its copyright law with a novel approach for protecting Internet intermediaries from liability for their users’ copyright infringement.  Though modeled on the US Digital Millennium Copyright Act (DMCA), the law differs in one crucial respect: While a cornerstone of the US law is its private notice-and-takedown system, the Chilean law requires that rightsholders secure a court order before content must be taken down.

Today, CDT released a short report on the Chilean law, examining the balance the law strikes among the rights of copyright-holders, intermediaries, and Internet users.  As we explain in the paper, the law offers greater certainty to intermediaries as to when content should be removed, and court oversight may well prevent some of the mistakes we have seen under the US system.

On the other hand, some rightsholders have expressed dissatisfaction with the law, since having to go to court significantly raises the burden on them when requesting takedowns.  Despite these objections, the Chilean Congress repeatedly rejected amendments that would have allowed for DMCA-style private takedowns, believing that the approach of relying on court orders was best for ensuring Internet users’ constitutional rights were protected.

CDT believes the balance struck by the DMCA remains viable in many respects. (We do, however, caution against extending the DMCA’s notice-and-takedown regime beyond copyright.)  Nonetheless, the Chilean law provides an important and interesting new model worth considering.  It remains to be seen, as courts implement the law, whether it in practice provides reasonable protection for rightsholders, intermediaries, and users.  Anecdotally, we have heard from colleagues in Chile that no one has yet sought a court order.  Instead, it seems rightsholders may be taking advantage of notice-forwarding requirements (see below) in the law to communicate directly with users to request the removal of infringing content.

Notice-forwarding requirements, whereby ISPs and content hosts are required to pass along notices of apparent or alleged infringement to subscribers, present yet a third model for dealing with online copyright infringement.  As CDT commented when US ISPs announced the Copyright Alert System, notice-forwarding can serve an important educational function and has the potential to deter a significant portion of online infringement.  Canada’s copyright reform act, passed earlier this summer, followed this approach.  We’re currently reviewing the law, and it will be the subject of a future report.

