The “killer robot” used by Dallas police to take out the shooter: links

On July 7th, the suspect in the Dallas shooting of police officers during a peaceful protest of recent police violence died when law enforcement deployed a remote-controlled bomb-disposal “robot” (the tech doesn’t appear to have used any of the sensing or thinking required by the contemporary sense-think-act definition of a robot) carrying an explosive device. Here are some great links covering the issues:

Background: http://

Legal: http://

Ethical: https://

Future implications:


Robotic Jerks

Microsoft’s Twitter chatbot, @TayandYou, lasted only a day on the platform before the account was shut down. The AI was supposed to be modeled after a teenage girl (Taylor Swift-loving and self-conscious), but after one day of exposure to the open, global internet it was transformed into a racist, sexist, incestuous, sexually aggressive bot.

The cultural differences are fascinating when you look at a similar Microsoft AI bot in China, where the system has been operating for two years without turning into a raging nut case – the bot refrains from becoming a monster.

Tay was such a terror that it almost slipped my mind: my brother Ben and I wrote an article comparing how cyberlaws in the US and EU would deal with robot jerks – specifically robot liars. Check it out here.

Reading List: History of Privacy, Surveillance, & Data Protection

This is what I have so far for books:

Lawrence Friedman, Guarding Life’s Dark Secrets: Legal and Social Controls over Reputation, Propriety, and Privacy

Samantha Barbas, Laws of Images: Privacy and Publicity in America

Simone Browne, Dark Matters: On the Surveillance of Blackness

Michael Schudson, The Rise of the Right to Know: Politics and the Culture of Transparency, 1945-1975

David Flaherty, Protecting Privacy in Surveillance Societies: The Federal Republic of Germany, Sweden, France, Canada, and the United States

Colin Bennett, Regulating Privacy: Data Protection and Public Policy in Europe and the United States

Gloria González Fuster, The Emergence of Personal Data Protection as a Fundamental Right of the EU

Laura Donohue, The Future of Foreign Intelligence: Privacy and Surveillance in a Digital Age

Daniel Solove, The Future of Reputation: Gossip, Rumor, and Privacy on the Internet

Christian Parenti, The Soft Cage: Surveillance in America From Slavery to the War on Terror

Robert Ellis Smith, Ben Franklin’s Web Site: Privacy and Curiosity from Plymouth Rock to the Internet

David Lyon, Electronic Eye: The Rise of Surveillance Society

Looking for more! Please send to @megleta

When Companies Study Their Customers: The Changing Face of Science, Research, and Ethics

Heading to Boulder (YES!!!) for the annual all-day Silicon Flatirons privacy conference on Thursday, Dec 4th.

Four panels inspired by recent controversial research endeavors, findings, and disclosures by private companies like Facebook and Target will be full of dynamic discussants and presenters. This post is my prep – the best collection was put together by James Grimmelmann here.

The first panel will debate whether we should be alarmed by the type of human social science currently taking place on online services. Moderated by Paul Ohm, this panel features the two main academic voices in the debate. Zeynep Tufekci has posted one of the most read (by academics at least) responses to the Facebook study, arguing that information asymmetry, ubiquity, opacity, and lack of choice create a dangerous research environment, and explaining how to frame the issue properly (I also think that all the law profs in attendance would find her post on research methods interesting and valuable). Tal Yarkoni‘s response is presented here (and the disagreement is expanded upon in Zeynep’s comments). He argues that nudges are and have been part of the world for a long time (subliminal advertising was investigated by the FCC in the 1970s after a widespread freak-out occurred, and it’s apparently banned on television in the UK) and that these types of projects are not inherently good or bad – nudges can help us give to charities, vote, and make healthier choices. I’m not sure what Matthew Boeckman, Vice President of DevOps at Craftsy, is going to say, but Craftsy looks awesome. Kashmir Hill, Senior Online Editor at Forbes (and my favorite privacy writer), will hopefully discuss her thoughts on packaging this research under “improving services.” Another Forbes author wrote that you should leave FB because of the emotional manipulation research. Rob Sherman, Deputy Chief Privacy Officer at Facebook, will surely be defending these pursuits at FB and quelling fears by describing internal safeguards. A previous explanation from the researchers is here, ending with: “While we’ve always considered what research we do carefully, we (not just me, several other researchers at Facebook) have been working on improving our internal review practices. The experiment in question was run in early 2012, and we have come a long way since then. Those review practices will also incorporate what we’ve learned from the reaction to this paper.” Looking forward to getting the updates.

The second panel (moderated by YT) will focus on the changing nature of science and research, with particular attention to the public versus private divide. Ed Felten will be presenting (I think) on the impact of the gap between industry and research practices, namely that it will drive a wedge that prevents company researchers from being able to publish their work and that it will lead to academic researchers evading the cumbersome IRB process by collaborating with companies. Chris Calabrese, newly minted Senior Policy Director at the CDT, will be responding. It looks like the CDT’s take is that when user testing rises to the level of changing a user’s experience or engaging in analysis of private information that users don’t reasonably expect a service provider to examine, users should be actively notified. Long ago (2008), Aaron Burstein, now Attorney Advisor for Commissioner Brill at the FTC, argued that ECPA needed a research exception to permit cybersecurity research and that corporate research should be vetted internally. One of my favorite humans, Jill Dupre, Associate Director of ATLAS in CU Engineering, will offer insight into innovative technology studies research collaborations and models.

The third panel will be moderated by Harry Surden and will consider whether “informed consent” is a viable concept in the big data age. James Grimmelmann told the Washington Post, “What Facebook and OkCupid did wasn’t just unethical. It was illegal.” I hope that’s what he’ll be presenting on. In Wired, Michelle Meyer criticized the way the findings were presented, explaining that they overstate what the researchers could possibly have learned from the study: “The fact that someone exposed to positive words very slightly increased the amount of positive words that she then used in her Facebook posts does not necessarily mean that this change in her News Feed content caused any change in her mood.” Michelle will also be presenting – bringing in unique expertise as a bioethicist. Personally, I want to know whether not being a FB user is even an effective form of denying consent – just because I’ve never had a FB account doesn’t mean I haven’t been affected by the practices of the company. Discussants include Claire Dunne, the IRB Program Director at CU, and Janice Tsai, Global Privacy Manager at Microsoft Research.

The fourth panel will be moderated by my favorite Colorado attorney, Nicole Day, and will look at the institutions for ethical review, with a particular focus on the history and current status of IRBs. If consent is out, then IRBs all over the place make sense – if they can do any better than individuals at assessing possible harms and outcomes. Omer Tene will be presenting (I think) on corporate IRB-like processes that are already in place and will spread, as well as what those processes should look like. Attorneys like panelist Jason Haislmaier (who is also a CU Law adjunct) are likely the people companies will and do consult on these issues. Ryan Calo, who was writing about this exact subject at least a year ago, provided the NYTimes the following:

“This is a company whose lifeblood is consumer data. So mistrust by the public, were it to reach too critical a point, would pose an existential threat to the company,” said Ryan Calo, an assistant professor at the University of Washington School of Law, who had urged Facebook to create a review panel for research. “Facebook needs to reassure its users they can trust them.”

Also contributing to this last panel will be FTC Commissioner Julie Brill, who is also doing a chat with Paul between the third and fourth panels.

Can’t wait!

EU Right to be Forgotten Case: The Honorable Google Handed Both Burden and Boon

No doubt Google does not feel it received a boon after the Court of Justice of the European Union (CJEU) established a “right to be forgotten” on the Internet.  This ruling gives individuals the right to request the removal of reputation-harming links from Google’s search engine.

What salacious, reputation-harming information was at the heart of this quest to cleanse Google’s search? Alas, it was a rather mundane notice of Mario Costeja González’s real estate auction to pay off a social security debt, published in a newspaper in 1998 – pretty boring stuff by internet standards. The law at issue here was the 1995 EU Data Protection Directive. The Directive orders EU member states to grant their citizens the right to object to the further processing of personal information by a data controller and the right to erasure when data is inaccurate or incomplete. González first took his complaint to the Spanish data protection agency (AEPD), claiming that both the newspaper and Google violated his data rights by continuing to process the information after he requested its removal. The complaint against the newspaper was not part of the appeal heard by the Court, because the AEPD rejected it on the grounds that the newspaper had “lawfully published it.” Google, on the other hand, was still on the hook and appealed the decision ordering it to remove links to the newspaper article all the way to the EU’s highest court.

The case caught many onlookers off-guard, because it looks nothing like the June 2013 Opinion of the Advocate General, an advisory opinion of the kind the Court generally relies upon. The decision is otherwise shocking because of the position it puts Google in.

The CJEU labeled Google a “data controller,” and the company now carries the huge burden of addressing any number of user takedown requests. Up to this point, Google has directed unhappy users to a help page that tells them to contact the site operator to get their problematic content removed and explains that Google may remove links under rare circumstances but usually requires a court or executive order. When the company receives takedown requests from a governmental body, it simply verifies their legitimacy and complies. This is true for all legal domains except copyright. When Google is notified of copyright-infringing content, it automatically removes the content to avoid secondary liability. Compliance costs have been kept relatively low. Now actual humans at Google will have to consider each user request for removal to determine whether it is a valid right to be forgotten claim, with the only guidance from the Court being that it must take into account amorphous and jurisdiction-specific values like the “public interest.”
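To make the shift concrete, here is a minimal, purely hypothetical sketch of the triage workflow described above – copyright notices removed automatically, verified legal orders honored, and right-to-be-forgotten requests routed to a human to weigh the “public interest.” The category names, fields, and function are my own illustrative assumptions, not Google’s actual system.

```python
from dataclasses import dataclass

# Hypothetical request categories -- illustrative only, not a real taxonomy.
COPYRIGHT, LEGAL_ORDER, RTBF = "copyright", "legal_order", "right_to_be_forgotten"

@dataclass
class TakedownRequest:
    url: str
    kind: str               # one of the categories above
    verified: bool = False  # e.g., a court/executive order checked for legitimacy

def triage(request: TakedownRequest) -> str:
    """Route a removal request the way the post describes the workflow."""
    if request.kind == COPYRIGHT:
        # Copyright notices are removed automatically to avoid secondary liability.
        return "remove automatically"
    if request.kind == LEGAL_ORDER:
        # Governmental/legal requests: verify legitimacy, then comply.
        return "remove" if request.verified else "verify legitimacy first"
    if request.kind == RTBF:
        # Post-ruling: a human must weigh the data subject's rights against
        # amorphous, jurisdiction-specific values like the "public interest".
        return "queue for human review (balance public interest)"
    # Everything else: point the user to the site operator.
    return "direct user to site operator"

print(triage(TakedownRequest("https://example.com/article", RTBF)))
```

The point of the sketch is how little the last branch can be automated: unlike the first two, it has no mechanical rule to apply.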

The problem is no one knows what the right to be forgotten means. There is no body of law giving this right shape or edges. Sure, there is some scattered case law related to past criminal activity from a few EU member states, but the only guidance the CJEU gives Google is that the data subject’s rights override the interest of internet users, as a general rule, and that the balance of interests should be case specific.

The understandable intention of the European Union to redistribute power away from companies and toward users backfired here. Viviane Reding, the EU Commissioner who has long championed the right to be forgotten, celebrated the ruling with a Facebook post: “Companies can no longer hide behind their servers being based in California or anywhere else in the world.” But this decision does not take power away from Google. It gives Google Almighty more power than ever. Google gets to decide what the right to be forgotten means, because its interpretation of the right will be as good as anyone else’s guess. Without any sense of what the right is and is not, Google will have to create its own policy for addressing user takedown requests. The varied beginnings of a right to be forgotten among EU member states are clear in the CJEU’s decision as it waded through the approaches taken by countries other than Spain. Google will come up with rules to respond and will try to comply with requests from various countries by piecing together what little each member state has said on the issue. Either way, Google’s guesses on removal requests will then be tested in courts across the EU over the course of many years, if and when Google decides to fight for decisions it never wanted to make in the first place.

As the ruling expands beyond Google, we will see other data controllers, from search engines to social networks, simply removing content upon request rather than bothering to defend their decisions in court.

The Data Protection Regulation, set to replace the Directive, is now the last-ditch effort for both advocates and opponents of the right to be forgotten, which suffered a name change in recent edits to the proposed Regulation and for no obvious reason was retitled the right to erasure. For advocates, the right offers an opportunity for a networked world that promotes more expression and freedom than one where information lasts indefinitely. The right handed down by the Court, however, is not the nuanced and delicate touch required to balance the many interests at play when limiting access to publicly available information. For opponents of the right, significant lobbying will be required to rework, limit, and define the right to erasure exceptions in the Regulation, which allow a data controller to retain information for reasons related to expression, historical and statistical purposes, and public health and safety. Again, no idea what any of that means – can’t anything be kept for historical purposes?

So congratulations Google! While your robe and gavel will be expensive, you now have the (unwelcome) honor of shaping Internet content (even more than you already did).


For my long academic warning about this (although I predicted the problem arising from the Regulation, not the Directive), check out the draft version of my article presented at the Telecommunications Policy Research Conference and being published in Telecommunications Policy.

More of my thoughts can be found here:

NPR All Things Considered

CBC Spark


Washington Post

Right to Remove for Cali Kids

California bill SB 568 was signed by Gov. Brown on Monday, September 23, giving minors (under 18) the right to remove information they post online. There are some important caveats to the law and differences from the COPPA amendments in the proposed Do Not Track Kids bill that failed at the federal level.

Photo by Kristin Nador

First, the California bill only applies:

  • to sites and services directed at minors or those with actual knowledge that a minor is using the site or service;
  • to minors that have registered with a site (unless the operator prefers to extend the right to non-registered users);
  • to non-anonymized posts that individually identify the minor.

So registered users under the age of 18 may request the removal of content or information posted on the site or service that they themselves have posted in a way that identifies them.

The right does not extend to:

  • content posted by another user;
  • posts that have been copied or reposted by a third party;
  • anonymous posts;
  • content removed from visibility but still stored with the site or service.

The bill does not require an “eraser button,” meaning this is not a technology-forcing bill. Rather, it grants the substantive right to remove content that has been disclosed (arguably) to the public and the associated procedural requirements to effectuate that right. Procedurally it is similar to laws that ensure information controllers provide means to correct information in certain settings (included in most policies based on the Fair Information Practice Principles). The bill requires that sites and services provide notice of the right, clear instructions for exercising it, and an explanation that exercising the right does not ensure complete or comprehensive removal of the information.

The substantive right is novel. Only under a few circumstances does the law allow truthful information to be retracted from the public domain once it is released (e.g., copyright). The law only grants this right to minors in California but intends to hold any site that is accessible to those in California responsible for any violations.

A few responses to some of the reactions I’ve heard about the law. The first suggests that users can already delete things they post online. The most popular sites like Google, Facebook, and Twitter already offer this feature to all their users, but many do not – e.g., most forums and comment sections. Content does the most damage once it’s been copied and distributed (and usually the original source is one of the popular sites), and to copies the law explicitly does not apply. The second is that the law is not enforceable. Beyond authentication problems (pseudonyms or usernames may identify an individual without matching the name on their legal documents, making it hard to verify the user’s identity or age), sites will comply with the law the same way they comply with various state and international laws. They will include a final section addressing the law in the TOS (possibly saying that if you are under 18 and in California, you are not allowed on the site) and try to determine the validity of deletion requests as they come in. User participation, which is a tenet of most FIPPs-based policies around the world, is just a pain for data controllers. Lastly, to reiterate, this is not a technology-forcing law. A site can require that a copy of a birth certificate, username, and IP address be mailed in before it removes these posts – there is no eraser button. This is an important departure from the federal Do Not Track Kids bill.
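For illustration only, here is a minimal sketch of how a site might screen incoming deletion requests against the scope described in the bullet lists above (registered user, under 18, self-posted, identifying, not a third-party copy). Every field name and the screening function are hypothetical assumptions of mine; the statute prescribes neither the data model nor the verification method.

```python
from dataclasses import dataclass

@dataclass
class RemovalRequest:
    requester_age: int               # hypothetical: age verified however the site chooses
    requester_is_registered: bool
    requester_posted_it: bool        # content posted by the requester themselves
    post_is_anonymous: bool          # anonymized so the minor is not individually identified
    post_is_third_party_copy: bool   # copied or reposted by someone else

def within_scope(req: RemovalRequest) -> bool:
    """Rough screen of whether a request falls inside the right described in the post.

    Purely illustrative; the law leaves verification and removal mechanics to the operator.
    """
    if not (req.requester_is_registered and req.requester_age < 18):
        return False   # the right is limited to registered minors
    if not req.requester_posted_it:
        return False   # content posted by another user is excluded
    if req.post_is_anonymous or req.post_is_third_party_copy:
        return False   # anonymous posts and third-party copies are excluded
    return True        # otherwise the site must remove (or hide) the original post

# Example: a 16-year-old registered user asking to remove their own identifying post.
print(within_scope(RemovalRequest(16, True, True, False, False)))  # True
```

Note how much of the work (verifying age and identity from a pseudonymous account) happens before this check ever runs, which is exactly the enforceability worry raised above.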

I’m not a fan of takedown systems that do not include some judicial process to determine the validity of claims, because of their potential for error and abuse (I’ll be discussing this at TPRC this weekend). It’s much easier for data controllers to simply assess the validity of a court order. For minors, however, I’m not opposed to such a system, and COPPA requirements have already made sites and services aware of and prepared for the added compliance costs.

An interesting legal question that is relevant to all right to be forgotten laws is whether truthful information can be pulled from the public domain based on reputation/dignity/privacy justifications without violating the First Amendment. This may be possible for children and not for adults, but challenges to these types of laws are a route to the answer.

A Bit of Clarity on the Right to be Forgotten from EU

This is a delayed post, but better late than never. At the end of June, an Advocate General of the Court of Justice of the European Union filed an opinion regarding the Spanish Data Protection Agency’s (AEPD) decision from back in July 2010 to uphold a complaint filed by one of its citizens against Google for not withdrawing data from its search index. It all started in 1998, when a newspaper reported (in print) information about an auction related to a social security debt; the announcement gained an electronic presence years later and became retrievable through Google. The data subject of the announcement contacted the publisher in 2009, but the newspaper refused to erase the content from its site. He then requested that Google Spain see to it that the link was not included in search results for the data subject’s name, and the request was forwarded to the main office in Mountain View. The identified individual also filed a complaint with the AEPD against both the search engine and the publisher.

The AEPD found the publication of the data legally justified but supported the complaint as it related to Google – who appealed to the Audiencia Nacional seeking to overturn the agency’s decision. The National High Court of Spain referred the question to the EU Court of Justice.

This long-awaited opinion is somewhat anti-climactic. First of all, the Advocate General’s opinion is not binding; it serves as more of an advisory document. Second, the opinion sheds little light on the right to be forgotten that we can expect to come from the proposed Data Protection Regulation.

Essentially the opinion answers a few questions:

1.) Google is accountable for processing data in Spain, regardless of the fact that no processing of personal data related to searches occurs in Spain. “[I]t must be considered that an establishment processes personal data if it is linked to a service involved in selling targeted advertising to inhabitants of a Member State, even if the technical data processing operations are situated in other Member States or third countries.”

2.) Even though Google is processing personal data, it is not a data controller of the personal data that appears on a web page hosted by a third party. It has no way of removing data from a web page, and so it cannot be held to the obligations of a data controller of that personal data. Google has to remove information from its index only when it has not complied with exclusion codes (i.e., robots.txt) or has not updated its cache. This is the most interesting part. The AG explains that search engine service providers are not responsible, on the basis of the DATA PROTECTION DIRECTIVE, for personal data appearing on outside web pages they process. There may be secondary liability for search engines under NATIONAL LAW that may lead to duties amounting to the blocking of access to third-party sites with illegal content like IP-infringing material or libelous or criminal information – but not data protection.
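For readers less familiar with the exclusion codes the AG refers to, here is a small sketch of the kind of check a crawler performs against a site’s robots.txt before indexing a page, using Python’s standard-library parser. The domain, page path, and user-agent string are placeholder assumptions, not real policy.

```python
# Sketch: checking a site's robots.txt exclusion rules before crawling/indexing a page.
from urllib.robotparser import RobotFileParser

robots = RobotFileParser()
robots.set_url("https://example.com/robots.txt")  # hypothetical site
robots.read()  # fetch and parse the exclusion directives

page = "https://example.com/1998/auction-notice.html"  # hypothetical page path
if robots.can_fetch("ExampleBot", page):
    print("Exclusion codes permit crawling and indexing this page.")
else:
    print("The site has opted this page out of crawling via robots.txt.")
```

This is the mechanism the AG points to: a publisher, not the search engine, decides what gets indexed, which is part of why the AG declines to treat the search engine as a controller of third-party page content.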

3.) There is no right to be forgotten under the current DP Directive. This is not surprising, even though the European Commission claimed it would be “strengthening” the right to be forgotten in the new DP Regulation, suggesting a weak right to be forgotten already existed. However, the AG did explain that the right to object in the ’95 DP Directive requires more than just the “subjective preference” of the data subject to meet the “compelling legitimate grounds” hurdle. This raises the question of whether the subjective preference of a data subject will be enough to have information removed if no compelling legitimate grounds are required in the future.

Although the AG is not extending liability onto the search intermediary in this case (and recommends this as a general rule), it is difficult to know whether this (rational) interpretation will carry over to the DP Regulation. The AG explains that search engines had not been foreseen when the ’95 DP Directive was drafted. That is not true for the DP Regulation, which does establish a right to be forgotten as well as address data transfers to third parties. Because this is the first instance in which the DP Directive has been interpreted in relation to a search engine, the AG’s opinion may not be followed by the Court.