When Companies Study Their Customers: The Changing Face of Science, Research, and Ethics

Heading to Boulder (YES!!!) for the annual all-day Silicon Flatirons privacy conference on Thursday, Dec 4th.

Four panels inspired by recent controversial research endeavors, findings, and disclosures by private companies like Facebook and Target will be full of dynamic discussants and presenters. This post is my prep – the best collection was put together by James Grimmelmann here.

The first panel will debate whether we should be alarmed by the type of human social science currently taking place on online services. Moderated by Paul Ohm, this panel has the two main academic voices from the debate. Zeynep Tufekci has posted one of the most read (by academics at least) responses to the Facebook study, arguing that information asymmetry, ubiquity, opacity, and lack of choice create a dangerous research environment, and explaining how to frame the issue properly (I also think that all the law profs in attendance would find her post on research methods interesting and valuable). Tal Yarkoni’s response is presented here (and the disagreement is expanded upon in Zeynep’s comments). He argues that nudges are and have been part of the world for a long time (subliminal advertising was investigated by the FCC in the 1970s after a widespread freak-out occurred, and it’s apparently banned on television in the UK) and that these types of projects are not inherently good or bad – nudges can help us give to charities, vote, and make healthier choices. I’m not sure what Matthew Boeckman, Vice President of DevOps at Craftsy, is going to say, but Craftsy looks awesome. Kashmir Hill, Senior Online Editor at Forbes (and my favorite privacy writer), will hopefully discuss her thoughts on packaging this research under “improving services.” Another Forbes author wrote that you should leave FB because of the emotional manipulation research. Rob Sherman, Deputy Chief Privacy Officer at Facebook, will surely be defending these pursuits at FB and quelling fears by describing internal safeguards. A previous explanation from researchers here ends with: “While we’ve always considered what research we do carefully, we (not just me, several other researchers at Facebook) have been working on improving our internal review practices. The experiment in question was run in early 2012, and we have come a long way since then. Those review practices will also incorporate what we’ve learned from the reaction to this paper.” Looking forward to getting the updates.

The second panel (moderated by YT) will focus on the changing nature of science and research, particularly the public versus private divide. Ed Felten will be presenting (I think) on the impact of the gap between industry and research practices, namely that it will drive a wedge that prevents company researchers from being able to publish their work and that it will lead academic researchers to evade the cumbersome IRB process by collaborating with companies. Chris Calabrese, newly minted Senior Policy Director at the CDT, will be responding. It looks like the CDT’s take on the issue is that when user testing rises to the level of changing a user’s experience or engaging in analysis of private information that users don’t reasonably expect a service provider to examine, users should be actively notified. Long ago (2008), Aaron Burstein, now Attorney Advisor for Commissioner Brill at the FTC, argued that ECPA needed a research exception to allow cybersecurity research and that corporate research should be vetted internally. One of my favorite humans, Jill Dupre, Associate Director of ATLAS in CU Engineering, will offer insight into innovative technology studies research collaborations and models.

The third panel will be moderated by Harry Surden and consider whether “informed consent” is a viable concept in the big data age. James Grimmelmann told the Washington Post, “What Facebook and OkCupid did wasn’t just unethical. It was illegal.” I hope that’s what he’ll be presenting on. In Wired, Michelle Meyer criticized the way the findings were presented, explaining that they overstate what the researchers could possibly have learned from the study: “The fact that someone exposed to positive words very slightly increased the amount of positive words that she then used in her Facebook posts does not necessarily mean that this change in her News Feed content caused any change in her mood.” Michelle will also be presenting – bringing in unique expertise as a bioethicist. Personally, I want to know whether not being a FB user is even an effective form of denying consent – just because I’ve never had a FB account doesn’t mean I haven’t been affected by the practices of the company. Discussants include Claire Dunne, the IRB Program Director at CU, and Janice Tsai, Global Privacy Manager at Microsoft Research.

The fourth panel will be moderated by my favorite Colorado attorney, Nicole Day, and look at the institutions for ethical review, with a particular focus on the history and current status of IRBs. If consent is out, then IRBs all over the place make sense, provided they can do any better than individuals at assessing possible harms and outcomes. Omer Tene will be presenting (I think) on corporate IRB-like processes that are already in place and will spread, as well as what those processes should look like. Attorneys like panelist Jason Haislmaier (who is also a CU Law adjunct) are likely the people companies will and do consult on these issues. Ryan Calo, who was writing about this exact subject at least a year ago, provided the NYTimes the following:

“This is a company whose lifeblood is consumer data. So mistrust by the public, were it to reach too critical a point, would pose an existential threat to the company,” said Ryan Calo, an assistant professor at the University of Washington School of Law, who had urged Facebook to create a review panel for research. “Facebook needs to reassure its users they can trust them.”

Also contributing to this last panel will be FTC Commissioner Julie Brill, who is also doing a chat with Paul between the third and fourth panels.

Can’t wait!

EU Right to be Forgotten Case: The Honorable Google Handed Both Burden and Boon

No doubt Google does not feel it received a boon after the Court of Justice of the European Union (CJEU) established a “right to be forgotten” on the Internet.  This ruling gives individuals the right to request the removal of reputation-harming links from Google’s search engine.

What salacious, reputation-harming information was at the heart of this quest to cleanse Google’s search? Alas, it was a rather mundane notice of Mario Costeja González’s real estate auction to pay off a social security debt, published in a newspaper in 1998 – pretty boring stuff by internet standards. The law at issue here was the 1995 EU Data Protection Directive. The Directive orders EU member states to grant their citizens the right to object to the further processing of personal information by a data controller and the right to erasure when data is inaccurate or incomplete. González first took his complaint to the Spanish data protection agency (AEPD), claiming that both the newspaper and Google violated his data rights by continuing to process the information after he requested its removal. The complaint against the newspaper was not part of the appeal heard by the Court, because the AEPD rejected it on grounds that the newspaper had “lawfully published it.” Google, on the other hand, was still on the hook and appealed the decision ordering it to remove links to the newspaper article all the way to the EU’s highest court.

The case caught many onlookers off-guard because it looks nothing like the June 2013 Opinion of the Advocate General, an advisory opinion of the kind generally relied upon by the Court. The decision is otherwise shocking because of the position it puts Google in.

The CJEU labeled Google a “data controller,” and the company now carries the huge burden of addressing any number of user takedown requests. Up to this point, Google has directed unhappy users to a help page that tells them to contact the site operator to get their problematic content removed and explains that Google may remove links under rare circumstances but usually requires a court or executive order. When the company receives takedown requests from a governmental body, it simply verifies the legitimacy and complies. This is true for all legal domains except copyright. When Google is notified of copyright infringing content, it automatically removes the content to avoid secondary liability. Compliance costs have been kept relatively low. Now actual humans at Google will have to consider each user request for removal to determine whether it is a valid right to be forgotten claim, with the only guidance from the Court being that it must take into account amorphous and jurisdiction-specific values like the “public interest.”

The problem is no one knows what the right to be forgotten means. There is no body of law giving this right shape or edges. Sure, there is some scattered case law related to past criminal activity from a few EU member states, but the only guidance the CJEU gives Google is that the data subject’s rights override the interest of internet users, as a general rule, and that the balance of interests should be case specific.

The understandable intention of the European Union to redistribute power away from companies and toward users backfired here. Viviane Reding, the EU Commissioner who has long championed the right to be forgotten, celebrated the ruling with a Facebook post: “Companies can no longer hide behind their servers being based in California or anywhere else in the world.” But this decision does not take power away from Google. It gives Google Almighty more power than ever. Google gets to decide what the right to be forgotten means, because its interpretation of the right will be as good as anyone else’s guess. Without any sense of what the right is and is not, Google will have to create its own policy for addressing user takedown requests. The varied beginnings of a right to be forgotten amongst EU member states are clear in the CJEU’s decision as it waded through the approaches taken by countries other than Spain. Google will come up with rules and try to comply with requests from various countries by piecing together what little each member state has said on the issue. Either way, Google’s guesses on removal requests will then be tested in courts across the EU over the course of many years, if and when Google decides to fight for decisions it never wanted to make in the first place.

Expanding the ruling beyond Google, we will see other data controllers, from search engines to social networks, just removing content upon request, not wanting to bother with inevitably having to defend their decisions in court.

The Data Protection Regulation, set to replace the Directive, is now the last-ditch effort for both advocates and opponents of the right to be forgotten, which suffered a name change in recent edits to the proposed Regulation and, for no obvious reason, was retitled the right to erasure. For advocates, the right offers an opportunity for a networked world that promotes more expression and freedom than one where information lasts indefinitely. The right handed down by the Court, however, lacks the nuanced and delicate touch required to balance the many interests at play when limiting access to publicly available information. For opponents of the right, significant lobbying will be required to rework, limit, and define the right to erasure exceptions in the Regulation, which allow a data controller to retain information for reasons related to expression, historical and statistical purposes, and public health and safety. Again, no idea what any of that means – can’t anything be kept for historical purposes?

So congratulations Google! While your robe and gavel will be expensive, you now have the (unwelcome) honor of shaping Internet content (even more than you already did).


For my long academic warning about this (although I predicted the problem arising from the Regulation, not the Directive), check out the draft version of my article presented at the Telecommunications Policy Research Conference and being published in Telecommunications Policy.

More of my thoughts can be found here:

NPR All Things Considered

CBC Spark

WSJ

Washington Post

Right to Remove for Cali Kids

California bill SB568 was signed by Gov. Brown on Monday, September 23, giving minors (under 18) the right to remove information they post online. There are some important caveats to the law and differences from the COPPA amendments in the proposed federal Do Not Track Kids bill, which failed.

Photo by Kristin Nador

First, the California bill only applies:

  • to sites and services directed at minors or those with actual knowledge that a minor is using the site or service;
  • to minors that have registered with a site (unless the operator prefers to extend the right to non-registered users);
  • to non-anonymized posts that individually identify the minor.

So registered users under the age of 18 may request the removal of content or information posted on the site or service that they themselves have posted in a way that identifies them.

The right does not extend to:

  • content posted by another user;
  • posts that have been copied or reposted by a third party;
  • anonymous posts;
  • content removed from visibility but still stored with the site or service.

The bill does not require an “eraser button,” meaning this is not a technology-forcing bill. Rather, it grants the substantive right to remove content that has been disclosed (arguably) to the public and the associated procedural requirements to effectuate that right. Procedurally, it is similar to laws that ensure information controllers provide means to correct information in certain settings (included in most policies based on the Fair Information Practice Principles). The bill requires that sites and services provide notice of the right, give clear instructions for exercising the right, and explain that exercising the right does not ensure complete or comprehensive removal of the information.

The substantive right is novel. Only under a few circumstances does the law allow truthful information to be retracted from the public domain once it is released (e.g., copyright). The law only grants this right to minors in California but intends to hold any site that is accessible to those in California responsible for any violations.

A few responses to some of the reactions I’ve heard about the law. The first reaction suggests that users can already delete things they post online. The most popular sites, like Google, Facebook, and Twitter, already offer this feature to all their users, but many sites do not – e.g., most forums and comment sections. Content does the most damage once it has been copied and distributed (and the original source is usually one of the popular sites), and the law explicitly does not apply to copied or reposted content. The second reaction is that the law is not enforceable. Beyond authentication problems (pseudonyms or usernames may identify an individual without being the name on their legal documents, making it hard to verify the user’s identity or age), sites will comply with the law the same way they comply with various state and international laws. They will include a final section addressing the law in the TOS (possibly saying that if you are under 18 and in California, you are not allowed on the site) and try to determine the validity of deletion requests as they come in. User participation, which is a tenet in most FIPPs-based policies around the world, is simply a pain for data controllers. Lastly, to reiterate, this is not a technology-forcing law. A site can require that a copy of a birth certificate, a username, and an IP address be mailed in before it removes these posts – there is no eraser button. This is an important departure from the federal Do Not Track Kids bill.

I’m not a fan of takedown systems that do not include some judicial process to determine the validity of claims, because of their potential for error and abuse (I’ll be discussing this at TPRC this weekend). It’s much easier for data controllers to simply assess the validity of a court order. For minors, however, I’m not opposed to such a system, and COPPA requirements have already made sites and services aware of and prepared for the added compliance costs.

An interesting legal question relevant to all right to be forgotten laws is whether truthful information can be pulled from the public domain based on reputation, dignity, or privacy justifications without violating the First Amendment. This may be possible for children and not for adults, but challenges to these types of laws are a route to the answer.

A Bit of Clarity on the Right to be Forgotten from EU

This is a delayed post, but better late than never. At the end of June, an Advocate General of the Court of Justice of the European Union filed an opinion regarding the Spanish Data Protection Agency’s (AEPD) decision from back in July 2010 to uphold a complaint filed by one of its citizens against Google for not withdrawing data from its search index. It all started in 1998, when a newspaper reported (in print) information about an auction related to a social security debt; years later, the announcement gained an electronic presence, retrievable through Google. The data subject of the announcement contacted the publisher in 2009, but the newspaper refused to erase the content from its site. He then requested that Google Spain see to it that the link was not included in search results for his name, and the request was forwarded to the main office in Mountain View. The identified individual also filed a complaint with the AEPD against both the search engine and the publisher.

The AEPD found the publication of the data legally justified but supported the complaint as it related to Google, which appealed to the Audiencia Nacional seeking to overturn the agency’s decision. The National High Court of Spain referred the question to the EU Court of Justice.

This long-awaited opinion is somewhat anti-climactic. First of all, the Advocate General’s opinion is not binding; it serves as more of an advisory document. Second, the opinion sheds little light on the right to be forgotten that we can expect to come from the proposed Data Protection Regulation.

Essentially the opinion answers a few questions:

1.) Google is accountable for processing data in Spain, regardless of the fact that no processing of personal data related to searches occurs in Spain. “[I]t must be considered that an establishment processes personal data if it is linked to a service involved in selling targeted advertising to inhabitants of a Member State, even if the technical data processing operations are situated in other Member States or third countries.”

2.) Even though Google is processing personal data, it is not a data controller of the personal data that appears on a web page hosted by a third party. It has no way of removing data from that web page, and so it cannot be held to the obligations of a data controller of that personal data. Google has to remove information from its index only when it has not complied with exclusion codes (i.e., robots.txt) or has failed to update cached memory. This is the most interesting point. The AG explains that search engine service providers are not responsible, on the basis of the DATA PROTECTION DIRECTIVE, for personal data appearing on the outside web pages they process. There may be secondary liability for search engines under NATIONAL LAW that may lead to duties amounting to blocking access to third-party sites with illegal content like IP-infringing material or libelous or criminal information – but not data protection.
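To make those “exclusion codes” concrete: robots.txt is a plain-text file at a site’s root telling crawlers which paths not to fetch or index. Here is a minimal sketch using Python’s standard-library parser – the rules and paths are hypothetical, purely for illustration:

    from urllib import robotparser

    # Hypothetical robots.txt rules a publisher might serve to keep
    # an old announcement out of search indexes.
    rules = [
        "User-agent: *",
        "Disallow: /announcements/1998-auction-notice",
    ]

    rp = robotparser.RobotFileParser()
    rp.parse(rules)

    # A compliant crawler checks the rules before fetching/indexing a page.
    print(rp.can_fetch("Googlebot", "/announcements/1998-auction-notice"))  # False
    print(rp.can_fetch("Googlebot", "/announcements/other-page"))           # True

On the AG’s reading, a search engine that honors directives like these (and keeps its cache current) has done what the Directive asks of it with respect to third-party pages.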

3.) There is no right to be forgotten under the current DP Directive. This is not surprising, even though the European Commission claimed it would be “strengthening” the right to be forgotten in the new DP Regulation, suggesting a weak right to be forgotten already existed. However, the AG did explain that the right to object in the ’95 DP Directive requires more than just the “subjective preference” of the data subject to meet the “compelling legitimate grounds” hurdle. This raises the question of whether the subjective preference of a data subject will be enough to have information removed in the future, when no showing of compelling legitimate grounds is required.

Although the AG does not extend liability onto the search intermediary in this case (and recommends this as a general rule), it is difficult to know whether this (rational) interpretation will extend into the DP Regulation. The AG explains that search engines had not been foreseen when the ’95 DP Directive was drafted. That is not true for the DP Regulation, which does establish a right to be forgotten as well as address data transfers to third parties. Because this is the first instance in which the DP Directive has been interpreted in relation to a search engine, the AG’s opinion may not be followed by the Court.

The Impact of (Good) Online First Impressions

Photo by Leland Francisco (CC-BY-2.0)

I, like most privacy researchers, spend most of my time considering how negative information online is harmful to the subject of that information. Reading Glamour at the gym last night, I came across an article about how “pre-dating” affects relationships – it got me thinking more about how too much information impacts the searcher and, for the first time, about how good, truthful information impacts the searcher. Pre-dating is Google sleuthing through LinkedIn, Facebook, Twitter, and so on. According to a Match.com survey, 48% of women and 38% of men pre-date before a first date.

The Glamour article, “Stop Googling Your Dates!,” interviews a number of relationship experts (or at least people who work in relationship fields) who all seem to agree pre-dating is bad for relationships. The most interesting quote is from biological anthropologist Helen Fisher: “Every piece of positive information you learn online about someone will probably drive you toward having sex sooner.” Sex sooner because you think you know him or her!?!

Of course, this isn’t a scholarly peer-reviewed article, but there is research out there. The research on first impressions seems to suggest it doesn’t take much information to form one and that first impressions are difficult to change. Eli J. Finkel of Northwestern University is quoted in the article (“You’re trying to suss out: Will this person and I have a connection? Actually, there is no evidence that we can assess that online.”) and has written some page-turners like this one.

I’ve written about distortions and inaccurate impressions resulting from negative online information, but perhaps we should start having more conversations about larger social implications of positive information as well.

Revenge Porn Zombie Site

I’ve been using the content on IsAnyoneUp.com as an example of web ephemerality since the site went down in April. Digital decay and weak content persistence are important aspects of a digital right to be forgotten. Kayla Laws found herself on the notorious revenge porn site started by the now equally notorious Hunter Moore. Today, when you search Kayla Laws on Google, the results reference her acting activities, but not her content on IAU – it is gone.


The only reference to her content on the revenge porn site is from articles that covered her coming out as a victim of the site on Nightline. The site was archived by the Internet Archive, but it’s not easily searchable by name. Kayla’s information on IsAnyoneUp.com can be seen as an example of how embarrassing digital information can be forgotten without legal intervention.

But now the site is back – or coming back – with a vengeance.

Moore explains,

“We had too many hackers too much overhead and way too many legal problems. This time I am doing it right. We are going to start off by launching with all the old IAU content and all new content. The submission page has only been up for five full days and we’ve done over 7,000 submission within that time. I am creating something that will question if you will ever want to have kids.”

IAU currently redirects users to bullyville.com, but apparently handing over the site to James McGibney was not actually a change of heart. Scorned exes were previously able to fill in submission fields that included social media profiles, but the new submission form will include an address entry.

“We’re gonna introduce the mapping stuff so you can stalk people,” Moore told Betabeat, “I know–it’s scary as shit.”

I’m not quite sure what to do with this come-back. In order for digital content to persist, it has to be maintained. The data controller must have the interest and resources to maintain access to it. Generally, this aligns with public interest and old content loses its appeal quickly. Accounting for vengeful data controllers in a right to be forgotten is difficult, because they may maintain content out of spite, even as it ages and gets fewer and fewer hits. Moore is disrupting more than the social niceties of the internet; he’s disrupting my research!

Bullying and the Right to be Forgotten: A Right to End Victimization

Amanda Todd, a 15-year-old, was found dead Wednesday. She took her own life – another victim of cyberbullying. Tragically, her story is almost stereotypical at this point. CNN covered the story, her story, which she tells through a haunting YouTube video. As she recounts the incidents that led her to change schools, suffer from deep depression and anxiety, be physically as well as emotionally beaten by herself and her peers, and eventually take her own life, she says that she can never get that photo back.

“I can never get that photo back”

Social isolation, bullying, and depression are difficult to endure, but the added feeling that one must endure them forever – the hopelessness – is simply too much for many. The It Gets Better Project was started to address this issue for the LGBT community. It is “a response to a number of students taking their own lives after being bullied in school.” If Amanda had felt she could move on from that photo – if she could “get it back” – perhaps she would have felt that it would get better. How can we even suggest to her that it will get better when the life of her information online seems eternal?

A right to be forgotten may provide hope for the victims of cyberbullying. The notion that you do not have to be that victim forever – that the Internet will loosen its shackles eventually – offers hope for self-determination. In this case, however, the right to be forgotten is just one of many legal tools that may have helped Amanda regain control of that regretted image. Bullying content can often be removed for a number of reasons, including violations of terms of service, child pornography laws, copyright ownership, and cyberbullying and cyberstalking laws (Canadian laws). Amanda’s picture falls into at least the first three categories. Criminalizing bullying is difficult, and the language of the statute must usually require that the communication be threatening or defamatory. Although she had legal options, none of these laws is designed to address Amanda’s fear, which was that she would not be able to move beyond a horrific moment in her adolescence because it remained indefinitely online. Perhaps the right to be forgotten should be re-cast not as a right to delete (European scholars have argued that the right to be forgotten is really just about deleting data trails and user profiles, not online content accessible through the Internet), but as a right to let it get better. The Do Not Track Kids bill, more than any other legislative effort, seeks to create such a right. Although imperfect, the bill is a good place to start and could (but does not currently) include specific language for removing bullying content.

There is value in that image; it provides historical evidence of sexting and cyber mob mentality, among other topics. But what value do we derive from Amanda being attached to that content? Any value is surely outweighed by the harm she suffered and the fear and anxiety felt throughout society because of these types of stories. This child needed legal, social, and technical help but did not find it. The time to do better has long passed.

Why I Can’t Change My Name To 01101101 01100101 01100111 00100000 01101100 01100101 01110100 01100001 00100000 01100001 01101101 01100010 01110010 01101111 01110011 01100101

Last week I gave a talk on the right to be forgotten at Indiana University’s Center for Applied Cybersecurity Research. Afterward, IU Law Dean Hannah Buxbaum mentioned that changing one’s name in the US is much easier than in other places. Names are an important part of the right to be forgotten – names are what attach us to so much personal information online. You can change your name to just about anything here in the States. Just ask Tyrannosaurus Rex, or T-Rex, as the 23-year-old previously known as Tyler Gold likes to be called (because it’s “cooler” – obviously). Limitations are pretty slight, but a name change may be denied if it involves:

  • Fraudulent intent, such as avoiding bankruptcy by pretending to be someone else or to get away with a crime
  • Violating a trademark or interfering with the rights of others; so you can’t change your name to Mitt Romney unless there is a convincing reason that isn’t related to the famous person
  • Using numbers or symbols (except Roman numerals), because they are intentionally confusing – which is why there is no binary Meg Leta Ambrose (see the sketch after this list)
  • Using obscene words, fighting words, or racial slurs
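
For the curious, the title of this post is just my name spelled out in 8-bit ASCII. A minimal Python sketch of the encoding and decoding:

    # Encode a name as 8-bit ASCII binary, as in this post's title.
    name = "meg leta ambrose"
    encoded = " ".join(format(ord(c), "08b") for c in name)
    print(encoded)   # 01101101 01100101 01100111 ... 01100101

    # Decode it back to text.
    decoded = "".join(chr(int(bits, 2)) for bits in encoded.split())
    print(decoded)   # meg leta ambrose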

You also have to give notice of your name change by publishing it in your local newspaper and take the affidavit of publication back to be filed with the clerk of courts. With most newspapers publishing online, the connection between the past name and the new name is easily accessible, but the requirement certainly adds some pretty significant friction to gathering information on an individual with a name change through an online search.

In other countries, names are far more regulated. When naming a child in Germany, you must be able to tell the gender of the child by the first name (Matti was rejected for a newborn boy). And the name chosen cannot negatively affect the well-being of the child (Mayo Head might agree with this type of limitation). You cannot use last names, the names of objects, or the names of products as first names. You pay a fee for the name to be submitted to the office of vital statistics. The office refers to a book of first names when assessing the proposed name, and an appeal can be made if the name is rejected; but if the appeal is lost, a new name will be required (and a new fee paid).

Denmark has a specific law that would probably not allow Moxie CrimeFighter. The Law of Personal Names is meant to save children from their silly parents. Names are subject to approval by the Ministry of Ecclesiastical Affairs and the Ministry of Family and Consumer Affairs. Parents can either choose from the pre-approved names in a government list of 3,000 boys’ and 4,000 girls’ names, or they can apply to have a name approved. 15-20% of the roughly 1,100 reviewed names are rejected. In short, “Danish law stipulates that boys and girls must have different names, first names cannot also be last names, and bizarre names are O.K. as long as they are ‘common.'”

Sweden’s Naming Law, enacted in 1982 to prevent non-nobles from giving their children noble names (regulating noble names in 1982 seems late, no?), provides: “First names shall not be approved if they can cause offense or can be supposed to cause discomfort for the one using it, or names which for some obvious reason are not suitable as a first name.” The law has been updated to allow men to change their last names to their partners’, but at least two parents were quite unhappy with the registration process, naming their child Brfxxccxxmnpcccclllmmnprxvclmnckssqlbb11116 (pronounced Albin and rejected by the Swedish Tax Agency and the court upon appeal). Also of note: Metallica was determined “inappropriate” but later overturned, Google was accepted as a middle name, and Allah was refused as objectionable.

Today, names serve as an access point to personal information. We find information about people by entering their names into a search engine. If the information does not include the individual’s name (or is connected in another way, e.g., through metadata), it will not be retrieved. It seems that European law is more inclined to manipulate the content or the route to accessing the information (see the Spain vs. Google right to be forgotten dispute), as opposed to altering the initial access point. The US, on the other hand, is happy to allow the creation of a new identity (as long as it does not fall into one of the above-mentioned exceptions) through a name change but less inclined to support efforts to edit content or alter the other access points (like messing with intermediary indexes or anonymization). With over 90% of two-year-olds already accumulating an online presence and new parents naming their children so they are easily retrievable on Google, the clerk’s office may see a spike in paperwork.

SXSW Right to be Forgotten Core Conversation: A Reflection

Yesterday, Jill Van Matre and I presented at SXSW Interactive under the title “The Right to be Forgotten: Forgiveness or Censorship?” The format is what SXSW calls a core conversation – needless to say, we had no idea what was expected of us. With about 40 or 50 people in the room, we managed to have a rich conversation about the subject with most (perhaps all) of the stakeholders represented: lawyers, developers, bloggers, journalists, historians (yes!) – all of whom are users with a past. We (via the wonderful audience) covered every facet of the debate, including the costs of data management regulations, identity and reputation, public vs. private figures, reinvention and second chances, free speech, historical records, persistence of online content, cultural variation, and, of course, scandal. An attorney from Texas described her state as “culturally punitive,” which has inspired me to sketch the outline of an article titled “If Texas Ran the Internet.” Thanks to all who came and participated – I had a wonderful time and hope you did as well.
