The Impact of (Good) Online First Impressions

By Leland Francisco (CC-BY-2.0)

Like most privacy researchers, I spend most of my time considering how negative information online harms the subject of that information. Reading Glamour at the gym last night, I came across an article about how “pre-dating” affects relationships, which got me thinking more about how too much information impacts the searcher and, for the first time, how good, truthful information impacts the searcher. Pre-dating is Google sleuthing through LinkedIn, Facebook, Twitter, and so on before a first date; according to one survey, 48% of women and 38% of men do it.

The Glamour article, “Stop Googling Your Dates!,” interviews a number of relationship experts (or at least people who work in relationship fields) who all seem to agree that pre-dating is bad for relationships. The most interesting quote is from biological anthropologist Helen Fisher: “Every piece of positive information you learn online about someone will probably drive you toward having sex sooner.” Sex sooner because you think you know him or her!?!

Of course, this isn’t a scholarly peer-reviewed article, but there is research out there. The research on first impressions suggests that it doesn’t take much information to form one and that they are difficult to change. Eli J. Finkel of Northwestern University is quoted in the article (“You’re trying to suss out: Will this person and I have a connection? Actually, there is no evidence that we can assess that online.”) and has written some page-turners like this one.

I’ve written about distortions and inaccurate impressions resulting from negative online information, but perhaps we should start having more conversations about larger social implications of positive information as well.

Revenge Porn Zombie Site

I’ve been using the content on Is Anyone Up (IAU) as an example of web ephemerality since the site went down in April. Digital decay and weak content persistence are important aspects of a digital right to be forgotten. Kayla Laws, pictured below, found herself on the notorious revenge porn site started by the now equally notorious Hunter Moore. Today, when you search Kayla Laws on Google, results reference her acting activities, but not her content on IAU – it is gone.


The only reference to her content on the revenge porn site is from articles that covered her coming out as a victim of the site on Nightline. The site was archived by the Internet Archive, but it’s not easily searchable by name. Kayla’s information on IAU can be seen as an example of how embarrassing digital information can be forgotten without legal intervention.

But now the site is back – or coming back – with a vengeance.

Moore explains,

“We had too many hackers too much overhead and way too many legal problems. This time I am doing it right. We are going to start off by launching with all the old IAU content and all new content. The submission page has only been up for five full days and we’ve done over 7,000 submission within that time. I am creating something that will question if you will ever want to have kids.”

IAU currently redirects users to James McGibney’s site, but apparently handing over the site to him was not actually a change of heart. Scorned exes were previously able to include social media profiles in the fields of their submissions, but the new submission form will include an address entry.

“We’re gonna introduce the mapping stuff so you can stalk people,” Moore told Betabeat, “I know–it’s scary as shit.”

I’m not quite sure what to do with this come-back. In order for digital content to persist, it has to be maintained. The data controller must have the interest and resources to maintain access to it. Generally, this aligns with public interest and old content loses its appeal quickly. Accounting for vengeful data controllers in a right to be forgotten is difficult, because they may maintain content out of spite, even as it ages and gets fewer and fewer hits. Moore is disrupting more than the social niceties of the internet; he’s disrupting my research!

Bullying and the Right to be Forgotten: A Right to End Victimization

Amanda Todd, a 15-year-old, was found dead Wednesday. She took her own life – another victim of cyberbullying. Tragically, her story is almost stereotypical at this point. CNN covered her story, which she tells through a haunting YouTube video. As she recounts the incidents that led her to change schools, suffer from deep depression and anxiety, endure physical and emotional abuse from her peers and from herself, and eventually take her own life, she says that she can never get that photo back.

“I can never get that photo back”

Social isolation, bullying, and depression are difficult to endure, but the added feeling that one must endure them forever – the hopelessness – is simply too much for many. The It Gets Better Project was started to address this issue for the LGBT community. It is “a response to a number of students taking their own lives after being bullied in school.” If Amanda had felt she could move on from that photo – if she could “get it back” – perhaps she would have felt that it would get better. How can we even suggest to her that it will get better when the life of her information online seems eternal?

A right to be forgotten may provide hope for the victims of cyberbullying. The notion that you do not have to be that victim forever – that the Internet will loosen its shackles eventually – offers hope for self-determination. In this case, however, the right to be forgotten is just one of many legal tools that may have helped Amanda regain control of that regrettable image. Bullying content can often be removed for a number of reasons, including violations of terms of service, child pornography laws, copyright ownership, and cyber-bullying and cyber-stalking laws (Canadian laws). Amanda’s picture falls into at least the first three categories. Criminalizing bullying is difficult, and the language of such a statute must usually require that the communication be threatening or defamatory. Although she had legal options, none of these laws is designed to address Amanda’s fear: that she would not be able to move beyond a horrific moment in her adolescence because it was indefinitely online. Perhaps the right to be forgotten should be re-cast not as a right to delete (European scholars have argued that the right to be forgotten is really just about deleting data trails and user profiles, not online content accessible through the Internet), but as a right to let it get better. The Do Not Track Kids bill, more than any other legislative effort, seeks to create such a right. Although imperfect, the bill is a good place to start and could (but does not currently) include specific language for removing bullying content.

There is value to that image; it provides historical evidence of sexting and cyber mob mentality, among other topics. But what value do we derive from Amanda being attached to that content? Any value is surely outweighed by the harm she suffered and the fear and anxiety felt throughout society because of these types of stories. This child needed legal, social, and technical help but did not find it. The time to do better has long passed.

Why I Can’t Change My Name To 01101101 01100101 01100111 00100000 01101100 01100101 01110100 01100001 00100000 01100001 01101101 01100010 01110010 01101111 01110011 01100101

Last week I gave a talk on the right to be forgotten at Indiana University’s Center for Applied Cybersecurity Research. Afterward, IU Law Dean Hannah Buxbaum mentioned that changing one’s name in the US is much easier than in other places. Names are an important part of the right to be forgotten – names are what attach us to so much personal information online. You can change your name to just about anything here in the States. Just ask Tyrannosaurus Rex, or T-Rex, as the 23-year-old previously known as Tyler Gold likes to be called (because it’s “cooler” – obviously). Limitations are pretty slight, but a name change may be denied if it involves:

  • Fraudulent intent, such as avoiding bankruptcy by pretending to be someone else or to get away with a crime
  • Violating a trademark or interfering with the rights of others; so you can’t change your name to Mitt Romney unless there is a convincing reason that isn’t related to the famous person
  • Using numbers or symbols (except Roman numerals) because they are intentionally confusing, which is why there can be no binary Meg Leta Ambrose
  • Using obscene words, fighting words, or racial slurs
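
As an aside, the binary in this post’s title is just 8-bit ASCII; decoding it back to plain text takes one line of Python (the helper function here is my own, not anything from the name-change rules):

```python
def decode_binary_name(bits: str) -> str:
    """Turn a space-separated string of 8-bit ASCII codes into text."""
    return "".join(chr(int(byte, 2)) for byte in bits.split())

title = ("01101101 01100101 01100111 00100000 01101100 01100101 01110100 "
         "01100001 00100000 01100001 01101101 01100010 01110010 01101111 "
         "01110011 01100101")
print(decode_binary_name(title))  # → meg leta ambrose
```

The clerk, of course, sees only an intentionally confusing string of numbers.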

You also have to give notice of your name change by publishing it in your local newspaper and then file the affidavit of publication with the clerk of courts. With most newspapers publishing online, the connection between the past name and the new name remains easily accessible, but a name change still adds some pretty significant friction to gathering information on an individual through an online search.

In other countries, names are far more regulated. When naming a child in Germany, you must be able to tell the gender of the child by the first name (Matti was rejected for a newborn boy), and the name chosen cannot negatively affect the well-being of the child (Mayo Head might agree with this type of limitation). You cannot use last names, the names of objects, or the names of products as first names. You pay a fee for the name to be submitted to the office of vital statistics, which refers to a book of first names when assessing the proposed name. An appeal can be made if the name is rejected, but if the appeal is lost, a new name will be required (and a new fee paid).

Denmark has a specific law that would probably not allow Moxie CrimeFighter. The Law of Personal Names is meant to save children from their silly parents. Names are subject to the approval of the Ministry of Ecclesiastical Affairs and the Ministry of Family and Consumer Affairs. Parents can either choose from the pre-approved names on a government list of 3,000 boys’ and 4,000 girls’ names or apply to have a name approved. 15-20% of the roughly 1,100 reviewed names are rejected. In short, “Danish law stipulates that boys and girls must have different names, first names cannot also be last names, and bizarre names are O.K. as long as they are ‘common.'”

Sweden’s Naming Law, enacted in 1982 to prevent non-nobles from giving their children noble names (regulating noble names in 1982 seems late, no?), states: “First names shall not be approved if they can cause offense or can be supposed to cause discomfort for the one using it, or names which for some obvious reason are not suitable as a first name.” The law has been updated to allow men to change their last names to their partner’s, but at least two parents were quite unhappy with the registration process, naming their child Brfxxccxxmnpcccclllmmnprxvclmnckssqlbb11116 (pronounced Albin, and rejected by the Swedish Tax Agency and by the court on appeal). Also of note: Metallica was determined “inappropriate” but the decision was later overturned, Google was accepted as a middle name, and Allah was refused as objectionable.

Today, names serve as an access point to personal information. We find information about people by entering their names into a search engine. If the information does not include the individual’s name (or is not connected in another way, e.g., through metadata), it will not be retrieved. It seems that European law is more inclined to manipulate the content or the route to access the information (see the Spain vs. Google right to be forgotten dispute), as opposed to altering the initial access point. The US, on the other hand, is happy to allow the creation of a new identity through a name change (as long as it does not fall into one of the above-mentioned exceptions) but less inclined to support efforts to edit content or alter the other access points (like messing with intermediary indexes or anonymization). With over 90% of two-year-olds already accumulating an online presence and new parents naming their children so they are easily retrievable on Google, the clerk’s office may see a spike in paperwork.

SXSW Right to be Forgotten Core Conversation: A Reflection

Yesterday, Jill Van Matre and I presented at SXSW Interactive under the title “The Right to be Forgotten: Forgiveness or Censorship?” The format is what SXSW calls a core conversation – needless to say, we had no idea what was expected of us. With about 40 or 50 people in the room, we managed to have a rich conversation about the subject with most (perhaps all) of the stakeholders represented: lawyers, developers, bloggers, journalists, historians (yes!) – all of whom are users with a past. We (via the wonderful audience) covered every facet of the debate, including the costs of data management regulations, identity and reputation, public vs. private figures, reinvention and second chances, free speech, historical records, persistence of online content, cultural variation, and, of course, scandal. An attorney from Texas described her state as “culturally punitive,” which has inspired me to sketch the outline of an article titled “If Texas Ran the Internet.” Thanks to all who came and participated – I had a wonderful time and hope you did as well.

How to Perform a Digital Seance

In an article entitled “Deaths Pose Test for Facebook,” the WSJ outlines the problems that arise when a Facebook user dies. Jo White and I proposed a workshop or presentation on the subject to a conference under the title of this post (perhaps that is why we were rejected), but the automatic memorialization of pages struck us as odd. The ways in which friends and family interact with a page upon memorialization are fascinating. Friends will continue to converse with the deceased, often in a more intimate way that gives new insight into those relationships, and will often build a new form of community around the deceased, making or enriching connections.

People have always left memories with those they knew, as well as artifacts that stir up memories. The deaths referenced in the article are teen suicides, and the question is whether usernames and passwords should transfer with other forms of property upon death. At this point, family may either live with the memorial page, which some appreciate a great deal, or close the account. The digital object that represents some aspect of the deceased person cannot be altered by the family. Probate law has defaults that must be amended if an individual wants to go in a different direction. Although teenagers rarely have a will, giving all users an opt-out from memorialization would let each user indicate the level of control and presence they wish to exert after they can no longer manage their FB page.

This is of little comfort to parents trying to understand their children’s lives and deaths. But the idea of my parents sifting through my Google account is mortifying, especially if I were gone. Yahoo faced a similar issue in 2004, when the parents of a marine killed in battle wanted access to their son’s email account. Yahoo will not give out passwords to anyone but the account holder unless ordered to by a court, so the parents sought a court order, verifying their identity and relationship. Facebook does not require this level of verification (it does have some steps in place for reports and requests from next of kin). AOL only requires that proof of kinship and death be faxed in order to gain access to an account.

In light of the strong possibility that my family or imaginary future spouse may someday be able to access my accounts, consider this an amendment to my will (which I should get around to writing): access to online accounts operated by me shall be denied to any person, including my next of kin or any other individual or institution, upon my death. Any account that represents a form of private correspondence shall be deleted upon notification of my death. Any public account may be maintained financially (I have no idea why anyone would want to do this), but the content created by me may not be edited or altered in any way. These accounts may be deleted upon the request of my next of kin. This statement cannot be nullified by any later consent to Terms of Service for any site or online service. I assure you I will not read those Terms of Service, and true consent was not obtained.

I wonder what the TOS are for this site…

Prelims… DONE

I thought I’d post the reading list for my preliminary examination – completed last Friday. There is a (heavily) annotated bibliography – email me if you’d like a copy. Now that all of this is done, I plan to post more regularly.

Governance of Cyberspace
David R. Johnson and David G. Post, Law and Borders: The Rise of Law in Cyberspace, 48 Stan. L. Rev. 1367 (1996).
Lawrence Lessig, The Law of the Horse: What Cyberspace Might Teach, 113 (2) Harv. L. Rev. 501 (1999).
Lawrence Lessig, Code 2.0 (2006).
Jack Goldsmith and Tim Wu, Who Controls the Internet? (2006).
Julie Cohen, Cyberspace as/and Space, 107 Colum. L. Rev. 210 (2007).
Pamela Samuelson, Randall Davis, Mitchell D. Kapor, & J.H. Reichman, A Manifesto Concerning the Legal Protection of Computer Programs, 94 Colum. L. Rev. 2308 (1994).
Danielle Citron, Cyber Civil Rights, 89 Boston Univ. L. Rev. 61 (2009).
Eugene Volokh, Freedom of Speech, Information Privacy, and the Troubling Implications of a Right to Stop People From Speaking About You, 52 Stan. L. Rev. 1049 (1999-2000).
Jonathan Zittrain, The Future of the Internet (2009).
Lorrie Cranor, A Framework for Reasoning About the Human in the Loop in Usability, Psychology and Security (2008).

The User
Yochai Benkler, From Consumers to Users: Shifting the Deeper Structures of Regulation Toward Sustainable Commons and User Access, 52 Fed. Comm. L.J. 561, 561-62 (2000).
James Boyle, Shamans, Software, and Spleens (1997).
Jane Ginsburg, The Cyberian Captivity of Copyright: Territoriality and Authors Rights in a Networked World, 15 Santa Clara Comp. & Tech. L. J. 347 (1999).
Roberta Rosenthal Kwall, The Soul of Creativity (2009).
Jennifer Rothman, The Questionable Use of Custom in Intellectual Property, 93 Va. L. Rev. 1899 (2007).
Julie Cohen, The Place of the User in Copyright Law, 74 Fordham L. Rev. 347 (2005).
Susan Crawford, Who’s in Charge of Who I Am? Identity and Law Online, 49 New York Law School Law Review 211 (2004).

Information Privacy Law
What is Privacy?
Daniel Solove, Understanding Privacy (2010).
Helen Fay Nissenbaum, Privacy in Context (2009).
Harry Surden, Structural Rights of Privacy, 60 SMU L. Rev. 1605 (2007).
Alessandro Acquisti, What Behavioral Economics Can Teach Us About Privacy in Digital Privacy: Theory, Technologies, and Practices 363-377 (2008).
Deirdre Mulligan and Kenneth Bamberger, Privacy on the Books and on the Ground, 63 Stanford Law Review (forthcoming 2011).
danah boyd and Eszter Hargittai, Facebook Privacy Settings: Who Cares?, First Monday, Volume 15, No. 8, 2 Aug 2010.
Julie Cohen, The Right to Read Anonymously: A Closer Look at “Copyright Management” in Cyberspace, 28 Conn. L. Rev. 981 (1996).
Sonia Katyal, The New Surveillance, 54 Case West L. Rev 297 (2004).
Neil Richards, Intellectual Privacy, 87 Tex. L. Rev. 387 (2008).
Ryan Calo, People Can be So Fake: A New Dimension to Privacy and Technology Scholarship, 114 Penn. St. L. Rev. 809 (2010).
Data Collection and Processing
Julie Cohen, Examined Lives: Information Privacy and the Subject as Object, 52 Stan. L. Rev. 1373-1438 (2000).
Ann Bartow, Our Data, Ourselves: Privacy, Propertization, and Gender, 34 University of San Francisco Law Review 633 (Summer 2000).
Daniel Solove, The Digital Person (2006).
Paul Ohm, Broken Promises of Privacy: Responding to the Surprising Failure of Anonymization, 57 UCLA L. Rev. 1701 (2010).
Scott R. Peppet, Unraveling Privacy: The Personal Prospectus & The Threat of a Full Disclosure Future, Northwestern Univ. L. Rev. (forthcoming 2011).
The Distribution of Private Information
Daniel Solove, The Future of Reputation (2008).
Viktor Mayer-Schonberger, Delete (2009).
Lior Strahilevitz, A Social Networks Theory of Privacy, 72 U. Chi. L. Rev. 919 (2005).
James Grimmelmann, The Unmasking Option, 87 Denver U. L. Rev. Online 23 (2010).
Anita Allen, Dredging up the Past: Lifelogging, Memory and Surveillance, 75 U. Chi. L. Rev. 47 (2008).

Information Science
Information Measures and Value
Claude Elwood Shannon and Warren Weaver, The Mathematical Theory of Communication (1964).
Joseph A. Goguen, Towards a Social, Ethical Theory of Information in Social Science Research, Technical Systems and Cooperative Work: Beyond the Divide 27-56 (1997).
George J. Stigler, The Economics of Information, LXIX(3) The Journal of Political Economy 213 (1961).
J. McCarthy, Measures of the Value of Information, 42 Proc. Nat. Acad. Sci. 654 (1956).
R.A. Howard, Information Value Theory, 2 IEEE Trans. Systems Science and Cybernetics (August 1966).
R. Glazer, Measuring the Value of Information: The Information Intensive Organization, 32(1) IBM Systems Journal (1993).
Richard Y. Wang and Diane M. Strong, Beyond Accuracy, What Data Quality Means to Data Consumers 12(4) Journal of Management Information Systems 5 (Spring 1996).
Information Persistence and Preservation
Yiping Ke, Lin Deng, Wilfred Ng, and Dik-Lun Lee, Web Dynamics and their Ramifications for the Development of Web Search Engines, 50 Computer Networks 1430–1447 (2006).
B.E. Brewington and G. Cybenko, How dynamic is the web? 33(1-6) Computer Networks 257 (2000).
D. Fetterly, M. Manasse, M. Najork, and J. Wiener, A large-scale study of the evolution of web pages in WWW ‘03: Proceedings of the 12th International Conference on the World Wide Web 669 (2003).
J. Cho and H. Garcia-Molina, The Evolution of the Web and Implications for an Incremental Crawl in VLDB 2000, Proceedings of the 26th International Conference on Very Large Data Bases 200 (September 2000).
F. Douglis, A. Feldmann, and B. Krishnamurthy, Rate of Change and Other Metrics: a Live Study of the World Wide Web in USENIX Symposium on Internet Technologies and Systems, December 8-11, Monterey, CA. 147 (1997).
D. Spinellis, The Decay and Failures of Web References 46 Communications of the ACM 71 (2003).
S. Lawrence, D.M. Pennock, G.W. Flake, R. Krovetz, F.M. Coetzee, E. Glover, F.A. Nielsen, A. Kruger, and C.L. Giles, Persistence of Web References in Scientific Research 34 IEEE Computer 26 (2001).
Z. Bar-Yossef, A.Z. Broder, R. Kumar, and A. Tomkins. Sic Transit Gloria Telae: Towards an Understanding of the Web’s Decay in WWW ‘04 Proceedings of the 13th International Conference on the World Wide Web (2004).
J. Leskovec, J. Kleinberg, and C. Faloutsos, Graphs over Time: Densification Laws, Shrinking Diameters and Possible Explanations in Proc. 11th ACM SIGKDD Intl. Conf. on Knowledge Discovery and Data Mining (2005).
Wallace Koehler, A Longitudinal Study of Web Pages Continued: A Consideration of Document Persistence, 9(2) Information Research (2004).
Jeff Rothenberg, Avoiding Technological Quicksand: Finding a Viable Technical Foundation for Digital Preservation (1998).
Marilyn Deegan and Simon Tanner, Digital Preservation (2004).
Julien Masanes, Web Archiving (2006).

Code Supports Forgetting

Old content is for libraries (they need help with digital preservation, but that is for another time), and Google has made a big algorithmic change that will give users timelier results. The unintended consequence may support privacy, at least as that value relates to the Right to be Forgotten. By recognizing that users want the freshest information (because it is generally the most valuable) and adjusting to decrease the relevancy of stale information, Google helps people move on from content that is embarrassing, negative, or no longer accurate and that might otherwise linger indefinitely. By embracing the impact time has on information, Google supports information stewardship – it allows information to fall away, at which point it can be assessed for archival value and privacy harms. Check out Giving You Fresher, More Recent Search Results.
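
Google’s actual ranking function is, of course, proprietary. Purely as an illustration of the idea, down-weighting stale results can be sketched as a query-relevance score multiplied by an exponential recency factor; the function name, the 30-day decay constant, and the dates below are my own assumptions, not anything Google has disclosed:

```python
import math
from datetime import datetime, timedelta, timezone

def recency_weighted_score(base_relevance: float, published: datetime,
                           now: datetime, decay_days: float = 30.0) -> float:
    """Down-weight a document's relevance as it ages: multiply the score
    by exp(-age / decay_days), so a month-old result (with the default
    constant) keeps only about 37% of its original score."""
    age_days = (now - published).total_seconds() / 86400
    return base_relevance * math.exp(-age_days / decay_days)

now = datetime(2011, 11, 10, tzinfo=timezone.utc)
fresh = recency_weighted_score(1.0, now, now)
stale = recency_weighted_score(1.0, now - timedelta(days=365), now)
print(fresh > stale)  # → True
```

Under any scheme like this, old content does not disappear; it simply sinks, which is exactly the “falling away” that makes room for stewardship decisions.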

Toxic Information and Privacy Stewardship

Some information lingers longer than it should, longer than it is valuable. It becomes inaccurate, unreliable, outdated, and harmful – toxic. If web content has a very short lifespan (say a half-life of 2 days – see Modelling Information Persistence on the Web by Daniel Gomes and Mario J. Silva) and we are concerned with removing harmful old information (e.g., through the Right to be Forgotten), I suggest we are not managing this space properly. We are losing massive amounts of information every day that may or may not be incredibly valuable in the future, while holding on to pieces of toxic information whose loss is considered censorship.
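
To make that half-life figure concrete: under simple exponential decay, a two-day half-life means the surviving fraction of content halves every two days. The function below is my own back-of-the-envelope sketch of that arithmetic, not a model from the Gomes and Silva paper:

```python
def surviving_fraction(days: float, half_life_days: float = 2.0) -> float:
    """Fraction of content still accessible after `days`, assuming simple
    exponential decay with the given half-life."""
    return 0.5 ** (days / half_life_days)

# With a 2-day half-life, under 1% of today's content survives two weeks.
print(surviving_fraction(14))  # → 0.0078125
```

If anything close to that rate holds, the toxic content that does persist for years is a tiny, unrepresentative residue kept alive on purpose.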

What if we, as users, as contributors, as internet citizens, cared for this digital space as stewards for future information uses and users? The internet has many analogies – the one I find the most dangerous and the most optimistic is the library. The internet is dramatically different from what it was ten years ago, and it can and likely will be different ten years from now. As stewards of this decentralized, user-created space, we should also be librarians. As users, we have proven that we can enrich the space, but are we managing it properly? Are we protecting its riches?

Principles of information stewardship exist in spaces like Wikipedia (see the discussion of why the Star Wars Kid entry does not use the poor guy’s real name), but how many of us consider whatever happened to our old blog posts, flickr accounts, or myspace pages? I am as guilty as anyone of information littering, dumping, and polluting. To take the stewardship concept even further, some of this information is “biodegradable,” as in non-toxic, but that is dumb luck. I have been responsible with what I put into the space (a good first step) but have not been responsible about managing any of it – correcting inaccuracies, updating content, anonymizing news that is no longer news. This notion of information stewardship (and in turn, privacy stewardship) is underdeveloped to say the least – it’s 1am after all. Stewardship promoted by design is the natural next step (and hopefully the next post), but it is even less developed at this point.

The first step should be a long-term timeline of web information lifecycles – how long is information persisting online, and is that persistence increasing or decreasing over the years? Only a few studies have been done on persistence, and not over time periods relevant to regulation. Second, based on those findings, the characteristics of sources of lingering harmful information should be identified. Regulation, if appropriate, should be tailored to these sources. Finally, standards and guidelines of information stewardship should be established and supported by design to encourage the librarian in all of us.

Too Little Information to Regulate Information Overload?

As we consider the next phases of management and regulation of the web, we reflect upon the last few decades of Internet developments and social impacts. It is interesting that in an attempt to reflect on our new abundance of information (some would say overabundance), I find information gaps. The removal or “forgetting” of harmful information from one’s past is a contentious topic. Comically, as I investigate whether the information flood needs to be dammed up, I cannot find information I have deemed necessary for the assessment.

Truthful information (facts or opinions), when initially created and distributed, is incredibly valuable. It is novel and informative (and generally receives First Amendment protection as newsworthy); it accurately represents the subject; it is a reliable communication from the speaker; and is heavily used by a large number of people. Organizations would call this operational information. Over time, however, this newsworthy information transforms into a record. Time generally renders the information less accurate – the subject changes over time but the information still represents the subject in its earlier state. The information is less reliable as a communication from the speaker – perhaps she no longer would communicate that information or has lost interest in communicating it. The information is also stale and rarely used. The information may eventually expire, meaning it will be deleted, anonymized, archived, or otherwise made less accessible.

The Right to be Forgotten would act as a regulatory expiration phase for harmful information that has lost its value. Because so much of the Web becomes drastically less accessible “naturally” (dead links, changes in content, changes in URLs, revamping of sites, server changes, sites that are simply abandoned), it is important to consider whether information that is in fact low value remains accessible – and hurtful. Does the combination of search algorithms, human nature, and information life cycles take care of forgetting the “right” information? In order to determine what types of sites continue to hold harmful information beyond the time period designated by regulation, it is appropriate to ask: what is the average lifespan of content on the web? Is it getting longer? Can we expect the lifespan to grow or shrink in the future? How do search query results change over time? These numbers exist but are snapshots of various time periods taken randomly. In order not to see everything as a nail just because we’ve got a hammer, we must determine where and in what form information remains accessible beyond both the regulatory and natural expiration dates.