Tweeting into the self-archive
Relaunching a blog has made me reconsider one of my firmest principles about my social media use, which was not to use my full name on Twitter.
I signed up for Twitter in May or June 2009, but took my surname off the account as soon as I started using it actively in November last year. It’s back there now in large part because later today I’m going to be taking part in a live chat on the Guardian Higher Education website about surviving your first academic post. That uses full names, because we’re real people; and since I belong to a higher education community on Twitter, I’d like to include my Twitter name in the profile, because my tweeting colleagues probably know me better by that name than by my legal name and job title.
Since I usually wall my digital presences off from each other, integrating two fields this way is new and still uncomfortable. I can drive traffic to my blog through Twitter and invite readers to join the social media conversation through my blog, but there are risks.
I don’t have a permanent contract, so even more than permanent or tenured staff I have to face the possibility that I may not spend all my working life as an academic. I have a visible digital profile as a teacher and researcher but not as, let’s say, an administrator, a copy-editor or an applied underwater basket-weaver. When I need to sell myself as someone whose career narrative has led inexorably to becoming an underwater basket-weaver, that profile becomes a liability.
The connections between the various locations of my online persona are visible to everyone, whether or not I had that audience in mind when I created the content. I’m perpetually accountable for thoughts in progress, replies taken out of context, jokes and, of course, mistakes.
In return for accepting these risks, I get to take part in a network that has grown up organically among academics who share links, resources and ideas.
Although I logged back into Twitter in November on a slow work day to follow on-the-ground reports from the tuition fees protests in London, the researchers, lecturers, PhD students, professors, skills developers, research administrators and post-academic types I now follow there have made me think more deeply about teaching practice, career planning and the philosophy of what academics do.
Not foregrounding my own name in my profile had the liberating effect that I didn’t feel I had to view or represent these issues through the lens of my own research interests all the time.
I can use Twitter to start figuring out what the issues are in post-secondary education in other countries where I might want to work one day; to swap perspectives on communications, management and the effects of public spending cuts with people in other parts of the UK public sector; or just to kick back and watch television.
Like all communities of practice, Twitter evolves its own conventions and codes. Some of them can practically be expressed in a how-to guide, like the colloquial and conversational tone that experienced Anglophone tweeters seem to expect or the double life of hashtags.
Typing a hashtag (# plus a string of letters) into a tweet means users can search Twitter to discover every recent tweet with that hashtag. Hashtags can crowdsource news (think #tahrir during the uprising in Egypt or #tottenham on the first night of the UK riots); they can gather people who don’t know each other into scheduled weekly chat sessions (here’s the story of the #phdchat community), turning Twitter into a messageboard or forum. Twitter users have also turned the hashtag notation into a convention for denoting a sarcastic aside: if I type the same thing into Facebook or an email, where hashtags don’t work, I’ve transferred it from one community of practice to another, making a tiny contribution to linguistic change.
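For the curious, the pattern is simple enough to sketch in a few lines of code. This is a toy approximation of how a client might pick hashtags out of a tweet’s text, not Twitter’s actual parsing rules (which handle many Unicode scripts and various edge cases):

```python
import re

# Toy hashtag matcher: '#' followed by one or more letters, digits or
# underscores. An illustrative approximation only -- Twitter's real rules
# are more elaborate.
HASHTAG = re.compile(r"#(\w+)")

def hashtags(tweet: str) -> list[str]:
    """Return the hashtags mentioned in a tweet, without the leading '#'."""
    return HASHTAG.findall(tweet)

print(hashtags("Following the protests via #tahrir and #tottenham tonight"))
# → ['tahrir', 'tottenham']
```

Searching for a tag is then just a matter of matching every recent tweet against the same string, which is why the convention scales from breaking news to weekly chat sessions.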
Other aspects of the Twitterverse are delights you discover as you get to know the platform, like Easter eggs in gaming. The virtual impressionists impersonating a BP public relations team, Death, or the Queen (who proclaims it ‘gin o’clock’ at the end of the British working day); the automatic bots that seize on and retweet any mention of socialism, grumpiness or the Scottish biscuit-maker Tunnock’s.
Most academic types on Twitter use their full names and affiliations, and so I’ve joined the crowd. I’m still not sold on the idea, and I wouldn’t have joined at all if using a full name had been compulsory, as Google has tried to make it on its own social network, Google Plus.
Last month, when Google Plus started to suspend the profiles of users with pseudonyms, or even with names an English-speaking monitor didn’t recognise as real, early adopters set off the ‘nymwars’, identifying dozens of reasons why a one-real-name policy might be problematic or even dangerous. danah boyd, a tech researcher at Harvard, has read the policy as an expression of social networking corporations’ power and privilege:
What’s most striking is the list of people who are affected by “real names” policies, including abuse survivors, activists, LGBT people, women, and young people.
Over and over again, people keep pointing to Facebook as an example where “real names” policies work. This makes me laugh hysterically. One of the things that became patently clear to me in my fieldwork is that countless teens who signed up to Facebook late into the game chose to use pseudonyms or nicknames. What’s even more noticeable in my data is that an extremely high percentage of people of color used pseudonyms as compared to the white teens that I interviewed. Of course, this would make sense…
The people who most heavily rely on pseudonyms in online spaces are those who are most marginalized by systems of power. “Real names” policies aren’t empowering; they’re an authoritarian assertion of power over vulnerable people. These ideas and issues aren’t new (and I’ve even talked about this before), but what is new is that marginalized people are banding together and speaking out loudly. And thank goodness.
‘In real life, you get to choose when to use your name, and how much of it to use’, points out Kee Hinckley; online, a search engine has hoovered it up for you already. Having every sphere of your life linked together for a casual acquaintance, or a wrongdoer, to inspect is new.
I have some personal friends who blog on Dreamwidth, where every sign-in option for commenting belongs to a service I use for a different purpose. Unless they enable anonymous commenting, I can’t compliment or respond to their posts inside their blog space.
Facebook has almost got things right, for my own needs, with its granular privacy controls. On Facebook, once I’ve organised my contacts into lists, I can display content to certain groups and prevent other groups seeing it. I can even block individual people from seeing individual posts: if I can’t figure out what to buy my mother for her birthday, I can post a status update to ask my friends’ opinion without spoiling it for Mum.
But all that content has the same name on it. That doesn’t bother me, once I’ve made sure no-one outside my contacts list can find me, but it’s a risk for others; I have more than one Facebook contact who has changed their name but could be in danger if the change were known to all their old Facebook connections and all those people’s friends. I do give thanks I didn’t have access to this sort of technology in adolescence. (Eric Schmidt, the CEO of Google, once suggested that today’s teenagers would need to change their names before entering the workplace to disassociate themselves from their digital trails.)
We’re depositing perpetual, open-access self-archives of ourselves. Is it worth the risks? Well, for as long as you’re still able to read this post, it must be.