
Fear of Twitter: technophobia part 2


When I was a lifeguard in high school, two of my fellow lifeguards, Steve and Pete, tried to conduct as much of their conversation as possible in lines quoted directly from the Chevy Chase movie, Fletch. This is what qualified as comedy. Steve was apparently the ‘more clever’ of the two, as he probably achieved Fletch Quotation Ratios as high as 20%; Pete, though quite well tanned, likely only managed 10% FQR at best. I hadn’t seen the movie, and I was never much for quoting film scripts (not even Monty Python), so I assumed that Steve’s high FQR was either a symptom of premature senility or a sign of the impending collapse of Western civilization.

Recent fears about the negative cognitive consequences of the social networking site Twitter, which I mentioned in an earlier post, Is Facebook rotting our children’s brains?, led me to recall Steve and Pete’s battle for high FQR. In both cases, concerned observers might wonder whether patterns of mental activity can lead to long-term neural degeneration; I haven’t checked in on Steve or Pete in more than 20 years, but I suspect they’re both locked in institutions living out a cruel Chevy Chase imitation from which they can no longer escape.

Twitter, even more than other Internet-based social networking applications, seems to provoke apocalyptic fears of mass mental degradation. Over at Alternet, for example, Alexander Zaitchik asked Twitter Nation Has Arrived: How Scared Should We Be? In the piece, Zaitchik wonders whether what was ‘once an easily avoided subculture of needy and annoying online souls’ was bringing about the apotheosis of all that is loathsome in American pop culture: ‘look-at-me adolescent neediness, constant-contact media addiction, birdlike attention-span compression and vapidity to the point of depravity.’ Rob Horning of Pop Matters warns about ‘Twitterification’ in a piece titled Foucault’s Facebook. Keith Olbermann named Twitter ‘worst person in the world’, for one episode at least (see the video at YouTube); Olbermann found someone already Twittering in his name, even using his email address. And if you’re not already convinced that Twitter is the unmentioned fifth Horseman of the Apocalypse, John Mayer’s Twitter obsession has been blamed for Jennifer Aniston pulling the pin on their relationship.

Fortunately, even if we are on the non-stop plane to cognitive Armageddon, Web 2.0 assures us that we will have clever guerilla videos about our own imminent destruction as our in-flight entertainment. From SuperNews, we have a helpful cartoon, ‘The Twouble with Twitters’, to explain to us ‘the latest socially networking micro-bloggy thingy,’ especially if you’re a slow-on-the-uptake parent not sufficiently worried about adolescent technology use (are there any?).

More after the jump…

But first, a caveat or two…

Before I go any further, I have to offer a couple of caveats so that my own position in this particular moral panic is a bit clearer. However, if you want to cut to the chase, skip to the next heading below the ‘fail whale’ picture:

1) Although being a geek blogger is normally a symptom of being an early technology adopter, I am not one. I stay loyal to old mobile phones even when offered snazzy new ones because adapting to a new one is so frustrating. The old laptop on which I’m writing this has worn-off keys and several irreparable structural problems, but I’m slow to upgrade. In general, new technologies only hold my interest when it’s obvious that they can make some essential task in my life easier (seldom when they merely add new functions to daily life) or when a dependable piece of technology must finally be laid to rest.

2) I do not have a Twitter account, nor do I plan on getting one. A Facebook account in my name was put up by a student, but I’ve never figured out how to use it or update it. (Do any of my former students know my password, since they put up the account?) If anyone wants to stalk me, you are going to have to use old-fashioned techniques like just Googling me or waiting until I blog. It seems hard enough keeping up with the constant onslaught of email without monitoring tweets. In general, communication seems an impediment to serious writing, so I frequently hide from all communication (including conversation) when I need to get something done in my intellectual life.

3) My 19-year-old daughter is (was?) a heavy user of MySpace, and like many adults, when I’m inhabiting the parental social role, social networking strikes me as a staggering waste of time and energy. Of course, my mother said the same about Dungeons and Dragons. Parents will likely always say this about adolescent activities unless they involve happy completion of household chores or meaningful child labour, preferably heavy physical labour or piecework producing significant income streams.

Even though I am a registered Techno-Curmudgeon and Grumpy Father-of-Teenager, I won’t be hating on Twitter in this post, because my own refusal to participate in a particular wave of communication technology need not be justified by disparaging the technology itself. I’m more interested in the particular shape technophobia takes in attacks on Twitter, and in asking whether these fears are neuroanthropologically plausible or, in fact, overblown in light of cross-cultural and evolutionary evidence about the development and variability of human cognition.

I’m also not going to comment on the latest technophobic rants by the Lady Susan Greenfield (see, for example, the frightening Social websites harm children’s brains: Chilling warning to parents from top neuroscientist in The Daily Mail). She still can’t cite any evidence, and, in a recent interview on ABC (the Australian network), she demonstrated she still can’t tell a social networking site from a video game (these problems are discussed in the previous post, Is Facebook rotting our children’s brains?).

For example, in the recent interviews, Greenfield tries to pin an increase in the rate of autism on online communication, which is so preposterous it beggars the imagination: how many people become autistic after using social networking websites?

Fail whale

One thing that a number of commentators have pointed out is that Twitter acts as a bit of an inkblot for an older generation, which either declares it a harbinger of the demise of complex thought or proclaims it a revolution in human social life. Samantha Bee and Jon Stewart at The Daily Show got into the act, especially after it became widely known that Congress was busy Tweeting, even notorious late tech adopter John McCain! (see stories at Time or on McCain’s Twitterview at cnet).

Bee makes the point that some of the technoenthusiasm among (how shall we say this?) older adopters in politics and media appears to be a desperate attempt to claw back some relevance with a younger generation that just does not seem to care. With a nineteen-year-old daughter, I’m abundantly aware that this is a recipe for severe parental pathos (just give it up: we’re ancient in their eyes, and no amount of techno-hipness is going to fix that). As Bee explains to Old Man Stewart, young people love Twitter… according to older people.

But it’s also the case that some older people loathe Twitter, so perhaps it behooves Neuroanthropology to contemplate the potential effects of the technology, specifically…

Can Twitter make you stupid?

Critics of Twitter bemoan the tight constraints that the 140-character-limited format places on communication, the mangled grammar, SMS-style abbreviations, emoticons, and other character-economizing techniques that result. Alexander Zaitchik, of Alternet, for example, is emphatic in his critique of the cognitive consequences of Twitter:

What was once just a colorful special-needs classroom on the Internet is starting to look like a steel spike aimed at the heart of what remains of our ability to construct and process complete grammatical sentences and thoughts.

The Daily Show’s Samantha Bee has a bit of fun with the short attention span allegedly evidenced by Twitter-users by suggesting that there will eventually be Grunter for those who can’t even be bothered to write or read 140-character Tweets. And Slate video brings us Flutter, a parody web-based company for those who find 26 characters enough to capture their thoughts or actions.

In contrast, enthusiasts see the short length as one of Twitter’s primary selling points; ‘micro-blogging’ is alleged to be more efficient, faster, more democratic, and less demanding of time-starved users. For example, Anita Hamilton at Time Magazine argues that it is precisely because Twitter is ‘totally silly and shallow’ that it is ‘on its way to becoming the next killer app.’

Like SMS and texting, Twitter forces users to economize, dropping many of the articles and auxiliaries demanded by formal English and generating a simplified grammar, a bit like a pidgin. Some of the same quirks of English that are eliminated in Twitter-speak are also neglected in various vernacular dialects, things like helping verbs or articles. My own least favourite is the dropped ‘to be’ in expressions like ‘this room needs cleaned’, a construction I also hear in conversational Aussie and upper Midwest US English. The most technophobic version of this fear is that, by writing mangled, Koko-the-gorilla-level grammar on Twitter, users will gradually lose the ability to form more complex grammatical constructions, and then lose the underlying capacity to think in any structure more complex than Subject-Verb-Object-☺.

For the fear of grammar’s annihilation to be warranted, critics would have to demonstrate that simply using a reduced, abbreviated communication format leads users to lose their grasp of grammar. I’m a little suspicious of this claim because the much-maligned degradation of grammar began long before the advent of Twitter. As an academic with writerly pretensions, I’ve been amazed at the poor quality of student writing for as long as I’ve been reading it; recent examples are no worse than what I remember seeing in the early 90s, when email was new, the Internet hadn’t gone graphic, and Twitter was still a distant nightmare over the virtual horizon. I suspect that the inability of some students to write is one of those complex, multi-causal phenomena, with contributions from all sorts of angles: changes in curriculum, a shift away from traditional grammar instruction, alterations in student ambition and motivation, changed communication practices, even the democratization of university access.

The dangers of mangled grammar, in speaking, for example

One of the clearest bits of evidence that Twittering (or texting, or some other yet-to-be-invented networked communication, like Grunting) will not permanently impair our ability to form thoughts is the gap that already exists between written and spoken language. In studies of children, for example, Harrell (1957) found that past the age of 13, the average length of a written sentence exceeded that of a spoken one, and across all ages, written grammatical complexity tended to be greater than spoken grammatical complexity. Yet even though writing was more grammatically complex, Harrell found that orally narrated stories tended to be longer than written ones. That is, the simpler grammar of spoken English did not necessarily impoverish narration; rather, the effort of writing tended to shorten stories and encourage greater economy of words (much as texting does to language).

If one looks at Twittering or texting not as a degenerate form of written language but in relation to spoken conversation, one gets a very different picture of its inadequacies. Close analysis of naturally occurring conversation reveals that spoken language is seldom built with the same grammatical care and consistency as written text. Utterances are rarely complete sentences, and they feature such non-grammatical constructions as abundant fragments, false starts, odd ellipses and verbal pauses (ummm…). Spoken language tends to be more expansive and less efficient, but also more lexically constrained, with a smaller vocabulary and greater word repetition.

Conversational English is heavily dependent upon context, full of indefinite pronouns and deixis (‘this,’ ‘that,’ ‘there’) that can become incomprehensible in transcription because the words are taken out of context. Anyone who has ever waited too long to transcribe a conference audio tape or recorded interview knows that even a high-quality recording can be hard to follow without the context needed to make sense of utterances. In contrast, written language is self-contained and overly definite compared to speech, with fewer interactive elements (such as questions and dialogue) and an independence from context that speech does not have.

In other words, if using non-grammatical English endangered thought, we would all need to take an immediate vow of silence, or learn to speak in fully formed sentences. Of course, if we talked that way, we would sound like the dialogue in classic movies, from before script-writing became more naturalistic, which now seems impossibly artificial and rehearsed.

One thing I’m curious about is the way that Twitter and texting, although they are speech-like in their lack of self-containment (in part simply due to length), still travel without context. I wonder to what degree this lack of context leads to patterns of inaccurate inference or poses cognitive challenges for both texter and recipient. What I suspect is that interpreting texts demands of the reader a projection of context, one that is probably mostly accurate when texter and recipient share social worlds, but that might become impenetrable, or simply be misconstrued, when this is not the case (for example, with celebrities texting fans).

In the same way, conversation is a multi-modal form of communication, with dialogue occurring not only through audible interaction, but also through visual perception of facial expressions, gestures, and body language. David McNeill’s brilliant book, Hand and Mind: What Gestures Reveal about Thought (1992), discusses how even children communicate different sorts of information through speech and through gesture when trying to recount what happened in a Tweety and Sylvester cartoon. Gestures, according to McNeill, form a kind of parallel language, linked to mental images in an imaginary story-telling space. How do texters or Tweeters compensate for the radical reduction in available communication channels, or do they? What I suspect is that many texts and Tweets are formulaic, less about communicating information than about establishing co-presence or social connection, as I’ll discuss below.

If you’re interested in the contrast of spoken and written English, there’s a nice online unit on the subject from Open University, English grammar in context.

Linguistic innovation: is texting a new pidgin?

Twitter and other abbreviated codes are hardly unprecedented; telegraph Morse code, naval flag signaling, hand signaling in some professions (like around aircraft), trade languages or pidgins, even doctors’ prescription shorthand all involve highly reduced forms of communication. In fact, rather than cognition being degraded by the restricted format, we find that these limited codes often become richer and richer the more they become dominant modes of communication.

For example, deaf sign languages, which may have begun as reduced derivatives of spoken languages, have become languages in their own right, with grammatical principles, inflection, and all the expressive characteristics of spoken language. Likewise, the more a pidgin becomes a dominant mode of communication, the more complex it grows; the generation that grows up with a pidgin as a first language will usually transform it into a stable, complex creole, as writers on pidgins and creoles such as Jared Diamond (1991, before his recent apotheosis) have described (see also John Holm’s work [2000]).

Like Twitter-speak, pidgins, creoles and sign languages were long looked down upon by outsiders who considered them debased, degenerate versions of their contributing languages. The lesson from language formation, then, is that the more people Tweet, the more likely it is that Tweeting (or texting more generally) will develop a stable grammar of its own, with rules for use not found in the ‘trunk’ language. One clear case of this is the adoption of abbreviations and emoticons that are widely employed by users but impenetrable to the Twitter-illiterate (not to be confused with the fear of someone becoming Twitterate).

In text-speak we find evidence of exactly the sort of linguistic invention that we would expect in an area of rapid communicative change. For example, the rise of emoticons and abbreviations to inflect short phrases is, in a way, pushing the orthography of English in new directions. Emoticons have certain similarities to pictographs (like hieroglyphics) and ideographs (like Chinese characters). The abbreviations remind me of the formulaic Latin codes that I was forced to write on my papers by some of my Jesuit instructors (for example, ‘AMDG’ in the top left corner of the page for Ad Maiorem Dei Gloriam, ‘for the greater glory of God’ – ironic considering some of the stuff I was writing…).

The pictographs evidence a kind of open innovation and visual intelligence, whereas the abbreviated formulaic phrases, to me at least, look like an insiders’ code, a kind of specialized argot that only the Twittagentsia will understand. (Again, in confessional mode, I thought ‘lol’ stood for ‘lots of love’ for several months when I first saw it, demonstrating my abject outsider status.) Subcultural groups frequently generate their own slang, which makes joining them increasingly ‘costly’ in terms of learning and time, and which allows in-group members to signal their depth of knowledge to each other. So, although abbreviations may be especially valuable in character-limited genres, they are hardly unprecedented or a sign of cognitive collapse. Both Australia and Brazil, for example, favour abbreviations to a much greater degree than I found in the US, in part because both have extraordinary bureaucratic proliferation, another in-group where specialized knowledge is hard-won and signaled extravagantly.

It would be interesting to compare different emergent Twitterlects or text codes – say Japanese, Hong Kong English, American English, and various European languages – to see if there are any patterns among them in terms of shortening strategies or shared code. Because Twittering is likely to occur mostly within a language group, rather than between users of different first languages, I wouldn’t expect a pidgin-like dialect to arise, but a case like Hong Kong or Singaporean English might be an interesting counter-example.

Living online: cognitive consequences of interruption

Okay, if grammar is not likely to be a casualty of Twitter, what about the possible deleterious effects on thinking and interaction of all those constant Tweets bleating away at us? Won’t the constant intrusion of social interaction or electronic stimulation into nearly every waking moment of one’s life have consequences for the way we think and behave?

Although we may have confidence in our ability to ‘multi-task’, our attention is strictly single-tracked, as a number of studies have found (see, for example, John Medina’s website for his book Brain Rules). At Crucial Thoughts, a poster worries that Twitterfox is ‘dangerous for your cognitive health.’ Vaughan at Mind Hacks points to a similar article by Brandon Keim, Digital Overload Is Frying Our Brains, in Wired Magazine (which also has a graphic that I find pretty disturbing).

Keim’s article is based on an interview with Maggie Jackson about her recent book, Distracted: The Erosion of Attention and the Coming Dark Age, in which she warns of the effects of our ‘high-speed, overloaded, split-focus and even cybercentric society.’ Jackson argues that a ‘never-ending stream of phone calls, e-mails, instant messages, text messages and tweets is part of an institutionalized culture of interruption, and makes it hard to concentrate and think creatively’ (from Keim’s article). The effect could undermine civilization as we know it:

Dark ages are times of forgetting, when the advancements of the past are underutilized. If we forget how to use our powers of deep focus, we’ll depend more on black-and-white thinking, on surface ideas, on surface relationships. That breeds a tremendous potential for tyranny and misunderstanding. The possibility of an attention-deficient future society is very sobering.

Vaughan takes this technophobia on with elegant examples from his own life, not least of which is the most distracting of all old (very old) media, in his post The myth of the concentration oasis:

If you think twitter is an attention magnet, try living with an infant. Kids are the most distracting thing there is and when you have three or even four in the house it is both impossible to focus on one thing, and stressful, because the consequences of not keeping an eye on your kids can be frightening even to think about.

Normally I don’t quote at length in my posts – after all, why not just read the original – but why rewrite this section when Vaughan does it so well? He points out that, in fact, the time and space to concentrate may be the anomaly in the history and sociology of human cognition:

For people trying to work and run a family at the same time, not only are the consequences of missing something more important and potentially more dangerous, but it’s impossible to take a break. A break means your kids are in danger, your family doesn’t get fed and you’re losing money that buys the food.

Now, think about the fact that the majority of the world live just like this, and not in the world of email, tweets and instant messaging. Until about 100 years ago everyone lived like this.

In other words, the ability to focus on a single task, relatively uninterrupted, is the strange anomaly in the history of our psychological development.

New technology has not created some sort of unnatural cyber-world, but is just moving us away from a relatively short blip of focus that pervaded parts of the Western world for probably about 50 years at most….

The past, and for most people on the planet, the present, have never been an oasis of mental calm and creativity. And anyone who thinks they have it hard because people keep emailing them should try bringing up a room of kids with nothing but two pairs of hands and a cooking pot.

In this post, Vaughan is being particularly ‘neuroanthropological,’ using the tried and tested techniques of cross-cultural and cross-temporal comparison to denaturalize our own condition, pointing out that what we might assume to be the ‘natural state’ of humans is, in fact, a bit of an aberration. (Check out the comments on Vaughan’s post as well.)

As Huberman, Romero and Wu (2009) suggest in a recent article in First Monday, ‘attention is the scarce resource in the age of the Web.’ Although social networks may look vast and distracting, most people do not actually interact with, or pay much attention to, the majority of people in their ostensible ‘networks.’ That is, far from social networking creating a newly and highly distracted state, we find both that distraction is a pervasive human condition and that, even in the distracting environment of social networking, we simply ignore a lot of what is happening.
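To make that contrast concrete, here is a minimal sketch in Python of the kind of comparison Huberman, Romero and Wu draw between a user’s declared network (everyone they follow) and their actual interaction network (if I read them correctly, they count as a ‘friend’ anyone to whom a user has directed at least two posts). The handles, tweets and variable names below are invented for illustration, not taken from their dataset or code.

# A minimal sketch, in the spirit of Huberman, Romero and Wu (2009):
# compare a user's declared network (everyone they follow) with their
# actual interaction network (followees they @-reply to at least twice).
# All handles and tweets are invented for illustration.
from collections import Counter
import re

declared_followees = {"alice", "bob", "carol", "dave", "erin", "frank", "grace"}

tweets = [
    "@alice did you see the fail whale again?",
    "@alice ha, yes, twice this morning",
    "@bob lunch?",
    "waiting for the bus, riveting stuff",
    "@alice ok, meeting moved to 3pm",
    "@bob running late, sorry",
]

# Count how often each handle is directly addressed.
mention_counts = Counter(
    handle.lower()
    for tweet in tweets
    for handle in re.findall(r"@(\w+)", tweet)
)

# 'Friends' in the paper's sense: followees contacted at least twice.
friends = {h for h, n in mention_counts.items() if n >= 2 and h in declared_followees}

print("declared network size:", len(declared_followees))
print("actual interaction network size:", len(friends))

Run over a real archive of tweets rather than this toy list, the same little calculation makes the paper’s basic point: the declared network looks vast, but attention, the genuinely scarce resource, is lavished on only a handful of its members.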

The point for me is not that being distracted or not being distracted is ‘normal,’ but that both are social products with cognitive consequences. A space free of distraction, in which one can engage in the thinking behaviour that leads to extended arguments, writing, creativity, and the like, is actively created, not just determined by our context, whether that context is children or customers or telephones or Twitter or life-threatening predators. One could say that the interruption-free space that facilitates certain kinds of thought is a social and technological achievement, likely open only to some people (non-care-givers, those with access to exclusive spaces). The fact that medieval scholars were often cloistered, and that university professors were once required to be celibate clergy members, demonstrates that concerns about the effects of ‘distractions’ on intellectual work are hardly new.

When my students tell me that they have a hard time putting together a consistent argument in an essay, I ask them whether they wear headphones and how many Instant Message windows they have open on their computer desktop when they try to write (see Grinter and Palen 2002 on ‘multitasking’). Of course, many of them are engaged in active self-distraction; Twitter is hardly unprecedented in this way. But I have the same problem around the farm sometimes (as I write this, my wife and I are both trying to concentrate, but our six-month-old colt is harassing his mother and charging around the paddock outside our window in spite of four inches of rain having just fallen on the sloppy ground – distracting).

To assume that a technology is inherently distracting and so pervasive that it cannot be turned off or tuned out is an odd kind of technological determinism, a sense of powerlessness in the face of our own creation. As Vaughan argues, text messages, emails and Twitter are far easier to shut off than babies, family members, or even foals. If we are alarmed that we are distracted by these media, we should ask ourselves what is so important that we won’t solve the ‘problem’ in the most obvious way.

Privacy and inattention

Another concern about Twitter is that the very need to use it signals the spread of narcissism or some other personality frailty. In an article, A load of Twitter, which appeared in The New York Times, author Andy Pemberton cites a number of psychologists who see Twitter as a symptom of a deeper psychic problem in our society.

Dr. David Lewis, for example, opines that Twitter is a sign that we fear our own annihilation by anonymity: ‘We are the most narcissistic age ever…. Using Twitter suggests a level of insecurity whereby, unless people recognise you, you cease to exist. It may stave off insecurity in the short term, but it won’t cure it.’

(Of course, as Steve of Belfast wrote in the comments on Pemberton’s article, ‘If leaving short messages on the internet for others to read is a waste of time, then the Times presumably wouldn’t bother letting readers leave comments here…’)

The desire to communicate, however, is not always a need to spread new information; sometimes we interact simply to feel connected to each other. The vacuousness of most tweets is probably no greater than the vapidity of most conversation. As Dominik Lukeš writes in response to the article, A load of Twitter:

Much of our discourse serves no greater purpose than primate grooming (cf. Robin Dunbar) and that includes a lot of academic discourse – the whole referencing system is based on not much more. So when the journalist [Pemberton] asks “What kind of person shares information with the world the minute they get it?”, the answer is any normal person does. Twitter just makes it possible to share instantly with many people but it relies entirely on well-established social and psychological phenomena that are not only perfectly normal but necessary to the functioning of humans in groups.

In fact, the same social forces that compel socialization may also drive the uptake of new communication media. Research suggests that Instant Messaging and SMS (Short Message Service on mobile phones), for example, become almost a social obligation in certain peer groups, with interactions in school continuing online when children returned home (Grinter and Palen 2002).

Having inadvertently ‘eavesdropped’ (or the visual equivalent) on some of these interactions, I can report that the vast majority seem to be a textual equivalent of the kind of pointless verbal play (‘whaddup dawg’ talk) or spiraling post-adolescent social drama that occupies vast amounts of young minds’ cognitive resources in ordinary speech and interaction. Grinter and Palen generously refer to this activity as ‘socializing.’ Why would we expect electronic media to be any different? In other words, far from being a sign of psychic frailty, most social communication is banal and mundane, so we shouldn’t be terribly concerned that people are using a ‘killer new app’ to do precisely what they already do through much more ancient modes of self-expression.

What seems to bother some critics is that this sort of mundane expression would normally be ‘private,’ but that it becomes ‘public’ through Twitter. Like Vaughan’s discussion of thinking space, however, the fear of the erosion of privacy assumes that private space is a universal human endowment, when it is anything but.

As any anthropologist or even world traveler well knows, the boundary between what is ‘public’ and what is ‘private’ varies widely across societies. In some, women can only travel in public by transporting a fabric enclosure around on their bodies; in others, the body is even less private than in the United States or Europe (which are not uniform in this regard). In Brazil, for example, people are much more comfortable with body-to-body contact with strangers; the same contact that would demand, ‘Oh, sorry about that,’ in the US elicits no response on a Brazilian bus or at a concert.

In some places, daily life is carried on in close proximity to others, with flimsy walls at best, or no structures at all, to separate family members or even neighbours, so that the most ‘intimate’ conversations likely have multiple unacknowledged audience members. When families live in one room, sexual interaction between parents is frequently overheard by children, and in some longhouse-dwelling communities, couples have to hide in gardens for ‘privacy.’ In some societies, one’s true name is so private that he or she will go to the grave with virtually no one knowing it; in others, the desire for ‘fame’ or ‘renown’ demands that people constantly engage in attention-seeking behaviour and public display. Although we may feel such deep shame that privacy seems ‘natural’ to us, we know that standards for all sorts of ‘private’ behaviours vary widely.

Sociologist Erving Goffman wrote about ‘sanctioned eavesdropping’ and ‘civil inattention,’ the generally unspoken standards we have for interacting in public in ways that we would prefer people not to observe too closely (see Adrian Chan’s interesting discussion of these concepts on Attention and inattention on Twitter and a follow up Transient conversation networks on Twitter). Goffman felt that urban life was only possible through civil inattention as we are forced by close proximity and restricted living space to avoid deep engagement with everyone we encounter.

Civil inattention allows us to dwell in the midst of masses of anonymous strangers and feel little need to identify them, or obligation to interact with them (see, for example, Goffman 1963). Even when forced to interact in public life, minimizing our attention to individuals (how often do people actually look a bank teller or cashier in the eye?) allows us to engage in manic, shallow interactions, making purchases, crossing paths in the street, and getting on with everyday life. As Goffman writes (1963: 84), civil inattention involves giving a person just enough attention ‘to demonstrate that one appreciates that the other is present.’ (See also Wayne Martin Mellinger’s weblog post, Doing Modernity through Civil Inattention.)

Privacy concerns often arise when individuals don’t adhere to the same standards we have for public inattention, for politely ignoring interactions that are only partially public. Along the wrong side of this boundary are impolite stares, rude eavesdropping, even stalking. The problem in online communities is that the standards are not very clear at all, and posting information about oneself online is the kind of behaviour that Goffman suggests might invalidate any claim to anonymity. Just as being or behaving abnormally lifts the restraints on others to maintain inattention (see, for example, Garland-Thomson 2006), posting personal information online can seem an ambiguous act, one that might signal the voluntary surrender of one’s right to private anonymity in a virtual public space.

The irony is that, at the same time that modes of public relations invade our daily lives, encouraging us to self-promote, ‘network’ and engage in very public ‘private’ interactions, we also feel our vulnerability to privacy invasion more acutely, with growing concerns about identity theft, routine surveillance, and the security of personal information.

My sense is that those participating in social networking practices like Twitter are responding to the ambiguity of the public-private boundary in new media with enthusiastic extension, allowing new parts of their social lives to become public. In contrast, those who are afraid of the new possibilities are responding by trying to tighten the boundary around the ‘private,’ even to extend what counts as ‘private.’ Obviously, new media make it hard to fall back on cultural consensus about what is appropriately defined one way or the other; the precedents are simply too vague, and both sides can cite parallels for their own perspective.

Final thoughts

Perhaps one of the most annoying things about both Twitter proponents and critics is the myopic, presentist assumption that whatever change they (or their children) are currently in the midst of is so utterly unprecedented, so universe-altering, that we are poised on the cusp of either revolution or complete collapse. From a cross-cultural, evolutionary or comparative perspective, these declarations look like flipsides of each other: equally silly and ill-informed, even if diametrically opposed.

The cognitive consequences of Twitter are likely negligible, although Twitter itself may be part of a larger cultural movement with a range of consequences (for example, the neurological effects of hours spent online and thus not engaged in things like exercise). Steve and Pete and their Fletch quotations, like countless other examples, probably demonstrate that the human brain, thankfully, thrives on stimulation of all sorts, even if it looks asinine to someone who does not share the interest.

My own concern is more that modes of interaction from customer service and public relations are becoming more and more pervasive in a range of places; Twitter and personal webpages seem to me to be a subset of activities that encourage us to consider ourselves as products to be promoted, as portfolios to be built for potential employers, or as candidates for social approval. The stress of treating the self as a sales project, of transforming self-improvement or just enjoying life into a kind of long-term product development, seems to me to be part of an insecure age.

I’m not saying that we are ‘the most narcissistic age ever’ or something like that. Rather, our age seems to be taking its mythology and its spiritual practice from a range of promotional strategies, transforming wholesale a range of activities by reconceiving them as part of a personal marketing campaign. At work, in social life, at school, even increasingly in our personal life, we are encouraged to transform everyday activities into resume-stuffers, network-builders, or chances to add to our profiles, in all the forms these take. In this context, Twittering about the inanity of everyday life to our ‘followers,’ even if they pay us little attention, might even feel like a bit of a relief…

Further reading:

If you still haven’t had your fill of Twitter-related musings, there’s much, much more available online. The Guardian announces Twitter switch for Guardian, after 188 years of ink; just make sure to check the dateline before you get too worked up. The best part is the effort to convert their archives into Twitter format: “1832 Reform Act gives voting rights to one in five adult males yay!!!”; “OMG Hitler invades Poland, allies declare war see tinyurl.com/b5x6e for more”; and “JFK assassin8d @ Dallas, def. heard second gunshot from grassy knoll WTF?”

For general information on social networking sites, check out A Collection of Social Network Stats for 2009 by Jeremiah Owyang. Owyang also provides a kind of executive summary cum business strategy reading of Huberman, Romero and Wu (2009, below) at Understanding HP Lab’s Twitter Research.

If you’re interested in Lady Susan Greenfield’s issues with Twitter, Facebook and computer technology in general, surf your short attention span over to the ABC All in the Mind website for Computers and your head – Susan Greenfield on All in the Mind.

In the face of the mounting luddite charge on Twitter, Chuck Tryon countered with, Why You Should Be on Twitter. Or check out at least marginally positive stories about Twitter in The Boston Globe (‘All a-Twitter’) and The New York Times (‘Twitter? It’s What You Make It’).

One of the better responses to the Baroness Greenfield and her ilk is provided by Robert Mackey at Is Social Networking Killing You? on The New York Times website. Mackey has a bit of fun with Greenfield’s interviews (she deserves it), but he especially deals well with a recent paper by Dr. Aric Sigman in Biologist on the ‘biological implications of social networking.’

Another take-down on the pseudo-psychology of anti-Twitter technophobia is provided by Dominik Lukeš in Twitter backlash exposes shallowness of modern psychology. I suppose the title sort of says it all…

Coturnix at A Blog Around the Clock collects some quotes and links about others’ experiences with news being broadcast through Twitter, especially around recent NASA-related events. See Journalism on Twitter?. Although I agree with some of the sentiments, I am also uncomfortable with the hunger for ‘right now’ news, the demand for coevality with events everywhere, which I feel is already part of the information overload many of us fight to overcome.

I quite like an older post on social networking and sociological analysis, again by Vaughan at Mind Hacks, The distant sound of well-armed sociologists. In the post, he discusses how social network analysis can take advantage of our own online networking practices to understand better how we influence each other in all sorts of online forums. The opportunities might make old school face-to-face social network analysts green with envy (so much data, so little time).

Cartoons from Web Designer Depot and Webasticno.com.


References:

Diamond, Jared. 1991. Reinventions of Human Language. Natural History May 1991: 22-28. (a version available here)

Garland-Thomson, Rosemarie. 2006. Ways of Staring. Journal of Visual Culture 5(2):173-192. (Abstract)

Goffman, Erving. 1963. Behavior in Public Places: Notes on the Social Organization of Gatherings. Glencoe: Free Press.

Grinter, Rebecca E., and Leysia Palen. 2002. Instant messaging in teen life. Proceedings of the ACM Conference on Computer Supported Cooperative Work (CSCW 2002), pp. 21–30; accessed 13 April 2009. (pdf available here)

Harrell, Lester E., Jr. 1957. A comparison of oral and written language in school age children. (Monographs of the Society for Research in Child Development 22, Serial number 66.) Lafayette, IN: Child Development Publications.

Holm, John. 2000. An Introduction to Pidgins and Creoles. Cambridge: Cambridge University Press. (pdf of intro and front material available here)

Huberman, Bernardo A., Daniel M. Romero, and Fang Wu. 2009. Social networks that matter: Twitter under the microscope. First Monday 14 (1, 5 January 2009). (available here)

McNeill, David. 1992. Hand and Mind: What Gestures Reveal about Thought. Chicago: University of Chicago Press.


