Curry Chandler

Curry Chandler is a writer, researcher, and independent scholar working in the field of communication and media studies. His writing on media theory and policy has been published in the popular press as well as academic journals. Curry approaches the study of communication from a distinctly critical perspective, with a commitment to addressing inequality in power relations. The scope of his research activity includes media ecology, political economy, and the critique of ideology.

Curry is a graduate student in the Communication Department at the University of Pittsburgh, having previously earned degrees from Pepperdine University and the University of Central Florida.

POTUS on Net Neutrality: treat Internet as a utility


  • Yesterday President Barack Obama released a statement on the future of the Internet. In a written statement and accompanying 2-minute video, Obama outlined an approach to Internet policy that supports net neutrality provisions and suggests reclassifying the Internet as a utility. It's an encouraging show of support for net neutrality advocates, but as Obama makes clear in the statement, it is ultimately up to the FCC as an independent regulatory agency to decide the future of Internet regulation. From Don Reisinger and Roger Cheng's CNET writeup:

Obama wades into a contentious debate that has raged over how to treat Internet traffic, which has only heated up as the FCC works to prepare an official guideline. Those rules were expected to be made available later this year, though reports now claim they may be delayed until early 2015. The debate has centered on whether broadband should be placed under Title II regulation under the Telecommunications Act, which already tightly controls phone services.

Proponents believe Title II regulation would ensure the free and fair flow of traffic across the Internet. Opponents, however, believe the onerous rules would limit investment in the infrastructure and new services, and that toll roads of sorts would provide better service to companies that can support their higher traffic volumes. That has created widespread concern that ISPs could throttle service in some instances, intentionally slowing down some content streams and speeding up others.

In a statement outlining what he'd like internet service to look like, Obama highlights four major points: internet providers wouldn't be allowed to block websites offering legal content, they wouldn't be allowed to intentionally slow down or speed up certain websites or services based on their own preferences, they would have to be more transparent about how they manage traffic, and they wouldn't be able to offer paid fast lanes. Obama also asks that the FCC investigate and potentially apply net neutrality rules to the interconnect points that sit between service providers, like Comcast and Verizon, and content providers, like Netflix. That's potentially huge news for Netflix, which has been arguing all year that this area of the internet should be covered by net neutrality.

Obama also asks that the commission apply these rules to mobile internet service. That would be a significant change as well, as mobile service hasn't previously been subject to the same net neutrality rules that wired connections have been. That said, Obama does leave a significant amount of room for exceptions in the wireless space, potentially allowing some amount of throttling so that providers can manage their networks when under heavy use. Notably, his proposal also asks the FCC not to enforce rate regulations on internet service.

The president said broadband service was “of the same importance, and must carry the same obligations” as services such as the telephone network, and asked the FCC to classify it as such. For proponents of net neutrality in Britain and elsewhere, having such a powerful supporter to point to is important.

It also bolsters the case for considering internet access as a right that should be safeguarded by government, something suggested by Britain’s Labour Digital group, which proposed that “government should assess the viability of providing free basic internet access to all citizens, possibly as a requirement for participation in 5G auctions, or targeted at children eligible for free school meals”.

If money is speech, then the poor have the softest voices. A deregulated Internet extends this logic, and cultural logic is what is at stake. The maintenance of net neutrality would exist in tension with the SCOTUS election campaign decision, maintaining a battleground on which the speech-money relationship remains fraught. Net non-neutrality, in whatever form, would combine with the SCOTUS decision, tipping the cultural scales in a sharply capitalistic direction: a direction in which the economic system drives the political system, and rights are things to be earned, bought, and sold.

Your brain on Kindle; 21st Century media literacy; how Disney shapes youth identity

Neuroscience, in fact, has revealed that humans use different parts of the brain when reading from a piece of paper or from a screen. So the more you read on screens, the more your mind shifts towards "non-linear" reading — a practice that involves things like skimming a screen or having your eyes dart around a web page.

Using the technology approach, the iPhone is the “school,” anyone who uses it adeptly is the master, and anyone over 30 is, well, handicapped at best. New technologies enable this approach because now, hardware and software are available and production has been democratized — everyone is a producer, a collaborator, a distributor and a participant. While experiential and project-based learning is truly exciting and an important component of media literacy, it is not synonymous with media literacy, because the outcome of the technology approach is often limited to technical proficiency without critical autonomy. Whether using an iPad, a pencil or a videocam, pressing the right buttons is important but not enough!

The information, entertainment and cultural pedagogy disseminated by massive multimedia corporations have become central in shaping and influencing every waking moment of children's daily lives - all toward a lifetime of constant, unthinking consumption. Consumer culture in the United States, and increasingly across the globe, does more than undermine the ideals of a secure and happy childhood: it exhibits the bad faith of a society in which, for children, "there can be only one kind of value, market value; one kind of success, profit; one kind of existence, commodities; and one kind of social relationship, markets."

Critical perspectives on the Isla Vista spree killer, media coverage



  • Immediately following Elliot Rodger's spree killing in Isla Vista, CA last month, Internet users discovered his YouTube channel and a 140-page autobiographical screed, dubbed a "manifesto" by the media. The written document and the videos documented Rodger's sexual frustration and his chronic inability to connect with other people. He specifically lashed out at women for forcing him "to endure an existence of loneliness, rejection and unfulfilled desires" and causing his violent "retribution". Commentators and the popular press framed the killings as an outcome of misogynistic ideology, with headlines such as "How misogyny kills men," "Further proof that misogyny kills," and "Elliot Rodger proves the danger of everyday sexism." Slate contributor Amanda Hess wrote:

Elliot Rodger targeted women out of entitlement, their male partners out of jealousy, and unrelated male bystanders out of expedience. This is not ammunition for an argument that he was a misandrist at heart—it’s evidence of the horrific extent of misogyny’s cultural reach.

His parents saw the digitally mediated rants and contacted his therapist and a social worker, who contacted a mental health hotline. These were the proper steps. But those who interviewed Rodger found him to be a “perfectly polite, kind and wonderful human.” They deemed his involuntary holding unnecessary and a search of his apartment unwarranted. That is, authorities defined Rodger and assessed his intentions based upon face-to-face interaction, privileging this interaction over and above a “vast digital trail.” This is digital dualism taken to its worst imaginable conclusion.

In fact, the entire 140-odd-page memoir he left behind, “My Twisted World,” documents with agonizing repetition the daily tortured minutiae of his life, and barely has any interactions with women. What it has is interactions with the symbols of women, a non-stop shuffling of imaginary worlds that women represented access to. Women weren’t objects of desire per se, they were currency.

[...]

What exists in painstaking detail are the male figures in his life. The ones he meets who then reveal that they have kissed a girl, or slept with a girl, or slept with a few girls. These are the men who have what Elliot can’t have, and these are the men that he obsesses over.

[...]

Women don’t merely serve as objects for Elliot. Women are the currency used to buy whatever he’s missing. Just as a dollar bill used to get you a dollar’s worth of silver, a woman is an indicator of spending power. He wants to throw this money around for other people. Bring them home to prove something to his roommates. Show the bullies who picked on him that he deserves the same things they do.

[...]

There’s another, slightly more obscure recurring theme in Elliot’s manifesto: The frequency with which he discusses either his desire or attempt to throw a glass of some liquid at happy couples, particularly if the girl is a ‘beautiful tall blonde.’ [...] These are the only interactions Elliot has with women: marking his territory.

[...]

When we don’t know how else to say what we need, like entitled children, we scream, and the loudest scream we have is violence. Violence is not an act of expressing the inexpressible, it’s an act of expressing our frustration with the inexpressible. When we surround ourselves by closed ideology, anger and frustration and rage come to us when words can’t. Some ideologies prey on fear and hatred and shift them into symbols that all other symbols are defined by. It limits your vocabulary.

While the motivations for the shootings may vary, they have in common crises in masculinity in which young men use guns and violence to create ultra-masculine identities as part of a media spectacle that produces fame and celebrity for the shooters.

[...]

Crises in masculinity are grounded in the deterioration of socio-economic possibilities for young men and are inflamed by economic troubles. Gun carnage is also encouraged in part by media that repeatedly illustrates violence as a way of responding to problems. Explosions of male rage and rampage are also embedded in the escalation of war and militarism in the United States from the long nightmare of Vietnam through the military interventions in Afghanistan and Iraq.

For Debord, “spectacle” constituted the overarching concept to describe the media and consumer society, including the packaging, promotion, and display of commodities and the production and effects of all media. Using the term “media spectacle,” I am largely focusing on various forms of technologically-constructed media productions that are produced and disseminated through the so-called mass media, ranging from radio and television to the Internet and the latest wireless gadgets.

  • Kellner's comments in a 2008 interview about the Virginia Tech shooter's videos, broadcast after the massacre, and his remarks on critical media literacy remain relevant to the current situation:

Cho’s multimedia video dossier, released after the Virginia Tech shootings, showed that he was consciously creating a spectacle of terror to create a hypermasculine identity for himself and avenge himself to solve his personal crises and problems. The NIU shooter, dressed in black, emerged from a curtain onto a stage and started shooting, obviously creating a spectacle of terror, although as of this moment we still do not know much about his motivations. As for the television networks, since they are profit centers in a highly competitive business, they will continue to circulate school shootings and other acts of domestic terrorism as “breaking events” and will constitute the murderers as celebrities. Some media have begun to not publicize the names of teen suicides to attempt to deter copy-cat effects, and the media should definitely be concerned about creating celebrities out of school shooters and not sensationalize them.

[...]

People have to become critical of the media scripts of hyperviolence and hypermasculinity that are projected as role models for men in the media, or that help to legitimate violence as a means to resolve personal crises or solve problems. We need critical media literacy to analyze how the media construct models of masculinities and femininities, good and evil, and become critical readers of the media who ourselves seek alternative models of identity and behavior.

  • Almost immediately after news of the violence broke, and word of the killer's YouTube videos spread, there was a spike of online backlash against the media saturation and warnings against promoting the perpetrator to celebrity status through omnipresent news coverage. Just two days after the killings Isla Vista residents and UCSB students let the news crews at the scene know that they were not welcome to intrude upon the community's mourning. As they are wont to do, journalists reported on their role in the story while ignoring the wishes of the residents, as in this LA Times brief:

More than a dozen reporters were camped out on Pardall Road in front of the deli -- and had been for days, their cameras and lights and gear taking up an entire lane of the street. At one point, police officers showed up to ensure that tensions did not boil over.

The students stared straight-faced at reporters. Some held signs expressing their frustration with the news media:

"OUR TRAGEDY IS NOT YOUR COMMODITY."

"Remembrance NOT ratings."

"Stop filming our tears."

"Let us heal."

"NEWS CREWS GO HOME!"

Fukuyama: 25 years after the "End of History"


I argued that History (in the grand philosophical sense) was turning out very differently from what thinkers on the left had imagined. The process of economic and political modernization was leading not to communism, as the Marxists had asserted and the Soviet Union had avowed, but to some form of liberal democracy and a market economy. History, I wrote, appeared to culminate in liberty: elected governments, individual rights, an economic system in which capital and labor circulated with relatively modest state oversight.

[...]

So has my end-of-history hypothesis been proven wrong, or if not wrong, in need of serious revision? I believe that the underlying idea remains essentially correct, but I also now understand many things about the nature of political development that I saw less clearly during the heady days of 1989.

[...]

Twenty-five years later, the most serious threat to the end-of-history hypothesis isn't that there is a higher, better model out there that will someday supersede liberal democracy; neither Islamist theocracy nor Chinese capitalism cuts it. Once societies get on the up escalator of industrialization, their social structure begins to change in ways that increase demands for political participation. If political elites accommodate these demands, we arrive at some version of democracy.

When he wrote "The End of History?", Fukuyama was a neocon. He was taught by Leo Strauss's protege Allan Bloom, author of The Closing of the American Mind; he was a researcher for the Rand Corporation, the thinktank for the American military-industrial complex; and he followed his mentor Paul Wolfowitz into the Reagan administration. He showed his true political colours when he wrote that "the class issue has actually been successfully resolved in the west … the egalitarianism of modern America represents the essential achievement of the classless society envisioned by Marx." This was a highly tendentious claim even in 1989.

[...]

Fukuyama distinguished his own position from that of the sociologist Daniel Bell, who published a collection of essays in 1960 titled The End of Ideology. Bell had found himself, at the end of the 1950s, at a "disconcerting caesura". Political society had rejected "the old apocalyptic and chiliastic visions", he wrote, and "in the west, among the intellectuals, the old passions are spent." Bell also had ties to neocons but denied an affiliation to any ideology. Fukuyama claimed not that ideology per se was finished, but that the best possible ideology had evolved. Yet the "end of history" and the "end of ideology" arguments have the same effect: they conceal and naturalise the dominance of the right, and erase the rationale for debate.

While I recognise the ideological subterfuge (the markets as "natural"), there is a broader aspect to Fukuyama's essay that I admire, and cannot analyse away. It ends with a surprisingly poignant passage: "The end of history will be a very sad time. The struggle for recognition, the willingness to risk one's life for a purely abstract goal, the worldwide ideological struggle that called forth daring, courage, imagination, and idealism, will be replaced by economic calculation, the endless solving of technical problems, environmental concerns, and the satisfaction of sophisticated consumer demands."


In an article that went viral in 1989, Francis Fukuyama advanced the notion that with the death of communism history had come to an end in the sense that liberalism — democracy and market capitalism — had triumphed as an ideology. Fukuyama will be joined by other scholars to examine this proposition in the light of experience during the subsequent quarter century.

Featuring Francis Fukuyama, author of “The End of History?”; Michael Mandelbaum, School of Advanced International Studies, Johns Hopkins University; Marian Tupy, Cato Institute; Adam Garfinkle, editor, American Interest; Paul Pillar, Nonresident Senior Fellow, Foreign Policy, Center for 21st Century Security and Intelligence, Brookings Institution; and John Mueller, Ohio State University and Cato Institute.

Critical Pedagogy and Imperialism; social media and commodity fetishism

Gramsci has had a huge impact on critical pedagogy especially because of the importance he attached to the role of culture, in both its highbrow and popular forms, in the process of hegemony which combines rule by force with rule by consent. His discussion of the role of intellectuals in this process also influenced discussions centering around educators as cultural workers in the critical pedagogy field. Henry Giroux has been particularly influential here. One issue which deserves greater treatment in critical pedagogy, in my view, is that of ‘powerful knowledge’ which, though not necessarily popular knowledge and itself in need of problematising, should still be mastered if one is not to remain at the margins of political life.

[...]

Following Freire, I would say: the commitment to teaching is a political commitment because education is a political act. There is no such thing as a neutral education. We must always ask: on whose side are we when we teach? More importantly, we should ask: with whom are we educating and learning? I ask this question in the spirit of Freire’s emphasis on working with rather than for the oppressed.

In tying Marxist ideology to social media, there are a number of things to clarify, as the comparison is not a perfect one. Perhaps the most questionable point is the ownership of the modes of production. In the social media model, it can be said that the proletariat themselves own the modes of production, since they typically own the computers or devices through which they channel their intellectual labor. Additionally, almost all popular social media networks today allow users to retain the copyright of the content that they post (Facebook, n.d.; MySpace, n.d.; Twitter, n.d.). Thus, it would seem that the argument that users are alienated from the results of their intellectual labor power is a moot point.

[...]

I humbly suggest that in the social media model, owning the output or product of intellectual labor power has little if anything to do with Marx’s species being. Instead, I feel that it is the social connections created, broken, strengthened, or weakened that feed directly to the worker’s species being. Since the output of the intellectual labor power in this case is not a tangible good, the only “finished product” that the worker can place value in and not be alienated from is the actual social connection that their output generates; not the actual output itself. This allows for a supra or meta level of social connection above that of the social connections embodied in physical outputs outlined by Marx.

Mind-controlled exoskeleton opens World Cup; AI will crash the stock market; Cortana's personality

The exoskeleton -- a system comprising a helmet implanted with a microchip that sticks out from the underside; a T-shirt loaded with sensors; metal leg braces; and a battery worn in a backpack -- is set in motion when the user envisions himself making the kick. The chip translates those electronic commands to a digital language that powers the skeleton, which then moves accordingly. The T-shirt vibrates to enhance the user's sensation of movement (and eliminate the need to look at his feet to see if he's stepping forward).
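The pipeline described here (imagine the kick, decode the signal, drive the braces, vibrate the shirt) can be sketched as a simple closed loop. The Python below is purely illustrative: the channel count, threshold, and function names are all assumptions made for the sketch, not the Walk Again Project's actual software.

import numpy as np

RNG = np.random.default_rng(0)
INTENT_THRESHOLD = 10.0  # assumed decision boundary for an "imagined kick"

def read_eeg_window(n_channels=8, n_samples=250):
    # Stand-in for the helmet's implanted microchip: one window of raw EEG.
    return RNG.normal(size=(n_channels, n_samples))

def decode_intent(window):
    # The "translation into a digital language": a crude spectral feature
    # compared against a threshold.
    power = np.abs(np.fft.rfft(window, axis=1)).mean()
    return "kick" if power > INTENT_THRESHOLD else "idle"

def actuate_braces(command):
    print(f"leg braces -> {command}")

def haptic_feedback(command):
    # The sensor-loaded shirt vibrating so the user feels the movement.
    if command != "idle":
        print("shirt vibrates: movement confirmed")

for _ in range(3):  # a few passes through the closed loop
    command = decode_intent(read_eeg_window())
    actuate_braces(command)
    haptic_feedback(command)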

Talk about dropping the ball. Earlier today, Juliano Pinto — a 29-year-old paraplegic — successfully kicked off the 2014 FIFA World Cup by using a mind-controlled exoskeleton. But sadly, most TV networks failed to show it.

After months of hype, the official broadcast of the opening ceremonies showed only a fraction of it, while some TV networks missed the event altogether. Commentators criticized the organizers for casting aside the moment in favor of performing acts. 

The invasion of high-frequency trading machines is now forcing capitalism far away from anything either Adam Smith or the founders of the NYSE could possibly find virtuous. 

We’re not about to let robots compete in the Olympics, driverless cars race in the Indianapolis 500, or automated machines play sports like football, basketball, or baseball. So why is it we allow them to play a role in the most valuable contest of all, the world wide stock exchange? 

With crude forms of AI now entering the quant manipulator’s toolbox, we are now teetering dangerously close to a total collapse of the stock market, one that will leave many corporations and individuals financially destitute.

  • Microsoft has announced its version of Apple's Siri virtual assistant. Named Cortana, after the AI character from the Halo video game series, she is coming to Windows smartphones, and as Brad Molen at Engadget reports, developers programmed her with a distinct personality:

Confident, caring, competent, loyal; helpful, but not bossy: These are just some of the words Susan Hendrich, the project manager in charge of overseeing Cortana's personality, used to describe the program's most significant character traits. "She's eager to learn and can be downright funny, peppering her answers with banter or a comeback," Hendrich said. "She seeks familiarity, but her job is to be a personal assistant." With that kind of list, it sure sounds like Hendrich's describing a human. Which is precisely what she and her team set out to do during Cortana's development; create an AI with human-like qualities.

Microsoft's decision to infuse Cortana with a personality stemmed from one end goal: user attachment. "We did some research and found that people are more likely to interact with [AI] when it feels more human," said Hendrich. To illustrate that desired human-machine dynamic, Hendrich pointed to her grandmother's experience with a Roomba vacuum: "She gave a name and a personality to an inanimate object, and it brought her joy." That sense of familiarity is exactly what Microsoft wants Windows Phone users to feel when interacting with Cortana on their own devices.

Chomsky on Snowden, Žižek on Buddhism, Fuchs on social media and the public sphere

These exposures lead us to inquire into state policy more generally and the factors that drive it. The received standard version is that the primary goal of policy is security and defense against enemies.

The doctrine at once suggests a few questions: security for whom, and defense against which enemies? The answers are highlighted dramatically by the Snowden revelations.

Policy must assure the security of state authority and concentrations of domestic power, defending them from a frightening enemy: the domestic population, which can become a great danger if not controlled.


Social media has become a key term in Media and Communication Studies and public discourse for characterising platforms such as Facebook, Twitter, YouTube, Wikipedia, LinkedIn, Wordpress, Blogspot, Weibo, Pinterest, Foursquare and Tumblr. This lecture discusses the role of the concept of the public sphere for understanding social media critically. It argues against an idealistic interpretation of Habermas and for a cultural-materialist understanding of the public sphere concept that is grounded in political economy. It sets out that Habermas’ original notion should best be understood as a method of immanent critique that critically scrutinises limits of the media and culture grounded in power relations and political economy. It introduces a theoretical model of public service media that it uses as foundation for identifying three antagonisms of the contemporary social media sphere in the realms of the economy, the state and civil society. It concludes that these limits can only be overcome if the colonisation of the social media lifeworld is countered politically so that social media and the Internet become public service and commons-based media.

Graeber on labor and leisure; the perils of hipster economics; and the educational value of MOOCs

Right after my original bullshit jobs piece came out, I used to think that if I wanted, I could start a whole career in job counseling – because so many people were writing to me saying “I realize my job is pointless, but how can I support a family doing something that’s actually worthwhile?” A lot of people who worked the information desk at Zuccotti Park, and other occupations, told me the same thing: young Wall Street types would come up to them and say “I mean, I know you’re right, we’re not doing the world any good doing what we’re doing. But I don’t know how to live on less than a six figure income. I’d have to learn everything over. Could you teach me?”

But I don’t think we can solve the problem by mass individual defection. Or some kind of spiritual awakening. That’s what a lot of people tried in the ‘60s and the result was a savage counter-offensive which made the situation even worse. I think we need to attack the core of the problem, which is that we have an economic system that, by its very nature, will always reward people who make other people’s lives worse and punish those who make them better. I’m thinking of a labor movement, but one very different than the kind we’ve already seen. A labor movement that manages to finally ditch all traces of the ideology that says that work is a value in itself, but rather redefines labor as caring for other people.

Proponents of gentrification will vouch for its benevolence by noting it "cleaned up the neighbourhood". This is often code for a literal white-washing. The problems that existed in the neighbourhood - poverty, lack of opportunity, struggling populations denied city services - did not go away. They were simply priced out to a new location.

That new location is often an impoverished suburb, which lacks the glamour to make it the object of future renewal efforts. There is no history to attract preservationists because there is nothing in poor suburbs viewed as worth preserving, including the futures of the people forced to live in them. This is blight without beauty, ruin without romance: payday loan stores, dollar stores, unassuming homes and unpaid bills. In the suburbs, poverty looks banal and is overlooked.

In cities, gentrifiers have the political clout - and accompanying racial privilege - to reallocate resources and repair infrastructure. The neighbourhood is "cleaned up" through the removal of its residents. Gentrifiers can then bask in "urban life" - the storied history, the selective nostalgia, the carefully sprinkled grit - while avoiding responsibility to those they displaced.

Hipsters want rubble with guarantee of renewal. They want to move into a memory they have already made.

In the pedagogic trenches, MOOCs are considered a symptom of wider economic patterns which effectively vacuum resources up into the financial stratosphere, leaving those doing the actual work with many more responsibilities, and far less compensation. Basic questions about the sustainability of this model remain unanswered, but it is clear that there is little room for enfranchised, full-time, fully-compensated faculty. Instead, we find an army of adjuncts servicing thousands of students; a situation which brings to mind scenes from Metropolis rather than Dead Poets Society.

[...]

For companies pushing MOOCs, education is no different from entertainment: it is simply a question of delivering ‘content.’ But learning to think exclusively via modem is like learning to dance by watching YouTube videos. You may get a sense of it, but no-one is there to point out mistakes, deepen your understanding, contextualise the gestures, shake up your default perspective, and facilitate the process. The role of the professor or instructor is not simply the shepherd for the transmission of information from point A to point B, but the co-forging of new types of knowledge, and critically testing these for various versions of soundness and feasibility. Wisdom may be eternal, but knowledge – both practical and theoretical – evolves over time, and especially exponentially in the last century, with all its accelerated technologies. Knowledge is always mediated, so we must consciously take the tools of mediation into account. Hence the need for a sensitive and responsive guide: someone students can bounce new notions off, rather than simply absorb information from. Without this element, distance learning all too often becomes distanced learning. Just as a class taken remotely usually leads to a sea of remote students.

[...]

Marshall McLuhan was half-right when he insisted that the electronic age is ushering in a post-literate society. But no matter how we like to talk of new audio-visual forms of literacy, there is still the ‘typographic man’ pulling the strings, encouraging us to express ourselves alphabetically. Indeed, the electronic and the literate are not mutually exclusive, much as people like to pit them against each other.

  • Pettman also quotes Ian Bogost's comments on distance learning:

The more we buy into the efficiency argument, the more we cede ground to the technolibertarians who believe that a fusion of business and technology will solve all ills. But then again, I think that's what the proponents of MOOCs want anyway. The issue isn't online education per se, it's the logics and rationales that come along with certain implementations of it.

Guns with Google Glass, city of driverless cars, Kurzweil on hybrid thinking

  • Tech companies and weapons manufacturers are exploring the crossover potential for firearms and wearable technology devices like Google Glass. Brian Anderson at Motherboard reported Austin tech startup TrackingPoint's foray into this inevitable extension of augmented reality applications and posted the company's concept video:

"When paired with wearable technology, PGFs can provide unprecedented benefits to shooters, such as the ability to shoot around corners, from behind low walls, and from other positions that provide exceptional cover," according to a TrackingPoint press release. "Without PGF technology, such positions would be extremely difficult, if not impossible, to fire from."

The steadied rise of wearable technology is unlocking a dizzying number of potential killer apps. Indeed, if there were any lingering doubt that wearable tech is coming to the battlefield, the Glassification of a high-profile smart weapon should put any uncertainties to rest.

If being able to track and drop a moving target with single-shot accuracy at 1,500 feet using a long-range robo rifle wasn't sobering enough already, to think basically anyone can now do so over a hill, perhaps overlooking a so-called "networked battlefield" shot through with data-driven soldiers, is sure to be even more so.

The simulation is run by proprietary software, and programmers will code in dangerous situations—traffic jams and potential collisions—so engineers can anticipate problems and, ideally, solve for them before the automated autos hit the streets. It's laying the groundwork for the real-world system planned for 2021 in Ann Arbor.
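A toy version of that scenario-injection idea, not the proprietary software itself: enumerate hazardous events, replay each one many times against an assumed control policy, and tally the failure rate before any automated auto touches a street. Every name and number below is invented for illustration.

import random

SCENARIOS = ["traffic_jam", "stalled_car", "jaywalker", "sudden_braking"]

# Assumed per-hazard success rates for a hypothetical control policy.
POLICY = {"traffic_jam": 0.95, "stalled_car": 0.90,
          "jaywalker": 0.80, "sudden_braking": 0.85}

def run_episode(hazard, seed):
    # Toy dynamics: the policy "handles" the hazard with a fixed probability.
    random.seed(seed)
    return random.random() < POLICY[hazard]

for hazard in SCENARIOS:
    handled = sum(run_episode(hazard, seed) for seed in range(1000))
    print(f"{hazard}: handled in {handled / 10:.1f}% of 1000 simulated runs")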

There will surely be some technical barriers to work out, but the biggest hurdles self-driving cars will have to clear are likely regulatory, legal, and political. Will driverless cars be subsidized like public transit? If autonomous cars eliminate crashes, will insurance companies start tanking? Will the data-driven technology be a privacy invasion?

Today you can buy a top-of-the-line S-Class car from Mercedes-Benz that figuratively says “ahem” when you begin to stray out of your lane or tailgate. If you do nothing, it’ll turn the wheel slightly or lightly apply the brakes. And if you’re still intent on crashing, it will take command. In 5 years, cars will be quicker to intervene; in 20, they won’t need your advice; and in 30, they won’t take it.

Accident rates will plummet, parking problems will vanish, streets will narrow, cities will bulk up, and commuting by automobile will become a mere extension of sleep, work, and recreation. With no steering column and no need for a crush zone in front of the passenger compartment (after all, there aren’t going to be any crashes), car design will run wild: Collapsibility! Stackability! Even interchangeability, because when a car can come when called, pick up a second or third passenger for a fee, and park itself, even the need to own the thing will dwindle.

Two hundred million years ago, our mammal ancestors developed a new brain feature: the neocortex. This stamp-sized piece of tissue (wrapped around a brain the size of a walnut) is the key to what humanity has become. Now, futurist Ray Kurzweil suggests, we should get ready for the next big leap in brain power, as we tap into the computing power in the cloud.

The headband picks up four channels from seven EEG sensors, five across the forehead and two conductive rubber ear sensors. Together, the sensors detect the five basic types of brain waves, and, unlike conventional sensors, they don’t need to be surrounded by gel to work. Software helps filter out the noise and syncs the signal, via Bluetooth, to a companion app. The app shows the user the brainwave information and offers stress-reduction exercises.
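The filtering step glossed over above amounts to estimating signal power in the five classic EEG bands. Here is a minimal sketch, assuming a 256 Hz sampling rate and conventional band edges; neither is taken from the vendor's actual software.

import numpy as np

FS = 256  # assumed sampling rate in Hz
BANDS = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13),
         "beta": (13, 30), "gamma": (30, 50)}  # conventional band edges

def band_powers(signal, fs=FS):
    # FFT-based power spectrum, summed within each frequency band.
    freqs = np.fft.rfftfreq(signal.size, d=1 / fs)
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    return {name: spectrum[(freqs >= lo) & (freqs < hi)].sum()
            for name, (lo, hi) in BANDS.items()}

rng = np.random.default_rng(1)
one_second = rng.normal(size=FS)  # stand-in for one second of raw EEG
for band, power in band_powers(one_second).items():
    print(f"{band}: {power:.1f}")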

A bit further down the road of possibilities is brain-to-brain networking. Last year, researchers at the University of Washington used EEG sensors to detect one person's intention to move his arm, relayed that signal to stimulate a second person's brain with an external coil, and watched as the second person moved his hand without planning to.

TV still sucks, we should still complain about hipsters, your job shouldn't exist

None of this could be happening at a worse time. According to the latest S.O.S. from climate science, we have maybe 15 years to enact a radical civilizational shift before game over. This may be generous, it may be alarmist; no one knows. What is certain is that pulling off a civilizational Houdini trick will require not just switching energy tracks, but somehow confronting the “endless growth” paradigm of the Industrial Revolution that continues to be shared by everyone from Charles Koch to Paul Krugman. We face very long odds in just getting our heads fully around our situation, let alone organizing around it. But it will be impossible if we no longer even understand the dangers of chuckling along to Kia commercials while flipping between Maher, “Merlin” and “Girls.”

  • Zaitchik's article name-checks pertinent critics and theorists, including Adorno's "culture industry," Postman’s “Amusing Ourselves to Death,” and even Jerry Mander's "Four Arguments for the Elimination of Television." Where the article was discussed on sites like Reddit or Metafilter, commenters seemed angry at Zaitchik, overly defensive, as if they felt under attack for watching "Hannibal" and "Game of Thrones". I thoroughly enjoyed Zaitchik's piece, even if it doesn't present a fully developed argument, because the perspective he presents strongly resonates with many of the philosophical foundations that have shaped my own views on media, particularly the media ecology tradition. A large part of Zaitchik's argument is that even if television content is the highest quality it has ever been, the form of television and its effects are the same as ever:

Staring at images on a little screen — that are edited in ways that weaken the brain’s capacity for sustained and critical thought, that encourage passivity and continued viewing, that are controlled by a handful of publicly traded corporations, that have baked into them lots of extremely slick and manipulating advertising — is not the most productive or pleasurable way to spend your time, whether you’re interested in serious social change, or just want to have a calm, clear and rewarding relationship with the real world around you.

But wait, you say, you’re not just being a killjoy and a bore, you’re living in the past. Television in 2014 is not the same as television in 1984, or 1994. That’s true. Chomsky’s “propaganda model,” set out during cable’s late dawn in “Manufacturing Consent,” is due for an update. The rise of on-demand viewing and token progressive programming has complicated the picture. But only by a little. The old arguments were about structure, advertising, structure, ownership, and structure, more than they were about programming content, or what time of the day you watched it. Less has changed than remains the same. By all means, let’s revisit the old arguments. That is, if everyone isn’t busy binge-watching “House of Cards.”

It’s been something to watch, this televisionification of the left. Open a window on social media during prime time, and you’ll find young journalists talking about TV under Twitter avatars of themselves in MSNBC makeup. Fifteen years ago, these people might have attended media reform congresses discussing how corporate TV pacifies and controls people, and how those facts flow from the nature of the medium. Today, they’re more likely to status-update themselves on their favorite corporate cable channel, as if this were something to brag about.

The entertainment demands of the 21st Century seem (apparently) bottomless. We’ve outsourced much of our serotonin production to the corporations which control music, sports, television, games, movies, and books. And they’ve grown increasingly desperate to produce the most universally acceptable, exportable, franchisable, exciting, boring, money-making pablum possible. Of course that is not new either… yet it continues to worsen.

Various alternative cultures have been attempting to fight it for decades. The beats, hippies, punks, and grunge kids all tried… and eventually lost. But the hipsters have avoided it altogether by never producing anything of substance except a lifestyle based upon fetishizing obscurity and cultivating tasteful disdain. A noncommittal and safe appreciation of ironic art and dead artists. No ideals, no demands, no struggle.

Rarely has the modern alternative to pop culture been so self-conscious and crippled. The mainstream has repeatedly beaten down and destroyed a half-century’s worth of attempts to keep art on a worthwhile and genuine path, but now it seems the final scion of those indie movements has adopted the ‘if you can’t beat ‘em, join ‘em’ compromise of creative death.

  • In an interview for PBS, London School of Economics professor David Graeber poses the question: should your job exist?

How could you have dignity in labor if you secretly believe your job shouldn’t exist? But, of course, you’re not going to tell your boss that. So I thought, you know, there must be enormous moral and spiritual damage done to our society. And then I thought, well, maybe that explains some other things, like why is it there’s this deep, popular resentment against people who have real jobs? They can get people so angry at auto-workers, just because they make 30 bucks an hour, which is like nowhere near what corporate lawyers make, but nobody seems to resent them. They get angry at the auto-workers; they get angry at teachers. They don’t get angry at school administrators, who actually make more money. Most of the problems people blame on teachers, and I think on some level, that’s resentment: all these people with meaningless jobs are saying, but, you guys get to teach kids, you get to make cars; that’s real work. We don’t get to do real work; you want benefits, too? That’s not reasonable.

If someone had designed a work regime perfectly suited to maintaining the power of finance capital, it’s hard to see how they could have done a better job. Real, productive workers are relentlessly squeezed and exploited. The remainder are divided between a terrorised stratum of the universally reviled unemployed and a larger stratum who are basically paid to do nothing, in positions designed to make them identify with the perspectives and sensibilities of the ruling class (managers, administrators, etc) – and particularly its financial avatars – but, at the same time, foster a simmering resentment against anyone whose work has clear and undeniable social value. Clearly, the system was never consciously designed. It emerged from almost a century of trial and error. But it is the only explanation for why, despite our technological capacities, we are not all working 3-4 hour days.

Žižek on post-U.S. order, Harvey on Piketty, Rushkoff's new job and doc

The "American century" is over, and we have entered a period in which multiple centres of global capitalism have been forming. In the US, Europe, China and maybe Latin America, too, capitalist systems have developed with specific twists: the US stands for neoliberal capitalism, Europe for what remains of the welfare state, China for authoritarian capitalism, Latin America for populist capitalism. After the attempt by the US to impose itself as the sole superpower – the universal policeman – failed, there is now the need to establish the rules of interaction between these local centres as regards their conflicting interests.

In politics, age-old fixations, and particular, substantial ethnic, religious and cultural identities, have returned with a vengeance. Our predicament today is defined by this tension: the global free circulation of commodities is accompanied by growing separations in the social sphere. Since the fall of the Berlin Wall and the rise of the global market, new walls have begun emerging everywhere, separating peoples and their cultures. Perhaps the very survival of humanity depends on resolving this tension.

  • Thomas Piketty's book Capital in the Twenty-First Century has received widespread media attention, and enjoyed so much popular success that at times Amazon has been sold out of copies. It seems natural, then, that David Harvey, reigning champion of Marx's Capital in the 21st century, would comment on the work, which he has now done on his web site:

The book has often been presented as a twenty-first century substitute for Karl Marx’s nineteenth century work of the same title. Piketty actually denies this was his intention, which is just as well since his is not a book about capital at all. It does not tell us why the crash of 2008 occurred and why it is taking so long for so many people to get out from under the dual burdens of prolonged unemployment and millions of houses lost to foreclosure. It does not help us understand why growth is currently so sluggish in the US as opposed to China and why Europe is locked down in a politics of austerity and an economy of stagnation. What Piketty does show statistically (and we should be indebted to him and his colleagues for this) is that capital has tended throughout its history to produce ever-greater levels of inequality. This is, for many of us, hardly news. It was, moreover, exactly Marx’s theoretical conclusion in Volume One of his version of Capital. Piketty fails to note this, which is not surprising since he has since claimed, in the face of accusations in the right wing press that he is a Marxist in disguise, not to have read Marx’s Capital.

[...]

There is, however, a central difficulty with Piketty’s argument. It rests on a mistaken definition of capital. Capital is a process, not a thing. It is a process of circulation in which money is used to make more money, often but not exclusively through the exploitation of labor power.

  • At the 2012 Media Ecology conference in Manhattan, I heard Douglas Rushkoff explain that he had stopped teaching classes at NYU because the department was not letting him teach a sufficient number of hours, all while using his likeness on program brochures. Well, Rushkoff has just been appointed to his first full-time academic post. Media Bistro reported CUNY's announcement:

Beginning this fall at CUNY’s Queens College, students can work their way towards an MA in Media Studies. Set to mold the curriculum is an expert responsible for terms such as “viral media” and “social currency.”

  • Lastly, this news made me realize that I completely missed Rushkoff's new Frontline special that premiered in February: Generation Like, which is available on the Frontline web site.

Ernesto Laclau dies

  • Ernesto Laclau, post-Marxist critical theorist and significant figure in discourse analysis (along with his wife and collaborator Chantal Mouffe), died on April 13. An obituary by British historian and academic Robin Blackburn was posted on the Verso web site:

Ernesto and Chantal used the work of Antonio Gramsci to reject what they saw as the reductionism and teleology of much Marxist theory. Though sometimes calling himself a ‘post-Marxist’ and an advocate of ‘radical democracy’, Ernesto insisted that he remained a radical anti-imperialist and anti-capitalist. His criticisms of Marx and Marxism were made in a constructive spirit, and without a hint of rancour.

Ernesto was recognised as a leading thinker in Latin America but also as an intellectual star in the academic world, co-authoring Contingency, Hegemony, Universality with Slavoj Žižek and Judith Butler in 2000. He gave courses at a string of leading universities in Europe and the Americas, including Northwestern and the New School for Social Research. Ernesto became Emeritus Professor at Essex in 2003, but the Centre he established continues its work.

With collaborators including his wife, Chantal Mouffe, and the cultural theorist Stuart Hall, Laclau played a key role in reformulating Marxist theory in the light of the collapse of communism and failure of social democracy. His "post-Marxist" manifesto Hegemony and Socialist Strategy (1985), written with Mouffe, was translated into 30 languages, and sales ran into six figures. The book argued that the class conflict identified by Marx was being superseded by new forms of identity and social awareness. This worried some on the left, including Laclau's friend Ralph Miliband, who feared that he had lost touch with the mundane reality of class division and conflict, but his criticisms of Marx and Marxism were always made in a constructive spirit.

Political populism was an enduring fascination for Laclau. His first book, Politics and Ideology in Marxist Theory (1977), offered a polite but devastating critique of the conventional discourse on Latin America at the time. This "dependency" approach tended to see the large landowners – latifundistas – as semi-feudal and pre-capitalist, while Laclau showed them to be part and parcel of Latin American capitalism which fostered enormous wealth and desperate poverty.

Witnessing the impact of the Perónist movement in Argentina led Professor Laclau to a fascination with populism. He wrote a celebrated essay on the subject in the 1970s and then a full-length book, On Populist Reason (2005), looking at the rise of leftist politicians such as Hugo Chávez across much of Latin America. Both the current president of Argentina, Cristina Fernández de Kirchner, and her late husband and predecessor Néstor Kirchner, are said to have been great admirers of his work.

Laclau’s theory of populism has played a critical role in my research. Without his theoretical insights and captivating character, I could not have expanded my initial observations of populist practices to this level. Besides his theoretical legacy and rich intellectual input outside academia, Prof. Laclau also contributed to the training and development of students and researchers from different parts of the world – thanks to the IDA programme he founded. His death is a great loss.

Ludology grab bag: video games and authenticity, semiocapitalism, and geography

The postmodern condition presents a constant struggle and conflict between our own desires and a world that seems fully available to experience but devoid of concrete or objective meaning. Video games, by virtue of their most basic structure, allow easy access to the feeling that your chosen actions and goals are both informed and legitimised by the overarching rules surrounding them. This is the very definition of authenticity.

None of this is to say that games are a better or preferable experience than real life or other media, but it is to suggest they’re uniquely placed at this point in time to provide satisfying experiences. Indeed, matching the appeal of video games with the search for authenticity goes a ways to explaining the particular trajectory of gaming’s prevalence, from largely rejected as a toy in the production-focused late eighties and early nineties, to an explosion of mainstream acceptance as the global media and advertising machine makes up more and more of our everyday lives.

  • Forbes contributor Michael Thomsen reports on an independent video game that touches on symbolic representation, labor and productivity, and even name-drops Franco Berardi's "semiocapitalism":

In most videogames, the semiotic meaning of the system is accepted by players before they begin playing—they don’t know what tactics they’ll use to win, nor whether they’ll play long enough to do so, but they know that winning or completion is the organizing metaphor. Players aren’t often encouraged to question the values of competitive systems, but only asked to internalize the responsibility of making them work as efficiently as possible, postponing the anxious reality of failure for a few magical moments that we’ve agreed to describe as fun.

In contrast, Rehearsals and Returns overflows with signifiers placed in a system that remains indifferent to their interpretative meaning, and which consciously obscures the player’s desire to interpret them in terms of winning or losing. The system acknowledges player choices—whether you chose to tell Hillary Clinton something hateful or nice—but the game doesn’t interpret the player’s choice, nor does it tie the economy of collectible conversation pieces to any allegorical meaning. It uses the game as a sort of digital confessional chamber, in which familiar units of social and political meaning are taken out of their historical narratives and given to the player in an incomplete space meant only for self-reflection.

  • Ian Bogost recorded an interview for the Go For Rainbow podcast, discussing "gaming culture as it relates to geographical space, and when and when not to whip out the PhD cred". Full audio is available here.

Video mélange: David Harvey, Antonio Negri, and Saints Row IV

[Embedded videos]

Technology, hyperemployment, and femininity

If you’re like many people, you’ve started using your smartphone as an alarm clock. Now it’s the first thing you see and hear in the morning. And touch, before your spouse or your crusty eyes. Then the ritual begins. Overnight, twenty or forty new emails: spam, solicitations, invitations or requests from those whose days pass during your nights, mailing list reminders, bill pay notices. A quick triage, only to be undone while you shower and breakfast.

[...]

Often, we cast these new obligations either as compulsions (the addictive, possibly dangerous draw of online life) or as necessities (the importance of digital contact and an “online brand” in the information economy). But what if we’re mistaken, and both tendencies are really just symptoms of hyperemployment?

[...]

Hyperemployment offers a subtly different way to characterize all the tiny effort we contribute to Facebook and Instagram and the like. It’s not just that we’ve been duped into contributing free value to technology companies (although that’s also true), but that we’ve tacitly agreed to work unpaid jobs for all these companies. And even calling them “unpaid” is slightly unfair, since we do get something back from these services, even if they often take more than they give. Rather than just being exploited or duped, we’ve been hyperemployed. We do tiny bits of work for Google, for Tumblr, for Twitter, all day and every day.

Bogost writes, “hyperemployment offers a subtly different way to characterize all the tiny effort we contribute to Facebook and Instagram and the like. It’s not just that we’ve been duped into contributing free value to technology companies (although that’s also true), but that we’ve tacitly agreed to work unpaid jobs for all these companies.” This tacit agreement, however, extends beyond social media and e-mail and is really a form of housework and maintenance for our daily lives. In that regard, I wonder if calling the cozy arrangement between digital technologies, data economies, and invisible labor “employment” runs the danger of side-stepping the deeper (gendered and racialized) antagonisms inherent in the distinction between what is considered labor and what is considered “care.”

For more than thirty years, Marxist feminists have been arguing that women’s unpaid labor–housework, reproduction, etc.–is a prerequisite for capitalist wage labor, surplus value extraction, and profit-making. Capital can extract surplus value from waged labor only because the wage laborer is supported by (extracts surplus value from) unwaged labor, mainly in the form of the wife. Gregory’s argument is that what Bogost is pointing to isn’t a new phenomenon so much as a reconfiguration of an ongoing practice: we are all our own wives and moms, so to speak. Indeed, as Bogost’s example suggests, our smartphones wake us up, not our moms, just as emails accomplish a lot of the relational work (scheduling, reminding, checking in, etc.) conventionally performed by women.

So does technology relieve the burden on women to perform certain traditionally feminine tasks? Sure! If your husband scans the news on his iPad, you no longer need to collect the morning paper. If your kids have SpongeBob SquarePants for company, you are free to leave them bathed in television glare while you check Twitter/wallow in 21st-century guilt. On the other hand, assigning a task to a computer doesn’t necessarily make it go away. Wageless work may now be more evenly distributed among men and women, but someone still has to send the reminder emails and program the vacuum bot. We haven’t escaped the reality of unpaid labor; we’ve simply spread it around.
