Curry Chandler

Curry Chandler is a writer, researcher, and independent scholar working in the field of communication and media studies. His writing on media theory and policy has been published in the popular press as well as academic journals. Curry approaches the study of communication from a distinctly critical perspective, with a commitment to addressing inequalities in power relations. The scope of his research activity includes media ecology, political economy, and the critique of ideology.

Curry is a graduate student in the Communication Department at the University of Pittsburgh, having previously earned degrees from Pepperdine University and the University of Central Florida.


Anthropocene Imaginaries: Climate Fiction as Communication Infrastructure

Early reviews for Adam McKay’s new film Don’t Look Up are out, and they are decidedly mixed. The movie seems to continue McKay’s trend of real-world-oriented comedies that engage with current socio-political events. McKay has transitioned from broad comedies, including notable collaborations with Will Ferrell, to a series of based-on-a-true-story, ripped-from-the-headlines entertainments. These films adopt a left-adjacent critical stance even though they are often ideologically specious (the much-lauded The Big Short recapitulates a narrative of outlier exemplars of greed rather than recognizing the contradictory logics inherent to capitalism, and the otherwise impressive Vice contains a baffling scene where Steve Carell’s Donald Rumsfeld laughs off the notion of ideology itself). Don’t Look Up is being received as a thinly veiled climate change fable, which makes it the latest entry in the growing genre of climate fiction.

Writing for the L.A. Review of Books, Katie Yee outlines the language of climate fiction:

The landscape of climate fiction is populated by Greta Thunbergs. It features eerily mature kids, left on their own. While our instinct should be to protect and pacify the children, ironically, in these novels they are forced to be the purveyors of cruel truths as the adults around them are lulled into a state of passivity. The roles are reversed. The alarm here is new, electrifying, contagious. Just as Greta Thunberg speaks directly to you in the ads, these characters invite you into the fold of these stories. They warn us not only with the tragedies they face but with the careful words they use to recount them. Climate fiction is just as much about the tales we spin, the way we talk about our actions.

This past July I watched The Tomorrow War. Attempting to justify my rationale for doing so reminds me of Bill Hicks’ explanation for eating at a Waffle House: “I’m not proud of it, I was hungry.” In my case it was because I was days away from a major move and eager for distraction as I packed boxes. I had anticipated that the movie would provide some alien invasion schlockery, but was surprised when the opening sequence featured the interdimensional arrival of soldiers who announced themselves as “your children and grandchildren.” These soldiers emerged into our present day to deliver a warning of humanity’s imminent destruction. Watching these scenes it was impossible not to think of Greta Thunberg, the climate change activist whose impassioned pleas for her generation’s future have thrust her to the forefront of the climate culture wars. Did The Tomorrow War’s scenes of soldiers interrupting the World Cup to deliver a message of impending doom from the future not evoke a remediated echo of Thunberg’s famous “how dare you” address to the United Nations?

It was an invigorating metaphor for contemporary climate anxiety, and I was interested to see whether the filmmakers would lean into this angle or whether it was an unintentional veneer on this science fiction story. To my surprise, there were persistent allusions to the climate crisis throughout the film. The protagonist, played by Chris Pratt, is an ex-military operative now teaching high school science. When we first see him in his classroom he is trying to engage students who have lost all interest in their studies in light of the revelations from the future-war soldiers. They wonder: why study for exams, or apply to college, or hope for any future at all when they have received confirmation of humanity’s ultimate demise within the next thirty years?

It’s an evocative illustration of climate despair, the pervading melancholia that has particularly affected younger generations, who are not only facing the specter of a transformed world but also reckoning with the employment prospects that come with it. The scene dramatizes the “eco-anxiety” that may even become a diagnosable condition.

Pratt’s character attempts to counter his students’ existential apathy by arguing that science is more important than ever: it will take scientific ingenuity to meet and hopefully overcome this looming challenge. His speech about the importance of science would seem on-the-nose even if it weren’t being delivered in front of images of polar bears precariously perched on pint-sized ice floes.

Global warming also plays an integral role in the resolution of The Tomorrow War’s plot. Our heroes ultimately realize (spoiler alert) that the spacecraft carrying the alien invaders was not destined to arrive in Earth’s future but rather crash-landed more than a thousand years in the past. Initially entombed in ice, the aliens were eventually thawed out by the gradual warming of the planet, precipitating their attack on humanity.

Many reviewers found The Tomorrow War’s climate change metaphor wanting. For some, the metaphor fell flat. Others thought the dull action-movie trappings failed to rise to the challenge. The discourse around The Tomorrow War reminded me of the chatter surrounding TENET when it was released last year. A brief exchange of dialogue during that film’s climax suggests that environmental catastrophe is the primary motivation for the temporal war that fuels the plot. Many commentators seized on this brief bit of backstory as the key to unlocking the labyrinthine narrative, with reviewers referring to the film as a climate change allegory, “Christopher Nolan’s statement on climate change,” and a treatise on intergenerational justice. As with The Tomorrow War, critics derided the fact that TENET evaded the climate crisis rather than confronting it head-on.

These commentators are touching upon the potential for climate fiction to shape political imaginaries, and suggesting that these films can elucidate an agenda for addressing the climate crisis. Manjana Milkoreit has written about the potential for climate fiction to influence societal responses to climate change by depicting imaginaries of the future. Yet the imaginaries depicted in these science fictions seem insufficient for addressing political realities. Returning to The Tomorrow War: this film imagines the climate crisis as an alien invasion, and the solution to the problem is to go kill the aliens with guns and bombs. There is something effective in how the film posits that the “war” cannot be displaced or projected to the future, but rather must be fought in our own time, yet the solution it imagines is overly simplistic and individualistic. As Matt Christman noted in one of his CushVlog entries, The Tomorrow War overlooks the fact that everyone is aware of the threat yet lacks the mechanisms of collective action that would enable them to do anything about it.

Similar critiques are emerging in reviews of Don’t Look Up. One largely negative review ended by asserting that “if the movie helps to do something about climate change, such critical objections are unimportant.” The potential of climate fiction to function as infrastructure for political imaginaries seems like a salient area of inquiry, but perhaps we’re asking too much of our entertainments.

Interpassivity, Reaction Videos, and Emotions as Content: Why Pablo Hidalgo is (maybe) Right

Amidst all the Cyberpunk 2077 discourse over the past month-and-a-half, I was struck by the opinion expressed by gamepressure’s Michael Chwistek that the game perhaps offers more potential as an interactive movie than as an open-world RPG. The article begins thusly:

“I don't like games that complete themselves. Take Telltale games, for example. I only managed to finish the first season of Walking Dead, and my adventure with Life is Strange ended on the first episode. Now, these are fine stories, of course, and I really like a well-crafted story, but I expect more from games. For story itself, I prefer to read a book or watch a movie, instead of mindlessly pressing keys to see just another portion of dialogue.”

These sentiments stood out to me for two reasons. In the first place, the comments resonated with recent thoughts I’ve been working through regarding so-called “walking simulators,” games that emphasize environmental exploration and narrative, with interactive gameplay elements often limited to mere movement. During the coronavirus quarantine I have both played several noted entries in this sub-genre and watched several others as walkthrough videos on YouTube. I’ve been fascinated by the ways in which many of these games engage with psychogeographic ideas and explore possibilities of a topological (rather than chronological) narrative presentation. It’s a topic I’ve been considering writing about, so more on walking simulators later.

My other thoughts on these comments have to do with interpassivity. The theory of interpassivity was first articulated by Austrian philosopher Robert Pfaller to describe trends in interactive artwork. Pfaller’s original formulation was a response to the discourses on interactivity predominating in art theory during the 1990s, but the concept has since been taken up to theorize modes of quasi-interactivity or mediated engagement, such as practices of online “slacktivism.” Chwistek’s formulation of “games that complete themselves” accords perfectly with Pfaller’s initial framing of interpassive objects as “the work of art that observes itself.”

Interpassivity was also evoked by another recent ripple in online discourse. A Star Wars-centric YouTuber released a reaction video showing themselves crying while watching an episode of The Mandalorian. It later transpired that Lucasfilm employee Pablo Hidalgo had responded to an online discussion of the reaction video by tweeting: “emotions are not for sharing.” Hidalgo later apologized and attempted to clarify the intent behind his comments:

“I wish to clarify that my post that ’emotions are not to be shared’ was sarcastic self-mockery and was certainly not intended to be hurtful to anyone and I’m deeply sorry that it was. As a lifelong fan, I appreciate fans expressing how they feel – it’s what being a fan is about!”

The controversy over Hidalgo’s comments may seem like a temporary tempest in a teapot, just another ripple in the continual current of clickbait content and rage tweeting. But I think it also highlights salient aspects of contemporary media culture and some of the attendant ideological assumptions, particularly in relation to interpassivity and the mediation of emotions.

In most applications interpassivity refers to phenomena in which activity or behavior is delegated or “outsourced” to another agent. In a recent book Pfaller (2017) repositioned interpassivity as the delegation of enjoyment. Rather than having other people or machines work on your behalf, “interpassive behaviour entails letting others consume in your place” (p. 1). Through interpassivity, Pfaller argues, “people delegate precisely those things that they enjoy doing” (p. 2). 

The myriad genres of video content that have proliferated on YouTube in recent years offer clear potential for an interpassive analysis. Reaction videos, unboxing videos, and “let’s play” videogame livestreams all represent emergent manifestations of the attention economy. But these examples also evince a commodification of reception and response, a shift in media consumption where consumption itself is what is being consumed. These video genres can be seen as interpassive media because they enable the viewer to enjoy through the other, to vicariously unpackage the commodity or play the videogame through the mediation of the video creator.

The phenomenon of interpassivity has also been tied to belief. For Pfaller, interpassivity is marked by a double delegation, involving a transfer not only of pleasure but also of belief to a representative agent. This delegation of belief has been central to Slavoj Žižek’s use of the term. Žižek employs the theory of interpassivity to argue that cynical distance and doubt buttress rather than undermine ideological function by positing the existence of an “other supposed to believe” and “illusions without owners.” Žižek cites examples of interpassive operation from electronic media. The “canned laughter” on the soundtrack of a TV sitcom “performs” laughter on behalf of the viewer “so that it is the object itself that ‘enjoys the show’ instead of me, relieving me of the superego duty to enjoy myself” (1998, p. 5). Video recording of TV programs allows one to continue working in the evening “while the VCR passively enjoys for me” (p. 7). Advertising messages perform the enjoyment of commodities on behalf of the consumer (“Coke cans bearing the inscription ‘Ooh! Ooh! What taste!’”, p. 5).

Žižek has also frequently used the example of the Tibetan prayer wheel as a key analogy in his theory of how ideology is perpetuated through disavowed belief. The prayer wheel allows the user to delegate religious belief, as spinning the wheel executes the prayer ritual on the subject’s behalf. For Žižek, the situation is analogous to capitalist subjects who act “as if” they believe the economic system works while professing a cynical distance. As with the prayer wheel, ideology allows subjects to dispense with belief or conviction while persisting in the routines and behaviors through which the belief is enacted.

Critical responses to the proliferation of self-promotion and exhibition on social media tend to focus on issues of privacy and surveillance. The advent of pervasive communication technologies has apparently expanded the notion of generalized panoptical surveillance beyond earlier formulations based on overreaching state intervention. We now live in a world where individuals readily broadcast the details of their own lives to an anonymous audience. We are so immersed in the endless stream of media signals that we contribute our own responses in the form of new consumable content. What becomes of personal affect and sentiment in this circumstance? Is “privacy” fated to be an illusion without owners?

Pablo Hidalgo’s flippant remark that “emotions are not for sharing” contains an implicit argument against the mass-mediated publicity of online culture: a tacit defense of intimate and inner experience against the colonization of the lifeworld by popular culture, against the transmutation of authentic emotional reactions into “content.” This oblique rebuke only seems radical in the context of Hidalgo’s position as a Lucasfilm executive, placing him within the gargantuan Disney apparatus that is at the forefront of subsuming our shared culture and imaginative expression into its ever-expanding portfolio of “intellectual property.” It is this crucial fact that underlies both the controversy over his comments and his public mea culpa.

Dutch philosopher Gijs van Oenen has further developed the theory of interpassivity, expanding the scope of interpassive operations to the domains of politics and citizenship. For van Oenen, interpassivity emerges as a response to the overwhelming demands for interactivity and expectations of civic responsibility facing modern subjects. The “privilege of self-realization” has come to be experienced as a burden, an “imperative to participate” (2011, p. 10). Interpassivity provides subjects with a means to “outsource the burden of interactivity” and promises repose in the form of institutions and objects that “appear prepared to assume the load of emancipation and self-realization” (p. 11). Van Oenen thus considers interpassivity “a form of resistance to the pressures exerted by successful emancipation” and a relief from the obligation to always live up to our emancipatory promise (p. 1).

Interpassivity also features in Jodi Dean’s (2009) notion of “communicative capitalism.” Dean defines communicative capitalism as “the materialization of ideals of inclusion and participation in information, entertainment, and communication technologies in ways that capture resistance and intensify global capitalism” (p. 2). She argues that discourses and practices of networked communications media fetishize speech, opinion, and participation in such a way that the exchange value of a message overtakes the use value. Messages are thus unmoored from “contexts of action and application” (p. 26) and become part of a circulating data stream that relieves institutional actors from the obligation to respond. Thus, for Dean, communicative capitalism is “democracy that talks without responding” (p. 22).

Dean argues that the ostensible democratic possibilities offered by participatory media merely serve to provide a semblance of participation by substituting superficial contributions of message circulation for real political engagement, a phenomenon she connects to the theoretical concept of “interpassivity.” Changes in communication networks represented by the acceleration and intensification of global telecommunications have consolidated democratic ideals and logics of capital accumulation, resulting in a “strange merg­ing of democracy and capitalism in which contemporary subjects are produced and trapped” (p. 22). The integration of communication technologies and message circulation into neoliberal governance calls the very possibility of an emancipatory communicative practice into question.

The phenomenon of interpassivity further troubles traditional schemas of subversion and resistance. Whereas Dean identifies interpassivity with the capture and neutralization of resistance, van Oenen sees interpassive operations as a form of resistance in themselves. If van Oenen is correct that citizens are burdened by interactivity and the imperative to participate, then how might an emancipatory politics be formulated in the post-emancipatory era of interpassivity?

Various authors have explored the possibilities of an anti-politics of withdrawal, such as Žižek’s (2006) promotion of a “Bartleby politics” which elevates the fictional scrivener’s refrain of “I would prefer not to” into a political mantra. In response to the calls for interaction and engagement that proliferate in contemporary discourse, Žižek states that the “threat today is not passivity but pseudo-activity, the urge to ‘be active,’ to ‘participate’” (p. 334).

Against this backdrop we might discern a latent revolutionary impulse in Hidalgo’s admonition that “emotions are not for sharing.”

References

Dean, Jodi. Democracy and Other Neoliberal Fantasies: Communicative Capitalism and Left Politics. Durham, NC: Duke University Press, 2009.

Pfaller, Robert. Interpassivity: The Aesthetics of Delegated Enjoyment. Edinburgh: Edinburgh University Press, 2017.

van Oenen, Gijs. Interpassive Agency: Engaging Actor-Network Theory’s View on the Agency of Objects. Theory & Event 14, no. 2 (2011).

Žižek, Slavoj. The Interpassive Subject. Centre Georges Pompidou, Traverses, 1998.

Žižek, Slavoj. The Parallax View. Cambridge, MA: MIT Press, 2006.


Smoke Signals: Buda’s Wagon and Infrastructure Terrorism in Nashville

“The car bomb, in other words, suddenly became a semi-strategic weapon that under certain circumstances was comparable to air-power in its ability to knock out critical urban nodes and headquarters as well as terrorize populations of entire cities. [...] It is the car bombers’ incessant blasting-away at the moral and physical shell of the city, not the more apocalyptic threats of nuclear or bioterrorism, that is producing the most significant mutations in city form and urban lifestyle.” - Mike Davis, Buda’s Wagon

When dawn broke over Nashville on Christmas morning, the day’s first light illumined dark tufts of smoke above downtown. Like many other Nashvillians, I began my Christmas morning with local news coverage of a powerful explosion on Second Avenue. Some of the initial details seemed familiar and inherently plausible (an RV transformed into a vehicle-borne improvised explosive device), while others strained credulity (early rumors of an audio countdown message emanating from the vehicle smacked of Internet hoaxery, though these reports have since been confirmed).

Indeed the early morning attack does seem to have included a warning message that prompted people in the area to evacuate. Remarkably it appears that no one but the perpetrator was killed in the blast. The bombing site in downtown Nashville was in the proverbial shadow of the city’s iconic AT&T skyscraper -- colloquially known as the Batman Building as the tower’s twin antennae somewhat resemble the pointed ears on the caped crusader’s cowl -- yet more significantly the RV was positioned directly in front of an AT&T switching station. This is a building dedicated to housing telecommunications infrastructure; the 15-floor windowless red-brick structure in Nashville bears some superficial resemblance to 33 Thomas Street in Manhattan, the AT&T “Long Lines” building whose 29 stories of windowless brutalist concrete have long sparked observers’ imaginations. 

Considered as an instance of infrastructure terrorism, the bombing was quite effective. The explosion didn’t seem to jeopardize the overall structural integrity of the switching station, yet enough damage was done to disrupt critical services. Many areas around the city -- including here in Brentwood -- lost 911 emergency phone services. The Nashville Airport ceased all flight operations due to the telecommunications issues, and the city’s COVID-19 community hotline was also knocked out of commission. Communications were affected throughout the region, including in Knoxville, Chattanooga, and Louisville.

Infrastructure terrorism became a key concern for U.S. authorities following the 9/11 attacks. In 2003 the Department of Homeland Security published a National Strategy for the Physical Protection of Critical Infrastructures and Key Assets. The report details the vulnerabilities inherent to maintaining national and transnational networks supported by critical nodes:

“The facilities, systems, and functions that comprise our critical infrastructures are highly sophisticated and complex. They consist of human capital and physical and cyber systems that work together in processes that are highly interdependent. They each encompass a series of key nodes that are, in turn, essential to the operation of the critical infrastructures in which they function. To complicate matters further, our most critical infrastructures typically interconnect and, therefore, depend on the continued availability and operation of other dynamic systems and functions.” (DHS, 2003, p. 6)

The Nashville bombing thus reveals in spectacular fashion the intrinsic vulnerabilities of infrastructural networks. This vulnerability is not limited to urban centers: the use of car bombs to terrorize city populations has a long history, and recent attacks in New York, Toronto, and Nice have demonstrated that a vehicle doesn’t need to be equipped with explosives to cause mass destruction and death. Rather, the apparent target of the Nashville bombing and the communication disruptions that resulted illustrate the oft-invisible yet overlapping infrastructural entanglements of our networked world. An attack centered on one building in Nashville can produce institutional breakdowns not only throughout the entire city but also in neighboring states. Network resiliency and redundancy were, of course, primary design goals of ARPANET, the technological foundation of the modern Internet.

The bombing also indicates one of the central paradoxes of our increasingly interconnected technological apparatuses: as the infrastructures of our daily lives become “smarter,” more integrated and networked, they also become more vulnerable to distributed disruption and systemic failure. The implementation of “intelligent” infrastructures in urban environments is often motivated by official imaginaries of omniscient visibility and pervasive control, and accordingly produces attendant anxieties over authoritarian encroachment and the specter of a stifling panoptical security state. Yet the increasing complexity of administrative infrastructures and technologies simultaneously gives rise to greater systemic precarity and emergent opportunities for breakdown.

Mike Davis charts some of the interplay between vehicle-based terrorism and urban governmentality in his book Buda’s Wagon: A Brief History of the Car Bomb (2007/2017). Echoing the DHS quote above, he situates the spread of car bombings in an “open source” era of terrorism marked by “a seamless merger of technologies: the car bomb plus the cell phone plus the Internet together constitute a unique infrastructure for global networked terrorism that obviates any need for transnational command structures or vulnerable hierarchies of decision-making” (p. 11, emphases in original).

Davis also notes that car bombs are “‘loud’ in every sense,” as these explosions are “usually advertisements for a cause, leader, or abstract principle” (p. 9).

“In contrast to other forms of political propaganda, from graffiti on walls to individual assassinations, their occurrence is almost impossible to deny or censor. This certainty of being heard by the world, even in a highly authoritarian or isolated setting, is a major attraction to potential bombers.” (ibid.)

Davis cites Régis Debray’s observation that such attacks are “manifestos written in the blood of others.” Yet the Nashville bombing has thus far failed to yield an explicit political motive or ideological agenda. The Christmas-day confusion was compounded not only by the revelation of the audio warning announcements but also by the lack of any corresponding media manifesto or claims of responsibility. The volitional vacuum prompted news and social media discourse to project possible motives onto the perpetrator: perhaps the bomber was a right-winger who targeted the AT&T building because of the 5G-coronavirus conspiracy, or maybe a leftist seeking retribution for the telecom company’s complicity in domestic spying programs? Personally, I felt the technological elements of both the target and the weapon evoked Unabomber vibes, although the preliminary evacuation notice evinced a greater concern for collateral damage and human life than Kaczynski’s methods.

The impulse to apprehend an underlying motive behind an act of mass violence is understandable, yet ultimately no explanation for terror or mass murder can ever be satisfying or even elucidating. Our current political climate produces knee-jerk responses to inciting events that seek to assign ideological complicity to the “other side,” casting preemptive blame on our imagined opponents (i.e. “this is surely the work of a MAGA anti-masker,” or “this must be a BLM assault on the police”) such that our own ideological position is affirmed and our cognitive maps cohere. Rituals of scapegoating have long provided essential support for both group and personal identities. Yet no declaration of intent can truly explain wanton destruction, just as no ideological rationalization can justify mass murder.

In recent history the Las Vegas massacre perpetrated by Stephen Paddock epitomizes the unfulfilled search for an explanatory motive. The question of what circumstances led up to Paddock raining bullets on a crowd of concert-goers has fueled futile speculation and conspiracy theory. When police photos of Paddock’s hotel-suite-turned-sniper’s-nest appeared online, viewers seized upon a piece of paper visible on a side table as a critical clue. Surely this was the killer’s suicide note, or personal manifesto, some explanation for the attack! It turned out the paper bore only mathematical equations for calculating trajectory, the killer’s calculus for maximizing mortality.

The lack of a clearly defined motivation can be experienced as a secondary shock to the initial trauma of the attack itself. It seems to deny some semblance of resolution or closure. So far no underlying explanation for the Nashville bombing has been unearthed. It remains an explosive enigma rendered all the more inexplicable by the bomber’s choice to broadcast a warning message prior to detonation. Yet the police have revealed that the vehicle’s speaker system not only conveyed a verbal countdown notice but also played music:

“Police in the area moments before the blast said the speakers also played the wistful 1963 song ‘Downtown’ by Petula Clark. The lyric, about going to the city to seek refuge from sadness, echoed down Second Avenue just before the blast: ‘The lights are much brighter there.’”

Without reading too much into the song choice as a potential clue, the reported musical selection does seem to suggest that the perpetrator saw some significance to the location of his attack beyond the mere tactical position of the apparent target. The use of the song “Downtown” conveys a striking concession to the particularities of place in comparison to the considerations of extended networks and distributed effects offered earlier. And while the ramifications of terror attacks may resonate across geographic distance and within virtual spaces, every ground zero occupies material as well as mental territory.


Memes, Enthymemes, and the Reproduction of Ideology

In his 1976 book The Selfish Gene, biologist Richard Dawkins introduced the word “meme” to refer to a hypothetical unit of cultural transmission. The discussion of the meme concept was contained in a single chapter of a book that was otherwise dedicated to genetic transmission, but the idea spread. Over decades, other authors further developed the meme concept, establishing “memetics” as a field of study. Today, the word “meme” has entered the popular lexicon, as well as popular culture, and is primarily associated with specific internet artifacts, or “viral” online content. Although this popular usage of the term is not always in keeping with Dawkins’ original conception, these examples from internet culture do illustrate some key features of how memes have been theorized.

This essay is principally concerned with two strands of memetic theory: the relation of memetic transmission to the reproduction of ideology, and the role of memes in rhetorical analysis, especially in relation to the enthymeme as a persuasive appeal. Drawing on these theories, I will advance two related arguments: that ideology as manifested in discursive acts can be considered to spread memetically, and that ideology functions enthymematically. Lastly, I will present a case study to demonstrate how methods and terminology from rhetorical criticism, discourse analysis, and media studies can be employed to analyze artifacts on the basis of these arguments.

Examples of memes presented by Dawkins include “tunes, ideas, catch-phrases, clothes fashions, ways of making pots or building arches” (p.192). The name “meme” was chosen due to its similarity to the word “gene”, as well as its relation to the Greek root “mimeme” meaning “that which is imitated” (p.192). Imitation is key to Dawkins’ notion of the meme because imitation is the means by which memes propagate themselves amongst members of a culture. Dawkins identifies three qualities associated with high survival in memes: longevity, fecundity, and copying-fidelity (p.194).

Distin (2005) further developed the meme hypothesis in The Selfish Meme. Furthering the gene/meme analogy, Distin defines memes as “units of cultural information” characterized by the representational content they carry (p.20), and the representational content is considered “the cultural equivalent of DNA” (p.37). This conceptualization of memes and their content forms the basis of Distin’s theory of cultural heredity. Distin then seeks to identify the representational system used by memes to carry their content (p.142). The first representational system considered is language, what Distin calls “the memes-as-words hypothesis” (p.145). Distin concludes that language itself is “too narrow to play the role of cultural DNA” (p.147).

Balkin (1998) took up the meme concept to develop a theory of ideology as “cultural software”. Balkin describes memes as “tools of understanding,” and states that there are “as many different kinds of memes as there are things that can be transmitted culturally” (p.48). Stating that the “standard view of memes as beliefs is remarkably similar to the standard view of ideology as a collection of beliefs” (p.49), Balkin links theories of memetic transmission to theories of ideology. Employing metaphors of virality, similar to other authors who have written of memes as “mind viruses,” Balkin considers memetic transmission as the spread of “ideological viruses” through social networks of communication, stating that “this model of ideological effects is the model of memetic evolution through cultural communication” (p.109). Balkin also presents a more favorable view than Distin’s of language as a vehicle for memes, writing: “Language is the most effective carrier of memes and is itself one of the most widespread forms of cultural software. Hence it is not surprising that many ideological mechanisms either have their source in features of language or are propagated through language” (p.175).

Balkin approaches the subject from a background in law, and although he is not a rhetorician and is skeptical of the discursive turn in theories of ideology, he does employ rhetorical concepts in discussing the influence of memes and ideology: “Rhetoric has power because understanding through rhetorical figures already forms part of our cultural software” (p.19). Balkin also cites Aristotle, remarking that “the successful rhetorician builds upon what the rhetorician and the audience have in common,” and “what the two have in common are shared cultural meanings and symbols” (p.209). In another passage, Balkin expresses a similar notion of the role of shared understanding in communication: “Much human communication requires the parties to infer and supplement what is being conveyed rather than simply uncoding it” (p.51).

Although Balkin never uses the term, these ideas are evocative of the rhetorical concept of the enthymeme. Aristotle himself discussed the enthymeme, though the concept was not elucidated with much specificity. Rhetorical scholars have since debated the nature of the enthymeme as employed in persuasion, and Bitzer (1959) surveyed various accounts to produce a more substantial definition. Bitzer’s analysis comes to focus on the enthymeme in relation to syllogisms, and the notion of the enthymeme as a syllogism with a missing (or unstated) proposition. Bitzer states: “To say that the enthymeme is an ‘incomplete syllogism’ – that is, a syllogism having one or more suppressed premises – means that the speaker does not lay down his premises but lets his audience supply them out of its stock of opinion and knowledge” (p.407).

Bitzer’s formulation of the enthymeme emphasizes that “enthymemes occur only when the speaker and audience jointly produce them” (p.408). That they are “jointly produced” is key to the role of the enthymeme in successful persuasive rhetoric: “Owing to the skill of the speaker, the audience itself helps construct the proofs by which it is persuaded” (p.408). Bitzer defines as the “essential character” of the enthymeme the facts that its “premises are always drawn from the audience” and that its “successful construction is accomplished through the joint efforts of speaker and audience.” This joint construction, and supplying of the missing premise(s), resonates with Balkin’s view of the spread of cultural software, as well as various theories of subjects’ complicity in the functioning of ideology.

McGee (1980) supplied another link between rhetoric and ideology with the “ideograph”. McGee argued that “ideology is a political language composed of slogan-like terms signifying collective commitment” (p.15), and these terms he calls “ideographs”. Examples of ideographs, according to McGee, include “liberty,” “religion,” and “property” (p.16). Johnson (2007) applies the ideograph concept to memetics, to argue for the usefulness of the meme as a tool for materialist criticism. Johnson argues that although “the ideograph has been honed as a tool for political (“P”-politics) discourses, such as those that populate legislative arenas, the meme can better assess ‘superficial’ cultural discourses” (p.29). I also believe that the meme concept can be a productive tool for ideological critique. As an example, I will apply the concepts of ideology reproduction as memetic transmission, and ideological function as enthymematic, in an analysis of artifacts of online culture popularly referred to as “memes”.

As Internet culture evolved, users adapted and mutated the term “meme” to refer to specific online artifacts. Though they may all be considered a type of online artifact, Internet memes come in a variety of forms. One of the oldest and most prominent series of image macro memes is “LOLcats”. The template established by LOLcats, superimposing humorous text over static images, became and remains the standard format for image macro memes. Two of the most prominent series of this type are the “First World Problems” (FWP) and “Third World Success” image macros. Analyzing these memes makes it possible to examine how the features of these artifacts and discursive practices demonstrate many of the traits of memes developed by theorists, and how theories of memetic ideological transmission and enthymematic ideological function can be applied to examine their ideological characteristics.

 

References

Balkin, J. M. (1998). Cultural software: A theory of ideology. New Haven, CT: Yale University Press.

Bitzer, L. F. (1959). Aristotle’s enthymeme revisited. Quarterly Journal of Speech, 45(4), 399-408.

Dawkins, R. (2006). The selfish gene. New York, NY: Oxford University Press. (Original work published 1976)

Distin, K. (2005). The selfish meme: A critical reassessment. New York, NY: Cambridge University Press.

McGee, M. C. (1980). The “ideograph”: A link between rhetoric and ideology. Quarterly Journal of Speech, 66(1), 1-16.

Media Ecology Monday: Golumbia and the Political Economy of Computationalism

In The Cultural Logic of Computation, Golumbia raises promising questions and issues, but then proceeds to make an argument that is ultimately unproductive. I am sympathetic to Golumbia’s aims; I share an attitude of skepticism toward the rhetoric surrounding the Internet and new media as inherently democratizing, liberating devices. Golumbia characterizes such narratives as “technological progressivism,” and writes that “technological progressivism […] conditions so much of computational discourse.” Watching the events of the “Arab Spring” unfold was exhilarating, but I was always uncomfortable with the narrative promoted in the mainstream news media characterizing these social movements as a “Twitter revolution,” and I remain skeptical toward hashtag activism and similar trends.

So while I was initially inclined toward the project Golumbia laid out in the book’s introductory pages, the chapters that followed only muddled rather than clarified my understanding of the argument being presented. The first section contains a sustained attack on Noam Chomsky’s contributions to linguistics, and their various influences and permutations, but also on Chomsky himself. I don’t know why Golumbia needed to question Chomsky’s “implausible rise to prominence,” or why Chomsky’s “magnetic charisma” needs to be mentioned in this discussion of linguistic theory.

Golumbia focuses on Chomsky’s contributions to linguistics because that is where his interests and argument draw him; based on my own interests and background I would’ve preferred engagement with the other side of Chomsky’s contributions to communication studies, namely the propaganda model and the political economy of the media. I suspect that a fruitful analysis would be possible by considering some of the issues Golumbia raises in relation to the work of Chomsky and others on the ideological analysis of news media content. The notion of computationalism as ideology is compelling to me; so is the institutionalized rhetoric of computationalism, which I think is a separate, promising argument.

In reading I have a tendency to focus on what interests me, appeals to me, or may be useful to me. Some of Golumbia’s concepts, such as “technological-progressive neoliberalism” and its relation to centralized power, fall into this category. While I’m still skeptical about computationalism as an operationalizable concept (there are already multiple theoretical models and critical perspectives that cover the same territory, and I’m not convinced that Golumbia makes the case for needing the term), other concepts may prove more productive. Ultimately I will use a quote from Golumbia (addressing the Internet and emerging technologies) that reflects my feelings on this book: “We have to learn to critique even that which helps us.”

Critical perspectives on the Isla Vista spree killer, media coverage


Reuters/Lucy Nicholson

  • Immediately following Elliot Rodger's spree killing in Isla Vista, CA last month, Internet users discovered his YouTube channel and a 140-page autobiographical screed, dubbed a "manifesto" by the media. The written document and the videos documented Rodger's sexual frustration and his chronic inability to connect with other people. He specifically lashed out at women for forcing him "to endure an existence of loneliness, rejection and unfulfilled desires" and causing his violent "retribution". Commentators and the popular press framed the killings as an outcome of misogynistic ideology, with headlines such as: How misogyny kills men, further proof that misogyny kills, and Elliot Rodger proves the danger of everyday sexism. Slate contributor Amanda Hess wrote:

Elliot Rodger targeted women out of entitlement, their male partners out of jealousy, and unrelated male bystanders out of expedience. This is not ammunition for an argument that he was a misandrist at heart—it’s evidence of the horrific extent of misogyny’s cultural reach.

His parents saw the digitally mediated rants and contacted his therapist and a social worker, who contacted a mental health hotline. These were the proper steps. But those who interviewed Rodger found him to be a “perfectly polite, kind and wonderful human.” They deemed his involuntary holding unnecessary and a search of his apartment unwarranted. That is, authorities defined Rodger and assessed his intentions based upon face-to-face interaction, privileging this interaction over and above a “vast digital trail.” This is digital dualism taken to its worst imaginable conclusion.

In fact, the entire 140-odd-page memoir he left behind, “My Twisted World,” documents with agonizing repetition the daily tortured minutiae of his life, and barely contains any interactions with women. What it has is interactions with the symbols of women, a non-stop shuffling of imaginary worlds that women represented access to. Women weren’t objects of desire per se, they were currency.

[...]

What exists in painstaking detail are the male figures in his life. The ones he meets who then reveal that they have kissed a girl, or slept with a girl, or slept with a few girls. These are the men who have what Elliot can’t have, and these are the men that he obsesses over.

[...]

Women don’t merely serve as objects for Elliot. Women are the currency used to buy whatever he’s missing. Just as a dollar bill used to get you a dollar’s worth of silver, a woman is an indicator of spending power. He wants to throw this money around for other people. Bring them home to prove something to his roommates. Show the bullies who picked on him that he deserves the same things they do.

[...]

There’s another, slightly more obscure recurring theme in Elliot’s manifesto: The frequency with which he discusses either his desire or attempt to throw a glass of some liquid at happy couples, particularly if the girl is a ‘beautiful tall blonde.’ [...] These are the only interactions Elliot has with women: marking his territory.

[...]

When we don’t know how else to say what we need, like entitled children, we scream, and the loudest scream we have is violence. Violence is not an act of expressing the inexpressible, it’s an act of expressing our frustration with the inexpressible. When we surround ourselves by closed ideology, anger and frustration and rage come to us when words can’t. Some ideologies prey on fear and hatred and shift them into symbols that all other symbols are defined by. It limits your vocabulary.

While the motivations for the shootings may vary, they have in common crises in masculinity in which young men use guns and violence to create ultra-masculine identities as part of a media spectacle that produces fame and celebrity for the shooters.

[...]

Crises in masculinity are grounded in the deterioration of socio-economic possibilities for young men and are inflamed by economic troubles. Gun carnage is also encouraged in part by media that repeatedly illustrates violence as a way of responding to problems. Explosions of male rage and rampage are also embedded in the escalation of war and militarism in the United States from the long nightmare of Vietnam through the military interventions in Afghanistan and Iraq.

For Debord, “spectacle” constituted the overarching concept to describe the media and consumer society, including the packaging, promotion, and display of commodities and the production and effects of all media. Using the term “media spectacle,” I am largely focusing on various forms of technologically-constructed media productions that are produced and disseminated through the so-called mass media, ranging from radio and television to the Internet and the latest wireless gadgets.

  • Kellner's comments from a 2008 interview, discussing the Virginia Tech shooter's videos broadcast after the massacre and the need for critical media literacy, remain relevant to the current situation:

Cho’s multimedia video dossier, released after the Virginia Tech shootings, showed that he was consciously creating a spectacle of terror to create a hypermasculine identity for himself and avenge himself to solve his personal crises and problems. The NIU shooter, dressed in black emerged from a curtain onto a stage and started shooting, obviously creating a spectacle of terror, although as of this moment we still do not know much about his motivations. As for the television networks, since they are profit centers in a highly competitive business, they will continue to circulate school shootings and other acts of domestic terrorism as “breaking events” and will constitute the murderers as celebrities. Some media have begun to not publicize the name of teen suicides, to attempt to deter copy-cat effects, and the media should definitely be concerned about creating celebrities out of school shooters and not sensationalize them.

[...]

People have to become critical of the media scripts of hyperviolence and hypermasculinity that are projected as role models for men in the media, or that help to legitimate violence as a means to resolve personal crises or solve problems. We need critical media literacy to analyze how the media construct models of masculinities and femininities, good and evil, and become critical readers of the media who ourselves seek alternative models of identity and behavior.

  • Almost immediately after news of the violence broke, and word of the killer's YouTube videos spread, there was a spike of online backlash against the media saturation and warnings against promoting the perpetrator to celebrity status through omnipresent news coverage. Just two days after the killings Isla Vista residents and UCSB students let the news crews at the scene know that they were not welcome to intrude upon the community's mourning. As they are wont to do, journalists reported on their role in the story while ignoring the wishes of the residents, as in this LA Times brief:

More than a dozen reporters were camped out on Pardall Road in front of the deli -- and had been for days, their cameras and lights and gear taking up an entire lane of the street. At one point, police officers showed up to ensure that tensions did not boil over.

The students stared straight-faced at reporters. Some held signs expressing their frustration with the news media:

"OUR TRAGEDY IS NOT YOUR COMMODITY."

"Remembrance NOT ratings."

"Stop filming our tears."

"Let us heal."

"NEWS CREWS GO HOME!"

Fukuyama: 25 years after the "End of History"

 

I argued that History (in the grand philosophical sense) was turning out very differently from what thinkers on the left had imagined. The process of economic and political modernization was leading not to communism, as the Marxists had asserted and the Soviet Union had avowed, but to some form of liberal democracy and a market economy. History, I wrote, appeared to culminate in liberty: elected governments, individual rights, an economic system in which capital and labor circulated with relatively modest state oversight.

[...]

So has my end-of-history hypothesis been proven wrong, or if not wrong, in need of serious revision? I believe that the underlying idea remains essentially correct, but I also now understand many things about the nature of political development that I saw less clearly during the heady days of 1989.

[...]

Twenty-five years later, the most serious threat to the end-of-history hypothesis isn't that there is a higher, better model out there that will someday supersede liberal democracy; neither Islamist theocracy nor Chinese capitalism cuts it. Once societies get on the up escalator of industrialization, their social structure begins to change in ways that increase demands for political participation. If political elites accommodate these demands, we arrive at some version of democracy.

When he wrote "The End of History?", Fukuyama was a neocon. He was taught by Leo Strauss's protege Allan Bloom, author of The Closing of the American Mind; he was a researcher for the Rand Corporation, the thinktank for the American military-industrial complex; and he followed his mentor Paul Wolfowitz into the Reagan administration. He showed his true political colours when he wrote that "the class issue has actually been successfully resolved in the west … the egalitarianism of modern America represents the essential achievement of the classless society envisioned by Marx." This was a highly tendentious claim even in 1989.

[...]

Fukuyama distinguished his own position from that of the sociologist Daniel Bell, who published a collection of essays in 1960 titled The End of Ideology. Bell had found himself, at the end of the 1950s, at a "disconcerting caesura". Political society had rejected "the old apocalyptic and chiliastic visions", he wrote, and "in the west, among the intellectuals, the old passions are spent." Bell also had ties to neocons but denied an affiliation to any ideology. Fukuyama claimed not that ideology per se was finished, but that the best possible ideology had evolved. Yet the "end of history" and the "end of ideology" arguments have the same effect: they conceal and naturalise the dominance of the right, and erase the rationale for debate.

While I recognise the ideological subterfuge (the markets as "natural"), there is a broader aspect to Fukuyama's essay that I admire, and cannot analyse away. It ends with a surprisingly poignant passage: "The end of history will be a very sad time. The struggle for recognition, the willingness to risk one's life for a purely abstract goal, the worldwide ideological struggle that called forth daring, courage, imagination, and idealism, will be replaced by economic calculation, the endless solving of technical problems, environmental concerns, and the satisfaction of sophisticated consumer demands."

 

In an article that went viral in 1989, Francis Fukuyama advanced the notion that with the death of communism history had come to an end in the sense that liberalism — democracy and market capitalism — had triumphed as an ideology. Fukuyama will be joined by other scholars to examine this proposition in the light of experience during the subsequent quarter century.

Featuring Francis Fukuyama, author of “The End of History?”; Michael Mandelbaum, School of Advanced International Studies, Johns Hopkins University; Marian Tupy, Cato Institute; Adam Garfinkle, editor, American Interest; Paul Pillar, Nonresident Senior Fellow, Foreign Policy, Center for 21st Century Security and Intelligence, Brookings Institution; and John Mueller, Ohio State University and Cato Institute.

Ender's Game analyzed, the Stanley Parable explored, Political Economy of zombies, semiotics of Twitter, much more

It's been a long time since the last update (what happened to October?), so this post is extra long in an attempt to catch up.

In a world in which interplanetary conflicts play out on screens, the government needs commanders who will never shrug off their campaigns as merely “virtual.” These same commanders must feel the stakes of their simulated battles to be as high as actual warfare (because, of course, they are). Card’s book makes the nostalgic claim that children are useful because they are innocent. Hood’s movie leaves nostalgia by the roadside, making the more complex assertion that they are useful because of their unique socialization to be intimately involved with, rather than detached from, simulations.

  • In the ongoing discourse about games criticism and its relation to film reviews, Bob Chipman's latest Big Picture post uses his own review of the Ender's Game film as an entry point for a breathless treatise on criticism. The video presents a concise and nuanced overview of arts criticism, from the classical era through film reviews as consumer reports up to the very much in-flux conceptions of games criticism. Personally I find this video sub-genre (where spoken content is crammed into a Tommy gun barrage of word bullets so that the narrator can convey a lot of information in a short running time) irritating and mostly worthless, since the verbal information is being presented faster than the listener can really process it. It reminds me of Film Crit Hulk, someone who writes excellent essays with obvious insight into filmmaking, but whose aesthetic choice (or "gimmick") to write in all caps is often a distraction from the content and a deterrent to readers. Film Crit Hulk has of course addressed this issue and explained the rationale for this choice, but considering that his more recent articles have dropped the third-person "Hulk speak" writing style the all caps seems to be played out. Nevertheless, I'm sharing the video because Mr. Chipman makes a lot of interesting points, particularly regarding the cultural contexts for the various forms of criticism. Just remember to breathe deeply and monitor your heart rate while watching.

  • This video from Satchbag's Goods is ostensibly a review of Hotline Miami, but develops into a discussion of art movements and Kanye West:

  • This short interview with Slavoj Žižek in New York magazine continues a trend I've noticed since Pervert's Guide to Ideology has been in release, wherein writers interviewing Žižek feel compelled to include themselves and their reactions to/interactions with Žižek in their articles. Something about a Žižek encounter brings out the gonzo in journalists. The NY mag piece is also notable for this succinct positioning of Žižek's contribution to critical theory:

Žižek, after all, the Yugoslav-born, Ljubljana-based academic and Hegelian; mascot of the Occupy movement, critic of the Occupy movement; and former Slovenian presidential candidate, whose most infamous contribution to intellectual history remains his redefinition of ideology from a Marxist false consciousness to a Freudian-Lacanian projection of the unconscious. Translation: To Žižek, all politics—from communist to social-democratic—are formed not by deliberate principles of freedom, or equality, but by expressions of repressed desires—shame, guilt, sexual insecurity. We’re convinced we’re drawing conclusions from an interpretable world when we’re actually just suffering involuntary psychic fantasies.

Following the development of the environment on the team's blog, you can see some of the gaps between what data was deemed noteworthy or worth recording in the seventeenth century and the level of detail we now expect in maps and other infographics. For example, the team struggled to pinpoint the exact location on Pudding Lane of the bakery where the Great Fire of London is thought to have originated, and so just ended up placing it halfway along.

  • Stephen Totilo reviewed the new pirate-themed Assassin's Creed game for the New York Times. I haven't played the game, but I love that the sections of the game set in the present day have shifted from the standard global conspiracy tropes seen in the earlier installments to postmodern, self-referential, and meta-fictional framing:

Curiously, a new character is emerging in the series: Ubisoft itself, presented mostly in the form of self-parody in the guise of a fictional video game company, Abstergo Entertainment. We can play small sections as a developer in Abstergo’s Montreal headquarters. Our job is to help turn Kenway’s life — mined through DNA-sniffing gadgetry — into a mass-market video game adventure. We can also read management’s emails. The team debates whether games of this type could sell well if they focused more on peaceful, uplifting moments of humanity. Conflict is needed, someone argues. Violence sells.

It turns out that Abstergo is also a front for the villainous Templars, who search for history’s secrets when not creating entertainment to numb the population. In these sections, Ubisoft almost too cheekily aligns itself with the bad guys and justifies its inevitable 2015 Assassin’s Creed, set during yet another violent moment in world history.

  • Speaking of postmodern, self-referential, meta-fictional video games: The Stanley Parable was released late last month. There has already been a bevy of analysis written about the game, but I am waiting for the Mac release to play it and doing my best to avoid spoilers in the meantime. Brenna Hillier's post at VG247 is spoiler-free (assuming you are at least familiar with the game's premise, or its original incarnation as a Half-Life mod), and calls The Stanley Parable "a reaction against, commentary upon, critique and celebration of narrative-driven game design":

The Stanley Parable wants you to think about it. The Stanley Parable, despite its very limited inputs (you can’t even jump, and very few objects are interactive) looks at those parts of first-person gaming that are least easy to design for – exploration and messing with the game’s engine – and foregrounds them. It takes the very limitations of traditional gaming narratives and uses them to ruthlessly expose their own flaws.

Roy’s research focus prior to founding Bluefin, and continued interest while running the company, has to do with how both artificial and human intelligences learn language. In studying this process, he determined that the most important factor in meaning making was the interaction between human beings: no one learns language in a vacuum, after all. That lesson helped inform his work at Twitter, which started with mapping the connection between social network activity and live broadcast television.

Aspiring to cinematic qualities is not bad in and of itself, nor do I mean to shame fellow game writers, but developers and their attendant press tend to be myopic in their point of view, both figuratively and literally. If we continually view videogames through a monocular lens, we miss much of their potential. And moreover, we begin to use ‘cinematic’ reflexively without taking the time to explain what the hell that word means.

Metaphor is a powerful tool. Thinking videogames through other media can reframe our expectations of what games can do, challenge our design habits, and reconfigure our critical vocabularies. To crib a quote from Andy Warhol, we get ‘a new idea, a new look, a new sex, a new pair of underwear.’ And as I hinted before, it turns out that fashion and videogames have some uncanny similarities.

Zombies started their life in the Hollywood of the 1930s and ‘40s as simplistic stand-ins for racist xenophobia. Post-millennial zombies have been hot-rodded by Danny Boyle and made into a subversive form of utopia. That grim utopianism was globalized by Max Brooks, and now Brad Pitt and his partners are working to transform it into a global franchise. But if zombies are to stay relevant, it will rely on the shambling monsters' ability to stay subversive – and real subversive shocks and terror are not dystopian. They are utopian.

Ironically, our bodies now must make physical contact with devices dictating access to the real; Apple’s Touch ID sensor can discern for the most part if we are actually alive. This way, we don’t end up trying to find our stolen fingers on the black market, or prevent others from 3D scanning them to gain access to our lives.

This is a monumental shift from when Apple released its first iPhone just six years ago. It’s a touchy subject: fingerprint authentication means we place our trust in an inanimate object to manage our animate selves - our biology is verified, digitised, encrypted as it is handed over to our devices.

Can you really buy heroin on the Web as easily as you might purchase the latest best-seller from Amazon? Not exactly, but as the FBI explained in its complaint, it wasn't exactly rocket science, thanks to Tor and some bitcoins. Here's a rundown of how Silk Road worked before the feds swooped in.

  • Henry Jenkins posted the transcript of an interview with Mark J.P. Wolf. The theme of the discussion is "imaginary worlds," and they touch upon the narratology vs. ludology conflict in gaming:

The interactivity vs. storytelling debate is really a question of the author saying either “You choose” (interaction) or “I choose” (storytelling) regarding the events experienced; it can be all of one or all of the other, or some of each to varying degrees; and even when the author says “You choose”, you are still choosing from a set of options chosen by the author.  So it’s not just a question of how many choices you make, but how many options there are per choice.  Immersion, however, is a different issue, I think, which does not always rely on choice (such as immersive novels), unless you want to count “Continue reading” and “Stop reading” as two options you are constantly asked to choose between.

Warren Ellis on violent fiction, death of the Western, Leatherface as model vegan

As we learn early on, the movie’s killers, the murderous Sawyer family (comprised of Leatherface, Grandpa, et al), used to run a slaughterhouse, and the means they use to slaughter their victims are the same as those used to slaughter cattle. They knock them over the head with sledgehammers, hang them on meat hooks, and stuff them into freezers. Often this takes place as the victims are surrounded by animal bones, a detail that could be explained away as the evidence of their former occupation—except that the cries of farm animals (there are none around) are played over the scenes.

Through the past century of Western movies, we can trace America's self-image as it evolved from a rough-and-tumble but morally confident outsider in world affairs to an all-powerful sheriff with a guilty conscience. After World War I and leading into World War II, Hollywood specialized in tales of heroes taking the good fight to savage enemies and saving defenseless settlements in the process. In the Great Depression especially, as capitalism and American exceptionalism came under question, the cowboy hero was often mistaken for a criminal and forced to prove his own worthiness--which he inevitably did. Over the '50s, '60s, and '70s however, as America enforced its dominion over half the planet with a long series of coups, assassinations, and increasingly dubious wars, the figure of the cowboy grew darker and more complicated. If you love Westerns, most of your favorites are probably from this era--Shane, The Searchers, Butch Cassidy and the Sundance Kid, McCabe & Mrs. Miller, the spaghetti westerns, etc. By the height of the Vietnam protest era, cowboys were antiheroes as often as they were heroes.

The dawn of the 1980s brought the inauguration of Ronald Reagan and the box-office debacle of the artsy, overblown Heaven's Gate. There's a sense of disappointment to the decade that followed, as if the era of revisionist Westerns had failed and a less nuanced patriotism would have to carry the day. Few memorable Westerns were made in the '80s, and Reagan himself proudly associated himself with an old-fashioned, pre-Vietnam cowboy image. But victory in the Cold War coincided with a revival of the genre, including the revisionist strain, exemplified in Clint Eastwood's career-topping Unforgiven. A new, gentler star emerged in Kevin Costner, who scored a post-colonial megahit with Dances With Wolves. Later, in the 2000s, George W. Bush reclaimed the image of the cowboy for a foreign policy far less successful than Reagan's, and the genre retreated to the art house again.

Westerns are fundamentally about political isolation. The government is far away and weak. Institutions are largely irrelevant in a somewhat isolated town of 100 people. The law is what the sheriff says it is, or what the marshal riding through town says, or the posse. At that scale, there may be no meaningful distinction between war and crime. A single individual's choices can tilt the balance of power. Samurai and Western stories cross-pollinated because when you strip away the surface detail the settings are surprisingly similar. The villagers in Seven Samurai and the women in Unforgiven are both buying justice/revenge because there is no one to appeal to from whom they could expect justice. Westerns are interesting in part because they are stories where individual moral judgment is almost totally unsupported by institutions.

Westerns clearly are not dying. We get a really great film in the genre once every few years. However, they've lost a lot of their place at the center of pop culture because the idea of an isolated community has grown increasingly implausible. In what has become a surveillance state, the idea of a place where the state has no authority does not resonate as relevant.

The function of fiction is being lost in the conversation on violence. My book editor, Sean McDonald, thinks of it as “radical empathy.” Fiction, like any other form of art, is there to consider aspects of the real world in the ways that simple objective views can’t — from the inside. We cannot Other characters when we are seeing the world from the inside of their skulls. This is the great success of Thomas Harris’s Hannibal Lecter, both in print and as so richly embodied by Mads Mikkelsen in the Hannibal television series: For every three scary, strange things we discover about him, there is one thing that we can relate to. The Other is revealed as a damaged or alienated human, and we learn something about the roots of violence and the traps of horror.

Rushkoff on Manning verdict, Chomsky/Žižek on NSA leaks, looking for McLuhan in Afghanistan

We are just beginning to learn what makes a free people secure in a digital age. It really is different. The Cold War was an era of paper records, locked vaults and state secrets, for which a cloak-and-dagger mindset may have been appropriate. In a digital environment, our security comes not from our ability to keep our secrets but rather our ability to live our truth.

In light of the recent NSA surveillance scandal, Chomsky and Žižek offer us very different approaches, both of which are helpful for leftist critique. For Chomsky, the path ahead is clear. Faced with new revelations about the surveillance state, Chomsky might engage in data mining, juxtaposing our politicians' lofty statements about freedom against their secretive actions, thereby revealing their utter hypocrisy. Indeed, Chomsky is a master at this form of argumentation, and he does it beautifully in Hegemony or Survival when he contrasts the democratic statements of Bush regime officials against their anti-democratic actions. He might also demonstrate how NSA surveillance is not a strange historical aberration but a continuation of past policies, including, most infamously, the FBI's counterintelligence programme in the 1950s, '60s, and early '70s.

Žižek, on the other hand, might proceed in a number of ways. He might look at the ideology of cynicism, as he did so famously in the opening chapter of The Sublime Object of Ideology, in order to demonstrate how expressions of outrage regarding NSA surveillance practices can actually serve as a form of inaction, as a substitute for meaningful political struggle. We know very well what we are doing, but still we are doing it; we know very well that our government is spying on us, but still we continue to support it (through voting, etc). Žižek might also look at how surveillance practices ultimately fail as a method of subjectivisation, how the very existence of whistleblowers like Thomas Drake, Bradley Manning, Edward Snowden, and the others who are sure to follow in their footsteps demonstrates that technologies of surveillance and their accompanying ideologies of security can never guarantee the full participation of the people they are meant to control. As Žižek emphasises again and again, subjectivisation fails.

In early 2011, award-winning photographer Rita Leistner was embedded with a U.S. marine battalion deployed to Helmand province as a member of Project Basetrack, an experiment in using new technologies in social media to extend traditional war reporting. This new LRC series draws on Leistner’s remarkable iPhone photos and her writings from her time in Afghanistan to use the ideas of Marshall McLuhan to make sense of what she saw there – “to examine the face of war through the extensions of man.”

Epic EVE battle, Critical games criticism, indie developer self-publishing

Update, 9:18PM ET: The battle is over. After more than five hours of combat, the CFC has defeated TEST Alliance. Over 2,900 ships were destroyed today in the largest fleet battle in Eve Online's history. TEST Alliance intended to make a definitive statement in 6VDT, but their defeat at the hands of the CFC was decisive and will likely result in TEST's withdrawal from the Fountain region.

In our conversation, Whitten told us that the commitment to independent developers is full. There won't be restrictions on the type of titles that can be created, nor will there be limits in scope. In response to a question on whether retail-scale games could be published independently, Whitten told us, "Our goal is to give them access to the power of Xbox One, the power of Xbox Live, the cloud, Kinect, Smartglass. That's what we think will actually generate a bunch of creativity on the system." With regard to revenue splitting with developers, we were told that more information will be coming at Gamescom, but that we could think about it "generally like we think about Marketplace today." According to developers we've spoken with, that split can be approximately 50-50.

Another difference between the Xbox One and Xbox 360 is how the games will be published and bought by other gamers. Indie games will not be relegated to the Xbox Live Indie Marketplace like on the Xbox 360 or required to have a Microsoft-certified publisher to distribute physically or digitally outside the Indie Marketplace. All games will be featured in one big area with access to all kinds of games.

If anything has hurt modern video game design over the past several years, it has been the rise of 'freemium'. It is now rare to see a top app or game in the app stores with a business model other than 'free-to-play with in-app purchases'. The model has been used as an excuse to make lazy, poorly designed games that are predicated on taking advantage of psychological triggers in their players, and it will have negative long-term consequences for the video game industry if left unchecked.

Many freemium games are designed around the idea of conditioning players to become addicted to playing the game. Many game designers want their games to be heavily played, but freemium games in particular are designed to trigger a 'reward' state in the player's brain in order to keep the player playing (and ultimately to entice the user to make in-app purchases to continue playing). This type of conditioning is often referred to as a 'Skinner box', named after the psychologist who created laboratory boxes used to perform behavioral experiments on animals.

It obviously isn’t beyond the realm of possibility that not only do financial considerations influence a game’s structure and content, but financial outcomes also affect a studio’s likelihood of survival in the industry, based upon the machinations of its publishing overlords. Activision killed Bizarre Creations, Eidos ruined Looking Glass Studios, EA crushed Westwood, Pandemic, Bullfrog, Origin Systems… well, the list could go on until I turn a strange, purple color, but you get my point. And when 3.4 million copies sold for a Tomb Raider reboot isn’t enough by a publisher’s standards, you can’t help but feel concern for a developer’s future.

This relationship between environment-learner-content interaction and transfer puts teachers in the unique position to capitalize on game engagement to promote reflection that positively shapes how students tackle real-world challenges. To some, this may seem like a shocking concept, but it’s definitely not a new one—roleplay as instruction, for example, was very popular among the ancient Greeks and, in many ways, served as the backbone for Plato’s renowned Allegory of the Cave. The same is true of Shakespeare’s works, 18th and 19th century opera, and many of the novels, movies, and other media that define our culture. More recently, NASA has applied game-like simulations to teach astronauts how to maneuver through space, medical schools have used them to teach robotic surgery, and the Federal Aviation Administration has employed them to test pilots.

The relationships between the creator, the product, and the audience are all important contexts to consider during media analysis, especially with games, because the audience is an active participant in the media. So if you are creating a game, you always have to keep the audience in mind. Even if you say the audience doesn’t matter to you, it won’t cease to exist, and saying so does not erase the impact your game will have.

Similarly, if you are critiquing or analyzing any media, you can’t ignore the creator and the creator’s intentions. Despite those who proclaim the “death of the author,” if the audience is aware of the creator’s intentions, that awareness can affect how they perceive the game. Particularly given the ease with which creators can release statements about their work, you’ll have an audience with varying levels of awareness of the creator’s intentions. These factors all play off of each other–they do not exist in a vacuum.

When we talk about any medium’s legitimacy, be it film or videogames or painting, it’s a very historical phenomenon, inextricably tied to its artness, that allows a medium to get in on the ground floor of “legitimate” and “important.” So if we contextualize the qualities that allowed film or photography to find themselves supported through a panoply of cultural institutions, it was a cultural and political economic process that led them there.


[...]

Videogames, the kind that would be written about in 20-dollar glossy art magazines, would be exactly this. When creators of videogames want to point to their medium’s legitimacy, it would help to have a lot of smart people legitimate their work in a medium (glossy magazines, international newspapers) that they consider to be likewise legitimate. Spector concedes that ‘yes all the critics right now are online’, but the real battle is in getting these critics offline and into more “legitimate” spaces of representation. It’s a kind of unspoken hierarchy of mediums dancing before us here: at each step a new gatekeeper steps into play, both legitimating and separating the reader from the critic and the object of criticism.

All three games define fatherhood around the act of protection, primarily physical protection. And in each of these games, the protagonist fails—at least temporarily—to protect their ward. In Ethan’s case, his cheery family reflected in his pristine home collapses when he loses a son in a car accident. Later, when his other son goes missing, the game essentially tests Ethan’s ability to reclaim his protective-father status.

No video game grants absolute freedom; they all have rules or guidelines that govern what you can and can’t do. The sci-fi epic Mass Effect is a series that prides itself on choice, but even that trilogy ends on a variation of choosing between the “good” and “bad” ending. Minecraft, the open-world creation game, is extremely open-ended, but you can’t build a gun or construct a tower into space because it doesn’t let you. BioShock’s ending argues that the choices you think you’re making in these games don’t actually represent freedom. You’re just operating within the parameters set by the people in control, be they the developers or the guy in the game telling you to bash his skull with a golf club.

BioShock’s disappointing conclusion ends up illustrating Ryan’s point. A man chooses, a player obeys. It’s a grim and cynical message that emphasizes the constraints of its own art form. And given that the idea of choice is so important to BioShock’s story, I don’t think it could’ve ended any other way.


The Ideology of Scarface, Community as PoMo masterpiece, Present Shock reviewed, etc.

http://www.youtube.com/watch?v=YanhEVEgkYI

http://www.youtube.com/watch?v=7MCmBHPqz6I

In The Godfather, the blurring of the line between crime and the “legitimate” economy can still seem shocking. In Scarface, the distinction seems quaintly naïve. In The Godfather, Don Vito almost loses everything over his refusal to deal in heroin. In Scarface, Tony Montana knows that coke is just another commodity in a boom economy. Michael Corleone marries the wispy, drooping Kay Adams to give his enterprise some old-fashioned, WASP class. When Tony Montana takes possession of the coked-up bombshell called Elvira Hancock, he is filling his waterbed with cash, not class. Even more excruciatingly, Scarface tells us these truths without any self-righteousness, without the consoling promise that manly discipline can save America from its fate. In the moral economy of this movie, the terms of critique have become indistinguishable from the terms of affirmation. “You know what capitalism is?” Tony answers his own question: “Getting fucked.”

Donovan put Neumann in charge of the Research and Analysis Branch of the OSS, studying Nazi-ruled central Europe. Neumann was soon joined by the philosopher Herbert Marcuse and the legal scholar Otto Kirchheimer, his colleagues at the left-wing Institute for Social Research, which had been founded in Frankfurt in 1923 but had moved to Columbia University after the Nazis came to power.

An update of the promise that the media could create a different, even a better world seems laughable from our perspective of experience with the technologically based democracies of markets. As a utopia-ersatz, this promise appears to be obsolete in the former hegemonial regions of North America and western and northern Europe. Now that it is possible to create a state with media, they are no longer any good for a revolution. The media are an indispensable component of functioning social hierarchies, both from the top down and the bottom up, of power and countervailing power. They have taken on systemic character. Without them, nothing works anymore in what the still surviving color supplements, in a careless generalization, continue to call a society. Media are an integral part of the everyday coercive context, which is termed “practical constraints.” As cultural techniques, which need to be learned for social fitness, they are at the greatest possible remove from what whips us into a state of excitement, induces aesthetic exultation, or triggers irritated thoughts.

[...]

At the same time, many universities have established courses in media design, media studies, and media management. Something that operates as a complex, dynamic, and edgy complex between the discourses, that is, something which can only operate interdiscursively, has acquired a firm and fixed place in the academic landscape. This is reassuring and creates professorial chairs, upon which a once anarchic element can be sat out and developed into knowledge for domination and control. Colleges and academies founded specifically for the media proactively seek close relationships with the industries, manufacturers, and the professional trades associations of design, orientation, and communication.

There are five ways Rushkoff thinks present shock is being experienced and responded to. To begin, we are in an era in which he thinks narrative has collapsed. For as long as we have had the power of speech we have corralled time into linear stories with a beginning, middle and ending. More often than not these stories contained some lesson. They were not merely forms of entertainment or launching points for reflection but contained some guidance as to how we should act in a given circumstance, which, of course, differed by culture, but almost all stories were in effect small oversimplified models of real life.

[...]

The medium Rushkoff thinks is best adapted to the decline of narrative is video games. Yes, they are more often than not violent, but they also seem tailor-made for the kinds of autonomy and collaborative play that are the positive manifestations of our new presentism.


2nd Update: Žižek responds to Chomsky's "Fantasies"

  • Žižek v. Chomsky continues: Žižek has responded to Chomsky's last comment in an article in the International Journal of Žižek Studies. You can read the entire article here; select excerpts follow. I am particularly interested in how Žižek focuses on conflicting definitions of ideology as a key factor in Chomsky's misunderstanding of Žižek's work:

For me, on the contrary, the problem is here a very rational one: everything hinges on how we define “ideology.”

[...]

This bias is ideology - a set of explicit and implicit, even unspoken, ethico-political and other positions, decisions, choices, etc., which predetermine our perception of facts, what we tend to emphasize or to ignore, how we organize facts into a consistent whole of a narrative or a theory.

  • After a rational and diplomatic refutation of Chomsky's comments, Žižek ends the essay with a parting blow:

Chomsky obviously doesn’t agree with me here. So what if – just another fancy idea of mine – what if Chomsky cannot find anything in my work that goes “beyond the level of something you can explain in five minutes to a twelve-year-old” because, when he deals with continental thought, it is his mind which functions as the mind of a twelve-year-old, the mind which is unable to distinguish serious philosophical reflection from empty posturing and playing with empty words?


Hollywood implosion: end of an era?

  • Last month Steven Spielberg and George Lucas caused a bit of a stir when they predicted an impending "implosion" of Hollywood that would forever alter the filmmaking industry. Speaking at a USC event, Spielberg posited a scenario in which a series of big budget flops would necessitate a change in the Hollywood business model:

"That's the big danger, and there's eventually going to be an implosion — or a big meltdown. There's going to be an implosion where three or four or maybe even a half-dozen megabudget movies are going to go crashing into the ground, and that's going to change the paradigm."

People complain that The Lone Ranger is boring, that it's almost totally devoid of fun except for the final 10 minutes, that it's ridiculously violent and yet inert. And all of these things are true — but you have to understand, it's all part of a calculated strategy, to sink far enough to burrow all the way to the infarcted heart of the terrible superhero origin story.

The goal is to show you who is to blame for the crappiness of so many superhero origin movies — you — and to punish you for allowing movies like The Lone Ranger to exist.

[...]

We tend to think of superhero movies as power fantasies, in which the use of America's status as a superpower is reflected by the hero struggling to use his or her power responsibly. But Lone Ranger seems to be making the case that the real seductive fantasy of these stories is absolution from blame — the Lone Ranger gets the Native American seal of approval from Tonto, as long as he's wearing the mask. He gets surcease from America's original sin.

Žižek's guide to ideology, Netflix tackles TV, digital dualist conservatism

Other ideological “masterpieces” that Žižek points to are much subtler, precisely because they occupy more prominent positions in the western cultural imaginary. He reads Jaws as a condensation of all the “foreign invaders” that privileged societies like upper-middle-class America worry will disrupt their peaceful communities. Part of what makes Fiennes’ film such a great showcase for Žižek’s approach to cultural studies is the persuasive effect of supplementing his explications with film clips. After listening to Žižek’s account of the ideological coordinates of the film, it’s difficult not to notice that all of the beach-goers scrambling to make it to the shore in one piece are affluent white Americans.

  • Writing for Memeburn, Michelle Atagana considers the strategies employed by Netflix in trying to “win television”. The strategies include producing original content, feeding binge habits, and using product placement.

If Netflix refines its model and signs on more shows, chances are it will make a formidable foe of big cable players such as HBO. The model that the company is currently working on could also be exported to film, essentially making the next cinematic experience available wherever, whenever and on whatever device the audience wants.

  • The Society Pages’ Cyborgology blog is one of my favorite resources for probing and provocative analysis of new media issues from a sociological perspective. One of the most interesting concepts considered by the blog's contributors is the notion of Digital Dualism. A recent post by Jesse Elias Spafford refines the digital dualism concept:

I posit that digital dualism, in fact, draws from both the ontological and the normative analyses. Specifically the digital dualist:

  1. Establishes an ontological distinction that carves up the world into two mutually exclusive (and collectively exhaustive) categories—at least one of which is somehow bound up with digital technology (e.g., that which is “virtual” vs. that which is “real”.)

  2. Posits some normative criteria that privileges one category over the other. (In most cases, it is the non-technological category that is deemed morally superior. However, charges of digital dualism would equally apply to views that favored the technological.)

After Earth's ideology, Assange on the new digital age, Voyager re-explored

  • The new M. Night Shyamalan film and Smith dynasty vehicle After Earth underperformed at the box office last weekend, opening in third place. Neither the film nor its box office numbers interest me, but elements of its inception and marketing are curious. Up until a few years ago Shyamalan's name featured prominently in promotional materials for his films (most recently in 2010 for the Shyamalan-directed Last Airbender and the Shyamalan-produced Devil). Yet during the months of promotion for After Earth the director's name wasn't mentioned. In a piece on the Mother Jones site Asawin Suebsaeng refers to Shyamalan as "he who must not be named", and asserts that the director's decline from "the next Hitchcock" to a "critical and pop-cultural punchline" made his association with the movie a liability for the studio:

Much in the same way that a marketing campaign will go out of its way not to use the word "gay" when promoting a film about two despondent gay cowboys, the marketing campaign for After Earth has gone out of its way not to mention the words "M. Night Shyamalan." That sort of tells you everything you need to know about how highly Sony thinks of the 42-year-old director and his current standing.

  • Other commentators have focused on what influence star Will Smith's affiliation with Scientology may have had on the film. (At the end of last year, when trailers for After Earth and Oblivion were both playing before new releases, I noted not only the similarity between the films' post-apocalyptic-Earth plots but also the fact that both movies starred prominent celebrity Scientologists.) The Hollywood Reporter ran an analysis of the film written by a former member of the church:

Will Smith’s character is pretty much devoid of all emotions for the entire movie. While this may be part of his character or something that was directed in the script, in Scientology one goes through great amounts of training and counseling to control one’s emotions and “mis-emotion,” as described by Hubbard. Anyone who has done even the smallest amount of Scientology training will recall sitting and staring at a person for hours on end without being allowed to blink, smile or turn one’s head. Will Smith pretty much masters that for the entirety of this movie.

Without being too obvious, Smith has delivered an incredibly mainstream platform for the Church's ideology. After Earth’s subtext makes every beat feel like a nod to the lessons of L. Ron Hubbard. Fleeing Earth to another planet only to return home mirrors the idea of thetan resurrection. The ship Cypher and Kitai take on their mission isn't that far off from the Douglas DC-8–esque ship that took Xenu's kidnapped souls to Earth. And the prominently advertised volcano that functions as a backdrop to a large After Earth set piece? Just look at the cover of Hubbard's book that started it all — Dianetics.

If After Earth were intentional propaganda, it would be an even bigger failure than it already is – the path to self-enlightenment is reduced to an overlong, tedious quest to find shit. Who wants to join that club? For the strong-willed, fear may be a choice, but for everyone else this weekend, avoiding boredom is an even clearer choice.

  • Perhaps The Onion's analysis has it right, and audiences found the gimmick of Smith-and-son starring in a movie "more annoying than appealing":

Let’s just say, for argument’s sake, that I was an average, everyday American consumer. Would I enjoy seeing an incredibly rich and famous man use his money and power to make his children incredibly rich and famous? Would I enjoy seeing the face of a young teenager plastered on movie posters across the entire nation, not because of who he is, but because of who his father is? To be totally honest, I’m not so sure I would. In fact, it’s conceivable that I might find it unbelievably infuriating and downright unbearable.
  • Julian Assange weighed in on Eric Schmidt and Jared Cohen's book The New Digital Age:

This book is a balefully seminal work in which neither author has the language to see, much less to express, the titanic centralizing evil they are constructing. “What Lockheed Martin was to the 20th century,” they tell us, “technology and cybersecurity companies will be to the 21st.” Without even understanding how, they have updated and seamlessly implemented George Orwell’s prophecy. If you want a vision of the future, imagine Washington-backed Google Glasses strapped onto vacant human faces — forever.
  • Finally, a look back at “Caretaker,” the pilot episode of Star Trek: Voyager:

“Caretaker” invokes ’90s environmentalism, a superpower’s role as world police, and two oppositional parties working together to run that superpower as best as they can, but it’s nothing so much as a reminder of Gene Roddenberry’s Prime Directive. Starfleet is expressly prohibited from interfering with the progress of pre-warp societies. The Caretaker’s species had no such guidelines and nearly wiped out a whole species. Now, Voyager has the task of upholding Alpha Quadrant standards in the absence of Alpha Quadrant hierarchy.
