Curry Chandler

Curry Chandler is a writer, researcher, and independent scholar working in the field of communication and media studies. His writing on media theory and policy has been published in the popular press as well as academic journals. Curry approaches the study of communication from a distinctly critical perspective, and with a commitment to addressing inequality in power relations. The scope of his research activity includes media ecology, political economy, and the critique of ideology.

Curry is a graduate student in the Communication Department at the University of Pittsburgh, having previously earned degrees from Pepperdine University and the University of Central Florida.


An Urban Media History

A Media History of the City

            A media history of the city could take any number of forms. The shape of this history would largely be determined by how we define its key terms. How should “the city” be understood? Such a history could begin in ancient or even pre-historic times, starting with the earliest human settlements and urban agglomerations. On the other hand, it would also be possible to select a single moment along this vast timeline and analyze that temporal snapshot to see how various media intersect with urban life. This history could even be a contemporary history of modern media practices and institutions and their role in the urban experience. The other key question is how “media” should be understood. What media should be included in our study? How inclusive or exclusive should our definition be? Depending on how expansive our definition is, our history could begin by looking at human settlements established in pre-literate societies, where spoken language was the primary communication medium. It could also examine the development of alphabets, and the role of writing media such as tablets, papyrus, and parchment in facilitating the construction and governance of cities. Or it could follow a traditional mass communication view of modern media. In contemporary New York City, place names such as Radio City Music Hall and Times Square attest to the impact that media of mass communication have made on urban spaces.

In order to limit the scope of this essay, I will frame my response as a curriculum overview for an imagined undergraduate course on media and the city. Framing the response in this way provides a rationale for defining the terms of our analysis and the range of history we can reasonably attempt. A typical U.S. undergraduate introductory course in media studies approaches its subject through the “big 5” traditional media: newspapers, magazines, film, radio, and TV. For the sake of this essay, and imagining a potential undergraduate course based on this subject, I will structure my response around these “big 5” traditional media. Following the structure of a typical undergraduate media course, the history I present will correspond to the history of mass communication in the United States. The history I offer here is mostly confined to the 20th century and focuses on U.S. cities. As such, the imaginary course I am outlining could be called “History of U.S. Media and Urbanization.” In what follows, I offer five key moments in this “media history of the city,” with each moment corresponding to one of the big 5 traditional media. Each entry will offer some historical information on the development of that media form, along with a case study that illustrates the intersection between media use and the life of the city. Finally, I will offer a sixth moment and case study that accounts for more recent developments in digital media and technological convergence, as well as salient aspects of urban life in the 21st-century metropolis.

Moment One: Penny Papers and Newsboys on Strike

Early colonial newspapers tended to be political in nature, part of what was called the “partisan press” as opposed to commercial papers. In the 1830s, technological developments associated with the industrial revolution allowed for new paper production practices: expensive handmade paper could be replaced by cheaper mass-produced paper. Before this change, newspapers cost about 6 cents per issue, which was relatively expensive for the time, so newspaper readers tended to be affluent. Using the less expensive production techniques, publishers could sell papers for as little as 1 cent. Thus the “penny press” or “penny papers” were born, and this is the moment when newspapers truly became a mass medium. Newspaper publishers had long relied on subscriptions for reliable sales, but in the penny press era individual street sales became an important part of the business model as well. One of the major penny press papers was the New York Sun, owned by Benjamin Day. Under Day’s stewardship, the Sun privileged accounts of the daily triumphs and travails of the human condition, what are now known as “human interest stories.”

The penny papers introduced many innovations that remain part of the newspaper industry today, including assigning “beat” reporters to cover special story topics such as crime, and shifting the economic basis for publishing from the support of political parties (as in the “partisan press” era) to the market. The penny press era gave rise to an increase in newspaper production with an emphasis on competitive, profitable papers. This economic environment set the stage for some of the most famous newspaper barons to enter the scene: Joseph Pulitzer bought the New York World in 1883, and William Randolph Hearst bought the New York Journal in 1895. Pulitzer pushed for the use of maps and illustrations in his papers so that immigrants who were not fluent in English could understand the stories. Both Pulitzer and Hearst used bold headlines and layouts to attract reader attention. These practices became emblematic of the yellow journalism period, a term that also connotes sensationalism and even unscrupulous journalistic standards. Pulitzer’s and Hearst’s papers did call for social reforms and drew attention to the miserable living conditions of poor immigrants in the cities; however, the papers also embellished stories, fabricated interviews, and staged promotional stunts in order to increase reader interest and boost circulation. In 1895, a conflict began that would go on to boost both papers’ fortunes. The island of Cuba had been a colony of Spain since the arrival of Columbus, and in 1895 an insurrection began against Spanish rule that would become known as the Cuban War of Independence. At the time Hearst and Pulitzer were engaged in a war of their own: a circulation war. Their papers used the conflict to sell copies and boost circulation, deriding Spain in headlines and calling for U.S. intervention. In 1898 the U.S. battleship Maine, sent to Cuba, exploded and sank in Havana harbor, killing hundreds of sailors.
The World and the Journal ran headlines like “Spanish Murderers” and “Remember the Maine,” and the Spanish-American War is still remembered as a prime example of propaganda in the U.S. media swaying public opinion in favor of war, even when facts were misrepresented or embellished.

Benjamin Day’s New York Sun did not offer a subscription service, relying instead solely on individual street sales. To distribute his papers, Day placed a wanted ad seeking workers to sell the newspapers on the street. Day expected adults to respond, but found instead that children inquired about the job. The first vendor he hired was a 10-year-old Irish immigrant who would take bundles of papers to a street corner and shout out the most arresting headlines to attract reader interest. This soon became a new and pervasive method of selling newspapers on city streets. These newspaper vendors or “hawkers” were also called newsboys or paperboys, although girls were often found in their ranks, as is evident in many of the photographs of child news vendors taken at the time. These children worked long hours, often through late nights and early mornings, sometimes sleeping on front stoops or in the street, something also attested to by photographs of the period. Vendors bought bundles of newspapers from the publishers and were not refunded for unsold papers. In 1899, in the wake of the boost in circulation precipitated by Spanish-American War coverage, many publishers raised the price of newsboy bundles from 50 cents to 60 cents. In response, in July 1899, newsboys refused to sell Pulitzer and Hearst papers. Newsboys demonstrated in the thousands and broke up newspaper distribution in the streets. One gathering blocked the Brooklyn Bridge, disrupting traffic across the East River as well as interrupting news circulation throughout the entire region. Pulitzer tried to hire adults to vend his newspapers, but they were sympathetic to the newsboys’ plight and refused to defy the strike. He did hire men to break up newsboy demonstrations and to protect newspaper deliveries. The newsboys asked the public not to buy any newspapers until the cost of bundles was lowered and the strike was resolved.
Eventually the publishers relented: although the cost of bundles was not decreased, they agreed to buy back unsold papers from the newsboys. The strike ended in August 1899, two weeks after it had begun.

The 1899 Newsboy Strike is a significant moment in the history of U.S. media, urban life, and labor relations. New York City was built by a great deal of immigrant labor, and many of these laborers were children; it is important to remember and acknowledge this part of U.S. urban history. The 1899 strike was credited with inspiring similar newsboy strikes in Butte, Montana and Louisville, Kentucky. It is an important story in the history of labor law reform in the U.S., even though it is not as well remembered as landmark events such as the Triangle Shirtwaist Factory fire. While the newsboy strike did not lead to the sort of immediate reforms that the Shirtwaist factory disaster did, it did influence the implementation of child labor laws in the city over the following decades. Furthermore, this case illustrates the practices of distribution and circulation that newspapers relied on, as well as the political economy of the media and its relationship to national and global politics.

Moment Two: Muckraking Magazines and the Shame of the Cities

            The modern magazine has decidedly “urban” roots. The word “magazine” originally referred to a storehouse for munitions. The first use of the term for a publication came in 1731 with The Gentleman’s Magazine, published in London; its publisher wrote under the pen name Sylvanus Urban, hence the medium’s “urban” roots. As with newspapers, developments of the industrial revolution such as conveyor systems and new printing processes allowed for less expensive manufacturing, so magazines could be sold more cheaply and reach a wider audience. Another significant development was the Postal Act of 1879, which reduced the postal rate for magazines to the same price as newspapers, making the cost of a magazine subscription affordable for more Americans. Additionally, more and more jobs and people were moving from rural areas to cities. As increasing numbers of immigrants came together in urban cores, national magazines helped facilitate the formation of national identities as opposed to local or regional ones. Relatedly, the growing number of dime stores, drug stores, and department stores created new venues for consumer items, and magazines offered new venues for advertising those items. Ladies’ Home Journal was known for running the latest consumer advertisements, and became the first magazine to reach a subscription base of one million readers, reflecting the growth of the female consumer base.

In addition to sustaining and reflecting the growing consumer economy, magazines also played an important role in social reform movements. Jane Addams reportedly first read about the settlement house movement in a magazine article (possibly in Century magazine). With her interest piqued, Addams and a friend soon travelled to London to visit the first settlement house, Toynbee Hall. The settlement house movement advocated the establishment of “settlement houses” in poor areas where middle-class volunteers would come and live, with the goal of alleviating conditions of poverty and creating solidarity among the social classes. Two years after visiting Toynbee Hall, Addams opened the first U.S. settlement house, Hull House in Chicago. Addams also wrote articles about the settlement house movement for magazines such as Ladies’ Home Journal and McClure’s. Another important role of magazines in social reform movements was related to photojournalism. Magazines could reproduce high-quality photographs, giving them a visual edge over other media of the day. In the late 1880s an immigrant to the U.S. named Jacob Riis, shocked at the living conditions in the New York City slums, purchased a detective camera to document life in these areas. Riis exhibited his photographs as part of a public lecture presentation called “The Other Half: How It Lives and Dies in New York.” The lectures became popular, and Riis wrote an article based on them for Scribner’s Magazine. His project was eventually published as the book How the Other Half Lives (1890).

The aforementioned McClure’s was a hotbed of reform-minded journalism at the turn of the 20th century. At the end of the 1800s the magazine had published exposés on the working conditions of miners and the corporate practices of the Standard Oil Company. In 1902 journalist Lincoln Steffens published the first article in a series on corruption in U.S. cities. Steffens first went to St. Louis and reported on the machinations of the local political machine. Next he went to Minneapolis, where he found the mayor and police chief colluding to take bribes from local houses of prostitution. Then he went to Pittsburgh (spelled Pittsburg at the time), writing that “if the environment of Pittsburg is hell with the lid off, the political scene in the city is hell with the lid on.” The final entries in the series were based on visits to Philadelphia, Chicago, and New York. The series was published in book form in 1904 as The Shame of the Cities. The articles made Steffens a national celebrity and inspired a wave of similar exposé articles in magazines, including Cosmopolitan’s “The Treason of the Senate.” Steffens’s articles became icons of the muckraker movement, so named by President Roosevelt after the man with the muck-rake in Pilgrim’s Progress, because these journalists raked through society’s muck to uncover their stories. The muckraking journalists are an important part of U.S. media history, and the social reform movements they fueled are an important part of U.S. urban history.

Moment Three: Movie Palaces and a Tale of One City

            As with newspapers and magazines, the development of motion pictures was closely tied to the technological and social changes of the industrial revolution. Developments in celluloid film, electric lighting, and the mechanical gears needed to turn film reels all contributed to the technological underpinnings of film as a mass medium. In France the Lumière brothers invented one of the earliest film cameras, and the first film they shot showed workers leaving their family factory in Lyon. In the U.S., Thomas Edison developed the kinetograph, and shortly thereafter established an association of film and technology producers called the Trust, a consortium of U.S. and French producers who agreed to pool film technology patents. Edison had also made an arrangement with George Eastman to make the Trust the exclusive recipient of Eastman’s motion picture film stock. To escape the control of the Trust, independent film producers left the traditional motion picture centers of New York and New Jersey. They went west, eventually settling in Southern California, which offered cheap labor, ample space, and a mild climate that allowed for year-round location shooting. Southern California became the center of the U.S. film industry, and Hollywood became a toponym for the U.S. studio system (and remains metonymic of that industry today). The Hollywood studio system was built on vertical integration: ownership of every stage of the moviemaking process. This included production (everything involved in making a movie), distribution (getting movies to theaters), and exhibition (screening the movies). Edison’s Trust had tried to get an edge on exhibition by controlling the flow of films to theaters; the Hollywood studios instead bought theaters themselves. The Edison Trust was eventually dissolved for antitrust violations, and the Hollywood studios came to control every part of movie production and circulation.
Paramount alone owned more than 300 theaters. During this period of film exhibition, movie studios built single-screen movie palaces, often ornate architectural achievements that offered a more hospitable viewing environment. Some of the most ornate and expansive movie palaces were built in Chicago. The Balaban and Katz theater chain built many of the most famous, including the landmark Chicago Theatre (originally called the Balaban and Katz Chicago Theatre). Other Chicago theaters built by the chain included the Oriental, the Riviera, and the Uptown. The Uptown was the largest movie palace built in the United States.

In 1906 a group of Chicago officials, designers, and business interests met to discuss the various problems facing the city. The Columbian Exposition of 1893 had been received as a great success, but now problems of overcrowding, congestion, and the growth of manufacturing in the city were causing concern. This group of stakeholders met over a period of 30 months, and in 1909 they finalized their agreed-upon plan. The Chicago Plan proposed sweeping improvements to the city, including rehabilitating the waterfront, redirecting railroad traffic, and redesigning streets to permit better flow in and out of the business district. The mayor signed off on the proposal and then ordered a massive public relations campaign to promote the plan. Informative lectures explaining the plan were held throughout the city, articles and editorials were published in the newspapers, and the proposals were even summarized in a textbook taught in city schools, so that a generation of Chicago schoolchildren grew up learning the values of the Chicago Plan. Also produced as part of this campaign was a two-reel film titled A Tale of One City, which was screened continuously in city movie theaters as part of the vigorous PR effort. Communication scholar James Hay has written about the role of the film in promoting the Chicago Plan as a significant moment in the history of urban renewal projects. The film’s place in the promotional campaign demonstrates the significance of the networks of film distribution and exhibition in reaching a mass audience, but also how the architectural design and central location of downtown theaters made movie theaters important sites for engaging the public and shaping the vision of future urban development.

The Paramount decision of 1948 ended vertical integration and required the studios to give up their theaters. This ended the era of studio control, but it opened up new venues for film screening, such as art houses that exhibited foreign films and documentaries, as well as hundreds of drive-in theaters for the millions of filmgoers who now had automobiles. As Americans moved to the suburbs, the movies did too, taking new forms in multiplexes and then megaplexes. While industry expressions such as “blockbuster” harken back to the role of downtown theaters in film exhibition (the term is often said to refer to patrons lined up “around the block” to get into a theater), most of the movie palaces have been repurposed, disused, or destroyed. Methods of film distribution and exhibition have changed significantly, and the downtown theaters and movie palaces have largely been replaced by suburban multiplexes. The example of A Tale of One City shows, however, that for a time downtown movie theaters played an integral part in the public life of the city.

Moment Four: Radio Remotes and Mediated Urban Nightlife

            The groundwork of popular broadcast radio was laid during the late 1800s. Developments in telegraphy and the theoretical proof of electromagnetic waves were among the chief milestones in this early history of the medium. The rise of the new medium of the airwaves was soon reflected in the built form of the U.S. metropolis, which was also turning increasingly skyward. By the 1920s and 30s radio broadcasters were transmitting from the Metropolitan Life building in Manhattan, and the Chrysler and Empire State buildings were designed and built with spires to serve as antennas for broadcasting radio transmissions.

In 1923 a nightclub called the Cotton Club opened in the Harlem neighborhood of New York. The Cotton Club was a whites-only establishment, even though it featured many of the premier African American performers of the time. In 1927 Duke Ellington and his band, the Washingtonians, opened at the Cotton Club. Not long after, a Manhattan-based radio station began broadcasting Ellington’s performances live from the club. Scholar Tim Wall has written about the Ellington remotes (the radio industry term for these live, on-location broadcasts) as occurring during a moment of transition for both radio and jazz: the technological, organizational, and cultural futures of the new medium were still being explored and negotiated. The broadcasting of jazz music was significant during this period as well. In 1929, the CBS network began carrying the Ellington performances through its New York flagship station WABC. CBS broadcast nationally, so now Ellington was being transmitted coast-to-coast. As Wall argues, the national broadcasting of jazz represented the intrusion of urban life and culture into the country. In 1930 another network, NBC, picked up the Ellington broadcasts, and the performances were heard on the flagship stations of NBC’s Red and Blue networks. These broadcasts grew Ellington’s fame, and he recorded more than a hundred compositions during this period. The Ellington broadcasts represent a significant moment in the regulatory history of radio, but also in the young medium’s attempts to establish a cultural role for its programming. The case of the Cotton Club remotes also shows how urban culture and performance, and especially African American culture, was mediated through the shifting systems of national radio networks.

Moment Five: Sitcom Suburbs and the Urban Crisis

            Television truly became a mass medium in the years following World War II. Housing subsidies and entrepreneurial real estate development privileged private suburban construction, and many Americans left urban centers for the suburbs, where taxes were lower. Home ownership doubled between 1945 and 1950. As Americans left the cities, and with them the downtown movie theaters, music halls, and other urban venues of recreation and entertainment, radio became a cheap alternative to the movies. The years 1948 and 1949 saw peak radio listenership; after that, television replaced radio as the dominant medium in the home.

In addition to the housing policies and subsidies that spurred suburban development, there were also many discriminatory housing policies designed to keep U.S. minorities from moving into suburban communities. This practice has been referred to as American apartheid, and it is one of the driving factors of the “urban crisis” that developed in U.S. urban life and discourse during this period. The scholar Dolores Hayden has used the phrase “sitcom suburbs” to refer to these homogenous developments, which were also depicted in many of the nationally popular sitcoms of the period. One early flare-up of these tensions happened in the Los Angeles area. In 1964, California voters passed a proposition that effectively repealed a fair housing act designed to remedy discriminatory policies that prevented black and Mexican Americans from buying and renting in certain areas. Shortly thereafter, in August 1965, riots began in the Watts district and lasted for six days. More riots occurred in U.S. cities in 1967, and again in April 1968 following the assassination of Martin Luther King Jr. In each of these cases, the U.S. news media broadcast TV images that have become iconic of the riots and of the overall “urban crisis” that came to dominate discourses on U.S. cities for decades.

Following the riots, President Johnson appointed a special commission to investigate the causes of the unrest and to recommend ways of preventing future outbreaks. The Kerner Commission detailed several contributing factors, including explicit and implicit racism and housing discrimination. The commission also faulted the news media for coverage that misrepresented the facts of life in these cities and contributed to a deepening of divisions between white and black Americans. The Kerner Commission’s concerns were echoed by media theorist George Gerbner in his cultivation theory of television, which posits that increased exposure to violent TV programming cultivates a worldview in which viewers perceive reality to be more dangerous than it really is. This period of urban fear and flight, of the move to fortified homes and gated communities, has analogous developments in media coverage that continue up to today.

Moment Six: Oppa Gangnam Style

            Our history so far has taken us from 1899 to 1968. In this last section, let us catch up on some of the developments of the decades since. Developments in microprocessor technology led to a computer revolution. Beginning in the 1980s, home computers became more popular and were predicted to revolutionize daily life. Graphical user interfaces allowed everyday, non-technical users to approach computers. In the late 1960s the U.S. Defense Department began researching a redundant communication system that could remain intact following a nuclear attack; that project, ARPAnet, eventually developed into the Internet. Web browsers and HTML, beginning with Tim Berners-Lee’s “WorldWideWeb” in 1990, enabled the Internet to become a mass medium. The computer revolution has also led to unprecedented technological convergence: computers connected to the Internet have access to the full array of media content. Developments in smartphone technology have changed what was once merely a phone into a mobile computing device and site of media convergence, and increasingly the favorite device for media consumption and production.

On December 21, 2012, a milestone was reached: the music video for Psy’s “Gangnam Style” became the first video on YouTube to receive one billion views. The case of “Gangnam Style” can tell us a lot about the state of the mass media industries, as well as the state of cities, in our present moment. Psy is a K-pop act (“K-pop” being short for Korean pop, a genre originating in South Korea). His global popularity points to the importance of transnational media flows in the contemporary media environment, much as the Chinese box office has become increasingly important to the Hollywood studio system. The fact that his popularity spread globally via the Internet also indicates the significance of media convergence, how digital platforms for media circulation have upset the traditional forms of media dissemination, and how they have changed our metrics for gauging media success (YouTube views versus box office receipts, Nielsen ratings, or circulation numbers).

“Gangnam Style” also tells us a lot about cities in the early 21st century. The title of Psy’s song refers to the Gangnam district of Seoul, South Korea, a hip and trendy neighborhood known for its affluence. This association, and the apparently mocking portrayal of lavish lifestyles in the music video, have led some commentators to interpret the song as a satirical and subversive critique of conspicuous consumption, although Psy’s own comments about the song do not support these interpretations. Regardless, the Gangnam Style example can help illustrate the valorization of cities that has been a trend of post-industrial economics and postmodern cultural practices. In the 1970s New York City went through a fiscal crisis: city services were sparse, the city government nearly went broke, and crime and visible disorder reached peak levels. As part of the city’s recovery and repositioning, the “I ♥ NY” branding campaign appeared. This campaign has remained hugely popular, and it is representative of a postmodern consumption of the symbolic capital of cities. Another salient example would be the tote bags sold by American Apparel printed simply with the names of global cities (Madrid, Tokyo, London, etc.). These cultural products, and the Gangnam Style song, are indicative of a revanchist return of capital to city centers. These examples, and indeed neighborhoods such as Seoul’s Gangnam, also point to the role of gentrification as a global urban strategy for development. In this way, Gangnam Style can serve as a vehicle for addressing some of the most pressing issues facing urban citizens today.

Public space, the public sphere, and the urban as public realm

This essay was originally written as part of my PhD comprehensive examinations. It was written to address connections between theories of the public sphere and concerns about public space, and to conceptualize the urban environment as a public realm. 

Introduction

Questions of space have always been implicated with the concept of the public sphere, but the idea of space has been conceptualized and applied in various ways within this context. Carragee’s challenge for scholars to address the nexus between public sphere theory and the study of public space has a solid foundation in the pertinent implications for civic life, attempts to connect academic perspectives and planning disciplines, and his own analysis of the impact of urban design on the character of public interaction. I agree with Carragee’s assertion that a vital public sphere requires vital public spaces. I am less inclined to agree with his claim that communication scholars have been silent on the issue, as there have been moves to address the communicative implications of the built environment through approaches such as material rhetoric. Nevertheless, it is worthwhile to consider how scholars of communication and other fields have approached this nexus, and how this line of inquiry might be extended.

To properly address this question about the relationship between public space and the public sphere, it is helpful to define our terms, as both “public space” and “public sphere” have been used by different actors to signify different meanings. The Habermasian formulation of the public sphere posited a novel form of social interaction facilitated by a network of institutions comprising physical locations and mediated discourses. Following this model, scholars have understood the public sphere as a discursive space rooted in place-based communication as well as mediated exchanges. Catherine Squires has defined the public sphere as “a set of physical and mediated spaces” in which people come together to identify, express, and deliberate interests of common concern. Nancy Fraser has characterized the public sphere as “a theater” for social interaction where political activity is actualized through the medium of speech. The public sphere can also be understood as a particular kind of relationship among participants, mediated by historical forms of sociability enacted at specific points in space and time. Kurt Iveson refers to public spheres as “social imaginaries” that are always in the process of being formed. The public sphere has also been understood procedurally (or processually), as a normative ideal founded on a set of principles intended to guide interaction.

The meaning of “public space” may seem obvious, but this term too has been conceptualized in a variety of ways. Notions of “public space” can be rooted in the physical characteristics of a location, the institutional structures and policies affecting a place, or the types of uses and activities undertaken in the space. Seyla Benhabib offers a procedural definition of public space. Understood procedurally, “public space” is any space that, through public address at a particular time, is transformed into a site of political action through speech and persuasion. In Benhabib’s formulation, “public space” is not merely “open” space, or physical, absolute, geographical space; indeed, public space is never merely space in this physical sense. This approach to public space contrasts with Carragee’s view of public space as material, empirical, and concrete, as opposed to the public sphere, which he sees as more conceptual and virtual. In Benhabib’s procedural definition these realms are not so clearly distinguished from one another.

This essay will further explore influential notions of public space, the public sphere, and their relationship to one another. The first section reviews significant and influential approaches to this nexus as represented by three prominent theorists. The second section looks at how the contemporary city has figured as a key referent in discussions of public space and the public sphere. The third section considers how the introduction of networked communication technologies has complicated understandings of this relationship. Finally, I conclude with some contemporary issues facing work in this area.

Three Models of the Public Realm: Arendt, Habermas, Sennett

Hannah Arendt was a political theorist who wrote about power, authority, totalitarianism, and democracy. In one of her best known and most influential works, “The Human Condition,” she surveyed different conceptions and enactments of human activity beginning in ancient societies. The second section of the book is dedicated to “the private and public realms.” According to Arendt, life in ancient Greek society was divided between the private realm and the public realm. The private realm was the sphere of the household, and the public realm was the site of “action.” Activity in the private realm was preoccupied with bodily necessities, whereas the public realm was free of these necessities; there one could distinguish oneself through great works and deeds. Arendt further proposes a dichotomy of human life based on the concepts of “zoe” and “bios.” Both words mean “life,” but Arendt uses them to distinguish two modes of human activity: the animalistic (zoe) and the humanistic (bios). This distinction between zoe and bios is connected to Arendt’s notion of life in the market versus public space, which she also refers to as the private realm (oikos) and the public realm (polis). Arendt considers the market an impoverished place where subjects are treated as animals, mere consumers driven to satisfy bodily and selfish needs. In the context of the oikos, one’s human identity and individuality is of no importance: in order to purchase a commodity, you need only pay the appropriate price, regardless of who you are. In the public realm, by contrast, the individual identity of each subject acquires prominence. Through public discussion, subjects or speakers are recognized as unique human beings who cannot be exchanged for anyone else. Without language, human beings live on the level of “laboring animals,” merely concerned with sustaining their lives.
Through the medium of linguistic communication, humans open themselves up to the existence of others as well as the existence of a world that is shared with others. This then is the key idea in Arendt’s distinction between the private and public realms: people live privately as animals, and as humans only in public. Arendt valorizes the types of relations in ancient cities such as Athens, but she distinguishes between the built environment and the polis. She says that the polis, properly understood, does not refer to the physical city-state but to the relations that emerge from acting and speaking together, regardless of where the participants are. “Not Athens, but Athenians, were the polis.”

Jurgen Habermas defined the public sphere as “the sphere of private individuals come together as a public.” Like Arendt, he considers this “public” relation as rooted in, and a consequence of, discourse and communication. Habermas’ notion of the public sphere is based on an empirical study of voluntary social associations and literary practices that emerged in Europe in the 18th century. The emergence of a “debating public” and an ethos of local governance were tied to the development of “provincial urban” institutions such as coffee houses, salons, and theaters. Habermas’ study of the bourgeois public sphere is not only an account of specific historical phenomena; it also represents a normative ideal for rational-critical debate and deliberative politics. As such, Habermas’ theory has been interpreted as distinctly aspatial, concerned not with physical spaces but only with an abstract discursive space. Several critics have argued that in order for Habermas’ theory to function as both a historical social explanation and a normative political ideal, as his study proposes, it must be grounded in an understanding of the situated contexts of specific communities.

Richard Sennett is an urban sociologist who has written extensively on city design, public life, and civic engagement. His first book, “The Uses of Disorder,” argued that excessively ordered environments stifle personal development, and that people who live in such environments end up with overly rigid worldviews and insufficiently developed political consciousness. Sennett calls for practices of city design that allow for unpredictability, anarchy, and creative disorder that will foster adults better equipped to confront the complexities of life. In “The Conscience of the Eye” Sennett suggests that the built forms of modern cities are bland and neutralized spaces that diminish contact and wall people off from encounters with the Other. His remedy for this condition is a creative art of exposure to others and city life that should instill an appreciation for and empathy with difference. “A city,” Sennett says, “should be a school for learning to live a centered life.” Sennett’s book “The Fall of Public Man” outlines the decline of public life since the 18th century. In the 18th century, Sennett argues, public and private space were more clearly delineated than today. The disappearance of public space in the 20th century is attributed to a rise in intimacy and narcissism associated with industrial capitalism. In an essay titled “The Public Realm,” Sennett situates his approach to public life in relation to Arendt and Habermas. Sennett describes Arendt’s model of the public realm as inherently political and based on public deliberation in which participants discard their private interests. He calls Arendt the champion of the urban center “par excellence,” as the population density of urban centers provides the condition of anonymity that he sees as central to Arendt’s ideal. Sennett considers Habermas less interested in place than Arendt, as his theory includes mass produced texts such as newspapers as sites for the public sphere. 
For Habermas, Sennett states, the public realm is “any medium, occasion, or event” that facilitates free communication among strangers. Regarding his own approach, Sennett defines the public realm as “a place where strangers can come together.” He emphasizes that the public realm is a place, traditionally understood as a location on the ground, but notes that developments in communication technologies have challenged this sense of place. Today “cyberspace” can function as a public realm as much as any physical place. Sennett also argues that “the public realm is a process.” As is evident in the arguments from his books summarized above, Sennett believes that shared spaces accommodating unplanned and unmanaged encounters between strangers are beneficial for personal and social development. His emphasis on incompleteness and process, as opposed to fixity and determination, recalls Chantal Mouffe’s concept of agonistic pluralism. Mouffe challenges the Habermasian ideal of deliberation as consensus reached by rational individuals, arguing that for freedom to exist the intrusion of conflict must be allowed for. The democratic process, Mouffe says, should provide an arena for the emergence of conflict and difference. Similarly, Sennett says that daily experience doesn’t register much without “disruptive drama.”

The Modern City as Public Realm

In her book “Justice and the Politics of Difference,” political theorist Iris Marion Young writes of city life as a normative ideal for communicative and political interaction. Young states that urbanity must be understood as an inherent aspect of life in advanced industrial societies, and that the material environment and structures available to us presuppose the forms of interaction that occur in these spaces. By “city life” Young refers to a type of social relation that she describes as “a relation among strangers.” Urban experience, and in particular urban space, provides the ideal conditions for the exposure to different lives on which a politics of difference should be predicated. Young states that public spaces are crucial for open communicative democracy.

In “City of Rhetoric,” rhetorical scholar David Fleming argues that the city is the ideal context for the revitalization of the public sphere. He proposes an ideal space of relation situated between the intimacy of friends and family, on the one hand, and the mutual suspicion of strangers on the other. Fleming argues that the built environment and public space of the city are perfectly positioned between users, relating and separating them at the same time.

Don Mitchell has written about the “disappearance of public space” in the modern city. In a vein similar to influential critiques of the Habermasian public sphere, Mitchell states that the ideal of public space “open to all” has never described an existing state of affairs, yet the ideal circulates to powerful effect. For instance, Mitchell says, the circulation of the “open” public space ideal has served as a rallying cry for successive waves of political movements seeking to use space for activism and inclusionary ends.

Mediated Spaces and Mediated Spheres

Since Habermas’ formulation the idea of the public sphere has included elements of mediation. Habermas directly implicates the mass media in “The Structural Transformation,” citing the role of literature and the press in establishing the bourgeois public sphere, and the impact of television and other commercial mass media in diminishing the public sphere. The advent of the World Wide Web in the 1990s spawned enthusiasm from some regarding the deliberative and participatory potential of the medium. To some, the Internet seemed to realize all the ideals of Habermas’ public sphere. It was universal, non-hierarchical, based on uncoerced communication, and enabled public opinion formation based on voluntary deliberation. By these principles, and many others, the Internet looked like the realization of the ideal speech situation. Iveson suggests that the procedural understanding of public space allows various media to be understood as “public spaces” because they facilitate the formation of publics. Other scholars have considered media as new “spaces” for interaction. Sheller and Urry have compared new media to Arendt’s “space of appearances,” suggesting that in the digital age this “space” may be a “screen” on which public matters appear.

Still other scholars have offered opposing accounts of the relationship between virtual spaces and the ideals of the public sphere. Don Mitchell has argued that the Internet can never meet or surpass the street as a public space, because the infrastructure of the medium precludes certain uses and political opportunities. Public space remains crucial because it allows disadvantaged groups to occupy space in a way that is precluded in virtual space. This space is especially important for homeless people because it is also a space to be and live in: a space for living, not merely for visibility. Iris Marion Young also addresses the distinction between physical space and virtual space with her concept of “embodied public space.” She says that media can facilitate public address and the formation of publics, and in this sense the public sphere is not dependent on physical space. Nevertheless, to the extent that public space is shrinking, or that individuals are withdrawing from it, there is a democratic crisis. She uses the term “embodied public space” to refer to streets, squares, plazas, parks, and other physical spaces of the built environment that she deems crucial for allowing access to anyone and enabling encounters with difference. These spaces allow varieties of public interaction that are fundamental to her notion of city life as a normative ideal.

Jodi Dean has persistently criticized the “inclusionary ideal” attributed to the internet, an ideology of technocracy that she calls “communicative capitalism.” Dean’s article “Why the Net is Not a Public Sphere” challenges claims that the Internet can realize the ideals of the public sphere. In the public sphere ideal, communicative exchange is supposed to provide the basis for real political action. Under conditions of communicative capitalism, these exchanges function merely as circulating content rather than as messages to be responded to. Political scientist Robert Putnam posited a decline of social capital in U.S. communities since 1950 in his book “Bowling Alone.” Putnam cites evidence of civic decline in decreased voter turnout, public meeting attendance, and committee participation. The book’s title refers to the fact that while the number of Americans who bowl has increased in recent decades, the number who participate in bowling leagues has declined. He attributes this fall in social capital to the “individualizing” of leisure time enabled by television and the Internet. Sherry Turkle has similarly argued that technology promotes a decline of physical proximity and interaction in her book “Alone Together.” Iveson has responded to such criticisms by arguing that the “stage” and “screen” (or “print” and “polis”) should not be seen as mutually exclusive arenas. Rather, he points to examples where movements based on co-present interaction were facilitated through, managed by, or arranged around mediated forms of interaction.

Conclusion

There are several areas where continued research into the relationship between public space and the public sphere could be productive. First, it is important to consider how networked technology and mediated communication have changed the use of public space. Have the dispersed networks of power, access, and participation diminished the potency of public space for realizing political agency? Are these changes reversing Arendt’s formulation of the public and private realms? Has the logic of the market short-circuited the function of the polis? Have new uses of public space emerged, and have traditional uses disappeared? It is now common for bodies to occupy physical space while their gaze and consciousness are directed not at their environment but at their various devices. How does this change our understanding of and approach to public and shared spaces? What does this mean in relation to Mitchell’s and Young’s arguments about the role of “embodied public space”? In light of the pervasive mediation of daily life, it is important to affirm the fundamental importance of physical locations as public space.

Secondly, it is important not just to consider physical and virtual space in a dichotomous relationship, but also to consider how they interrelate. How are digital technologies and mediated communication intersecting with the use of public space, and vice versa? To be clear, the phenomena at the core of this question are not new: Habermas’ model of the bourgeois public sphere concerns the relation between mass media and association in public space. More recently, the political uprisings collectively referred to as the “Arab Spring” brought renewed attention to this issue. After social media and text messaging were used to organize demonstrations in Cairo, Egypt that eventually led to the removal of President Mubarak, pundits and media theorists began referring to the movement as a “Twitter revolution.” Again, it is important to differentiate between the means of communication used to exchange information and organize bodies, and the site of political protest, represented in this case by Tahrir Square.

Finally, the implementation of information technology in the built environment is raising questions about the role of technologies in public space and civic life. In a November 2016 article, urban media scholar Shannon Mattern considered this issue in relation to the implementation and subsequent shuttering of the LinkNYC terminals in New York City. The LinkNYC initiative replaced telephone booths on the sidewalks of Manhattan with kiosks providing access to electricity and wireless internet service. The city government promoted the terminals as places where tourists could access maps and online information and New Yorkers could charge their cell phones. The resultant “misuse” of these terminals, exemplified by people using the service to watch pornography or illicitly download media, led to the program being suspended indefinitely. Mattern uses this example to argue for the importance of “vital spaces of information exchange” in our public spaces. She suggests that ideologies of “data solutionism” have influenced planning commissions to the detriment of the small, local, and analog data perspectives that she considers essential to urban life. Mattern encourages city planning boards and project committees to include librarians and archivists in their ranks in the interest of such spaces of information exchange. At stake, Mattern argues, is the nature and well-being of our democracy.

These are just a few of the issues and questions that I think should inform future research into the relationship between public space and the public sphere. My own work is informed by these questions, and my interest in “smart city” policies and practices of implementation seeks to extend and challenge the conceptual zones outlined in this essay. Related questions explored in my research include: changing conceptions of public and private infrastructure; shifting models of civic engagement; and the predominance of market rationalities and discourses in (re)shaping the built environment. These questions are likely to only increase in prominence in the foreseeable future, and unforeseen developments are always arising. The essential questions of public space and the public sphere, however, will remain of crucial importance in our increasingly interconnected collective lives.

Video: Marshall Arts - McLuhan and media scholars

The Institute of General Semantics has recently posted videos of presentations given at the 2011 General Semantics Symposium. Included is my presentation: "Marshall Arts: Retrieving McLuhan for Communication Scholars". This was my first conference presentation, and the paper eventually became my first academic publication. The focus of my work has shifted considerably in the time since, but this was a personal milestone and I enjoyed being able to revisit it four years on. You can watch the talk, along with others from the symposium, through the official IGS Youtube channel, and via the embed below:

Wound Culture and Public Space: Mark Seltzer's concept of the pathological public sphere

Mark Seltzer: Serial Killers (II): The Pathological Public Sphere

Critical Inquiry, Vol. 22, No. 1 (Autumn, 1995), pp. 122-149

Seltzer’s essay on serial killers and the pathological public sphere immediately calls J.G. Ballard to mind. Eventually Seltzer does cite Ballard, but in reference to The Atrocity Exhibition, a selection that renders the omission of Ballard’s subsequent novel, Crash, all the more conspicuous (Crash was adapted into a film by David Cronenberg in 1996, the year after Seltzer’s article was published). The article’s introductory anecdote about Sylvestre Matushka, who engineered train wrecks and claimed to achieve sexual satisfaction only when witnessing these accidents, is obviously evocative of Crash. Ballard’s story follows characters who are sexually excited by car crashes, and who stage car accidents and recreate famous wrecks. Seltzer cites The Atrocity Exhibition in order to borrow Ballard’s phrase and relate it to his own notion of the pathological public sphere: “spectacular corporeal/machine violence, a drive to make mass technology and public space a vehicle of private desire in public spectacle: the spectacles of public sex and public violence” (p. 124). Though he never refers to Crash, Seltzer’s language here could have come directly from the book’s dust jacket: “The coupling of bodies and machines is thus also, at least in these cases, a coupling of private and public spaces” (p. 125).

Seltzer’s argument is also evocative of a different Crash: the identically titled but textually dissimilar 2004 film exploring race relations in contemporary Los Angeles through the interweaving of multiple characters and plotlines. Los Angeles is famous for its iconic freeway system, and the city is often regarded as the apotheosis of car culture, an alternately visionary or dystopic manifestation of car-dependent society. The film Crash uses the city’s freeway network as a thematic device, beyond the relation of the story’s interweaving plot threads and intersecting characters to the on-ramps and cloverleaf interchanges of L.A.’s freeways as seen from above. The film opens at the scene of a car accident on one of these L.A. freeways, and the first lines of dialogue (spoken by a character riding in a car involved in the accident) establish the thematic significance of the film’s Los Angeles setting:

Graham: It's the sense of touch. In any real city, you walk, you know? You brush past people, people bump into you. In L.A., nobody touches you. We're always behind this metal and glass. I think we miss that touch so much, that we crash into each other, just so we can feel something.

Compare this sentiment with these words of serial killer Ted Bundy quoted in Seltzer’s article:

“Another factor that is almost indispensable to this kind of behavior is the mobility of contemporary American life. Living in a large center of population and living with lots of people, you can get used to dealing with strangers. It’s the anonymity factor.” (p. 133)

Seltzer does cite a Los Angeles-based film in his discussion of public and private space: the action-thriller Speed, a sort of wish-fulfillment Hollywood fantasy for Angelenos where the city’s congested freeways are cleared of all traffic and the hero’s speedometer never drops below 50 miles per hour. Seltzer notes the film’s use of “public vehicles of what might be called stranger-intimacy” (p. 125): elevators, buses, airplanes, and the city subway system. Seltzer’s highlighting of transit systems to illustrate the collisions of public and private space resonated with my own research in this area. Seltzer cites urban sociologist Georg Simmel’s account of “the stranger” in urban life; Simmel’s theories have influenced a great deal of urban studies, including theories of transportation and public space.

Toiskallio (2000) applied Simmel’s concept of sociability to an analysis of “the interaction between the taxi driver and the fare as an example of an intensive urban semi-public situation where feasible and face-saving social interaction is needed” (p. 4). The term “semi-public” refers to spaces that are neither fully public nor totally private, just as taxicabs are neither public nor private transportation but “paratransit” (p. 8). Such distinctions are further complicated by the recent advent of “car-share” or rideshare services such as Uber and Lyft. These services are essentially hired car services, and function much like taxicabs, but with significant differences. Most relevant to the current discussion is the fact that rideshare drivers do not drive company vehicles as taxi drivers do, but operate their private vehicles to transport customers. This arrangement transforms a person’s private car into a space of stranger-intimacy. There are consequences here not only for transformations of public and private space, but also for the coupling of bodies and machines, as well as implications for affective labor and transportation services.

Memes, Enthymemes, and the Reproduction of Ideology

In his 1976 book The Selfish Gene, biologist Richard Dawkins introduced the word “meme” to refer to a hypothetical unit of cultural transmission. The discussion of the meme concept was contained in a single chapter of a book that was otherwise dedicated to genetic transmission, but the idea spread. Over decades, other authors further developed the meme concept, establishing “memetics” as a field of study. Today, the word “meme” has entered the popular lexicon, as well as popular culture, and is primarily associated with specific internet artifacts, or “viral” online content. Although this popular usage of the term is not always in keeping with Dawkins’ original conception, these examples from internet culture do illustrate some key features of how memes have been theorized.

This essay is principally concerned with two strands of memetic theory: the relation of memetic transmission to the reproduction of ideology, and the role of memes in rhetorical analysis, especially in relation to the enthymeme as a persuasive appeal. Drawing on these theories, I will advance two related arguments: that ideology as manifested in discursive acts can be considered to spread memetically, and that ideology functions enthymematically. Lastly, I will present a case study analysis to demonstrate how methods and terminology from rhetorical criticism, discourse analysis, and media studies can be employed to analyze artifacts in light of these arguments.

Examples of memes presented by Dawkins include “tunes, ideas, catch-phrases, clothes fashions, ways of making pots or building arches” (p.192). The name “meme” was chosen due to its similarity to the word “gene”, as well as its relation to the Greek root “mimeme” meaning “that which is imitated” (p.192). Imitation is key to Dawkins’ notion of the meme because imitation is the means by which memes propagate themselves amongst members of a culture. Dawkins identifies three qualities associated with high survival in memes: longevity, fecundity, and copying-fidelity (p.194).

Distin (2005) further developed the meme hypothesis in The Selfish Meme. Furthering the gene/meme analogy, Distin defines memes as “units of cultural information” characterized by the representational content they carry (p.20), and the representational content is considered “the cultural equivalent of DNA” (p.37). This conceptualization of memes and their content forms the basis of Distin’s theory of cultural heredity. Distin then seeks to identify the representational system used by memes to carry their content (p.142). The first representational system considered is language, what Distin calls “the memes-as-words hypothesis” (p.145). Distin concludes that language itself is “too narrow to play the role of cultural DNA” (p.147).

Balkin (1998) took up the meme concept to develop a theory of ideology as “cultural software.” Balkin describes memes as “tools of understanding,” and states that there are “as many different kinds of memes as there are things that can be transmitted culturally” (p.48). Stating that the “standard view of memes as beliefs is remarkably similar to the standard view of ideology as a collection of beliefs” (p.49), Balkin links theories of memetic transmission to theories of ideology. Employing metaphors of virality similar to other authors’ descriptions of memes as “mind viruses,” Balkin considers memetic transmission as the spread of “ideological viruses” through social networks of communication, stating that “this model of ideological effects is the model of memetic evolution through cultural communication” (p.109). Balkin also presents a more favorable view than Distin’s of language as a vehicle for memes, writing: “Language is the most effective carrier of memes and is itself one of the most widespread forms of cultural software. Hence it is not surprising that many ideological mechanisms either have their source in features of language or are propagated through language” (p.175).

Balkin approaches the subject from a background in law, and although he is not a rhetorician and is skeptical of the discursive turn in theories of ideology, he does employ rhetorical concepts in discussing the influence of memes and ideology: “Rhetoric has power because understanding through rhetorical figures already forms part of our cultural software” (p.19). Balkin also cites Aristotle, remarking that “the successful rhetorician builds upon what the rhetorician and the audience have in common,” and that “what the two have in common are shared cultural meanings and symbols” (p.209). In another passage, Balkin expresses a similar notion of the role of shared understanding in communication: “Much human communication requires the parties to infer and supplement what is being conveyed rather than simply uncoding it” (p.51).

Although Balkin never uses the term, these ideas are evocative of the rhetorical concept of the enthymeme. Aristotle himself discussed the enthymeme, though the concept was not elucidated with much specificity. Rhetorical scholars have since debated the nature of the enthymeme as employed in persuasion, and Bitzer (1959) surveyed various accounts to produce a more substantial definition. Bitzer’s analysis comes to focus on the enthymeme in relation to syllogisms, and the notion of the enthymeme as a syllogism with a missing (or unstated) proposition. Bitzer states: “To say that the enthymeme is an ‘incomplete syllogism’ – that is, a syllogism having one or more suppressed premises – means that the speaker does not lay down his premises but lets his audience supply them out of its stock of opinion and knowledge” (p.407).

Bitzer’s formulation of the enthymeme emphasizes that “enthymemes occur only when the speaker and audience jointly produce them” (p.408). That they are “jointly produced” is key to the role of the enthymeme in successful persuasive rhetoric: “Owing to the skill of the speaker, the audience itself helps construct the proofs by which it is persuaded” (p.408). Bitzer defines the “essential character” of the enthymeme as the fact that its “premises are always drawn from the audience” and that its “successful construction is accomplished through the joint efforts of speaker and audience.” This joint construction, and the supplying of the missing premise(s), resonates with Balkin’s view of the spread of cultural software, as well as with various theories of subjects’ complicity in the functioning of ideology.

McGee (1980) supplied another link between rhetoric and ideology with the “ideograph”. McGee argued that “ideology is a political language composed of slogan-like terms signifying collective commitment” (p.15), and these terms he calls “ideographs”. Examples of ideographs, according to McGee, include “liberty,” “religion,” and “property” (p.16). Johnson (2007) applies the ideograph concept to memetics, to argue for the usefulness of the meme as a tool for materialist criticism. Johnson argues that although “the ideograph has been honed as a tool for political (“P”-politics) discourses, such as those that populate legislative arenas, the meme can better assess ‘superficial’ cultural discourses” (p.29). I also believe that the meme concept can be a productive tool for ideological critique. As an example, I will apply the concepts of ideology reproduction as memetic transmission, and ideological function as enthymematic, in an analysis of artifacts of online culture popularly referred to as “memes”.

As Internet culture evolved, users adapted and mutated the term “meme” to refer to specific online artifacts. Though they may all be considered a type of online artifact, Internet memes come in a variety of forms. One of the oldest and most prominent is the “LOLcats” series of image macros. The template established by LOLcats, superimposing humorous text over static images, became and remains the standard format for image macro memes. Two of the most prominent series of this type are the “First World Problems” (FWP) and “Third World Success” image macros. Through analysis of these memes, it is possible to examine how the features of these artifacts and discursive practices demonstrate many of the traits of memes developed by theorists, and how theories of memetic ideological transmission and enthymematic ideological function can be applied to examine the ideological characteristics of these artifacts.

 

References

Balkin, J. M. (1998). Cultural software: A theory of ideology. New Haven, CT: Yale University Press.

Bitzer, L. F. (1959). Aristotle’s enthymeme revisited. Quarterly Journal of Speech, 45(4), 399-408.

Dawkins, R. (2006). The selfish gene. New York, NY: Oxford University Press. (Original work published 1976)

Distin, K. (2005). The selfish meme: A critical reassessment. New York, NY: Cambridge University Press.

McGee, M. C. (1980). The “ideograph”: A link between rhetoric and ideology. Quarterly Journal of Speech, 66(1), 1-16.

McLuhan and Mad Men

  • The final episode of acclaimed TV series Mad Men aired this week. I've not seen any of the show (though now that the series is complete it is ripe for binge-watching), but I did appreciate this piece from Stephen Marche at Esquire, analyzing Mad Men through Marshall McLuhan's media theory (spoilers if, like me, you're not caught up with the show):

I sometimes wonder when I'm watching Mad Men, if and when the various characters read the passage above, from Marshall McLuhan's Understanding Media, which came out in 1964. Of all the great sixties cultural icons that are missing from Mad Men—and some of the absences can be glaring—I've always found the lack of any mention of media writer and thinker McLuhan the most inexplicable. Maybe he was just too close to the bone.

McLuhan is the perfect guide to Mad Men for one obvious reason: He loved advertising. He was among the first to celebrate unreservedly what he called "the Madison Avenue frog-men-of-the-mind." The business of trying to sell people more stuff neither frightened nor appalled him. He didn't look down on it, as so many of his contemporaries did.

Critical perspectives on the Isla Vista spree killer, media coverage


Reuters/Lucy Nicholson

  • Immediately following Elliot Rodger's spree killing in Isla Vista, CA last month, Internet users discovered his YouTube channel and a 140-page autobiographical screed, dubbed a "manifesto" by the media. The written document and the videos documented Rodger's sexual frustration and his chronic inability to connect with other people. He specifically lashed out at women for forcing him "to endure an existence of loneliness, rejection and unfulfilled desires" and causing his violent "retribution". Commentators and the popular press framed the killings as an outcome of misogynistic ideology, with headlines such as: How misogyny kills men, further proof that misogyny kills, and Elliot Rodger proves the danger of everyday sexism. Slate contributor Amanda Hess wrote:

Elliot Rodger targeted women out of entitlement, their male partners out of jealousy, and unrelated male bystanders out of expedience. This is not ammunition for an argument that he was a misandrist at heart—it’s evidence of the horrific extent of misogyny’s cultural reach.

His parents saw the digitally mediated rants and contacted his therapist and a social worker, who contacted a mental health hotline. These were the proper steps. But those who interviewed Rodger found him to be a “perfectly polite, kind and wonderful human.” They deemed his involuntary holding unnecessary and a search of his apartment unwarranted. That is, authorities defined Rodger and assessed his intentions based upon face-to-face interaction, privileging this interaction over and above a “vast digital trail.” This is digital dualism taken to its worst imaginable conclusion.

In fact, in the entire 140-odd-page memoir he left behind, “My Twisted World,” documents with agonizing repetition the daily tortured minutiae of his life, and barely has any interactions with women. What it has is interactions with the symbols of women, a non-stop shuffling of imaginary worlds that women represented access to. Women weren’t objects of desire per se, they were currency.

[...]

What exists in painstaking detail are the male figures in his life. The ones he meets who then reveal that they have kissed a girl, or slept with a girl, or slept with a few girls. These are the men who have what Elliot can’t have, and these are the men that he obsesses over.

[...]

Women don’t merely serve as objects for Elliot. Women are the currency used to buy whatever he’s missing. Just as a dollar bill used to get you a dollar’s worth of silver, a woman is an indicator of spending power. He wants to throw this money around for other people. Bring them home to prove something to his roommates. Show the bullies who picked on him that he deserves the same things they do.

[...]

There’s another, slightly more obscure recurring theme in Elliot’s manifesto: The frequency with which he discusses either his desire or attempt to throw a glass of some liquid at happy couples, particularly if the girl is a ‘beautiful tall blonde.’ [...] These are the only interactions Elliot has with women: marking his territory.

[...]

When we don’t know how else to say what we need, like entitled children, we scream, and the loudest scream we have is violence. Violence is not an act of expressing the inexpressible, it’s an act of expressing our frustration with the inexpressible. When we surround ourselves by closed ideology, anger and frustration and rage come to us when words can’t. Some ideologies prey on fear and hatred and shift them into symbols that all other symbols are defined by. It limits your vocabulary.

While the motivations for the shootings may vary, they have in common crises in masculinity in which young men use guns and violence to create ultra-masculine identities as part of a media spectacle that produces fame and celebrity for the shooters.

[...]

Crises in masculinity are grounded in the deterioration of socio-economic possibilities for young men and are inflamed by economic troubles. Gun carnage is also encouraged in part by media that repeatedly illustrates violence as a way of responding to problems. Explosions of male rage and rampage are also embedded in the escalation of war and militarism in the United States from the long nightmare of Vietnam through the military interventions in Afghanistan and Iraq.

For Debord, “spectacle” constituted the overarching concept to describe the media and consumer society, including the packaging, promotion, and display of commodities and the production and effects of all media. Using the term “media spectacle,” I am largely focusing on various forms of technologically-constructed media productions that are produced and disseminated through the so-called mass media, ranging from radio and television to the Internet and the latest wireless gadgets.

  • Kellner's comments in a 2008 interview about the Virginia Tech shooter's videos broadcast after the massacre, and his remarks on critical media literacy, remain relevant to the current situation:

Cho’s multimedia video dossier, released after the Virginia Tech shootings, showed that he was consciously creating a spectacle of terror to create a hypermasculine identity for himself and avenge himself to solve his personal crises and problems. The NIU shooter, dressed in black emerged from a curtain onto a stage and started shooting, obviously creating a spectacle of terror, although as of this moment we still do not know much about his motivations. As for the television networks, since they are profit centers in a highly competitive business, they will continue to circulate school shootings and other acts of domestic terrorism as “breaking events” and will constitute the murderers as celebrities. Some media have begun to not publicize the name of teen suicides, to attempt to deter copy-cat effects, and the media should definitely be concerned about creating celebrities out of school shooters and not sensationalize them.

[...]

People have to become critical of the media scripts of hyperviolence and hypermasculinity that are projected as role models for men in the media, or that help to legitimate violence as a means to resolve personal crises or solve problems. We need critical media literacy to analyze how the media construct models of masculinities and femininities, good and evil, and become critical readers of the media who ourselves seek alternative models of identity and behavior.

  • Almost immediately after news of the violence broke, and word of the killer's YouTube videos spread, there was a spike of online backlash against the media saturation and warnings against promoting the perpetrator to celebrity status through omnipresent news coverage. Just two days after the killings Isla Vista residents and UCSB students let the news crews at the scene know that they were not welcome to intrude upon the community's mourning. As they are wont to do, journalists reported on their role in the story while ignoring the wishes of the residents, as in this LA Times brief:

More than a dozen reporters were camped out on Pardall Road in front of the deli -- and had been for days, their cameras and lights and gear taking up an entire lane of the street. At one point, police officers showed up to ensure that tensions did not boil over.

The students stared straight-faced at reporters. Some held signs expressing their frustration with the news media:

"OUR TRAGEDY IS NOT YOUR COMMODITY."

"Remembrance NOT ratings."

"Stop filming our tears."

"Let us heal."

"NEWS CREWS GO HOME!"

TV still sucks, we should still complain about hipsters, your job shouldn't exist

None of this could be happening at a worse time. According to the latest S.O.S. from climate science, we have maybe 15 years to enact a radical civilizational shift before game over. This may be generous, it may be alarmist; no one knows. What is certain is that pulling off a civilizational Houdini trick will require not just switching energy tracks, but somehow confronting the “endless growth” paradigm of the Industrial Revolution that continues to be shared by everyone from Charles Koch to Paul Krugman. We face very long odds in just getting our heads fully around our situation, let alone organizing around it. But it will be impossible if we no longer even understand the dangers of chuckling along to Kia commercials while flipping between Maher, “Merlin” and “Girls.”

  • Zaitchik's article name-checks pertinent critics and theorists, including Adorno's "culture industry," Postman's "Amusing Ourselves to Death," and even Jerry Mander's "Four Arguments for the Elimination of Television." Where the article was discussed on sites like Reddit or Metafilter, commenters seemed angry at Zaitchik, overly defensive, as if they felt under attack for watching "Hannibal" and "Game of Thrones". I thoroughly enjoyed Zaitchik's piece, even if it doesn't present a fully developed argument, because the perspective he presents strongly resonates with many of the philosophical foundations that have shaped my own views on media, particularly the media ecology tradition. A large part of Zaitchik's argument is that even if television content is the highest quality it has ever been, the form of television and its effects are the same as ever:

Staring at images on a little screen — that are edited in ways that weaken the brain’s capacity for sustained and critical thought, that encourage passivity and continued viewing, that are controlled by a handful of publicly traded corporations, that have baked into them lots of extremely slick and manipulating advertising — is not the most productive or pleasurable way to spend your time, whether you’re interested in serious social change, or just want to have a calm, clear and rewarding relationship with the real world around you.

But wait, you say, you’re not just being a killjoy and a bore, you’re living in the past. Television in 2014 is not the same as television in 1984, or 1994. That’s true. Chomsky’s “propaganda model,” set out during cable’s late dawn in “Manufacturing Consent,” is due for an update. The rise of on-demand viewing and token progressive programming has complicated the picture. But only by a little. The old arguments were about structure, advertising, structure, ownership, and structure, more than they were about programming content, or what time of the day you watched it. Less has changed than remains the same. By all means, let’s revisit the old arguments. That is, if everyone isn’t busy binge-watching “House of Cards.”

It’s been something to watch, this televisionification of the left. Open a window on social media during prime time, and you’ll find young journalists talking about TV under Twitter avatars of themselves in MSNBC makeup. Fifteen years ago, these people might have attended media reform congresses discussing how corporate TV pacifies and controls people, and how those facts flow from the nature of the medium. Today, they’re more likely to status-update themselves on their favorite corporate cable channel, as if this were something to brag about.

The entertainment demands of the 21st Century seem (apparently) bottomless. We’ve outsourced much of our serotonin production to the corporations which control music, sports, television, games, movies, and books. And they’ve grown increasingly desperate to produce the most universally acceptable, exportable, franchisable, exciting, boring, money-making pablum possible. Of course that is not new either… yet it continues to worsen.

Various alternative cultures have been attempting to fight it for decades. The beats, hippies, punks, and grunge kids all tried… and eventually lost. But the hipsters have avoided it altogether by never producing anything of substance except a lifestyle based upon fetishizing obscurity and cultivating tasteful disdain. A noncommital and safe appreciation of ironic art and dead artists. No ideals, no demands, no struggle.

Rarely has the modern alternative to pop culture been so self-conscious and crippled. The mainstream has repeatedly beaten down and destroyed a half-century’s worth of attempts to keep art on a worthwhile and genuine path, but now it seems the final scion of those indie movements has adopted the: ‘if you can’t beat‘em, join‘em’ compromise of creative death.

  • In an interview for PBS, London School of Economics professor David Graeber poses the question: should your job exist?

How could you have dignity in labor if you secretly believe your job shouldn’t exist? But, of course, you’re not going to tell your boss that. So I thought, you know, there must be enormous moral and spiritual damage done to our society. And then I thought, well, maybe that explains some other things, like why is it there’s this deep, popular resentment against people who have real jobs? They can get people so angry at auto-workers, just because they make 30 bucks an hour, which is like nowhere near what corporate lawyers make, but nobody seems to resent them. They get angry at the auto-workers; they get angry at teachers. They don’t get angry at school administrators, who actually make more money. Most of the problems people blame on teachers, and I think on some level, that’s resentment: all these people with meaningless jobs are saying, but, you guys get to teach kids, you get to make cars; that’s real work. We don’t get to do real work; you want benefits, too? That’s not reasonable.

If someone had designed a work regime perfectly suited to maintaining the power of finance capital, it’s hard to see how they could have done a better job. Real, productive workers are relentlessly squeezed and exploited. The remainder are divided between a terrorised stratum of the, universally reviled, unemployed and a larger stratum who are basically paid to do nothing, in positions designed to make them identify with the perspectives and sensibilities of the ruling class (managers, administrators, etc) – and particularly its financial avatars – but, at the same time, foster a simmering resentment against anyone whose work has clear and undeniable social value. Clearly, the system was never consciously designed. It emerged from almost a century of trial and error. But it is the only explanation for why, despite our technological capacities, we are not all working 3-4 hour days.

Žižek on post-U.S. order, Harvey on Piketty, Rushkoff's new job and doc

The "American century" is over, and we have entered a period in which multiple centres of global capitalism have been forming. In the US, Europe, China and maybe Latin America, too, capitalist systems have developed with specific twists: the US stands for neoliberal capitalism, Europe for what remains of the welfare state, China for authoritarian capitalism, Latin America for populist capitalism. After the attempt by the US to impose itself as the sole superpower – the universal policeman – failed, there is now the need to establish the rules of interaction between these local centres as regards their conflicting interests.

In politics, age-old fixations, and particular, substantial ethnic, religious and cultural identities, have returned with a vengeance. Our predicament today is defined by this tension: the global free circulation of commodities is accompanied by growing separations in the social sphere. Since the fall of the Berlin Wall and the rise of the global market, new walls have begun emerging everywhere, separating peoples and their cultures. Perhaps the very survival of humanity depends on resolving this tension.

  • Thomas Piketty's book Capital in the Twenty-First Century has received widespread media attention, and enjoyed so much popular success that at times Amazon has been sold out of copies. It seems natural, then, that David Harvey, reigning champion of Marx's Capital in the twenty-first century, would comment on the work, which he has now done on his web site:

The book has often been presented as a twenty-first century substitute for Karl Marx’s nineteenth century work of the same title. Piketty actually denies this was his intention, which is just as well since his is not a book about capital at all. It does not tell us why the crash of 2008 occurred and why it is taking so long for so many people to get out from under the dual burdens of prolonged unemployment and millions of houses lost to foreclosure. It does not help us understand why growth is currently so sluggish in the US as opposed to China and why Europe is locked down in a politics of austerity and an economy of stagnation. What Piketty does show statistically (and we should be indebted to him and his colleagues for this) is that capital has tended throughout its history to produce ever-greater levels of inequality. This is, for many of us, hardly news. It was, moreover, exactly Marx’s theoretical conclusion in Volume One of his version of Capital. Piketty fails to note this, which is not surprising since he has since claimed, in the face of accusations in the right wing press that he is a Marxist in disguise, not to have read Marx’s Capital.

[...]

There is, however, a central difficulty with Piketty’s argument. It rests on a mistaken definition of capital. Capital is a process not a thing. It is a process of circulation in which money is used to make more money often, but not exclusively through the exploitation of labor power.

  • At the 2012 Media Ecology conference in Manhattan I heard Douglas Rushkoff explain that he had stopped teaching classes at NYU because the department was not letting him teach a sufficient number of hours, all while using his likeness on program brochures. Well, Rushkoff has just been appointed to his first full-time academic post. Media Bistro reported CUNY's announcement:

Beginning this fall at CUNY’s Queens College, students can work their way towards an MA in Media Studies. Set to mold the curriculum is an expert responsible for terms such as “viral media” and “social currency.”

  • Lastly, this news made me realize that I completely missed Rushkoff's new Frontline special that premiered in February: Generation Like, which is available on the Frontline web site.

Ender's Game analyzed, the Stanley Parable explored, Political Economy of zombies, semiotics of Twitter, much more

It's been a long time since the last update (what happened to October?), so this post is extra long in an attempt to catch up.

In a world in which interplanetary conflicts play out on screens, the government needs commanders who will never shrug off their campaigns as merely “virtual.” These same commanders must feel the stakes of their simulated battles to be as high as actual warfare (because, of course, they are). Card’s book makes the nostalgic claim that children are useful because they are innocent. Hood’s movie leaves nostalgia by the roadside, making the more complex assertion that they are useful because of their unique socialization to be intimately involved with, rather than detached from, simulations.

  • In the ongoing discourse about games criticism and its relation to film reviews, Bob Chipman's latest Big Picture post uses his own review of the Ender's Game film as an entry point for a breathless treatise on criticism. The video presents a concise and nuanced overview of arts criticism, from the classical era through film reviews as consumer reports up to the very much in-flux conceptions of games criticism. Personally I find this video sub-genre (where spoken content is crammed into a Tommy gun barrage of word bullets so that the narrator can convey a lot of information in a short running time) irritating and mostly worthless, since the verbal information is presented faster than the listener can really process it. It reminds me of Film Crit Hulk, someone who writes excellent essays with obvious insight into filmmaking, but whose aesthetic choice (or "gimmick") of writing in all caps is often a distraction from the content and a deterrent to readers. Film Crit Hulk has of course addressed this issue and explained the rationale for the choice, but considering that his more recent articles have dropped the third-person "Hulk speak" writing style, the all caps seems played out. Nevertheless, I'm sharing the video because Mr. Chipman makes a lot of interesting points, particularly regarding the cultural contexts for the various forms of criticism. Just remember to breathe deeply and monitor your heart rate while watching.

  • This video from Satchbag's Goods is ostensibly a review of Hotline Miami, but develops into a discussion of art movements and Kanye West:

  • This short interview with Slavoj Žižek in New York magazine continues a trend I've noticed since The Pervert's Guide to Ideology was released, wherein writers interviewing Žižek feel compelled to include themselves and their reactions to, and interactions with, Žižek in their articles. Something about a Žižek encounter brings out the gonzo in journalists. The NY mag piece is also notable for this succinct positioning of Žižek's contribution to critical theory:

Žižek, after all, the Yugoslav-born, Ljubljana-based academic and Hegelian; mascot of the Occupy movement, critic of the Occupy movement; and former Slovenian presidential candidate, whose most infamous contribution to intellectual history remains his redefinition of ideology from a Marxist false consciousness to a Freudian-Lacanian projection of the unconscious. Translation: To Žižek, all politics—from communist to social-democratic—are formed not by deliberate principles of freedom, or equality, but by expressions of repressed desires—shame, guilt, sexual insecurity. We’re convinced we’re drawing conclusions from an interpretable world when we’re actually just suffering involuntary psychic fantasies.

Following the development of the environment on the team's blog you can see some of the gaps between what data was deemed noteworthy or worth recording in the seventeenth century and the level of detail we now expect in maps and other infographics. For example, the team struggled to pinpoint the exact location on Pudding Lane of the bakery where the Great Fire of London is thought to have originated and so just ended up placing it halfway along.

  • Stephen Totilo reviewed the new pirate-themed Assassin's Creed game for the New York Times. I haven't played the game, but I love that the sections of the game set in the present day have shifted from the standard global conspiracy tropes seen in the earlier installments to postmodern self-referential and meta-fictional framing:

Curiously, a new character is emerging in the series: Ubisoft itself, presented mostly in the form of self-parody in the guise of a fictional video game company, Abstergo Entertainment. We can play small sections as a developer in Abstergo’s Montreal headquarters. Our job is to help turn Kenway’s life — mined through DNA-sniffing gadgetry — into a mass-market video game adventure. We can also read management’s emails. The team debates whether games of this type could sell well if they focused more on peaceful, uplifting moments of humanity. Conflict is needed, someone argues. Violence sells.

It turns out that Abstergo is also a front for the villainous Templars, who search for history’s secrets when not creating entertainment to numb the population. In these sections, Ubisoft almost too cheekily aligns itself with the bad guys and justifies its inevitable 2015 Assassin’s Creed, set during yet another violent moment in world history.

  • Speaking of postmodern, self-referential, meta-fictional video games: The Stanley Parable was released late last month. There has already been a bevy of analysis written about the game, but I am waiting for the Mac release to play it and doing my best to avoid spoilers in the meantime. Brenna Hillier's post at VG247 is spoiler-free (assuming you are at least familiar with the game's premise, or its original incarnation as a Half-Life mod), and calls The Stanley Parable "a reaction against, commentary upon, critique and celebration of narrative-driven game design":

The Stanley Parable wants you to think about it. The Stanley Parable, despite its very limited inputs (you can’t even jump, and very few objects are interactive) looks at those parts of first-person gaming that are least easy to design for – exploration and messing with the game’s engine – and foregrounds them. It takes the very limitations of traditional gaming narratives and uses them to ruthlessly expose their own flaws.

Roy’s research focus prior to founding Bluefin, and continued interest while running the company, has to do with how both artificial and human intelligences learn language. In studying this process, he determined that the most important factor in meaning making was the interaction between human beings: no one learns language in a vacuum, after all. That lesson helped inform his work at Twitter, which started with mapping the connection between social network activity and live broadcast television.

Aspiring to cinematic qualities is not bad in and of itself, nor do I mean to shame fellow game writers, but developers and their attendant press tend to be myopic in their point of view, both figuratively and literally. If we continually view videogames through a monocular lens, we miss much of their potential. And moreover, we begin to use ‘cinematic’ reflexively without taking the time to explain what the hell that word means.

Metaphor is a powerful tool. Thinking videogames through other media can reframe our expectations of what games can do, challenge our design habits, and reconfigure our critical vocabularies. To crib a quote from Andy Warhol, we get ‘a new idea, a new look, a new sex, a new pair of underwear.’ And as I hinted before, it turns out that fashion and videogames have some uncanny similarities.

Zombies started their life in the Hollywood of the 1930s and ‘40s as simplistic stand-ins for racist xenophobia. Post-millennial zombies have been hot-rodded by Danny Boyle and made into a subversive form of utopia. That grim utopianism was globalized by Max Brooks, and now Brad Pitt and his partners are working to transform it into a global franchise. But if zombies are to stay relevant, it will rely on the shambling monsters' ability to stay subversive – and real subversive shocks and terror are not dystopian. They are utopian.

Ironically, our bodies now must make physical contact with devices dictating access to the real; Apple’s Touch ID sensor can discern for the most part if we are actually alive. This way, we don’t end up trying to find our stolen fingers on the black market, or prevent others from 3D scanning them to gain access to our lives.

This is a monumental shift from when Apple released its first iPhone just six years ago. It’s a touchy subject: fingerprinting authentication means we confer our trust in an inanimate object to manage our animate selves - our biology is verified, digitised, encrypted, as they are handed over to our devices.

Can you really buy heroin on the Web as easily as you might purchase the latest best-seller from Amazon? Not exactly, but as the FBI explained in its complaint, it wasn't exactly rocket science, thanks to Tor and some bitcoins. Here's a rundown of how Silk Road worked before the feds swooped in.

  • Henry Jenkins posted the transcript of an interview with Mark J.P. Wolf. The theme of the discussion is "imaginary worlds," and they touch upon the narratology vs. ludology conflict in gaming:

The interactivity vs. storytelling debate is really a question of the author saying either “You choose” (interaction) or “I choose” (storytelling) regarding the events experienced; it can be all of one or all of the other, or some of each to varying degrees; and even when the author says “You choose”, you are still choosing from a set of options chosen by the author.  So it’s not just a question of how many choices you make, but how many options there are per choice.  Immersion, however, is a different issue, I think, which does not always rely on choice (such as immersive novels), unless you want to count “Continue reading” and “Stop reading” as two options you are constantly asked to choose between.

Manifesto for a Ludic Century, ludonarrative dissonance in GTA, games and mindf*cks, and more

Systems, play, design: these are not just aspects of the Ludic Century, they are also elements of gaming literacy. Literacy is about creating and understanding meaning, which allows people to write (create) and read (understand).

New literacies, such as visual and technological literacy, have also been identified in recent decades. However, to be truly literate in the Ludic Century also requires gaming literacy. The rise of games in our culture is both cause and effect of gaming literacy in the Ludic Century.

So, perhaps there is one fundamental challenge for the Manifesto for a Ludic Century: would a truly ludic century be a century of manifestos? Of declaring simple principles rather than embracing systems? Or, is the Ludic Manifesto meant to be the last manifesto, the manifesto to end manifestos, replacing simple answers with the complexity of "information at play?"

Might we conclude: videogames are the first creative medium to fully emerge after Marshall McLuhan. By the time they became popular, media ecology as a method was well-known. McLuhan was a popular icon. By the time the first generation of videogame players was becoming adults, McLuhan had become a trope. When the then-new publication Wired Magazine named him their "patron saint" in 1993, the editors didn't even bother to explain what that meant. They didn't need to.

By the time videogame studies became a going concern, McLuhan was gospel. So much so that we don't even talk about him. To use McLuhan's own language of the tetrad, game studies have enhanced or accelerated media ecology itself, to the point that the idea of studying the medium itself over its content has become a natural order.

Generally speaking, educators have warmed to the idea of the flipped classroom far more than that of the MOOC. That move might be injudicious, as the two are intimately connected. It's no accident that private, for-profit MOOC startups like Coursera have advocated for flipped classrooms, since those organizations have much to gain from their endorsement by universities. MOOCs rely on the short, video lecture as the backbone of a new educational beast, after all. Whether in the context of an all-online or a "hybrid" course, a flipped classroom takes the video lecture as a new standard for knowledge delivery and transfers that experience from the lecture hall to the laptop.

  • Also, with increased awareness of Animal Crossing following from the latest game's release for the Nintendo 3DS, Bogost recently posted an excerpt from his 2007 book Persuasive Games discussing consumption and naturalism in Animal Crossing:

Animal Crossing deploys a procedural rhetoric about the repetition of mundane work as a consequence of contemporary material property ideals. When my (then) five-year-old began playing the game seriously, he quickly recognized the dilemma he faced. On the one hand, he wanted to spend the money he had earned from collecting fruit and bugs on new furniture, carpets, and shirts. On the other hand, he wanted to pay off his house so he could get a bigger one like mine.

Ludonarrative dissonance is when the story the game is telling you and your gameplay experience somehow don’t match up. As an example, this was a particular issue in Rockstar’s most recent game, Max Payne 3. Max constantly makes remarks about how terrible he is at his job, even though he does more than is humanly possible to try to protect his employers – including making perfect one-handed head shots in mid-air while drunk and high on painkillers. The disparity and the dissonance between the narrative of the story and the gameplay leave things feeling off kilter and poorly interconnected. It doesn’t make sense or fit with your experience, so it feels wrong and damages the cohesiveness of the game world and story. It’s like when you go on an old-lady-only murdering spree as Niko, who is supposed to be a reluctant killer with a traumatic past, not a gerontophobic misogynist.

What I find strange, in light of our supposed anti-irony cultural moment, is a kind of old-fashioned ironic conceit behind a number of recent critical darlings in the commercial videogame space. 2007's Bioshock and this year’s Bioshock: Infinite are both about the irony of expecting ‘meaningful choice’ to live in an artificial dome of technological and commercial constraints. Last year’s Spec Ops: The Line offers a grim alchemy of self-deprecation and preemptive disdain for its audience. The Grand Theft Auto series has always maintained a cool, dismissive cynicism beneath its gleefully absurd mayhem. These games frame choice as illusory and experience as artificial. They are expensive, explosive parodies of free will.

To cut straight to the heart of it, Bioshock seems to suffer from a powerful dissonance between what it is about as a game, and what it is about as a story. By throwing the narrative and ludic elements of the work into opposition, the game seems to openly mock the player for having believed in the fiction of the game at all. The leveraging of the game’s narrative structure against its ludic structure all but destroys the player’s ability to feel connected to either, forcing the player to either abandon the game in protest (which I almost did) or simply accept that the game cannot be enjoyed as both a game and a story, and to then finish it for the mere sake of finishing it.

The post itself makes a very important point: games, for the most part, can’t pull the Mindfuck like movies can because of the nature of the kind of storytelling to which most games are confined, which is predicated on a particular kind of interaction. Watching a movie may not be an entirely passive experience, but it’s clearly more passive than a game. You may identify with the characters on the screen, but you’re not meant to implicitly think of yourself as them. You’re not engaging in the kind of subtle roleplaying that most (mainstream) games encourage. You are not adopting an avatar. In a game, you are your profile, you are the character you create, and you are also to a certain degree the character that the game sets in front of you. I may be watching everything Lara Croft does from behind her, but I also control her; to the extent that she has choices, I make them. I get her from point A to B, and if she fails it’s my fault. When I talk about something that happened in the game, I don’t say that Lara did it. I say that I did.

Anachrony is a common storytelling technique in which events are narrated out of chronological order. A familiar example is a flashback, where story time jumps to the past for a bit, before returning to the present. The term "nonlinear narrative" is also sometimes used for this kind of out-of-order storytelling (somewhat less precisely).

While it's a common technique in literature and film, anachrony is widely seen as more problematic to use in games, perhaps even to the point of being unusable. If the player's actions during a flashback scene imply a future that differs considerably from the one already presented in a present-day scene (say, the player kills someone who they had been talking to in a present-day scene, or commits suicide in a flashback), this produces an inconsistent narrative. The root of the problem is that players generally have a degree of freedom of action, so flashbacks are less like the case in literature and film—where already decided events are simply narrated out of order—and more like time travel, where the player travels back in time and can mess up the timeline.

The first of the books is set to be published in early 2014. Some of the writers that will be published by Press Select in its first round have written for publications like Edge magazine, Kotaku, Kill Screen and personal blogs, including writers like Chris Dahlen, Michael Abbott, Jenn Frank, Jason Killingsworth, Maddy Myers, Tim Rogers, Patricia Hernandez and Robert Yang.

Videodrome turns 30

Videodrome’s depiction of techno-body synthesis is, to be sure, intense; Cronenberg has the unusual talent of making violent, disgusting, and erotic things seem even more so. The technology is veiny and lubed. It breathes and moans; after watching the film, I want to cut my phone open just to see if it will bleed. Fittingly, the film was originally titled “Network of Blood,” which is precisely how we should understand social media, as a technology not just of wires and circuits, but of bodies and politics. There’s nothing anti-human about technology: the smartphone that you rub and take to bed is a technology of flesh. Information penetrates the body in increasingly intimate ways.

  • I also came across this short piece by Joseph Matheny at Alterati on Videodrome and YouTube:

Videodrome is even more relevant now that YouTube is delivering what cable television promised to in the 80s: a world where everyone has their own television station. Although digital video tools began to democratize video creation, it’s taken the further proliferation of broadband Internet and the emergence of convenient platforms like YouTube and Google Video to democratize video distribution.

  • There's also my Videodrome-centric post from a couple of years ago. Coincidentally, I watched eXistenZ for the first time last week. I didn't know much about the film going in, and initially I was enthusiastic that it seemed to be a spiritual successor to Videodrome, updating the media metaphor for the New Flesh from television to video games. I remained engaged throughout the movie (although about two thirds into the film I turned to my fiancee and asked "Do you have any idea what's going on?"), and there were elements that I enjoyed but ultimately I was disappointed. I had a similar reaction at the ending of Cronenberg's Spider, thinking "What was the point of all that?" when the closing credits started to roll, though it was much easier to stay awake during eXistenZ.

Warren Ellis on violent fiction, death of the Western, Leatherface as model vegan

As we learn early on, the movie’s killers, the murderous Sawyer family (comprised of Leatherface, Grandpa, et al), used to run a slaughterhouse, and the means they use to slaughter their victims are the same as those used to slaughter cattle. They knock them over the head with sledgehammers, hang them on meat hooks, and stuff them into freezers. Often this takes place as the victims are surrounded by animal bones, a detail that could be explained away as the evidence of their former occupation—except that the cries of farm animals (there are none around) are played over the scenes.

Through the past century of Western movies, we can trace America's self-image as it evolved from a rough-and-tumble but morally confident outsider in world affairs to an all-powerful sheriff with a guilty conscience. After World War I and leading into World War II, Hollywood specialized in tales of heroes taking the good fight to savage enemies and saving defenseless settlements in the process. In the Great Depression especially, as capitalism and American exceptionalism came under question, the cowboy hero was often mistaken for a criminal and forced to prove his own worthiness, which he inevitably did. Over the '50s, '60s, and '70s however, as America enforced its dominion over half the planet with a long series of coups, assassinations, and increasingly dubious wars, the figure of the cowboy grew darker and more complicated. If you love Westerns, most of your favorites are probably from this era: Shane, The Searchers, Butch Cassidy and the Sundance Kid, McCabe & Mrs. Miller, the spaghetti westerns, etc. By the height of the Vietnam protest era, cowboys were antiheroes as often as they were heroes.

The dawn of the 1980s brought the inauguration of Ronald Reagan and the box-office debacle of the artsy, overblown Heaven's Gate. There's a sense of disappointment to the decade that followed, as if the era of revisionist Westerns had failed and a less nuanced patriotism would have to carry the day. Few memorable Westerns were made in the '80s, and Reagan proudly associated himself with an old-fashioned, pre-Vietnam cowboy image. But victory in the Cold War coincided with a revival of the genre, including the revisionist strain, exemplified in Clint Eastwood's career-topping Unforgiven. A new, gentler star emerged in Kevin Costner, who scored a post-colonial megahit with Dances With Wolves. Later, in the 2000s, George W. Bush reclaimed the image of the cowboy for a foreign policy far less successful than Reagan's, and the genre retreated to the art house again.

Westerns are fundamentally about political isolation. The government is far away and weak. Institutions are largely irrelevant in a somewhat isolated town of 100 people. The law is what the sheriff says it is, or what the marshal riding through town says, or the posse. At that scale, there may be no meaningful distinction between war and crime. A single individual's choices can tilt the balance of power. Samurai and Western stories cross-pollinated because when you strip away the surface detail the settings are surprisingly similar. The villagers in Seven Samurai and the women in Unforgiven are both buying justice/revenge because there is no one to appeal to from whom they could expect justice. Westerns are interesting in part because they are stories where individual moral judgment is almost totally unsupported by institutions.

Westerns clearly are not dying. We get a really great film in the genre once every few years. However, they've lost a lot of their place at the center of pop culture because the idea of an isolated community has grown increasingly implausible. In what has become a surveillance state, the idea of a place where the state has no authority does not resonate as relevant.

The function of fiction is being lost in the conversation on violence. My book editor, Sean McDonald, thinks of it as “radical empathy.” Fiction, like any other form of art, is there to consider aspects of the real world in the ways that simple objective views can’t — from the inside. We cannot Other characters when we are seeing the world from the inside of their skulls. This is the great success of Thomas Harris’s Hannibal Lecter, both in print and as so richly embodied by Mads Mikkelsen in the Hannibal television series: For every three scary, strange things we discover about him, there is one thing that we can relate to. The Other is revealed as a damaged or alienated human, and we learn something about the roots of violence and the traps of horror.

End of 2012 mega blow-out post

[youtube http://www.youtube.com/watch?v=6sMo4cTZTgQ&w=560&h=315]

"Those levels of interactivity, for me, recapitulated the levels of participation that we as a society have had since the invention of media," Rushkoff said, referring to similar shifts that occurred when humans first transitioned from written language to the age of movable type.

Our conversation started with Rushkoff’s concept of “present-shock” and moved into a larger discussion of the relationship between market thinking, quantification, and what is ultimately measurable and knowable.

[youtube http://www.youtube.com/watch?v=8AXIAM7dTTg&w=560&h=315]

  • "Drop the mic": an article about how the microphone changed Catholic mass.

In 1974, Marshall McLuhan argued that the microphone was the proximate cause both of the elimination of Latin from the Mass and of the turning around of the priest to face the congregation. Before microphones, a priest quietly said Mass in Latin, with his back to the congregation. From any distance, his voice was indistinct, although an instructed Catholic could follow what he was saying from a missal containing the Latin text of the Mass or a translation of it.

  • "The humanism of Media Ecology": this address was delivered by Neil Postman at the 2000 MEA convention, but I just came across it and wanted to share it here.

I think there is considerable merit in McLuhan’s point of view about avoiding questions of good and bad when thinking about media. But that view has never been mine. To be quite honest about it, I don’t see any point in studying media unless one does so within a moral or ethical context. I am not alone in believing this. Some of the most important media scholars—Lewis Mumford and Jacques Ellul, for example—could scarcely write a word about technology without conveying a sense of either its humanistic or anti-humanistic consequences.

Media matters: Alone Together @ TED, fear in the attention economy, Chomsky tweets and more

[youtube http://www.youtube.com/watch?v=t7Xr3AsBEK4&w=560&h=315]

  • I recently came across this Salon article by UMD doctoral student Nathan Jurgenson from last year where he argues that Noam Chomsky is wrong about Twitter. Both Chomsky's and the author's statements about new media forms are extremely interesting from a medium theory perspective. Jurgenson cites the role of social media in the Arab Spring protests as evidence that new media aren't as shallow and superficial as Chomsky believes:

In fact, in the debate about whether rapid and social media really are inherently less deep than other media, there are compelling arguments for and against. Yes, any individual tweet might be superficial, but a stream of tweets from a political confrontation like Tahrir Square, a war zone like Gaza or a list of carefully-selected thinkers makes for a collection of expression that is anything but shallow. Social media is like radio: It all depends on how you tune it.

In responding to calls, emails, texts, social media, etc., our electronic devices play to a primitive impulse to react to immediate threats and dangers. Our responding to that call, email or social media post provokes excitement and stimulates the release of dopamine to the brain. Little by little, we become addicted to its small kick in regular, minute doses. In its absence, people feel bored.

Powered by Squarespace. Background image of New Songdo by Curry Chandler.