Curry Chandler

Curry Chandler is a writer, researcher, and independent scholar working in the field of communication and media studies. His writing on media theory and policy has been published in the popular press as well as academic journals. Curry approaches the study of communication from a distinctly critical perspective, and with a commitment to addressing inequality in power relations. The scope of his research activity includes media ecology, political economy, and the critique of ideology.

Curry is a graduate student in the Communication Department at the University of Pittsburgh, having previously earned degrees from Pepperdine University and the University of Central Florida.


Guns with Google Glass, city of driverless cars, Kurzweil on hybrid thinking

 
  • Tech companies and weapons manufacturers are exploring the crossover potential for firearms and wearable technology devices like Google Glass. Brian Anderson at Motherboard reported on Austin tech startup TrackingPoint's foray into this inevitable extension of augmented reality applications and posted the company's concept video:

"When paired with wearable technology, PGFs can provide unprecedented benefits to shooters, such as the ability to shoot around corners, from behind low walls, and from other positions that provide exceptional cover," according to a TrackingPoint press release. "Without PGF technology, such positions would be extremely difficult, if not impossible, to fire from."

The steady rise of wearable technology is unlocking a dizzying number of potential killer apps. Indeed, if there were any lingering doubt that wearable tech is coming to the battlefield, the Glassification of a high-profile smart weapon should put any uncertainties to rest.

If being able to track and drop a moving target with single-shot accuracy at 1,500 feet using a long-range robo rifle wasn't sobering enough already, to think basically anyone can now do so over a hill, perhaps overlooking a so-called "networked battlefield" shot through with data-driven soldiers, is sure to be even more so.

The simulation is run by proprietary software, and programmers will code in dangerous situations—traffic jams and potential collisions—so engineers can anticipate problems and, ideally, solve for them before the automated autos hit the streets. It's laying the groundwork for the real-world system planned for 2021 in Ann Arbor.

There will surely be some technical barriers to work out, but the biggest hurdles self-driving cars will have to clear are likely regulatory, legal, and political. Will driverless cars be subsidized like public transit? If autonomous cars eliminate crashes, will insurance companies start tanking? Will the data-driven technology be a privacy invasion?

Today you can buy a top-of-the-line S-Class car from Mercedes-Benz that figuratively says “ahem” when you begin to stray out of your lane or tailgate. If you do nothing, it’ll turn the wheel slightly or lightly apply the brakes. And if you’re still intent on crashing, it will take command. In 5 years, cars will be quicker to intervene; in 20, they won’t need your advice; and in 30, they won’t take it.

Accident rates will plummet, parking problems will vanish, streets will narrow, cities will bulk up, and commuting by automobile will become a mere extension of sleep, work, and recreation. With no steering column and no need for a crush zone in front of the passenger compartment (after all, there aren’t going to be any crashes), car design will run wild: Collapsibility! Stackability! Even interchangeability, because when a car can come when called, pick up a second or third passenger for a fee, and park itself, even the need to own the thing will dwindle.

Two hundred million years ago, our mammal ancestors developed a new brain feature: the neocortex. This stamp-sized piece of tissue (wrapped around a brain the size of a walnut) is the key to what humanity has become. Now, futurist Ray Kurzweil suggests, we should get ready for the next big leap in brain power, as we tap into the computing power in the cloud.

The headband picks up four channels from seven EEG sensors, five across the forehead and two conductive rubber ear sensors. Together, the sensors detect the five basic types of brain waves, and, unlike conventional sensors, they don’t need to be surrounded by gel to work. Software helps filter out the noise and syncs the signal, via Bluetooth, to a companion app. The app shows the user the brainwave information and offers stress-reduction exercises.
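
The noise-filtering step described above can be sketched in a few lines. This is purely an illustration of the idea, not the headband's actual software; the sample rate, band edges, and simulated signal are all assumptions for the example:

```python
import numpy as np

def bandpass(signal, fs, low, high):
    """Crude FFT-based band-pass filter: zero out every frequency
    component outside the [low, high] Hz band, then invert."""
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    spectrum[(freqs < low) | (freqs > high)] = 0
    return np.fft.irfft(spectrum, n=len(signal))

fs = 256                       # assumed sample rate, in Hz
t = np.arange(0, 2, 1.0 / fs)
rng = np.random.default_rng(0)
# Simulated "raw EEG": a 10 Hz alpha rhythm buried in broadband noise
raw = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(len(t))
alpha = bandpass(raw, fs, 8, 12)   # the alpha band is roughly 8-12 Hz
```

A real EEG pipeline would use a properly designed digital filter rather than blunt FFT masking, but the principle of isolating a frequency band from a noisy sensor signal is the same.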

A bit further down the road of possibilities is brain-to-brain networking. Last year, researchers at the University of Washington used EEG sensors to detect one person's intention to move his arm, used that signal to stimulate a second person's brain with an external coil, and watched as the second person moved his hand without planning to.

Inside Korea's gaming culture, virtual worlds and economic modeling, Hollywood's Summer of Doom continued, and more

  • I've long been fascinated by the gaming culture in South Korea, and Tom Massey has written a great feature piece for Eurogamer titled Seoul Caliber: Inside Korea's Gaming Culture. To this westerner, who has never visited Korea, the article reads almost more like cyberpunk fiction than games journalism:

Not quite as ubiquitous, but still extremely common, are PC Bangs: LAN gaming hangouts where 1000 Won nets you an hour of multiplayer catharsis. In Gangnam's Maxzone, overhead fans rotate at Apocalypse Now speed, slicing cigarette smoke as it snakes through the blades. Korea's own NCSoft, whose European base is but a stone's throw from the Eurogamer offices, is currently going strong with its latest MMO, Blade & Soul.

"It's relaxing," says Min-Su, sipping a Milkis purchased from the wall-mounted vending machine. "And dangerous," he adds. "It's easy to lose track of time playing these games, especially when you have so much invested in them. I'm always thinking about achieving the next level or taking on a quick quest to try to obtain a weapon, and the next thing I know I've been here for half the day."

[youtube=http://www.youtube.com/watch?v=Kue_gd8DneU&w=420&h=315]

Creation and simulation in virtual worlds appear to offer the best domain to test the new ideas required to tackle the very real problems of deprivation, inequality, unemployment, and poverty that exist in national economies. On that note, the need to see our socioeconomic institutions for the games that they really are seems even more poignant.

In the words of Vili Lehdonvirta, a leading scholar in virtual goods and currencies, the suffering we see today is “not some consequence of natural or physical law”; rather, it “is a result of the way we play these games.”

The global economy seems to be bifurcating into a rich/tech track and a poor/non-tech track, not least because new technology will increasingly destroy/replace old non-tech jobs. (Yes, global. Foxconn is already replacing Chinese employees with one million robots.) So far so fairly non-controversial.

The big thorny question is this: is technology destroying jobs faster than it creates them?

[...]

We live in an era of rapid exponential growth in technological capabilities. (Which may finally be slowing down, true, but that’s an issue for decades hence.) If you’re talking about the economic effects of technology in the 1980s, much less the 1930s or the nineteenth century, as if it has any relevance whatsoever to today’s situation, then you do not understand exponential growth. The present changes so much faster that the past is no guide at all; the difference is qualitative, not just quantitative. It’s like comparing a leisurely walk to relativistic speeds.

We begin with a love story--from a man who unwittingly fell in love with a chatbot on an online dating site. Then, we encounter a robot therapist whose inventor became so unnerved by its success that he pulled the plug. And we talk to the man who coded Cleverbot, a software program that learns from every new line of conversation it receives...and that's chatting with more than 3 million humans each month. Then, five intrepid kids help us test a hypothesis about a toy designed to push our buttons, and play on our human empathy. And we meet a robot built to be so sentient that its creators hope it will one day have a consciousness, and a life, all its own.

[youtube=http://www.youtube.com/watch?v=pHCwaaactyY&w=420&h=315]

"These outages are absolutely going to continue," said Neil MacDonald, a fellow at technology research firm Gartner. "There has been an explosion in data across all types of enterprises. The complexity of the systems created to support big data is beyond the understanding of a single person and they also fail in ways that are beyond the comprehension of a single person."

From high volume securities trading to the explosion in social media and the online consumption of entertainment, the amount of data being carried globally over the private networks, such as stock exchanges, and the public internet is placing unprecedented strain on websites and on the networks that connect them.

What I want is systems that have intrinsic rewards; that are disciplines similar to drawing or playing a musical instrument. I want systems which are their own reward.

What videogames almost always give me instead is labor that I must perform for an extrinsic reward. I want to convince you that not only is this not what I want, this isn’t really what anyone wants.

[youtube=http://www.youtube.com/watch?v=GpO76SkpaWQ&w=560&h=315]

This 'celebrification' is enlivening the making of games and giving players role models, drawing more people into development, especially indie and auteured games. This shift is proving more prosperous than any Skillset-accredited course or government pot could ever hope for. We are making men sitting in pants at their laptops for 12 hours a day as glamorous as it could be.

Creating luminaries will lead to all the benefits that more people in games can bring: a bigger and brighter community, plus new and fresh talent making exciting games. However, celebritydom demands storms, turmoil and gossip.

Spielberg's theory is essentially that a studio will eventually go under after it releases five or six bombs in a row. The reason: budgets have become so gigantic. And, indeed, this summer has been full of movies with giant budgets and modest grosses, all of which has elicited hand-wringing about financial losses, the lack of a quality product (another post-apocalyptic thriller? more superheroes?), and a possible connection between the two. There has been some hope that Hollywood's troubles will lead to a rethinking of how movies get made, and which movies get greenlit by studio executives. But a close look at this summer's grosses suggests a more worrisome possibility: that the studios will become more conservative and even less creative.

[youtube=http://www.youtube.com/watch?v=F4mDNMSntlA&w=420&h=315]

Hacker's death, wearable tech, and some Cyberpunk

His genius was finding bugs in the tiny computers embedded in equipment, such as medical devices and cash machines. He often received standing ovations at conferences for his creativity and showmanship while his research forced equipment makers to fix bugs in their software.

Jack had planned to demonstrate his techniques to hack into pacemakers and implanted defibrillators at the Black Hat hackers convention in Las Vegas next Thursday. He told Reuters last week that he could kill a man from 30 feet away by attacking an implanted heart device.

Without the right approach, the continual distraction of multiple tasks exerts a toll that disrupts performance. It takes time to switch tasks, to get back what attention theorists call “situation awareness.” Interruptions disrupt performance, and even a voluntary switching of attention from one task to another is an interruption of the task being left behind.

Furthermore, it will be difficult to resist the temptation of using powerful technology that guides us with useful side information, suggestions, and even commands. Sure, other people will be able to see that we are being assisted, but they won’t know by whom, just as we will be able to tell that they are being minded, and we won’t know by whom.

9am to 1pm: Throughout the day you connect to your Dekko-powered augmented reality device, which overlays your vision with a broad range of information and entertainment. While many of the products the US software company is proposing are currently still fairly conceptual, Dekko hopes to find ways to integrate an extra layer of visual information into every part of daily life. Dekko is one of the companies supplying software to Google Glass, the wearable computer that gives users information through a spectacle-like visual display. Matt Miesnieks, CEO of Dekko, says that he believes "the power of wearables comes from connecting our senses to sensors."

Researchers at Belgian nanoelectronics research and development center Imec and Belgium’s Ghent University are in the very early stages of developing such a device, which would bring augmented reality–the insertion of digital imagery such as virtual signs and historical markers into the real world–right to your eyeballs. It’s just one of several such projects (see “Contact Lens Computer: It’s Like Google Glass Without The Glasses”), and while the idea is nowhere near the point where you could ask your eye doctor for a pair, it could become more realistic as the cost and size of electronic components continue to fall and wearable gadgets gain popularity.

Speaking on the sidelines of the Wearable Technologies conference in San Francisco on Tuesday, Eric Dy, Imec’s North America business development manager, said researchers are investigating the feasibility of integrating an array of micro lenses with LEDs, using the lenses to help focus light and project it onto the wearer’s retinas.

The biggest barrier, beyond the translation itself, is speech recognition. In so many words, background noise interferes with the translation software, thus affecting results. But Barra said it works "close to 100 percent" when used in "controlled environments." Sounds perfect for diplomats, not so much for real-world conversations. Of course, Google's non-real-time, text-based translation software built into Chrome leaves quite a bit to be desired, making us all the more wary of putting our faith into Google's verbal solution. As the functionality is still "several years away," though, there's still plenty of time to convert us.

There will be limitations, however. It's easy to think that a life-sized human being, standing in your living room, would be capable of giving you a hug, for instance. But if that breakthrough is coming, it hasn't arrived yet. Holodeck creations these are not. And images projected through the magic of HoloVision won't be able to follow you into the kitchen for a snack either — not unless you've got a whole network of HoloVision cameras, anyway.

The implications of Euclid’s technology do not stop at surveillance or privacy. Remember, these systems are meant to feed data to store owners so that they can rearrange store shelves or entire showroom floors to increase sales. Malls, casinos, and grocery stores have always been carefully planned out spaces—scientifically arranged and calibrated for maximum profit at minimal cost. Euclid’s systems however, allow for massive and exceedingly precise quantification and analysis. More than anything, what worries me is the deliberateness of these augmented spaces. Euclid will make spaces designed to do exactly one thing almost perfectly: sell you shit you don’t need. I worry about spaces that are as expertly and diligently designed as Amazon’s home page or the latest Pepsi advertisement. A space built on data so rich and thorough that it’ll make focus groups look quaint in comparison.

Of course the US is not a totalitarian society, and no equivalent of Big Brother runs it, as the widespread reporting of Snowden’s information shows. We know little about what uses the NSA makes of most information available to it—it claims to have exposed a number of terrorist plots—and it has yet to be shown what effects its activities may have on the lives of most American citizens. Congressional committees and a special federal court are charged with overseeing its work, although they are committed to secrecy, and the court can hear appeals only from the government.

Still, the US intelligence agencies also seem to have adopted Orwell’s idea of doublethink—“to be conscious of complete truthfulness,” he wrote, “while telling carefully constructed lies.” For example, James Clapper, the director of national intelligence, was asked at a Senate hearing in March whether “the NSA collect[s] any type of data at all on millions or hundreds of millions of Americans.” Clapper’s answer: “No, sir…. Not wittingly.”

The drone is carrying a laptop so it can communicate with the headset, but right now the sticking point is range; since it's using wi-fi to communicate, it'll only get to around 50-100m.

"It's not a video game movie, it's a cyberpunk movie," Cargill said. "Eidos Montreal has given us a lot of freedom in terms of story; they want this movie to be Blade Runner. We want this movie to be Blade Runner."

INTERVIEWER

There’s a famous story about your being unable to sit through Blade Runner while writing Neuromancer.

GIBSON

I was afraid to watch Blade Runner in the theater because I was afraid the movie would be better than what I myself had been able to imagine. In a way, I was right to be afraid, because even the first few minutes were better. Later, I noticed that it was a total box-office flop, in first theatrical release. That worried me, too. I thought, Uh-oh. He got it right and nobody cares! Over a few years, though, I started to see that in some weird way it was the most influential film of my lifetime, up to that point. It affected the way people dressed, it affected the way people decorated nightclubs. Architects started building office buildings that you could tell they had seen in Blade Runner. It had had an astonishingly broad aesthetic impact on the world.

The concept was formally introduced in William Gibson's 1984 cyberpunk novel, NEUROMANCER.  Although this first novel swept the Triple Crown of science fiction--the Hugo, the Nebula, and the Philip K. Dick awards--it is not really science fiction.  It could be called "science faction" in that it occurs not in another galaxy in the far future, but 20 years from now, in a BLADE RUNNER world just a notch beyond our silicon present.

      In Gibson's Cyberworld there is no warp drive and no "beam me up, Scotty."  The high technology is the stuff that appears on today's screens or that processes data in today's laboratories: Super-computer boards.  Recombinant DNA chips.  AI systems and enormous data banks controlled by multinational combines based in Japan and Zurich.

Mice memory implants, augmented reality trends, predictive policing, more

Scientists have created a false memory in mice by manipulating neurons that bear the memory of a place. The work further demonstrates just how unreliable memory can be. It also lays new ground for understanding the cell behavior and circuitry that controls memory, and could one day help researchers discover new ways to treat mental illnesses influenced by memory.

Augmented reality blurs the line between the virtual and the real-world environment. This often leaves users unable to tell the difference between real-world experience and computer-generated experience. It creates an interactive world in real time, and using this technology businesses can give customers the opportunity to experience their products and services as if they were real, right from their current dwelling.

AR technology imposes computer-generated sensory input on the real-world view, changing what we see. It can use any kind of object to alter our senses; the enhancements usually include sound, video, graphics, and GPS data. Its potential is tremendous, as developers have only just started exploring the world of augmented reality. However, you must not confuse virtual reality with augmented reality, as there is a stark difference between them. Virtual reality, as the name suggests, is not real; it is a made-up world. Augmented reality, on the other hand, enhances the real world, providing an augmented view of reality. The enhancements can be minor or major, but AR technology only changes how the real world around the user looks.

Augmentedrealitytrends.com: Why augmented reality and why your prime focus is on retail industry?

SeeMore Interactive: We recognize the importance of merging brick-and-mortar retail with cloud-based technology to create the ultimate dynamic shopping experience. It’s simply a matter of tailoring a consumer’s shopping experience based on how he or she wants to shop; the ability to research reviews, compare prices, receive new merchandise recommendations, share photos and make purchases while shopping in-store or from the comfort of their home.

Deep learning is based on neural networks, simplified models of the way clusters of neurons act within the brain; such models were first proposed in the 1950s. The difference now is that new programming techniques combined with the incredible computing power we have today are allowing these neural networks to learn on their own, just as humans do. The computer is given a huge pile of data and asked to sort the information into categories on its own, with no specific instruction. This is in contrast to previous systems that had to be programmed by hand. By learning incrementally, the machine can grasp the low-level stuff before the high-level stuff. For example, after sorting through 10,000 handwritten letters and grouping them into like categories, the machine can then move on to entire words, sentences, signage, etc. This is called “unsupervised learning,” and deep learning systems are very good at it.
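
The "unsupervised learning" idea in that passage — grouping unlabeled data into like categories with no specific instruction — can be illustrated with k-means clustering, a far simpler relative of the deep networks the article describes. The data below is invented for the example:

```python
import random

def kmeans(points, k, iterations=20):
    """Sort unlabeled numbers into k groups: assign each point to its
    nearest centroid, then move each centroid to its group's mean."""
    random.seed(0)                       # fixed seed so the sketch is repeatable
    centroids = random.sample(points, k)
    clusters = []
    for _ in range(iterations):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: abs(p - centroids[i]))
            clusters[nearest].append(p)
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids, clusters

# Two obvious "categories" of numbers; no labels are given
data = [1.0, 1.2, 0.8, 1.1, 9.0, 9.3, 8.7, 9.1]
centroids, clusters = kmeans(data, k=2)
```

With no instruction, the algorithm settles on centroids near 1.0 and 9.0, recovering the two groups on its own. Deep learning systems do something conceptually similar, but they learn layered features rather than simple averages.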

Intelligent policing can convert these modest gains into significant reductions in crime. Cops working with predictive systems respond to call-outs as usual, but when they are free they return to the spots which the computer suggests. Officers may talk to locals or report problems, like broken lights or unsecured properties, that could encourage crime. Within six months of introducing predictive techniques in the Foothill area of Los Angeles, in late 2011, property crimes had fallen 12% compared with the previous year; in neighbouring districts they rose 0.5% (see chart). Police in Trafford, a suburb of Manchester in north-west England, say relatively simple and sometimes cost-free techniques, including routing police driving instructors through high-risk areas, helped them cut burglaries 26.6% in the year to May 2011, compared with a decline of 9.8% in the rest of the city.

Although they may all look very different, the cities of the future share a new way of doing things, from sustainable buildings to walkable streets to energy-efficient infrastructure. While some are not yet complete – or even built – these five locations showcase the cutting edge of urban planning, both in developing new parts of an existing metropolitan area and building entirely new towns. By 2050, it is forecast that 70% of the world’s population will live in cities. These endeavours may help determine the way we will live then, and in decades beyond.

Mention thorium—an alternative fuel for nuclear power—to the right crowd, and faces will alight with the same look of spirited devotion you might see in, say, Twin Peaks and Chicago Cubs fans. People love thorium against the odds. And now Bill Gates has given them a new reason to keep rooting for the underdog element.

TerraPower, the Gates-chaired nuclear power company, has garnered attention for pursuing traveling wave reactor tech, which runs entirely on spent uranium and would rarely need to be refueled. But the concern just quietly announced that it's going to start seriously exploring thorium power, too.

Google might have put the kibosh on allowing x-rated apps onto Glass (for now) but that hasn't stopped the porn industry from doing what they do best: using new technology to enhance the, um, adult experience. The not yet titled film stars James Deen and Andy San Dimas.

There has always been a basic split in machine vision work. The engineering approach tries to solve the problem by treating it as a signal detection task using standard engineering techniques. The more "soft" approach has been to try to build systems that are more like the way humans do things. Recently it has been this human approach that seems to have been on top, with DNNs managing to learn to recognize important features in sample videos. This is very impressive and very important, but as is often the case the engineering approach also has a trick or two up its sleeve.

  • From Google Research:

We demonstrate the advantages of our approach by scaling object detection from the current state of the art, involving several hundred or at most a few thousand object categories, to 100,000 categories requiring what would amount to more than a million convolutions. Moreover, our demonstration was carried out on a single commodity computer requiring only a few seconds for each image. The basic technology is used in several pieces of Google infrastructure and can be applied to problems outside of computer vision such as auditory signal processing.

Google settles over privacy violations, Social media segregation, the era of big data, and more...

  • Google is reportedly reaching a settlement with the Federal Trade Commission over an incident in which the Internet search giant violated an agreement with the FTC by tracking Safari users' data. From the Associated Press:

Google is poised to pay a $22.5 million fine to resolve allegations that it broke a privacy promise by secretly tracking millions of Web surfers who rely on Apple's Safari browser, according to a person familiar with the settlement.

If approved by the FTC's five commissioners, the $22.5 million penalty would be the largest the agency has ever imposed on a single company.

  • Adrianne Jeffries at BetaBeat covers a BBC report on how users of specific web sites break down along racial demographics. The article misleadingly refers to "segregation" in social media, but the information and analysis by danah boyd is interesting:

Pinterest is 70 percent female and 79 percent white, according to the BBC. By contrast, black and Latino users are overrepresented on Twitter versus the general population.

Ms. Boyd theorized that there was an exodus of users from Myspace to Facebook similar to white flight to the suburbs when the U.S. desegregated schools. Facebook, the vanilla of social media sites, was approaching the makeup of the U.S. population at the time of an analysis done in 2009. That was the year that white users stopped being overrepresented and black and Latino users stopped being underrepresented.

Among companies of more than 1,000 employees in 15 out of the economy's 17 sectors, the average amount of data is a surreal 235 terabytes. That's right -- each of these companies has more info than the Library of Congress. And so, why should we care? Because data is valuable. The growth of digital networks and the networked sensors in everything from phones to cars to heavy machinery mean that data has a reach and sweep it has never had before. The key to Big Data is connecting these sensors to computing intelligence which can make sense of all this information (in pure Wall-E style, some theorists call this the Internet of Things).

  • This short post at Kethu.org presents survey data and rhetorically wonders whether social media behaviors negatively impact life enjoyment:

Consider this: 24% of respondents to one survey said they’ve missed out on enjoying special moments in person because — ironically enough — they were too busy trying to document their experiences for online sharing. Many of us have had to remind ourselves to “live in the now” — instead of worry about composing the perfect tweet or angling for just the right Instagram shot.

I’m coming to believe that classroom time is too limiting in the teaching of tools. At CUNY, we’ve seen over the years that students come in with widening gulfs in both their prior experience and their future ambitions in tools and technologies. My colleagues at CUNY, led by Sandeep Junnarkar, have implemented many new modules and courses to teach such topics as data journalism (gathering, analysis, visualization) and familiarity with programming.

Note well that I have argued since coming to CUNY that we should not and cannot turn out coders. I also do not subscribe to the belief that journalism’s salvation lies in hunting down that elusive unicorn, the coder-journalist, the hack-squared. I do believe that journalists must become conversant in technologies, sufficient to enable them to (a) know what’s possible, (b) specify what they want, and (c) work with the experts who can create that.

in medias res: bridging the "time sap" gap, DIY politics, Google thinks you're stupid, and more

  • When researchers started using the term "digital divide" in the 1990s they were referring to an inequality of access to the Internet and other ICTs. Over time the emphasis shifted from unequal access to disparities in technological competency across socioeconomic sectors. The new manifestation of the digital divide, according to a New York Times article, is reflected in whether time on the Internet is spent being productive or wasting time:

As access to devices has spread, children in poorer families are spending considerably more time than children from more well-off families using their television and gadgets to watch shows and videos, play games and connect on social networking sites, studies show.

The new divide is such a cause of concern for the Federal Communications Commission that it is considering a proposal to spend $200 million to create a digital literacy corps. This group of hundreds, even thousands, of trainers would fan out to schools and libraries to teach productive uses of computers for parents, students and job seekers.

A study published in 2010 by the Kaiser Family Foundation found that children and teenagers whose parents do not have a college degree spent 90 minutes more per day exposed to media than children from higher socioeconomic families. In 1999, the difference was just 16 minutes.

  • In an op-ed for the LA Times Neal Gabler writes that Obama's legacy may be disillusionment with partisan politics and a shift toward do-it-yourself democracy:

Disillusionment with partisan politics is certainly nothing new. Obama's fall from grace, however, may look like a bigger belly flop because his young supporters saw him standing so much higher than typical politicians. Yet by dashing their hopes, Obama may actually have accomplished something so remarkable that it could turn out to be his legacy: He has redirected young people's energies away from conventional electoral politics and into a different, grass-roots kind of activism. Call it DIY politics.

We got a taste of DIY politics last fall with the Occupy Wall Street sit-ins, which were a reaction to government inaction on financial abuses, and we got another taste when the 99% Spring campaign mobilized tens of thousands against economic inequality. OWS and its tangential offshoots may seem political, but it is important to note that OWS emphatically isn't politics as usual. It isn't even a traditional movement.

  • In a piece on The Daily Beast Andrew Blum, author of a new net-centric book titled Tubes: A Journey to the Center of the Internet, details the condescension and furtiveness he encountered while researching Google for his book:

Walking past a large data center building, painted yellow like a penitentiary, I asked what went on inside. Did this building contain the computers that crawl through the Web for the search index? Did it process search queries? Did it store email? “You mean what The Dalles does?” my guide responded. “That’s not something that we probably discuss. But I’m sure that data is available internally.” (I bet.) It was a scripted non-answer, however awkwardly expressed. And it might have been excusable, if the contrast weren’t so stark with the dozens of other pieces of the Internet that I visited. Google was the outlier—not only for being the most secretive but the most disingenuous about that secrecy.

After my tour of Google’s parking lot, I joined a hand-picked group of Googlers for lunch in their cafeteria overlooking the Columbia River. The conversation consisted of a PR handler prompting each of them to say a few words about how much they liked living in The Dalles and working at Google. (It was some consolation that they were treated like children, too.) I considered expressing my frustration at the kabuki going on, but I decided it wasn’t their choice. It was bigger than them. Eventually, emboldened by my peanut-butter cups, I said only that I was disappointed not to have the opportunity to go inside a data center and learn more. My PR handler’s response was immediate: “Senators and governors have been disappointed too!”

When news reports focus on individuals and their stories, rather than simply facts or policy, readers experience greater feelings of compassion, said Penn State Distinguished Professor Mary Beth Oliver, co-director of the Media Effects Research Laboratory and a member of the Department of Film-Video and Media Studies. This compassion also extends to feelings about social groups in general, including groups that are often stigmatized.

"Issues such as health care, poverty and discrimination all should elicit compassion," Oliver said. "But presenting these issues as personalized stories more effectively evokes emotions that lead to greater caring, willingness to help and interest in obtaining more information."

The emphasis on "personalized stories" reminds me of Zillmann's exemplification theory, though the article makes no mention of exemplification.

The problem with living through a revolution is that you've no idea how things will turn out. So it is with the revolutionary transformation of our communications environment driven by the internet and mobile phone technology. Strangely, our problem is not that we are short of data about what's going on; on the contrary, we are awash with the stuff. This is what led Manuel Castells, the great scholar of cyberspace, to describe our current mental state as one of "informed bewilderment": we have lots of information, but not much of a clue about what it means.

If, however, you're concerned about things such as freedom, control and innovation, then the prospect of a world in which most people access the internet via smartphones and other cloud devices is a troubling one. Why? Because smartphones (and tablets) are tightly controlled, "tethered" appliances. You may think that you own your shiny new iPhone or iPad, for example. But in fact an invisible chain stretches from it all the way back to Apple's corporate HQ in California. Nothing, but nothing, goes on your iDevice that hasn't been approved by Apple.

In medias res: end-of-the-semester reading list

Due to end-of-the-semester activities, posting has been slow for the last couple of weeks. But my exams are finished and I've submitted grades, so here's a celebratory news roundup:

In an interview published Sunday, Google’s co-founder cited a wide range of attacks on “the open internet,” including government censorship and interception of data, overzealous attempts to protect intellectual property, and new communication portals that use web technologies and the internet, but under restrictive corporate control.

There are “very powerful forces that have lined up against the open internet on all sides and around the world,” says Brin. “I thought there was no way to put the genie back in the bottle, but now it seems in certain areas the genie has been put back in the bottle."

The post-social world is an “attention economy.” If you don’t have engagement, you don’t have attention and if you don’t have attention – well you don’t have anything really.

In the 1970s, the scholar Herbert Simon argued that "in an information-rich world, the wealth of information means a dearth of something else: a scarcity of whatever it is that information consumes. What information consumes is rather obvious: it consumes the attention of its recipients."

His arguments give rise both to the notion of "information overload" and to the "attention economy". In the attention economy, people's willingness to distribute their attention to various information stimuli creates value for said stimuli. Indeed, the economic importance of advertisements is predicated on the notion that getting people to pay attention to something has value.

What three trends are likely to have the most impact on international relations over the next decade, and which could help us anticipate global political crises? At the top of my news feed are items about who is in jail and why, rigged elections, and social media.

School shootings and domestic terrorism have proliferated on a global level. In recent months there have been school shootings in Finland, Germany, Greece, and other countries as well as the United States. Although there may be stylistic differences, in all cases young men act out their rage through the use of guns and violence to create media spectacles and become celebrities-of-the-moment.

Class dismissed, have a great summer!

Society of the spectacles: Varying views on Google's goggles

This week Google released a video depicting what it might be like to wear its augmented reality glasses, known as Project Glass: [youtube http://www.youtube.com/watch?v=9c6W4CCU9M4&w=560&h=315]

A bloke named Tom Scott released his own vision of what the Google goggle experience might be like, envisioning technologically-enhanced ways of getting injured:

[youtube http://www.youtube.com/watch?v=t3TAOYXT840&w=560&h=315]

YouTuber rebelliouspixels remixed the original Google video to depict the Google goggle experience with the ADdition of Google adverts:

[youtube http://www.youtube.com/watch?v=_mRF0rBXIeg&w=560&h=315]

Via a link on Metafilter I came across this delightful video posted a year ago on Vimeo by Keiichi Matsuda. Titled Augmented (hyper)Reality: Domestic Robocop, the video is a fantastic POV depiction of a possible experience with augmented reality eyewear.

More coverage of Project Glass and its AI elements from CNet:

For the most part, the augmented-reality glasses do what a person could do with a smartphone, such as look up information and socialize. But the demo also shows glimpses of an artificial-intelligence (AI) system working behind the scenes. It's the AI system that could make mobile devices, including wearable computers, far more powerful and take on more complex tasks, according to an expert.

Background image of New Songdo by Curry Chandler.