Edit Photos In Augmented Reality

Common photo-editing software allows users to move objects—a flower from the left side of a photo to the right, for example—along a plane, but that’s about all they can do. Computer scientists at Carnegie Mellon and the University of California at Berkeley have developed the first application that lets users scale, rotate, and move onscreen objects in 3-D space. Normally, such tasks would require hours of work and professional skills, but the software does them in minutes, and with very little processing power. The software is in the experimental stage now, but enterprising creatives can download a prototype version for free.
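At bottom, this kind of edit amounts to applying a scale, a rotation, and a translation to the object’s recovered 3-D geometry before re-rendering it into the photo. As a rough illustration only (not the researchers’ actual method), here is a minimal Python sketch that applies those three operations to a toy set of 3-D points:

```python
import numpy as np

def transform(vertices, scale=1.0, yaw_deg=0.0, translate=(0.0, 0.0, 0.0)):
    """Scale, rotate (about the vertical axis), and translate 3-D points.

    vertices: (N, 3) array of object points; returns the moved points.
    Purely illustrative -- a real editor also has to infer the geometry
    from the photo and re-light the object after moving it.
    """
    t = np.radians(yaw_deg)
    # Rotation about the y (up) axis.
    R = np.array([[ np.cos(t), 0.0, np.sin(t)],
                  [ 0.0,       1.0, 0.0      ],
                  [-np.sin(t), 0.0, np.cos(t)]])
    return (scale * vertices) @ R.T + np.asarray(translate)

# Toy example: a unit cube's corners, doubled in size, turned 45 degrees,
# and shifted one unit to the right.
cube = np.array([[x, y, z] for x in (0, 1) for y in (0, 1) for z in (0, 1)], float)
print(transform(cube, scale=2.0, yaw_deg=45.0, translate=(1.0, 0.0, 0.0)))
```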

I Got High on an Augmented Reality Drug at the Whitney Museum

For the Whitney’s final gala at its home of nearly five decades, Marcel Breuer’s Brutalist masterpiece on Madison and Seventy-fifth, before the museum is relocated to its bright new Renzo Piano-designed digs in the Meatpacking District, artist Will Pappenheimer created an augmented reality “virtual designer drug,” called Proxy, 5-WM2A, to calm the moving nerves of the artists and patrons in attendance.

For the uninitiated: augmented reality is the supplementation of our live physical world with computer-generated media. This is different from virtual reality, which is the replacement of our live physical world with an alternate world created through computer-generated media. If I walked out of my apartment in Chelsea wearing Google Glass and said, “Take me to the Whitney,” and a big blue arrow flashed in my line of vision, pointing the way north on Eighth Avenue, that would be augmented reality (AR). If, on the other hand, I stayed in my apartment, put on an Oculus Rift headset and said, “Take me to the Whitney,” and images and sounds created the effect that I had been transported to the actual museum, that would be virtual reality (VR).

At the party, presented by Louis Vuitton in honor of every living artist who has ever had a solo exhibition at the Breuer Building, I, along with some friends—everybody was doing it, Mom!—decided to try some Proxy. It was my very first augmented reality “drug” experience, and the effects were positively mystifying.

When I opened the Layar app (the gateway drug, if you will), and located and scanned the proper QR code (with the help of my friend Alyssa), Proxy transformed the Breuer Building’s imposing space into a festival of kaleidoscopic colors and shapes right before my very iPhone. A sparkling red fish swam across a wall where Jasper Johnses and Edward Hoppers once hung. An architect’s blueprint bordered by a lone dancing pink flower was swiftly bifurcated by an army of spinning green and blue disks that I suspect would have made Alexander Calder smile. Then our guest of honor, the Breuer Building, made a surprise appearance as a languidly rotating digital model on the right side of my screen—only to be summarily splattered with stars and specks of digital paint as though by the wrist of a virtual Jackson Pollock.

And those are just a few of the visual effects I experienced. I will admit that I had my doubts about the potential efficacy of a “drug” that claims to enter the mind through the phone-brain barrier. But by the end of the night, I was truly impressed. It felt as if one man’s original interpretation of the Whitney’s history at the Breuer Building had been effectively consolidated into a supplemental patina of playful digital reality that I could enter at will by merely glancing at my phone.

When I spoke with Pappenheimer, who was patiently demo-ing Proxy and assisting revelers with their technical issues all night long, I learned that this had been his intention. “Since my work is site-specific,” he told me, “I always start asking how I can bring out some aspect of the situation and make it part of the human moment, the social moment.” For this installation, his onsite research inspired him to model Proxy’s visual pyrotechnics on the category of dissociative drugs. “I found there was this whole category of dissociative drugs, and what they do is cause you to detach from and withdraw from your environment. So this is the dissociative class drug for the Whitney, so that we can all begin to withdraw our deep love and memories from the Breuer Building and move to a new place.”

What moved me in particular was Pappenheimer’s juxtaposition of sheer frivolity—multicolored rays, stars, dots, childish figurines, a wandering rainbow tunnel, a veritable solar system of radiant colors—with the heavy, concrete gravitas of the Breuer Building. Proxy’s unabashed levity seemed to underscore the point that, during the 48 years that the building has housed the Whitney, the art within its walls has never been much constrained by it. Just last month, at the building’s very last Whitney exhibition, a career-spanning Jeff Koons retrospective, we witnessed how effectively this austere structure supports and frames works like Balloon Dog and Hulk Elvis.

I will miss seeing art at the Breuer Building, but Proxy certainly eased the pain.

Microsoft’s Augmented Reality Ads Turn Bus Shelters into Video Game Arenas

For new video game franchise, Microsoft re-engineers out-of-home advertising

What’s on your bus stop shelter wall? A mangled schedule, or a torn map? Perhaps you, like many commuters, only venture inside when the wind is howling, the rain is pouring, or you’re desperately looking for a seat.

Microsoft is ready to change the bus stop from a shelter against inclement weather into a digital experience. To promote its newest shooter video game, Sunset Overdrive, Microsoft has outfitted three bus stops in San Francisco, London, and Melbourne with augmented reality technology that immerses viewers in the game’s digital environment.

The advertisements look like regular digital screens from a distance, but up close, the augmented reality display sets figures in motion across the screen, making them look as if they are about to leap out of it.

The ads, which are supported by Clear Channel Outdoor and media agency Empowering Media, will run for one month. Their augmented reality displays were produced by Grand Visual, a digital out-of-home agency that has previously incorporated augmented reality, motion graphics, and color recognition technology in outdoor advertising campaigns for Heineken, Pepsi, and Tropicana.

Sunset Overdrive, which was developed by Insomniac Games for Xbox One, is the first installment of a new fast-action shooting game franchise that pits players against mutant attackers. The outdoor innovation reflects Microsoft’s attempt to make a splash in a saturated video game market. Hoping to cut through the noise of video game advertising, as well as the noise of the hectic outdoor environment, Microsoft is betting that its augmented reality billboards will not only establish a new franchise but also usher in a new era of re-engineered outdoor advertising.

Doctors find Google Glass blocks peripheral vision (Duh!)

Google Glass Might Curb Your Vision

Some peripheral sight may be obstructed while wearing device, researchers find

TUESDAY, Nov. 4, 2014 (HealthDay News) — Since its initial launch in 2013, Google Glass has been touted as a revolutionary entry into the world of “smart” eyewear.

The promise: a broadly expanded visual experience with on-the-move, hands-free access to photos, videos, messaging, web-surfing and apps.

The catch: a small new study suggests that the structure of the glasses (rather than the software) may curtail natural peripheral vision, creating blind spots that undermine safety while engaging in routine tasks, such as driving or walking.

“I am very pro new technology,” said Dr. Tsontcho Ianchulev, lead author of a research letter concerning Google Glass, and a clinical associate professor in the department of ophthalmology at University of California, San Francisco. “I’m an aficionado of anything new or novel, and I myself was an early adopter of Google Glass,” he added.

“But I almost got into a car accident when I was driving with it. And the device was even turned off at the time. So, that really alerted me to how much my peripheral vision seemed to be blocked by the frame,” he continued.

“What we’ve done is test the glasses in a very simple low-budget way, using standard ophthalmology to compare it to regular eyewear,” Ianchulev said. “And we found that the frame of Google Glass cuts out a portion of your vision that prevents a user from seeing things on the right side of their visual field.”

In a statement, Google said the findings should not surprise users and that the device remains safe.

“Put on your favorite shades, glasses, baseball hat, or hoodie, and you’ll quickly see that this study tells us what we already know; wearing something on your face or head may affect your peripheral vision,” the company said. “From the beginning, the Glass team has worked closely with a range of experts to develop a device that is safe for use, and after extensive study they have not found any safety issues when it’s used correctly.”

Ianchulev and colleagues reported their findings in the Nov. 5 issue of the Journal of the American Medical Association.

To examine the problem, the investigators outfitted three individuals with 20/20 corrected vision, and gave each an hour to become comfortable with Google Glass (per Google’s own recommendations). Then, with the software turned off, each underwent standard peripheral vision testing.

The result: when compared with regular glasses, each participant experienced a “clinically meaningful” loss of vision in their upper right quadrant, the study findings showed.

In addition, the research team conducted an analysis of 132 photos (found in a Google search online) of people wearing the device. The review suggested that, given the way the glasses are typically worn, the risk of developing a blind spot is both real and common.

“Now, this was a very initial effort, based on just three participants and the follow-up analysis,” Ianchulev stressed. “Our goal is really just to open up a discussion and have the manufacturer address the impact in a substantial way, because we realized there was really very little on the topic.”

To that end, the study team has already shared their findings with Google. “We do think this is a fixable problem, because it’s a frame-wear issue, not a software problem,” said Ianchulev. “And this device wonderfully expands and extends some functions. At the same time, it seems to compromise a biological function. So we need to make sure the trade-off is appropriate, because you don’t want to find out about this problem in the statistics of the [Department of Motor Vehicles].”

Two additional vision experts emphasized the critical role peripheral vision plays in maintaining safe daily function.

“Most people do not recognize anything beyond the small center of the [visual] field, which is the only part that is in sharp focus,” said Dr. Alfred Sommers, a professor of ophthalmology and dean emeritus of the Bloomberg School of Public Health at Johns Hopkins University in Baltimore.

“But we often do respond to events happening in the periphery, which is not in sharp focus but might catch our attention, or not, like a car or person we would otherwise bump into,” Sommers said.

And that’s why there’s a potential problem with Google Glass, according to Mark Rosenfield, a professor at the State University of New York College of Optometry in New York City.

“This loss of peripheral vision due to the obstruction [of Google Glass frames] could be significant depending upon what the observer is doing while wearing the device,” Rosenfield said. “A subject who was driving, operating machinery or in motion could be severely, and dangerously, impacted by the visual field loss,” he said.

SOURCES: Tsontcho Ianchulev, M.D., M.P.H., clinical associate professor, department of ophthalmology, University of California, San Francisco; Alfred Sommers, M.D., professor, ophthalmology, and dean emeritus, Bloomberg School of Public Health, Johns Hopkins University, Baltimore; Mark Rosenfield, O.D., Ph.D., professor, State University of New York (SUNY) College of Optometry, New York City; Nov. 5, 2014, Journal of the American Medical Association; statement, Google Glass

HealthDay

MIT’s Augmented Reality Room Shows What Robots Are Thinking

Is it me, or does this seem very similar to Niantic Labs’ Ingress? Instead of robots, it uses people as agents.

Most of the time, most of us have absolutely no idea what robots are thinking. Someone who builds and programs a robot does have some idea how that robot is supposed to act based on certain inputs, but as sensors get more ubiquitous and the software that manages them and synthesizes their data to make decisions gets more complex, it becomes increasingly difficult to get a sense of what’s actually going on. MIT is trying to address that issue, and they’re using augmented reality to do it.

In an experiment, the researchers used their AR system to place obstacles—like human pedestrians—in the path of robots, which had to navigate through a virtual city. The robots had to detect the obstacles and then compute the optimal route to avoid running into them. As the robots did that, a projection system displayed their “thoughts” on the ground, so researchers could visualize them in real time. The “thoughts” consisted of colored lines and dots—representing obstacles, possible paths, and the optimal route—that were constantly changing as the robots and pedestrians moved.
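The article does not include MIT’s code, but the underlying idea, planning a route around obstacles and exposing the planner’s intermediate state for display, can be sketched in a few lines. The grid, the obstacle positions, and the text rendering below are illustrative assumptions, not part of the MIT system:

```python
from collections import deque

# Toy sketch of the idea (not MIT's code): plan a route on a grid and render
# the robot's "thoughts" -- obstacles, cells it considered, and the chosen path.
GRID_W, GRID_H = 10, 6
OBSTACLES = {(3, 1), (3, 2), (3, 3), (6, 2), (6, 3), (6, 4)}   # e.g. pedestrians
START, GOAL = (0, 3), (9, 3)

def shortest_path(start, goal):
    """Breadth-first search over the grid; returns the optimal route and every
    cell the planner examined along the way (assumes the goal is reachable)."""
    frontier, came_from = deque([start]), {start: None}
    while frontier:
        cell = frontier.popleft()
        if cell == goal:
            break
        x, y = cell
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if (0 <= nxt[0] < GRID_W and 0 <= nxt[1] < GRID_H
                    and nxt not in OBSTACLES and nxt not in came_from):
                came_from[nxt] = cell
                frontier.append(nxt)
    path, cell = [], goal
    while cell is not None:            # walk back from the goal to the start
        path.append(cell)
        cell = came_from[cell]
    return path[::-1], set(came_from)

path, explored = shortest_path(START, GOAL)
for y in range(GRID_H):                # '#' obstacle, '*' chosen route, '.' considered
    print(''.join('#' if (x, y) in OBSTACLES else
                  '*' if (x, y) in path else
                  '.' if (x, y) in explored else ' '
                  for x in range(GRID_W)))
```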

On some level, it should be possible to trace back every decision a robot makes to some line of code. This is part of what’s so nice about robots: there’s a sense that everything they do is both understandable, and controllable. It’s different in practice, of course, but the idea here is that by seeing in real-time when and how a robot decides to take the actions that it does, it’ll be a lot simpler to debug and get it doing things that reliably make sense.

As to the other thing that they talked about in the vid, it’s a big floor projection that can be used to test vision systems. From the press release:

In addition to projecting a drone’s intentions, the researchers can also project landscapes to simulate an outdoor environment. In test scenarios, the group has flown physical quadrotors over projections of forests, shown from an aerial perspective to simulate a drone’s view, as if it were flying over treetops. The researchers projected fire on various parts of the landscape, and directed quadrotors to take images of the terrain — images that could eventually be used to “teach” the robots to recognize signs of a particularly dangerous fire.

This is marginally more useful, if somewhat less exciting, than bringing a bunch of plants into your lab and then setting them on fire. Although, I’m sure that’s a thing that’s been done before, intentionally or not.
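For a sense of what “recognizing signs of a particularly dangerous fire” in aerial imagery might look like at its very crudest, here is a hedged sketch that simply flags strongly red-orange pixels in a synthetic frame. The thresholds and the toy image are assumptions; a real system would learn from labeled imagery rather than rely on fixed color rules:

```python
import numpy as np

def fire_fraction(rgb_image, r_min=180, g_max=140, b_max=100):
    """Crude 'fire' cue: fraction of pixels that are strongly red-orange.

    rgb_image: (H, W, 3) uint8 array; the thresholds are illustrative only.
    """
    r, g, b = rgb_image[..., 0], rgb_image[..., 1], rgb_image[..., 2]
    mask = (r >= r_min) & (g <= g_max) & (b <= b_max)
    return mask.mean()

# Synthetic aerial frame: mostly green "forest" with one bright orange patch.
frame = np.zeros((100, 100, 3), np.uint8)
frame[..., 1] = 120                       # green canopy everywhere
frame[40:60, 40:60] = (255, 100, 30)      # orange hot spot
print(f"suspected fire covers {fire_fraction(frame):.1%} of the frame")
```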

See how Google’s new ‘Project Tango’ smartphones sense the world

A computer vision application of AR technology.


Google’s surprise reveal of Project Tango, a smartphone equipped with a variety of cameras and vision sensors that provides a whole new perspective on the world around it, left us with quite a few questions about how this device actually works and what it’s for. Google says the Tango smartphone can capture a wealth of data never before available to app developers, including depth- and object-tracking and real-time 3D mapping. And it’s no bigger or more dependent on power than your typical smartphone. We sat down with Remi El-Ouazzane, CEO of Movidius, the company that developed some of the technology used in Tango, to get a better idea of what this device can do and what it means for applications of the future. We also got a chance to use the device Google will be delivering to developers next month.

Movidius has been working on computer vision technology for the past seven years — it developed the processing chips used in Project Tango, which Google paired with sensors and cameras to give the smartphone the same level of computer vision and tracking that formerly required much larger equipment. In fact, El-Ouazzane says the technology isn’t very different at all from what NASA’s Exploration Rover used to map the surface of Mars a decade ago, but instead of being in a 400-pound vehicle, it fits in the palm of your hand.


The phone is equipped with a standard 4-megapixel camera paired with a special combination RGB and IR sensor and a lower-resolution image-tracking camera. Those image sensors give the smartphone a similar perspective on the world as you and I, complete with spatial awareness and a perception of depth. They feed data to Movidius’ custom Myriad 1 low-power computer-vision processor, which can then crunch the data and feed it to apps through a set of APIs.


But what can you do with all of that data? That’s really up to app developers and is the reason Google is giving out 200 of these prototype devices to developers in the coming weeks. The devices that we saw were equipped with a few demonstration apps to show off some of the hardware’s capabilities. One of the apps was able to display a distance heat map on top of what the camera sees, layering blue colors on faraway objects and red colors on things that are close up. Another took the data from the image sensors, paired it with the device’s standard motion sensors and gyroscopes to map out paths of movement down to 1 percent accuracy, and then plotted that onto an interactive 3D map.
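The heat-map demo is easy to picture as a straightforward mapping from depth values to colors. Here is a minimal sketch of that idea; the near/far clipping distances and the toy depth frame are assumptions, not details of Google’s demo app:

```python
import numpy as np

def depth_to_heatmap(depth_m, near=0.5, far=4.0):
    """Color a depth map the way the demo is described: red for close objects,
    blue for far ones. The near/far limits (in meters) are assumed values.

    depth_m: (H, W) array of distances in meters; returns (H, W, 3) uint8 RGB.
    """
    t = np.clip((depth_m - near) / (far - near), 0.0, 1.0)    # 0 = near, 1 = far
    rgb = np.zeros(depth_m.shape + (3,), np.uint8)
    rgb[..., 0] = ((1.0 - t) * 255).astype(np.uint8)          # red fades with distance
    rgb[..., 2] = (t * 255).astype(np.uint8)                  # blue grows with distance
    return rgb

# Toy depth frame: a ramp from 0.5 m on the left edge to 4 m on the right.
depth = np.tile(np.linspace(0.5, 4.0, 8), (4, 1))
print(depth_to_heatmap(depth)[0])   # leftmost pixels come out red, rightmost blue
```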

Perhaps the most impressive demo was an app that was able to capture a 3D model of a scene in real time and draw it on the display as you moved the device around the room. It’s pretty amazing to see a three-dimensional model of the table in front of you get drawn in real time in just a few seconds by a smartphone.


U.S. airbase commander bans augmented reality games as possible terrorist threat

The government has a long history of warning of the dangers of video games. But video games as a national security risk?

That’s the latest concern to come out of an impromptu investigation launched at Colorado Springs’ Schriever Air Force Base. The investigation was kicked off after a base patrol questioned a visitor who was taking pictures near the base’s 9/11 display, according to the Schriever Sentinel.

It turns out the man was playing Ingress, the augmented reality game created by Google’s Niantic Labs. In the game, players capture virtual portals located in real locations around the world using their smartphones.
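Mechanically, “capturing a portal at a real location” comes down to a proximity check against the phone’s GPS fix. A small sketch of that idea follows; the coordinates and the 40-meter capture radius are made-up assumptions for illustration, not Ingress’s actual rules:

```python
from math import asin, cos, radians, sin, sqrt

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two latitude/longitude points, in meters."""
    R = 6371000.0                       # mean Earth radius
    p1, p2 = radians(lat1), radians(lat2)
    dp, dl = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dp / 2) ** 2 + cos(p1) * cos(p2) * sin(dl / 2) ** 2
    return 2 * R * asin(sqrt(a))

def in_capture_range(player, portal, radius_m=40.0):
    """True if the player's GPS fix lies within the (assumed) capture radius."""
    return haversine_m(*player, *portal) <= radius_m

# Made-up coordinates: a player standing roughly 30 m from a portal.
portal = (38.8030, -104.5260)
player = (38.8032, -104.5258)
print(in_capture_range(player, portal))   # -> True
```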

After digging into the basics of the game, a 50th Security Forces Squadron investigator shared his information with the base’s special investigations office and then with security forces organizations at other bases in the area.

The result, as of this month, is that Ingress and other geo-location games like it are banned from the base. Base personnel are prohibited from playing those games or from escorting anyone onto the base to play them.

Col Bill Liquori, 50th Space Wing Commander, told the base paper that the games can create operational security issues. The primary purpose of the 50th Space Wing is to track and maintain the country’s military satellites. The 50th Space Wing also manages the Global Positioning System.

Liquori told the paper that the nature of the game, which includes taking pictures, could create an opportunity to “provide a cover for surveillance of a possible terrorist target.”

We’ve reached out to Schriever Air Force Base and Google to see how widespread the ban is on bases and what sort of impact it might have on Ingress, and will update this story when they respond.

Update: A Schriever Air Force Base official confirmed the ban, but said she didn’t know if it extended to any other bases.