
The Future of Virtual Reality—and Spying—Lies in a Bag of Potato Chips - Popular Mechanics

Close-Up Of Potato Chips In Plastic

Axel Bueckert / EyeEm / Getty Images

  • Researchers at the University of Washington, Seattle have found a way to recover images of the world that are reflected in a bag of potato chips.
  • Using commercially available RGB-D sensors—which are also used in the Microsoft Kinect system—the team recovered depth information about the surroundings reflected in the bag.
  • By using these reflections, the researchers could create new images of the surroundings from angles they had never directly observed. This could prove useful for AR and VR game design. Their work was published to the preprint server arXiv earlier this year.

Mirrors aren't the only shiny objects that reflect our surroundings. Turns out a humble bag of potato chips can pull off the same trick, as scientists from the University of Washington, Seattle have made it possible to recreate detailed images of the world from reflections in the snack's glossy wrapping.

The scientists took their work a step further by predicting how a room's likeness might appear from different angles, essentially "exploring" the room's reflection in a bag of chips as if they were actually present. This is analogous to a classical problem in computer vision and graphics: view synthesis, or the ability to create a new, synthetic view of a specific subject based on other images, taken at various angles.

There is distinctly detailed information hidden in the glint of light reflected from a lustrous object. Scientists can deduce the object's shape, composition, and condition—like whether it's wet or dry, round or flat, rough or polished—all from the patterns of that light. Usually, these reflected images are so distorted that the human eye doesn't even notice they're there.

"Remarkably, images of the shiny bag of chips contain sufficient clues to be able to reconstruct a detailed image of the room, including the layout of lights, windows, and even objects outside that are visible through windows," the researchers noted in a paper published to the preprint server arXiv earlier this year.

These scientists were inspired by Massachusetts Institute of Technology researchers, who proved back in 2014 that it's possible to turn everyday objects, like a bag of Utz Potato Crab Chips, into "visual microphones" by using high-speed video to study minute vibrations in the object. Those vibrations can then be analyzed to partially recover the sound that produced them.

To create the environmental reconstructions, which the researchers call "specular reflectance maps," or SRMs, they used handheld RGB-D sensors to take 360-degree video of certain glassy, reflective objects. The sensors fuse depth information recorded at each pixel with regular RGB imaging, which uses red, green, and blue light to create an array of colors.
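To get a feel for what an RGB-D sensor provides, consider that each pixel pairs a color with a depth reading, and that pair can be lifted into a colored 3-D point cloud given the camera's intrinsics. The sketch below is a generic pinhole back-projection, not the researchers' actual pipeline; the function name and parameters are illustrative.

```python
import numpy as np

def backproject_rgbd(depth, rgb, fx, fy, cx, cy):
    """Lift an RGB-D frame into a colored 3-D point cloud.

    depth: (H, W) array of depths in meters (0 = no reading)
    rgb:   (H, W, 3) color image
    fx, fy, cx, cy: pinhole camera intrinsics
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx          # standard pinhole back-projection
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    colors = rgb.reshape(-1, 3)
    valid = points[:, 2] > 0       # drop pixels with no depth reading
    return points[valid], colors[valid]
```

Point clouds like this give the geometry of the shiny surface itself, which is what makes it possible to reason about where each reflected ray came from.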

One example of a consumer RGB-D sensor lies in Microsoft's Kinect system, which the company introduced a decade ago for its Xbox 360 gaming console. The Kinect, which powers movement-based games like Just Dance, allows players to "grab" items, hit dance moves, or fight opponents through depth perception.

In one instance, the team recorded footage of a bag of Corn Cho, a type of puffy corn chips covered in chocolate. Using what they've called an SRM estimation algorithm, the scientists turned the morphed reflections into approximations of what the real environment might look like. While the simulated images are still muddy and distorted, the researchers were able to recover great detail, like the image of a man reflected in a window.
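The paper's actual SRM estimation involves jointly optimizing over the camera path and the environment, but the core idea can be sketched generically: mirror each view ray about the surface normal (recoverable from the depth data) and accumulate the observed color into an environment map indexed by the reflected direction. Everything here—function name, latitude/longitude parameterization, resolution—is an illustrative assumption, not the authors' code.

```python
import numpy as np

def accumulate_srm(points, normals, colors, cam_pos, res=64):
    """Accumulate observed colors into an environment map indexed by
    the mirror-reflected view direction (a specular reflectance map).

    points:  (N, 3) surface points on the shiny object
    normals: (N, 3) unit surface normals (e.g., estimated from depth)
    colors:  (N, 3) colors observed at those points
    cam_pos: (3,) camera position for this frame
    res:     vertical resolution of the (lat, lon) environment map
    """
    srm = np.zeros((res, 2 * res, 3))
    weight = np.zeros((res, 2 * res, 1))

    view = cam_pos - points
    view /= np.linalg.norm(view, axis=1, keepdims=True)
    # Mirror the view ray about the surface normal: r = 2(n.v)n - v
    ndotv = np.sum(normals * view, axis=1, keepdims=True)
    refl = 2.0 * ndotv * normals - view

    # Convert reflected directions to (latitude, longitude) map pixels
    theta = np.arccos(np.clip(refl[:, 2], -1.0, 1.0))   # polar angle
    phi = np.arctan2(refl[:, 1], refl[:, 0]) + np.pi    # azimuth in [0, 2*pi)
    i = np.minimum((theta / np.pi * res).astype(int), res - 1)
    j = np.minimum((phi / (2 * np.pi) * 2 * res).astype(int), 2 * res - 1)

    np.add.at(srm, (i, j), colors)
    np.add.at(weight, (i, j), 1.0)
    return srm / np.maximum(weight, 1)   # average color per direction
```

Accumulating many frames of the curved bag this way fills in the environment map, which is why a crinkled, highly curved surface—reflecting the room from many directions at once—is actually helpful rather than a hindrance.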

[Image: reconstruction results, University of Washington, Seattle]

The scientists' algorithm works on virtually any shiny object. In another scenario, they used a porcelain cat statue to recover the alignment of fluorescent lights on the ceiling. It took the algorithm an average of two hours per object to turn the reflections into representations of the environment.

This work could prove problematic, though. Child predators or stalkers could download an image from a social media website, like Instagram, without the creator's consent, then deploy the new algorithm to find out private information about where the image's creator lives. Luckily, Instagram images don't contain depth information (yet), so this dystopian use of the algorithm isn't likely at the moment.

In video game design—particularly in augmented reality and virtual reality applications—shiny objects look a bit off because it's challenging to reproduce the reflections from every angle from which you can view that object. The researchers found that by first deconstructing the reflections, it's easier to create realistic renderings of the reflective object in simulated settings. So the hope is to see better gaming graphics in the future.

In the meantime, to test this theory out, pay attention to mirrors, windows, and other shiny objects in the games you play. There's a pretty good chance their reflections are nowhere near as advanced as those seen in a simple bag of potato chips.
