Fire up your GPU: VR takes hold at Nvidia’s GTC 2016
By David Cardinal
As all of us who are excited about the launch of Oculus and Vive are learning, virtual reality is all about GPUs. While many PCs have enough CPU horsepower and memory to handle a VR workload, very few have GPUs that are up to even the minimum suggested specs for VR playback — let alone development. This nearly insatiable need for GPU horsepower makes VR a natural area of focus for Nvidia, as it showed at this year’s GPU Technology Conference (GTC). With 37 VR-related sessions, dozens of demos, and a good portion of CEO Jen-Hsun Huang’s keynote dedicated to VR, it was, along with deep learning and autonomous vehicles, one of the three biggest themes at the conference.
Look at me: Eye-catching “real world” VR experiences
Much of the excitement around VR is built on amazing demos of virtual worlds. Perhaps fittingly, virtual worlds are actually easier to portray in VR than real ones, because it is possible to model exactly what a person would see from any point of view, and under any lighting. Creating stereo views is also relatively simple — basic stereo camera support can be added in game engines such as Unity merely by checking a box. Now, though, we’re starting to see full-fidelity experiences based on modeling real-world locations. Huang showcased two of the most elaborate during his keynote: Everest VR and Mars 2030.
Everest VR was put together by Solfar and RVX using 108 billion pixels of actual photographs, which were turned into 10 million polygons that in turn drive an essentially photorealistic VR experience. Game-like physics are used to generate drifting snow for additional “reality.” What makes immersive experiences like Everest much more powerful than an ordinary 360-degree video is that the user is not limited to a particular location, but can move through the environment.
Similarly, Mars 2030 is largely based on massive numbers of
photographs our spacecraft have sent back from there, allowing NASA,
with help from Fusion VR, to model eight square kilometers of the
planet’s surface. The studio then went to work enhancing the model,
including creating 1 million hand-sculpted rocks, and 3D versions of
mile-long underground lava tube caves. Hoping for some star appeal,
Huang brought Apple co-founder Steve Wozniak up on screen so we could
see him be the first ever to experience Mars 2030. Everything went well for the first couple of minutes, until Woz said he felt dizzy and needed to stop before he fell out of his chair. That was definitely awkward, and symptomatic of the “queasiness” issue that continues to trouble VR rollouts intermittently.
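Coming back to the point above that stereo views are the easy part: a “stereo camera” checkbox in an engine essentially just renders the scene twice, from two eye positions offset along the head’s right vector. A minimal sketch in plain Python (illustrative only, not engine code; the 0.064 m default IPD is an assumption, a commonly cited average):

```python
def eye_positions(head, forward, up, ipd=0.064):
    """Left/right eye positions for a head at `head` looking along `forward`.

    A stereo pair is just two render cameras offset along the head's
    right vector by half the interpupillary distance (IPD).
    """
    # right = forward x up (cross product), then normalize
    rx = forward[1] * up[2] - forward[2] * up[1]
    ry = forward[2] * up[0] - forward[0] * up[2]
    rz = forward[0] * up[1] - forward[1] * up[0]
    n = (rx * rx + ry * ry + rz * rz) ** 0.5
    rx, ry, rz = rx / n, ry / n, rz / n
    h = ipd / 2.0
    left = (head[0] - rx * h, head[1] - ry * h, head[2] - rz * h)
    right = (head[0] + rx * h, head[1] + ry * h, head[2] + rz * h)
    return left, right
```

Render the scene once from each returned position and you have a stereo pair; that, plus per-eye projection matrices, is essentially what the engine checkbox automates.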
Making fantasies seem real: iRay VR and iRay VR Lite
Nvidia’s iRay is already the ray-tracing engine of choice for many top 3D modeling packages. This year the company will extend it with additional capabilities to support the unique requirements of VR. Before VR, producers of computer-generated content could choose between time-consuming photorealistic rendering, like that used for feature films, and the real-time, merely plausible rendering needed for interactivity in video games. VR experiences are creating demand for the best of both: realistic, immersive experiences that are high-quality, 3D, and allow the user to move around — which means they can’t be entirely pre-rendered. Unfortunately, moving from a 3D model to a photorealistic experience is too processor-intensive to do entirely in real time, so views of the model need to be rendered, typically using ray tracing to mimic lighting and reflections in a physically accurate way. Traditional applications like movies or print only require high-resolution 2D images, but immersive VR requires the creation of an interactive experience.
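The ray tracing mentioned above comes down to firing rays from the viewpoint and solving for where they hit scene geometry. This sketch is not iRay code, just the textbook core of any ray tracer: the ray-sphere intersection test, the simplest such hit query.

```python
import math

def ray_sphere_hit(origin, direction, center, radius):
    """Distance along the ray to the nearest sphere intersection, or None.

    Substituting the ray o + t*d into the sphere equation |p - c|^2 = r^2
    gives a quadratic in t; the smaller positive root is the visible hit.
    A ray tracer fires one or more such rays per pixel, then secondary
    rays toward lights (shadows) and across surfaces (reflections).
    """
    oc = [origin[i] - center[i] for i in range(3)]
    a = sum(direction[i] * direction[i] for i in range(3))
    b = 2.0 * sum(oc[i] * direction[i] for i in range(3))
    c = sum(oc[i] * oc[i] for i in range(3)) - radius * radius
    disc = b * b - 4.0 * a * c
    if disc < 0:
        return None                     # the ray misses the sphere
    t = (-b - math.sqrt(disc)) / (2.0 * a)
    return t if t > 0 else None
```

Doing this physically accurately, for millions of rays per frame against full scene geometry, is exactly the workload that is too heavy to run entirely in real time.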
That’s where iRay VR comes in. For those willing to buy, or rent time on, a high-performance computing cluster (like Nvidia’s own DGX-1), iRay VR can generate a navigable model of a scene. The user can move around in the scene and turn their head while getting physically accurate lighting, shadows, and reflections. Nvidia demoed this with an interactive VR model of its planned headquarters that was quite convincing. Unfortunately, even the viewing computer needs to be pretty massive, requiring a frame buffer of around 24GB, like the one in the Quadro M6000, to run.
Even with a supercomputer at hand, rendering for VR requires some compromises. Lucasfilm’s Lutz Latta explained that, for example, the Millennium Falcon model used for Star Wars is made up of over 4 million polygons. That’s perfect for the ultimate in cinematic realism, when it can be rendered one frame at a time, but too complex for a Star Wars VR experience. In addition to simplifying the model, the studio has worked on a unified asset specification, so that models can be built once and then used across a variety of media, like film and VR.
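The polygon-budget problem is easy to quantify. Assuming the 90Hz refresh rate of the Vive and Rift and one rendered view per eye (standard figures for 2016 headsets, not numbers from Lucasfilm), the arithmetic looks like this:

```python
def triangle_throughput(polys_per_frame, fps=90, eyes=2):
    """Triangles per second a renderer must push at a given frame rate."""
    return polys_per_frame * fps * eyes

# The film model's 4 million polygons at VR rates:
# 4,000,000 polygons * 90 Hz * 2 eyes = 720,000,000 triangles per second,
# and that is before lighting, shading, or the rest of the scene.
```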
For those of us on slightly more limited budgets, iRay VR
Lite will allow you to upload a model to Nvidia’s servers, where they
will generate a photorealistic 360-degree stereo view — but one that you
can’t walk around. For a full-fidelity experience, even the Lite
version will take advantage of 6GB or more of frame buffer, but the
experiences will also be viewable on low-end devices like Google’s
Cardboard. Nvidia expects to have iRay VR Lite available by mid-year,
with iRay VR to follow.
iRay VR is only one part of Nvidia’s VRWorks set of tools for VR development and delivery. Other parts of VRWorks also had updates announced at the show: in particular, VRWorks SLI will provide OpenGL support across multiple GPUs, and there is now VR support in GameWorks.
Realities.io: Immersive environments made practical
Everest and Mars are both very expensive, long-development-cycle efforts, more or less the equivalent of making a feature film. That limits their creation to large organizations, and their subjects to those likely to attract millions. Startup Realities.io has developed a system that allows it to create photorealistic environments from “everyday” locations relatively quickly and inexpensively. Using around 350 photographs of a scene, it employs a process called photogrammetry to create an interactive model that the user can walk around.
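At its core, photogrammetry works like this: each matched pixel in two photos defines a ray from a known camera position into the scene, and the 3D point is where those rays (nearly) intersect. Repeat for many matched pixels across hundreds of photos and you get a dense model. A minimal midpoint-triangulation sketch, with made-up camera positions purely for illustration:

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def triangulate_midpoint(p1, d1, p2, d2):
    """3D point closest to two viewing rays (midpoint triangulation).

    Each ray is a camera position p and a direction d toward the
    matched feature. The rays rarely intersect exactly, so we take the
    midpoint of the shortest segment between them.
    """
    w0 = [p1[i] - p2[i] for i in range(3)]
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, w0), dot(d2, w0)
    den = a * c - b * b              # zero only if the rays are parallel
    t1 = (b * e - c * d) / den
    t2 = (a * e - b * d) / den
    q1 = [p1[i] + t1 * d1[i] for i in range(3)]
    q2 = [p2[i] + t2 * d2[i] for i in range(3)]
    return [(q1[i] + q2[i]) / 2.0 for i in range(3)]
```

Production pipelines do this with calibrated lenses, bundle adjustment over all the cameras at once, and dense matching, but the geometric idea is the same.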
The level of detail is pretty amazing. You can stoop down
and see trash on the floor, or walk over to a wall and see the brush
textures in the graffiti. Realities also captures scene lighting, using
light probes, so reflections change realistically as you move. If you’re one of the lucky few who have a Vive, you can download Realities for free via Steam.
Even more exciting, Realities founders David Finsterwalder and Daniel Sproll hope to democratize the creation of immersive experiences further by enabling others to go out and capture the images that they can then process and produce.
Orah 4i camera: A stitch in real time
A more common way of creating VR experiences is using 360-degree camera rigs. There are plenty of those on the market, ranging from consumer units with a couple of fish-eye lenses to studio-quality rigs like the ones from Samsung and Jaunt VR. However, all of them require a lot of post-processing to accurately stitch the images together. One company, Videostitch, has made a good living providing stitching solutions, but at GTC it announced it has gone one step further, introducing its own turnkey camera-plus-stitcher, the Orah 4i. Available for pre-order for $1,800, it has four high-quality cameras with 90-degree lenses, which feed synchronized data to a small processor box that stitches them and streams a live 360-degree image. In addition to the obvious use for event coverage, the system will also allow excellent real-time previewing for those doing high-end 360-degree production work.
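The first decision a stitcher for a four-lens rig like this has to make, for every output direction, is which 90-degree camera covers it. A simplified sketch (a real stitcher blends the overlapping seams near the 45-degree boundaries rather than picking a single camera; this version just picks the nearest one):

```python
def camera_for_yaw(yaw_degrees):
    """Index (0-3) of the 90-degree camera covering a viewing direction.

    Camera 0 faces yaw 0 and covers roughly -45 to +45 degrees,
    camera 1 covers 45 to 135, and so on around the horizon.
    """
    yaw = yaw_degrees % 360.0            # normalize, handles negatives too
    return int(((yaw + 45.0) % 360.0) // 90.0)
```

Mapping every pixel of the output equirectangular frame through a lookup like this (plus lens-distortion correction and seam blending) at video rates is what the Orah’s processor box does live.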
As an aside, the Orah, like most 360-degree camera rigs,
does not produce a stereo image. And because it only has one camera
facing each direction, you can’t really generate one after the fact.
Higher-end rigs with more cameras, like Jaunt’s, do allow post-processing to generate depth maps and stereo views.
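The reason a second viewpoint matters is the classic stereo relation: depth equals focal length times baseline divided by disparity. With only one camera facing each direction there is no baseline, hence no disparity, hence no depth. A tiny sketch (the numbers in the test are illustrative, not Orah or Jaunt specs):

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Depth of a point from its disparity between two cameras.

    focal_px:     focal length in pixels
    baseline_m:   distance between the two camera centers, in meters
    disparity_px: horizontal shift of the point between the two images
    """
    return focal_px * baseline_m / disparity_px
```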
However, almost all of these units — whether mono or stereo —
are bundled together under the umbrella of “VR.” Similarly, many “VR”
experiences are static 360-degree photos that don’t allow you to move
around the scene (but may or may not be stereo). It’d be great to start
to get some standardization on terminology here. In my case, I’ve
started to use immersive only for experiences that are both stereo and
allow movement within the scene. I think it’d also be helpful if we only
used VR to refer to experiences with a sense of depth (e.g. stereo),
but the term is too popular as a marketing tool for that to be likely to
happen.
VR gets down to business
VR at the show wasn’t all about entertainment. Two of the demos I experienced focused on commercial applications. ZeroLight was showing off an amazingly realistic Audi in VR — part of a customer-focused sales experience that Audi will start rolling out later this year — complete with Vive headsets in dealer showrooms that allow you to configure and virtually experience your car.
WorldViz has extended standard VR functionality to make it ideal for many industrial and commercial applications. It allows users to see each other and work together on a task in a virtual environment — even if they have different brands of headset — and it supports much larger “room-scale” environments. For example, one client created a virtual hospital in a gym, so doctors could test out the work environment before it was built. One of the scenarios I ran through involved working on a helicopter rotor. It really brought home the power of the Vive’s touch controllers, which are a dramatic step forward from trying to use a small remote or gaming controller to manipulate objects in 3D. I hope Oculus gets its version out soon.
Is VR right for you?
VR at GTC was deliberately about nearly every application other than games — after all, we just had a VR frenzy at GDC last month — but for most individuals, VR in 2016 will either be about gaming (I’m a racing sim fan, so my favorites among those available so far are Project Cars and Dirt Rally, but there are tons of others) or involve fiddling around with 360-degree experiences on a Gear VR or Cardboard. For those who do buy a high-end headset for gaming, sure, the other experiences are cool, but I don’t see anyone upgrading their computer and plunking down another $800 just to walk around a virtual Mars for a bit. Beyond that, I’m hoping Google announces something amazing under the Android VR banner in May that can bridge the gap between the current low-end mobile phone offerings and the current crop of gamers-and-hackers-only, PC-driven headsets.