Nvidia GI Hardware Support


I saw the Nvidia moon landing GI demo and the corresponding GI demo on the Nvidia YouTube channel.

They mention this is coming with their new GPU. Is this some hardware-supported feature? Does anyone know how it will integrate with other software, if that is the case?

Also, in the demo with the sphere, they show the voxel data. I'm assuming the pixel shader performs a raycast into the voxel data to find the closest voxel? I've never messed with voxels before, so I'm not sure how that works. Is the data on the GPU a 3D texture, and does it just ray cast from the fragment's world position + normal until it hits a non-blank texel in the 3D texture, and then use that information?

If so, wouldn't that 3D texture be pretty big? For that small scene it looks like it is about 64x64x64, if it is a 3D texture. Maybe I just don't know enough about voxels. I remember the latest Crysis tech PDF mentioning their GI stuff. Were they doing the same type of thing then?
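To check my own understanding, I did the math: a dense 64x64x64 volume at 4 bytes per voxel is only about 1 MB, so maybe size isn't as scary as I thought. Below is a rough CPU-side C++ sketch of the kind of march I'm imagining (names like VoxelGrid and RayMarch are made up for illustration; I have no idea if this is what NVIDIA actually does):

#include <cstdint>
#include <cstdio>
#include <vector>

struct Vec3 { float x, y, z; };

// Dense 64x64x64 grid: 64^3 * 4 bytes is only ~1 MB, so a small scene really is cheap.
struct VoxelGrid {
    static const int dim = 64;
    std::vector<uint32_t> cells;            // 0 = empty, otherwise packed RGBA radiance
    VoxelGrid() : cells(dim * dim * dim, 0) {}

    uint32_t at(int x, int y, int z) const {
        if (x < 0 || y < 0 || z < 0 || x >= dim || y >= dim || z >= dim) return 0;
        return cells[(z * dim + y) * dim + x];
    }
};

// March from 'origin' (e.g. the fragment's world position nudged along its normal)
// in direction 'dir' until a non-empty voxel is found. Returns false on a miss.
bool RayMarch(const VoxelGrid& grid, Vec3 origin, Vec3 dir, uint32_t* hit)
{
    const float step = 0.5f;                // in voxel units
    const int maxSteps = VoxelGrid::dim * 4;
    Vec3 p = origin;
    for (int i = 0; i < maxSteps; ++i) {
        uint32_t v = grid.at((int)p.x, (int)p.y, (int)p.z);
        if (v != 0) { *hit = v; return true; }
        p.x += dir.x * step; p.y += dir.y * step; p.z += dir.z * step;
    }
    return false;
}

int main()
{
    VoxelGrid grid;
    // Plant one "lit" voxel at (40, 32, 32) so the march below has something to hit.
    grid.cells[(32 * 64 + 32) * 64 + 40] = 0xFF8040FFu;

    uint32_t hit = 0;
    if (RayMarch(grid, {32.5f, 32.5f, 32.5f}, {1, 0, 0}, &hit))
        std::printf("hit voxel 0x%08X\n", hit);
    return 0;
}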




They use a sparse volume texture to store a virtually huge texture with a lot of "blank" space inside. It's a DX12 feature.

Actually, they are using Voxel Global Illumination (VXGI), which seems to be very similar to Sparse Voxel Octree Global Illumination (SVOGI), first published by Cyril Crassin of NVIDIA in 2011.

From the NVIDIA page on the technology behind VXGI:

In 2011, NVIDIA engineers developed and demonstrated an innovative new approach to computing a fast, approximate form of global illumination dynamically in real time on the GPU. This new GI technology uses a voxel grid to store scene and lighting information, and a novel voxel cone tracing process to gather indirect lighting from the voxel grid. NVIDIA’s Cyril Crassin describes the technique in his paper on the topic and a video from GTC 2012 is available here. Epic’s ‘Elemental’ Unreal Engine 4 tech demo from 2012 used a similar technique. Since that time, NVIDIA has been working on the next generation of this technology—VXGI—that combines new software algorithms and special hardware acceleration in the Maxwell architecture.

VXGI is not a Direct3D 12 specific feature; in fact, SVOGI was first implemented on OpenGL 4.x and Direct3D 11. Though the technology has (mostly) been available for years, NVIDIA's Maxwell graphics architecture is the first to have "built-in" support for it.
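For intuition, the gather step of Crassin's voxel cone tracing is roughly the following. This is only a CPU-side C++ sketch with made-up names (SampleVoxelMip stands in for a real texture fetch), not NVIDIA's actual implementation: step along each cone, sample the prefiltered (mipmapped) voxel volume at a level that matches the cone's current width, and composite front-to-back.

#include <algorithm>
#include <cmath>
#include <cstdio>

struct Vec3 { float x, y, z; };
struct Vec4 { float r, g, b, a; };      // premultiplied radiance + opacity

// Stub standing in for a trilinear fetch from a prefiltered (mipmapped) voxel volume;
// a real version would sample the 3D texture, with level 0 being the finest voxels.
Vec4 SampleVoxelMip(Vec3 /*worldPos*/, float /*level*/) { return {0.1f, 0.1f, 0.1f, 0.05f}; }

// One cone: step along the axis, widen the footprint, sample a coarser mip as it widens,
// and composite front-to-back until the cone is (mostly) occluded.
Vec4 ConeTrace(Vec3 origin, Vec3 dir, float coneHalfAngle, float maxDist, float voxelSize)
{
    Vec4 acc = {0, 0, 0, 0};
    float dist = voxelSize;                         // start one voxel out to avoid self-hits
    const float tanHalf = std::tan(coneHalfAngle);

    while (dist < maxDist && acc.a < 0.95f) {
        float radius = dist * tanHalf;                                // cone footprint here
        float level  = std::log2(std::max(radius / voxelSize, 1.0f));
        Vec3 p = { origin.x + dir.x * dist,
                   origin.y + dir.y * dist,
                   origin.z + dir.z * dist };
        Vec4 s = SampleVoxelMip(p, level);

        float w = 1.0f - acc.a;                     // what is already hit occludes what's behind
        acc.r += w * s.r;  acc.g += w * s.g;  acc.b += w * s.b;  acc.a += w * s.a;

        dist += std::max(radius, voxelSize * 0.5f);                   // bigger steps as it widens
    }
    return acc;
}

int main()
{
    Vec4 gi = ConeTrace({0, 0, 0}, {0, 1, 0}, 0.5f, 32.0f, 1.0f);
    std::printf("gathered: %.2f %.2f %.2f (occlusion %.2f)\n", gi.r, gi.g, gi.b, gi.a);
    return 0;
}

A handful of such cones (one wide cone per diffuse hemisphere sector, a narrow one for glossy reflection) is what gives the "approximate" indirect lighting the quote above describes.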


1) They mention this is coming with their new GPU. Is this some hardware-supported feature?

2) Does anyone know how it will integrate with other software, if that is the case?

2) nVidia has become a middleware vendor now. Your business people get in touch with their business people, you sign NDAs, then a lot of money and/or contracts change hands and then you get access to a little middleware library...

As for (1) my guess would be what vlj said -- that it's restricted to their new GPUs as these are their first to implement sparse volume textures. Their library will use some vendor hack to make use of this feature in D3D11.

@Josh - yes, SVOGI runs on current hardware, but nVidia's new middleware library (VXGI) apparently doesn't... which is what makes me think they're relying on sparse volume texture support (which is listed as a new D3D12 feature by Microsoft, but is implemented in these new cards via a GL extension) in their algorithm.

@dpadam450 - look up the sparse voxel octree cone tracing global illumination papers to understand the algorithm.
If my guess is correct, VXGI is basically the same, but replaces the octree with a plain 3D grid (or maybe a hierarchy/cascade of grids) and uses a hardware feature where empty parts of the world consume no memory.
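To illustrate what I mean by "empty parts consume no memory", here's a plain C++ software analogy (all names made up): a coarse page table of bricks where a brick is only allocated when something is written into it. ARB_sparse_texture / tiled resources presumably do the same thing at the texture-page level in hardware, so the shader can still sample the volume like an ordinary 3D texture.

#include <array>
#include <cstdint>
#include <cstdio>
#include <memory>

constexpr int kBrick = 16;                        // 16^3 voxels per brick
constexpr int kGrid  = 16;                        // 16^3 bricks -> 256^3 virtual volume

struct Brick { std::array<uint32_t, kBrick * kBrick * kBrick> voxels{}; };

class SparseVolume {
public:
    // Write a voxel, committing (allocating) its brick on first touch.
    void set(int x, int y, int z, uint32_t value) {
        auto& b = bricks_[brickIndex(x, y, z)];
        if (!b) { b = std::make_unique<Brick>(); ++committed_; }
        b->voxels[voxelIndex(x, y, z)] = value;
    }
    // Read a voxel; uncommitted bricks read as empty (0), like an unmapped sparse page.
    uint32_t get(int x, int y, int z) const {
        const auto& b = bricks_[brickIndex(x, y, z)];
        return b ? b->voxels[voxelIndex(x, y, z)] : 0u;
    }
    size_t committedBytes() const { return committed_ * sizeof(Brick); }

private:
    static size_t brickIndex(int x, int y, int z) {
        return ((z / kBrick) * kGrid + (y / kBrick)) * kGrid + (x / kBrick);
    }
    static size_t voxelIndex(int x, int y, int z) {
        return ((z % kBrick) * kBrick + (y % kBrick)) * kBrick + (x % kBrick);
    }
    std::array<std::unique_ptr<Brick>, kGrid * kGrid * kGrid> bricks_{};
    size_t committed_ = 0;
};

int main() {
    SparseVolume vol;
    vol.set(10, 20, 30, 0xFFFFFFFFu);             // only one brick actually allocated
    std::printf("committed: %zu KB (dense 256^3 RGBA8 would be %d MB)\n",
                vol.committedBytes() / 1024, 256 * 256 * 256 * 4 / (1024 * 1024));
    return 0;
}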

Thank god nVidia with their Maxwell GPU will shut up all those conspiracy theorists!

*blindingly obvious sarcasm*

"I AM ZE EMPRAH OPENGL 3.3 THE CORE, I DEMAND FROM THEE ZE SHADERZ AND MATRIXEZ"

My journals: dustArtemis ECS framework and Making a Terrain Generator

Half Life 2 still looks better than this. :)

I'm confused. Nvidia supports GL_ARB_sparse_texture with both Kepler and Fermi, so why is this a Maxwell only feature?


They're probably using GL_EXT_sparse_texture2.

Who would actually use this middleware though? If VXGI only supports the very latest nVidia GPUs, then you still need to implement your own GI system yourself to support older nVidia cards, and ATI cards, and Intel users... Bit of a poisoned chalice really.


From what I understood, it does work on other hardware, just a lot slower (about 3x slower without the new hardware features).

It's not a Maxwell-only feature; it runs on Fermi hardware already. It was previously called GI Works: https://developer.nvidia.com/gi-works

Maxwell got optimized hardware for some of the voxel stuff. In their whitepaper NVIDIA mentions 'conservative rasterization' and 'multi-viewport rendering'. Both are possible on older hardware using geometry shaders, yet we all know how geometry shaders perform.
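For reference, the geometry-shader part of those older voxelizers is mostly just picking, per triangle, the axis with the largest projected area before rasterizing it into the volume, which is roughly the work the new multi-projection path is meant to take over. A CPU-side C++ sketch of that selection (names made up for illustration):

#include <cmath>
#include <cstdio>

struct Vec3 { float x, y, z; };

static Vec3 cross(Vec3 a, Vec3 b) {
    return { a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x };
}
static Vec3 sub(Vec3 a, Vec3 b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }

// Returns 0, 1 or 2: the axis (X, Y or Z) to project along for this triangle,
// i.e. the axis along which the triangle's projected area is largest.
int DominantAxis(Vec3 v0, Vec3 v1, Vec3 v2)
{
    Vec3 n = cross(sub(v1, v0), sub(v2, v0));     // unnormalized face normal
    float ax = std::fabs(n.x), ay = std::fabs(n.y), az = std::fabs(n.z);
    if (ax >= ay && ax >= az) return 0;
    if (ay >= az)             return 1;
    return 2;
}

int main()
{
    // A triangle lying almost flat in the XZ plane projects best along Y.
    int axis = DominantAxis({0, 0, 0}, {1, 0.01f, 0}, {0, 0.01f, 1});
    std::printf("project along axis %d\n", axis);  // prints 1 (Y)
    return 0;
}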

