ATEFred

Member Since 19 Sep 2007
Offline Last Active Today, 04:25 AM

Posts I've Made

In Topic: Visual Studio 2013 graphics debugger vs Nvidia Nsight?

11 August 2014 - 11:04 AM

> My big problem with Nsight is that many of the really great features require you to remote in from another machine that is running your application.
>
> Have you tried RenderDoc? It's free and an amazing graphics debugger.

If you are referring to stepping through shader code, that is no longer the case: you can now do everything on a single (NV) GPU machine.
I think Nsight is far superior to the Visual Studio graphics debugger. If you have an NVIDIA GPU, it is really worth it. It also works with OpenGL, if that is of any use to you.


In Topic: glDrawElements invalid operation

12 June 2014 - 05:21 AM

The latest version of NVIDIA Nsight also supports OpenGL 4.x ( 4.2/4.3? can't remember exactly ). It was really helpful for figuring these kinds of errors out ( a GL event list with highlighting for the dodgy calls ). Highly recommended if you have the required hardware.


In Topic: Geometry Shader for a quad or not ?

18 May 2014 - 08:32 AM

It does depend heavily on hardware. ATI gave a presentation at GDC this year showing that on their hardware, using the GS was somewhat slower than calling DrawInstanced with 4 verts per quad and doing the work in the VS instead, whereas on recent NVIDIA hardware it was pretty much the same.
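The DrawInstanced alternative works because the vertex shader can derive each quad corner from the vertex ID alone, so no geometry shader expansion is needed. Here is a minimal Python sketch of that per-vertex logic (the names `quad_center` and `quad_size` are illustrative, not from any real API; in HLSL the index would come from `SV_VertexID`):

```python
def quad_corner(vertex_id, quad_center, quad_size):
    """Map a vertex ID in [0, 3] to a triangle-strip corner of a quad.

    Strip order: (-1,-1), (+1,-1), (-1,+1), (+1,+1), so two bits of the
    vertex ID select the x and y sign of the corner offset.
    """
    x = -1.0 if (vertex_id & 1) == 0 else 1.0
    y = -1.0 if (vertex_id & 2) == 0 else 1.0
    cx, cy = quad_center
    half_w, half_h = quad_size[0] * 0.5, quad_size[1] * 0.5
    return (cx + x * half_w, cy + y * half_h)

# Example: a 2x2 quad centred at the origin expands to the four corners
corners = [quad_corner(i, (0.0, 0.0), (2.0, 2.0)) for i in range(4)]
# corners == [(-1.0, -1.0), (1.0, -1.0), (-1.0, 1.0), (1.0, 1.0)]
```

With this, each quad becomes one instance of a 4-vertex strip draw, which is the pattern the GDC comparison above was measuring against the GS path.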


In Topic: Kinds of deferred rendering

10 May 2014 - 09:35 AM

You don't need position; you can reconstruct it from the depth and pixel position when you need it (this saves you an additional 32-bit-per-channel render target in your G-buffer).
Other than that, you have a good starting point: normals, material properties like specular power, specular factor / roughness, albedo colour, and depth of course.
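The reconstruction itself is cheap. A minimal Python sketch, assuming a symmetric perspective projection (the `fov_y` / `aspect` parameters and sign conventions are illustrative; real engines often bake the scale factors into a constant instead):

```python
import math

def reconstruct_view_pos(ndc_x, ndc_y, view_z, fov_y, aspect):
    """Rebuild view-space position from NDC xy in [-1, 1] and linear
    view-space depth, for a symmetric perspective frustum.

    At depth view_z the half-height of the frustum is view_z * tan(fov_y/2),
    so scaling the NDC coordinates by that (and aspect for x) recovers
    the view-space x and y.
    """
    tan_half_fov = math.tan(fov_y * 0.5)
    vx = ndc_x * tan_half_fov * aspect * view_z
    vy = ndc_y * tan_half_fov * view_z
    return (vx, vy, view_z)
```

In a shader the NDC xy come from the pixel position and the linear depth from the depth buffer sample, so position never needs to be stored in the G-buffer.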
 

Gamma correction only really makes sense if you support HDR (or you would get quite banded results, I think). The idea is to get your colour texture samples and colour constants into linear space (either doing the conversion manually, or using the API / hardware support for textures and render targets at least), then do lighting, then the HDR resolve and conversion back into gamma space.
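The two conversions bracketing the lighting pass are the standard sRGB transfer functions. A small Python sketch of the piecewise-exact form (this is the IEC sRGB curve, which is what the hardware sRGB texture/render-target formats implement for you):

```python
def srgb_to_linear(c):
    """Decode one sRGB channel value in [0, 1] to linear light."""
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def linear_to_srgb(c):
    """Encode one linear channel value in [0, 1] back to sRGB."""
    return c * 12.92 if c <= 0.0031308 else 1.055 * (c ** (1.0 / 2.4)) - 0.055
```

Sample textures through `srgb_to_linear`, light in linear space, then apply `linear_to_srgb` after the HDR resolve; letting the hardware formats do this is both faster and filter-correct.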


In Topic: Kinds of deferred rendering

08 May 2014 - 02:45 AM

There are commercial game engines that use all of these approaches. Frostbite 3 uses the tiled compute-shader approach; the Stalker engine was the first I know of to use deferred shading; loads of games have used light pre-pass (especially 360 games, to get around EDRAM size limitations / avoid tiling); Volition came up with inferred lighting and used it in their engine; and Forward+ seems to be one of the trendier approaches for future games, though I'm not sure if anything released uses it already.

The main thing is for you to decide what your target platforms are and what kind of scenes you want to render. (visible entity counts, light counts, light types, whether a single lighting model is enough for all your surfaces, etc.) 

For learning purposes though, they are all similar enough that you can just pick a simpler one (deferred shading or light prepass maybe), get it working, and then adapt afterwards to a more complex approach if needed. 


As for docs / presentations, there are plenty around for all of these. I would recommend reading the GPU Pro books; there are plenty of papers on this. DICE has presentations on their website (dice.se) that you can freely access for the tiled approach they used on BF3. The GDC Vault is also a great place to look.

You can also find example implementations around, like here:
https://hieroglyph3.codeplex.com/
(the authors are active on this forum, btw)

