
Vilem Otte

Member Since 11 May 2006
Offline Last Active Today, 04:00 AM

Posts I've Made

In Topic: opengl shadows - what kind should i use?

04 August 2015 - 04:01 AM

While I am a guy who has worked a lot with real-time ray tracing and even real-time path tracing, the ray-tracing approach has one huge problem in general, and that is the acceleration structure for your scene.


Now, typically, for my standard rendering game engine I use a BVH (Bounding Volume Hierarchy) whose leaves contain single meshes (note: a mesh inside my game engine is a single vertex buffer object, optionally with an index buffer object ... packed in a vertex array object, plus additional data). Each frame this BVH is re-built, which allows for dynamic objects. The re-building is done as a full re-build rather than a re-fit (re-fitting generally ends up producing a bad BVH), and it can be done quickly for thousands of objects.
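To make the idea concrete, here is a minimal sketch of such a per-frame rebuild: a median split on the longest axis over per-mesh AABBs, one mesh per leaf. All names and the split strategy are illustrative assumptions, not my engine's actual code.

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

struct AABB {
    float min[3], max[3];
    void grow(const AABB& b) {
        for (int i = 0; i < 3; ++i) {
            min[i] = std::min(min[i], b.min[i]);
            max[i] = std::max(max[i], b.max[i]);
        }
    }
    float center(int axis) const { return 0.5f * (min[axis] + max[axis]); }
};

struct BVHNode {
    AABB bounds;
    int left = -1, right = -1;  // child node indices, -1 for a leaf
    int mesh = -1;              // mesh index stored in leaves
};

// Recursively builds the tree over meshes[first, last) and returns the
// index of the created node inside 'nodes'.
int buildBVH(std::vector<int>& meshes, const std::vector<AABB>& boxes,
             std::vector<BVHNode>& nodes, int first, int last) {
    BVHNode node;
    node.bounds = boxes[meshes[first]];
    for (int i = first + 1; i < last; ++i) node.bounds.grow(boxes[meshes[i]]);

    if (last - first == 1) {  // one mesh per leaf
        node.mesh = meshes[first];
        nodes.push_back(node);
        return (int)nodes.size() - 1;
    }
    // Split on the longest axis of the node bounds.
    int axis = 0;
    float best = 0.0f;
    for (int a = 0; a < 3; ++a) {
        float ext = node.bounds.max[a] - node.bounds.min[a];
        if (ext > best) { best = ext; axis = a; }
    }
    int mid = (first + last) / 2;
    std::nth_element(meshes.begin() + first, meshes.begin() + mid,
                     meshes.begin() + last, [&](int a, int b) {
                         return boxes[a].center(axis) < boxes[b].center(axis);
                     });
    node.left = buildBVH(meshes, boxes, nodes, first, mid);
    node.right = buildBVH(meshes, boxes, nodes, mid, last);
    nodes.push_back(node);
    return (int)nodes.size() - 1;
}
```

Because the whole tree is rebuilt from scratch each frame, there is no re-fit quality degradation; with one mesh per leaf the input count is small enough that this stays cheap.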


For a typical real-time ray tracing (or path tracing) scenario, I build the BVH once (because I need to go as low as a few triangles per leaf) and never re-fit it (dynamic geometry is therefore a problem). Yes, I can re-build on demand, but it takes time; for the Split-BVH algorithm, which gives me a very high-quality tree, it can even take minutes for a more complex scene. You can go for a hierarchical BVH — like in the case above — and have a Split-BVH built for each separate mesh; yes, that would work ... unless you have a lot of moving objects. Also note that a hierarchical BVH decreases performance compared to a full Split-BVH.
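"Tree quality" here is usually measured with the surface area heuristic (SAH), which Split-BVH-style builders minimize; a sketch of the standard cost function, with conventional (assumed, not engine-specific) traversal and intersection costs:

```cpp
#include <cassert>

// SAH cost of splitting a parent node into two children: the cost of one
// traversal step plus the expected intersection work in each child,
// weighted by the probability (surface-area ratio) of a ray hitting it.
float sahCost(float parentArea,
              float leftArea, int leftCount,
              float rightArea, int rightCount,
              float traversalCost = 1.0f, float intersectCost = 2.0f) {
    return traversalCost +
           (leftArea / parentArea) * leftCount * intersectCost +
           (rightArea / parentArea) * rightCount * intersectCost;
}
```

A builder that evaluates many candidate splits (including spatial splits, as Split-BVH does) against this cost produces better trees, which is exactly why it is slow to build and fast to trace.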


So to speak, ray-traced shadows are good as long as your scene is static only. For dynamic scenes... not so much, sadly (but hey — this can change with any new hardware; using lower-quality but fast-to-build BVHs can give you enough speed for dynamic scenes, but you will be quite limited in geometry).


Honestly, as for the link ... fix the cascaded shadow maps ... they look really bad. Honestly, they are comparing one of the worst cascaded-shadow-map implementations to ray-traced shadows (is that intentional?), they only use static geometry in both, and we don't see the geometric complexity of the scene. This way the comparison is not objective and therefore useless!


Anyway, the OP came for opinions, so I can tell you what I'm using inside my current rendering engine: standard cascaded shadow mapping (allowing a choice from a few heuristics for selecting the cascade splits) for directional lights, standard old-fashioned shadow maps for spotlights, and cube maps for point lights. Each single shadow map is rendered into a 'shadow map texture atlas', which limits the memory spent on shadowing.
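One common example of such a cascade-split heuristic is the "practical split scheme", which blends uniform and logarithmic split distances with a weight lambda in [0,1]. A small sketch (not necessarily the exact heuristics my engine offers):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Returns cascades+1 split depths from nearZ to farZ. lambda = 0 gives
// uniform splits, lambda = 1 gives logarithmic splits; values in between
// blend the two.
std::vector<float> cascadeSplits(float nearZ, float farZ, int cascades,
                                 float lambda) {
    std::vector<float> splits(cascades + 1);
    splits[0] = nearZ;
    splits[cascades] = farZ;
    for (int i = 1; i < cascades; ++i) {
        float f = (float)i / cascades;
        float uniformSplit = nearZ + (farZ - nearZ) * f;
        float logSplit = nearZ * std::pow(farZ / nearZ, f);
        splits[i] = lambda * logSplit + (1.0f - lambda) * uniformSplit;
    }
    return splits;
}
```

Logarithmic splits concentrate resolution near the camera (where perspective aliasing is worst), while uniform splits waste less range in the distance; the blend lets you tune between the two per scene.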

In Topic: Water texturing of caustics effects

04 June 2015 - 08:14 AM

Nice article; as I've already done caustics using projections, there is not much new in there for me.


I would like to note a possible extension: making the caustics sharp or blurred at various distances (instead of just attenuating by the distance from the water surface). While this wasn't that important for me in my project (and going the way you showed would also be fine), I actually created a 3D texture to achieve the effect (so with animation it was actually a 4D texture). The performance was okay, and I didn't care about the memory that much (I still had memory budget left).
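The idea is that the third texture coordinate selects between pre-blurred layers of the caustics pattern based on the distance from the water surface. A hypothetical sketch of that lookup logic (layer count and distance range are illustrative parameters, not values from my project):

```cpp
#include <algorithm>
#include <cassert>

// Maps a distance below the water surface to a pair of pre-blurred layers
// and a blend factor between them — effectively a manual trilinear lookup
// along the blur axis of the 3D caustics texture.
void causticsLayer(float distance, float maxDistance, int layers,
                   int& layer0, int& layer1, float& blend) {
    float w = std::clamp(distance / maxDistance, 0.0f, 1.0f) * (layers - 1);
    layer0 = (int)w;                          // sharper layer
    layer1 = std::min(layer0 + 1, layers - 1); // blurrier layer
    blend = w - layer0;
}
```

In a shader you would get this interpolation for free by sampling the 3D texture with linear filtering on the third coordinate; the function above just makes the mapping explicit.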


Note that pre-computing the caustics is possible using path tracing, or with a tool like Caustics Generator (using some sort of hack).

In Topic: win32 cpu render bottleneck

19 May 2015 - 05:06 AM

The only reason one writes a software rasterizer is:
#1: Learning.
#2: Mega-advanced occlusion culling.
#3: It is the year 1993 and hardware-accelerated graphics aren’t mainstream yet.


#4: You're doing it in OpenCL/CUDA in a massively parallel way as a proof of concept for some epic new hi-tech technique (does that count as a software rasterizer?)

In Topic: How to create an MSAA shader? (For deferred rendering)

18 May 2015 - 01:47 AM

Actually, I've implemented full MSAA for the deferred renderer in my game engine. In OpenGL, a simple approach works as follows:


  • Render your scene into the G-Buffer, storing everything in multisampled textures with n samples
  • When doing the shading phase, perform it per sample and then resolve

Your G-Buffer shader will still look the same; you will just write the output into a texture created using glTexImage2DMultisample. You might also want to set multisampled renderbuffer storage for your render buffer. This modification is fairly simple.


In the shading phase you pass in the multisampled texture(s); they are read using sampler2DMS instead of sampler2D, and you have to read samples explicitly using texelFetch, which also takes the index of the sample you want to read.


This should help you start with the basics.




Just a note about one serious problem: when to resolve? The problem is that if you render a multisampled G-Buffer and resolve during shading, then tone mapping (or basically any other post-processing effect) can ruin the anti-aliasing. The solution is quite simple: render the multisampled G-Buffer, shade per sample writing into a multisampled buffer, apply each post-processing effect on a multisampled buffer producing a multisampled buffer (including tone mapping at the end), and only then resolve. This may sound like a bit of a problem, as it needs a lot more computational power and memory.
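The ordering issue can be shown numerically. Assuming a simple Reinhard operator as a stand-in tone mapper and a two-sample edge pixel (one dark sample, one bright HDR sample), resolving before tone mapping gives a different edge color than tone mapping per sample and resolving afterwards:

```cpp
#include <cassert>
#include <cmath>

// Stand-in nonlinear tone mapper (Reinhard).
float reinhard(float hdr) { return hdr / (1.0f + hdr); }

// Resolve first, then tone map — the ordering that ruins anti-aliasing,
// because the averaged HDR value is pushed through the nonlinearity once.
float resolveThenTonemap(float s0, float s1) {
    return reinhard(0.5f * (s0 + s1));
}

// Tone map each sample, then resolve — the fully multisampled pipeline.
float tonemapThenResolve(float s0, float s1) {
    return 0.5f * (reinhard(s0) + reinhard(s1));
}
```

With s0 = 0.1 and s1 = 10.0, the first ordering yields about 0.83 while the second yields about 0.50 — the edge gradient the MSAA resolve was supposed to produce is lost, which is exactly why tone mapping (and other nonlinear post-processing) should run per sample before the resolve.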


A lot of game engines, though, resolve during shading, apply post-processing on the already resolved buffer, and then use the FXAA hack to smooth out edges wherever sharp edges appeared (because it is a lot faster and they think gamers won't notice). I personally don't like FXAA; in my opinion it blurs the whole image and degrades the final image quality. High-quality MSAA (with lots of samples) really looks better.

In Topic: no Vsync? why should you

18 May 2015 - 01:28 AM

Because in many games I never hit monitor refresh rate.

My monitor is 60 hz. My framerate is usually somewhere between 40 and 50. Vsync enabled would reduce me to 30, possibly even less if stop-n-wait happens.

Keep in mind that there are some incredibly low-spec machines, such as Atoms. Video cards with 64-bit DDR3 buses for 30 bucks are still on the shelves, and they do sell. Dudes on a budget running Intel GMA.


I'd also note that low performance doesn't necessarily mean running on a budget. This matters more in other software businesses than in games in general, but I've already met conditions where we used a low-power-consumption machine (without any active cooling!) that was quite expensive ~ and it was a lot slower than same-priced machines with active cooling and over 10 times the power consumption.


Though I'm not really sure whether low-power machines count in PC gaming today (for mobile platforms the situation is different!).