# IvicaKolic

1. ## Avoiding huge far clip

Hodgman is right - you should use an inverted Z buffer. Then you'll be able to render everything from ants to solar systems without any z-fighting. Regular z-buffering should die and never be talked about again.

Just one small correction: FAR has to be set to 0 (not 0.1), because floating point numbers have the greatest precision around 0 - with an inverted z-buffer you practically get logarithmic precision, because both the exponent and the mantissa of the floating point number get used. On the other hand, if you use a standard z-buffer, great distances converge to 1, which has very low precision (only a few bits of mantissa are used for an enormous range - hence the z-fighting).

Of course, if you set FAR to 0, the function that calculates the projection matrix might crash (because of division by 0), so it is best to set this up manually. In fact, I recommend not even using 4x4 matrices - they occupy too much space and they are not worth the trouble. 4x3 matrices are good enough for everything but projection. Instead of using a projection matrix, just calculate the coordinates like this: w=z; z=Znear; x*=fovX; y*=fovY. Notice that Zfar is not used at all (and x, y, z are in camera space).

After interpolation, screen-space Z will be Znear/z. At Znear distances it will be 1; at greater distances it will converge to 0, which will give you great precision.
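A minimal sketch of that projection step, assuming camera-space input; `fovX`, `fovY` and `zNear` are hypothetical scale parameters, not names from the post:

```cpp
#include <cassert>
#include <cmath>

// Sketch of a reversed-Z projection as described above. Zfar never appears:
// clip-space z is the constant zNear and w is the camera-space depth, so
// after the perspective divide the screen-space depth is zNear / z --
// exactly 1.0 at the near plane, converging toward 0 (where floats are
// densest) at great distances.
struct Clip { float x, y, z, w; };

Clip ProjectReversedZ(float x, float y, float z,
                      float fovX, float fovY, float zNear)
{
    return Clip{ x * fovX, y * fovY, zNear, z };
}

// Screen-space depth after the perspective divide.
float ScreenDepth(const Clip& c) { return c.z / c.w; }
```

At z = zNear the depth is exactly 1; a point a hundred thousand units away lands very close to 0, where the float format still has plenty of precision.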
2. ## OpenGL How to build a "renderer"

So yeah, renderers are pretty much always done through an abstract interface (as demonstrated by Hodgman and Tangletail).

This is the abstract graphics interface in my case (used by the main engine either through scripts or hardcoded):

SetRenderMode("bla bla"); // This will set render targets and prepare everything (and remove previous render targets and bind them to shader texture variables for future use)

// The main engine will loop through visible objects and call these two commands (something similar is done for the light/shadow pass):

SetRenderParam(renderParamID, value); // camData or objPos or object bones or some custom constant or whatever (done by the object script).

RenderMesh(meshID, availableDesiredLODDistance); // This will add the mesh to the render queue and calculate render stuff based on material/mesh tokens

That is it - it is very, very abstract. I don't think it is possible to make it more abstract than that. There are also some callbacks that the graphics engine will make to calculate some required rendering parameter that was not set, but that's another story...

NOTE: Sometimes I assign a material to some geometry that is not compatible with it (defined by meshID). In that case the graphics engine will issue a warning that it doesn't know how to render that particular combination of render_mode/mesh/material/geometry tokens. Then I have to add an exception rule to the INI file or add a shader for that particular combination.
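For illustration, the three commands above could be sketched as a C++ interface like this; only `SetRenderMode`/`SetRenderParam`/`RenderMesh` are from the post, everything else (types, the recording class) is invented:

```cpp
#include <cassert>
#include <string>
#include <vector>

// Hypothetical sketch of the three-command abstract graphics interface.
struct QueuedMesh { int meshID; float lodDistance; };

class IGraphicsEngine {
public:
    virtual ~IGraphicsEngine() = default;
    virtual void SetRenderMode(const char* mode) = 0;                // bind targets, prep the pass
    virtual void SetRenderParam(int paramID, const void* value) = 0; // camera, bones, custom constants
    virtual void RenderMesh(int meshID, float lodDistance) = 0;      // enqueue for this pass
};

// Toy implementation that just records what the main engine asked for.
class RecordingEngine : public IGraphicsEngine {
public:
    std::string mode;
    std::vector<QueuedMesh> queue;
    void SetRenderMode(const char* m) override { mode = m; queue.clear(); }
    void SetRenderParam(int, const void*) override {}
    void RenderMesh(int id, float lod) override { queue.push_back({id, lod}); }
};
```

The main engine only ever talks through the base class, so the concrete renderer behind it can change freely.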
3. ## OpenGL How to build a "renderer"

Essentially something like this. My current design separates the engine's data from its own. When the engine calls the renderer, the rendering logic does not need to know about the particulars of the actual engine's data. Instead, its API receives copies and transforms the data into the state that is needed.

When it comes time to render, the culling system will work independently of the current game state; it'll display a latent frame instead, and cull data based on what it has. This also means that the renderer uses its own octree for its own processes - primarily culling, but also as a way of determining some broader spectrum of LOD.

The engine's logic has its own octree for logical processes: ray casting, scripts that affect certain regions of land, navmesh collisions, etc.

For occlusion culling I'm using view frustum culling + the Hi-Z algorithm (actually it's Lo-Z because of the inverted z-buffer).

The main engine script does this:

SetRenderMode("PrepForHiZ"); // This will invoke setting of render targets, rendering quads, and after rendering is done, setting shader textures of those render targets, etc...

RenderOcclussionSpheres(); // this is a main engine command - it keeps track of current objects (each has its own id and pos/radius)

void * flags = LockGraphicBuffer("occlusion_test_render_target", (X + 10)%10); // I'm using a 10-frame delay (and 10 occlusion buffers)

SetOcclussionFlagsForObjects(flags);

The graphics engine doesn't even know that it did an occlusion calculation. It doesn't understand what the data sent to it means - only how to send it to the graphics card. That way it can be a forward renderer, deferred renderer, forward+ renderer, ray tracer, <some new renderer that hasn't been invented yet>. It doesn't care what the data is - only how to render it efficiently.
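The delayed-readback trick above can be sketched as a ring of flag buffers; the ring size (10) matches the post, but the index helpers and names are my own illustration:

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Sketch of N-frame-delayed occlusion readback: results written in frame F
// are only read back roughly kBufferCount frames later, so the CPU never
// stalls waiting for the GPU to finish the occlusion pass.
constexpr int kBufferCount = 10;

struct OcclusionRing {
    std::vector<uint8_t> buffers[kBufferCount]; // one flags buffer per in-flight frame

    // Slot written this frame.
    static int WriteIndex(int frame) { return frame % kBufferCount; }
    // Oldest slot -- the one about to be overwritten -- is safe to read.
    static int ReadIndex(int frame)  { return (frame + 1) % kBufferCount; }
};
```

Reading the oldest slot means the visibility flags are always a few frames stale, which is exactly the "latent frame" behavior described above.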
As for the token processor:

Rendering_technique_name = <Render Mode Name> + <Remaining Mesh Tokens> + <Remaining Material Tokens> + <Remaining Vertex Tokens>

SAMPLE: When rendering depth, there is no need to have tokens that have something to do with color. Then, if all texture tokens are removed from the material, the vertex token that represents texture coordinates will be removed as well (if there is no alpha mask texture). In the end you are left with only a few tokens. The vertex token that represents NORMAL will probably also be removed (since it isn't even registered for the RenderDepth render mode).

Once you know the rendering technique name, you know which render parameters need to be sent to the graphics card - and RenderParamTracker does that efficiently (without repetition).

The point is: there is no hardcoding of anything graphics-wise (shaders + graphics data come with the game files - not the engine). You write the graphics engine once and then you don't touch it for years. If some new way of rendering or a new post-process effect gets published, you don't change the engine - you just put the shader into the first game package and add a few lines to the INI file. Maybe add a few SetRenderMode("blablabla") calls to the rendering script for some new post-processes.
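The name composition above could be sketched like this; the token strings and the "keep only registered tokens" filtering rule are invented for illustration:

```cpp
#include <algorithm>
#include <cassert>
#include <string>
#include <vector>

// Sketch of building a rendering technique name from surviving tokens:
// a render mode keeps only the tokens registered for it, and the result
// keys the shader/parameter lookup.
std::string TechniqueName(const std::string& renderMode,
                          const std::vector<std::string>& tokens,
                          const std::vector<std::string>& registered)
{
    std::string name = renderMode;
    for (const auto& t : tokens)
        if (std::find(registered.begin(), registered.end(), t) != registered.end())
            name += "_" + t; // token survives the mode's filter
    return name;
}
```

With a depth-only mode registering no color or normal tokens, a mesh carrying SKINNED + DIFFUSE_TEX + NORMAL + ALPHA_MASK collapses to just the skinning and alpha-mask tokens.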
4. ## OpenGL How to build a "renderer"

I prefer a generic design where the graphics engine only consists of a RenderQue, RenderParamTracker and TokenProcessor (meshes, materials and vertex definitions have tokens). The INI file has definitions of render modes, render targets, render params, token rules, etc... It's tiny and efficient, and you never have to touch the engine code (if implemented properly).
5. ## Source code for Camera Space Shadow Mapping (CSSM)

Yep, CSSM has some issues - probably due to the way projections are packed into textures - which causes some discontinuities. Maybe you can change the focus range to some smaller value - that should fix it.

I'm hoping that most of these issues will be solved with the depth histogram version, but I've yet to figure out exactly how to apply it to all possible cases. A directional light coming from the side (displayed in the image above) is the most basic case and relatively easy to figure out, but others might be tricky.

NOTE: Some people have been complaining about some weird things in my math library. That is exactly why I've omitted it from the code (some parts are ugly), but if someone needs it I'll share it. Just let me know. Things like these are sometimes unclear: does matrix.Inverse() invert "matrix", or does it return a new matrix that is the inverse of "matrix"? The most correct answer to questions like these is: if it returns a reference, then "matrix" is changed; if it returns a new matrix, then it isn't. So, use IntelliSense :) And/or check for the "const" keyword...
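The const convention mentioned above, shown on a toy 2x2 matrix (the class and both method names are illustrative, not the actual library):

```cpp
#include <cassert>
#include <cmath>

// Invert() mutates and returns *this by reference; Inverse() is const and
// returns a fresh matrix, leaving the original untouched -- so the "const"
// keyword alone tells you which behavior you are getting.
struct Mat2 {
    float a, b, c, d;

    Mat2& Invert() {                       // in-place: returns a reference
        float det = a * d - b * c;
        float ia = d / det, ib = -b / det, ic = -c / det, id = a / det;
        a = ia; b = ib; c = ic; d = id;
        return *this;
    }
    Mat2 Inverse() const {                 // const: returns a new matrix
        Mat2 m = *this;
        return m.Invert();
    }
};
```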
6. ## Source code for Camera Space Shadow Mapping (CSSM)

Hi everyone, here is the source code for the original CSSM shadow mapping API: https://drive.google.com/file/d/0Bzrpqv04ufr9d0dZX3hTdl9hbVk/view?usp=sharing It is missing the math library, but things are pretty clear (left-handed system + column-major matrices, and the order of multiplication is from left to right). Sorry about some of the comments being in Croatian.

The paper is here: https://bib.irb.hr/datoteka/570987.12_CSSM.pdf Sample projects are here (DX9 and DX10): http://free-zg.t-com.hr/cssm/

I still don't have time to start working on "CSSM with Depth Histograms" (image attached), but maybe this will kick-start something. I should probably make some kind of GitHub project with the original CSSM and then slowly start updating it to the depth histogram version (with the help of the community), but I'm currently too busy with my current project. So, if someone is willing to help out and compose a starter project, that would be great... NOTE: I would prefer not to use my old sample projects because they are MFC + DX Effects (which are not that commonly used any more).

Cheers, -- Ivica Kolić
7. ## Using a 2D Texture for Point-Light Shadows

I once used projections onto the 4 sides of a tetrahedron for my shadow mapping needs (one shadow map for all cases, for simplicity). This, of course, requires 4 projection matrices that target specific parts of the texture + clipping planes. PCF works if you use rays (and dynamically determine which triangle is affected). Hope this helps.
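One way to sketch the face-selection part of that idea: pick the tetrahedron face whose axis best matches the light-to-point direction. The axes below are the vertex directions of a regular tetrahedron, chosen for illustration; the actual layout in the post may differ:

```cpp
#include <cassert>

// Pick which of the 4 tetrahedron projections (and thus which region of
// the single shadow map texture) a direction from the point light maps to:
// the face axis with the largest dot product wins.
struct Vec3 { float x, y, z; };

inline float Dot(const Vec3& a, const Vec3& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

int TetrahedronFace(const Vec3& dir) {
    static const Vec3 axes[4] = {
        { 1,  1,  1}, { 1, -1, -1}, {-1,  1, -1}, {-1, -1,  1}
    };
    int best = 0;
    for (int i = 1; i < 4; ++i)
        if (Dot(dir, axes[i]) > Dot(dir, axes[best])) best = i;
    return best; // index of the projection matrix / texture region to use
}
```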
8. ## 3D sound engine for a super realistic sound sources localization

Maybe wait for Oculus Sound SDK?
9. ## What do you use for handling sounds?

Oculus will release a new sound SDK really soon... Maybe you should wait for that. Opinions?
10. ## Visual Studio 2010 Express

Hm, I can't help you with the problem, but I would suggest that you switch to Visual Studio 2013 Community Edition. It has all the features of the Pro version (graphics debugging, MFC, etc.) and you can also use it for free as long as you stay a "small" developer.
11. ## ShadowMapping in the year 2015?

CSSM is not used in commercial engines as far as I know. Having said that, Cevat Yerli did offer me a job at Crytek at the time, but the recruiters somehow figured that I would be ideal for working on the GUI for the sandbox editor ("not being experienced and all" was the reason they specified). I said: "Thanks, but no thanks".

CSSM with Depth Histograms is a new beast entirely. I think it's the only algorithm that (almost) achieves the ultimate goal of shadow mapping: 1-to-1 texel-to-pixel matching.

The great thing is: if you have geometry on only 50% of the scene (the rest is sky), then you can use a 50% smaller shadow map than the scene and still get a near-perfect 1-to-1 shadow map.

I really should write a new API. This time I should include support for the Oculus Rift (dual cameras but only one shadow map) and nicely put everything into one function so that it only takes one line of code to implement (for all imaginable cases).

Actually, Oculus could indirectly benefit from this algorithm, since they want to achieve a goal of 90+ fps at high resolutions, so I'm guessing 5-times-faster rendering of shadows would help a lot (I'm guessing this number, but since there is only one small shadow map, it is not hard to compare it with the other cascaded approaches being used). I don't really have time right now to do it, but if there is interest, I'll make time.

An explanation of CSSM with Depth Histograms is in the attached picture.
12. ## ShadowMapping in the year 2015?

OK. Basically, the difference between regular CSSM and CSSM with Depth Histograms is that every few frames a depth histogram of the scene is obtained (after a few frames of delay).

Now, if you look at the depth histogram, wherever you see a bump it means that there is some geometry at that distance that could have shadow. This helps with the problem of the "space between the airplane and the ground" - we can simply ignore that empty space.

Then you integrate the depth histogram by view angle, which gives you a curved line starting from the bottom of the view frustum to the top of the view frustum (first read the CSSM paper to understand this). Then you simplify that line into 4-8 segments that will represent the new projection planes of standard CSSM. This time you don't have to specify a focus distance. Now it works even with infinitely large scenes that don't have a far clipping plane, and quality is always maxed because shadow map texels are only used where they are needed (and distributed optimally).

You can't get more quality than that - it's an ideal solution for any kind of scene, and it works great even with tiny shadow maps. So, not only is shadow mapping focused only on the visible space (no 50% wastage like with other algorithms - the old CSSM introduced that), but now it is focused only on the parts of the scene that can have shadow.

Original CSSM paper: https://bib.irb.hr/datoteka/570987.12_CSSM.pdf Web page with the old CSSM demo application: http://free-zg.t-com.hr/cssm/

Let me know if you want me to sketch this out in Paint.

Edit: I've changed "plane" to "airplane" in the "airplane vs ground" example so that people don't get confused, since the term (projection) planes is used a lot.
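The "bump" idea can be sketched as a scan over the histogram that collects occupied depth ranges; bin layout and threshold are invented for illustration:

```cpp
#include <cassert>
#include <utility>
#include <vector>

// Scan a depth histogram and collect the occupied ranges (runs of
// non-empty bins). Empty gaps -- like the space between the airplane and
// the ground -- get no shadow-map coverage at all.
std::vector<std::pair<int, int>> OccupiedRanges(const std::vector<int>& histogram,
                                                int threshold = 0)
{
    std::vector<std::pair<int, int>> ranges;
    int start = -1;
    for (int i = 0; i < (int)histogram.size(); ++i) {
        if (histogram[i] > threshold) {
            if (start < 0) start = i;            // bump begins
        } else if (start >= 0) {
            ranges.push_back({start, i - 1});    // bump ends
            start = -1;
        }
    }
    if (start >= 0) ranges.push_back({start, (int)histogram.size() - 1});
    return ranges;
}
```

For an "airplane near the camera, empty air, ground far away" histogram this yields two ranges, and only those two receive projection-plane segments.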
13. ## ShadowMapping in the year 2015?

Here are my two cents: I'm using the Camera Space Shadow Mapping (CSSM) algorithm, but now with DEPTH HISTOGRAMS to get optimal plane placements (every few frames). Before, I had to specify a focus range (like 15-25 m), but now that is handled by the depth histograms.

In the past (as Hodgman mentioned) there was annoying edge flickering (because it was a perspective-warping approach), but now, with new optimizations, this problem is gone.

There were also problems with scenes like "airplane and the ground", where a lot of shadow map space was wasted on the empty space between the airplane and the ground (if you are in the airplane looking down and there is some shadow on the wing but most of it is on the ground far away). Not anymore.

IN SHORT:

- It is the fastest (considering the quality);
- It is the most precise;
- It uses the smallest shadow map of all the algorithms and achieves the greatest quality;
- It works for every imaginable case (big or small lights - including omni);
- It's a "one line of code" implementation (not really, but one line of code actually calculates everything; the rest is just passing parameters to the shaders and rendering the scene);
- No one knows about it (I suck at PR, and every few years I write a shameless advertising post like this one).

I should probably write a follow-up paper explaining depth histograms and how they are used to squeeze every little bit of performance and quality out of shadow mapping, but who has the time these days (plus, convincing reviewers that you've actually created something new is extremely tiresome - they are not game developers - they don't get it - well, some of them do).
14. ## ssao self-occlusion problems

Actually, it would be useful to compare the normals reconstructed from depth with the normal G-buffer. (Also, does anyone know why I can't edit my posts?)
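A CPU-side sketch of that reconstruction, simplified so that a pixel's position is just (x, y, depth); a real version would unproject to view space first, and all names here are illustrative:

```cpp
#include <cassert>
#include <cmath>

// Reconstruct a surface normal from a pixel's depth and its +x / +y
// neighbors: build the two position deltas and take their cross product.
struct V3 { float x, y, z; };

V3 NormalFromDepth(float d, float dRight, float dDown) {
    V3 dx{1, 0, dRight - d};          // delta toward the +x neighbor
    V3 dy{0, 1, dDown  - d};          // delta toward the +y neighbor
    V3 n{dx.y * dy.z - dx.z * dy.y,   // cross(dx, dy)
         dx.z * dy.x - dx.x * dy.z,
         dx.x * dy.y - dx.y * dy.x};
    float len = std::sqrt(n.x * n.x + n.y * n.y + n.z * n.z);
    return V3{n.x / len, n.y / len, n.z / len};
}
```

Comparing this per-pixel result against the stored G-buffer normal quickly shows whether the SSAO self-occlusion comes from the normals or from the sampling.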
15. ## ssao self-occlusion problems

Hm, these normals are already in world space (no need to multiply them with inv_view). Also, vertex normals are not shown (which was what was requested).