
Digitalfragment

Member Since 29 Aug 2002

#5291074 Scalable Ambient Obscurance and Other algorithms

Posted by Digitalfragment on 11 May 2016 - 12:14 AM

BTW how do you store the result of the baking calculation? The object texture's UVs can't be reused, since an object can appear at different locations with different lighting conditions, and some texels can be used for several surfaces.
On the other hand, flattening polygons onto a 2D plane is tough due to discontinuities at polygon edges, having to optimize texture space while keeping surface area equivalent...

Typically this is done with a separate UV set for lightmap UVs (to support a different texel density, and to avoid issues where UV regions are reused for texturing), plus a scale/bias used to look the individual object up within a larger atlas, passed in as either a per-instance attribute or a shader constant.
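As a rough illustration of that lookup, here's a minimal C++ sketch (the struct and function names are mine, and in a real renderer the final multiply-add would run in the shader):

#include <cstdio>

struct Float2 { float x, y; };
struct Float4 { float x, y, z, w; }; // xy = scale, zw = offset into the atlas

// Per-instance scale/bias describing where an object's lightmap chart sits in the atlas,
// e.g. a 256x256 region at (512, 0) of a 2048x2048 atlas.
Float4 MakeAtlasScaleBias(int regionX, int regionY, int regionSize, int atlasSize)
{
    float s = float(regionSize) / float(atlasSize);
    return { s, s, float(regionX) / float(atlasSize), float(regionY) / float(atlasSize) };
}

// Remap an object's lightmap UV (0..1 within its own chart) into atlas space.
Float2 ToAtlasUV(Float2 lightmapUV, Float4 scaleBias)
{
    return { lightmapUV.x * scaleBias.x + scaleBias.z,
             lightmapUV.y * scaleBias.y + scaleBias.w };
}

int main()
{
    Float4 sb = MakeAtlasScaleBias(512, 0, 256, 2048);
    Float2 uv = ToAtlasUV({ 0.5f, 0.5f }, sb);
    std::printf("atlas uv: %f %f\n", uv.x, uv.y); // 0.3125, 0.0625
}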




#5284124 Struggling with Cascaded Shadow Mapping

Posted by Digitalfragment on 29 March 2016 - 04:09 PM


Not sure if that's entirely how it's supposed to work, but at least it looks sensible.


That's pretty much it. There's a little bit of work in tuning it to get rid of artifacts between cascades and to stabilize them, but otherwise the technique is as simple as you described. If you have framerate issues with recalculating every cascade every frame, you can always stagger updates of the medium/far cascades too.
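For instance, a trivial round-robin stagger could look like this (a hedged sketch; UpdateCascade and the refresh intervals are placeholders, not anything specific):

#include <cstdio>

// Placeholder for re-fitting and re-rendering a single cascade's shadow map.
void UpdateCascade(unsigned index) { std::printf("updating cascade %u\n", index); }

// Update the near cascade every frame, the middle one every 2nd frame,
// and the far one every 4th frame.
void UpdateCascades(unsigned frameIndex)
{
    const unsigned updateInterval[3] = { 1, 2, 4 };
    for (unsigned i = 0; i < 3; ++i)
        if (frameIndex % updateInterval[i] == 0)
            UpdateCascade(i);
}

int main()
{
    for (unsigned frame = 0; frame < 8; ++frame)
        UpdateCascades(frame);
}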

 

From a tuning point of view it really depends on the type of scene you are rendering; for example, distances aren't a good metric if your game design suddenly calls for a very tight FOV for zooming (as I discovered the hard way when our artists were given complete control over cutscenes).

Depending on how large your scene is, it might be worthwhile to use a different shadowing technique for the furthest detail, then composite that in with the cascaded shadows. For example, calculating ESM for a mountainous terrain, while using CSM with PCF for all of the objects on the terrain.




#5283987 Struggling with Cascaded Shadow Mapping

Posted by Digitalfragment on 28 March 2016 - 09:56 PM

Wouldn't that just be the visual side effect caused by the dimensions of the camera frustum changing relative to the shadow direction?
If you are facing toward/away from the light, the frustum will yield a smaller min/max. If you are facing perpendicular, it will be much larger.
 




#5283961 Struggling with Cascaded Shadow Mapping

Posted by Digitalfragment on 28 March 2016 - 06:49 PM

The point behind cascading shadows is to have multiple cascades, but you are on the right track with your math. The next bit is to split the camera frustum along its z-depth. This yields a smaller frustum closer to the camera, and a larger frustum further away. Render out a shadow map for each cascade, and when sampling, take the sample from the first cascade that the pixel falls inside of.
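A minimal sketch of the split and selection logic, assuming view-space depth is available when shading (the mix of logarithmic and uniform splits is a common choice, not something prescribed here):

#include <cmath>
#include <cstdio>

// Compute cascade split distances between nearZ and farZ, blending logarithmic
// and uniform splits (the "practical split scheme").
void ComputeSplits(float nearZ, float farZ, int numCascades, float lambda, float* outSplits)
{
    for (int i = 1; i <= numCascades; ++i)
    {
        float t = float(i) / float(numCascades);
        float logSplit = nearZ * std::pow(farZ / nearZ, t);
        float uniSplit = nearZ + (farZ - nearZ) * t;
        outSplits[i - 1] = lambda * logSplit + (1.0f - lambda) * uniSplit;
    }
}

// When shading, pick the first cascade whose far split contains the pixel's view-space depth.
int SelectCascade(float viewDepth, const float* splits, int numCascades)
{
    for (int i = 0; i < numCascades; ++i)
        if (viewDepth <= splits[i])
            return i;
    return numCascades - 1;
}

int main()
{
    float splits[4];
    ComputeSplits(0.1f, 500.0f, 4, 0.7f, splits);
    for (int i = 0; i < 4; ++i)
        std::printf("cascade %d far split: %f\n", i, splits[i]);
    std::printf("depth 30 -> cascade %d\n", SelectCascade(30.0f, splits, 4));
}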

There was a presentation by Crytek on Ryse that's worth checking out.

Edit:

Regarding the shimmering: you want to round the mins/maxes for stabilization based on the resolution of the shadow map - basically, only ever move the shadow projection in multiples of the size of a texel.
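A sketch of that snapping, assuming axis-aligned min/max extents in light space; it also assumes the extent itself is kept constant from frame to frame (e.g. by fitting a bounding sphere), otherwise the texel size still changes:

#include <cmath>

// Snap the light-space ortho bounds so the projection only ever moves in whole
// shadow-map texel increments, which stops edge shimmer as the camera moves.
void SnapToTexels(float& minX, float& maxX, float& minY, float& maxY, int shadowMapSize)
{
    float width  = maxX - minX;
    float height = maxY - minY;
    float texelX = width  / float(shadowMapSize);
    float texelY = height / float(shadowMapSize);

    minX = std::floor(minX / texelX) * texelX;
    minY = std::floor(minY / texelY) * texelY;
    maxX = minX + width;  // keep the extent unchanged so texel size stays constant
    maxY = minY + height;
}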




#5275830 How to write / setup shaders that support wrinkle maps

Posted by Digitalfragment on 15 February 2016 - 06:56 PM


I read some questions by other people that had trouble getting enough texture samplers for the different normal maps needed. I guess this was the reason why, as this approach sounds rather heavy on the number of textures needed. Did you just have a few areas that needed wrinkle maps applied? Did you run into problems with the texture sampler count?

This was back in the DX9 era, where sampler count was even more of a problem, so we had a few hacks around this. One was to atlas the wrinkle maps, so that the UV space for the wrinkles was different from that of the base normal map. In the DX10+ era we'd probably have used texture arrays instead. We packed some extra information into the UV channels to indicate which body part each part of the mesh belonged to - pretty hacky, but it didn't add to the size of the mesh data, which was nice.

Another solution is to subdivide the mesh into multiple draw calls.




#5261062 GL vs. D3D Texture/Viewport/Scissors Coordinates

Posted by Digitalfragment on 08 November 2015 - 08:48 PM

Well, for now this is what I did:

 

Since glTexImage2D thinks that I'm passing the data in bottom-to-top order, I've decided to try to make it "work" the D3D way.

 

First: in order to match the dynamically rendered textures with the loaded ones, I've added a "flip-Y" version of my projection matrices. This, however, changes the winding of the front-facing triangles. That was easily solved in my case because I have one big structure that describes a single draw call, and I can easily swap my triangle winding implicitly.

 

Second: flipping the Y component will make the things that are rendered directly to framebuffer 0 (the screen) flipped, so the "flip Y" logic shouldn't be applied on framebuffer 0. This is currently solved by hand, because in my "DrawCall structure" I really don't know where the uniform for the projection matrix is located, so I leave this to the user.

 

 

So my solution looks like this:


 

	static SELF_TYPE GetPerspectiveFovRH(
		const DATA_TYPE& fov,
		const DATA_TYPE& aspect,
		const DATA_TYPE& nearZ,
		const DATA_TYPE& farZ,
		const int GL_Flip_Y = 0); // Does nothing when rendering with D3D; for OpenGL it adds a -1 scale on the Y axis.


struct DrawCall // pseudo code...
{
 Buffer* vbuffer;
 Buffer* ibuffer;
 Program* program;
 Buffer** cbuffer;
 // other stuff...
 GLenum frontFace;
 CullMode cullMode;
 GLint framebuffer;
 int framebufferHeight; // a cached value used for flipping the scissor/viewport rects
 bool GL_Flip_Y_Mode;   // does nothing under D3D; for OpenGL this flips the frontFace value and is also used to flip the scissor/viewport rects
};

And basically "GL_Flip_Y_Mode" is true when the framebuffer != 0 (this is done again by hand, in order not to lose control).

Things are a bit more complicated because I'm trying to implement a "general purpose" solution; however, having exact requirements could lead to a lot of simplifications.

I've seen renderers designed this way, and they often lead to a lot of headaches, especially when you start porting to different platforms. At one point we had our post-processing pipeline pointlessly flipping a render target around 15 times, due to "if (isRenderTarget) flip();".

The (IMO) cleaner option is to flip all texture data at data-build time to match the render-target orientation of the target platform. This may require the texcoords in your geometry to also be vertically flipped - which may or may not be needed anyway, because different art packages may or may not follow any given texture orientation.

 

But once you have that data pipeline there, it's rather nice having everything consistent at runtime!
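For reference, the build-time flip itself is just a row reversal over each image (a minimal sketch; it assumes tightly packed pixels, i.e. row pitch = width * bytesPerPixel):

#include <algorithm>
#include <cstdint>
#include <vector>

// Flip an image vertically in place so loaded textures match the
// render-target orientation of the target API.
void FlipImageVertically(uint8_t* pixels, int width, int height, int bytesPerPixel)
{
    const int rowSize = width * bytesPerPixel;
    std::vector<uint8_t> temp(rowSize);
    for (int y = 0; y < height / 2; ++y)
    {
        uint8_t* top    = pixels + y * rowSize;
        uint8_t* bottom = pixels + (height - 1 - y) * rowSize;
        std::copy(bottom, bottom + rowSize, temp.data());
        std::copy(top, top + rowSize, bottom);
        std::copy(temp.data(), temp.data() + rowSize, top);
    }
}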




#5258614 Rendering Quake 3 BSP in modern OpenGL

Posted by Digitalfragment on 22 October 2015 - 09:24 PM

If you rely on frustum culling, then you still submit everything to the GPU that is hidden behind walls and closed doors. Even though you might not pay a pixel shader cost, you still pay for the cost of the driver calls on the CPU, any state switching and vertex shader costs.

 

Modern games can still use precalculated vis data, though some are moving to dynamically calculated vis data such as software occlusion buffers due to the need for dynamic scenes.




#5258584 Fitting directional light in view frustum

Posted by Digitalfragment on 22 October 2015 - 04:57 PM

calculate your view-projection matrix for the main camera, and invert it
transform the 8 corners of the NDC cube through the inverted view-projection matrix (with -1 for near-Z if OpenGL, instead of 0, IIRC)
perspective-divide the 8 resulting coordinates

that gives you the 8 corners of your view frustum in worldspace

take the average of all 8, that gives you the midpoint of the frustum

 

generate an up & right vector perpendicular to your light direction

create a view matrix centred on the origin, using your light direction and these up/right vectors

 

the projection matrix is an ortho matrix where the left is the minimum of the dot products of all 8 corners with the right vector, the right is the maximum of those dot products, the top is the maximum of the dot products of all 8 corners with the up vector, and you should be able to guess how to derive the other 3 values

That gives the tightest standard projection to fit the entire frustum.

 

You generally want to pull the near plane of the ortho matrix back to fit all shadow casters in the scene or "pancake" them at the near plane in the vertex shader.

 

The shadowmap sampling matrix is the view-projection used for the shadow rendering, but with a scale and bias applied afterwards, as transforming by the matrix will give values between -1 and 1, whereas the texture sampling will need 0 to 1.

 

 

For cascaded shadows, the process is the same as above, but you split the camera frustum up into multiple z slices, which gives tighter projections closer to the camera.
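A minimal C++ sketch of the fitting steps above for a single frustum (for cascades, run the same code per z-slice). The hand-rolled Vec3, the world-up choice, and measuring near/far along the light direction are assumptions of the sketch; matrix construction and handedness conventions are left out:

#include <algorithm>
#include <cmath>

struct Vec3 { float x, y, z; };

static float Dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3 Cross(Vec3 a, Vec3 b)
{
    return { a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x };
}
static Vec3 Normalize(Vec3 v)
{
    float len = std::sqrt(Dot(v, v));
    return { v.x / len, v.y / len, v.z / len };
}

// Given the 8 world-space frustum corners (already recovered by transforming the NDC
// cube through the inverse view-projection and perspective-dividing) and the light
// direction, compute the extents of the light's orthographic projection.
void FitOrthoToFrustum(const Vec3 corners[8], Vec3 lightDir,
                       float& left, float& right, float& bottom, float& top,
                       float& nearZ, float& farZ)
{
    // Build right/up vectors perpendicular to the light direction.
    Vec3 worldUp = { 0.0f, 1.0f, 0.0f }; // pick another axis if lightDir is nearly vertical
    Vec3 rightV  = Normalize(Cross(worldUp, lightDir));
    Vec3 upV     = Cross(lightDir, rightV);

    left = bottom = nearZ = 1e30f;
    right = top = farZ = -1e30f;
    for (int i = 0; i < 8; ++i)
    {
        float r = Dot(corners[i], rightV);
        float u = Dot(corners[i], upV);
        float d = Dot(corners[i], lightDir);
        left   = std::min(left, r);   right = std::max(right, r);
        bottom = std::min(bottom, u); top   = std::max(top, u);
        nearZ  = std::min(nearZ, d);  farZ  = std::max(farZ, d);
    }
    // In practice, pull nearZ back further (or pancake casters in the vertex shader)
    // so casters between the light and the frustum still land in the shadow map.
}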




#5256785 Directional Light Calculating by using EV100

Posted by Digitalfragment on 11 October 2015 - 09:11 PM

To further MJP's point: tonemap before gamma correction, just in case you have that order backwards.
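In code terms, the order is simply tonemap first, gamma last (a tiny sketch; Reinhard and a fixed 2.2 gamma are just example choices):

#include <cmath>

// Tonemapping operates on linear HDR values; gamma encoding happens last, for display.
float Reinhard(float linearHdr)    { return linearHdr / (1.0f + linearHdr); }
float GammaEncode(float linearLdr) { return std::pow(linearLdr, 1.0f / 2.2f); }

float ToDisplay(float linearHdr)
{
    return GammaEncode(Reinhard(linearHdr)); // tonemap first, gamma correct afterwards
}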




#5254287 ndotL environment mapping?

Posted by Digitalfragment on 27 September 2015 - 06:20 PM

The N dot L factor only applies if you are performing a convolution on the environment map (i.e. considering each texel in the cube map as its own light source).
If you are using the environment map as a light source, it is generally assumed that this step has already been done.

If you did an N dot L against a specific light source, then suddenly you would be losing light from every direction except that light source...
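To make that concrete, here is a rough sketch of where the N dot L term lives during the convolution (the arrays and normalization convention are placeholders):

#include <algorithm>

struct Vec3 { float x, y, z; };
static float Dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Offline convolution: each environment-map texel acts as its own tiny directional
// light, so the N dot L term (and the texel's solid angle) is applied here, not when
// the pre-convolved map is sampled at runtime.
Vec3 ConvolveDiffuse(Vec3 normal,
                     const Vec3* texelDir, const Vec3* texelRadiance,
                     const float* texelSolidAngle, int texelCount)
{
    Vec3 irradiance = { 0.0f, 0.0f, 0.0f };
    for (int i = 0; i < texelCount; ++i)
    {
        float nDotL = std::max(0.0f, Dot(normal, texelDir[i]));
        irradiance.x += texelRadiance[i].x * nDotL * texelSolidAngle[i];
        irradiance.y += texelRadiance[i].y * nDotL * texelSolidAngle[i];
        irradiance.z += texelRadiance[i].z * nDotL * texelSolidAngle[i];
    }
    return irradiance; // divide by pi (or fold into albedo) depending on convention
}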




#5242058 Multi Window application with DX11 and Wpf

Posted by Digitalfragment on 22 July 2015 - 05:45 PM

You don't need 1 device per window; instead you have 1 global device and 1 swap chain per window.
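A minimal sketch of that setup in D3D11 (error handling and release-on-failure omitted; window creation is assumed to happen elsewhere):

#include <d3d11.h>
#include <dxgi.h>

// One device (and immediate context) for the whole application...
ID3D11Device*        g_device  = nullptr;
ID3D11DeviceContext* g_context = nullptr;

bool CreateDevice()
{
    return SUCCEEDED(D3D11CreateDevice(
        nullptr, D3D_DRIVER_TYPE_HARDWARE, nullptr, 0,
        nullptr, 0, D3D11_SDK_VERSION, &g_device, nullptr, &g_context));
}

// ...and one swap chain per window.
IDXGISwapChain* CreateSwapChainForWindow(HWND hwnd, UINT width, UINT height)
{
    IDXGIDevice*  dxgiDevice = nullptr;
    IDXGIAdapter* adapter    = nullptr;
    IDXGIFactory* factory    = nullptr;
    g_device->QueryInterface(__uuidof(IDXGIDevice), (void**)&dxgiDevice);
    dxgiDevice->GetAdapter(&adapter);
    adapter->GetParent(__uuidof(IDXGIFactory), (void**)&factory);

    DXGI_SWAP_CHAIN_DESC desc = {};
    desc.BufferDesc.Width  = width;
    desc.BufferDesc.Height = height;
    desc.BufferDesc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
    desc.SampleDesc.Count  = 1;
    desc.BufferUsage       = DXGI_USAGE_RENDER_TARGET_OUTPUT;
    desc.BufferCount       = 2;
    desc.OutputWindow      = hwnd;
    desc.Windowed          = TRUE;
    desc.SwapEffect        = DXGI_SWAP_EFFECT_DISCARD;

    IDXGISwapChain* swapChain = nullptr;
    factory->CreateSwapChain(g_device, &desc, &swapChain);

    factory->Release();
    adapter->Release();
    dxgiDevice->Release();
    return swapChain;
}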




#5241811 Lightmapping

Posted by Digitalfragment on 21 July 2015 - 05:52 PM

Beyond traditional lightmaps there are many variations, and each have their own uses.

Valve created the Radiosity Normal Map approach, http://www.decew.net/OSS/References/D3DTutorial10_Half-Life2_Shading.pdf

Bungie took it a step further and baked Spherical Harmonic textures, http://halo.bungie.net/inside/publications.aspx

Light probes are effectively another form of lightmapping, albeit based on points in space rather than on the surfaces of the world.

Many AAA titles still make heavy use of baked lighting, through middleware such as Beast, http://gameware.autodesk.com/beast

Others go down the dynamic route, through middleware such as Enlighten, http://www.geomerics.com/enlighten/

 

Using baked lighting doesn't prevent the use of dynamic lights and shadows; it can be more efficient to mix them as needed.




#5241663 directional light and dx11 shader

Posted by Digitalfragment on 21 July 2015 - 12:58 AM

You're ignoring the composition matrix when transforming the normals, but using it to transform the positions.




#5241653 Slow terrain road editing

Posted by Digitalfragment on 20 July 2015 - 10:45 PM

You can always call reserve() on the vector to presize it, so that push_back() does not trigger allocations (and the subsequent copy of all elements, and deallocation of the previous allocation!)

It's typically the copy from the previous buffer into the new buffer that causes the slowdown. Even the built-in memory allocator isn't /that/ slow :)
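For example (the function and element type here are just illustrative):

#include <vector>

void BuildRoadVertices(std::vector<float>& verts, size_t expectedCount)
{
    verts.clear();
    verts.reserve(expectedCount); // one allocation up front...
    for (size_t i = 0; i < expectedCount; ++i)
        verts.push_back(float(i)); // ...so these push_backs never reallocate or copy
}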




#5231184 Plane and AABB intersection tests

Posted by Digitalfragment on 27 May 2015 - 12:12 AM

It's the distance from the origin (0,0,0) to the plane, along the plane's normal.

So, assuming you are facing down +Z (0,0,1), the normal for the near plane would be +Z (0,0,1) while the normal for the far plane would be -Z (0,0,-1), in order to make them both point inward. As you move forward or backward along the Z axis, those normals don't change, but the D parameter does.

Completely ignore the fact that the coordinates are 3D and think of values along a ruler. If you have a section of a ruler, then from the left side to the right side you are incrementing values (+1), and from the right to the left you are decrementing values (-1). The distance along the ruler, multiplied by either 1 or -1 depending on which way the normal points, is the D value.
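A tiny sketch of the D value under one common convention (dot(N, P) + D = 0 for points P on the plane; sign conventions vary between libraries):

#include <cstdio>

struct Vec3 { float x, y, z; };
static float Dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

int main()
{
    // A point 10 units down +Z, shared by a near and a far plane facing opposite ways.
    Vec3 pointOnPlane = { 0.0f, 0.0f, 10.0f };

    Vec3 nearNormal = { 0.0f, 0.0f,  1.0f };      // inward-facing, down +Z
    float nearD = -Dot(nearNormal, pointOnPlane); // -10: same distance, one sign

    Vec3 farNormal = { 0.0f, 0.0f, -1.0f };       // same location, opposite facing
    float farD = -Dot(farNormal, pointOnPlane);   //  10: same distance, other sign

    std::printf("near D = %f, far D = %f\n", nearD, farD);
}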





