
DementedCarrot

Member Since 18 May 2011
Offline Last Active Yesterday, 08:57 PM

Posts I've Made

In Topic: Quick Multitexturing Question - Why is it necessary here to divide by the num...

21 August 2014 - 08:01 PM

Also, don't let color averaging stop you there with texture blending!

If you use a linear interpolation you can blend any amount of one texture into another. With a lerp you can blend in more or less of a texture to taste, as long as the interpolation value is between 0 and 1.



vec3 red = vec3(1,0,0);
vec3 black = vec3(0,0,0);
vec3 mixedColor = mix(red, black, 0.25);

// This gives you 75% Red and 25% Black.

Another cool application is smooth texture blending. You can use color lerping on outdoor terrain to seamlessly blend different textures together, like grass and dirt, in irregular ways that break up the texture on a mesh so that it isn't solid. You give different vertices on a mesh different lerp parameters, and vertex interpolation will give you all of the interpolation values in between so it fades from one blend percentage to the other. Check out the screenshot of the day and notice the texture blending on the ground in the back. http://www.gamedev.net/page/showdown/view.html/_/slush-games-r46850
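For illustration, a minimal fragment-shader sketch of that idea might look like this (the texture names, the uv input, and the per-vertex blendWeight varying are just made-up placeholders):


#version 330 core

// Hypothetical inputs: two terrain textures and a per-vertex blend weight
// that the vertex shader passes through and the rasterizer interpolates.
uniform sampler2D grassTexture;
uniform sampler2D dirtTexture;

in vec2 uv;           // texture coordinates
in float blendWeight; // 0.0 = all grass, 1.0 = all dirt

out vec4 fragColor;

void main()
{
    vec3 grass = texture(grassTexture, uv).rgb;
    vec3 dirt  = texture(dirtTexture, uv).rgb;

    // Same mix() as above; the interpolated blendWeight fades smoothly
    // between the two textures across the triangle.
    fragColor = vec4(mix(grass, dirt, blendWeight), 1.0);
}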

 

Texture blending is pretty handy.


In Topic: Cascaded Shadow Maps Optimization Concept?

30 May 2014 - 04:43 PM

 

Outputting depth from the pixel shader disables early-Z, so that can come at a pretty significant performance cost.

 

On DX11+ hardware you can do texture reads in the vertex shader. You could tessellate the quad up to some level, read the depths, and interpolate them with the verts. That would let you keep the early-Z! Then you would have to worry about the tessellation/quality tradeoff, and how many polys are in the tessellated quad vs the model. This would also reduce the texture reads.

 

Also, the different zoom scales of the cascades in CSM would actually benefit the tessellation, I think. More detail up close, and less farther away.

 

Edit: D'oh, you would have to sample the depth texture in the domain or geometry shader, because tessellation happens after the vertex shader.

 

Also, it may be better to pre-bake some tessellated quads instead of letting the tessellator do a bunch of redundant work, since it would tessellate all of the object quads the same way. Then you could just use the vertex shader for the texture sampling. Managing those vertex buffers might be a pain in the ass though, so I don't know if it's worth it.
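For what it's worth, the vertex-shader sampling part could look roughly like this in GLSL (the uniform and attribute names are placeholders, and it assumes a pre-tessellated quad lying in the XY plane):


#version 330 core

// Hypothetical sketch: displace the vertices of a pre-tessellated quad by a
// depth value fetched in the vertex shader, so the pixel shader never has to
// write depth and early-Z stays enabled.
uniform sampler2D depthTexture;
uniform mat4 worldViewProjection;

in vec3 position; // vertex position on the flat quad
in vec2 uv;       // where on the depth texture this vertex samples

void main()
{
    // Vertex-shader texture reads need an explicit LOD, since there are
    // no screen-space derivatives at this stage.
    float depth = textureLod(depthTexture, uv, 0.0).r;

    // Push the vertex along the quad's normal (assumed to be +Z here).
    vec3 displaced = position + vec3(0.0, 0.0, depth);
    gl_Position = worldViewProjection * vec4(displaced, 1.0);
}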


In Topic: Use 64bit precision (GPU)

17 April 2014 - 10:45 AM

That's a tough one. I'm not totally sure how to accomplish this when the world meshes are created that way.

 

I'll think on it.


In Topic: Use 64bit precision (GPU)

17 April 2014 - 09:12 AM

Quote: "My problem is i use cube2sphere mapping which require that the cubecenter is 0,0,0"

 

Can you be more specific about what cube2sphere mapping is, and what it's used for?


In Topic: Use 64bit precision (GPU)

16 April 2014 - 09:31 PM

If you want to render stuff relative to the eye in float space using doubles, you:

 

1. Use doubles for your position vectors.

2. Use the double vector for every object position, and for your camera.

 

Then you have to translate your positions into a space where floats have enough precision for rendering. You translate every object position to get its position relative to the eye with:



DoubleVector3 objectPosition = object.somePosition;
DoubleVector3 cameraPosition = camera.position;
DoubleVector3 doubleRelativePosition = objectPosition - cameraPosition;

// When you translate the object by the camera position, the resulting number is representable by a float.
// Just cast the double-vector components down to floats!

FloatVector3 relativePosition;
relativePosition.x = (float)doubleRelativePosition.x;
relativePosition.y = (float)doubleRelativePosition.y;
relativePosition.z = (float)doubleRelativePosition.z;

and then that's the position you pass into the shader for rendering.

 

This is really cumbersome for a ton of objects because you have to recompute this translation every time you move your camera. There is an extension of this method that keeps you from recreating relative coordinates every frame: a relative anchor point that moves with your camera. To do this you have to:

 

1. Create a double-vector anchor point that moves with your camera periodically. You move this anchor point when float precision starts to become insufficient to represent points inside the float anchor area.
2. Build float-vector positions for everything relative to the anchor point, just as we did before with the camera.

3. When you move far enough away from the anchor, you re-locate it.

4. When the anchor moves you re-translate everything relative to the new anchor point. This means everything has a double-vector world position and a float-vector anchor-relative position.

5. You use a regular camera view matrix to move around inside this anchor float space.
6. Draw everything normally as if the anchor-relative position is the position, and the anchor-relative camera position is the camera location. (See the sketch below.)
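To make the bookkeeping concrete, here is a rough C++-style sketch of steps 1-4, using the same placeholder vector types as above (the rebase threshold is arbitrary):


#include <vector>

struct DoubleVector3 { double x, y, z; };
struct FloatVector3  { float  x, y, z; };

// Hypothetical object record: every object keeps a double-precision world
// position and a float-precision anchor-relative position.
struct Object
{
    DoubleVector3 worldPosition;
    FloatVector3  anchorRelativePosition;
};

static FloatVector3 ToFloatRelative(const DoubleVector3& p, const DoubleVector3& anchor)
{
    // Subtract in double precision, then cast the small result down to floats.
    return FloatVector3{ (float)(p.x - anchor.x),
                         (float)(p.y - anchor.y),
                         (float)(p.z - anchor.z) };
}

static double DistanceSquared(const DoubleVector3& a, const DoubleVector3& b)
{
    double dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return dx * dx + dy * dy + dz * dz;
}

// Called once per frame (or less often). The threshold is arbitrary; pick
// whatever keeps float precision acceptable inside the anchor area.
void UpdateAnchor(DoubleVector3& anchor,
                  const DoubleVector3& cameraWorldPosition,
                  std::vector<Object>& objects,
                  double rebaseThreshold = 10000.0)
{
    if (DistanceSquared(cameraWorldPosition, anchor) > rebaseThreshold * rebaseThreshold)
    {
        // Steps 3 and 4: move the anchor to the camera and re-translate
        // every object's anchor-relative position once.
        anchor = cameraWorldPosition;
        for (Object& obj : objects)
            obj.anchorRelativePosition = ToFloatRelative(obj.worldPosition, anchor);
    }
}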

I hope this helps!

Edits: Typo-city

