PhillipHamlyn

Member Since 20 Jul 2012
Offline Last Active Apr 13 2016 02:29 PM

Topics I've Started

Planting Map / Texture Selection Stencils - Best Format?

21 February 2016 - 12:01 PM

Hi,

 

On a standard terrain I want to select the rendered texture from a precalculated "environment map" - e.g. a meadow texture, a beach texture, etc. I have seen many examples use a full R8G8B8A8 "environment map" texture to allow linear sampling, then blend their textures based on the weight each channel of the sample returns.
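
For reference, the four-channel blend described above can be sketched like this (plain Python rather than shader code; the texture names and values are illustrative only):

```python
# Sketch of the classic RGBA splat-map blend: each channel of the control
# map weights one terrain texture; the bilinearly filtered control sample
# gives smooth transitions between them.

def splat_blend(weights, texels):
    """weights: (r, g, b, a) from the control map; texels: four RGB colours."""
    total = sum(weights) or 1.0          # guard against an all-zero sample
    return tuple(
        sum(w * t[c] for w, t in zip(weights, texels)) / total
        for c in range(3)
    )

# Illustrative colours for four terrain types.
meadow, beach, rock, sand = (0.2, 0.6, 0.1), (0.9, 0.8, 0.5), (0.4, 0.4, 0.4), (0.8, 0.7, 0.4)

# Half meadow, half beach: the result is the average of the two colours.
print(splat_blend((0.5, 0.5, 0.0, 0.0), [meadow, beach, rock, sand]))
```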

 

Is there a more modern way of achieving this? Committing 8 bits to each channel feels wasteful (depending on how high-resolution the "environment map" is), and needing multiple environment map textures - since each can only depict four possible textures - seems wasteful too.

 

Is there a better method in common use?

 

I have attempted using an R8_UINT texture and the Load() method - this gives me 255 possible texture selections - and I could then do my own four-tap interpolation, blending based on the world-space distance of each tap from the pixel. Does this seem a reasonable approach, or is it too computationally expensive?
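
A minimal sketch of that four-tap idea (the map, palette, and coordinates here are made up for illustration): Load() the four nearest control texels, fetch the texture each index selects, then blend with ordinary bilinear weights computed from the fractional position.

```python
# Hypothetical sketch of manual 4-tap filtering over a single-channel index
# map: one id per texel, one texture fetch per tap, bilinear weights.

def sample_index_map(index_map, palette, u, v):
    """index_map: 2D grid of texture ids; palette: id -> RGB; (u, v) in texel space."""
    x0, y0 = int(u), int(v)
    fx, fy = u - x0, v - y0
    taps = [(x0,     y0,     (1 - fx) * (1 - fy)),
            (x0 + 1, y0,     fx       * (1 - fy)),
            (x0,     y0 + 1, (1 - fx) * fy),
            (x0 + 1, y0 + 1, fx       * fy)]
    colour = [0.0, 0.0, 0.0]
    for x, y, w in taps:
        texel = palette[index_map[y][x]]   # one "texture fetch" per tap
        for c in range(3):
            colour[c] += w * texel[c]
    return tuple(colour)

index_map = [[0, 0], [0, 1]]                        # 2x2 map, ids 0 and 1
palette = {0: (0.0, 0.5, 0.0), 1: (0.9, 0.8, 0.5)}  # meadow, beach
print(sample_index_map(index_map, palette, 0.5, 0.5))
```

In a shader this is four Load() calls plus four diffuse fetches, so it is not free, but it is a well-known trade against the memory cost of multi-channel splat maps.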

 

Philip H.


Imposter Lighting

10 August 2015 - 02:43 PM

Hi,

 

I am trying to implement an imposter lighting scheme where I record a texture atlas of my model taken at various Y-axis rotation angles (to pre-calculate a set of textures I can render as imposters, lerping between them). I have a system where I write a second texture atlas containing the model normals instead of the texture values, as a kind of deferred rendering process in my pipeline.

 

Aside from having some trouble using my low-grade maths skills to rotate the normal stored in the appropriate pixel, I get some reasonable results; i.e. the lighting on the 3D model interpolates fairly smoothly into the imposter, which lights itself using the normals stored in the normals texture atlas.

 

I am having one issue though and am looking for help. In my imposter VS I pass in a rotation angle in radians which matches the angle I will use in my Model matrix when rendering the full 3D model, and I use this to select the appropriate texture and normal pixels from my texture atlas. This works OK. For the lighting to work, I need to rotate the pre-baked model-space normal in the PS by the same Y-axis rotation (Y is up in my world) - this should then give the same normal value as if I had read it through the 3D model's VS input structure and rotated it by the Model matrix in the usual way.

 

My code fragments are;

imposter VS;

// Matrix def from http://gamedev.stackexchange.com/questions/103002/how-to-rotate-a-3d-instance-using-an-hlsl-shader
	output.ModelRotation =
		float3x3(
		cos(modelRotation), 0.0f, -sin(modelRotation),
		0.0f, 1.0f, 0.0f,
		sin(modelRotation), 0.0f, cos(modelRotation));

imposter PS;

// LERP between my two possible pre-baked normal textures. These are stored in model space, not tangent space.
	float3 normalSample =
		((tex2D(TextureSampler1, ps_input.TextureCoordinate0.xy) * ps_input.TextureCoordinate0.z) +
		(tex2D(TextureSampler1, ps_input.TextureCoordinate1.xy) * ps_input.TextureCoordinate1.z)).rgb;
	
	// Correct them into -1->1 range.
	float3 normal = (2 * (normalSample)) - 1.0f;
	
	// The normal is in model space. We need to apply the specific model rotation on top of it, for this particular instance of the imposter
	normal = normalize(mul(normal, ps_input.ModelRotation)); // Rotate

I then use the standard lighting calculation I use elsewhere to light the pixel using the normal.

 

My problem is that I think my Y-axis rotation matrix is incorrect, but I struggle with the row-vs-column ordering conventions in HLSL vs. DirectX, so I cannot easily verify that the matrix has the correct effect on the normal via a C# unit test. If anyone can guide me to the correct method of constructing that matrix, I'd be grateful.
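
One way to check the matrix numerically outside the shader (sketched in Python here rather than a C# test, purely for illustration): build the same rows the HLSL float3x3 constructor produces and multiply with a row vector, matching HLSL's mul(vector, matrix). Under that convention, rows (c, 0, -s), (0, 1, 0), (s, 0, c) are the same as D3DX/XNA's Y rotation for row vectors.

```python
# Numeric check of the shader fragment's Y-rotation matrix, using the
# row-vector convention of HLSL's mul(vector, matrix).

import math

def rotate_y_row_vector(v, angle):
    c, s = math.cos(angle), math.sin(angle)
    m = [(c,   0.0, -s),
         (0.0, 1.0, 0.0),
         (s,   0.0, c)]
    # row vector times matrix: v' = v * M
    return tuple(sum(v[i] * m[i][j] for i in range(3)) for j in range(3))

# Rotating +Z by 90 degrees about Y should give +X under this convention.
fwd = rotate_y_row_vector((0.0, 0.0, 1.0), math.pi / 2)
print([round(x, 6) for x in fwd])   # [1.0, 0.0, 0.0]
```

If the C# side instead uses column vectors (mul(matrix, vector)), the same rows act as the transpose, i.e. the opposite rotation direction - which would show up as lighting that swings the wrong way as the model turns.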

 

Any other comments on the basic methods also gratefully received.

 

Phillip


Multiple Texture Parameter Penalty?

10 June 2015 - 11:02 AM

Hi,

 

Just wondered if there is a penalty for passing textures to my shader as four separate textures, or is it measurably faster to pass them as a single (larger) texture? I am miles away from exhausting my texture registers, so I have the "room" to take either approach.

 

I have buffered all my effect parameters and only update my shader parameters when something has changed, as I think I've read that "changing one value is roughly the same cost as changing multiple values, so if you do need to change a value, do them all". On that basis I am leaning toward keeping my textures in separate Texture2Ds and avoiding the slight complexity of making a texture atlas, with its associated margin/mipmap problems.
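
The buffering described above amounts to a dirty-flag cache. A minimal sketch (illustrative only - this is not the XNA EffectParameter API, just the shape of the idea):

```python
# Remember the last value pushed per parameter, skip redundant updates, and
# flush every dirty parameter together, per the "do them all" advice.

class EffectParameterCache:
    def __init__(self, set_value):
        self._set_value = set_value   # e.g. wraps effect.Parameters[name].SetValue
        self._cached = {}
        self._dirty = set()

    def set(self, name, value):
        if self._cached.get(name) != value:
            self._cached[name] = value
            self._dirty.add(name)

    def flush(self):
        """Push all dirty parameters in one go before the draw call."""
        for name in sorted(self._dirty):
            self._set_value(name, self._cached[name])
        self._dirty.clear()

pushed = []
cache = EffectParameterCache(lambda name, value: pushed.append(name))
cache.set("World", 1)
cache.set("World", 1)   # identical value: no-op
cache.flush()
print(pushed)   # ['World']
```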

 

Please be aware I'm still stuck in Dx9 XNA4 territory, but from reading around the subject I don't think that makes a difference.

 

Thanks.

 

Phillip.


"Nested" Instanced Models

24 May 2015 - 03:40 PM

I am looking for a technique to render my tree models efficiently. The model has 4-5,000 vertices but repeats these, up to a maximum of 20,000 vertices, using model transforms (one for each tree limb). I would normally use hardware instancing to render these without lots of separate draw calls. However I have many trees, and each of those is already hardware instanced (position, rotation); using XNA4 I cannot combine hardware instances "to multiply them up", as it seems to support only one instance buffer per call.

 

I can repeat the model transforms into an array, and then repeat all of that for the 30-40 visible trees, to form a large dynamic instance buffer - but I wonder whether this is any more efficient than baking the transforms into the model (even though the model would then be 20,000 vertices in size).
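
The flattened-buffer option can be sketched as one instance entry per (tree, limb) pair, each holding limbLocal * treeWorld, so a single instanced draw of the limb mesh covers every limb of every tree (the transform counts below are illustrative, not from the engine):

```python
# Build a combined instance buffer from per-limb local transforms and
# per-tree world transforms, using 4x4 row-major matrices with the
# row-vector convention (as in XNA).

def mat_mul(a, b):
    """4x4 matrix product a * b."""
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def translation(x, y, z):
    """Row-vector translation matrix: offset lives in the bottom row."""
    return [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [x, y, z, 1]]

limb_transforms = [translation(0, h, 0) for h in (0, 2, 4)]    # 3 limbs
tree_transforms = [translation(10 * t, 0, 0) for t in range(5)]  # 5 trees

instance_buffer = [mat_mul(limb, tree)
                   for tree in tree_transforms
                   for limb in limb_transforms]
print(len(instance_buffer))   # 15 entries: limbs x trees
```

Note the buffer grows as limbs × trees, which is the memory/upload cost being weighed against baking the limbs into one 20,000-vertex mesh.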

 

With a hardware instancing approach to rendering complex meshes, is it better to bake in the model transforms and use the only available hardware instance stream to repeat the whole mesh, or is it better to use the hardware instance stream to render the one mesh efficiently, and then make repeated draw calls for each of the visible trees? (I am using imposters to view large numbers of trees, but want to push out the transition distance to prevent popping, which is why I am considering rendering more tree meshes than I have before.)

 

Any advice greatly appreciated.

 

Phillip


Imposter Transitioning to Mesh

23 May 2015 - 06:59 AM

I have a hobby-grade landscape engine which I continue to tinker with, but I'm having a problem making the transition from my tree imposters to 3D meshes work acceptably. My tree meshes are 4-5,000 vertices, and I have a sprite sheet of 64 2D images of the tree, each rotated around the vertical axis. The sprite sheet is 2048x2048. For imposter (billboard) rendering I use hardware instancing and calculate the correct sprite sheet offsets in the shader.

 

This allows me to render many trees with acceptable performance, and then render my 3D meshes when the viewer is within a specific distance. However I get a real problem transitioning between the imposter and the mesh - it's a clear jump, and the difference between the rotation of the 3D mesh and the "coarser" rotation of the 64 samples is also very visible.

 

My question is: what technique is used to

1) align the mesh against the 64 pre-sampled rotations I have billboards for, to prevent the two being out of line when I transition;

2) transition from imposter to 3D such that I don't get a jump?
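
One common approach (an assumption on my part, not from a specific engine) addresses both points: snap the angle to the nearest of the 64 baked rotations when choosing the sprite, render the mesh at that same snapped angle near the switch point so the two line up, and cross-fade alpha over a distance band instead of switching at a hard threshold. A sketch:

```python
# Sprite selection, angle snapping, and distance cross-fade for a 64-image
# imposter sheet. All thresholds here are illustrative.

import math

NUM_SPRITES = 64
STEP = 2.0 * math.pi / NUM_SPRITES   # angular spacing of the baked views

def sprite_index(view_angle):
    """Nearest pre-baked rotation for a given view angle (radians)."""
    return int(round(view_angle / STEP)) % NUM_SPRITES

def snapped_angle(view_angle):
    """Angle to render the 3D mesh at near the transition, so it matches
    the billboard it replaces."""
    return sprite_index(view_angle) * STEP

def fade_weights(distance, near, far):
    """(mesh_alpha, imposter_alpha) cross-faded over [near, far]."""
    t = min(max((distance - near) / (far - near), 0.0), 1.0)
    return 1.0 - t, t

print(sprite_index(0.05), fade_weights(55.0, 50.0, 60.0))
```

Rendering both representations with partial alpha inside the band costs some overdraw, but it removes the hard pop; dithered (screen-door) fading is a common alpha-free variant.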

 

My tree meshes are converted from Collada files obtained through Sketchup - I don't have an artistic bone in my body, and as this is a hobby I don't have the money to spend on a professional vegetation mesh pack. The quality of the meshes isn't that relevant though, as it's the transition I'm concerned with, not the overall quality of the scene.

 

Any suggestions gladly taken; I did search around for "Imposter Transitions" but didn't find anything helpful.

 

Phillip

