

dr4cula

Member Since 24 Jul 2013
Offline Last Active Apr 24 2015 12:19 PM

Posts I've Made

In Topic: Bezier Teapot Tessellation

07 April 2015 - 11:49 AM

The teapot is "broken", or rather, not everything should be a quad patch. The patches at the top degenerate to triangles. That's why the derivatives can fail and produce those artifacts. I remember "solving" it by clamping the domain values for the derivatives to (e, 1-e) with some small e.

 

Ah! I see, yes, when I was looking at the model in wireframe the top part did look triangular, but I didn't connect that with the fact that the tessellator is using the quad domain.

 

Your "solution" worked out great:

static const float epsilon = 1e-5f;
// clamp the domain coordinates away from the exact patch boundary so the
// derivatives don't degenerate on the collapsed (triangular) patches
float u = min(max(coordinates.x, epsilon), 1.0f - epsilon);
float v = min(max(coordinates.y, epsilon), 1.0f - epsilon);
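
For reference, here's a minimal sketch of how the clamped coordinates then feed the derivative evaluation in the domain shader. The basis helpers, the controlPoints name, and the row-by-row control point layout are assumptions, not my exact code:

// cubic Bernstein basis functions evaluated at t
float4 BernsteinBasis(float t)
{
    float s = 1.0f - t;
    return float4(s * s * s, 3.0f * t * s * s, 3.0f * t * t * s, t * t * t);
}

// derivatives of the cubic Bernstein basis functions at t
float4 BernsteinDerivative(float t)
{
    float s = 1.0f - t;
    return float4(-3.0f * s * s,
                  3.0f * s * s - 6.0f * t * s,
                  6.0f * t * s - 3.0f * t * t,
                  3.0f * t * t);
}

// weighted sum of the 16 control points (cp laid out row by row)
float3 EvaluateBezier(float3 cp[16], float4 bU, float4 bV)
{
    float3 sum = float3(0.0f, 0.0f, 0.0f);
    for(int row = 0; row < 4; ++row) {
        float3 rowSum = bU.x * cp[row * 4 + 0] + bU.y * cp[row * 4 + 1]
                      + bU.z * cp[row * 4 + 2] + bU.w * cp[row * 4 + 3];
        sum += bV[row] * rowSum;
    }
    return sum;
}

// with u/v clamped off the exact boundary, the tangents stay (slightly)
// non-zero on the degenerate patches, so cross() and normalize() remain
// well-defined
float3 position = EvaluateBezier(controlPoints, BernsteinBasis(u), BernsteinBasis(v));
float3 tangentU = EvaluateBezier(controlPoints, BernsteinDerivative(u), BernsteinBasis(v));
float3 tangentV = EvaluateBezier(controlPoints, BernsteinBasis(u), BernsteinDerivative(v));
float3 normal   = normalize(cross(tangentU, tangentV));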
Thanks a lot! :)

In Topic: Heightfield Normals

13 February 2015 - 01:43 PM

 

If you're talking about the slight diamond-shaped anomalies, that's normal for that type of terrain tile - you can hide it pretty well when you start texturing the terrain.

I've never been able to get rid of that effect.

 

 

That's what I was afraid of. I remember encountering something similar when I was messing about with terrain generation.

 

That's an artifact of interpolating between 3 triangle vertices. You should notice that the artifact is much more prominent on one diagonal than the other, right? Like RobMaddison said, it will become less noticeable when you texture the terrain.

 

Also, if you orient your triangulation so that the boundary between two triangles in a quad is aligned with the slope, then it will become less noticeable. See "additional optimizations" here: https://mtnphil.wordpress.com/2011/09/22/terrain-engine/

 

If you want something even smoother, you could probably precalculate and put your normals into a texture. Then sample from that in the pixel shader based on the xz world position. Then you'll be getting the weighted avg of 4 normals instead of 3. Of course now you incur the cost of an extra sample in your pixel shader, and the memory footprint of the normal texture.

 

Thanks for the link! Scouring various papers on simulation methods, I ran into a similar suggested technique; however, upon implementing it, I really couldn't tell much of a difference. I also tried writing the normals to a texture and sampling that, but again saw no visible difference. I guess I should look into getting the reflection/refraction working and then see how apparent the issue is.
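
For reference, the texture-based version I tried looked roughly like this (normalTexture, linearSampler and terrainWorldSize are placeholder names):

Texture2D normalTexture;    // precomputed per-texel terrain normals, packed into [0, 1]
SamplerState linearSampler; // bilinear filtering blends 4 stored normals per sample
float2 terrainWorldSize;    // world-space extent of the heightfield

float3 SampleTerrainNormal(float3 worldPos)
{
    // map the world-space xz position onto the normal texture
    float2 uv = worldPos.xz / terrainWorldSize;
    // unpack from [0, 1] back to [-1, 1] and renormalize after filtering
    float3 n = normalTexture.Sample(linearSampler, uv).xyz * 2.0f - 1.0f;
    return normalize(n);
}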

 

Thanks for your quick replies! :)


In Topic: Shallow Water

22 January 2015 - 01:11 PM

Shouldn't either U or V be sampled from the y component of height? You are sampling both of them from the x component:

float hL = height.Sample(pointSampler, texCoord - float2(recTexDimensions.x, 0.0f)).x;
float hR = height.Sample(pointSampler, texCoord).x;
float hB = height.Sample(pointSampler, texCoord - float2(0.0f, recTexDimensions.y)).x; // change to .y?
float hT = height.Sample(pointSampler, texCoord).x;                                    // change to .y?

 

Height is stored only in the first component of a 4-component render target, i.e. the other 3 components of the render target/texture currently go unused.
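
For context, those four samples feed the velocity update of the staggered-grid scheme, roughly like this (gravity, dt, dx and the velocity texture are placeholder names for my constants and resources):

// hL/hR and hB/hT are the height samples from the snippet above
float2 vel = velocity.Sample(pointSampler, texCoord).xy;
vel.x += gravity * dt * (hL - hR) / dx; // du/dt = -g * dh/dx
vel.y += gravity * dt * (hB - hT) / dx; // dv/dt = -g * dh/dy
return float4(vel, 0.0f, 0.0f);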

I've not been able to make any progress on this on my own...


In Topic: Renormalizing bilinear interpolation weights

04 January 2015 - 09:17 AM

 

You need to subtract 0.5f from your un-normalized texture coordinate before computing the fractional amount that you use for your bilinear weights. This is because texel centers are located between integer pixel coordinates: for example, the center of the first pixel in a row is at 0.5, the second at 1.5, and so on. Therefore when the coordinate is at 0.5, you want the weight to be 0 so that you get full contribution from the left pixel and no contribution from the right pixel. Like this:

float2 t = frac((texCoord / recTexDim) - 0.5f);

 

Thanks for your reply! Indeed, I found a couple of implementations online using this 0.5f constant, but I wasn't sure why it was needed. However, adding this -0.5f term as you've said still doesn't make the function produce the same output as the sampler operation. Here's the result I now get.
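
For reference, my manual bilinear function now looks roughly like this (recTexDim holds the reciprocal texture dimensions and pointSampler uses point filtering):

float4 BilinearSample(Texture2D tex, float2 texCoord)
{
    // shift by half a texel so that integer coordinates land on texel centers
    float2 coords = texCoord / recTexDim - 0.5f;
    float2 base = floor(coords);
    float2 t = coords - base; // fractional bilinear weights

    // fetch the 4 surrounding texels with point sampling
    float4 tl = tex.Sample(pointSampler, (base + float2(0.5f, 0.5f)) * recTexDim);
    float4 tr = tex.Sample(pointSampler, (base + float2(1.5f, 0.5f)) * recTexDim);
    float4 bl = tex.Sample(pointSampler, (base + float2(0.5f, 1.5f)) * recTexDim);
    float4 br = tex.Sample(pointSampler, (base + float2(1.5f, 1.5f)) * recTexDim);

    return lerp(lerp(tl, tr, t.x), lerp(bl, br, t.x), t.y);
}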

 

Also, how would I renormalize the weights based on the values of the texels involved in the interpolation? If I have a texture where texels set to 0 are unknown and any other value means the texel is known (i.e. should be used for interpolation), how exactly do I bias the interpolation towards the texels with known values? I hope this makes sense.
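
To illustrate what I mean, continuing from the sketch above (with a 0 in the first component marking an unknown texel), I imagine something like:

// standard bilinear weights for the 4 texels
float4 w = float4((1.0f - t.x) * (1.0f - t.y),   // tl
                  t.x * (1.0f - t.y),            // tr
                  (1.0f - t.x) * t.y,            // bl
                  t.x * t.y);                    // br

// zero out the weights of unknown texels, then renormalize
float4 known = float4(tl.x != 0.0f, tr.x != 0.0f, bl.x != 0.0f, br.x != 0.0f);
w *= known;
float wSum = w.x + w.y + w.z + w.w;

float4 result = (wSum > 0.0f)
              ? (w.x * tl + w.y * tr + w.z * bl + w.w * br) / wSum
              : float4(0.0f, 0.0f, 0.0f, 0.0f); // all 4 texels unknown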

 

Thanks again!

 

EDIT: FYI, without the 0.5f bias I get this result

EDIT2: I also tested this in a sample program where the values of one texture are interpolated into another texture that is 2x smaller, and this was the result. The custom bilinear interpolation functions seem to have a bias towards the top-left corner (the circle has shifted from the center of the texture towards the top-left corner).


In Topic: Volume Rendering: Eye Position to Texture Space?

28 September 2014 - 10:23 AM



the entry point coordinates (in texture space) are the same as the RGB value at each fragment on the nearest sides of the cube that are facing the camera. The vector to step your ray along is the opposite side's RGB minus the origin RGB, normalized.

 

Thanks for your reply! Yes, I know how to find the direction vector when I've got access to both the back and front texture coordinate data. The GPU Gems text uses only a single rayData texture by combining the data like I have in the original post and then providing the camera's position in texture space in a constant buffer. J. Zink's blog (http://www.gamedev.net/blog/411/entry-2255423-volume-rendering/) seems to use the same method: at least he describes using the camera's position in texture space.
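
For completeness, my understanding of the single-texture variant is that the eye position gets transformed into texture space once per frame, roughly like this (invVolumeWorld is a placeholder for the inverse of the volume cube's world matrix, assuming the cube spans [-0.5, 0.5] in local space):

// eyeTex then goes into a constant buffer for the ray-marching shader
float3 EyeToTextureSpace(float3 eyeWorld, float4x4 invVolumeWorld)
{
    // world space -> the cube's local space, [-0.5, 0.5]^3
    float3 local = mul(float4(eyeWorld, 1.0f), invVolumeWorld).xyz;
    // local [-0.5, 0.5] -> texture space [0, 1]
    return local + 0.5f;
}

// per fragment, the ray direction is then: normalize(entryPointRGB - eyeTex)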

 

For now, I've implemented the 2-rendertarget/texture version and I seem to be getting decent results. However, it seems for different data sets I need to scale the alpha component differently (see code below):

float4 rayEnd = rayDataBack.Sample(linearSampler, input.texCoord);
float4 rayStart = rayDataFront.Sample(linearSampler, input.texCoord);

float3 rayDir = rayEnd.xyz - rayStart.xyz;
float dist = length(rayDir);
rayDir = normalize(rayDir);

// one step per voxel along the volume's largest dimension
int steps = max(max(volumeDimensions.x, volumeDimensions.y), volumeDimensions.z);
float stepSize = dist / steps;

float3 pt = rayStart.xyz;

float4 color = float4(0.0f, 0.0f, 0.0f, 0.0f);
for(int i = 0; i < steps; ++i) {
    float3 texCoord = pt;
    texCoord.y = 1.0f - pt.y; // flip y to match the volume texture's convention
    float vData = volumeData.SampleLevel(linearSampler, texCoord, 0).r;

    float4 src = float4(vData, vData, vData, vData);
    src.a *= 0.01f; // <- this scalar needs to be different for different data sets

    // front-to-back compositing
    color.rgb += (1.0f - color.a) * src.a * src.rgb;
    color.a += src.a * (1.0f - color.a);

    // early out once (nearly) opaque
    if(color.a >= 0.95f) {
        break;
    }

    pt += stepSize * rayDir;

    // stop once the ray leaves the unit cube (checking the lower bound too)
    if(any(pt > 1.0f) || any(pt < 0.0f)) {
        break;
    }
}

return color;

For example:

teapot (scalar = 0.01f): http://postimg.org/image/668oi4imz/

foot (scalar = 0.05f): http://postimg.org/image/tjgggjmy5/

 

Is it normal to tweak the rendering for specific data sets or should one value fit all?

 

Thanks.

