wolf
Member Since: 08 Jan 2000 · Offline · Last Active: Feb 01 2014, 10:29 PM
Community Stats
 Group: Members
 Active Posts: 762
 Profile Views: 4,521
 Submitted Links: 0
 Member Title: XNA/DirectX MVP
Topics I've Started
Mapping Depth Values to the near / far range
20 November 2009  07:00 PM
Hi,
I have a projection matrix that is:
w, 0, 0, 0,
0, h, 0, 0,
0, 0, -(f + n) / (f - n), -(2 * f * n) / (f - n),
0, 0, -1, 0
This is an OpenGL projection matrix. I would like to map the values that I read from the depth buffer back into the near/far range of the view frustum in camera space. This is useful for camera-based effects like depth of field.
Many websites suggest a solution like this:
f * n / (z * (f - n) - f)
where z is the depth buffer value and f and n are the far and near plane distances. Any references where I can read up on this?
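To sanity-check the mapping, here is a small Python sketch of my own (not taken from any of those websites): it pushes a view-space z through the projection into a [0, 1] window depth and back through the formula. The [0, 1] depth range and the negative view-space z are the standard OpenGL conventions; adjust if your pipeline differs.

```python
def view_z_to_depth(z_eye, n, f):
    # Forward mapping: view-space z (negative, OpenGL convention)
    # -> NDC z via the projection above -> window depth in [0, 1].
    z_ndc = (f + n) / (f - n) + (2.0 * f * n) / ((f - n) * z_eye)
    return 0.5 * z_ndc + 0.5

def depth_to_view_z(d, n, f):
    # Inverse mapping: window depth in [0, 1] -> view-space z,
    # i.e. the formula from the post.
    return f * n / (d * (f - n) - f)
```

A point on the near plane (z_eye = -n) maps to depth 0, the far plane to 1, and the round trip recovers the original z.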
Cover circle with quads
23 April 2009  09:06 AM
Hi,
I would like to approximate a circle with quads. The quads should overlap the circle so that no area of the circle is left uncovered.
What I want to find is the best possible way to calculate the quad size and the number of quads. The goal is to use the lowest possible number of quads with a reasonable overlap. In other words, there are two parameters:
1. number of quads
2. overlap of quads
and both should be as small as possible.
Any good references or thoughts for a starting point?
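As one possible starting point, here is a simple Python sketch of an obvious scheme (my own construction, not claimed to be optimal): cover the circle with k vertical strips, each just tall enough to contain the circle over its x-range. It lets you plot the wasted overlap as a function of the quad count, which is exactly the trade-off between the two parameters above.

```python
import math

def strip_cover(r, k):
    # Cover a circle of radius r centred at the origin with k
    # vertical strips (axis-aligned quads) of equal width.
    w = 2.0 * r / k
    quads = []
    for i in range(k):
        x0 = -r + i * w
        x1 = x0 + w
        # The circle is tallest at the x inside [x0, x1] closest to 0,
        # so that is where the strip height is determined.
        x_near = 0.0 if x0 <= 0.0 <= x1 else min(abs(x0), abs(x1))
        h = 2.0 * math.sqrt(max(r * r - x_near * x_near, 0.0))
        quads.append(((x0, -h / 2.0), (x1, h / 2.0)))
    return quads

def overlap_ratio(r, k):
    # Total quad area divided by circle area, minus 1:
    # 0 would mean no wasted area at all.
    area = sum((x1 - x0) * (y1 - y0) for (x0, y0), (x1, y1) in strip_cover(r, k))
    return area / (math.pi * r * r) - 1.0
```

With a single quad the overlap is 4/pi - 1 (about 27%), and it drops quickly as k grows; sweeping k against this ratio gives a concrete handle on the trade-off.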
 Wolfgang
Attenuation / To figure out the Bounding volume of Point lights / Spot lights
05 October 2008  08:11 AM
Hi,
I have been thinking about this for a while now. Using the classical attenuation function can make determining the size of a bounding volume quite complicated. This has been covered in several threads on this forum so far.
1 / (kc + kl * distance + kq * distance^2)
Visualizing this curve shows that it approaches zero but never reaches it. Obviously you can work with limits and come up with a decent solution, or hack something together.
What I was thinking about is using the angle of the tangent at a certain point of the curve to figure out where the bounding geometry should be. Obviously you do not want this point too close or too far: if it is too close you see artefacts, and if it is too far you pay more for rendering the light.
So if I could figure out the angle of the tangent (that is, the angle between the tangent and the x-axis), I could say that as soon as the tangent has a very small angle, I would place the bounding volume at that point. Because I would expect this angle to correspond to a certain intensity of the light at that point, I would consider it a reliable parameter.
Does this sound ok to you? Anyone tried this?
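To make the comparison concrete, here is a small Python sketch of both ideas (my own, untested in production): a closed-form radius for the usual "attenuation drops below a threshold" cutoff, and a numeric search for the tangent-slope criterion described above.

```python
import math

def radius_for_threshold(kc, kl, kq, t):
    # Distance where 1 / (kc + kl*d + kq*d^2) falls to the threshold t.
    # Solves kq*d^2 + kl*d + (kc - 1/t) = 0; assumes kq > 0 and t < 1/kc.
    disc = kl * kl - 4.0 * kq * (kc - 1.0 / t)
    return (-kl + math.sqrt(disc)) / (2.0 * kq)

def radius_for_slope(kc, kl, kq, eps, step=0.01, d_max=1e4):
    # The tangent idea from the post: march outward and stop once the
    # curve is past its steepest point and its slope magnitude < eps.
    def slope(d):
        denom = kc + kl * d + kq * d * d
        return (kl + 2.0 * kq * d) / (denom * denom)
    d, prev, falling = step, slope(0.0), False
    while d < d_max:
        s = slope(d)
        falling = falling or s < prev
        if falling and s < eps:
            return d
        prev, d = s, d + step
    return d_max
```

Both give a single distance to feed into the bounding-volume calculation. The threshold version has the advantage of being closed form; the slope version matches the tangent-angle reasoning directly, since a small tangent angle means tan(angle) < eps.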
Intel Graphics Media Accelerator 4500MHD
09 September 2008  08:52 AM
I am sure that this graphics chipset has trouble running certain games fast enough, but what I would like to hear is whether all DX10 features are really supported. I am not interested in speed, just feature support. I don't care whether my simple example runs at 20 or 200 fps; I just want to know if the geometry shader works.
So if anyone has had a chance to run the DX10 examples on this chipset, please tell me what you found. I already found this page:
http://www.notebookcheck.net/IntelGraphicsMediaAccelerator4500MHDGMAX4500MHD.9883.0.html
Thanks in advance,
 Wolfgang
Reconstructing Position from Depth Data
28 August 2008  12:47 PM
Hi,
I am on my quest to figure out the fastest way to reconstruct a position value from depth data. Here is what I know:
1. If you stay in view space and can afford a dedicated buffer for a separate depth value, you can do the following (see the article "Overcoming Deferred Shading Drawbacks" in ShaderX5):
Store the view-space distance of the pixel in a buffer like this:
G_Buffer.z = length(Input.PosInViewSpace);
Then you can retrieve the position in view space from this buffer like this:
vertex shader: Out.vEyeToScreen = float3(Input.ScreenPos.x * ViewAspect, Input.ScreenPos.y, inTanHalfFOV);
pixel shader: float3 PixelPos = normalize(Input.vEyeToScreen) * G_Buffer.z;
This is nice because the cost per light is really low.
If you do not have space to store a dedicated depth buffer just for this, you might have to read the available depth buffer instead (this is now also possible on PC cards). Additionally, this only gives you view space; if you prefer world space, another transform is necessary.
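Here is a Python sketch of that round trip (my own test harness, not the ShaderX5 code; the +z-forward view space and the placement of the tanHalfFOV/aspect factors are my assumptions, so adjust them to your conventions):

```python
import math

def length3(v):
    return math.sqrt(v[0] * v[0] + v[1] * v[1] + v[2] * v[2])

def project_to_screen(pos_view, tan_half_fov, aspect):
    # Forward mapping (only needed to make the example self-contained):
    # view-space position -> screen position in [-1, 1].
    x, y, z = pos_view
    return x / (z * tan_half_fov * aspect), y / (z * tan_half_fov)

def reconstruct_view_pos(x_s, y_s, stored_dist, tan_half_fov, aspect):
    # Per-pixel eye ray, then normalize(ray) * G_Buffer.z as in the
    # pixel shader above (stored_dist plays the role of G_Buffer.z).
    ray = (x_s * tan_half_fov * aspect, y_s * tan_half_fov, 1.0)
    inv_len = 1.0 / length3(ray)
    return tuple(c * inv_len * stored_dist for c in ray)
```

Projecting a view-space point, storing its length, and reconstructing recovers the original position exactly; per light, the pixel-shader cost is just one normalize and one multiply.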
2. Read Depth buffer and reconstruct world space position:
float3 screenPos = float3(PositionXY, gCurrDepth);
float4 worldPos4 = mul(float4(screenPos, 1.0f), WorldViewProjInverse);
worldPos4.xyz /= worldPos4.w;
This is cool as long as you can live with the transform in there, and you only read G-buffer data. I believe I have presented this a few times on this forum.
So now the question: is there anything faster for reconstructing world-space position values from the depth buffer?
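For completeness, a Python sketch of approach 2 (my own harness, not shader code; the view matrix is identity here, so the inverse view-projection is just the inverse of an OpenGL-style projection, and it uses column-vector math rather than HLSL's mul(vector, matrix)):

```python
def mat_vec(m, v):
    # 4x4 matrix times column vector
    return [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]

def invert4(m):
    # Gauss-Jordan inversion of a 4x4 matrix with partial pivoting
    a = [list(m[r]) + [1.0 if r == c else 0.0 for c in range(4)] for r in range(4)]
    for col in range(4):
        piv = max(range(col, 4), key=lambda r: abs(a[r][col]))
        a[col], a[piv] = a[piv], a[col]
        p = a[col][col]
        a[col] = [x / p for x in a[col]]
        for r in range(4):
            if r != col:
                f = a[r][col]
                a[r] = [x - f * y for x, y in zip(a[r], a[col])]
    return [row[4:] for row in a]

def gl_projection(w, h, n, f):
    # An OpenGL-style projection matrix (an assumption for this sketch)
    return [[w, 0.0, 0.0, 0.0],
            [0.0, h, 0.0, 0.0],
            [0.0, 0.0, -(f + n) / (f - n), -(2.0 * f * n) / (f - n)],
            [0.0, 0.0, -1.0, 0.0]]

def project(m, p):
    # position -> NDC (xy in [-1, 1], z in [-1, 1])
    clip = mat_vec(m, list(p) + [1.0])
    return [c / clip[3] for c in clip[:3]]

def unproject(x_ndc, y_ndc, z_ndc, inv_view_proj):
    # The reverse path from the snippet above: transform by the
    # inverse view-projection, then divide by w.
    p = mat_vec(inv_view_proj, [x_ndc, y_ndc, z_ndc, 1.0])
    return [c / p[3] for c in p[:3]]
```

Unprojecting the result of project recovers the original position; the division by w is what makes the perspective mapping invertible.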