

user88

Member Since 17 May 2009
Offline Last Active Jul 16 2014 02:56 AM

Topics I've Started

[DX11] Why do we need an sRGB back buffer?

13 June 2014 - 03:32 AM

Hi,

after reading a couple of resources on the web about gamma correction, I still feel confused.

In my experiment the pixel shader simply outputs a linear gradient to the back buffer.

 - First case: the back buffer format is not sRGB; the linear gradient value is output without any modification:

Attached File  ng.jpg   21.81KB   12 downloads

 - Second case: the back buffer format is sRGB; the linear gradient value is output without any modification:

Attached File  g1.jpg   19.75KB   11 downloads

 - Third case: the back buffer format is sRGB; the linear gradient value is output with a correction of pow(u, 1/2.2):

Attached File  g1div2.2.jpg   18.13KB   11 downloads

 - Fourth case: the back buffer format is sRGB; the linear gradient value is output with a correction of pow(u, 2.2):

Attached File  g2.2.jpg   21.55KB   11 downloads

As you can see, the first and last results are almost the same. So my question is: why do we need an sRGB back buffer plus a correction of the final pixel shader output if we can simply use a non-sRGB back buffer? The result is almost the same:

Attached File  pixcmp.jpg   167.98KB   15 downloads
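To see why the first and fourth cases match so closely, here is a small Python sketch (my own illustration, not part of the original post) comparing the exact piecewise sRGB encoding curve with the pow(u, 1/2.2) approximation, and showing that sRGB-encoding pow(u, 2.2) nearly cancels back to the identity:

```python
# Compare the piecewise sRGB encoding with the simple 1/2.2 gamma
# approximation, and show that srgb_encode(u ** 2.2) is close to u
# (which is why case 1 and case 4 look almost the same).

def srgb_encode(u):
    """Linear -> sRGB, the conversion an sRGB back buffer applies on write."""
    if u <= 0.0031308:
        return 12.92 * u
    return 1.055 * u ** (1.0 / 2.4) - 0.055

for i in range(1, 10):
    u = i / 10.0
    exact = srgb_encode(u)                # what the sRGB back buffer stores
    approx = u ** (1.0 / 2.2)             # the common shader approximation
    round_trip = srgb_encode(u ** 2.2)    # what case 4 effectively displays
    print(f"u={u:.1f}  sRGB={exact:.4f}  pow(1/2.2)={approx:.4f}  "
          f"sRGB(pow(u,2.2))={round_trip:.4f}")
```

The round trip is close to the identity but not exact. More importantly, the final image of a single gradient is not the whole argument: with an sRGB back buffer the hardware blends and filters linear values, and lighting math stays in linear space, which a plain non-sRGB buffer with hand-applied gamma does not give you.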


[DX9] Many lights: registers vs. volume texture

07 February 2014 - 03:35 AM

Hi,

 

I want to expand the maximum number of lights my 3D engine can handle. The engine uses DX9 with forward rendering. The lights are passed to the shader as an array of shader structs stored in constant registers, so register space is the limit on the maximum light count.

Could anybody tell me whether performance will still be good enough if I hold the lights array in a volume texture instead of constant registers?

Thanks for your help!
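For reference, a sketch of the texture-based alternative (my own illustration; the two-texel layout and field names are assumptions, not from the post): each light's fields are packed into consecutive float4 texels, and the shader reconstructs the struct from texture fetches instead of constant registers.

```python
# Pack an array of light structs into float4 texels, the way they would
# be uploaded to a (volume) texture for the shader to fetch.
# Layout (an assumption for illustration): two texels per light,
#   texel 0: position.xyz + range
#   texel 1: color.rgb + intensity

def pack_lights(lights):
    """lights: list of dicts with 'position' (x,y,z), 'range', 'color' (r,g,b), 'intensity'."""
    texels = []
    for l in lights:
        texels.append((*l["position"], l["range"]))
        texels.append((*l["color"], l["intensity"]))
    return texels  # flat list of float4 texels, two per light

lights = [
    {"position": (0.0, 5.0, 0.0), "range": 20.0,
     "color": (1.0, 0.9, 0.8), "intensity": 2.0},
    {"position": (3.0, 1.0, -4.0), "range": 8.0,
     "color": (0.2, 0.4, 1.0), "intensity": 1.0},
]
texels = pack_lights(lights)
# The shader would read light i back with two fetches, at texels 2*i and 2*i+1.
```

The trade-off on DX9 hardware is that the register limit goes away, but every light now costs texture fetches in the shader loop instead of free register reads.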


SlimDX future?

14 March 2013 - 09:31 AM

Hi to all, and especially to the SlimDX project developers.

I want to ask you about the future of SlimDX. As far as I know, the SlimDX project hasn't been updated for two years. So my question is: is SlimDX still an active project? Will any updates be released in the future (e.g. WinRT support)?


[DX11] Does InterlockedMin work incorrectly?

07 December 2012 - 07:03 AM

Hello,

here is a code snippet from a Compute Shader 5.0:
groupshared int3 sMin;
...
InterlockedMin(sMin.x, asint(viewPosition.x));
...
In the snippet above, the groupshared variable holds the minimum position value across the thread group. I noticed that InterlockedMin gives a wrong result when sMin.x == 0 and viewPosition.x is some negative value: after the operation, sMin.x is still 0.

Question: why isn't the negative value taken as the minimum? The documentation says that both the uint and int types are supported by InterlockedMin, so the sign should be taken into account in the int case.

I use Nsight over the network to profile and debug the shaders. The graphics adapter on the target PC is a GeForce GTX 460.

Any help would be appreciated..
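Not from the original post, but a quick way to sanity-check the asint reasoning on the CPU: a Python sketch that reinterprets float bits as a signed 32-bit int the way HLSL asint does.

```python
import struct

def asint(f):
    """Reinterpret a float's bit pattern as a signed 32-bit int, like HLSL asint()."""
    return struct.unpack("<i", struct.pack("<f", f))[0]

# A negative float still compares below zero as a signed int, so
# InterlockedMin(0, asint(negative)) should keep the negative bits:
print(asint(-1.0) < asint(0.0))    # True

# But between two negative floats the ordering is reversed, because the
# larger-magnitude negative float has the larger bit pattern:
print(asint(-2.0) < asint(-1.0))   # False
```

So by the signed-int semantics the documentation describes, 0 should not survive against a negative input; only comparisons between two negative floats misbehave under this encoding. If 0 does survive, it may be worth ruling out causes outside the intrinsic itself, such as how the variable is initialized or whether the operand ends up compiled as uint.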

Plane in view space

30 August 2012 - 06:12 AM

EDIT: I see my last post has a lot of text, and nobody wants to read it all to understand the problem.

So, let me state it one more time:

I have the clipping planes (each defined by an A, B, C, D equation, of course) that I extracted from the view frustum of the camera. These planes are in the world coordinate system. The point I want to clip against these planes is in world-view space. So I transform each plane into world-view space and compute the distance from the point to the plane. If the distance is negative, I have to clip the point, because it lies on the negative side of the plane. In theory this should work, but in practice I get unexpected results.. For more details see the original text of my post below:

Hello,

I have the camera's far clip plane and I want to transform it into view space.

Let the plane (A, B, C, D) be:

0    -1    0    7126.461

View matrix:

 1        0             0             0
 0        0.000976562   -0.9999995    0
 0        0.9999995     0.000976562   0
 -2000    1493.017      7151.461      1

After transformation* of the plane by the matrix I get this result:

0    0    1    -25.0005

I don't understand why the D component of the plane is equal to -25.
If I transform the plane's normal and a point on the plane separately, I get the results I expect. Here is what I mean:

Normal of the plane:
0    -1    0
Transformed** normal of the plane:
0    0    1

Point on the plane:
0    -7126.461    0
Transformed*** point on the plane:
-2000    1493.017    14277.92

I would expect the transformed plane and the plane built from the separately transformed normal and point to be equal. Why are they not?

*
public static void Transform( ref Plane plane, ref Float4x4 temp, out Plane result )
{
	float x = plane.Normal.X;
	float y = plane.Normal.Y;
	float z = plane.Normal.Z;
	float d = plane.D;

	// Multiplying the (x, y, z, d) row vector by the rows of the inverse is
	// equivalent to multiplying it by the inverse transpose, which is how
	// planes transform when points transform as p' = p * M.
	Float4x4 transformation = Float4x4.Invert( temp );
	Plane r;
	r.Normal.X = (x * transformation.M11) + (y * transformation.M12) + (z * transformation.M13) + (d * transformation.M14);
	r.Normal.Y = (x * transformation.M21) + (y * transformation.M22) + (z * transformation.M23) + (d * transformation.M24);
	r.Normal.Z = (x * transformation.M31) + (y * transformation.M32) + (z * transformation.M33) + (d * transformation.M34);
	r.D        = (x * transformation.M41) + (y * transformation.M42) + (z * transformation.M43) + (d * transformation.M44);

	result = r;
}

**
public static Float3 TransformNormal( Float3 normal, Float4x4 transform )
{
	Float3 vector;
	// Direction vectors ignore the translation row of the matrix.
	vector.X = (normal.X * transform.M11) + (normal.Y * transform.M21) + (normal.Z * transform.M31);
	vector.Y = (normal.X * transform.M12) + (normal.Y * transform.M22) + (normal.Z * transform.M32);
	vector.Z = (normal.X * transform.M13) + (normal.Y * transform.M23) + (normal.Z * transform.M33);
	return vector;
}

***
public static Float3 TransformCoordinate( Float3 coord, Float4x4 transform )
{
	Float4 vector;
	vector.X = (coord.X * transform.M11) + (coord.Y * transform.M21) + (coord.Z * transform.M31) + transform.M41;
	vector.Y = (coord.X * transform.M12) + (coord.Y * transform.M22) + (coord.Z * transform.M32) + transform.M42;
	vector.Z = (coord.X * transform.M13) + (coord.Y * transform.M23) + (coord.Z * transform.M33) + transform.M43;
	// Reciprocal of the homogeneous W, so the divide below becomes a multiply.
	vector.W = 1 / ((coord.X * transform.M14) + (coord.Y * transform.M24) + (coord.Z * transform.M34) + transform.M44);
	return new Float3( vector.X * vector.W, vector.Y * vector.W, vector.Z * vector.W );
}
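As a numeric cross-check (my own sketch, not part of the original post), the plane transform P' = P * (M^-1)^T can be re-run in Python with the matrix from above: a point that satisfies the original plane equation must satisfy the transformed plane equation after being transformed as a point. Note that with the plane written as Ax + By + Cz + D = 0, the plane (0, -1, 0, 7126.461) contains points with y = +7126.461.

```python
# Cross-check of the plane transform with the numbers from the post.
# Convention: points are row vectors, p' = p * M; planes transform as
# P' = P * inverse(M)^T, which is what the C# Transform above computes.

def mat_inverse(m):
    """4x4 inverse via Gauss-Jordan elimination with partial pivoting."""
    n = 4
    a = [row[:] + [1.0 if i == j else 0.0 for j in range(n)] for i, row in enumerate(m)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(a[r][col]))
        a[col], a[piv] = a[piv], a[col]
        p = a[col][col]
        a[col] = [v / p for v in a[col]]
        for r in range(n):
            if r != col and a[r][col] != 0.0:
                f = a[r][col]
                a[r] = [v - f * w for v, w in zip(a[r], a[col])]
    return [row[n:] for row in a]

def transform_point(p, m):
    # Row vector times matrix, translation in the fourth row.
    return [sum(p[i] * m[i][j] for i in range(3)) + m[3][j] for j in range(3)]

def transform_plane(plane, m):
    inv = mat_inverse(m)
    # Row vector times inverse transpose == dotting with the rows of the inverse.
    return [sum(plane[i] * inv[j][i] for i in range(4)) for j in range(4)]

view = [[1.0,     0.0,          0.0,         0.0],
        [0.0,     0.000976562, -0.9999995,   0.0],
        [0.0,     0.9999995,    0.000976562, 0.0],
        [-2000.0, 1493.017,     7151.461,    1.0]]

plane = [0.0, -1.0, 0.0, 7126.461]   # -y + 7126.461 = 0
point = [0.0, 7126.461, 0.0]         # on the plane: y = +7126.461

tplane = transform_plane(plane, view)
tpoint = transform_point(point, view)

# The transformed point must lie on the transformed plane:
residual = sum(tplane[i] * tpoint[i] for i in range(3)) + tplane[3]
print(tplane, tpoint, residual)
```

Running this with the rounded matrix values printed in the post gives a residual of effectively zero, so the plane-transform formula itself is self-consistent; any disagreement with the separately transformed normal and point has to come from the inputs fed to those separate transforms.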

