
Member Since 12 Aug 2011
Offline Last Active Sep 18 2012 04:52 PM

Posts I've Made

In Topic: Derelict2 bindings for D

21 April 2012 - 06:51 PM

Problem sorted, thanks to the DMD folk. The &v in glBufferData(GL_ARRAY_BUFFER, v.length * GL_FLOAT.sizeof, &v, GL_STATIC_DRAW); should be &v[0]. In D, &v is the address of the dynamic array's length/pointer pair itself, not the address of its float data.

In Topic: (Tao C#) Help with rendering textured TTF fonts in OGL via SDL_TTF

01 March 2012 - 02:51 PM

It could be because you are using the SDL_Surface of the main screen rather than the "surfPtr" surface that holds the rendered font. surfPtr holds the info about the size of the rendered text and whether it is RGB or RGBA.

By the way, if anyone knows how to access this info from the IntPtr, please tell. In C++ you would use surfPtr->w, and more importantly you can grab surfPtr->format->Amask to read whether the image is RGB or RGBA.

In Topic: Tangent space matrix from Normal

12 August 2011 - 04:44 AM

Assuming the sea is rippling upon a flat horizontal plane, and all the normals responsible for the rippling come from a normal map, a per-vertex tangent-space matrix is not required. All that is needed is to rotate the normal stored in the map 90 degrees about the x axis so that it points up. Basically: pull the normal from the map as a vec3 (GLSL) / float3 (HLSL) called norm, then rotate it into a new vec3 called n as below.

n.x = norm.x; n.y = norm.z; n.z = -norm.y;

Think of it like this: the blue normal map produced in Photoshop is said to be in tangent space. But obviously Photoshop has no knowledge of the model the map will be applied to, so it knows nothing about the model's vertices or how those vertices are UV mapped. Basically, Photoshop has no knowledge of the tangent space it is supposedly defining normals in. Tangent-space normal maps are not really authored in tangent space, they are authored in eye space. They assume (pretend) that all of the model's vertex normals point along (0,0,1), up the z axis (OpenGL), and since the z value is encoded in the blue channel of the map, the maps are mostly blue; the red and green channels encode x and y offsets from the z axis, i.e. the perturbations.

Given that eye coordinates place the camera at (0,0,0) looking down the negative z axis (OpenGL), a normal in the map is like a south-facing wall: the normals point out of the wall toward the camera. So what we need for the sea is to rotate the map 90 degrees so the z normals point up. As you can see from the code above, the y axis (of the standard basis frame in eye space) now takes the z component, meaning the y axis is scaled by the amount of z. The x axis remains unchanged because the normal is being rotated about that axis. And if you rotate the axis frame 90 degrees about the x axis, the former y axis ends up pointing down the negative z axis (OpenGL), so n.z is assigned the negative of the y value; which is to say, the z axis is scaled by the negative of the amount of y.