
I wrote an HLSL vertex shader to perform some lighting on a mesh, but the shading doesn't look right. The shader is a port of a Cg-GL shader, and the application code that passes in the parameters is likewise ported from OpenGL.

Here is the result with the OpenGL/Cg vertex shader (ignore the little ball in front of the model): http://www.rit.edu/~axv4745/muppetsDX/modelSmoothShaded.jpg

Here is the result with the Direct3D/HLSL vertex shader (ignore the little ball inside the model): http://www.rit.edu/~axv4745/muppetsDX/modelFlatShaded.jpg

As you can see, the shading comes out flat with the HLSL code and smooth with the Cg-GL code. Here is the code for the Cg-GL vertex shader:
void ambient( float4 position : POSITION,
              float4 spec     : COLOR0,
              float4 texIn    : TEXCOORD0,
              float4 normal   : NORMAL,
              uniform float4x4 mvp : _GL_MVP,
              uniform float4x4 clipToCube,
              out float4 col    : COLOR0,
              out float4 pos    : POSITION,
              out float4 texOut : TEXCOORD0,
              out float4 envOut : TEXCOORD3 )
{
    float3 norm = mul(glstate.matrix.invtrans.modelview[0], normal).xyz;

    texOut = texIn;
    col    = spec;
    col.a  = 1;

    envOut.xyz = mul((float3x3) clipToCube, norm);
    envOut.w   = 1;

    pos = mul(mvp, position);
}


envOut points to texture unit 3, which is bound to an ambient light map (the one you see in the background of the images above). This is the code snippet I use to bind the ambient light map:

ext->mglActiveTextureARB(GL_TEXTURE3_ARB); // ext is a custom extension-loader class
glEnable(GL_TEXTURE_CUBE_MAP_ARB);
glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);
glBindTexture(GL_TEXTURE_CUBE_MAP_ARB, ambientMap);

The clipToCube variable is initialized in the OpenGL application as follows:

glGetFloatv(GL_TRANSPOSE_MODELVIEW_MATRIX_ARB, mat);
cgGLSetMatrixParameterfc(cg_clipToCube[CG_AMBIENT_VERTEX], mat);

Here is the corresponding code for the HLSL vertex shader:
uniform matrix normalMatrix;
matrix WorldViewProjMatrix;
matrix clipToCube;

struct VS_INPUT
{
    vector position   : POSITION;
    float2 texIn      : TEXCOORD0;
    float3 normal     : NORMAL;
    vector ambientCol : COLOR;
};

struct VS_OUTPUT
{
    vector position : POSITION;
    vector colOut   : COLOR;
    float2 texOut   : TEXCOORD0;
    float3 envOut   : TEXCOORD1;
};

VS_OUTPUT Main(VS_INPUT input)
{
    // zero out members of output
    VS_OUTPUT output = (VS_OUTPUT)0;

    output.position = mul(input.position, WorldViewProjMatrix);

    float3 norm = mul(input.normal, (float3x3)normalMatrix);

    output.colOut   = input.ambientCol;
    output.colOut.w = 1.0;

    output.texOut     = input.texIn;
    output.envOut.xyz = mul(norm, (float3x3)clipToCube);

    return output;
}


envOut points to texture unit 1, which is bound to an ambient light map (the one you see in the background of the images above). This is the code snippet I use to bind the ambient light map:

deviceHandle->SetTexture(1, ambientMap);
deviceHandle->SetTextureStageState(1, D3DTSS_COLOROP,   D3DTOP_MODULATE);
deviceHandle->SetTextureStageState(1, D3DTSS_COLORARG1, D3DTA_TEXTURE);
deviceHandle->SetTextureStageState(1, D3DTSS_COLORARG2, D3DTA_CURRENT);

The clipToCube, normalMatrix, and WorldViewProjMatrix variables are initialized in the application as follows:
D3DXMATRIX w, v, p, wv;
deviceHandle->GetTransform(D3DTS_WORLD,      &w);
deviceHandle->GetTransform(D3DTS_VIEW,       &v);
deviceHandle->GetTransform(D3DTS_PROJECTION, &p);
wv = w * v;

// set World-View-Projection variable
avConstTable->SetMatrix(deviceHandle, avWVPMatHandle, &(wv * p));

// set clipToCube variable
D3DXMATRIX mat;
D3DXMatrixTranspose(&mat, &wv);
avConstTable->SetMatrix(deviceHandle, avCToCHandle, &mat);

// set normal matrix: inverse transpose of the world-view matrix
// (the shader then uses its upper-left 3x3)
float det;  // D3DXMatrixInverse's second parameter is an output; it may also be NULL
D3DXMatrixInverse(&mat, &det, &wv);
D3DXMatrixTranspose(&mat, &mat);
avConstTable->SetMatrix(deviceHandle, avNormMatHandle, &mat);


As far as I can see, I'm doing pretty much the same thing in my Direct3D code as in the OpenGL code, and yet it looks smooth in GL and flat in Direct3D. Does anyone see anything wrong, or have any thoughts on what might be going wrong? Thanks.
