# jkristia

1. ## thick line - fixed width ?

>>If you set the transformation to identity on vertex shader (effectively ...

Ahh, didn't think of that. I will give this a try tonight.
2. ## thick line - fixed width ?

>>These steps can easily be implemented in pure software, too, in case you don't target a platform with GS capabilities.

This is kind of what I'm trying to do. For now I'm just trying to figure out how to do it, and I'm recreating and recalculating the rectangles on each frame update. Once it is working, I might try to do the same in the shader.

I now have the widened line with an 'almost' constant width regardless of the zoom (camera distance). Notice the line thickness is almost the same when zoomed out and in. [attachment=18311:thickline_1.png] [attachment=18312:thickline_2.png]

But I have not yet figured out how to 'cancel' the scaling caused by the perspective projection. [attachment=18313:thickline_3.png]
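To cancel that perspective scaling, each end of the line can be widened in proportion to its distance from the camera, since the visible height at distance d in a perspective view is 2·d·tan(fovY/2). Here is a minimal sketch of the math, in Python for illustration (the project itself is C#/SharpDX; `fov_y` and `viewport_height` are assumed example values):

```python
import math

def world_units_per_pixel(dist, fov_y, viewport_height):
    """World-space size of one pixel at a given view-space distance.

    The visible height at distance d is 2*d*tan(fovY/2); dividing by the
    viewport height in pixels gives the world-space size of a single pixel.
    """
    return 2.0 * dist * math.tan(fov_y / 2.0) / viewport_height

def half_width(dist, pixels, fov_y=math.pi / 4, viewport_height=600):
    # Half-width in world units needed for a line 'pixels' wide on screen.
    return 0.5 * pixels * world_units_per_pixel(dist, fov_y, viewport_height)

# A vertex twice as far from the camera needs twice the world-space width
# to cover the same number of pixels, which is exactly the widening that
# cancels the perspective shrink.
w_near = half_width(10.0, 5)
w_far = half_width(20.0, 5)
```

Because the width grows linearly with distance, the perspective divide shrinks it back by the same factor, leaving a constant on-screen thickness.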
3. ## thick line - fixed width ?

Yes, I am transforming the points into a rectangle with the width. That part is working. The parts that I cannot get working are:

- keeping the width constant, so the far point appears the same thickness as the near point. This is where I thought I just needed to transform the rectangle's vertices using the inverse projection matrix.
- calculating how wide the far end and near end need to be if I want the line to appear e.g. 10 pixels wide, near and far, regardless of the zoom.

Hmm, thinking of it, I think I need to use the inverse projection when calculating the width. I will try that tonight when I have more time to play.
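The rectangle-building step mentioned above can be sketched as follows: the sideways offset direction is the cross product of the line direction and the direction toward the camera, which is what keeps the quad facing the viewer. This is an illustrative Python version (the project itself is C#; `line_quad` and its argument names are hypothetical):

```python
import math

def sub(a, b):
    return [a[i] - b[i] for i in range(3)]

def cross(a, b):
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]

def norm(v):
    l = math.sqrt(sum(c * c for c in v))
    return [c / l for c in v]

def line_quad(p0, p1, eye, half_w):
    """Expand segment p0->p1 into a camera-facing quad 2*half_w wide."""
    line_dir = norm(sub(p1, p0))
    view_dir = norm(sub(eye, p0))           # toward the camera
    side = norm(cross(line_dir, view_dir))  # perpendicular to both
    off = [c * half_w for c in side]
    return [sub(p0, off),
            [p0[i] + off[i] for i in range(3)],
            [p1[i] + off[i] for i in range(3)],
            sub(p1, off)]
```

Using a different `half_w` at each end (computed from each endpoint's camera distance) is what makes the near and far ends come out the same pixel width.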
4. ## thick line - fixed width ?

I'm trying to create a thick line class, and I have it working to the point where the line is always facing the camera and the line width is a fixed, hard-coded unit width, e.g. 0.1f.

What I'm struggling with at the moment is:

1. to have the line width fixed regardless of zoom level, e.g. I would like it to be 5 pixels wide whether I have zoomed in or out.
2. to have the fixed width at both ends of the thick line in perspective view.

For #2 I thought it was as simple as using the inverse projection matrix, but ...

Any suggestions on how to attack this are much appreciated.
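One common way to get both properties at once is to do the widening in clip space rather than world space: offset each projected endpoint sideways by the desired pixel amount scaled by that endpoint's own w, so the later perspective divide cancels exactly. A sketch of the idea in Python (illustrative; `px_offset` and `viewport_w` are assumed names, and the project itself is C#/HLSL):

```python
def offset_clip(clip_pos, px_offset, viewport_w):
    """Offset a clip-space position sideways by a fixed pixel amount.

    NDC spans 2 units across viewport_w pixels. Multiplying the NDC offset
    by w means the offset survives the perspective divide unchanged, so
    near and far endpoints move by the same number of pixels on screen.
    """
    x, y, z, w = clip_pos
    ndc_per_px = 2.0 / viewport_w
    return (x + px_offset * ndc_per_px * w, y, z, w)

# Same 5-pixel offset applied at w=1 (near) and w=10 (far):
near = offset_clip((0.0, 0.0, 0.0, 1.0), 5, 800)
far = offset_clip((0.0, 0.0, 0.0, 10.0), 5, 800)
```

After dividing by w, both endpoints land the same distance from their original screen position, which is the constant pixel width being asked for here.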
5. ## Constant buffer

I just started with DirectX recently (SharpDX) and I had problems getting the constant buffer working too. Here is a snippet of my C# code which works (remember to update your buffer using UpdateSubresource - that is where I had the problem):

```csharp
[StructLayout(LayoutKind.Sequential, Pack = 4)]
struct cbTransform
{
    public Matrix worldProj;
}
...
cbTransform m_cbTransform = new cbTransform();
Buffer m_transformBuf;
m_transformBuf = new Buffer(rc.Device, Utilities.SizeOf<cbTransform>(),
    ResourceUsage.Default, BindFlags.ConstantBuffer,
    CpuAccessFlags.None, ResourceOptionFlags.None, 0);
rc.Context.VertexShader.SetConstantBuffer(0, m_transformBuf);
...
m_cbTransform.worldProj = /*Matrix.RotationX(time) **/ Matrix.RotationY(time * 2)/* * Matrix.RotationZ(time * .7f)*/ * m_viewProj;
m_cbTransform.worldProj.Transpose();
rc.Context.UpdateSubresource<cbTransform>(ref m_cbTransform, m_transformBuf);
```
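One detail worth keeping in mind with constant buffers: D3D11 requires the buffer size to be a multiple of 16 bytes, which is easy to trip over once the struct grows beyond a single matrix. A tiny sketch of the rounding rule (Python, purely illustrative; the helper name is made up):

```python
def cbuffer_size(raw_size):
    """Round a struct size up to the 16-byte multiple D3D11 requires
    for constant buffer ByteWidth."""
    return (raw_size + 15) & ~15

# A single 4x4 float matrix (64 bytes) already satisfies the rule;
# e.g. adding a lone float3 to a struct would need padding.
size_matrix = cbuffer_size(64)
size_padded = cbuffer_size(76)
```

This is why shader-side cbuffers in this thread carry explicit `filler` members: the CPU struct and the HLSL layout have to agree on the padded size.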
6. ## SharpDX, how to set constant buffer when using effect / technique

Thank you for the information.
7. ## problem with basic light

Argh - last night I had time to debug this, and I found my silly !@#$% mistake. At some point I had changed the InputLayout, and by mistake (ARGH!!) I had the Normal offset = 0, the same as the position, which of course is why it behaved so strangely. Once fixed, basic diffuse light works as expected, regardless of the object's position (of course). Guess I won't make that mistake again.

[attachment=18172:img.png]
8. ## SharpDX, how to set constant buffer when using effect / technique

I want to change to using effects, and I have it working now, but I was not able to figure out how to set the constant buffers when using effects, so these 2 update calls

```csharp
rc.Context.UpdateSubresource<cbObjStruct>(ref m_cbObj, m_perObjcBuf);
rc.Context.UpdateSubresource<cbTransform>(ref m_cbTransform, m_transformcBuf);
```

turn into this instead (ignore the extra transpose). Is there an easier way to update the constant buffer than setting it per variable? (Obviously I will optimize it to a per-frame and per-object update.)

```csharp
void Update(RenderContext rc)
{
    Matrix m1 = m_cbTransform.gWorld;
    m1.Transpose();
    EffectMatrixVariable m = m_effect.GetVariableByName("gWorld").AsMatrix();
    m.SetMatrix(m1);

    m1 = m_cbTransform.gView;
    m1.Transpose();
    m = m_effect.GetVariableByName("gView").AsMatrix();
    m.SetMatrix(m1);

    m1 = m_cbTransform.gViewProj;
    m1.Transpose();
    m = m_effect.GetVariableByName("gViewProj").AsMatrix();
    m.SetMatrix(m1);

    EffectVectorVariable v = m_effect.GetVariableByName("objColor").AsVector();
    v.Set(m_cbObj.ObjColor);

    m1 = m_cbObj.ObjWorld;
    m1.Transpose();
    m = m_effect.GetVariableByName("objWorld").AsMatrix();
    m.SetMatrix(m1);

    EffectPass renderpass = m_currentRender.GetPassByIndex(0);
    renderpass.Apply(rc.Context);
}
```
9. ## problem with basic light

[attachment=18130:img3.png]

This is what it looks like if I create the vertices in model space and let the vertex shader translate the vertices and normals into world space. I still don't understand why it doesn't work if I create them directly in world space.
10. ## problem with basic light

Hmm, if I keep the vertices in local space and let the vertex shader do the transformation, instead of transforming them into world space at the time of creating the geometry, then it seems to work.

I don't understand this.

If I calculate a normal from a triangle by doing

vector0 = normalized(Vertex0 - Vertex1)
vector1 = normalized(Vertex0 - Vertex2)
N = cross(vector0, vector1)

then why does it matter whether the vertices are in local space or world space, as long as I only apply a translation, e.g. offset the x value by a given amount? The normal should still be the same - shouldn't it?
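The reasoning in this post is actually correct: a pure translation cancels out in the edge differences, so the cross-product normal is identical in local and world space - which suggests the real bug was somewhere else entirely. A quick numeric check of the formula above (Python for illustration; the project itself is C#):

```python
import math

def normal_from_triangle(v0, v1, v2):
    """Face normal via the cross product of two edge vectors,
    exactly as described in the post."""
    e0 = [v0[i] - v1[i] for i in range(3)]
    e1 = [v0[i] - v2[i] for i in range(3)]
    n = [e0[1] * e1[2] - e0[2] * e1[1],
         e0[2] * e1[0] - e0[0] * e1[2],
         e0[0] * e1[1] - e0[1] * e1[0]]
    l = math.sqrt(sum(c * c for c in n))
    return [c / l for c in n]

tri = [[0, 0, 0], [1, 0, 0], [0, 1, 0]]
offset = [5.0, 0.0, 0.0]  # translate every vertex by the same amount
moved = [[v[i] + offset[i] for i in range(3)] for v in tri]

n_local = normal_from_triangle(*tri)    # normal before translation
n_world = normal_from_triangle(*moved)  # normal after translation
```

The translation appears in both vertices of each edge difference and subtracts away, so `n_local` and `n_world` come out identical.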
11. ## problem with basic light

>>EDIT. Remember to multiply the normal by the world matrix.

Hmm, I was thinking that since I generate the normals after the vertices have been translated to world space, I would not have to transform them, but let me give that a try.

I remember downloading FX Composer at some point, but haven't tried it yet (still very new to 3D and DirectX). I will give that a try - it might be easier than shooting in the dark as I'm kind of doing now.
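For reference, multiplying the normal by the world matrix (with w = 0) is fine for translation, rotation, and uniform scale, but under non-uniform scale the normal needs the inverse-transpose of the world matrix to stay perpendicular to the surface. A small numeric check (Python, illustrative; the matrices are hand-picked assumptions, not from the project):

```python
def matvec(m, v):
    # Multiply a 3x3 matrix (list of rows) by a 3-vector.
    return [sum(m[r][c] * v[c] for c in range(3)) for r in range(3)]

# Non-uniform scale: y is stretched by 2.
M = [[1, 0, 0], [0, 2, 0], [0, 0, 1]]
# Inverse-transpose of M (for a diagonal matrix: reciprocal diagonal).
M_inv_T = [[1, 0, 0], [0, 0.5, 0], [0, 0, 1]]

tangent = [1, 1, 0]   # direction lying in a 45-degree surface
normal = [1, -1, 0]   # perpendicular to it: dot(tangent, normal) == 0

t_world = matvec(M, tangent)
n_wrong = matvec(M, normal)        # transformed like a position direction
n_right = matvec(M_inv_T, normal)  # transformed with the inverse-transpose

dot_wrong = sum(a * b for a, b in zip(t_world, n_wrong))  # no longer 0
dot_right = sum(a * b for a, b in zip(t_world, n_right))  # stays 0
```

Since the scenes in this thread only translate the geometry, the world matrix and its inverse-transpose coincide for the normal, which is consistent with the normals themselves being fine here.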
12. ## problem with basic light

I think the position becomes the light vector. E.g. if I place the light at 1,1,1, then isn't the normalised light vector also norm(1,1,1)? - or did I get that wrong?

One thing that tells me it is a problem with the normal is that if I move the cylinder to the center (no translation), then it is lit as expected regardless of where I place the camera.

Edit: This is where I change from position to vector, I just didn't name it correctly.

float4 lightpos = float4(gDirectionalLightPos, 0);
13. ## problem with basic light

Hi, I have been working on getting basic lighting working for the last several evenings, but I'm not able to get the correct behavior, and I would like some help understanding what I'm doing wrong.

The object's vertices are transformed to world space at the time of creating the geometry, so positions are in world space. The normals are calculated from the positions, which I think should be correct.

Here is my simple shader code:

```hlsl
cbuffer cbTransform : register(b0)
{
    float4x4 gWorld;
    float4x4 gView;
    float4x4 gViewProj;
    float3 gEyePos;
    float filler;
}

cbuffer cbObjColor : register(b1)
{
    float4 objColor;
    float4x4 objInvTransform;
}

struct VS_IN
{
    float4 pos : POSITION;
    float4 normal : NORMAL;
};

struct PS_IN
{
    float4 posH : SV_POSITION;
    float4 posWorld : POSITION;
    float4 col : COLOR;
    float4 normal : NORMAL;
};

PS_IN VS(VS_IN input)
{
    PS_IN output = (PS_IN)0;
    output.posWorld = input.pos;
    output.posH = mul(input.pos, gViewProj);
    input.normal.w = 0;
    output.normal = input.normal;
    output.normal.w = 0;
    output.col = objColor;
    return output;
}

float4 PS(PS_IN input) : SV_Target
{
    float3 gDirectionalLightPos = float3(1, 1, -1);
    float4 gAmbColor = input.col; // ambient color
    float gAmbIntensity = 0.0;

    float4 inputnormal = input.normal;
    float4 inputpos = input.posWorld;
    float4 inputcolor = input.col;
    float4 lightpos = float4(gDirectionalLightPos, 0);

    float4 n = normalize(inputnormal);
    float4 L = normalize(lightpos);

    // ambient
    float4 cAmb = saturate(gAmbColor * gAmbIntensity);

    // diffuse component
    float4 sDiff = float4(1, 1, 1, 1); // color of diffuse light
    float4 mDiff = inputcolor;         // color of the object
    float4 cDiffuse = saturate((sDiff * mDiff) * clamp(dot(n, L), 0, 1));

    return saturate(cAmb + cDiffuse);
    //return input.col;
}
```

The problem is that when the camera is at 1,1,-1 the cylinder is not lit

[attachment=18103:img1.png]

but when moved to -1,1,-1 the cylinder is lit, though it is lit 'almost' evenly from all sides

[attachment=18104:img2.png]

Can anyone spot my mistake? Because I can't. Any help is very much appreciated.
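One property of the shader above worth noting: the diffuse term depends only on the normal and the light direction, never on the camera, so lighting that changes when the camera moves points at bad normals reaching the pixel shader rather than at the lighting math itself. The same term, sketched in Python for illustration (the shader is HLSL):

```python
import math

def normalize(v):
    l = math.sqrt(sum(c * c for c in v))
    return [c / l for c in v]

def lambert(normal, light_dir):
    """Diffuse factor clamp(dot(N, L), 0, 1), as in the PS above.
    The camera/eye position never appears, so moving the camera
    must not change diffuse lighting."""
    n = normalize(normal)
    l = normalize(light_dir)
    d = sum(a * b for a, b in zip(n, l))
    return max(0.0, min(1.0, d))

lit = lambert([0, 1, 0], [1, 1, -1])    # surface facing up, light above
unlit = lambert([0, -1, 0], [1, 1, -1]) # facing away -> clamped to 0
```

Only a specular term would involve the eye position, and there is none here.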
14. ## problem defining the VS input struct

>>As an aside: You usually don't need a Vector4 for position...

Ah, good information. Thank you.
15. ## problem defining the VS input struct

Argh... that is it. I forgot to change the size when setting the vertex buffer - I was still using the other vertex struct I had defined as a test.

```csharp
rc.Context.InputAssembler.SetVertexBuffers(0,
    new VertexBufferBinding(m_vertexBuffer, Utilities.SizeOf<Vertex>(), 0));
```
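This is the classic stride mismatch: the stride passed to SetVertexBuffers must equal the size of the vertex struct actually stored in the buffer, or every vertex after the first is read at the wrong offset. A sketch of computing a stride for a hypothetical layout (Python, illustrative; the field list is an assumption, not the actual Vertex struct in the project):

```python
import struct

# Hypothetical vertex layout: float3 position + float3 normal + float4 color.
# The stride handed to the input assembler must match this byte size,
# which is what Utilities.SizeOf<Vertex>() guarantees on the C# side.
VERTEX_FORMAT = "3f 3f 4f"
stride = struct.calcsize(VERTEX_FORMAT)
```

Deriving the stride from the struct itself, rather than hard-coding a number, is what prevents this bug from returning when the vertex layout changes.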