ManIkWeet

Member
  • Content Count: 8
  • Joined
  • Last visited

Community Reputation: 122 Neutral

About ManIkWeet

  • Rank: Newbie
  1. Interesting, as the DeviceCreationFlags.Debug flag works fine for me on Windows 10... Maybe it's driver-related?
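     For reference, a minimal sketch of what enabling the debug flag looks like with SharpDX (device creation only; on Windows 10 the debug layer additionally requires the "Graphics Tools" optional feature / SDK layers to be installed, otherwise creation fails):

         using SharpDX.Direct3D;
         using SharpDX.Direct3D11;

         // Hardware device with the D3D11 debug layer enabled.
         var device = new Device(DriverType.Hardware, DeviceCreationFlags.Debug);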
  2. Surely there must be some way to get a file read from the disk into a stream of bytes, right? If you can get the URI of the file, this might work:

         Uri uri = new Uri("IDon'tKnowWhatever");
         var stream = System.Windows.Application.GetResourceStream(uri).Stream;
         var shaderByteCode = ShaderBytecode.FromStream(stream);
         Utilities.Dispose(ref stream);
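     If the shader sits on disk as a plain file rather than an application resource, an ordinary file stream should also work; a minimal sketch assuming SharpDX.D3DCompiler (the file name is hypothetical):

         using System.IO;
         using SharpDX.D3DCompiler;

         // "shader.fxo" is a hypothetical path to a precompiled shader on disk.
         using (var stream = File.OpenRead("shader.fxo"))
         {
             var shaderByteCode = ShaderBytecode.FromStream(stream);
         }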
  3. You could look at how Skyrim does fireballs. A single textured billboard should work just fine, really (billboards are meant to be rotated toward the camera). You might have to do something fancier if you want the fireball to have a tail...
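     A sketch of the billboard part, assuming SharpDX (the positions are hypothetical): Matrix.BillboardLH builds a world matrix that keeps the quad rotated toward the camera.

         using SharpDX;

         Vector3 fireballPos = new Vector3(0, 5, 0);    // hypothetical fireball position
         Vector3 cameraPos = new Vector3(0, 2, -10);    // hypothetical camera position
         Vector3 cameraForward = Vector3.Normalize(fireballPos - cameraPos);

         // World matrix that keeps the quad facing the camera (left-handed).
         Matrix world = Matrix.BillboardLH(fireballPos, cameraPos, Vector3.UnitY, cameraForward);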
  4. Hello all, the idea seems very simple: while looking at the center of a given object/scene/mesh/cloud-of-points, and given a rotation that the camera needs to have, calculate the distance from the center that perfectly fits the object on the screen.

     However, this idea is apparently really tough to implement properly:
     - The object can be of any width/height/length.
     - The screen can be of any width/height.
     - The camera is perspective, meaning that FOV somehow also plays a role in the equation.
     - The center might not necessarily be the center anymore when the camera has moved, as things can "stick out".

     I have all the data you could possibly ever want/need:
     - All the vertices in the scene, in world space
     - Screen width/height
     - FOV of the camera
     - All current camera matrices, that is view/projection
     - Desired yaw/pitch of the new camera position

     Things I managed to do:
     - Axis-aligned bounding box of the object (blue)
     - Bounding box aligned to the desired camera angle (red)
     - Bounding box aligned to the desired camera angle, then re-aligned to the world axes (yellow)
     - The size of the object when translated to the current camera position (green)
     - Failed attempt to apply a projection matrix to the yellow box so that the sides would be sloped (cyan)

     Screenshots of the bounding boxes when I want the camera to be rotated to the top/front/left of the object:
     http://puu.sh/gXGON.jpg
     http://puu.sh/gXGR3.jpg
     http://puu.sh/gXGV2.png

     I figured I wanted bounding boxes for the job because then it's easy to fill in the formulas I found here: http://stackoverflow.com/questions/2866350/move-camera-to-fit-3d-scene

     Are any of my box approaches in the right direction? Am I forgetting something insanely important? Is it even possible to do? Can I make sure the cyan box gets sloped depending on FOV?
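     For reference, a minimal sketch of the bounding-sphere variant of the formula from the linked Stack Overflow answer. A sphere sidesteps the "sticking out" problem by construction, since it looks the same from every angle, at the cost of a looser fit than a box; this sketch only fits the vertical FOV and ignores the horizontal fit for brevity.

         using System;
         using SharpDX;

         static class CameraFit
         {
             // Distance from the sphere's center at which a sphere of the given
             // radius exactly fills the vertical field of view (fovY, in radians).
             public static float FitDistance(float radius, float fovY)
             {
                 return radius / (float)Math.Sin(fovY / 2.0);
             }

             // Place the camera along the desired view direction, looking at the center.
             public static Vector3 FitPosition(Vector3 center, Vector3 viewDirection,
                                               float radius, float fovY)
             {
                 return center - Vector3.Normalize(viewDirection) * FitDistance(radius, fovY);
             }
         }

     To also fit horizontally, the same formula can be evaluated with the horizontal FOV (derived from the aspect ratio) and the larger of the two distances taken.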
  5. Quote: "Bah! I missed that. My brain wasn't working (it's intermittent at best). There are a lot of moving parts here, and I've lost the bubble. Have you tried (in the VS or on the CPU) multiplying the input normal by the inverse-transpose of the instance matrix, passing that normal to the pixel shader, and normalizing it in the PS?"

     I did try that a while ago; it made a difference, but not in any desired way.

     The pixel shader that you directed me to actually works now; all that had to be done was remove the normal-flipper, actually making it even more efficient. It's up to me to decide whether it's quicker than my geometry-shader solution, though. At the least it's more predictable when it comes to the actual direction of light.
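     For reference, the inverse-transpose mentioned above could be computed on the CPU like this (a sketch assuming SharpDX; note it only differs from the instance matrix itself when the matrix contains non-uniform scale or shear):

         using SharpDX;

         // instanceMatrix is the per-instance world matrix from the posts above.
         Matrix normalMatrix = Matrix.Transpose(Matrix.Invert(instanceMatrix));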
  6. Quote: "I hope I understand the problem. Are you saying that for a single triangle, one vertex may have a 'front-face' normal, and another vertex in that same triangle may have a 'back-face' normal? If so, you can in any case calculate the normal in the pixel shader (eliminating the geometry shader), and flip that normal if the triangle is back-facing. I.e., don't even pass the normal out of the vertex shader. In the vertex shader, send the vertex world position to the pixel shader. It appears the world matrix is assumed to be identity, so just pass it through:

         output.worldPos = input.pos; // you don't need output.norm or viewPos

     In the pixel shader, calculate the normal:

         float3 normal = normalize(cross(ddx(input.worldPos), ddy(input.worldPos))); // works even when normals are not supplied!
         if (front == false) normal = -normal; // flip it
         float3 lightDirection = normalize(-LightDirection); // assuming LightDirection is the 'true' direction in world space
         // now both normal and lightDirection are in world space
         // continue with your dots and colors using the local normal and lightDirection variables"

     I tried as you suggested, resulting in the following shader code:

         struct VS_IN
         {
             float4 pos : POSITION;
             float3 norm : NORMAL;
             matrix instance : INSTANCEMATRIX;
             float4 color : INSTANCECOLOR;
         };

         struct PS_IN
         {
             float4 pos : SV_POSITION;
             float3 worldPos : TEXCOORD0;
             float3 norm : NORMAL;
             float4 color : COLOR;
         };

         cbuffer viewProj : register(b0)
         {
             matrix viewProj;
         }

         cbuffer view : register(b1)
         {
             matrix view;
         }

         cbuffer lights : register(b0)
         {
             float4 Light1Color;
             float3 Light1Direction;
             float4 Light2Color;
             float3 Light2Direction;
             float4 Light3Color;
             float3 Light3Direction;
         }

         PS_IN VS(VS_IN input)
         {
             PS_IN output = (PS_IN)0;
             output.pos = mul(mul(input.pos, input.instance), viewProj);
             output.worldPos = mul(input.pos, input.instance).xyz; // .xyz avoids the implicit float4-to-float3 truncation
             output.color = input.color;
             return output;
         };

         float4 PS(PS_IN input, bool front : SV_IsFrontFace) : SV_Target
         {
             float3 normal = normalize(cross(ddx(input.worldPos), ddy(input.worldPos))); // works even when normals are not supplied!
             if (front == false) normal = -normal; // flip it
             float3 aLight1Direction = normalize(-Light1Direction); // assuming LightDirection is the "true" direction in world space
             float3 aLight2Direction = normalize(-Light2Direction);
             float3 aLight3Direction = normalize(-Light3Direction);
             // now both normal and lightDirection are in world space
             float4 newColor = (input.color * Light1Color * max(0, dot(normal, aLight1Direction)))
                             + (input.color * Light2Color * max(0, dot(normal, aLight2Direction)))
                             + (input.color * Light3Color * max(0, dot(normal, aLight3Direction)));
             newColor.a = input.color.a; // we don't want to affect the alpha level with lights
             return newColor;
         };

         technique10 Render
         {
             pass P0
             {
                 SetVertexShader(CompileShader(vs_4_0, VS()));
                 SetPixelShader(CompileShader(ps_4_0, PS()));
             }
         }

     With my program providing the lights as world-space vectors, the result is the following (see the images below). I am pretty sure WPF is doing magic: I am calculating the normals on the CPU, but they become wrong because the vertex order is messed up too. I am using the exact same vertex/index arrays as WPF is using, just converted to my own classes.
     Images of the magic mesh:
     - Multiplying the CPU-calculated normal by the world matrix and the view matrix, with no pixel-shader-based normal flipping
     - Using pixel-shader-based normal flipping via SV_IsFrontFace (apparently that value doesn't change on negative-scale matrices)
     - Using geometry-shader normal calculation without pixel-shader SV_IsFrontFace
     - Using both geometry-shader normal generation and pixel-shader SV_IsFrontFace normal flipping: the desired, but heavy-on-GPU, output
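     One CPU-side way to detect the negative-scale case observed above (a sketch under assumptions, not tested against WPF's output): a matrix with negative scale mirrors the geometry, which flips the screen-space triangle winding that front/back-face classification is based on, and that shows up as a negative determinant.

         using SharpDX;

         Matrix instanceMatrix = Matrix.Scaling(-1, 1, 1); // hypothetical mirrored instance

         // A negative determinant means the matrix mirrors the geometry,
         // flipping the winding order the rasterizer sees.
         bool windingFlipped = instanceMatrix.Determinant() < 0f;

     An instance flagged this way could be drawn with the opposite cull mode, or have its normal sign corrected in a per-instance constant.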
  7. [SharpDX] Draw UI elements to screen as quads

     What I did for a GUI was just define a quad in screen size (vertices going from (0,0,1) to whatever size you want your element to be, keeping z = 1) and have a matrix decide where the quad goes on the GPU. I then send an orthographic matrix to the GPU as the view-projection (Matrix.OrthoOffCenterLH(0, screenwidth, screenheight, 0, 1f, 2f)) and I send a translation matrix (the model/world matrix, if you want) to the GPU.

     Then a very simple GUI shader:

         PS_IN VS(VS_IN input)
         {
             PS_IN output = (PS_IN)0;
             output.pos = mul(mul(input.pos, world), viewProj);
             output.tex = input.tex;
             return output;
         };

         float4 PS(PS_IN input) : SV_Target
         {
             return image.Sample(imageSampler, input.tex);
         };
  8. Hi all, I figured this would be the best place to ask this...

     I am working on a CAD viewer that receives models that are out of my control. The CAD viewer is a replacement for the 3D viewer in an existing WPF application.

     WPF does a load of magic when it comes to normals/vertices/lighting, which means that the models I am getting are not correct in any way. Faces (triangles) that are on the same side of an object and facing the same way have different vertex orientations. Matrices that I receive can have negative scale values, meaning the vertex order is once again flipped around. Those are just a few of the issues.

     Currently, the way I am tackling this to get acceptable results is the following (a CPU-side sketch of the first two steps follows the shader listing below):
     1. Set the cull mode to none.
     2. Transform the light direction by the view matrix and then send it to the GPU.
     3. In the vertex shader, transform the vertex position once with just the world-view matrix and again with the world-view-projection matrix.
     4. In the geometry shader, calculate the per-face normal using the position transformed by the world-view matrix, and assign this normal to the 3 vertices that make up the face.
     5. In the pixel shader, (ab)use the SV_IsFrontFace variable to determine whether the vertex normal (which really is the face normal) should be negated. Then multiply the input color by the light color, and multiply that by the dot product of the normal and the light direction.
     6. In the pixel shader, add a float4(0.1, 0.1, 0.1, 1) to the output color to compensate for a brightness loss that shouldn't happen.

     The "acceptable" part of the results leans on the fact that the light direction I give the GPU has influence, just not the expected influence.

     My question is the following: is there a better way to tackle this? Like not calculating normals in the geometry shader, or not using SV_IsFrontFace in the pixel shader. Perhaps a way that I don't have to transform the light's direction by the view matrix?
     Full shader code as a reference:

         struct VS_IN
         {
             float4 pos : POSITION;
             float3 norm : NORMAL;
             matrix instance : INSTANCEMATRIX;
             float4 color : INSTANCECOLOR;
         };

         struct PS_IN
         {
             float4 pos : SV_POSITION;
             float4 viewPos : TEXCOORD0;
             float3 norm : NORMAL;
             float4 color : COLOR;
         };

         cbuffer viewProj : register(b0)
         {
             matrix viewProj;
         }

         cbuffer view : register(b1)
         {
             matrix view;
         }

         cbuffer lights : register(b0)
         {
             float4 Light1Color;
             float3 Light1Direction;
         }

         PS_IN VS(VS_IN input)
         {
             PS_IN output = (PS_IN)0;
             output.pos = mul(mul(input.pos, input.instance), viewProj);
             output.viewPos = mul(mul(input.pos, input.instance), view);
             output.color = input.color;
             return output;
         };

         [maxvertexcount(3)]
         void FlipFaceGS(triangle PS_IN input[3], inout TriangleStream<PS_IN> OutputStream)
         {
             PS_IN v1 = input[0];
             PS_IN v2 = input[1];
             PS_IN v3 = input[2];
             float3 faceEdgeA = v2.viewPos.xyz - v1.viewPos.xyz; // .xyz avoids implicit truncation
             float3 faceEdgeB = v3.viewPos.xyz - v1.viewPos.xyz;
             float3 faceNormal = normalize(cross(faceEdgeA, faceEdgeB));
             v1.norm = faceNormal;
             v2.norm = faceNormal;
             v3.norm = faceNormal;
             OutputStream.Append(v1);
             OutputStream.Append(v2);
             OutputStream.Append(v3);
             OutputStream.RestartStrip();
         }

         float4 PS(PS_IN input, bool front : SV_IsFrontFace) : SV_Target
         {
             if (front)
             {
                 float4 newColor = input.color * Light1Color * saturate(dot(input.norm, Light1Direction));
                 newColor += float4(0.1, 0.1, 0.1, 1);
                 newColor.a = input.color.a; // we don't want to affect the alpha level with lights
                 return newColor;
             }
             else
             {
                 float4 newColor = input.color * Light1Color * saturate(dot(-input.norm, Light1Direction));
                 newColor += float4(0.1, 0.1, 0.1, 1);
                 newColor.a = input.color.a; // we don't want to affect the alpha level with lights
                 return newColor;
             }
         };

         technique10 Render
         {
             pass P0
             {
                 SetVertexShader(CompileShader(vs_4_0, VS()));
                 SetGeometryShader(CompileShader(gs_4_0, FlipFaceGS()));
                 SetPixelShader(CompileShader(ps_4_0, PS()));
             }
         }

     WPF seems to be able to do it, but how?
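     The CPU-side sketch of steps 1 and 2 referenced above, assuming SharpDX.Direct3D11 (device, context, lightDirWorld, and viewMatrix stand in for the application's existing objects):

         using SharpDX;
         using SharpDX.Direct3D11;

         // Step 1: draw both sides, since the incoming winding can't be trusted.
         var rasterizerDesc = RasterizerStateDescription.Default();
         rasterizerDesc.CullMode = CullMode.None;
         context.Rasterizer.State = new RasterizerState(device, rasterizerDesc);

         // Step 2: rotate the light direction into view space before uploading it,
         // matching the view-space normals the geometry shader computes.
         Vector3 lightDirView = Vector3.TransformNormal(lightDirWorld, viewMatrix);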