

About ChuckNovice

  1. The question is too general. Start with this sample to get the basics. Once all of this is clear to you, you can move on to loading textures, etc.: https://github.com/sharpdx/SharpDX-Samples/tree/master/Desktop/Direct3D11/MiniTri
  2. ChuckNovice

    Silly Input Layout Problem

    Well, it does seem you still have a few misconceptions about instancing. Let's take a look at this part:

    output.position = mul(position, world); // Apply world translation matrix
    output.position = mul(position, instancePosition); // Apply instance translation matrix

    instancePosition is basically a world matrix there, so you are multiplying with a world matrix twice in a row: one that comes from your instance and one that comes from your constant buffer. When you draw, you are either using instancing or not. When not using instancing, you should take the world matrix from the constant buffer; when using instancing, you should take it from the vertex layout. Don't use both at the same time. You need 2 versions of that vertex shader: one for instanced draws and one for regular draws.

    EDIT: Never mind, you are actually overwriting the position and not using the result of the first multiply. Do you see anything at all on the screen? Did you make sure to transpose your matrices? How do you store the data in your instance buffer? What does your call to SetVertexBuffers look like? How do you issue your draw call?
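    A minimal sketch of the two vertex-shader versions described above. Names like VSInstancedInput, the WORLD semantic, and the constant-buffer layout are illustrative assumptions, not taken from the original code:

    ```hlsl
    cbuffer PerObject : register(b0)
    {
        float4x4 world;          // used only by the non-instanced path
        float4x4 viewProjection;
    };

    struct VSInput
    {
        float4 position : POSITION;
    };

    struct VSInstancedInput
    {
        float4   position : POSITION;
        float4x4 world    : WORLD; // per-instance matrix from the instance vertex buffer
    };

    // Regular draws: the world matrix comes from the constant buffer.
    float4 VSMain(VSInput input) : SV_Position
    {
        return mul(mul(input.position, world), viewProjection);
    }

    // Instanced draws: the world matrix comes from the vertex layout.
    float4 VSMainInstanced(VSInstancedInput input) : SV_Position
    {
        return mul(mul(input.position, input.world), viewProjection);
    }
    ```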
  3. ChuckNovice

    Silly Input Layout Problem

    Between now and your first post you switched the format from DXGI_FORMAT_R32G32B32A32_FLOAT (float4) to DXGI_FORMAT_R32G32B32_FLOAT (float3), so you are now describing a 3x4 matrix. Put DXGI_FORMAT_R32G32B32A32_FLOAT back there for a 4x4.
  4. ChuckNovice

    Silly Input Layout Problem

    A 4x4 matrix of floats is 64 bytes, not bits; that's actually 512 bits. A single float is 32 bits and there are 4x4 of them. You represent those like in the last part of your post: 4x R32G32B32A32_FLOAT. AlignedByteOffset is the offset of the element in bytes. If your first element is a R32G32B32A32_FLOAT, it occupies 16 bytes of data, so your next element should be at offset 16, and so on.
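    The offsets described above can be sketched like this: on the HLSL side a per-instance float4x4 is fed by four R32G32B32A32_FLOAT input elements, 16 bytes apart (the WORLD semantic name and input slot 1 are assumptions for illustration):

    ```hlsl
    // The matching D3D11_INPUT_ELEMENT_DESC entries on the CPU side would be:
    //   { "WORLD", 0, DXGI_FORMAT_R32G32B32A32_FLOAT, 1,  0, D3D11_INPUT_PER_INSTANCE_DATA, 1 } // bytes  0..15
    //   { "WORLD", 1, DXGI_FORMAT_R32G32B32A32_FLOAT, 1, 16, D3D11_INPUT_PER_INSTANCE_DATA, 1 } // bytes 16..31
    //   { "WORLD", 2, DXGI_FORMAT_R32G32B32A32_FLOAT, 1, 32, D3D11_INPUT_PER_INSTANCE_DATA, 1 } // bytes 32..47
    //   { "WORLD", 3, DXGI_FORMAT_R32G32B32A32_FLOAT, 1, 48, D3D11_INPUT_PER_INSTANCE_DATA, 1 } // bytes 48..63
    // Total: 64 bytes (512 bits) per instance.
    struct InstanceData
    {
        float4x4 world : WORLD; // consumes semantics WORLD0..WORLD3
    };
    ```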
  5. ChuckNovice

    Failing to draw faces in normal way

    So I loaded the model as provided in private message. This is the result in my engine: [screenshot] The good news is that your model is fine. The bad news is that there's a problem with your code. I suspect that you simply provide the vertices of half the triangles in the wrong order, causing them to be culled. Do you have any backface culling active? That's what I wanted to see when asking you to move the camera inside the teapot. For example, if I do that on my side we get this: [screenshot]
  6. ChuckNovice

    Failing to draw faces in normal way

    Move the camera inside the teapot and post another screenshot. Also, if you provide the .obj file in question, I could load it in my own engine and see whether the problem is caused by the export process or by the library that you use to load it.
  7. ChuckNovice

    Failing to draw faces in normal way

    If you move the camera inside the teapot I presume that you now see the missing triangles?
  8. ChuckNovice

    HDR programming

    No, you don't need those insane cards to do this; GPUs have supported the color formats needed for HDR for a long time. Basically you render your scene color to an 11/16/32-bit-per-channel buffer instead of the usual 8-bit one, and you make sure that you don't clamp your colors to 1.0. During the post-processing stage you tonemap that buffer and get a result that is displayable on LDR displays. HDR can also be useful for effects such as bloom and light adaptation.
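    A minimal sketch of the tonemapping pass described above, assuming the scene was rendered unclamped to a 16-bit-per-channel texture. The Reinhard operator used here is just one common choice, not necessarily what any particular engine uses:

    ```hlsl
    Texture2D    hdrScene    : register(t0); // e.g. DXGI_FORMAT_R16G16B16A16_FLOAT
    SamplerState linearClamp : register(s0);

    float4 PSTonemap(float4 pos : SV_Position, float2 uv : TEXCOORD0) : SV_Target
    {
        // Unclamped HDR scene color; values above 1.0 are meaningful here.
        float3 hdr = hdrScene.Sample(linearClamp, uv).rgb;

        // Simple Reinhard operator maps [0, inf) into [0, 1) for LDR display.
        float3 ldr = hdr / (1.0f + hdr);

        return float4(ldr, 1.0f);
    }
    ```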
  9. Yes, but you and turanszkij are both right in the sense that you don't need to consider the depth just to get the ray direction. You can safely always pass 1.0 instead of sampling the depth, unless you also plan to use the real pixel world position for something else in that same shader.
  10. The code I provided is a generic function to reconstruct the world position of a pixel. You indeed don't need the depth if all you want is a ray.
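  Putting the two remarks above together with the ScreenSpaceToWorldSpace utility quoted in item 12 of this list, a ray-only version could be sketched like this (cameraPosition and viewProjectionInverse are assumed to come from a constant buffer):

  ```hlsl
  // Assumes the ScreenSpaceToWorldSpace helper from item 12 is in scope.
  float3 PixelRayDirection(float2 screenCoordinate,
                           float4x4 viewProjectionInverse,
                           float3 cameraPosition)
  {
      // depth = 1.0: any point along the ray (here, on the far plane)
      // is enough to recover a direction, no depth sample needed.
      float3 farPoint = ScreenSpaceToWorldSpace(1.0f, screenCoordinate, viewProjectionInverse);
      return normalize(farPoint - cameraPosition);
  }
  ```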
  11. ChuckNovice

    Normal mapping: tangents vectors clearly incorrect

    To be 100% honest, I have no idea what you are trying to show us in that very low-res screenshot, nor do I have any idea which data you mapped to the colors, and I suspect I'm not alone. For example, I would expect to see at most 2 color channels involved in a debug view of texture coordinates, but you seem to have all 3 channels involved, including solid pink spots that suggest you sample the same position everywhere, yet your normal render doesn't say the same thing. Also, I'm familiar with normal / binormal / tangent, but I have no idea what a "uvec" or "vvec" is for you.
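    One conventional way to make such a debug view unambiguous is to remap one unit vector at a time into the displayable 0..1 color range. A sketch, assuming the tangent frame is interpolated into the pixel shader (semantic names are illustrative):

    ```hlsl
    // Remap a unit-length vector from [-1, 1] into displayable [0, 1] colors.
    float3 DebugVector(float3 v)
    {
        return normalize(v) * 0.5f + 0.5f;
    }

    // Example: visualize only the tangent; swap the returned vector
    // (normal, tangent, binormal) between captures so each screenshot
    // shows exactly one quantity.
    float4 PSDebugTangent(float4 pos      : SV_Position,
                          float3 normal   : NORMAL,
                          float3 tangent  : TANGENT,
                          float3 binormal : BINORMAL) : SV_Target
    {
        return float4(DebugVector(tangent), 1.0f);
    }
    ```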
  12. I got this in one of my utility HLSL files; you can use it to retrieve the world-space position of your pixel so you can subtract it from the camera position as turanszkij suggested:

      //---------------------------------------------------------------------------------------
      // Convert from screen space to world space.
      //---------------------------------------------------------------------------------------
      inline float3 ScreenSpaceToWorldSpace(float depth, float2 screenCoordinate, float4x4 viewProjectionInverseMatrix)
      {
          float4 position = float4(screenCoordinate.x * 2.0f - 1.0f, -(screenCoordinate.y * 2.0f - 1.0f), depth, 1.0f);

          // transform to world space
          position = mul(position, viewProjectionInverseMatrix);
          position /= position.w;

          return position.xyz;
      }

  The "screenCoordinate.x * 2.0f - 1.0f" part is only there to convert from the 0..1 range to -1..1, as I'm working with values that come from texture coordinates there.
  13. So basically you want to retrieve the ray of each pixel in your pixel shader? I did it a bit differently to avoid computing heavy stuff for each pixel. Since I'm using a deferred rendering technique, I simply calculate the ray direction of the 4 corners of the camera frustum on the CPU, pass them to my vertex shader during the light pass, and let the rasterizer interpolate them for the pixel shader. The light pass of a deferred renderer is basically just a big quad filling the screen, so it's easy to pull off in this case. So it would be useful to know whether you are doing forward or deferred rendering.
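  The frustum-corner approach described above can be sketched like this. The FrustumCorners constant buffer, the cornerIndex input, and the full-screen-quad layout are illustrative assumptions:

  ```hlsl
  cbuffer FrustumCorners : register(b1)
  {
      // Ray directions of the 4 frustum corners, computed once on the CPU.
      float3 cornerRays[4];
  };

  struct VSOutput
  {
      float4 position : SV_Position;
      float3 ray      : TEXCOORD0; // interpolated per pixel by the rasterizer
  };

  // Full-screen quad: cornerIndex selects which precomputed ray this vertex carries.
  VSOutput VSLightPass(float4 position : POSITION, uint cornerIndex : TEXCOORD0)
  {
      VSOutput output;
      output.position = position;
      output.ray = cornerRays[cornerIndex];
      return output;
  }

  float4 PSLightPass(VSOutput input) : SV_Target
  {
      // Renormalize after interpolation before using it for lighting.
      float3 rayDirection = normalize(input.ray);
      // ... light-pass work using rayDirection goes here ...
      return float4(rayDirection * 0.5f + 0.5f, 1.0f);
  }
  ```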
  14. As far as I know there's no difference, at least on NVIDIA hardware, but keep in mind that we're talking only about dynamic buffers here, not textures. I'd like to hear from someone with more experience on that one, as the difference isn't clear to me if there is one. In my case I use Map/Unmap all the time on dynamic buffers.
  15. It's all good. Be very careful though, because those squares won't solve all your problems. Let's take this basic example of a Mario Bros level: [screenshot] In this case, is it better to draw 46 squares, or one big rectangle with texture coordinates that make the texture repeat? The world of graphics programming never ceases to paint us into a corner when we assume such things.
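  The "one big rectangle" option relies on a wrap-mode sampler and texture coordinates greater than 1.0. A sketch, where the 46x1 tiling factor just mirrors the example above and the resource names are assumptions:

  ```hlsl
  Texture2D    tileTexture : register(t0);
  SamplerState wrapSampler : register(s0); // AddressU/AddressV set to WRAP on the CPU side

  float4 PSTiled(float4 pos : SV_Position, float2 uv : TEXCOORD0) : SV_Target
  {
      // uv runs 0..1 across the big rectangle; scaling past 1.0 makes
      // the wrap-mode sampler repeat the texture 46 times horizontally.
      return tileTexture.Sample(wrapSampler, uv * float2(46.0f, 1.0f));
  }
  ```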