MJP

Moderator

About MJP

  • Rank
    XNA/DirectX Moderator & MVP

Personal Information

Social

  • Twitter
    @MyNameIsMJP
  • Github
    TheRealMJP


  1. Schlick fresnel should work fine for the clear coat surface of a car. The clear coat basically acts like a layer of clear plastic/glass over the paint, so the same rules apply. Perhaps your light source isn't bright enough? You should also make sure that your specular terms are properly balanced with your diffuse terms. For instance if you omit the 1 / Pi in the Lambertian diffuse BRDF, then your specular will always look too dim (since your diffuse will be too bright).
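    To illustrate the kind of balance I mean, here's a minimal sketch of a single direct light with a properly normalized Lambertian diffuse term plus a Schlick Fresnel factor. The function and variable names are just placeholders, and the NDF/visibility terms of the specular BRDF are omitted:

    // Sketch only: Schlick Fresnel + normalized Lambertian diffuse for one light
    float3 SchlickFresnel(in float3 f0, in float lDotH)
    {
        return f0 + (1.0f - f0) * pow(1.0f - lDotH, 5.0f);
    }

    float3 ShadeClearCoat(in float3 albedo, in float3 f0, in float3 n,
                          in float3 l, in float3 v, in float3 lightColor)
    {
        const float Pi = 3.14159265f;
        float3 h = normalize(l + v);
        float nDotL = saturate(dot(n, l));

        // Lambertian diffuse *with* the 1 / Pi normalization
        float3 diffuse = albedo * (1.0f / Pi);

        // Clear coat specular: Schlick Fresnel, multiplied by whatever NDF/visibility terms you use
        float3 specular = SchlickFresnel(f0, saturate(dot(l, h)));

        return (diffuse + specular) * lightColor * nDotL;
    }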
  2. I know this doesn't exactly help with your problem, but have you tried using PIX or RenderDoc instead of the VS graphics debugger? I find both of those tools to be *much* more usable than the built-in VS debugger.
  3. This presentation (and accompanying paper) talks about the cosine/dot products in the denominator a bit, amongst other things. The reason for these terms is that the BRDF always deals with a surface patch whose area is exactly 1, but the projected area from the eye's point of view and the light's point of view is not 1 (they're proportional to the cosine of the angle between the eye/light and the surface normal). These terms date back to 1967(!), when they were discussed in Torrance and Sparrow's paper about off-specular reflection.
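    For reference, these are the dot products in question, sitting in the denominator of the standard microfacet specular BRDF (Cook-Torrance form):

    $$ f(\mathbf{l}, \mathbf{v}) = \frac{D(\mathbf{h}) \, F(\mathbf{v}, \mathbf{h}) \, G(\mathbf{l}, \mathbf{v}, \mathbf{h})}{4 \, (\mathbf{n} \cdot \mathbf{l}) \, (\mathbf{n} \cdot \mathbf{v})} $$

    Here h is the half-vector between the light and view directions, and the (n · l)(n · v) factors are the projected-area corrections described above.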
  4. MJP

    Unusually high memory usage

    So you have to be a bit careful when looking at the overall memory usage of your process. There are lots of things that can allocate virtual memory inside of your process, of which your own code is just one. The OS and its various components might allocate things in your address space, third-party libraries linked into your code or loaded as a DLL can allocate, and then of course there's a very heavy D3D runtime and a user-mode driver loaded into your process. Sometimes these other components will allocate memory as a direct result of your code (for instance, if you create a D3D resource), but in many other cases they will do so indirectly. The driver might need some memory for generating command buffers, or it might have a pool of temporary memory that it draws from when you map a dynamic buffer. Basically this means that while it's always a good idea to keep an eye on your memory usage, you probably need more information than what Task Manager gives you if you really want to know what's going on.

    For the memory directly allocated by your own code, writing your own simple tools can be really useful. Generally you want to have all kinds of information that's specific to your game or engine, such as "how much memory am I using for this one level?" or "how much memory am I using for caching streamed data?". As for keeping track of everyone else's allocations, the best tool for that is ETW. With it you can trace the callstack of every call to VirtualAlloc, which can give you clues as to what's going on. Unfortunately there will be things for which you don't have PDBs, but you can at least get symbols for Microsoft DLLs using their public symbol server. The new PIX for Windows also has a built-in memory capture tool that can give you the same information, which it does by using ETW under the hood.
  5. MJP

    Silly Input Layout Problem

    The OP's VS code is fine: the shader compiler automatically converts a float4x4 into 4 float4 attributes in the input signature (with sequential semantic indices).
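    For example (a sketch, not the OP's exact code), declaring the matrix like this...

    struct VSInput
    {
        float3 Position            : POSITION;
        float4x4 InstanceTransform : TRANSFORM;     // PER_INSTANCE_DATA in the input layout
    };

    ...produces the same input signature as if you had written it out as four float4 rows with sequential semantic indices:

    struct VSInputExpanded
    {
        float3 Position   : POSITION;
        float4 Transform0 : TRANSFORM0;
        float4 Transform1 : TRANSFORM1;
        float4 Transform2 : TRANSFORM2;
        float4 Transform3 : TRANSFORM3;
    };

    The corresponding input layout on the CPU side then needs four elements sharing the "TRANSFORM" semantic name with indices 0 through 3.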
  6. MJP

    Move view(camera) matrix

    Your view matrix is just the inverse of a matrix representing the world-space transform for your camera. "LookAt" functions will automatically invert the transform for you, but you can also build a "normal" transformation matrix for your camera and then compute the inverse to get a view matrix.
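    A sketch of the math, assuming a row-vector convention (mul(vector, matrix), as is typical in D3D/HLSL code) and a camera world matrix C built as a pure rotation R followed by a translation to the camera position p:

    $$ V = C^{-1} = (R \, T_{\mathbf{p}})^{-1} = T_{-\mathbf{p}} \, R^{\mathsf{T}} $$

    So the view matrix's rotation part is just the transpose of the camera's rotation, and its translation is the negated camera position rotated into the camera's frame.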
  7. I'm not sure what the old D3DX functions would be doing there. It looks like U8V8 is a signed integer format (DirectXTex seems to treat it as R8G8_SNORM), so perhaps it was something to do with unpacking from the [0, 1] range to the [-1, 1] range. You may want to file an issue on the DirectXTex GitHub repo or see if you can contact Chuck Walbourn directly to see if he can help you out. He's open-sourced a lot of the old D3DX stuff, so he's probably the best person to ask.
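    If you do end up loading the data into a UNORM format and need signed values in your shader, the usual remap is just a scale and bias. This is a sketch of the general idea (with hypothetical texture/sampler names), not of whatever D3DX did internally:

    Texture2D<float4> BumpMap;          // hypothetical texture containing the converted U8V8 data
    SamplerState LinearSampler;

    float2 SampleSignedUV(in float2 uv)
    {
        float2 packed = BumpMap.Sample(LinearSampler, uv).rg;  // [0, 1] UNORM data
        return packed * 2.0f - 1.0f;                           // remapped to [-1, 1]
    }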
  8. In this case the only "conversion" that you would do would be to chop off the extra 2 channels to create an R8G8_UNORM texture. You can certainly do this yourself before creating the texture resource, but you could also just ignore it. Your shader code will work the same with an R8G8_UNORM format as it will with an R8G8B8A8_UNORM format if it only uses the first two channels of the texture fetch. If you really want to do format conversions, DirectXTex can do all of that for you.
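    In other words, a fetch like this gives the same result for both formats, since the extra channels of an RGBA8 texture are simply never read (hypothetical names):

    Texture2D<float4> MyTexture;     // could be R8G8_UNORM or R8G8B8A8_UNORM underneath
    SamplerState LinearSampler;

    float2 FetchTwoChannels(in float2 uv)
    {
        return MyTexture.Sample(LinearSampler, uv).rg;
    }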
  9. MJP

    HDR programming

    So there are two separate (but related) topics here: HDR rendering, and HDR output for displays. Depending on your exact Google queries you might find information about one of these or both of these topics.

    HDR rendering has been popular in games ever since the last generation of consoles (PS3/XB360) came out. The basic idea is to perform lighting and shading calculations internally using values that can be outside the [0, 1] range, which is most easily done using floating-point values. Performing lighting without floats seems silly now, but historically GPUs did a lot of lighting calculations with limited-precision fixed-point numbers. Support for storing floating-point values (including writing, reading, filtering, and blending) was also very patchy 10-12 years ago, but is now ubiquitous. Storing floating-point values isn't strictly necessary for HDR rendering (Valve famously used a setup that didn't require it), but it certainly makes things much simpler (particularly performing post-processing like bloom and depth of field in HDR). You can find a lot of information about this out there now that it's very common.

    HDR output for displays is a relatively new topic. This is all about how the application sends its data to be displayed, and the format of that data. With older displays you would typically have a game render with a rather wide HDR range (potentially going from the dark of night to full daytime brightness if using a physical intensity scale) and then use a set of special mapping functions (usually consisting of exposure + tone mapping) to squish that down into the limited range of a display. The basic idea of HDR displays is that you remove the need for "squishing things down", and have the display take a wide range of intensity values in a specially-coded format (like HDR10). In practice that's not really the case, since these displays have a wider intensity range than previous displays, but still nowhere near wide enough to represent the full range of possible intensity values (imagine watching a TV as bright as the sun!). So that means either the application or the display itself still needs to compress the dynamic range somehow, with each approach having various trade-offs. I would recommend reading or watching this presentation by Paul Malin for a good overview of how all of this works.

    As for actually sending HDR data to a display on a PC, it depends on whether the OS and display driver support it. I know that Nvidia and Windows definitely support it, with DirectX having native API support. For OpenGL I believe that you have to use Nvidia's extension API (NVAPI). Nvidia has some information here and here.

    Be aware that using HDR output isn't necessarily going to fix your banding issues. If fixing banding is your main priority, I would suggest making sure that your entire rendering pipeline is set up in a way that avoids common sources of banding. The most common source is usually storing color data without the sRGB transfer curve applied to it, which acts like a sort of compression function that ensures darker color values have sufficient precision in an 8-bit encoding. It's also possible to mask banding through careful use of dithering.
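    As a concrete example of that "squishing down" step for a normal SDR display, here's a minimal sketch using a hypothetical exposure value and the classic Reinhard operator (chosen purely because it's simple, there are much better curves out there):

    // Sketch: exposure + tone mapping to bring HDR values into [0, 1] for an SDR display
    float3 ToneMapReinhard(in float3 hdrColor, in float exposure)
    {
        // Apply exposure in linear HDR space
        float3 exposed = hdrColor * exposure;

        // Reinhard operator: maps [0, inf) into [0, 1). The sRGB transfer curve is then
        // typically applied by writing to an *_SRGB render target format.
        return exposed / (exposed + 1.0f);
    }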
  10. MJP

    VS output problem

    The "per-instance" classification comes into play when you're drawing multiple instances of something, and you want per-instance data to come from a vertex buffer. The classic setup is two vertex buffers: the first one has all of your "normal" mesh vertex data, with PER_VERTEX_DATA classification for all attributes. Then the other one contains a per-instance unique transform with PER_INSTANCE_DATA. Conceptually you can think of it working like there's an invisible prologue to your vertex shader that pulls the data from your buffers using either the vertex index (PER_VERTEX_DATA) or the instance index (PER_INSTANCE_DATA): // This is a normal, simple vertex shader struct VSInput { float3 VtxPosition : VTXPOS; // PER_VERTEX_DATA float3 InstancePosition : INSTPOS; // PER_INSTANCE_DATA }; float4 VSMain(in VSInput input) : SV_Position { float3 vtxPos = input.VtxPosition; vtxPos += input.InstancePosition; return mul(float4(vtxPos, 1.0f), ViewProjectionMatrix); } // ====================================================================================================== // Imagine that this hidden VS "prologue" runs first for every instance of the vertex shader: Buffer<float3> VtxPositionBuffer; Buffer<float3> InstancePosBuffer; float4 VSPrologue(in uint vertexIndex : SV_VertexID, in uint instanceIndex : SV_InstanceID) : SV_Position { VSInput vsInput; // PER_VERTEX_DATA means index into the buffer using the vertex index, typically from an index buffer vsInput.VtxPosition = VtxPositionBuffer[vertexIndex]; // PER_INSTANCE_DATA means index into the buffer using instance index vsInput.InstancePosition = InstancePosBuffer[instanceIndex]; return VSMain(vsInput); }
  11. As far as I know this isn't really exposed to you at the API level, it's instead a detail of the driver and the Windows driver model that it plugs into. Before Win10/WDDM 2.0 there was a distinction on the driver side of things between a "command buffer" and a "DMA buffer", where a command buffer was generated by the user-mode driver and was placed in normal paged memory, but the DMA buffer was handled by the kernel-mode driver and could be made directly accessible to the GPU. But it seems that this distinction is gone in WDDM 2.0 for GPUs whose drivers support GPU virtual addressing, and the user-mode driver can directly generate a GPU-accessible command buffer and then submit that to the scheduler. I would assume that pretty much all D3D12 drivers are going down that path, in which case the commands are probably being stored directly in GPU-accessible memory. However the D3D12 API doesn't preclude patching occurring when you call ExecuteCommandLists, so it's totally possible that a driver might want to do a patch followed by a copy.
  12. For SV_Position the z component is post-projection Z/W, the same value that's stored in the depth buffer. The w component is the reciprocal of post-projection W, and post-projection W is the view-space z component for perspective projections.
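    So in a pixel shader you can grab those values directly. A quick sketch, assuming a standard perspective projection:

    float4 PSMain(in float4 screenPos : SV_Position) : SV_Target0
    {
        float deviceDepth = screenPos.z;        // post-projection z/w, same value as the depth buffer
        float viewSpaceZ = 1.0f / screenPos.w;  // w holds 1/w, and post-projection w == view-space z here
        return float4(deviceDepth, viewSpaceZ, 0.0f, 1.0f);
    }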
  13. MJP

    PCSS Shadow Sample ?

    Nvidia has a sample that you can look at.
  14. When you're referring to overdraw, are you concerned with avoiding pixel overdraw by sorting your opaque meshes in front-to-back order? This kind of sorting is indeed somewhat incompatible with batching/instancing, and the sweet spot depends on a lot of factors. We mostly sidestep this problem by doing a depth-only prepass, which ensures that there's no overdraw when it comes time for the main forward pass that uses heavy pixel shaders, so that's certainly an option for you. It does add extra draw calls, but in terms of CPU cost they will have fewer state changes than a forward pass or G-Buffer pass (typically no textures and no pixel shader), so they end up being cheaper. It can also be quite cheap on the GPU, but it depends on the geometric complexity. If you really wanted to push the limits on triangle counts then a depth prepass may not be a great idea.
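    For reference, the depth prepass itself really is minimal: typically just a position-only vertex shader with no pixel shader bound. A sketch, with a placeholder constant buffer name:

    cbuffer PerObject
    {
        float4x4 WorldViewProjection;
    };

    // Depth-only prepass: no pixel shader, the rasterizer just writes depth
    float4 DepthOnlyVS(in float3 position : POSITION) : SV_Position
    {
        return mul(float4(position, 1.0f), WorldViewProjection);
    }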
  15. MJP

    PerspectiveOffCenterLH

    The "OffCenter" functions let you create an skewed/asymmetical projection, as opposed to the typical symmetrical perspective projection that's normally used for rendering. Unless you're doing something special (like per-eye VR rendering), you probably don't want an asymmetrical projection. A normal projection looks like this (top-down view): The projection that you're creating is going to look like this: You can create a symmetrical projection with the OffCenter functions, but you can also just use the PerspectiveFov functions instead. As for your view matrix, you need to invert the camera's transformation matrix to create a view matrix (the view matrix brings things from world coordinates into the camera's frame of reference).