Riko Ophorst

Member

  1. Hey, I'm looking to do a proper graphics-related project over the summer holiday, for the full 8 weeks I have available. I'm not a beginner - I've done all of the basic stuff: I wrote a forward renderer in DirectX 11 two years ago and another forward renderer in DirectX 12 this year, and I've done Phong lighting and GPU skinning. Perhaps not graphics-related, but I've also done a lot of engine work, such as custom memory management techniques, ECSs, resource managers, etc. On the C++ side of things I consider myself a pretty good programmer, but I tend to get too obsessed with making a good C++ architecture before I ever start on the actual graphics work. I want to do a project this summer that will really solidify me as a graphics programmer. I want to do it in DirectX 12, and I want it to be a solid portfolio piece that I can take to job/internship interviews. What kind of stuff do you think would be really impressive if pulled off?
  2. Riko Ophorst

    Recommended byte size of a single vertex

    Is that sort of stuff worth it, though - reducing memory usage at the cost of increased computation?
  3. I'm wondering: what is the recommended byte size of a single vertex? Currently, we're using 80 bytes per vertex in our engine:

         struct Vertex
         {
             DirectX::XMFLOAT3 position;   //  0 + 12 = 12 bytes
             DirectX::XMFLOAT3 normal;     // 12 + 12 = 24 bytes
             DirectX::XMFLOAT3 tangent;    // 24 + 12 = 36 bytes
             DirectX::XMFLOAT3 bitangent;  // 36 + 12 = 48 bytes
             DirectX::XMFLOAT2 uv;         // 48 +  8 = 56 bytes
             DirectX::XMFLOAT4 color;      // 56 + 16 = 72 bytes
             uint32_t bone_id;             // 72 +  4 = 76 bytes
             float bone_weight;            // 76 +  4 = 80 bytes
         };

     What are the things to keep in mind when setting up a vertex layout?
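     A common way to shrink a vertex like the one above is to store attributes in packed formats: normals and tangents in something like DXGI_FORMAT_R10G10B10A2_UNORM (remapped from [-1, 1] in the shader), color as 8-bit RGBA, UVs as 16-bit values, and the bitangent dropped entirely and reconstructed in the shader as cross(normal, tangent) * sign. A minimal sketch of the 10-bit unorm packing, in plain C++ with no DirectX dependency (the 0.5 * x + 0.5 remap is an assumption about how the shader decodes it):

```cpp
#include <cassert>
#include <cstdint>
#include <cmath>

// Quantize one component from [-1, 1] into a 10-bit unorm field.
static uint32_t PackUnorm10(float v) {
    float remapped = v * 0.5f + 0.5f;              // [-1, 1] -> [0, 1]
    return (uint32_t)(remapped * 1023.0f + 0.5f);  // round to 10 bits
}

static float UnpackUnorm10(uint32_t bits) {
    return ((float)bits / 1023.0f) * 2.0f - 1.0f;  // [0, 1] -> [-1, 1]
}

// Pack a (roughly unit-length) normal into an R10G10B10A2_UNORM-style word.
uint32_t PackNormal(float x, float y, float z) {
    return PackUnorm10(x) | (PackUnorm10(y) << 10) | (PackUnorm10(z) << 20);
}

void UnpackNormal(uint32_t packed, float* x, float* y, float* z) {
    *x = UnpackUnorm10(packed & 0x3FF);
    *y = UnpackUnorm10((packed >> 10) & 0x3FF);
    *z = UnpackUnorm10((packed >> 20) & 0x3FF);
}
```

     With tricks like these, the 80-byte vertex above could plausibly come down to around 40 bytes (position at full precision, everything else packed) - the round-trip quantization error for a 10-bit normal is well under a degree, which is usually invisible in lighting.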
  4. Riko Ophorst

    [Assimp] Forcing 1 mesh per node

    I've done that. However, we'd highly prefer not to have multiple meshes per node in our architecture, hence the question. Thanks for the input though :)
  5. I know this is probably a long shot - asking this question here - but I still want to try. I'm currently using Assimp (v3.3.1, the latest version) as the model importer in our engine. Nodes (aiNode) in the scene structure (aiScene) are allowed to have multiple meshes per node. I want to specifically disable this behaviour from Assimp - I only want one mesh per node and nothing more. Is that possible? Is there some aiProcess* flag I'm not aware of?
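     As far as I know Assimp has no aiProcess flag for this, so one option is a hypothetical post-import pass over the imported scene. Sketched below over a simplified stand-in for aiNode (the real aiNode stores a raw mMeshes array, and a real pass would also give each new child an identity transform and a name): every mesh after a node's first is moved into its own new child node.

```cpp
#include <cassert>
#include <memory>
#include <vector>

// Simplified stand-in for Assimp's aiNode.
struct Node {
    std::vector<unsigned int> mesh_indices;
    std::vector<std::unique_ptr<Node>> children;
};

// After this pass, every node holds at most one mesh; extra meshes live
// in freshly created child nodes (which inherit the parent's transform).
void SplitMeshesIntoChildNodes(Node& node) {
    while (node.mesh_indices.size() > 1) {
        auto child = std::make_unique<Node>();
        child->mesh_indices.push_back(node.mesh_indices.back());
        node.mesh_indices.pop_back();
        node.children.push_back(std::move(child));
    }
    for (auto& child : node.children) {
        SplitMeshesIntoChildNodes(*child);
    }
}
```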
  6. I've been wondering about skyboxes lately. I know that a skybox can be rendered as a cube that's always centered around the camera with a cubemapped texture applied to it, but I'm wondering: are there better ways to do this? What are the "best practices" for rendering a skybox?
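     One commonly used refinement of the centered-cube approach, sketched below as hypothetical HLSL (the cbuffer layout, names, and matrix convention are all assumptions): draw the cube last, after all opaque geometry, with the depth test set to LESS_EQUAL, and write z = w from the vertex shader so the skybox sits exactly on the far plane and only fills pixels nothing else covered.

```hlsl
cbuffer Camera : register(b0)
{
    // View-projection with the view translation zeroed out,
    // so the cube stays centered on the camera.
    float4x4 view_projection_no_translation;
};

struct VSOutput
{
    float4 position  : SV_POSITION;
    float3 direction : TEXCOORD0; // cubemap lookup direction
};

VSOutput VSMain(float3 position : POSITION)
{
    VSOutput output;
    // mul order assumes the row-vector convention; swap for column-vector.
    float4 clip = mul(float4(position, 1.0f), view_projection_no_translation);
    output.position = clip.xyww;  // z = w -> depth of exactly 1.0
    output.direction = position;  // local cube position doubles as a direction
    return output;
}

TextureCube  sky_texture : register(t0);
SamplerState sky_sampler : register(s0);

float4 PSMain(VSOutput input) : SV_TARGET
{
    return sky_texture.Sample(sky_sampler, input.direction);
}
```

     A further variant replaces the cube with a single fullscreen triangle whose pixel shader reconstructs the view direction from the inverse view-projection matrix, which avoids the cube geometry entirely.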
  7. Riko Ophorst

    Root Signature Descriptor Ranges & Registers

    I don't know what I was expecting.. Thanks I guess :)
  8. In D3D12 you can set a root parameter in a root signature to be a descriptor table, which is just a collection of descriptor ranges. A descriptor range is just a typed list of descriptors. You set which shader register the descriptor range starts at with D3D12_DESCRIPTOR_RANGE::BaseShaderRegister, but I am wondering what happens to all the other descriptors in the range. What register are they put into? Just BaseShaderRegister + the index of the descriptor within the range?

     So say I have a descriptor range like this:

         D3D12_DESCRIPTOR_RANGE range;
         range.RangeType = D3D12_DESCRIPTOR_RANGE_TYPE_SRV;
         range.NumDescriptors = 7;
         range.BaseShaderRegister = 2;
         range.RegisterSpace = 0;
         range.OffsetInDescriptorsFromTableStart = 0;

     Can I then assume that the descriptors will be placed in registers 2 to 8, so I would be able to access them like this in HLSL?

         Texture2D texture01 : register(t2);
         Texture2D texture02 : register(t3);
         Texture2D texture03 : register(t4);
         Texture2D texture04 : register(t5);
         Texture2D texture05 : register(t6);
         Texture2D texture06 : register(t7);
         Texture2D texture07 : register(t8);

     Are my assumptions correct?
  9. Riko Ophorst

    Unordered Access Views in DirectX 11/12

    Nope, they're for SRV's that have been created from buffer resources (as opposed to the more common case of SRV's created from texture resources). These are simply for when you want a shader to be able to perform random-access reads from a buffer resource.

    As an example, AMD GPU's no longer have much in the way of "input assembler" hardware. When you bind a vertex buffer to the IA stage, the driver actually patches your vertex shader code to contain some Buffer/StructuredBuffer variables, and automagically binds SRV's for your vertex buffers to the VS stage. The magic (driver-generated) VS code can then read the vertex attributes out of the buffers as required. If you ever need to read data out of a buffer in a shader, you can use Buffer HLSL variables too. Buffer is for simple types, like float4 / R32G32B32A32_FLOAT, while StructuredBuffer is for structs (like cbuffers). If you ever need to bind a massive array of data to a shader - like a skeleton for vertex skinning - then buffers are probably a better choice than cbuffers. Lastly, ByteAddressBuffers simply let you read 32-bit values without any automatic format decoding - the DIY buffer view.

    UAV's are for RWStructuredBuffers and RWByteAddressBuffers, so it's best to understand the above use case first :) UAV's are similar to SRV's, but in HLSL you use these RW* types and bind them to u# registers instead of t# registers. The big difference is that you can also write to these resource views! Normally in a shader you sample texels from a texture, but a UAV lets you write data into the texture, e.g. myTexture[uint2(x, y)] = float4(1, 0, 0, 0);

    In D3D11, a UAV can be created with a counter attached to it (the counter is handled internally by D3D), but in D3D12 it's a bit more manual to implement this, hence the extra counter resource in D3D12: you make two buffers, one for the UAV to read/write data from, and one for the counter.

    The counters are used by AppendBuffers and ConsumeBuffers. These are buffers that you can write data to / read data from like a stack. E.g. if a shader is spawning particles, it can push them into an append buffer; the counter keeps track of how many particles were pushed. Later on, a particle rendering shader can use a consume buffer to pop items from that stack (and decrement the counter). These are useful wherever shaders need to generate a variable amount of data.

    Thanks a lot for the clarification! Kudos!
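    The append/consume pattern described above can be sketched as a hypothetical HLSL compute shader (the Particle struct, register assignments, and thread-group size are all assumptions; the hidden counter is exactly what the separate counter resource backs in D3D12):

```hlsl
struct Particle
{
    float3 position;
    float3 velocity;
};

AppendStructuredBuffer<Particle> spawned_particles : register(u0);

[numthreads(64, 1, 1)]
void SpawnCS(uint3 id : SV_DispatchThreadID)
{
    Particle p;
    p.position = float3(id.x, 0.0f, 0.0f);
    p.velocity = float3(0.0f, 1.0f, 0.0f);
    spawned_particles.Append(p); // increments the UAV's counter
}

// A later pass can pop from the same resource with a consume view:
//   ConsumeStructuredBuffer<Particle> particles : register(u0);
//   Particle p = particles.Consume(); // decrements the counter
```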
  10. I'm building a small general-purpose renderer in DirectX 12 at the moment. However, I don't have any experience with UAVs, so I'm hoping someone can give me a good explanation of what they are and how I can best make use of one.

      DirectX 12 also introduces a so-called counter buffer, and I don't quite understand what its purpose is or how it should be used. What does it even do?

      Right now I'm also assuming that a UAV can only be used with things such as "StructuredBuffers" and "ByteAddressBuffers", but I don't quite understand what those are either.

      Up to this point I always thought that data is data and that there aren't many complicated processes going on when you try to read it, but now with UAVs it seems my entire idea of data on the GPU is faulty.

      Can anybody give me a rundown on UAVs - what they are used for and how you should use them in shaders? Also, what are counter buffers?