CGEngine

Member
  • Content Count: 18
  • Joined
  • Last visited

Community Reputation: 142 Neutral

About CGEngine

  • Rank
    Member

Personal Information

  • Interests
    Programming

  1. Hi, I'm looking into how to manage state transitions of read-only resources. Let's say I have the following scenario with buffer A:

     Draw Call #1 - reads from buffer A in the PS.
     Draw Call #2 - reads from buffer A in the VS.

     So the states will be:

     Initial - COMMON.
     Draw Call #1 - implicit promotion to PIXEL_SHADER_RESOURCE.
     Draw Call #2 - explicit transition from PIXEL_SHADER_RESOURCE to NON_PIXEL_SHADER_RESOURCE.
     End of command list - implicit decay to D3D12_RESOURCE_STATE_COMMON.

     Is this the optimal way to do it? Or should I do a single transition to NON_PIXEL_SHADER_RESOURCE | PIXEL_SHADER_RESOURCE at the beginning of every frame? Would it be more expensive to simply transition explicitly to GENERIC_READ at the beginning of every frame? That would make read-only resources a lot easier to manage... Thanks!
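     For reference, a minimal sketch of what I mean by the single combined transition, assuming buffer A starts the frame in COMMON (variable names are just illustrative):

         // One explicit barrier to a combined read state at the start of the frame,
         // instead of per-draw transitions between the two shader-resource states.
         D3D12_RESOURCE_BARRIER barrier = {};
         barrier.Type = D3D12_RESOURCE_BARRIER_TYPE_TRANSITION;
         barrier.Transition.pResource = bufferA; // ID3D12Resource* for buffer A
         barrier.Transition.Subresource = D3D12_RESOURCE_BARRIER_ALL_SUBRESOURCES;
         barrier.Transition.StateBefore = D3D12_RESOURCE_STATE_COMMON;
         barrier.Transition.StateAfter = D3D12_RESOURCE_STATE_NON_PIXEL_SHADER_RESOURCE |
                                         D3D12_RESOURCE_STATE_PIXEL_SHADER_RESOURCE;
         commandList->ResourceBarrier(1, &barrier);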
  2. Hi, I'm reading https://software.intel.com/en-us/articles/sample-application-for-direct3d-12-flip-model-swap-chains to figure out the best way to set up my swap chain, but I don't understand the following:

     1 - In "Classic Mode" on the link above, what causes the GPU to wait for the next vsync before starting work on the next frame? (E.g., the orange bar in the second column doesn't start executing right after the previous blue bar.) It's clear it has to wait, because the new frame would render to the render target currently on screen, but is there an explicit wait one must do in code? Or does the driver force the wait? If so, how does the driver know? Does it check which RT is bound, so that no wait would happen if I were rendering to the GBuffer?

     2 - When vsync is off, what does it mean that a frame is dropped, and what causes it?

     Thanks!!
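     For context, this is roughly how I'm creating the swap chain (flip model; the values and variable names are just illustrative):

         // Flip-model swap chain; with D3D12 the command queue is passed as the "device".
         DXGI_SWAP_CHAIN_DESC1 desc = {};
         desc.Width = 1280;
         desc.Height = 720;
         desc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
         desc.SampleDesc.Count = 1;
         desc.BufferUsage = DXGI_USAGE_RENDER_TARGET_OUTPUT;
         desc.BufferCount = 3;
         desc.SwapEffect = DXGI_SWAP_EFFECT_FLIP_DISCARD;
         IDXGISwapChain1* swapChain = nullptr;
         factory->CreateSwapChainForHwnd(commandQueue, hwnd, &desc, nullptr, nullptr, &swapChain);
         // Vsync on ("classic mode"): swapChain->Present(1, 0);  vsync off: swapChain->Present(0, 0);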
  3. Hi, I was looking at ID3D12Device::GetCopyableFootprints and d3dx12.h, and I'm trying to figure out why GetCopyableFootprints has separate output arguments for the number of rows and the size of each row in bytes, since those values are apparently already available in the D3D12_PLACED_SUBRESOURCE_FOOTPRINT array also returned by the function.

     Is there any case where D3D12_SUBRESOURCE_FOOTPRINT::Height might be different from pNumRows, or D3D12_SUBRESOURCE_FOOTPRINT::RowPitch different from pRowSizeInBytes, for a subresource?

     Also, GetCopyableFootprints takes BaseOffset as an argument, and D3D12_PLACED_SUBRESOURCE_FOOTPRINT has a member called Offset that seems to be a simple copy of the argument. Again, are there cases where they might be different?

     Lastly:

         inline void MemcpySubresource(
             _In_ const D3D12_MEMCPY_DEST* pDest,
             _In_ const D3D12_SUBRESOURCE_DATA* pSrc,
             SIZE_T RowSizeInBytes,
             UINT NumRows,
             UINT NumSlices)
         {
             for (UINT z = 0; z < NumSlices; ++z)
             {
                 BYTE* pDestSlice = reinterpret_cast<BYTE*>(pDest->pData) + pDest->SlicePitch * z;
                 const BYTE* pSrcSlice = reinterpret_cast<const BYTE*>(pSrc->pData) + pSrc->SlicePitch * z;
                 for (UINT y = 0; y < NumRows; ++y)
                 {
                     memcpy(pDestSlice + pDest->RowPitch * y,
                            pSrcSlice + pSrc->RowPitch * y,
                            RowSizeInBytes);
                 }
             }
         }

     This function in some places multiplies by pDest->RowPitch but copies RowSizeInBytes bytes, which makes it look like they might be different, but I'm not sure why. And if they're different, why doesn't GetCopyableFootprints also return a value like SliceSizeInBytes? Can't that also be different from pDest->SlicePitch?

     Thanks in advance.

     P.S. Can a moderator move this to the DirectX and XNA section? Sorry.
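     To make my question concrete, a hypothetical example of how the two values could differ (using the usual 256-byte row alignment, D3D12_TEXTURE_DATA_PITCH_ALIGNMENT):

         // A 100x64 R8G8B8A8_UNORM texture:
         UINT64 rowSizeInBytes = 100 * 4;                    // 400 bytes of actual texel data per row
         UINT64 rowPitch = (rowSizeInBytes + 255) & ~255ull; // 512: padded up to the 256-byte alignment
         // So MemcpySubresource steps by RowPitch (512) but copies only
         // RowSizeInBytes (400), skipping 112 bytes of padding per row.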
  4. Hi, I have two on-site interviews next week for programming roles at two game studios in the UK.

     First, what should I wear to the interviews? I think a suit would look weird. Do you think dark jeans and a polo shirt look too casual?

     Both interviews will include hour-long C++ programming tests. What kind of questions do these tests usually ask? Should I review algorithms (Dijkstra, Ford-Fulkerson, etc.) or take a look at some game programming books like Game Engine Architecture?

     Thank you!
  5. Hi, I've read that the best way to get a job is to know someone who works at the company you want to apply to.

     However, I'm not sure what the best way is to meet someone working there. I can't afford to go to GDC or SIGGRAPH and *stalk* people there, so my only option is the internet.

     I do have a few LinkedIn connections with people working at the company I want to apply to. Would it be appropriate to PM them on LinkedIn, and what should I say?

     Thanks
  6. Say I want to simultaneously downscale and blur a texture using a separable filter, as suggested here.

     If the texture's original size is 1280x720, should I:

     1 - Do a horizontal blur to a 640x720 texture, then a vertical blur to a 640x360 texture.

     Or

     2 - Do a horizontal blur to a 640x360 texture, then a vertical blur to a 640x360 texture.

     What's the difference between the two approaches? Would option 1 result in higher quality? Does this depend on the filter used? E.g., a GPU Pro 4 article about DOF uses approach 1.

     Which option would you suggest for bloom, screen-space reflections and motion blur?
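     To put rough numbers on the cost difference, for a k-tap separable kernel (k = 9 just as an example; my own arithmetic, not from the article):

         const int k = 9; // taps per pass
         // Option 1: horizontal pass writes 640x720, vertical pass writes 640x360.
         int taps1 = 640 * 720 * k + 640 * 360 * k; // ~6.2M texture samples
         // Option 2: both passes run at 640x360.
         int taps2 = 640 * 360 * k + 640 * 360 * k; // ~4.1M texture samples
         // Option 1 costs ~1.5x more, but its vertical pass still sees full vertical
         // resolution, which is presumably where any quality difference would come from.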
  7. CGEngine

    Clamp light intensity

      How do you define a reasonable value? Would it make sense to calculate the average luminance before DOF (since DOF won't affect it too much anyway) and use that to somehow scale the overly bright pixels?

      I kind of fixed bloom with:

          float3 x = light_buffer[current_pixel];
          float lum = luminance(x);
          float tonemapped_lum = tonemap(lum);
          return x * max(tonemapped_lum - threshold, 0.0f) / tonemapped_lum;

      However, this made bloom lose intensity. DOF is still causing white squares.

      I'm planning on working on area lights after finishing the PostFX pipeline. I would like to use photometric units in my materials but can't figure out how to convert them to radiometric units. E.g., I've bought some new LEDs for my house and the package has this info:

          50W
          36° - 680 lm
          2950K
          Ra - 100 (not sure what this is)

      How do I take this info and convert it to radiometric units (a spot light's float3 intensity)?
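      For reference, the conversion as far as I've been able to work it out (my own sketch: it assumes the 680 lm is spread uniformly over the 36° cone, and that Ra is just the color rendering index, which shouldn't affect the conversion):

          // LED package -> radiant intensity (W/sr), under the assumptions above.
          const float PI = 3.14159265f;
          float lumens = 680.0f;                                   // luminous flux from the package
          float halfAngle = 18.0f * PI / 180.0f;                   // 36 degree cone -> 18 degree half-angle
          float solidAngle = 2.0f * PI * (1.0f - cosf(halfAngle)); // ~0.31 sr
          float candela = lumens / solidAngle;                     // luminous intensity, ~2200 cd
          // 683 lm/W is the luminous efficacy of 555 nm light only, so dividing by it
          // for a white LED is an approximation.
          float wattsPerSr = candela / 683.0f;                     // radiant intensity, ~3.2 W/sr
          // The float3 would then be wattsPerSr times a normalized color, e.g. from a 2950 K blackbody.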
  8. CGEngine

    Clamp light intensity

    My light source irradiance is (1.0f, 1.0f, 1.0f) and I'm using a normalized GGX distribution where alpha = roughness * roughness.

    Since alpha must be larger than 0, I add a delta to alpha so materials can set roughness to 0:

        float alpha = roughness * roughness + 0.00015f;

    And welcome to the large numbers party:

        DGGX(alpha) > 65000

    So what should I do? If I increase the delta I lose small specular reflections; if I keep it like this I lose the nice circular bloom... The only good option seems to be clamping some value somewhere. I'm just not sure which one, or to what range. How should I choose it?

    I also want to switch to the R11G11B10 format like most engines are using (according to various presentations I've read), but if I'm having problems with 16 bits, imagine 10/11 bits.

    Is anyone familiar with the Unreal Engine 4 source code who could point me to where they handle this?
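    To show where the huge numbers come from: at N·H = 1 the normalized GGX NDF reduces to 1 / (pi * alpha^2), so the peak blows up as alpha goes to 0:

        // Peak of the normalized GGX NDF (its value at N.H == 1).
        float D_GGX_peak(float alpha)
        {
            return 1.0f / (3.14159265f * alpha * alpha);
        }
        // With roughness = 0 and my delta, alpha = 0.00015f, so the peak is
        // ~1.4e7 -- far beyond the largest finite half-float value (65504).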
  9. Hi! Is it common to clamp the result of the BRDF?

     Even using a light buffer with the R16G16B16A16 format, if the roughness of the material is close to 0 the radiance of the pixel goes well over the ~65000 limit of the render target format.

     It also messes with the tone mapper, because most pixels' radiance is below (1.0, 1.0, 1.0), while the radiance of pixels at certain angles is many orders of magnitude larger. This doesn't seem right...

     It causes other issues too, like Gaussian blur turning bright spots into squares, etc.
  10. CGEngine

    How to calculate Lumens?

    How can I convert photometric to radiometric units that I can use in the rendering equation and output to the screen? Or is it possible to do everything in photometric units? If so, how?

    Is the pixel luminance calculated for tonemapping (using the dot product with float3(0.2125f, 0.7154f, 0.0721f)) the photometric luminance, or a different unit with the same name?

    Thanks!
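    For reference, the helper I'm asking about (the weights look like the Rec. 709 luma coefficients, so as far as I can tell this gives a unitless *relative* luminance rather than an absolute photometric luminance in cd/m²):

        // Relative luminance of a linear RGB color, Rec. 709 style weights.
        float RelativeLuminance(float r, float g, float b)
        {
            return 0.2125f * r + 0.7154f * g + 0.0721f * b;
        }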
  11. CGEngine

    High performance resource binding

    How does that map to D3D12/Vulkan descriptor tables? Do you create a descriptor table per render item (one for each pass, for each mesh)?

    And since you can't modify descriptor tables while they're in use by command queues, does that mean you have to recreate the render items of every dynamic object every frame to update cbuffers, etc.?
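    The scheme I have in mind (a sketch with my own names, not from any engine): a shader-visible heap partitioned per frame, where each frame's render items copy fresh descriptors into that frame's region instead of mutating tables the GPU may still be reading:

        struct FrameDescriptorAllocator
        {
            ID3D12DescriptorHeap* heap; // shader-visible CBV/SRV/UAV heap
            UINT descriptorSize;        // device->GetDescriptorHandleIncrementSize(...)
            UINT nextFree;              // next unused slot in the current frame's region

            // Copies one descriptor from a CPU-only (non-shader-visible) heap and
            // returns the GPU handle to use with SetGraphicsRootDescriptorTable.
            D3D12_GPU_DESCRIPTOR_HANDLE Allocate(ID3D12Device* device,
                                                 D3D12_CPU_DESCRIPTOR_HANDLE src)
            {
                UINT slot = nextFree++;
                D3D12_CPU_DESCRIPTOR_HANDLE dst = heap->GetCPUDescriptorHandleForHeapStart();
                dst.ptr += SIZE_T(slot) * descriptorSize;
                device->CopyDescriptorsSimple(1, dst, src, D3D12_DESCRIPTOR_HEAP_TYPE_CBV_SRV_UAV);
                D3D12_GPU_DESCRIPTOR_HANDLE gpu = heap->GetGPUDescriptorHandleForHeapStart();
                gpu.ptr += UINT64(slot) * descriptorSize;
                return gpu;
            }
        };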
  12. I'm trying to find the best way to handle resource binding (binding cbuffers, textures, etc. to the GPU). The system has to support:

      1 - Binding vertex buffers, index buffers, cbuffers, textures and UAVs to the GPU.

      2 - Some of those resources (except UAVs) require a map -> memcpy -> unmap before they're bound.

      3 - Each RenderObject owns a few of the resources (which must be bound before the RenderObject's draw call).

      4 - Depending on the pass/shader permutation, different resources have to be bound (no need to bind the diffuse map during the shadow pass).

      5 - The resources might change at runtime.

      The most basic way is to check the shader reflection data every time a RenderObject is drawn, but this is clearly not fast, so I'm hoping someone can help me find a good way to organize and bind the resources. Since the resources probably don't change very frequently at runtime, I'm not worried about making resource switching a bit slower if it means per-draw binding can be done as efficiently as possible.

      Any ideas? Thanks.
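      One idea I've been toying with (illustrative names, my own sketch): resolve the reflection data once at load time into a flat bind list per (RenderObject, pass), replay it at draw time, and rebuild it only when the object's resources change:

          #include <vector>

          enum class BindType { VertexBuffer, IndexBuffer, CBuffer, Texture, UAV };

          struct Binding
          {
              BindType type;
              unsigned slot;   // register/root slot resolved from shader reflection at load time
              void*    handle; // engine handle to the resource to bind
          };

          struct BindSet // one per (RenderObject, pass) pair
          {
              std::vector<Binding> bindings;
          };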
  13. I already keep an orthogonal basis so I don't run into problems when rotating the camera. The main problem is in this function:

          void Camera::setLookDirection(Vector3D dir)
          {
              m_Look = dir;
              m_Look.Normalize();
              m_Right = Vector3D(0.0f, 1.0f, 0.0f).Cross(m_Look);
              m_Right.Normalize();
              m_Up = m_Look.Cross(m_Right);
          }

      Setting the camera direction to (0.0f, 1.0f, 0.0f) causes problems. I'm also having problems when using a function like this. What can I do when I want to generate a quaternion to rotate an object so it faces another object directly above it?
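      The fallback I'm considering (a sketch; it assumes Vector3D has a Dot method): detect when dir is nearly parallel to the world up axis and switch to a different reference vector before taking the cross product:

          void Camera::setLookDirection(Vector3D dir)
          {
              m_Look = dir;
              m_Look.Normalize();
              Vector3D reference(0.0f, 1.0f, 0.0f);
              if (fabsf(m_Look.Dot(reference)) > 0.999f) // looking almost straight up or down
                  reference = Vector3D(0.0f, 0.0f, 1.0f); // any axis not parallel to m_Look
              m_Right = reference.Cross(m_Look);
              m_Right.Normalize();
              m_Up = m_Look.Cross(m_Right);
          }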
  14. Hi. What should I do when the forward vector equals the up vector, so that the cross product returns a zero vector? This is causing issues in my camera and when working with rotation axes, etc.
  15. Hey! I'm trying to find the best way to store binary images of resources to make loading as fast as possible.

      My resources are all POD (no constructors, no vtables) so I don't have to deal with those cases. Regarding endianness, my resource compiler converts the data to the endianness of the target platform, so I think I won't need any endianness conversion at runtime (e.g., the PC version ships with little-endian compiled resources).

      My biggest problem is how to deal with x86 and x64, since pointers on the two platforms have different sizes. I store a pointer-patching table at the end of each resource; however, I want to use the same binary image on both x86 and x64, and structs that contain pointers have different sizes depending on the architecture.

      How can I deal with this? Do you know any tricks? Thanks.
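      One trick I'm considering (my own sketch): never store raw pointers in the image at all, and use self-relative 32-bit offsets instead, so the struct layout is identical on x86 and x64 and no pointer patching is needed:

          #include <cstdint>

          template <typename T>
          struct Offset32 // always 4 bytes, on every architecture
          {
              int32_t offset; // byte offset relative to this member's own address; 0 = null

              T* Get()
              {
                  return offset ? reinterpret_cast<T*>(reinterpret_cast<char*>(this) + offset)
                                : nullptr;
              }
          };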