
Yours3!f

Member
  • Content count: 514
  • Joined
  • Last visited

Community Reputation
1533 Excellent

About Yours3!f
  • Rank: Advanced Member
  1. Hi,
     is there a way to resolve an MSAA depth buffer to a non-MSAA one? Here (https://www.khronos.org/registry/vulkan/specs/1.0/man/html/vkCmdResolveImage.html) it says that vkCmdResolveImage only operates on the color aspect of an image, whereas in OpenGL (https://www.khronos.org/registry/OpenGL-Refpages/es3.0/html/glBlitFramebuffer.xhtml) glBlitFramebuffer can blit a multisampled depth buffer to a single-sampled one, as long as GL_NEAREST filtering is used.
     bests, yours3!f
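     Since the built-in resolve only covers the color aspect, a common workaround in either API is a fullscreen pass that reads the multisampled depth texture and writes gl_FragDepth. A minimal GLSL sketch of that idea, embedded as a C++ string (the uniform name is illustrative):
         const char* depthResolveFS = R"(
             #version 330 core
             uniform sampler2DMS msaaDepth; // the MSAA depth attachment, bound for sampling
             void main()
             {
                 // take sample 0 (or min/max across all samples, depending on
                 // what the resolved depth will be used for)
                 gl_FragDepth = texelFetch(msaaDepth, ivec2(gl_FragCoord.xy), 0).r;
             }
         )";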
  2. Hi there,
     I created a simple GPU profiling library for OpenGL, check it out: https://github.com/Yours3lf/OpenGLGPUProfiler
     best regards, Yours3!f
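     For reference, a minimal sketch of the GL timer-query mechanism a profiler like this is typically built on (not necessarily the library's actual API; see the repo for that -- drawScene stands in for the measured work):
         // ARB_timer_query / GL 3.3: bracket the workload with two timestamps
         GLuint q[2];
         glGenQueries(2, q);
         glQueryCounter(q[0], GL_TIMESTAMP); // before the measured work
         drawScene();                        // the GPU work to be measured
         glQueryCounter(q[1], GL_TIMESTAMP); // after the measured work

         // read back later (ideally a frame or two later, to avoid a stall)
         GLuint64 t0, t1;
         glGetQueryObjectui64v(q[0], GL_QUERY_RESULT, &t0);
         glGetQueryObjectui64v(q[1], GL_QUERY_RESULT, &t1);
         double ms = double(t1 - t0) / 1.0e6; // nanoseconds -> milliseconds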
  3. Hi there,
     today I had the idea to write a checklist for newbies of things to verify when nothing shows up on screen. I think this would help them tremendously, especially if there were a piece of code that checks each of these (see the sketch after this list).
     If you have any additions to this list, please comment below.
     - is there an OpenGL context?
     - is any kind of swap-buffers call made?
     - is it called after everything has been rendered?
     - what kind of framebuffer object are you rendering into?
     - if you are using shaders, do you bind the correct one?
     - are the shader uniforms set?
     - are the shaders compiled and linked properly?
     - is a vertex buffer object / vertex array object bound?
     - is an index buffer object bound?
     - if you are using textures, are the textures bound?
     - are you using the depth state that you want to use?
     - is blending enabled?
     - are you using the blending function you wanted to use?
     - is scissoring enabled?
     - is backface culling enabled?
     etc.
     Please contribute to this list :)
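     To get things started, a rough sketch of what such an automated check could look like, querying a few of the states above (the helper name is made up, GL 3.x assumed):
         void checkCommonGLState()
         {
             GLint val = 0;

             glGetIntegerv(GL_CURRENT_PROGRAM, &val);
             if (!val) printf("no shader program bound\n");

             glGetIntegerv(GL_VERTEX_ARRAY_BINDING, &val);
             if (!val) printf("no vertex array object bound\n");

             glGetIntegerv(GL_DRAW_FRAMEBUFFER_BINDING, &val);
             printf("rendering into FBO %d (0 = default framebuffer)\n", val);

             printf("depth test %s, blending %s, scissor %s, cull face %s\n",
                    glIsEnabled(GL_DEPTH_TEST)   ? "on" : "off",
                    glIsEnabled(GL_BLEND)        ? "on" : "off",
                    glIsEnabled(GL_SCISSOR_TEST) ? "on" : "off",
                    glIsEnabled(GL_CULL_FACE)    ? "on" : "off");

             // drain any pending GL errors as well
             for (GLenum e; (e = glGetError()) != GL_NO_ERROR; )
                 printf("pending GL error: 0x%x\n", e);
         }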
  4. Yes, and then all of the necessary events/fences to synchronize the resources that are being shared between the two queues (just like you would for code that was split across two threads on a CPU).
     alright, thank you! :)
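     For reference, a minimal sketch of that cross-queue synchronization in D3D12 (the queue/fence/list names are illustrative): the graphics queue signals a shared fence when it is done with the resource, and the compute queue waits on it GPU-side, so the CPU never stalls.
         Microsoft::WRL::ComPtr<ID3D12Fence> fence;
         device->CreateFence(0, D3D12_FENCE_FLAG_NONE, IID_PPV_ARGS(&fence));
         UINT64 value = 1;

         graphicsQueue->ExecuteCommandLists(1, gfxLists);
         graphicsQueue->Signal(fence.Get(), value); // GPU-side signal

         computeQueue->Wait(fence.Get(), value);    // GPU-side wait, no CPU stall
         computeQueue->ExecuteCommandLists(1, computeLists);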
  5. So what do you advise if I want to, say, do compute work while rendering shadow maps (i.e. depth-only passes)? One graphics queue + one compute queue?
  6. Yeah, of course, that makes sense :)
  7. Yeah, I know that. I guess I'll have to measure whether multiple command queues get me additional perf or not.
  8. Thank you :)
     Seems like one should suffice for now... MSDN vs. the graphics samples is confusing, because the example code on MSDN creates as many queues as threads, while the samples use only one. They populate command lists on separate threads and submit on the main graphics thread after syncing.
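     For reference, a sketch of the pattern the samples use (the array and function names are made up): each worker thread records into its own allocator/list pair, and the main thread submits everything on the single graphics queue.
         // per worker thread i:
         void recordOnWorker(int i)
         {
             allocators[i]->Reset();
             lists[i]->Reset(allocators[i].Get(), nullptr);
             // ... record this thread's slice of the scene ...
             lists[i]->Close();
         }

         // main thread, after all workers have finished recording:
         ID3D12CommandList* batch[NUM_THREADS];
         for (int i = 0; i < NUM_THREADS; ++i)
             batch[i] = lists[i].Get();
         graphicsQueue->ExecuteCommandLists(NUM_THREADS, batch);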
  9. Hi there,
     do command queues (https://msdn.microsoft.com/en-us/library/windows/desktop/dn788627(v=vs.85).aspx) correspond directly to hardware queues, aka the ACEs on GCN?
     I.e. should I create the same number of compute queues as there are ACEs on the GPU? I suppose there should be only one graphics queue, as the hardware (GCN) can only use one. Is this the same with the DMA copy engines (same number of copy queues)?
     Or should there be one command queue per async submission thread (i.e. one graphics/compute/copy queue per thread)?
     AFAIK it is advised to use one command allocator, one command list and one fence per thread. Is this true?
     best regards, Yours3!f
  10. Hi there,
     EDIT: oh, I didn't see that there's a newer driver; installing 355.82 solved it...
     I tried to run the DX12 hello window graphics sample (https://github.com/Microsoft/DirectX-Graphics-Samples/blob/master/Samples/D3D12HelloWorld/src/HelloWindow/D3D12HelloWindow.cpp), but I can only run it if I set m_useWarpDevice to true, meaning I can only create a WARP device.
     I have 64-bit Windows 10 installed, and I tried to run the sample on a GTX 660 using the latest driver (353.62). The NVIDIA control panel says I have a DX12 runtime and API version, but only feature level 11_0. DxDiag reports DirectX version 11.3 and says that the driver is WDDM 2.0 capable, but again, it only lists feature levels up to 11_0.
     While I can still develop using a WARP device, I'll need the hardware device later, so it'd be great if I could solve this.
     Any idea how to fix this? I tried reinstalling the display driver previously, with no success.
     best regards, Yours3!f
  11. In your D3D11_VIEWPORT, but... that's just to convert the placement of your viewport rect from GL coords to D3D (or vice versa), it won't actually flip it vertically. That's because NDC is the same in GL/D3D, but tex-coords are flipped. So the right thing(tm) to do is to flip your texcoords. Alternatively, you can just flip your VS's position.y output variable, but that won't fix the same bug in other cases (e.g. when artists put a texture on a model, it will be vertically flipped between the two APIs...)
     Alternatively, you can flip all of your textures on disk or when loading them (D3D expects the top row of pixels to come first in the buffer, GL expects the bottom row of pixels to come first in the buffer...), and also flip all your projection matrices upside down in one of the APIs (so that render-targets get flipped as well as texture files -- D3D's NDC->pixel coordinate mapping is flipped vertically from GL's)... In shaders like that which don't use a projection matrix, you'd just multiply the VS's position.y output by -1. This will have the same effect -- the texture data itself (and render-target data) will now be upside down, so there's no need to flip the texcoords any more.
     Personally, I choose to use D3D's coordinate systems as the standard, and do all this flipping nonsense in GL only -- but vice versa works too.
     [Edit] BTW, layout(origin_upper_left) only modifies the pixel coordinate value that is seen by the fragment shader, it doesn't actually change the window coordinate system or the rasterization rules. The ARB_clip_control extension allows you to actually use D3D window coordinates in GL (including finally fixing GL's busted NDC depth range)... however, it only exists in GL4.
     Awesome, thank you for the detailed explanation! :) I think I'll go w/ flipping the tex coords on the CPU; it seems to me that this is the least painful solution :)
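     For reference, the chosen fix is tiny at asset-load time; a sketch, assuming a hypothetical Vertex struct with a uv member:
         // flip V once when loading assets for GL, so shaders and material
         // setup stay identical across both APIs
         for (Vertex& v : vertices)
             v.uv.y = 1.0f - v.uv.y;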
  12. Well, I'm rendering a fullscreen quad, and the vertices are defined in NDC space:
         vec3 ll( -1, -1, 0 );
         vec3 lr(  1, -1, 0 );
         vec3 ul( -1,  1, 0 );
         vec3 ur(  1,  1, 0 );
     This way I don't need to multiply them with a matrix in the vertex shader. I wanted to display a texture, but it appeared upside down. So I figured I have two options: flip the window coords, or flip the texture coords, as you mentioned. I figured flipping the window coords would be less painful. Where do I set this?
         vp.y = renderTarget.height - vp.y - vp.height;
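     (For reference, that adjustment would go wherever the viewport is set; a sketch assuming D3D11 and illustrative variable names. As the reply above explains, though, this only repositions the viewport rect, it does not mirror the image.)
         D3D11_VIEWPORT vp = {};
         vp.TopLeftX = float(x);
         vp.TopLeftY = float(renderTargetHeight - y - height); // GL origin is lower-left
         vp.Width    = float(width);
         vp.Height   = float(height);
         vp.MinDepth = 0.0f;
         vp.MaxDepth = 1.0f;
         context->RSSetViewports(1, &vp);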
  13. Hi there,
     I want to change the window coordinate origin to use the OpenGL convention (i.e. lower-left is (0,0)). In GLSL the default is lower-left, but you can change it to upper-left using layout(origin_upper_left). Is there such a thing in HLSL that changes it to lower-left?
     best regards, Yours3!f
  14. SOLVED: I needed to enable the color write masks; it turns out that by zeroing out the blend desc structure, you disable color writes...
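     A sketch of the fix, assuming D3D11 (the API used elsewhere in this thread): a zero-initialized blend desc leaves RenderTargetWriteMask at 0, which masks out every color channel, so it has to be set explicitly.
         D3D11_BLEND_DESC bd = {}; // zeroed: write mask == 0 -> no color writes!
         bd.RenderTarget[0].BlendEnable = FALSE;
         bd.RenderTarget[0].RenderTargetWriteMask = D3D11_COLOR_WRITE_ENABLE_ALL;

         ID3D11BlendState* blendState = nullptr;
         device->CreateBlendState(&bd, &blendState);
         context->OMSetBlendState(blendState, nullptr, 0xffffffff);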