About CortexDragon


  1. CortexDragon

    Capture constantbuffer in commandlist

    Instead of having one constant buffer that is changed every frame, you can rotate between 2 (or even 3) constant buffers, so that alternate frames' command lists are bound to different constant buffers. This allows a constant buffer's value, once set, to remain at that value for a certain number of frames, long enough for the command list that uses it to finish executing. It's basically the same approach the DX12 tutorial samples use (they use 3 frame objects). Also remember that any change you make to one frame's buffer eventually has to be made to the other frames' buffers if you want those buffers to also have that value. (Keeping a single CPU copy of the buffer, making your changes to that, then copying it into the current frame's buffer is the easiest way.)
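The rotation described above can be sketched in a few lines. This is a hedged, CPU-only illustration: plain structs stand in for the mapped upload-heap buffers, and all names (FrameRing, SceneConstants, BeginFrame) are made up for the example, not the poster's actual code.

```cpp
#include <array>
#include <cstring>

// Hypothetical per-frame constant data; padded so one element is 256 bytes,
// the usual constant buffer size granularity.
struct SceneConstants { float time = 0.0f; float padding[63] = {}; };

constexpr int kFrameCount = 3; // matches the DX12 samples' 3 frame objects

// Single CPU master copy plus one buffer per in-flight frame. Each frame's
// buffer keeps its values until that frame index comes around again, which is
// long enough for its command list to finish executing.
struct FrameRing {
    SceneConstants cpuCopy{};
    std::array<SceneConstants, kFrameCount> gpuBuffers{}; // stand-ins for mapped buffers
    int frameIndex = 0;

    void BeginFrame(float time) {
        cpuCopy.time = time;                 // make all changes to the CPU copy...
        std::memcpy(&gpuBuffers[frameIndex], // ...then publish it whole to this
                    &cpuCopy, sizeof(SceneConstants)); // frame's buffer
    }
    void EndFrame() { frameIndex = (frameIndex + 1) % kFrameCount; }
};
```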
  2. Instead of having the pixel shader write to a render target, you could have the pixel shader write to a texture UAV. That would allow your pixel shader to both read and write pixels. You may have to use interlocked functions. So if you want to set a certain bit on a pixel, you could InterlockedOr it with a number that has just that bit set.
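On the CPU the same bit-setting idea is an atomic fetch-OR; in HLSL it would be InterlockedOr on the texture UAV. A tiny sketch (the function name is illustrative):

```cpp
#include <atomic>
#include <cstdint>

// CPU analog of the HLSL pattern: InterlockedOr(texUav[coord], 1u << bit).
// Only the chosen bit is set; concurrent writes to other bits survive.
void SetPixelFlag(std::atomic<uint32_t>& pixel, uint32_t bit) {
    pixel.fetch_or(1u << bit);
}
```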
  3. CortexDragon

    Depth buffer resource, mipmaps

    Rather than the 2-stage process of copying the buffer to your target resource and then generating the mips on that target with a compute shader, you could do it in a single step: use the depth texture as an SRV input for the compute shader, and have that compute shader be responsible both for copying the input SRV's pixels to the top mip of the target texture and for creating the mips and writing those to the target texture.
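Whichever route you take, the compute pass needs to know how many mips to write and how big each one is. A small sketch of that arithmetic (helper names are made up):

```cpp
#include <algorithm>
#include <cstdint>

// Number of mip levels in a full chain: halve until both dimensions reach 1.
uint32_t MipCount(uint32_t width, uint32_t height) {
    uint32_t levels = 1;
    for (uint32_t d = std::max(width, height); d > 1; d >>= 1) ++levels;
    return levels;
}

// Size of one dimension at a given mip level (mip 0 holds the copied depth).
uint32_t MipDim(uint32_t dim, uint32_t level) {
    return std::max(1u, dim >> level);
}
```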
  4. CortexDragon

    One DescriptorHeap per frame buffer?

    You don't need multiple descriptor heaps to give each frame object a contiguous region in a descriptor heap. You can simply have each frame object's unique buffers use a different region of the same big CBV/SRV/UAV descriptor heap. Example: if I have a frame count of 3 (2 back buffers and 1 front buffer), that means I have 3 frame objects. I have 10 constant buffers for each frame object, 20 textures which are shared by all frame objects, and 5 slow-changing constant buffers that are shared by all frame objects. The CBV/SRV/UAV descriptor heap is arranged like this: slots 0 to 9 are the 10 constant buffer views of the first frame object; slots 10 to 19 are those of the second frame object; slots 20 to 29 are those of the third frame object; slots 30 to 49 are the 20 texture shader resource views shared by all frame objects; slots 50 to 54 are the 5 slow-changing constant buffer CBVs shared by all frame objects. I use an enum for my descriptor heap slots to help avoid mistakes.
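The slot enum mentioned at the end might look like this for the layout above (all names are illustrative):

```cpp
// Layout of the one big CBV/SRV/UAV heap from the example:
// 3 frame objects x 10 per-frame CBVs, then 20 shared SRVs, then 5 shared CBVs.
constexpr int kFrameCount   = 3;
constexpr int kPerFrameCbvs = 10;
constexpr int kSharedSrvs   = 20;
constexpr int kSharedCbvs   = 5;

enum HeapSlot : int {
    PerFrameCbvStart = 0,                            // slots 0..29
    SharedSrvStart   = kFrameCount * kPerFrameCbvs,  // slots 30..49
    SharedCbvStart   = SharedSrvStart + kSharedSrvs, // slots 50..54
    HeapSize         = SharedCbvStart + kSharedCbvs  // 55 descriptors total
};

// Heap slot of constant buffer `cbv` (0..9) belonging to frame `frame` (0..2).
constexpr int PerFrameCbvSlot(int frame, int cbv) {
    return PerFrameCbvStart + frame * kPerFrameCbvs + cbv;
}
```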
  5. I do picking in the pixel shader that I use to draw my objects, without using an additional pass. My code is DX11 or DX12, but you might be able to employ a similar technique in your language. I pass an "object number" down from the vertex shader to the pixel shader when I draw my objects, and I send the mouse coordinates to the pixel shader in a cbuffer (it's in my cbeveryframe buffer that is updated each frame). In the pixel shader, if the pixel coordinates are equal to the mouse coordinates, I construct a uint where the high 16 bits are the pixel depth and the low 16 bits are the data to send back, in this case the "object number". I then write this number to a UAV buffer using an InterlockedMin function, which causes the closest object at the mouse coordinates to have its "object number" written to the UAV. Then on the CPU I read this UAV buffer (well, actually a CPU-mappable buffer that I copied the UAV to) and take the low 16 bits of the uint to get the object number. I send various pieces of information about the object under the mouse back to the CPU with this technique by using different uint variables in that UAV buffer. That pixel shader is also used to draw the object after the above piece of code. Performance tip when sending things from the GPU to the CPU: don't try to read the buffer on the CPU immediately after the draw command that causes it to be written by the GPU. Allow it a few frames, for example by having 3 UAV buffers (and their corresponding CPU-mappable buffers that they are copied to) that you rotate between. This is because the draw doesn't occur on the GPU immediately when it is issued on the CPU.
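The depth/object-number packing and the InterlockedMin selection can be mimicked on the CPU; the HLSL version would do the same operations on the UAV slot. All names here are illustrative:

```cpp
#include <atomic>
#include <cstdint>

// High 16 bits = depth, low 16 bits = object number, so taking the minimum of
// the packed values keeps the object number of the closest (smallest-depth) hit.
constexpr uint32_t PackPick(uint32_t depth16, uint32_t objectId16) {
    return (depth16 << 16) | (objectId16 & 0xFFFFu);
}
constexpr uint32_t UnpackObjectId(uint32_t packed) { return packed & 0xFFFFu; }

// CPU analog of InterlockedMin(pickUav[0], candidate) in the pixel shader.
void RecordHit(std::atomic<uint32_t>& pickUav, uint32_t depth16, uint32_t objectId16) {
    uint32_t candidate = PackPick(depth16, objectId16);
    uint32_t current = pickUav.load();
    while (candidate < current &&
           !pickUav.compare_exchange_weak(current, candidate)) {}
}
```

The slot would be reset to 0xFFFFFFFF each frame so that any real hit wins the minimum.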
  6. CortexDragon

    Handling World Transformations

    DX11 - I would do it the first way, as it minimizes the number of buffer writes from the CPU to the GPU each frame. Have an array in the constant buffer with one element per object. Each array element is a struct that contains any "per object information" such as world matrix, color, texture number, etc. If you have a large number of objects, the per-object structs would instead be elements of a structured buffer rather than an array inside a constant buffer. Nvidia's DX12 do's and don'ts recommends the array inside the constant buffer rather than elements of a structured buffer if you are using it from the pixel shader. In an ideal world you would also draw all of them together using a single draw command, rather than one draw per object, but that depends on how you build your vertex buffer, so it may not be practical. ----- Vulkan - I don't know Vulkan. ----- DX12 - If you are using ExecuteIndirect there is another option: you can store your per-object information in a structured buffer in elements of 512 bytes (or multiples of that). Then you draw all your objects with a single call to ExecuteIndirect, and each draw call within it sets a root parameter to make a constant buffer view start position point to the correct position in your structured buffer. This lets the GPU see the currently drawn object's data as a single constant buffer.
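For the ExecuteIndirect option, each indirect draw's root CBV simply points into the big buffer at objectIndex times the stride. D3D12 requires CBV GPU addresses to be 256-byte aligned, which a 512-byte stride satisfies. A sketch (names are made up):

```cpp
#include <cstdint>

// Fixed stride for each object's slice of the structured buffer; any multiple
// of the 256-byte CBV alignment works, 512 in the example above.
constexpr uint64_t kObjectStride = 512;

// GPU virtual address an indirect draw would set its root CBV to, so the
// shader sees object `objectIndex`'s data as an ordinary constant buffer.
constexpr uint64_t ObjectCbvAddress(uint64_t bufferBase, uint32_t objectIndex) {
    return bufferBase + uint64_t(objectIndex) * kObjectStride;
}
```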
  7. CortexDragon

    Wormhole Effect

    A simple "fake" wormhole approach which doesn't involve a moving camera could be: 1. have a straight length of tube in front of the camera; 2. move the textures towards the near end of the tube by using an offset in the pixel shader; 3. to make it look like there are twists and turns in the tube, bend the far end of the tube left/right/up/down by making changes in the vertex shader, similar to doing skeletal animation. It's not as "real" as actually flying the camera down a real twisting tube, but it might be slightly simpler if all you are doing is a cutscene rather than something the player can control. My guess is your "real" tunnel will probably look better than this "fake" way, however.
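Steps 2 and 3 boil down to two tiny functions, shown here as plain C++ (in practice they would live in the pixel and vertex shaders; the names and the quadratic bend curve are my own assumptions, not the poster's):

```cpp
// Step 2: scroll the texture toward the near end by offsetting the V
// coordinate over time (the sampler's wrap mode handles values outside 0..1).
float ScrollV(float v, float speed, float time) {
    return v - speed * time;
}

// Step 3: sideways displacement for a vertex at normalized tube depth t
// (0 = near end, 1 = far end). Quadratic growth keeps the near end straight
// while the far end swings, a bit like a one-bone skeletal bend.
float BendOffset(float t, float bendAmount) {
    return bendAmount * t * t;
}
```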
  8. CortexDragon

    Intel HD 620, 400 draw calls, 37FPS

    This is very old information, so it may not apply to your situation, but traditionally lots of small draws are slower than a few big draws, so definitely try the instancing mentioned above (or even combine them into one big draw). https://stackoverflow.com/questions/4853856/why-are-draw-calls-expensive Nvidia wrote a paper about it: http://www.nvidia.com/docs/IO/8228/BatchBatchBatch.pdf
  9. It's when the command list that is copying from the staging buffer has finished executing, and (as mentioned by the above posters) you know that via a fence on the queue you are using. If you are doing this on the graphics queue, your program is arranged like many of the basic DX12 samples, and you have frame objects containing command lists for each of your (for example) 3 frames, then you already know when those frame objects are available again due to the wait on the fence in your standard game loop (each frame object is available again after 3 frames if your frame count is 3).
  10. https://docs.unity3d.com/Manual/SL-Reference.html As the language is HLSL (a variant of an early version of HLSL, see https://docs.unity3d.com/Manual/SL-ShadingLanguage.html), the MSDN site is also very good, as it explains all the HLSL functions and syntax: https://msdn.microsoft.com/en-us/library/windows/desktop/bb509561(v=vs.85).aspx Good sections inside it are language syntax (https://msdn.microsoft.com/en-us/library/windows/desktop/bb509615(v=vs.85).aspx) and intrinsic functions (https://msdn.microsoft.com/en-us/library/windows/desktop/ff471376(v=vs.85).aspx). Then it's a case of looking at lots of examples of shader techniques and using their ideas.
  11. CortexDragon

    Hard magic and custom spells

    A cone also increases its volume by the cube of its radius
  12. CortexDragon

    Hard magic and custom spells

    Range and radius of the area are technically 2 separate things, so you might want to be able to pay for them separately. For example you could have a classic Dungeons & Dragons fireball, which is a spread at range, or you could have an explosion centered on yourself, which would be a range-0 spread spell. Cones are usually range-0 spells with a length, i.e. the radius of the cone. Assuming you are talking about the area costs by radius, it may be better to base the costs on how much the area increases with radius. This means a line's cost increases linearly, a cone's grows by the square of its radius, and a spread's also grows by the SQUARE of its radius but has a smaller starting radius than the cone (for example a 30' cone may cost the same as a 20' spread). The reason is that otherwise, if spreads were charged by the cube of their radius, cones would become more and more favourable for covering big areas compared to spreads as you put more points in.
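The scaling argument can be checked with the area formulas themselves. Treating a line as a width-w strip, a cone as a circular sector, and a spread as a full circle (my modelling choices, not the poster's):

```cpp
// Footprints as a function of radius r: the line grows linearly with r,
// while the cone and the spread both grow with r squared.
const double kPi = 3.14159265358979323846;

double LineArea(double r, double width)    { return width * r; }
double ConeArea(double r, double angleRad) { return 0.5 * angleRad * r * r; }
double SpreadArea(double r)                { return kPi * r * r; }
```

Doubling the radius doubles the line's area but quadruples both the cone's and the spread's, which is why charging spreads cubically would overprice them relative to cones.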
  13. CortexDragon

    Direct2d/directwrite with DX11

    There is another method to put text on screen: use Direct2D to write text to a texture, then use D3D to draw a square or rectangle on the screen using that texture. The advantages of this way over writing text to the back buffer are: 1. you don't have to do the Direct2D writing every frame, which is good if you have large amounts of text on complex dialogs; 2. you don't have to change the texture format of your swap chain to BGRA, you simply have to give that one texture that gets written to by Direct2D this format. I use this method for my dialog boxes that the user can drag around the screen like windows.
  14. CortexDragon

    Direct2d/directwrite with DX11

    This DX10 page describes using Direct2D with Direct3D: https://msdn.microsoft.com/en-gb/library/windows/desktop/dd370966(v=vs.85).aspx You can do the same things in DX11 using your standard DX11 device instead of the DX10 one they create on that page. It describes both 1) writing to the swap chain and 2) writing to a texture that you use from your standard D3D code. I used it for the second approach in DX11. I created my factory using D2D1CreateFactory(D2D1_FACTORY_TYPE_SINGLE_THREADED, &pD2DFactory); One gotcha you have to watch out for is that Direct2D prefers textures in the format BGRA rather than RGBA. I don't know why.
  15. CortexDragon

    Artist To Programmer

    If you do write your own engine based on those 7 tutorials, you will probably want to add a post-process step at the end. It's a simple principle: you make your pixel shader output to a texture rather than the screen, then use that texture as an input texture for a separate draw of a big rectangle that you draw to the screen, and in that rectangle's pixel shader you do post-process effects like blur etc. I can't find an up-to-date link to a simple example of post-processing in DX11. The Rastertek blur sample explains it, but I don't know if it will still compile as it's an old sample: http://www.rastertek.com/tutdx11.html Nvidia's developer site has some more advanced samples.