CortexDragon

Member
  • Content count
    38
  • Joined

  • Last visited

Community Reputation

11 Neutral

1 Follower

About CortexDragon

  • Rank
    Member

Personal Information

  • Role
    Programmer
  • Interests
    Art
    Design
    Programming

  1. CortexDragon

    Wormhole Effect

    A simple "fake" wormhole approach that doesn't involve a moving camera could be:
    1. Have a straight length of tube in front of the camera.
    2. Move the textures towards the near end of the tube by applying an offset in the pixel shader.
    3. To make it look like there are twists and turns in the tube, bend the far end of the tube left/right/up/down in the vertex shader, similar to doing skeletal animation.
    It's not as "real" as actually flying the camera down a genuinely twisting tube, but it might be slightly simpler if all you are doing is a cutscene rather than something that can be controlled by the player. My guess is your "real" tunnel will probably look better than this "fake" way, however. A rough shader sketch is below.
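    A minimal HLSL sketch of the idea (the constant names gWorldViewProj, gBendAngles, gTubeLength and gTime are only illustrative assumptions, and the sampler should use wrap addressing so the scrolled texture repeats):

    cbuffer PerFrame : register(b0)
    {
        float4x4 gWorldViewProj;
        float2   gBendAngles;   // how far to push the far end sideways/up-down
        float    gTubeLength;   // distance from the near end to the far end
        float    gTime;
    };

    Texture2D    gTex     : register(t0);
    SamplerState gSampler : register(s0);   // wrap addressing

    struct VSIn  { float3 pos : POSITION; float2 uv : TEXCOORD0; };
    struct VSOut { float4 pos : SV_POSITION; float2 uv : TEXCOORD0; };

    VSOut VSMain(VSIn vin)
    {
        // Weight grows towards the far end of the tube, like a one-bone skin weight.
        float w = saturate(vin.pos.z / gTubeLength);

        // Bend the far end; squaring the weight keeps the near end straight.
        float3 p = vin.pos;
        p.x += gBendAngles.x * w * w;
        p.y += gBendAngles.y * w * w;

        VSOut vout;
        vout.pos = mul(float4(p, 1.0f), gWorldViewProj);
        vout.uv  = vin.uv;
        return vout;
    }

    float4 PSMain(VSOut pin) : SV_TARGET
    {
        // Scroll the texture towards the near end so it looks like flying forward.
        float2 uv = pin.uv + float2(0.0f, -gTime * 0.5f);
        return gTex.Sample(gSampler, uv);
    }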
  2. CortexDragon

    Intel HD 620, 400 draw calls, 37FPS

    This is very old information, so it may not apply to your situation, but traditionally many small draws are slower than a few big draws, so definitely try the instancing mentioned above (or even combine them into one big draw); a rough sketch is below. https://stackoverflow.com/questions/4853856/why-are-draw-calls-expensive Nvidia wrote a paper about it: http://www.nvidia.com/docs/IO/8228/BatchBatchBatch.pdf
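    A minimal C++/D3D11-style sketch of the difference (assuming D3D11 is in use; 'context', 'indexCount' and 'objectCount' are placeholders, and the per-instance transforms are assumed to already sit in a buffer the vertex shader reads via SV_InstanceID):

    #include <d3d11.h>

    void DrawNaive(ID3D11DeviceContext* context, UINT indexCount, UINT objectCount)
    {
        // Hundreds of separate draws: each one pays CPU validation/driver overhead.
        for (UINT i = 0; i < objectCount; ++i)
        {
            // ... update a per-object constant buffer here ...
            context->DrawIndexed(indexCount, 0, 0);
        }
    }

    void DrawInstanced(ID3D11DeviceContext* context, UINT indexCount, UINT objectCount)
    {
        // One draw submits every instance of the same mesh.
        context->DrawIndexedInstanced(indexCount, objectCount, 0, 0, 0);
    }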
  3. When the command list that is copying from the staging buffer has finished executing; as mentioned by the above posters, you know that via a fence on the queue you are using. If you are doing this on the graphics queue, and your program is arranged like many of the basic DX12 samples, with frame objects containing command lists for each of your (for example) 3 frames, then you already know when those frame objects are available again because of the fence wait in your standard game loop (each frame object becomes available again after 3 frames if your frame count is 3). A rough sketch of that wait is below.
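    A minimal C++ sketch of that per-frame fence wait, in the style of the basic DX12 samples (the Frame struct and names like fenceEvent are placeholders):

    #include <windows.h>
    #include <d3d12.h>

    struct Frame
    {
        ID3D12CommandAllocator*    allocator;
        ID3D12GraphicsCommandList* commandList;
        UINT64                     fenceValue;   // value signalled after this frame's work
    };

    void WaitForFrame(ID3D12Fence* fence, HANDLE fenceEvent, Frame& frame)
    {
        // If the GPU has not yet passed this frame's fence value, block until it has.
        if (fence->GetCompletedValue() < frame.fenceValue)
        {
            fence->SetEventOnCompletion(frame.fenceValue, fenceEvent);
            WaitForSingleObject(fenceEvent, INFINITE);
        }
        // At this point the copy from the staging buffer recorded for this frame has
        // finished, so the staging memory and the frame's allocator can be reused.
    }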
  4. https://docs.unity3d.com/Manual/SL-Reference.html As the language is HLSL (a variant of an early version of HLSL: https://docs.unity3d.com/Manual/SL-ShadingLanguage.html), the MSDN site is also very good, as it explains all the HLSL functions and syntax: https://msdn.microsoft.com/en-us/library/windows/desktop/bb509561(v=vs.85).aspx Good sections inside this are the language syntax (https://msdn.microsoft.com/en-us/library/windows/desktop/bb509615(v=vs.85).aspx) and the intrinsic functions (https://msdn.microsoft.com/en-us/library/windows/desktop/ff471376(v=vs.85).aspx). Then it's a case of looking at lots of examples of shader techniques and using their ideas.
  5. CortexDragon

    Hard magic and custom spells

    A cone also increases its volume by the cube of its radius (since its width grows in proportion to its length).
  6. CortexDragon

    Hard magic and custom spells

    Range and radius of the area are technically two separate things, so you might want to pay for them separately. For example, you could have a classic Dungeons & Dragons fireball, which is a spread at range, or you could have an explosion centered on yourself, which would be a range-0 spread spell. Cones are usually range-0 spells with a length, i.e. the radius of the cone. Assuming you are talking about the area costs by radius, it may be better to base the costs on how much the area increases with radius. This means a line increases linearly, a cone grows by the square of its radius, and a spread also grows by the SQUARE of its radius but has a smaller starting radius than the cone (for example, a 30' cone may cost the same as a 20' spread). The reason is that otherwise, if spreads were priced by the cube of their radius, cones would become more and more favourable than spreads for covering big areas as you put more points in.
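    For concreteness, a rough area comparison (assuming flat battle-grid areas, a line of fixed width w, and a quarter-circle cone; the exact shapes are assumptions and vary by system):

    Area of a line of length r:              w * r             (grows linearly with r)
    Area of a quarter-circle cone, length r: (1/4) * pi * r^2  (grows with the square of r)
    Area of a circular spread, radius r:     pi * r^2          (also the square of r, but 4x the cone of equal r)

    Since the cone and the spread both scale with r^2, pricing them by area keeps them in a fixed ratio no matter how many points are spent, which is the balance argument above.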
  7. CortexDragon

    Direct2d/directwrite with DX11

    There is another method to put text on screen: you use Direct2D to write the text to a texture, then use D3D to draw a square or rectangle on the screen using that texture. The advantages of this way over writing text to the back buffer are:
    1. You don't have to do the Direct2D writing every frame. Good if you have large amounts of text on complex dialogs.
    2. You don't have to change the texture format of your swap chain to BGRA. You simply give that one texture that gets written to by Direct2D this format.
    I use this method on my dialog boxes that the user can drag around the screen like windows. A rough sketch is below.
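    A minimal C++ sketch of the text-to-texture approach (an assumption-laden outline: it assumes the D3D11 device was created with D3D11_CREATE_DEVICE_BGRA_SUPPORT and that 'texture' is a B8G8R8A8_UNORM texture created with RENDER_TARGET and SHADER_RESOURCE bind flags; error checking and exact sizes are omitted):

    #include <d3d11.h>
    #include <d2d1.h>
    #include <d2d1helper.h>
    #include <dwrite.h>
    #include <wchar.h>

    void RenderTextToTexture(ID2D1Factory* d2dFactory, IDWriteFactory* dwriteFactory,
                             ID3D11Texture2D* texture, const wchar_t* text)
    {
        // Direct2D draws into the D3D11 texture through its DXGI surface.
        IDXGISurface* surface = nullptr;
        texture->QueryInterface(__uuidof(IDXGISurface), (void**)&surface);

        D2D1_RENDER_TARGET_PROPERTIES props = D2D1::RenderTargetProperties(
            D2D1_RENDER_TARGET_TYPE_DEFAULT,
            D2D1::PixelFormat(DXGI_FORMAT_B8G8R8A8_UNORM, D2D1_ALPHA_MODE_PREMULTIPLIED));

        ID2D1RenderTarget* rt = nullptr;
        d2dFactory->CreateDxgiSurfaceRenderTarget(surface, &props, &rt);

        IDWriteTextFormat* format = nullptr;
        dwriteFactory->CreateTextFormat(L"Segoe UI", nullptr,
            DWRITE_FONT_WEIGHT_NORMAL, DWRITE_FONT_STYLE_NORMAL, DWRITE_FONT_STRETCH_NORMAL,
            20.0f, L"en-us", &format);

        ID2D1SolidColorBrush* brush = nullptr;
        rt->CreateSolidColorBrush(D2D1::ColorF(D2D1::ColorF::White), &brush);

        rt->BeginDraw();
        rt->Clear(D2D1::ColorF(0.0f, 0.0f, 0.0f, 0.0f));   // transparent background
        rt->DrawText(text, (UINT32)wcslen(text), format,
                     D2D1::RectF(0.0f, 0.0f, 512.0f, 512.0f), brush);
        rt->EndDraw();

        // The texture's SRV can now be sampled when drawing the dialog quad in D3D11,
        // and only needs to be redrawn when the text actually changes.
        brush->Release(); format->Release(); rt->Release(); surface->Release();
    }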
  8. CortexDragon

    Direct2d/directwrite with DX11

    This DX10 page describes using Direct2D with D3D: https://msdn.microsoft.com/en-gb/library/windows/desktop/dd370966(v=vs.85).aspx You can do the same things in DX11 using your standard DX11 device instead of the DX10 one they create on that page. It describes both 1) writing to the swap chain and 2) writing to a texture that you use from your standard D3D code. I used it for the second approach in DX11. I created my factory using D2D1CreateFactory(D2D1_FACTORY_TYPE_SINGLE_THREADED, &pD2DFactory); One gotcha you have to watch out for is that Direct2D prefers textures in the BGRA format rather than RGBA. I don't know why.
  9. CortexDragon

    Artist To Programmer

    If you do write your own engine based on those 7 tutorials, you will probably want to add a post-process step to it at the end. It's a simple principle: you make your pixel shader output to a texture rather than the screen, then use that texture as an input for a separate draw of a big rectangle that you draw to the screen. In that rectangle's pixel shader you do post-process effects like blur etc. (a rough sketch is below). I can't find an up-to-date link for a simple example of post-processing in DX11. The Rastertek blur sample explains it, but I don't know if it will still compile as it's an old sample: http://www.rastertek.com/tutdx11.html The Nvidia developer site has some more advanced samples.
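    A minimal HLSL sketch of the post-process pass (the scene is assumed to have been rendered into gSceneTex, this pixel shader runs on a fullscreen quad drawn to the back buffer, and gTexelSize is an assumed constant holding 1/width and 1/height of the scene texture):

    Texture2D    gSceneTex : register(t0);
    SamplerState gLinear   : register(s0);

    cbuffer PostProcessCB : register(b0)
    {
        float2 gTexelSize;
    };

    float4 PSBlur(float4 pos : SV_POSITION, float2 uv : TEXCOORD0) : SV_TARGET
    {
        // Simple 3x3 box blur as a stand-in for any post-process effect.
        float4 sum = 0;
        [unroll]
        for (int y = -1; y <= 1; ++y)
        {
            [unroll]
            for (int x = -1; x <= 1; ++x)
            {
                sum += gSceneTex.Sample(gLinear, uv + float2(x, y) * gTexelSize);
            }
        }
        return sum / 9.0f;
    }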
  10. CortexDragon

    Artist To Programmer

    Disclaimer: I'm not a professional graphics programmer; for me it's a hobby (I am a professional programmer, however). I think a good starting point for any graphics programmer is to be able to write a basic, simple graphics engine in C++ and DirectX 11 (and possibly DX12), up to the stage of displaying objects, so you can play around with your own shaders in HLSL. It only takes 7 simple tutorials to learn this, and you can download the working code from the tutorial pages.
    For DX11, see Tutorial01 - 07 from this page: https://blogs.msdn.microsoft.com/chuckw/2013/09/20/directx-sdk-samples-catalog/
    Very important: make sure you actually read the 7 tutorial web pages rather than just downloading the code, as they give very good explanations. The 7 tutorial web pages are linked from this page, which is inside the page above: https://code.msdn.microsoft.com/Direct3D-Tutorial-Win32-829979ef
    Tutorial 1: Direct3D 11 basics https://msdn.microsoft.com/en-us/library/windows/apps/ff729718.aspx
    Tutorial 2: Rendering a triangle https://msdn.microsoft.com/en-us/library/windows/apps/ff729719.aspx
    Tutorial 3: Shaders and Effect Systems https://msdn.microsoft.com/en-us/library/windows/apps/ff729720.aspx
    Tutorial 4: 3D Spaces https://msdn.microsoft.com/en-us/library/windows/apps/ff729721.aspx
    Tutorial 5: 3D Transformation https://msdn.microsoft.com/en-us/library/windows/apps/ff729721.aspx
    Tutorial 6: Lighting https://msdn.microsoft.com/en-us/library/windows/apps/ff729721.aspx
    Tutorial 7: Texture mapping and constant buffers https://msdn.microsoft.com/en-us/library/windows/apps/ff729721.aspx
    For DX12 it's the "Hello" samples here: https://msdn.microsoft.com/en-us/library/windows/desktop/mt186624(v=vs.85).aspx
    For information on the whole of DX11, DX12 and HLSL: https://msdn.microsoft.com/en-us/library/windows/desktop/hh309466(v=vs.85).aspx The most important sections are "Direct3D 11 Graphics" or "Direct3D 12 Graphics", depending on which you are using, and of course "HLSL".
    HLSL will probably be very important for you, so browse through the whole of the HLSL section. Inside it you will find the "Reference for HLSL" (https://msdn.microsoft.com/en-us/library/windows/desktop/bb509638(v=vs.85).aspx); read the chapters "Language Syntax" and "Intrinsic Functions". I find myself constantly referring to these sections when I write HLSL code. Texture objects are also useful to understand in HLSL: https://msdn.microsoft.com/en-us/library/windows/desktop/bb509700(v=vs.85).aspx
  11. This may not be the reason, but it will cause problems: uint distance = asuint(length(p - q1)); The asuint() function stores the float's bit pattern in your uint. However, float bit patterns are complicated, so you cannot simply compare them as uints to determine which is the higher number. It would be safer to simply cast to uint, or perhaps multiply by 1000 and then cast to uint for more precision (rough sketch below).
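    A minimal HLSL sketch of the suggested fix (the function name and the 1000 scale factor are just illustrative):

    // Build the comparison key by casting the scaled distance to uint, rather than
    // reinterpreting the float's bits with asuint().
    uint DistanceKey(float3 p, float3 q1)
    {
        float dist = length(p - q1);
        // Scaling by 1000 before truncating keeps about three decimal places;
        // ordinary uint comparisons of these keys then match comparisons of the distances.
        return (uint)(dist * 1000.0f);
    }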
  12. CortexDragon

    Descriptors and heaps

    Instead of using a ring-buffer approach for the descriptor heap, I use a linked list on the CPU that initially stores a link for each element number in my texture region of the descriptor heap (each link knows its element number). When I want a descriptor slot, I pop the first link from this list; the link tells me its record number in the descriptor heap. When I want to release a texture, I return its link to the list, so its descriptor slot is free for someone else to grab. It gets more complicated if you want to grab or release links from different CPU threads, as you then have to use interlocked linked lists. A rough sketch of the idea is below.
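    A minimal C++ sketch of the same idea, using an index-based free list rather than an actual linked list (names are illustrative; it assumes single-threaded use and that the heap region is never exhausted):

    #include <cstdint>
    #include <vector>

    class DescriptorFreeList
    {
    public:
        explicit DescriptorFreeList(uint32_t slotCount)
        {
            // Initially every element number in the texture region of the heap is free.
            freeSlots.reserve(slotCount);
            for (uint32_t i = 0; i < slotCount; ++i)
                freeSlots.push_back(slotCount - 1 - i);   // so slot 0 is popped first
        }

        // Grab a descriptor slot for a new texture.
        uint32_t Allocate()
        {
            uint32_t slot = freeSlots.back();
            freeSlots.pop_back();
            return slot;   // the SRV is then written at heapStart + slot * descriptorSize
        }

        // Return the slot when the texture is released so someone else can grab it.
        void Free(uint32_t slot)
        {
            freeSlots.push_back(slot);
        }

    private:
        std::vector<uint32_t> freeSlots;
    };

    For grabbing and releasing from several CPU threads you would protect this with a lock, or use a genuinely lock-free (interlocked) list as described above.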
  13. CortexDragon

    Clip points hidden by the model

    I do my mouse clicking on objects entirely in my standard pixel shader. It's fast and accurate to individual pixels. Basically, if the pixel coordinate is the mouse coordinate, I send information about the object back to a UAV, which the CPU then picks up. If I'm using [earlydepthstencil], that already copes with clicking on the closest thing and not the things behind it. If I'm NOT using [earlydepthstencil], I use InterlockedMin() to make sure only the closest object at the clicked location is sent back to the CPU. To do this, for every 16 bits of information I want to send back to the CPU, I construct a uint where the high 16 bits are the depth and the low 16 bits are the data I want to send back. By using InterlockedMin(), the value with the lowest depth is what gets sent for that pixel. A rough sketch is below.
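    A minimal HLSL sketch of the InterlockedMin() version, called from the main pixel shader (gMousePos, gPickResult, the u1 register and the 16-bit object id are all assumptions; the CPU clears gPickResult[0] to 0xFFFFFFFF before the frame and reads it back afterwards):

    cbuffer PickCB : register(b1)
    {
        uint2 gMousePos;     // mouse position in pixel coordinates
    };

    // One uint per 16 bits of data to return: high 16 bits = depth, low 16 bits = data.
    RWStructuredBuffer<uint> gPickResult : register(u1);

    void ReportPick(float4 svPos, float depth01, uint objectId)
    {
        if (all(uint2(svPos.xy) == gMousePos))
        {
            // Quantise depth to 16 bits and pack it into the high half of the key,
            // so InterlockedMin keeps the value belonging to the closest surface.
            uint depth16 = (uint)(saturate(depth01) * 65535.0f);
            uint key     = (depth16 << 16) | (objectId & 0xFFFF);
            InterlockedMin(gPickResult[0], key);
        }
    }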
  14. I had described a generic way for a 3D renderer with transparency using [earlydepthstencil], but since the things in front are rather small in your case (you don't have large opaque objects in front of other opaque objects), I realised that your way of not using depth testing might be faster for your situation. I had written something like this, which lets you take advantage of [earlydepthstencil]:
    1. First draw the opaque (non-transparent) objects front to back with depth test enabled, depth write enabled, and [earlydepthstencil].
    2. Then draw the transparent objects back to front with depth test enabled, depth write DISABLED, and [earlydepthstencil].
    The opaque objects are drawn front to back so that further-back objects drawn later, which are blocked by closer objects, bypass their pixel shaders thanks to [earlydepthstencil]. The transparent objects keep the depth test so that any blocked by the previously drawn opaque objects are not drawn. Their depth write is disabled so they don't accidentally block something drawn behind them later due to imperfect back-to-front sorting. A rough sketch of the two depth states is below.
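    A minimal C++/D3D11-style sketch of the two depth-stencil states for those passes (assuming D3D11; in D3D12 the equivalent settings go into the pipeline state's depth-stencil desc):

    #include <d3d11.h>

    void CreateDepthStates(ID3D11Device* device,
                           ID3D11DepthStencilState** opaqueState,
                           ID3D11DepthStencilState** transparentState)
    {
        D3D11_DEPTH_STENCIL_DESC desc = {};
        desc.DepthEnable = TRUE;
        desc.DepthFunc   = D3D11_COMPARISON_LESS_EQUAL;

        // Pass 1: opaque objects, drawn front to back, depth write on.
        desc.DepthWriteMask = D3D11_DEPTH_WRITE_MASK_ALL;
        device->CreateDepthStencilState(&desc, opaqueState);

        // Pass 2: transparent objects, drawn back to front, depth test only.
        desc.DepthWriteMask = D3D11_DEPTH_WRITE_MASK_ZERO;
        device->CreateDepthStencilState(&desc, transparentState);
    }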
  15. Yes, that seems a good way. A depth stencil is only of use if it's used to block other later draws.