
NikiTo

Member
  • Content count: 62
  • Joined
  • Last visited

Community Reputation: 167 Neutral

1 Follower

About NikiTo

  • Rank
    Member

Personal Information

  • Interests
    Art
    Design
    Programming
  1. DX12 Discard vs Clip

    Ok, I think I understand it now. When people say the GPU keeps evaluating the statements, they actually mean that the other pixels of the grid/group evaluate the statements. It is just never explained clearly enough. This way, I think the texture fetches inside my discarded pixel are spared. I hate to post useless posts...
  2. Some people say "discard" has no positive effect on optimization; other people say it will at least spare the texture fetches:

        if (color.A < 0.1f) { //discard; clip(-1); }
        // tons of texture reads follow here
        // and loops too

    Some people say that "discard" only masks out the output of the pixel shader, while still evaluating all the statements after the "discard" instruction. MSDN says: "discard: Do not output the result of the current pixel. clip: Discards the current pixel." As usual it is unclear, but it suggests that "clip" could discard the whole pixel (maybe stopping execution too). I think that, at least for thermal and energy-consumption reasons, the GPU should not evaluate the statements after "discard", but some people on the internet say the GPU computes the statements anyway. What I am most worried about are the texture fetches after discard/clip. (What if, after the discard, I have an expensive branch decision that makes the approved cheap-branch neighbor pixels stall for nothing? This is crazy. See the sketch below.)
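
    A minimal HLSL sketch of the early-out pattern in question, assuming an alpha-keyed input texture; the resource names and the 0.1 threshold are hypothetical:

        Texture2D gAlbedo : register(t0);
        SamplerState gSampler : register(s0);

        float4 PSMain(float4 pos : SV_Position, float2 uv : TEXCOORD0) : SV_Target
        {
            float4 color = gAlbedo.Sample(gSampler, uv);

            // clip() issues a discard when its argument is negative, so the
            // expensive fetches below are skipped for this pixel. The wave it
            // belongs to keeps running as long as any other lane in it survives.
            clip(color.a < 0.1f ? -1.0f : 1.0f);

            // ... tons of texture reads and loops would follow here ...
            return color;
        }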
  3. Thank you for the answer, @Krypt0n! I have it clearer now. I will use discard and not worry about it for now, and I will leave depth/stencil aside as a final optimization.
  4. Imagine that I have big triangles with a static-TV (white noise) texture. I need to discard only the black pixels.
  5. Thank you, all! I am going to do what @MJP suggested. Plus, I can decide which pixel is discarded in the previous render pass, so I have no need for a dedicated pass for Z-values. MJP's solution is purrfect for me. It is just that, at the moment of making design decisions, I assumed that stencil would be easier to use. Now I have to use a pure depth format, excluding the stencil byte from the format, and use the depth functionality for stenciling instead. I don't understand why stencil is so tricky to use that it forces people to emulate the same functionality with the depth buffer. Is stencil somehow deprecated?!
  6. I have a problem. My shaders are huge, in the sense that they have a lot of code inside, and many of my pixels should be completely discarded. I could use a comparison and discard at the very beginning of the shader, but as far as I understand, the discard statement does not save workload at all, as the pixel has to stall until its long-running neighbor shaders complete.

    Initially I wanted to use stencil to discard pixels before the execution flow even enters the shader, before the GPU distributes/allocates resources for it, avoiding the stall of the pixel-shader execution flow. I assumed that depth/stencil discards pixels before the pixel shader, but I see now that it happens inside the very last output-merger stage. It seems extremely inefficient to render, say, a little mirror in a scene with a big viewport that way.

    Why did they put the stencil test in the output merger anyway? Handling of stencil is so limited compared to other resources. Do people use the stencil functionality at all for games, or do they prefer discard/clip? Will the GPU stall the pixel if I issue a discard at the very beginning of the pixel shader, or will it already start using the freed-up resources to render another pixel?!?! (See the sketch below.)
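
    For what it's worth, hardware usually runs the depth/stencil test before the shader anyway ("early Z") when the shader does not write depth, and HLSL can request this explicitly. A minimal sketch, assuming a shader that writes no depth (names hypothetical):

        [earlydepthstencil]  // force the depth/stencil test before the shader body
        float4 PSMain(float4 pos : SV_Position, float2 uv : TEXCOORD0) : SV_Target
        {
            // Pixels rejected by the depth/stencil test never execute this code,
            // so the huge shader body is skipped for them entirely.
            return float4(uv, 0.0f, 1.0f); // placeholder shading
        }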
  7. Thank you, all! I am so glad you helped me use RTVs without mips! @SoldierOfLight I was doing exactly that heap-sharing thing. Now I use just a single committed resource as you told me to, and it works correctly with MipLevels of 1 on both adapters. Thank you again for solving my 3-day struggle!
  8. AMD forces me to use MipLevels in order to read from a heap previously used as an RTV. Intel's integrated GPU works fine with MipLevels = 1 inside the D3D12_RESOURCE_DESC; for AMD I have to set it to 0 (or 2). MSDN says 0 means the maximum number of levels. With MipLevels = 1, AMD renders fine to the RTV, but reading from the RTV shows the image reordered. Is setting MipLevels to something other than 1 going to cost me too much memory or execution time while rendering to RTVs? I really don't need mipmaps at all (not for 99% of my app). (I use the same 2D D3D12_RESOURCE_DESC for both the SRV and the RTV sharing the same heap. Using 1 for MipLevels in that D3D12_RESOURCE_DESC gives me results like in the photos attached below. Using 0 or 2 makes AMD read fine from the RTV. I wish I could sort this out some other way, but in the last two days I've tried almost everything, and this is the only way it works on my machine.)
  9. I would love to see this in games: image one, image two. (I don't understand how it works physically; I tried to read an explanation about it but wasn't able to understand it :S I think it would add a lot of realism, and it could be faked in games.) The point is that if we had a supercomputer where we could load all the physical laws and simulate a scene, it would be boring for me. What makes it interesting for me is exactly the use of shadow mapping instead of ray tracing for shadows (for example). The cheating makes it fun for me. If a post-process filter can fake it, great for me! Normally a gamer would not pick up a game based on the technique or the API used, but based on how it looks (besides other things like marketing and playability design (boobs etc.)). (Attachment: rdehtgyfg.bmp)
  10. @Vilem Otte You are right. For reflections, the noise removal would blur the reflected image. For your previous (Quake 2) example, it should work.
  11. @Vilem Otte I would ask the 3D artist to provide me with a model carrying additional data: a unique index per polygon. This way, I could apply a noise-removal/blur post filter and remove the noise without damaging the important edges. I think it would look nice enough this way. About the ray tracing, I am not sure if the tracer is simply iterating/bouncing around, or solving the n-body problem between reflections for real.
  12. Programming challenge: as you can see, even in the real world, light fades away after some number of bounces; it does not go on infinitely. I don't want to bring Matrix theory into the conversation, but it is worth mentioning that this could be cheating by the reality-simulation hardware.

    I currently work in computer vision, and I have found out two things: first, compared to computers, the brain has infinite computational power; second, there is no need to program an app that surpasses the real world. Sometimes it is hard for me to distinguish faces when they are painted, for example, so I don't expect my app to recognize painted faces either. I am not trying to surpass the brain, I only try to get close to it (failing, currently).

    For this programming challenge, I would make light fade out completely after very few bounces. It would still look nearly real and would save computations. I think that, for a game, even 3 levels of mirror recursion would be enough if they fade gradually to dark/no reflection (see the sketch below).

    A curious experiment would be to raise a child from newborn with VR glasses that show him a reality like Wolfenstein 3D (1992), and when his brain adapts to operating in lame graphics, take the glasses off and watch the reaction. It is something that happened to all of us who are 25+ years old. For example, I always remembered the movie Robot Jox as amazingly real-looking, until I watched it again and it was sooo lameeee. My brain had added amazing CGI effects to my memories of that lame movie. (I would not let my children play 3D games too often. 3D games are low-polygon and monocular, so the growing, adapting brain of my child would get used to a fake reality and would perceive the real one differently. Once the brain has developed with real perspective/lights/physics from playing with real toys in a real environment, it is OK to play games, but again, not too often.) That's why I love advanced-reality games: it is like hacking perception with any kind of technique, giving the brain the best available (in terms of hardware) lie to make it believe. Most of the computational job is actually done by the brain.
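
    A minimal HLSL sketch of the fade idea; SampleReflection and the 0.5 fade factor are hypothetical stand-ins for one reflection lookup per recursion level:

        // Hypothetical stand-in for one reflection lookup.
        float3 SampleReflection(float3 origin, float3 dir)
        {
            return float3(0.5f, 0.5f, 0.5f); // a real tracer would sample the scene here
        }

        float3 FadedReflection(float3 origin, float3 dir)
        {
            float3 result = 0;
            float  weight = 1.0f;
            [unroll]
            for (int bounce = 0; bounce < 3; ++bounce) // 3 levels of mirror recursion
            {
                weight *= 0.5f;  // each level fades gradually toward darkness
                result += weight * SampleReflection(origin, dir);
                // a real tracer would update origin/dir from the hit point here
            }
            return result;
        }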
  13. Yes, it could be the framerate. It caught my attention quickly; it is easy to notice. And in the Star Wars demo, I noticed they did not use textures for the reflecting surfaces, only flat materials. I guess they saved a few fetches that way.
  14. Thank you for the suggestion! I will do as you say if everything else fails. (My laptop GPU reports "total memory bandwidth 14 GByte/s" and "Memory Bit Rate 1.80 Gbps".)
  15. @galop1n You are correct, I don't need a 3D texture. I just read that a 3D texture's main advantage is three-dimensional interpolation, and I don't need that. @Hodgman Thanks for the suggestions. I try to avoid UAVs for now, because I think they should be slower by default than a regular RTV. I need what is shown in the picture: imagine that I have to read 6 photos inside the shader, blend them in a particular way, and output 50 levels of gray of the resulting image. Running the shader several times, I would read the photos 6 extra times for each extra draw call. Now that I rethink it, maybe using the GS was a bad solution, because it will still invoke the shader 49 extra times. If there is no way to select from the pixel shader which slice to render to, with these slices numbering more than 8, I am screwed... (see the sketch below). (Attachment: New Bitmap Image.bmp)
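
    For reference, a hedged sketch of selecting the slice without a GS: some hardware lets the vertex shader write SV_RenderTargetArrayIndex directly (the VPAndRTArrayIndexFromAnyShaderFeedingRasterizerSupportedWithoutGSEmulation cap in D3D12_FEATURE_DATA_D3D12_OPTIONS); the names below are hypothetical:

        struct VSOut
        {
            float4 pos   : SV_Position;
            uint   slice : SV_RenderTargetArrayIndex; // selects the RTV array slice
        };

        VSOut VSMain(float3 posIn : POSITION, uint instance : SV_InstanceID)
        {
            VSOut o;
            o.pos   = float4(posIn, 1.0f);
            o.slice = instance; // e.g. one instance per gray level / array slice
            return o;
        }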