Everything posted by Meltac

  1. Many thanks, MaulingMonkey, for checking and commenting on my screenshot. Your explanation helped me a lot in understanding what happens, and what the others said here before! Actually, I was thinking about adding dust particles and/or jittering anyway. Also, I've quickly tried one dithering algorithm and it does help; however, fine-tuning it to prevent too many rendering artifacts seems to be tricky. So for now this is OK for me. Once again I've learned a lot. Thanks again to everybody!
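For reference, the kind of screen-space dithering mentioned above can be sketched as follows. This is a minimal illustration, not the actual code from the thread; the hash constants are a widely used frac/sin construction, and the 255 assumes an 8-bit-per-channel back buffer:

```hlsl
// Illustrative names; the hash is a common frac/sin screen-space hash.
float hash12(float2 p)
{
    return frac(sin(dot(p, float2(12.9898, 78.233))) * 43758.5453);
}

float3 apply_dither(float3 color, float2 screen_pos)
{
    // Add up to +/- half an 8-bit quantization step of noise, which
    // breaks banding edges up into much less noticeable grain.
    float noise = hash12(screen_pos) - 0.5;
    return color + noise / 255.0;
}
```

Called at the very end of the post-processing chain, just before the result is written to the 8-bit target, this trades visible bands for fine noise.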
  2. Hi all! In my post-processing pixel shader (HLSL, SM5) I am facing heavy banding issues, as can be seen in the attached image. It doesn't matter what type of effect I am implementing (fake volumetric light in this case); it always happens when I apply some color blending, e.g. fading out. Could this be a precision issue (I'm working with float numbers), or where do I have to start looking? Any hints will be appreciated!
  3. Thanks, gentlemen. Honestly, I am not so sure about that. The rest of the rendered graphics (the parts that don't come from my own shaders) does not show any banding or visible dithering. Same with photos viewed on my monitor, e.g. gradual color / brightness levels in the sky. I can see some banding going on when I zoom in very closely, but none in full-screen mode. The banding that I encounter is much stronger and plainly visible, as you can see in the attached screenshot. There must be another source for this. Convert to what format? Unfortunately it's not my own engine; it's a proprietary game engine (X-Ray), and I only have access to some of its HLSL shaders, so I can't say exactly how the pipeline works. As far as I know, the gamma correction is done inside the engine (i.e. in the host application), as there is nothing gamma-related in the shaders. But I do not know the specifics. I've attached a BMP, will this also help (instead of a PNG)? XR_3DA_2017-07-10_20-40-44-01.bmp
  4. Alright, I've compared *all* DX / D3D files in SysWOW64 from both my Vista and my W10 installation (using a diff tool with binary content comparison) and then copied every file that differed from Vista to Win10 (DirectX 9 to 11 files; 12 was not present on Vista). Beforehand I had double-checked that these files are actually being used by the engine, by renaming them and getting startup errors accordingly. So now all DirectX files being used should be binary-identical on both installations, Vista and W10, no matter whether they come from the OS or from the game installation folder. But the behavior is still exactly the same?! The compiler still does not behave as it does on Vista, omitting the same code parts as before. Any further thoughts?
  5. Hi all! I'm facing some weird problems with a game's HLSL shaders, on Windows 10 exclusively. The shaders are compiled against DX11 / shader model 5. In case it matters, they still use the old-fashioned DX9 methods (tex2D etc.) for sampling, because they were initially written for DX9 / shader model 3 and later migrated with the least possible effort.

There are different variations of the pixel shaders that the game engine applies depending on in-game weather (i.e. one pixel shader for dry weather, a different one for rain, etc.). As those variations share large parts of the same code, the shared parts are referenced via #include directives, and the differences are implemented with #define and #undef directives (e.g. the rain shader defines a part that renders the ground as wet, the dry shader does not). The game engine compiles all source shader files upon start-up, one by one, in a predefined (hard-coded) order, using the usual D3D calls.

Now: on my Vista 64-bit environment, the D3D compiler seems to figure out which included shader source files are used anywhere (i.e. in at least one of the pixel shader variants), and compiles all used includes. On my Windows 10 64-bit environment, things behave differently: if the compiler encounters an include that is not used in the first compiled shader file, it DOES NOT COMPILE that include file, regardless of it being used in a subsequently compiled file! Pseudo-code sample:

```hlsl
// Pixel shader for dry weather (= first file being compiled)
#include "SomeSettingsFile.h"
#include "GenericShaderForAllWeatherTypes.h"

// Pixel shader for rainy weather (= second file being compiled)
#include "SomeSettingsFile.h"
#define IT_RAINS
#include "GenericShaderForAllWeatherTypes.h"

// GenericShaderForAllWeatherTypes.h
[...]
#ifdef IT_RAINS
#include "RainShader.h"
render_things_as_wet();
#endif
[...]
```

On Vista this works as expected: the source file "RainShader.h" is always compiled and thus available at runtime. On Windows 10 it does not: "RainShader.h" and the code segment starting with #ifdef IT_RAINS are omitted / "optimized away" by the compiler, because the first shader being compiled does not use them! Any ideas?
  6. Thanks for your answer. I am using a copy of the compiler DLL inside my bin directory, so compiler-wise the file should be the same (sorry, I should have mentioned that in the first place). So I suspect it must be another D3D / DX file that makes the difference, perhaps some DLL that the compiler uses. Any ideas how I could figure that out?
  7. Hi all, I'm looking for a way to distinguish sharp areas in an image from those that exceed a certain amount of blur / "unsharpness", in HLSL. More specifically, I want to detect / mask / mark the entire out-of-focus part of a photograph, or, vice versa, its in-focus part. So far I've tested several approaches dealing either with edge detection or with the overall sharpness of images, but they all fail to properly and robustly mask areas of a certain blur or sharpness. I wouldn't have imagined that this would be such a hard task, since the human eye can distinguish in-focus (sharp) from out-of-focus (blurred) areas of a photo quite easily. Any ideas or hints?
  8. Thanks so far, guys. I've read that some people use derivatives (e.g. the third derivative) to detect blur / sharpness. Does anyone know how to implement a derivative in HLSL?
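One point worth noting: HLSL's built-in ddx/ddy intrinsics measure derivatives of shader expressions across a 2x2 pixel quad, not image derivatives, so a derivative of the image itself is usually built from neighboring texture samples. A minimal sketch of a Laplacian-based sharpness estimate (s_image and g_pixelSize are assumed names for the source texture and 1/resolution):

```hlsl
sampler2D s_image;   // source image (assumed)
float2 g_pixelSize;  // 1.0 / screen resolution (assumed)

float luma(float3 c)
{
    return dot(c, float3(0.299, 0.587, 0.114));
}

// Laplacian high-pass magnitude: large in sharp (in-focus) regions,
// near zero in blurred (out-of-focus) regions. Typically averaged over
// a neighborhood afterwards to get a stable in-focus mask.
float sharpness(float2 uv)
{
    float c = luma(tex2D(s_image, uv).rgb);
    float l = luma(tex2D(s_image, uv - float2(g_pixelSize.x, 0)).rgb);
    float r = luma(tex2D(s_image, uv + float2(g_pixelSize.x, 0)).rgb);
    float u = luma(tex2D(s_image, uv - float2(0, g_pixelSize.y)).rgb);
    float d = luma(tex2D(s_image, uv + float2(0, g_pixelSize.y)).rgb);
    return abs(l + r + u + d - 4.0 * c);
}
```

Thresholding this value alone is fragile in flat but in-focus areas (no detail means no response), which may be why the edge-detection approaches mentioned above fail there.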
  9. OK, thanks - I think I'll just try it myself then.
  10. Related question: are there any known compile-time performance gains (or disadvantages) when using the newer versions of the compiler? Or, in other words, in case several of those versions work as expected, which would be the fastest (in terms of compilation times)?
  11. D3DCOMPILER_47.dll?

      Thanks, I've just tested it. No difference for non-compute shaders (at least in my case), unfortunately.
  12. On a related topic, does D3DCOMPILER_47.dll bring any performance or memory benefits over the _43 version?
  13. Hey guys, just wanted to let those who tried to help me know that the issue has been solved - my code was right, it was the engine (host application) delivering wrong matrices under certain conditions.   So everything's fine now, thanks again for your help :)
  14. Hi all! In a 3D game that I'm writing a (post-process) pixel shader for, I try to transform a view-space coordinate to a world-space coordinate. My HLSL code looks roughly like this:

      float3 world_pos = mul(view_pos, (float3x3)m_WV) + camera_pos;

      This works, but only for certain view angles and camera positions. E.g. when I look "from south" at the position in question it looks as it should (I mark the position to be transformed on screen as a colored sphere), but when turning the camera more than about 20 degrees, or shifting the camera position so that I look "from east", the transformation renders completely off. I must be missing something here, but I don't know what. I've tried normalizing, transposing, and some other basic mutations / additions to my code but didn't find any working solution. Any hints?
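As the follow-up post in this thread notes, the code itself turned out to be correct and the engine was supplying wrong matrices. For reference only, a common way to write this transform uses the full 4x4 inverse view matrix, so rotation and translation come from one consistent source (m_inv_V is an assumed name; the mul order assumes the row-vector convention):

```hlsl
// Inverse of the view matrix, supplied by the host application (assumed).
float4x4 m_inv_V;

float3 view_to_world(float3 view_pos)
{
    // w = 1 so the matrix's translation part is applied as well;
    // with column-vector conventions the mul arguments would be swapped.
    return mul(float4(view_pos, 1.0), m_inv_V).xyz;
}
```

Symptoms like "correct from one viewing direction, wrong after rotating the camera" typically point at a rotation part and a translation part taken from inconsistent sources or conventions.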
  15. Hi all! I am looking for an efficient way to shuffle an array in plain HLSL (i.e., create a random sequence where every index is used exactly once). I've learned so far that the Fisher–Yates (Knuth) shuffle would do the job, but I haven't found any HLSL implementations of it. So I tried implementing the algorithm myself, but the naive approach of porting code from a different language like C++ to HLSL produces rather slow-running results. Any ideas for a really fast way to achieve this?
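A direct HLSL port of Fisher–Yates looks like the sketch below. It is inherently serial (each swap depends on the previous one), which matches the observation that a naive port runs slowly on a GPU. All names, the array size, and the PRNG are illustrative:

```hlsl
#define N 16  // illustrative array size

static uint rng_state;

uint next_rand()
{
    // xorshift32 PRNG (illustrative choice)
    rng_state ^= rng_state << 13;
    rng_state ^= rng_state >> 17;
    rng_state ^= rng_state << 5;
    return rng_state;
}

// Classic Fisher-Yates: every permutation of a[] is reachable, each
// index ends up used exactly once.
void shuffle(inout uint a[N], uint seed)
{
    rng_state = seed | 1u;  // avoid the all-zero PRNG state
    [loop]
    for (uint i = N - 1; i > 0; --i)
    {
        uint j = next_rand() % (i + 1);  // pick j in [0, i]
        uint t = a[i];
        a[i] = a[j];
        a[j] = t;
    }
}
```

For per-pixel use, a closed-form permutation (as discussed later in this thread) avoids the array and the serial loop entirely.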
  16. Hash! I need a hash, simple as it is! I should have come to that conclusion myself in the first place. I think I have explained myself well enough to make clear why I reacted the way I did. It wasn't just some "other suggestion" that made me mad, but the fact that some people insisted on such an "other suggestion" even after I had stated clearly that it is not an option in my case. Nonetheless, your hint about goniometric functions, even though not exactly what I was looking for, has led me to the solution - a simple and stupid hash. Thanks again.
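The "simple and stupid hash" resolution can be sketched like this: each pixel gets a fixed pseudo-random value from a screen-space hash, and it is highlighted once the CPU-supplied threshold exceeds that value. Since a pixel's hash never changes, raising the threshold only ever adds highlighted pixels, as required. The hash and g_threshold are illustrative assumptions, not the thread's actual code:

```hlsl
float g_threshold;  // value in [0, 1] passed from the application (assumed)

float hash12(float2 uv)
{
    // widely used frac/sin screen-space hash (illustrative)
    return frac(sin(dot(uv, float2(12.9898, 78.233))) * 43758.5453);
}

float4 ps_main(float2 uv : TEXCOORD0) : SV_Target
{
    // Each pixel's hash is fixed, so the set of highlighted pixels
    // grows monotonically as g_threshold rises from 0 to 1.
    bool highlighted = hash12(uv) < g_threshold;
    return highlighted ? float4(1, 1, 1, 1) : float4(0, 0, 0, 1);
}
```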
  17. Thanks. I don't have to do it for every pixel. That was, as I said, a simplification I had to make to avoid needing to tell a whole-day story here. Actually I pixelate ("downsample" in some sense) the screen to some extent, say 48 x 48 blocks. Would you then still suggest going the way of goniometric functions, or is there a simpler approach? I've already found those pages myself, but thanks anyway. I'm sorry, but you're still off-topic, as *repeatedly* questioning my not being able to do it CPU-side is off-topic. Don't get me wrong here, it's completely OK to ask whether I couldn't do it on the CPU side - ONCE. But sticking to that and asking the same thing over and over is just annoying, and neither helpful nor constructive.
  18. HOLY COW! Why do people here always question why something is the way it is? Can't you just accept that THIS IS NOT POSSIBLE IN MY PARTICULAR CASE?! Even if you might not believe it or be able to imagine it - there ARE developers who do NOT have the privilege of building their own engine / host application, or of building effects on top of some open environment! I just can't do it on the CPU, and I can't generate, bind, or access any textures in my case! And I don't need to explain why! It's just the way it is. Sorry that I react this upset, but I'm really getting absolutely sick and tired of repeating myself all the time!
  19. That's why I need a deterministic (mathematical) function doing it. I don't intend to calculate anything just to store/cache it; I want to call a function that always returns the same value for the same inputs, but that value should be nothing continuous or linear - something like "fixed noise". In the most efficient way, not the most inefficient. And as I said, calculating anything on the CPU is out of scope here. It has to be done on the GPU, and purely in HLSL. Using any additional textures is not an option either. OK, I'll try, although I have to simplify a lot to avoid writing a novel... Imagine I wanted to have the entire screen black (i.e. no pixels colored), and then highlight one pixel after another, depending on a single variable passed from the CPU to the GPU holding a value between 0 and 1. Value 0 means all pixels are black; value 1 means all are highlighted; any value in between stands for the percentage of highlighted pixels. The application (CPU) starts increasing the value from 0 to 1, so that one pixel after the other gets highlighted. This happens in a "random" or "noisy" way, but still deterministically. For example, the first value larger than 0 (say 0.001) will always highlight the pixel at screen position (0.325, 0.871), with all other pixels remaining black. The second step (e.g. value 0.002), happening for example 0.1 seconds later, will additionally highlight pixel (0.728, 0.523), while the first pixel remains highlighted - and so on, until the whole screen has been highlighted. At no point during the process (i.e. while increasing the passed value) will any pixel that has already been highlighted turn black again. So, at every time (for every value passed from CPU to GPU) it is well-defined which pixels have to be highlighted (whatever "highlighting" might mean, pixel-shader-wise). Has that become clear now?
  20.   Once for each pixel on the screen (per frame, of course). Can (no, must) be the same result each time.
  21.   Yes it would, but in my case that's not possible. I need to do it inside a post-processing pixel shader.
  22. Hmm, I might have chosen the wrong approach anyway. What I actually need is a deterministic function that takes an index and a range limit as input and returns a unique "shuffled" number not larger than the specified limit:

      int f(int index, int limit) // returns a value between 0 and limit

      Example: a range / upper limit of 1000 --> input and return value have to be between 0 and 1000:

      f(1, 1000)   --> 724
      f(2, 1000)   --> 28
      f(3, 1000)   --> 301
      f(4, 1000)   --> 513
      f(500, 1000) --> 8
      f(999, 1000) --> 148

      Each return value occurs exactly once (and within the specified range). So basically the same as shuffling and then querying an array of 1..1000, but ideally without having to define and iterate through an array. Any ideas?
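One array-free way to get such a bijection is a small Feistel network over a power-of-two domain, combined with cycle-walking to discard values outside the range: a Feistel network is a bijection by construction, and re-applying it until the result lands inside [0, limit) preserves that property. This is a sketch with assumed names; note the data-dependent loop, which can cost performance on a GPU:

```hlsl
uint round_fn(uint x, uint key)
{
    // cheap integer hash as the Feistel round function (illustrative)
    x = (x ^ key) * 0x9E3779B1u;
    x ^= x >> 16;
    return x;
}

// Bijection on [0, limit): as index runs over 0..limit-1, every value
// in that range is returned exactly once, with no array involved.
uint f(uint index, uint limit)
{
    // smallest half-width such that 2^(2*halfBits) covers the range
    uint halfBits = (uint)ceil(log2((float)limit) * 0.5);
    uint halfMask = (1u << halfBits) - 1u;

    uint x = index;
    do
    {
        uint l = x >> halfBits;
        uint r = x & halfMask;
        [unroll]
        for (uint i = 0; i < 4; ++i)  // 4 Feistel rounds
        {
            uint t = r;
            r = l ^ (round_fn(r, i) & halfMask);
            l = t;
        }
        x = (l << halfBits) | r;
    } while (x >= limit);  // cycle-walk back into [0, limit)
    return x;
}
```

The cycle-walking loop terminates because the Feistel permutation's cycle containing the (in-range) starting index must eventually revisit an in-range value; on average it takes fewer than two iterations when the power-of-two domain is less than twice the limit.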
  23. I was just about to say that this was not true when I realized we're talking about directional lights, not spot lights. I have seen (and used) a couple of shader / lighting / shadowing algorithms taking a light's world-space *position* into account, sometimes converting it into view or screen space for the lighting calculations. So there *are* cases where it's completely valid to use a light's position, not its direction. However, as you correctly say, a directional light is defined by - as the name already says - the direction of the light, with the light's position considered to be at a very far, effectively infinite distance. That's the way sunlight physically behaves from the earth's perspective: although the sun itself has a defined position somewhere in the universe, the light rays reaching the earth are nearly parallel, so it is much more accurate to treat sunlight as directional than as a spot light.
  24. From what I've seen in various examples and game engines, both are valid ways to go. As you say, position and direction are basically two sides of the same thing; it's mainly a matter of perspective. So, there are lighting / shading algorithms that take a light's (world / view / screen space) position into account (as in coordinates), and there are others using the direction from the point of view (i.e. the camera) to the light's origin for their calculations. It's essentially the same vector, just reversed. Thus, if you are to implement an engine or application yourself, you can do it both ways.
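The two formulations described above can be written side by side; all names here are illustrative:

```hlsl
float3 g_lightDir;  // normalized direction the light shines in (assumed)
float3 g_lightPos;  // light's world-space position (assumed)

// Directional light: the vector toward the light is a constant,
// the same for every shaded point (position at infinity).
float3 to_light_directional()
{
    return -g_lightDir;
}

// Positional (point/spot) light: the vector toward the light is
// recomputed per shaded point from the light's position.
float3 to_light_positional(float3 world_pos)
{
    return normalize(g_lightPos - world_pos);
}
```

Both return a "to the light" vector usable in the same N·L lighting term; only how that vector is obtained differs.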