JorenJoestar

Members
  • Content count: 256
Community Reputation

381 Neutral

About JorenJoestar

  • Rank
    Member

Personal Information

  • Interests
    Art
    Business
    Design
    Education
    Programming

Social

  • Twitter
    GabrielSassone
  1. Fifth Engine

    42.
  2. Thank you guys! You are all super kind!
  3. Hey there! Big update! I am probably the only one, but I can't find a search/find feature in the forums anymore. What am I missing? (Using Chrome)
  4. That works, thanks a lot!  
  5. Sadly, glGetUniformLocation gives me a different number; hence the confusion! :/
  6. Hello everyone, I am trying to find the binding index of a uniform image2D in a compute shader. If I have this kind of syntax:

    layout(rgba16f, binding=0) readonly uniform image2D tex0;

    and I use glBindImageTexture(0, ...) it works, of course. But I can't find any proper way of querying the binding point. glGetUniformLocation does not work, of course, and I can't find a method to get the binding. I would like to avoid having to write the binding point explicitly both in the shader and in the CPU code, and just reflect it instead! Any ideas? Thank you a lot!
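    A minimal sketch of one way to reflect this, assuming a linked program handle (the trick is that the layout binding qualifier sets the initial value of the image uniform, so the location returned by glGetUniformLocation can be used to read the binding back with glGetUniformiv):

    [code]
    #include <GL/glew.h>

    // Query the image unit bound to a GLSL image uniform by name.
    // Returns -1 if the uniform is not found in the program.
    GLint get_image_binding(GLuint program, const char* uniform_name) {
        GLint location = glGetUniformLocation(program, uniform_name);
        if (location < 0)
            return -1;

        // For image uniforms, the value of the uniform is the image unit:
        // layout(binding = N) initializes it to N.
        GLint binding = -1;
        glGetUniformiv(program, location, &binding);
        return binding;
    }

    // Usage (hypothetical program handle):
    //   GLint unit = get_image_binding(program, "tex0"); // -> 0 for the shader above
    [/code]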
  7. OpenGL Graphics programming language

    In my opinion it really depends on what you want to express with the language.

    If you want to describe a rendering pipeline (which FINALLY, with Vulkan/DX12, is stateless), that is one thing. If you want to describe different rendering configurations, that is another. If you want an API-agnostic way of issuing draw commands, that is yet another. For the full frame definition, you need to express the dependencies of a frame.

    Buffers/textures are the main ones. From a drawing/compute point of view, you are writing into a buffer. Whoever needs that buffer then dictates the order.

    This is an old concept, but it could be a starting point! Someone has also done some work on that:

    https://www.gamedev.net/blog/1930/entry-2262315-designing-a-high-level-render-pipeline-part-3-a-visual-interface/

    I have personally used this separation (explicit render pipelines, stages and their dependencies) for quite a while, and it lets you describe everything more easily. Vulkan finally has the RenderPass for it!

    Also, for writing a custom language, I suggest you look into ANTLR (version 4). It's a library that creates a lexer and a parser from a grammar:

    http://www.antlr.org/
    https://github.com/antlr/grammars-v4

    Version 4 is pretty solid, and they have greatly improved the expressiveness of their grammar language.

    So, for example, once you have a tree of the resources you need to draw, your language can generate C++ code quite easily!
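    To make the dependency idea concrete, here is a minimal, hypothetical sketch (all stage and resource names invented) of stages declaring what they read and write, with the frame order derived from those dependencies:

    [code]
    #include <cstdio>
    #include <functional>
    #include <string>
    #include <unordered_map>
    #include <vector>

    // Each stage declares the resources it reads and writes; whoever reads
    // a buffer must run after whoever writes it.
    struct RenderStage {
        std::string name;
        std::vector<std::string> reads;   // input buffers/textures
        std::vector<std::string> writes;  // output buffers/textures
    };

    // Naive topological sort over the resource dependencies.
    std::vector<const RenderStage*> sort_stages(const std::vector<RenderStage>& stages) {
        std::unordered_map<std::string, const RenderStage*> writer;
        for (const auto& s : stages)
            for (const auto& w : s.writes)
                writer[w] = &s;

        std::vector<const RenderStage*> ordered;
        std::unordered_map<const RenderStage*, bool> visited;
        std::function<void(const RenderStage*)> visit = [&](const RenderStage* s) {
            if (visited[s]) return;
            visited[s] = true;
            for (const auto& r : s->reads) {            // schedule our writers first
                auto it = writer.find(r);
                if (it != writer.end() && it->second != s)
                    visit(it->second);
            }
            ordered.push_back(s);
        };
        for (const auto& s : stages)
            visit(&s);
        return ordered;
    }

    int main() {
        std::vector<RenderStage> frame = {
            { "lighting", { "gbuffer_albedo", "gbuffer_normals" }, { "hdr" } },
            { "gbuffer",  {},                                      { "gbuffer_albedo", "gbuffer_normals" } },
            { "tonemap",  { "hdr" },                               { "backbuffer" } },
        };
        for (const RenderStage* s : sort_stages(frame))
            std::printf("%s\n", s->name.c_str()); // prints: gbuffer, lighting, tonemap
    }
    [/code]

    From an ordered list like this, emitting the API calls (or generating C++ code, as with an ANTLR-based language) is a straightforward walk over the stages.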
  8. OpenGL Vulkan programming guide

    I would suggest this tutorial instead:

    https://vulkan-tutorial.com/

    I liked it a lot, and it gives you a good understanding of the Vulkan basics.
  9. Horizon: Zero Dawn Cloud System

    Another interesting part of the system is the weather simulation. I found that a good way to handle cloud formation/dissipation and movement is using a cellular automaton, as Dobashi did:

    http://evasion.imag.fr/~Antoine.Bouthors/research/dea/sig00_cloud.pdf

    A very brief summary of the paper is here: https://graphics.stanford.edu/courses/cs348b-competition/cs348b-05/clouds/index.html

    A working implementation is here: https://software.intel.com/en-us/articles/dynamic-volumetric-cloud-rendering-for-games-on-multi-core-platforms

    The whole simulation runs on a 3D grid, and it can be used to generate the weather map. Accumulating cloudiness along the y-axis could create the cloud-type parameter and, above a certain threshold, the coverage too. Another pass could probably spot areas with taller clouds (maybe on a downscaled version of the simulation map) to derive precipitation values.

    Just a train of thought!
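    As a sketch of the basic transition rules from that paper (a toy version: the probabilistic extinction/regeneration terms are omitted, and the neighborhood is simplified to the six axis neighbors):

    [code]
    #include <vector>

    // Each cell carries three boolean fields: hum (humidity), act (activation)
    // and cld (cloud), as in Dobashi et al.
    struct CloudCell { bool hum, act, cld; };

    struct CloudGrid {
        int nx, ny, nz;
        std::vector<CloudCell> cells;

        CloudGrid(int x, int y, int z) : nx(x), ny(y), nz(z), cells(x * y * z) {}
        CloudCell& at(int x, int y, int z) { return cells[(z * ny + y) * nx + x]; }
        bool act_at(int x, int y, int z) { // out-of-bounds neighbors count as inactive
            if (x < 0 || y < 0 || z < 0 || x >= nx || y >= ny || z >= nz) return false;
            return at(x, y, z).act;
        }
    };

    // One simulation step, writing the next state into `out`.
    void step(CloudGrid& g, CloudGrid& out) {
        for (int z = 0; z < g.nz; ++z)
        for (int y = 0; y < g.ny; ++y)
        for (int x = 0; x < g.nx; ++x) {
            const CloudCell& c = g.at(x, y, z);
            // f_act: growth is triggered by an activated neighbor.
            bool f_act = g.act_at(x - 1, y, z) || g.act_at(x + 1, y, z) ||
                         g.act_at(x, y - 1, z) || g.act_at(x, y + 1, z) ||
                         g.act_at(x, y, z - 1) || g.act_at(x, y, z + 1);
            CloudCell& o = out.at(x, y, z);
            o.hum = c.hum && !c.act;          // humidity is consumed by activation
            o.cld = c.cld || c.act;           // activation turns into cloud
            o.act = !c.act && c.hum && f_act; // growth spreads into humid cells
        }
    }
    [/code]

    A weather map then falls out of it: for each (x, z) column, count the cld cells along y to drive the cloud-type and coverage parameters, as described above.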
  10. Horizon: Zero Dawn Cloud System

    You can also find the complete source code from Bitsquid here: http://bitsquid.blogspot.com/2016/07/volumetric-clouds.html

    Or a working version that you can try in Unity here: http://kode80.com/blog/2016/04/11/kode80-clouds-for-unity3d/index.html

    The Unity package contains everything, from the source code to tools for editing the weather maps.

    These could be a good peek into this technique! And again, thanks to Andrew, who shared his (their) tech with us!
  11. Maybe you can look at [url="http://mmikkelsen3d.blogspot.co.uk/"]this[/url]!
  12. Possibly this line is broken: posWorldSpace = mul(posViewSpace, InverseWorldViewMatrix); because multiplying by the inverse of the world-view matrix moves you from view space back to object space. To get back to world space you need to multiply by the inverse of the view matrix, which maps from view space to world space. Also, if you need more information, I've put some stuff together on my blog: [url="http://badfoolprototype.blogspot.com/"]http://badfoolprototype.blogspot.com/[/url] Hope this helps!
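    A small sketch of the distinction, using GLM on the CPU (illustrative matrices; GLM uses the column-vector convention, so the multiplication order is flipped relative to the HLSL above):

    [code]
    #include <glm/glm.hpp>
    #include <glm/gtc/matrix_transform.hpp>

    int main() {
        glm::mat4 world = glm::translate(glm::mat4(1.0f), glm::vec3(5.0f, 0.0f, 0.0f));
        glm::mat4 view  = glm::lookAt(glm::vec3(0.0f, 2.0f, 10.0f),
                                      glm::vec3(0.0f), glm::vec3(0.0f, 1.0f, 0.0f));

        glm::vec4 object_pos(1.0f, 1.0f, 1.0f, 1.0f);
        glm::vec4 view_pos = view * world * object_pos;

        // inverse(world-view) goes all the way back to object space...
        glm::vec4 back_to_object = glm::inverse(view * world) * view_pos; // == object_pos

        // ...while inverse(view) stops at world space, which is what the
        // broken line actually wants.
        glm::vec4 back_to_world = glm::inverse(view) * view_pos;          // == world * object_pos
    }
    [/code]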
  13. Reconstructing view-space position from depth

    Thanks Daniel, you are very kind to post your code! Actually, last night I got Arkano's implementation working; I had to change the UV calculation a little... it seems that math only works with the view-space texture! I'm using this method to reconstruct:

    float depth = tex2D(gDepthTex, uv).r * gFar;
    float4 pos = float4((uv.x - 0.5) * 2, (0.5 - uv.y) * 2, 1, 1);
    float4 ray = mul(pos, gProjectionInverse);
    return ray.xyz * depth;

    And it is working quite well! I will try your method too; the trade-off between the extra multiplication and a texture fetch is interesting. Thanks again!!!
  14. Reconstructing view-space position from depth

    I have the same problem with reconstruction as you guys... I'm trying to use the depth and having no luck. I use this reconstruction method (I'm using right-handed coordinates):

    float depth = tex2Dlod(g_depth, float4(uv, 0, 0)).r * g_far_clip;
    float4 positionCS = float4((uv.x - 0.5) * 2, (0.5 - uv.y) * 2, 1, 1);
    float4 ray = mul(positionCS, gProjI);
    ray.xyz /= ray.w;
    position = ray.xyz * depth / ray.z;
    position.z *= -1; // This is for right-handed coordinates.

    Daniel, what method are you using? Thanks!
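    For reference, here is a CPU-side sanity check of the ray idea behind these snippets, using GLM (all names illustrative; depth is assumed to be linear view depth divided by the far plane, as in the shaders above, and GLM's column-vector convention flips the multiplication order):

    [code]
    #include <cstdio>
    #include <glm/glm.hpp>
    #include <glm/gtc/matrix_transform.hpp>

    int main() {
        const float near_clip = 0.1f, far_clip = 100.0f;
        glm::mat4 proj     = glm::perspective(glm::radians(60.0f), 16.0f / 9.0f, near_clip, far_clip);
        glm::mat4 proj_inv = glm::inverse(proj);

        // A point in right-handed view space (the camera looks down -z).
        glm::vec3 view_pos(1.5f, -0.7f, -10.0f);
        float stored_depth = -view_pos.z / far_clip; // what the depth texture holds

        // Project it to find where it lands in NDC.
        glm::vec4 clip = proj * glm::vec4(view_pos, 1.0f);
        glm::vec2 ndc  = glm::vec2(clip) / clip.w;

        // Reconstruction: unproject the far-plane point at that NDC position,
        // then scale the resulting ray by the linear view depth.
        glm::vec4 far4 = proj_inv * glm::vec4(ndc, 1.0f, 1.0f);
        glm::vec3 far_pt = glm::vec3(far4) / far4.w;
        glm::vec3 reconstructed = far_pt * (stored_depth * far_clip / -far_pt.z);

        std::printf("original:      %.3f %.3f %.3f\n", view_pos.x, view_pos.y, view_pos.z);
        std::printf("reconstructed: %.3f %.3f %.3f\n", reconstructed.x, reconstructed.y, reconstructed.z);
    }
    [/code]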
  15. [quote name='0xffffffff' timestamp='1295832219' post='4763691'] We had great success with this approach. Our implementation is somewhat different than that described above. We take four strategically-placed samples to establish edge direction, leveraging bilinear filtering to get the most out of each sample. Then we do a four-tap blur (three also works well) with a strong directional bias along the edge. The size of the blur kernel is in proportion to the detected edge contrast, but the sample weighting is constant. This works astonishingly well and is a fraction of a millisecond compared to 3+ ms for MLAA. [/quote]

    Can you describe the process you are using in more detail? Are you using both normals and depth? Thanks!