multifractal

Members
  • Content count: 31

Community Reputation: 306 Neutral

About multifractal
  • Rank: Member
  1. You really don't need a very high-resolution 3D texture for rendering fog. Fog in nature isn't sharply defined, so a low-resolution texture with interpolation works quite well (a sketch follows below). I'm sure you've seen it already, but if not, this may give you a better understanding of the topic: http://advances.realtimerendering.com/s2014/wronski/bwronski_volumetric_fog_siggraph2014.pdf
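    To make that concrete, here is a host-side sketch of allocating such a low-resolution fog volume and relying on trilinear filtering to smooth it. The 64^3 size, GL_R16F format, and constant test density are assumptions for illustration, not anything from the thread:

    #include <vector>

    // Hedged sketch: a deliberately low-resolution fog density volume.
    // GL headers/loader are assumed to be included elsewhere.
    GLuint create_fog_volume()
    {
        std::vector<float> densities(64 * 64 * 64, 0.02f);   // placeholder density values

        GLuint fogTex;
        glGenTextures(1, &fogTex);
        glBindTexture(GL_TEXTURE_3D, fogTex);
        glTexStorage3D(GL_TEXTURE_3D, 1, GL_R16F, 64, 64, 64);
        // GL_LINEAR filtering interpolates between voxels, which is what makes the low resolution acceptable.
        glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
        glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
        glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_WRAP_R, GL_CLAMP_TO_EDGE);
        glTexSubImage3D(GL_TEXTURE_3D, 0, 0, 0, 0, 64, 64, 64, GL_RED, GL_FLOAT, densities.data());
        return fogTex;
    }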
  2. Compute Shader Experiments

    Various shots of my experiments with compute shaders.
  3. Hello. I am writing to a 3D texture in a compute shader using the imageStore function. In another compute shader I then need to read from this texture. I would like to be able to set it as a sampler uniform and read it using the texture function, but this just returns the clear value. However, when I bind the texture as an image and read it with imageLoad, I successfully read the pixels. I am aware this problem fits the description of calling glMemoryBarrier(GL_SHADER_IMAGE_ACCESS_BARRIER_BIT) in place of glMemoryBarrier(GL_TEXTURE_FETCH_BARRIER_BIT), but changing the barrier bits doesn't seem to affect the output. Strangely, I only encounter this problem with layered textures.

    Here is the code that dispatches the compute shaders and sets up the memory barriers:

    glUseProgram(inscatter_compute.program_id);
    glActiveTexture(GL_TEXTURE0 + TRANSMITTANCE_UNIT);
    glBindTexture(GL_TEXTURE_2D, transmittanceTexture);
    glUniform1i(glGetUniformLocation(inscatter_compute.program_id, "transmittance_tex"), TRANSMITTANCE_UNIT);
    glBindImageTexture(0, deltaSRTexture, 0, GL_TRUE, 0, GL_WRITE_ONLY, GL_RGBA16F);
    glBindImageTexture(1, deltaSMTexture, 0, GL_TRUE, 0, GL_WRITE_ONLY, GL_RGBA16F);
    glDispatchCompute(16, 8, 8);
    glMemoryBarrier(GL_TEXTURE_FETCH_BARRIER_BIT | GL_SHADER_IMAGE_ACCESS_BARRIER_BIT);

    glUseProgram(copy_inscatter_compute.program_id);
    glActiveTexture(GL_TEXTURE0 + DELTA_R_UNIT);
    glBindTexture(GL_TEXTURE_3D, deltaSRTexture);
    glUniform1i(glGetUniformLocation(copy_inscatter_compute.program_id, "delta_sr_tex"), DELTA_R_UNIT);
    glActiveTexture(GL_TEXTURE0 + DELTA_M_UNIT);
    glBindTexture(GL_TEXTURE_3D, deltaSMTexture);
    glUniform1i(glGetUniformLocation(copy_inscatter_compute.program_id, "delta_sm_tex"), DELTA_M_UNIT);
    glBindImageTexture(0, inscatterTexture, 0, GL_TRUE, 0, GL_WRITE_ONLY, GL_RGBA16F);
    glDispatchCompute(16, 8, 8);
    glMemoryBarrier(GL_TEXTURE_FETCH_BARRIER_BIT | GL_SHADER_IMAGE_ACCESS_BARRIER_BIT);

    I create all of these textures using glTexStorage3D. Any input would be appreciated. Thank you.
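    For comparison, here is a stripped-down sketch of the write-then-sample pattern in question. This is not a diagnosis of the layered-texture problem; 'write_program', 'read_program', 'tex3d', and "src_tex" are hypothetical names:

    // Pass 1: write the 3D texture via image stores.
    glUseProgram(write_program);
    glBindImageTexture(0, tex3d, 0, GL_TRUE, 0, GL_WRITE_ONLY, GL_RGBA16F);
    glDispatchCompute(16, 8, 8);
    // Make the image stores visible to subsequent texture fetches before sampling.
    glMemoryBarrier(GL_TEXTURE_FETCH_BARRIER_BIT);

    // Pass 2: read the same texture through a sampler.
    glUseProgram(read_program);
    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_3D, tex3d);
    // Explicit sampler state, since texture() (unlike imageLoad) goes through filtering.
    glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glUniform1i(glGetUniformLocation(read_program, "src_tex"), 0);
    glDispatchCompute(16, 8, 8);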
  4. Strange Bug in Projected-Grid Rendering

    Wow, a quick test of that yielded the attached image. Upside down, but with the problem resolved. Thanks for your help! [attachment=33184:Screenshot 2016-09-07 04.59.37.png]

    EDIT: In case anyone is interested in the fix: the glitch happens whenever 'world_dir.y' in

    float t = -ocean_camera_pos.y / world_dir.y;
    return ocean_camera_pos.xz + t * world_dir.xz;

    becomes positive, making 't' negative. Setting the variable like this generally resolves the problem:

    float t = -ocean_camera_pos.y / min(world_dir.y, -0.001);
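    The same fix written out as a CPU-side helper with glm, for anyone who prefers to see it in isolation (the function and parameter names are illustrative, not from the actual project):

    #include <algorithm>
    #include <glm/glm.hpp>

    // Intersect the view ray with the y = 0 ocean plane, mirroring the shader fix above.
    glm::vec2 ocean_plane_hit(const glm::vec3 &cam_pos, const glm::vec3 &world_dir)
    {
        // Clamp the ray so it always points slightly downward; a ray aimed at or above the
        // horizon would otherwise make the denominator zero or positive and t negative.
        float dy = std::min(world_dir.y, -0.001f);
        float t = -cam_pos.y / dy;
        return glm::vec2(cam_pos.x, cam_pos.z) + t * glm::vec2(world_dir.x, world_dir.z);
    }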
  5. Strange Bug in Projected-Grid Rendering

    Can I clear something up for you then? Do you not understand the bounds of the loop?
  6. Strange Bug in Projected-Grid Rendering

    I too thought the indices were the problem, but I tried drawing the vertices without doing any of the projections, and no problem was seen with the polygon construction: [attachment=33167:angle_problem4.png] This mesh is created with the same indices and vertices that the projected grid in the original post uses.
  7. Hello. I have been attempting to implement a projected grid to test out some water-related code I have written. To do this I implemented the projected grid from Eric Bruneton's ocean water/lighting paper (http://www-ljk.imag.fr/Publications/Basilic/com.lmc.publi.PUBLI_Article@125fe8a8322_1ac3379/article.pdf). I have essentially completed the implementation aside from one odd bug that remains unsolved. Whenever the pitch of the camera rises above a certain angle, dependent on how the "horizon" of the grid is computed, the grid begins to draw degenerate triangles. It looks like a projection problem, but I can't tell what about this angle causes the grid to warp so severely. The grid generation is implemented as follows:

    void generate_mesh(float camera_theta)
    {
        if (grid_vbo_size != 0) {
            glDeleteVertexArrays(1, &grid_vao);
            glDeleteBuffers(1, &grid_vbo);
            glDeleteBuffers(1, &grid_ibo);
        }
        glGenVertexArrays(1, &grid_vao);
        glBindVertexArray(grid_vao);
        glGenBuffers(1, &grid_vbo);
        glBindBuffer(GL_ARRAY_BUFFER, grid_vbo);

        //was horizon = tan(camera_theta / 180 * M_PI);
        float horizon = tan(45 / 180.0 * M_PI);
        float s = std::min(1.1f, 0.5f + horizon * 0.50f);
        std::cout << s << std::endl;

        float vmargin = 0.1;
        float hmargin = 0.1;
        int size = int(ceil(HEIGHT * (s + vmargin) / grid_size) + 5) * int(ceil(WIDTH * (1.0 + 2.0 * hmargin) / grid_size) + 5);
        glm::vec4 *data = new glm::vec4[int(ceil(HEIGHT * (s + vmargin) / grid_size) + 5) * int(ceil(WIDTH * (1.0 + 2.0 * hmargin) / grid_size) + 5)];
        int n = 0;
        int nx = 0;
        for (float j = HEIGHT * s - 0.1; j > -HEIGHT * vmargin - grid_size; j -= grid_size) {
            nx = 0;
            for (float i = -WIDTH * hmargin; i < WIDTH * (1.0 + hmargin) + grid_size; i += grid_size) {
                data[n++] = glm::vec4(-1.0 + 2.0 * i / WIDTH, -1.0 + 2.0 * j / HEIGHT, 0.0, 1.0);
                nx++;
            }
        }
        glBufferData(GL_ARRAY_BUFFER, n * 16, data, GL_STATIC_DRAW);
        delete[] data;

        glGenBuffers(1, &grid_ibo);
        glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, grid_ibo);
        grid_vbo_size = 0;
        GLuint *indices = new GLuint[6 * int(ceil(HEIGHT * (s + vmargin) / grid_size) + 4) * int(ceil(WIDTH * (1.0 + 2.0 * hmargin) / grid_size) + 4)];
        int nj = 0;
        for (float j = HEIGHT * s - 0.1; j > -HEIGHT * vmargin; j -= grid_size) {
            int ni = 0;
            for (float i = -WIDTH * hmargin; i < WIDTH * (1.0 + hmargin); i += grid_size) {
                indices[grid_vbo_size++] = ni + ((nj + 1) * nx);
                indices[grid_vbo_size++] = (ni + 1) + ((nj + 1) * nx);
                indices[grid_vbo_size++] = (ni + 1) + (nj * nx);
                indices[grid_vbo_size++] = (ni + 1) + (nj * nx);
                indices[grid_vbo_size++] = ni + ((nj + 1) * nx);
                indices[grid_vbo_size++] = ni + nj * nx;
                ni++;
            }
            nj++;
        }
        glBufferData(GL_ELEMENT_ARRAY_BUFFER, grid_vbo_size * sizeof(GLuint), indices, GL_STATIC_DRAW);
        glVertexAttribPointer(0, 4, GL_FLOAT, false, 4 * sizeof(GL_FLOAT), (GLvoid *)0);
        glEnableVertexAttribArray(0);
        delete[] indices;
        glBindVertexArray(0);
    }

    As I mentioned above, the following line affects the angle at which the grid begins to become warped:

    float horizon = tan(45 / 180.0 * M_PI);

    In Bruneton's original code this angle was set to the pitch of the camera, but that still exhibits the bug (and other undesirable behavior). The matrices used to project the grid are simply my standard projection and view matrices (along with their inverses) and seem to be set up correctly, since they project various other geometry as expected.

    The actual grid is transformed like so in a shader:

    vec2 ocean_pos(vec4 vertex)
    {
        vec3 camera_dir = normalize((screen_to_camera * vertex).xyz);
        vec3 world_dir = (camera_to_world * vec4(camera_dir, 0.0)).xyz;
        //was t = -world_camera.z / world_dir.z;
        float t = -world_camera.y / world_dir.y;
        return world_camera.xz + t * world_dir.xz;
    }

    void main()
    {
        vec2 u = ocean_pos(position);
        float s = sin(u.x) * cos(u.y);
        color = vec3(s);
        gl_Position = world_to_screen * vec4(u.x, s, u.y, 1.0);
    }

    where world_to_screen = perspective matrix * view matrix, screen_to_camera = inverse perspective matrix, camera_to_world = inverse view matrix, and world_camera is the position of the camera in world space.

    To illustrate the problem further, here is a screenshot of the grid with the camera pointed just under the problem angle: [attachment=33137:angle_problem2.png] And then with the camera moved just above the angle: [attachment=33138:angle_problem3.png] Filled in: [attachment=33139:angle_problem.png]

    (The grid is being offset by sine functions to emphasize the bug, but it is present in non-offset grids.)

    Has anyone who has implemented this encountered similar problems? Does anyone see why applying the aforementioned transformation would warp the grid so badly past a certain view angle? Thank you for your time.
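    For reference, the two-triangles-per-cell index pattern from the loop above can also be written as a standalone helper. This is a hypothetical sketch for an nx-by-ny vertex grid stored row-major, not the project's actual code:

    #include <vector>

    std::vector<unsigned int> build_grid_indices(int nx, int ny)
    {
        std::vector<unsigned int> indices;
        for (int j = 0; j + 1 < ny; ++j) {
            for (int i = 0; i + 1 < nx; ++i) {
                unsigned int i00 = i     +  j      * nx;  // bottom-left of the cell
                unsigned int i10 = i + 1 +  j      * nx;  // bottom-right
                unsigned int i01 = i     + (j + 1) * nx;  // top-left
                unsigned int i11 = i + 1 + (j + 1) * nx;  // top-right
                // Same winding as the post: (i01, i11, i10) and (i10, i01, i00).
                indices.insert(indices.end(), { i01, i11, i10, i10, i01, i00 });
            }
        }
        return indices;
    }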
  8. Thanks for all the replies. I've switched to glTexStorage2D, removed the memory barrier call from before the dispatch and moved it after (changing the flag to GL_TEXTURE_FETCH_BARRIER_BIT), and added calls to glGetError. This revealed a GL_INVALID_VALUE error raised by the following call:

    glTexStorage2D(GL_TEXTURE_2D, 0, GL_RGBA32F, 16, 16);

    I'll check out RenderDoc and attempt to debug this.

    EDIT: I had accidentally set levels to 0 rather than 1. Fixing this has allowed the texture to be rendered from the compute shader. Thanks for all the help!
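    For anyone hitting the same GL_INVALID_VALUE, a minimal sketch of the corrected allocation (glTexStorage2D requires levels to be at least 1):

    // Single-level, immutable 16x16 RGBA32F texture: levels = 1, not 0.
    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexStorage2D(GL_TEXTURE_2D, 1, GL_RGBA32F, 16, 16);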
  9. Hello. I've recently been attempting to render to 3D textures in a compute shader for an upcoming project of mine. However, I have hit an unexpected issue I cannot solve: no matter what I try, the texture is not written to. To work out what is causing the defect I set up a simple test case consisting of an all-red 16x16 texture created in a compute shader and then rendered to the screen with a full-screen quad. Yet the issue persists, and I'm not sure where to go next. I've tried substituting glTexImage2D(...) with glTexStorage2D(...) to no avail. I am at a loss here, so any help would be very much appreciated.

    Relevant code:

    Compute shader:

    #version 450 core
    layout(local_size_x = 1, local_size_y = 1) in;
    layout(rgba32f) uniform image2D destination;

    void main(void)
    {
        ivec2 index = ivec2(gl_GlobalInvocationID.xy);
        imageStore(destination, index, vec4(1.0f, 0.0f, 0.0f, 1.0f));
    }

    Compute shader dispatch method:

    void draw_texture(GLuint *texture_handle, GLuint draw_program)
    {
        GLuint tex;
        glGenTextures(1, &tex);
        glActiveTexture(GL_TEXTURE0 + 0);
        glBindTexture(GL_TEXTURE_2D, tex);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F, 16, 16, 0, GL_RGBA, GL_FLOAT, NULL);

        glUseProgram(draw_program);
        glMemoryBarrier(GL_SHADER_IMAGE_ACCESS_BARRIER_BIT);
        GLuint uniform_location = glGetUniformLocation(draw_program, "destination");
        glBindImageTexture(uniform_location, tex, 0, GL_FALSE, 0, GL_READ_WRITE, GL_RGBA32F);
        glDispatchCompute(16, 16, 1);
        glUseProgram(0);
        glBindTexture(0);

        *texture_handle = tex;
    }

    Thanks for your time!
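    A hedged side note, sketched on the assumption that the image uniform keeps its default binding of 0: glBindImageTexture's first argument is an image unit index, so binding to an explicit unit that matches the shader's binding is clearer than reusing the uniform location ('tex' and 'draw_program' as in the code above):

    const GLuint image_unit = 0;   // assumed to match layout(rgba32f, binding = 0) uniform image2D destination;
    glBindImageTexture(image_unit, tex, 0, GL_FALSE, 0, GL_WRITE_ONLY, GL_RGBA32F);
    glUseProgram(draw_program);
    glDispatchCompute(16, 16, 1);
    // Writes only become visible to later sampling after a barrier placed after the dispatch.
    glMemoryBarrier(GL_TEXTURE_FETCH_BARRIER_BIT);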
  10. Hello, I am creating heightmaps on the GPU and then using a Sobel filter to create a normal map for each heightmap. The Sobel filter works by sampling the 8 pixels surrounding a central pixel. The obvious problem is that when the central pixel is located on the edge of the heightmap, the filter cannot sample all of its neighbours. To resolve this I decided to expand the heightmap from 256x256 pixels to 258x258 pixels. Then, when I generate the heightmap for a particular quad, I do the following:

    vec3 p = vec3(uv.x, 0.0, uv.y);
    //size is the size of the current quad
    p *= (size + 2);
    p -= (1.0 / (size + 2.0));
    //'meshOffset' offsets the scaled mesh's points from the origin
    p += meshOffset;

    When it comes time to sample the heightmap, I warp the UV coordinates like so:

    //the uv coordinates range from (0.0, 0.0), (0.0, 1.0/17.0), (0.0, 2.0/17.0) ... -> (1.0, 1.0)
    uv *= (15.0 / 17.0);
    uv += (1.0 / 17.0);

    Yet this still does not cause the edges to match up. Does anyone have any experience with this? Thanks.
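    For what it's worth, the remap for a one-texel border can be written as a small helper. This is a hypothetical glm sketch, where interior_size and border would be 256 and 1 for a 258x258 texture:

    #include <glm/glm.hpp>

    // Map quad-local uv in [0, 1] onto the interior of a padded heightmap so the Sobel
    // filter's one-texel border is always available to sample.
    glm::vec2 to_padded_uv(glm::vec2 uv, float interior_size, float border)
    {
        float padded_size = interior_size + 2.0f * border;
        // Scale into interior texel space, then shift past the border texels.
        return (uv * interior_size + glm::vec2(border)) / padded_size;
    }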
  11. Wow, thank you so much! This is really fantastic. Your help is greatly appreciated. 
  12. Hello, I have been trying to render height maps to a texture using a frame buffer object. This works... although the performance is simply awful (3 fps). I think the problem rests in the fact that the quad is still attached to the viewport after the terrain is rendered, yet calling viewPort.detach(quad) results in the following, distorted image:

    Here is the code I use to create the frame buffer, attach it to a viewport, and render the height map to a texture. The shader is running improved, texture-based Perlin noise.

    Texture2D hm = new Texture2D(1024, 1024, Format.RGBA32F);
    Quad quad = new Quad(2, 2);
    mat = new Material(assetManager, "NoiseProgram.j3md");
    geom = new Geometry("Quad", quad);
    geom.setMaterial(mat);
    mat.setTexture("permSampler2d", permutationTexture());
    mat.setTexture("permGradSampler", gradientTexture());
    FrameBuffer fbo = new FrameBuffer(1024, 1024, 1);
    fbo.setColorTexture(hm);
    vp.setOutputFrameBuffer(fbo);
    vp.attachScene(ge);

    I may not have done a great job explaining the problem. Thanks for any help.
  13. Yeah, that was stupid of me. Although that still doesn't look great, I could probably just fiddle with the variables more. Thanks.
  14. Hello, I recently decided to switch from creating heightmaps in OpenCL to a pixel shader. While this makes things easier to work with, I have one major issue. For terrain I am relying primarily on this ridged multifractal function:

    float RMF(in vec3 v)
    {
        float result, frequency, signal, weight;
        float H = 1.0;
        float lacunarity = 2.143212;
        int octaves = 12;
        float offset = 1.0;
        float gain = 2.1;
        int i;
        bool first = true;
        float exponentArray[12];

        frequency = .6;
        if (first) {
            for (i = 0; i < octaves; i++) {
                exponentArray[i] = pow(frequency, -H);
                frequency *= lacunarity;
            }
            first = false;
        }

        signal = inoise(v);
        if (signal < 0.0) signal = -signal;
        signal = offset - signal;
        signal *= signal;
        result = signal;
        weight = 1.0;

        for (i = 1; i < octaves; i++) {
            v *= lacunarity;
            weight = signal * gain;
            if (weight > 1.0) weight = 1.0;
            if (weight < 0.0) weight = 0.0;
            signal = inoise(v);
            if (signal < 0.0) signal = -signal;
            signal = offset - signal;
            signal *= signal;
            signal *= weight;
            result += signal * exponentArray[i];
        }
        return (result - 1.0) / 2.0;
    }

    Using OpenCL I could produce heightmaps such as the following:

    This resulted in "strong" terrain features and rough mountains yet smooth valleys. Using a pixel shader I cannot reproduce such quality. At best I can create something like below:

    This is far too rough and lacks the height features of the other implementation. I can think of no other cause of these disparities than the actual Perlin noise function being used. For the pixel shader I am using the noise described here: http://http.developer.nvidia.com/GPUGems2/gpugems2_chapter26.html while in OpenCL I was using the following function:

    float sgnoise2d(float2 position)
    {
        float2 p = position;
        float2 pf = floor(p);
        int2 ip = (int2)((int)pf.x, (int)pf.y);
        float2 fp = p - pf;
        ip &= P_MASK;

        const int2 I00 = (int2)(0, 0);
        const int2 I01 = (int2)(0, 1);
        const int2 I10 = (int2)(1, 0);
        const int2 I11 = (int2)(1, 1);

        const float2 F00 = (float2)(0.0f, 0.0f);
        const float2 F01 = (float2)(0.0f, 1.0f);
        const float2 F10 = (float2)(1.0f, 0.0f);
        const float2 F11 = (float2)(1.0f, 1.0f);

        float n00 = gradient2d(ip + I00, fp - F00);
        float n10 = gradient2d(ip + I10, fp - F10);
        float n01 = gradient2d(ip + I01, fp - F01);
        float n11 = gradient2d(ip + I11, fp - F11);

        const float2 n0001 = (float2)(n00, n01);
        const float2 n1011 = (float2)(n10, n11);
        float2 n2 = mix2d(n0001, n1011, smooth(fp.x));
        float n = mix1d(n2.x, n2.y, smooth(fp.y));
        return n * (1.0f / 0.7f);
    }

    float ugnoise2d(float2 position)
    {
        return (0.5f - 0.5f * sgnoise2d(position));
    }

    I would like to keep using the improved GLSL noise that I have, but I was wondering if anyone has experience with good ridged multifractal functions for a pixel shader. Thanks for any help whatsoever.
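    As a point of comparison, the ridged-multifractal loop above can be ported to the CPU with the noise source passed in, which makes it easier to compare the OpenCL and GLSL noise functions outside the shader. This is a hypothetical reference sketch; the parameter defaults mirror the GLSL version, and 'noise' is assumed to return values in roughly [-1, 1]:

    #include <cmath>
    #include <functional>

    float ridged_mf(float x, float y, float z,
                    const std::function<float(float, float, float)> &noise,
                    float H = 1.0f, float lacunarity = 2.143212f,
                    int octaves = 12, float offset = 1.0f, float gain = 2.1f)
    {
        float frequency = 0.6f;
        float signal = offset - std::fabs(noise(x, y, z));   // first octave, unweighted
        signal *= signal;
        float result = signal;

        for (int i = 1; i < octaves; ++i) {
            x *= lacunarity; y *= lacunarity; z *= lacunarity;
            frequency *= lacunarity;
            float weight = std::fmin(std::fmax(signal * gain, 0.0f), 1.0f);
            signal = offset - std::fabs(noise(x, y, z));
            signal *= signal * weight;
            result += signal * std::pow(frequency, -H);      // spectral weight, as in exponentArray
        }
        return (result - 1.0f) / 2.0f;
    }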
  15. Hello, I am currently working on Eric Bruneton's precomputed atmospheric scattering. I have been trying to figure out what is wrong with my irradiance1 texture, so I decided to render the transmittance table to the screen, and this was the result:

    Which looks quite different from what it is supposed to be:

    I am confused, since I am basically copying Bruneton's and Neyret's code verbatim. Here is my fragment shader for reference:

    const float Rg = 6360.0;
    const float Rt = 6420.0;
    const float RL = 6421.0;

    float limit(float r, float mu)
    {
        float dout = -r * mu + sqrt(r * r * (mu * mu - 1.0) + RL * RL);
        float delta2 = r * r * (mu * mu - 1.0) + Rg * Rg;
        if (delta2 >= 0.0) {
            float din = -r * mu - sqrt(delta2);
            if (din >= 0.0) {
                dout = min(dout, din);
            }
        }
        return dout;
    }

    void getTransmittanceRMu(out float r, out float muS)
    {
        r = gl_FragCoord.y / 64.0;
        muS = gl_FragCoord.x / 256.0;
        r = Rg + r * (Rt - Rg);
        muS = -0.15 + muS * (1.0 + 0.15);
    }

    const int TRANSMITTANCE_INTEGRAL_SAMPLES = 500;

    float opticalDepth(float H, float r, float mu)
    {
        float result = 0.0;
        float dx = limit(r, mu) / float(TRANSMITTANCE_INTEGRAL_SAMPLES);
        float xi = 0.0;
        float yi = exp(-(r - Rg) / H);
        for (int i = 1; i <= TRANSMITTANCE_INTEGRAL_SAMPLES; ++i) {
            float xj = float(i) * dx;
            float yj = exp(-(sqrt(r * r + xj * xj + 2.0 * xj * r * mu) - Rg) / H);
            result += (yi + yj) / 2.0 * dx;
            xi = xj;
            yi = yj;
        }
        return mu < -sqrt(1.0 - (Rg / r) * (Rg / r)) ? 1e9 : result;
    }

    void main()
    {
        float HR = 8.0;
        float HM = 12.0;
        vec3 betaR = vec3(5.8e-3, 1.35e-2, 3.31e-2);
        vec3 betaMSca = vec3(4e-3);
        vec3 betaMEx = betaMSca / 0.9;

        float r, muS;
        getTransmittanceRMu(r, muS);
        vec3 depth = betaR * opticalDepth(HR, r, muS) + betaMEx * opticalDepth(HM, r, muS);
        gl_FragColor = vec4(exp(-depth), 0.0); // Eq (5)
    }

    Any help would be appreciated... thanks.
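    As a sanity-check idea, the trapezoidal optical-depth integral can be ported to the CPU and compared against the shader output for a handful of (r, mu) values. This is a hypothetical helper in which 'path_length' plays the role of limit(r, mu) and the constants match the shader (Rg = 6360 km):

    #include <cmath>

    double optical_depth_cpu(double H, double r, double mu, double path_length, int samples = 500)
    {
        const double Rg = 6360.0;
        double dx = path_length / samples;
        double yi = std::exp(-(r - Rg) / H);               // density at the ray origin
        double result = 0.0;
        for (int i = 1; i <= samples; ++i) {
            double xj = i * dx;
            double hj = std::sqrt(r * r + xj * xj + 2.0 * xj * r * mu);  // radius at distance xj along the ray
            double yj = std::exp(-(hj - Rg) / H);
            result += 0.5 * (yi + yj) * dx;                // trapezoid rule
            yi = yj;
        }
        return result;
    }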