ramirofages

Members
  • Content count

    24

Community Reputation

229 Neutral

About ramirofages

  • Rank
    Member

Personal Information

  • Interests
    Programming
  1. Hello everyone, I was following this article: https://mattdesl.svbtle.com/drawing-lines-is-hard#screenspace-projected-lines_2 and I'm trying to understand how the algorithm works. I'm currently testing it in Unity3D to get a grasp of it first, and will port it to WebGL later. What I'm having problems with is the space in which the calculations take place. First the author calculates the position in NDC, taking into account the aspect ratio of the screen. Later he calculates a displacement vector, which he calls the offset, and adds it to the position that is still in projective (clip) space, with the offset having a W value of 1. What's going on here? Why can you add a vector computed in NDC to the result of the projection? What's the relation there? Also, what is that value of 1 in W doing? Shouldn't it be 0? Supposedly this algorithm makes the thickness of the line independent of depth, but I'm failing to see why. A sketch of my current reading of the technique is below. Any help is appreciated. Thanks
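     Here is a minimal sketch of how I currently read the technique (Unity-style vertex-shader HLSL; all names are mine, not the article's):

        // position/prevPosition: current and previous points of the polyline.
        // side: +1 or -1 depending on which edge of the line quad this vertex is on.
        float4 clipPos  = mul(UNITY_MATRIX_MVP, float4(position, 1.0));
        float4 clipPrev = mul(UNITY_MATRIX_MVP, float4(prevPosition, 1.0));

        // Perspective divide to NDC, with an aspect correction so that screen
        // directions are measured in a uniform space.
        float2 ndcPos  = clipPos.xy / clipPos.w;
        float2 ndcPrev = clipPrev.xy / clipPrev.w;
        ndcPos.x  *= aspect;
        ndcPrev.x *= aspect;

        // Screen-space direction of the segment and its perpendicular,
        // scaled to half the desired thickness.
        float2 dir    = normalize(ndcPos - ndcPrev);
        float2 normal = float2(-dir.y, dir.x) * thickness * 0.5;
        normal.x /= aspect; // undo the aspect correction

        // This is the step I'm asking about: the offset was computed in NDC,
        // but it is added back in clip space, and its W of 1.0 also changes clipPos.w.
        float4 offset = float4(normal * side, 0.0, 1.0);
        float4 outPos = clipPos + offset;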
  2. UDK Volumetric light beam

    Wow thanks a lot, they never said anything about tangent space. Will try it out when I get home. EDIT: Works great, thanks!
  3. UDK Volumetric light beam

    Thanks for sharing your stuff as well. Unfortunately I haven't been able to make it work yet.
  4. Hi, I came across this UDK article: https://docs.unrealengine.com/udk/Three/VolumetricLightbeamTutorial.html that somewhat teaches you how to make a volumetric light beam using a cone. I'm not using Unreal Engine, so I just want to understand how the technique works. What I'm having problems with is how they calculate the X position of the UV coordinate. They mention the use of a "reflection vector" that, according to the documentation (https://docs.unrealengine.com/latest/INT/Engine/Rendering/Materials/ExpressionReference/Vector/#reflectionvectorws), just reflects the camera direction across the surface normal in world space (I assume, from the WS initials). So in my pixel shader I tried something like this (the fuller version is in the sketch below):

        float3 reflected_view = reflect(view_dir, vertex_normal);
        tex2D(falloff_texture, float2(reflected_view.x * 0.5 + 0.5, uv.y));

     view_dir is the direction that points from the camera to the point in world space; vertex_normal is also in world space. Unfortunately it's not working as expected, probably because the calculations are being made in world space. I moved them to view space, but then there is a problem when you move the camera horizontally that makes the coordinates "move" as well. The problem can be seen below; notice the white part in the second image, coming from the left side. Surprisingly, I couldn't find as much information about this technique on the internet as I would have liked, so I decided to come here for help!
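     For reference, the fuller fragment of what I'm testing looks like this (Unity-style HLSL; apart from the built-in _WorldSpaceCameraPos, the names are mine):

        // world_pos and vertex_normal are interpolated from the vertex shader,
        // both in world space.
        float3 view_dir = normalize(world_pos - _WorldSpaceCameraPos); // camera -> surface
        float3 reflected_view = reflect(view_dir, normalize(vertex_normal));

        // Remap the reflected X from [-1, 1] to [0, 1] and use it as the U
        // coordinate of the falloff texture, which is how I read the UDK material.
        float u = reflected_view.x * 0.5 + 0.5;
        float4 falloff = tex2D(falloff_texture, float2(u, uv.y));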
  5. problems generating signed distance field

    Oh, you're right, I can't believe I didn't notice it earlier. Unfortunately that seems to 'bias' things to one side or the other, depending on which side has a higher range. Here's the result with your code: [attachment=36028:carita 256 SDF_5.png] But I can see that there's definitely a problem with how I'm mapping the values. Unfortunately the paper makes no mention of this at all, so I'll keep trying. Thanks for your time!
  6. problems generating signed distance field

    Thanks to everyone who commented on this thread. Here are my findings:

    1) Applying gamma correction (as Krypt0n and BFG suggested) fixed the brightness issue, but as you can see below, the "harsh edge" issue still persisted. [attachment=36011:Screenshot_1.png]

    2) Thanks to Alvaro's suggestion I made the test case simpler: an image of 20x1 pixels, with the first 10 pixels white, printing the integer distances before doing the range map. I noticed that (as Postie mentioned) I was having trouble with the distances near 0, because they jumped from -1 to 1 without ever passing through 0. I fixed it this way (I don't know if it's the proper way to do it... but it works :P):

        float findClosest(int current_x, int current_y)
        {
            float min_distance = 9999f;
            bool texel_outside = sample_raw_image(current_x, current_y) < 0.5f;
            for(int i = 0; i < image_width * image_height; i++)
            {
                int y = i / image_width;
                int x = i % image_width;
                if(texel_is_oposite(texel_outside, x, y))
                {
                    min_distance = get_min_distance(current_x, current_y, x, y, min_distance);
                }
            }
            // FIX :)
            return texel_outside ? min_distance - 1 : -min_distance;
        }

    This yielded the following results: [attachment=36012:face 256 SDF.png]

    Unfortunately, as you can see, it's still not the same as the reference image, and when rendering I have to use a value like 0.6 as the cutoff instead of the 0.5 the paper mentions. Having said that, it kinda works. I'll give it a few more tries and then move to a better implementation, such as the algorithm Aressera mentioned (thanks for it!).

    Thanks everyone for your time. Cheers!
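    To make the fix concrete with the 20x1 test image (first 10 pixels white, i.e. inside): before the fix, the last inside texel (x=9) gets distance -1 and the first outside texel (x=10) gets distance +1, so no texel ever maps to 0 and the edge falls between two values. With the -1 offset on the outside distances, x=10 maps to 0 and the zero crossing sits right on the boundary texel.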
  7. Hi! I'm trying to implement Valve's paper about improved alpha testing (http://www.valvesoftware.com/publications/2007/SIGGRAPH2007_AlphaTestedMagnification.pdf), but unfortunately I'm not getting the results I expected. Have a look at the following image: [attachment=35993:Screenshot_3.png] I borrowed the image from http://www.codersnotes.com/notes/signed-distance-fields/. On the right is the source image, on the left the expected result, and in the middle my result. At first glance the middle image may just seem darker overall than the correct one, but if you look closely you will notice some harsh transitions where the edge (the 'middle value', 0.5) should be. I can't tell whether the problem is in how the value mapping is done or in how the distances are calculated. Below are a few of the key functions I'm using.

     Here is the main loop. InputData is an array of floats representing the grey values of a monochrome, single-channel image:

        void GenerateSDF()
        {
            for(int i = 0; i < InputData.Length; i++)
            {
                int y = i / image_width;
                int x = i % image_width;
                float dist = findClosest(x, y);
                // this writes the computed distance to an array of floats
                set_output(x, y, dist);
            }
        }

     Here is the function that calculates the minimum distance to a texel of the opposite color (as the paper indicates):

        float findClosest(int current_x, int current_y)
        {
            float min_distance = 9999f;
            // sample_raw_image just samples the InputData array,
            // converting <x,y> coordinates into an array index.
            bool texel_outside = sample_raw_image(current_x, current_y) < 0.5f;
            // texel_outside == true  -> black (outside)
            // texel_outside == false -> white (inside)
            for(int i = 0; i < image_width * image_height; i++)
            {
                int y = i / image_width;
                int x = i % image_width;
                // Determines whether the texel being processed is of the opposite
                // color to the reference texel from the main loop. If it is, we
                // update the nearest distance.
                if(texel_is_oposite(texel_outside, x, y))
                {
                    min_distance = get_min_distance(current_x, current_y, x, y, min_distance);
                }
            }
            // if the texel is outside the shape (is black) we use a positive distance.
            return texel_outside ? min_distance : -min_distance;
        }

     For clarity I also include the texel_is_oposite and get_min_distance functions:

        bool texel_is_oposite(bool texel_is_outside, int x, int y)
        {
            if(texel_is_outside && sample_raw_image(x, y) > 0.5f)  // outside = black
                return true;
            if(!texel_is_outside && sample_raw_image(x, y) < 0.5f) // inside = white
                return true;
            return false;
        }

        float get_min_distance(float x1, float y1, float x2, float y2, float current_min)
        {
            float x = x2 - x1;
            float y = y2 - y1;
            float length = (float)Math.Sqrt(x * x + y * y);
            return Math.Min(length, current_min);
        }

     After the output array is calculated, I find the minimum and maximum values of the entire array and map the values from the range minimum..maximum to -1..1 and then to 0..1 (a minimal version of the map helper is shown after this post):

        float[] remap(float[] values)
        {
            float minimum = 9999f;
            float maximum = -9999f;
            float[] new_values = new float[values.Length];
            for(int i = 0; i < values.Length; i++)
            {
                minimum = Mathf.Min(values[i], minimum);
                maximum = Mathf.Max(values[i], maximum);
            }
            for(int i = 0; i < values.Length; i++)
            {
                // map converts a value from an initial range to a target range,
                // in this case from minimum..maximum to -1..1
                new_values[i] = map(minimum, maximum, -1f, 1f, values[i]) * 0.5f + 0.5f;
            }
            return new_values;
        }

     All that remains is to create an image from that array and display it on screen. Is there anything wrong with the code? Any help is greatly appreciated.

     Also note that I'm using a brute-force approach on purpose: I want to work through the algorithm by hand first, which is why I haven't used the more efficient algorithm described in the link above.

     Cheers
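     For reference, map is just a linear remap; a minimal version (mine, not from the paper) looks like this:

        // Linearly remaps v from [in_min, in_max] to [out_min, out_max].
        float map(float in_min, float in_max, float out_min, float out_max, float v)
        {
            return out_min + (v - in_min) * (out_max - out_min) / (in_max - in_min);
        }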
  8. GPU point particle rendering artifacts

    Alright, I would still like to know exactly what the problem was (from a technical point of view), but I solved it by adding noise in the XY directions (with Y pointing upwards), but in camera space. A sketch of what I did is below. Hope it helps someone :)
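     Roughly, the fix in the vertex shader looks like this (an HLSL-style sketch with hypothetical names; viewMatrix, projMatrix, hash2, and jitterAmount stand in for whatever your setup provides):

        // Jitter each particle slightly in camera (view) space so the points
        // stop landing on the same regularly spaced sub-pixel positions.
        float4 viewPos = mul(viewMatrix, float4(worldPos, 1.0));
        float2 jitter  = (hash2(particleId) - 0.5) * jitterAmount; // per-particle pseudo-random offset
        viewPos.xy += jitter;
        float4 clipPos = mul(projMatrix, viewPos);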
  9. GPU point particle rendering artifacts

    My apologies, I should have given more details (I wrote it in a rush because I had to go somewhere). It probably is related to aliasing problems. I forgot to mention something important: I'm using an orthographic camera. With a perspective camera it doesn't happen, probably because the particles are a bit more separated (pic below). Honestly, it's the first time I've seen this kind of problem, because I've never played with GL_POINT particles before, not with this many at least, so I'm not sure what other information would be needed, but I'll be happy to provide it: http://imgur.com/eEisMZK (I don't understand how to attach files correctly in the answer :( ) You're probably right, I've noticed that this happens because of the number of little points, with so little space between them. My short-term solution will be to reduce the number of particles and/or use a little sprite as you mention. Thanks
  10. Hi! I've come across a problem that I've never seen before. I'm rendering 1024x1024 point particles on the GPU, doing some Gerstner waves, and I've noticed a vertical stripe pattern on the particles that I can't really explain. Pic below. In the fragment shader I'm just outputting red (1,0,0,1), and I'm sure that a lot of the particles overlap, since they're confined in that little space. [attachment=35820:Screenshot_1.png]
  11. Hi, I'm trying to do projective clipping using a circle as the clipping mask. I'm aware that I could use the stencil buffer or other methods, but at this point I'm just curious to know why my approach isn't working as expected. First, a screenshot of the problem: [attachment=35801:circle.png] I'm creating a circle at position (0,0,0) with radius 3, and I want to use it as a clipping mask by discarding (in the fragment shader) everything that falls outside of the circle in normalized device coordinates. Looking at the picture you can see that at the borders a few bits are left outside that shouldn't be there (where the blue lines are). Here's what I'm doing: in the vertex shader I calculate the world position of the vertex (w_pos) and pass it to the fragment shader (the vertex-shader side is in the sketch at the end of this post). In the fragment shader:

        // p_pos = the projected position of the world point
        float4 p_pos = mul(MATRIX_VP, float4(w_pos, 1));

        // circle_border = the border point of the circle closest to w_pos
        float3 circle_border_w_pos = normalize(float3(w_pos.x, 0, w_pos.z)) * _Radius;
        float4 circle_border_p_pos = mul(MATRIX_VP, float4(circle_border_w_pos, 1));

        // circle_center = the center point of the circle (the origin in world space)
        float4 circle_center_p_pos = mul(MATRIX_VP, float4(0, 0, 0, 1));

        // perspective divide
        float3 n_pos = p_pos.xyz / p_pos.w;
        float3 circle_border_n_pos = circle_border_p_pos.xyz / circle_border_p_pos.w;
        float3 circle_center_n_pos = circle_center_p_pos.xyz / circle_center_p_pos.w;

        if(distance(circle_center_n_pos.xy, n_pos.xy) > distance(circle_center_n_pos.xy, circle_border_n_pos.xy))
        {
            discard;
        }

     So what I'm doing is projecting each point to the screen (the current point, the circle's border point closest to the current point, and the circle's center) and measuring distances between them. If the distance from the center of the circle to the current point is greater than the distance from the center to the border (that is, greater than the projected radius), then the point is outside of the projected circle and we clip it. Below I leave an image (edited with MS Paint :P) of the result I'm trying to achieve: [attachment=35802:Screenshot_6.png] Cheers
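     For completeness, the vertex-shader side is just the usual world-space transform (Unity-style sketch; the struct names are mine):

        // Computes w_pos, the vertex's world position, for the fragment shader.
        v2f vert(appdata v)
        {
            v2f o;
            o.pos   = UnityObjectToClipPos(v.vertex);           // clip-space position
            o.w_pos = mul(unity_ObjectToWorld, v.vertex).xyz;   // world-space position
            return o;
        }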
  12. Alright, will give it a try then, thanks!
  13. Hi, I'm currently drawing a large number of flowers, and I've noticed that the ones far away are quite aliased/hard to see. I feel like this is not something that can be fixed with anti-aliasing alone. Pic below (and I'm using anti-aliasing there): [attachment=35412:Screenshot_11.jpg] The problem gets worse when they move (https://www.youtube.com/watch?v=rZUbX3M_I4M). Of course I understand that it's normal for the shape and colors of far-away objects to be hard to distinguish, but I feel like something can be done to keep them smooth/low-frequency. For example, this game (https://www.youtube.com/watch?v=Pixpj3Tsdwk) deals with a lot of sunflowers, and while they are quite big, I feel like they have done something so they always look smooth no matter the distance. I'm currently using only 1 LOD for the flowers, because it was hard to simplify the model further; the next step would be a billboard... Do you have any tips for making them smoother at a distance, or something similar? Cheers
  14. Lighting model for flower petals

    Thanks a lot, your analysis was very helpful!! On a side note, I've never seen a daisy in my life :P they don't grow where I live, so I didn't know they close at night. That's probably why I also couldn't find one of those at night.
  15. Lighting model for flower petals

    Thanks, then I'll look into SSS, and also do more work on the textures.