OpenGL Methods for Drawing a Low-Resolution 2D Line Gradually?


Hope that title was descriptive enough, but I've prepared a diagram to describe exactly what I'm talking about:

oi62.tinypic.com/11imhr7.jpg

 

As you can see, the image is split into 3 panes. At the top we have two icons that I want to draw a line between. At the bottom there's a complete line between the two as I want it to look when it's finished, and the middle shows a partway progression of the line between the two states.

 

What I'm trying to achieve is basically the ability to draw a line between 2 points in the same style as if you used the pencil tool in Photoshop by clicking somewhere then holding shift and clicking somewhere else to draw a straight line between them. I don't want it to be anti-aliased or hi-res, and I need to be able to control the thickness of the line while retaining the pixellated look so it fits in with the simplistic art style I'm using.

What I want to use it for is graphically representing the path an object is travelling along over time, so it would start off as some kind of dotted line (e.g. only occasional square sections of the line being drawn) and then as the object got closer the line would be drawn in to represent the object's current position, as shown in the middle pane.

 

So... how might I accomplish this? I'm using C++ and OpenGL if it matters.

 

For something that seems so simple, it sure gets complicated when you don't have any experience with this kind of thing and start trying to come up with actual solutions that can be coded.

We start with the 2 points, so we'll need to come up with the equation of the line between them and the slope of that line. The slope tells us how many pixels y will increase by for every pixel that x increases, which is clearly what we need here, but where do we go from there?

My first semi-naive and probably impractical (but maybe someone else knows better) thought was:

1. Create an image file of a square representing the minimum "brush size" of the line.

2. Draw it onto the start position of the line, then draw it for every subsequent step along the line.

Am I right in thinking that would be terribly inefficient? Sounds like it would be. I'm also not sure if I really know how to determine exactly where each "brush" should be drawn.

 

Then I searched around a bit and heard about Bresenham's Line Algorithm. That looks like it would be relevant. The description on Wikipedia states:

"determines the points of an n-dimensional raster that should be selected in order to form a close approximation to a straight line between two points."

 

So that's exactly it. My only concerns would be:

1. Is it a simple matter to render the line at a lower resolution than the actual resolution of the display? i.e. make it nice and blocky, and easily change this resolution at will.

2. Even if I do use that algorithm to determine what pixels should be coloured in, how should I draw them? Would it be best to use some kind of OpenGL primitives or similar OpenGL function? It would have to be horribly inefficient to use a textured quad for every square "pixel" of the line, but I'm not sure what the obvious alternative is because I haven't done anything like this before.

 

Thoughts?


0) Yes, the algorithm is simple enough. Figure out the rise and run, then step along each pixel and draw it (see the sketch below).

 

1) You can change the line width the same way, but it requires a bit of math to build properly. Ensure that you have the correct number of parallel pixels involved in the line, or draw the correct number of pixels above/below/beside the pixel you are processing. Determining the correct number depends on the angle of the line.

 

2) You should only do this if you are building your own rasterizer. If you are using OpenGL it already has these primitives built in. Your graphics card and drivers also already have these primitives built in. A thick line might require changing thousands of pixels. For drawing an individual line you can either make thousands of individual calls, potentially requiring thousands of trips across the system bus and across the hardware, or you can make a single call, with a single trip out to the graphics card.  It should be clear that one is several orders of magnitude more efficient.

 

3) Thoughts are that, as a learning exercise to understand how rasterizers work, it is a good thing to experience on your own. But after you've built your own software rasterizer, throw it away and go back to the hardware-accelerated graphics systems.
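
To make 0) concrete, here is a minimal sketch of the classic integer Bresenham in C++ (the all-octant error-term variant you'll find in most references; plot() is a hypothetical callback standing in for whatever actually fills one cell, not a real API). For the blocky look, run it on a coarse grid: divide your endpoints by the desired cell size, then draw each visited cell as a filled square scaled back up.

#include <cstdlib> // std::abs

// Calls plot(x, y) for every grid cell on the line from (x0,y0) to (x1,y1).
// The combined error term makes it work in all octants.
template <typename PlotFn>
void bresenham(int x0, int y0, int x1, int y1, PlotFn plot)
{
    int dx =  std::abs(x1 - x0), sx = (x0 < x1) ? 1 : -1;
    int dy = -std::abs(y1 - y0), sy = (y0 < y1) ? 1 : -1;
    int err = dx + dy;

    for (;;)
    {
        plot(x0, y0);
        if (x0 == x1 && y0 == y1) break;
        int e2 = 2 * err;
        if (e2 >= dy) { err += dy; x0 += sx; } // step in x
        if (e2 <= dx) { err += dx; y0 += sy; } // step in y
    }
}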

2) You should only do this if you are building your own rasterizer. If you are using OpenGL it already has these primitives built in. Your graphics card and drivers also already have these primitives built in. A thick line might require changing thousands of pixels. For drawing an individual line you can either make thousands of individual calls, potentially requiring thousands of trips across the system bus and across the hardware, or you can make a single call, with a single trip out to the graphics card.  It should be clear that one is several orders of magnitude more efficient.

 

Well, for the sake of getting a game/feature completed, I'll take the quick and easy way here if at all possible. It's annoying getting hung up on these little features...

 

So am I right in thinking it's just a matter of using GL_LINES with some vertex data?

 

For a low-res effect would you have to do something with the fragment shader? And would that be quite a simple thing to implement, or somewhat tedious/time-consuming? I just wouldn't be quite sure how/where to begin...

Though I have heard about rendering the whole screen to a texture and then resizing that texture before displaying it on screen as a means of getting a low-res look. Here I just need to change the line itself though, so presumably there are better/quicker ways of achieving that.


BTW, on the subject of fixed-function vs. shader-based OpenGL, is it ever a good idea to actually use FF functions to accomplish things even if you're generally using shaders? i.e. can it sometimes be more convenient, and not really worth the hassle of shaders etc., when you just want to do something quite limited, or should FF always be avoided nowadays?

 

EDIT: Also thinking about it further, I could clearly achieve the desired effect if I was able to use OpenGL to render a 1px-thick line between 2 points and then simply find a way to scale up the pixels by a factor of X and only keep the first 1/X of the line. Does that sound practical? How might you scale up a line like that? Or am I way off in my thinking?


You could render a 1-pixel thick line to a texture and then apply it to a quad of any size. I've never done render to texture, so I cannot help you there, but I've seen several tutorials online.
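
For reference, a rough sketch of the GL 3+ setup (the sizes and names here are illustrative assumptions, and error checking is omitted). The detail that matters for the blocky look is GL_NEAREST filtering, so the small texture stays hard-edged when stretched over the quad:

GLuint fbo = 0, lineTex = 0;
const int lowResW = 160, lowResH = 90; // e.g. 1/8 of a 1280x720 window

glGenTextures(1, &lineTex);
glBindTexture(GL_TEXTURE_2D, lineTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, lowResW, lowResH, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, nullptr);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST); // no smoothing
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST); // stays blocky

glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, lineTex, 0);

// Per frame: draw the 1px line into the small target...
glViewport(0, 0, lowResW, lowResH);
// ...glDrawArrays(GL_LINES, ...) here...

// ...then draw a full-size quad textured with lineTex to the real backbuffer.
glBindFramebuffer(GL_FRAMEBUFFER, 0);
glViewport(0, 0, 1280, 720); // restore your real window size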


You could render a 1-pixel thick line to a texture and then apply it to a quad of any size.

 

Hmmm... yes that seems like it would make sense. Haven't used render to texture myself either, but it sounds like the kind of thing that would work.

 

Might be a bit fiddly, but presumably it would work out in the end.

 

So unless anyone else has any better ideas, which I would be interested in hearing, I'll probably go with that.

 

Thanks all.



If you're set on drawing the line gradually, I'd do the same thing with RTT, but linearly interpolate the position between endpoints and draw quads centered at each point. You'll get some overdraw, but it will be minor since you'll be drawing so little anyway, and you'll be able to control the thickness of the line at the same time.


If you're set on drawing the line gradually, I'd do the same thing with RTT, but linearly interpolate the position between endpoints and draw quads centered at each point.

 

Just to clarify because I'm not 100% sure I understand fully... are you saying to use 2 quads, each centred at an endpoint, one representing the filled/complete section of the line and the other representing the dashed/incomplete section?


No, it sounded to me like you want to draw the line gradually over time. I might be misunderstanding. If this is what you mean, what I am saying is to linearly interpolate the line between the two endpoints at each time step. So, the first time step you'd draw the first end point. The next time step, you'd draw the first interpolated point between the two end points, etc. Each time you draw, don't draw a single pixel (unless that is what you want), but a quad of the thickness you desire.
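
A sketch of that idea in C++, assuming a hypothetical drawQuadCentredAt() helper that draws one thickness-sized square (the helper and the Vec2 type are stand-ins, not real API):

#include <cmath>

struct Vec2 { float x, y; };
void drawQuadCentredAt(float x, float y, float size); // hypothetical helper

// Draws the line from 'start' up to parameter t in [0,1] as a row of quads
// spaced half a thickness apart, so consecutive quads overlap slightly.
void drawPartialLine(Vec2 start, Vec2 end, float t, float thickness)
{
    float dx = end.x - start.x, dy = end.y - start.y;
    float drawnLength = std::sqrt(dx * dx + dy * dy) * t;
    int   steps = static_cast<int>(drawnLength / (thickness * 0.5f)) + 1;

    for (int i = 0; i <= steps; ++i)
    {
        float s = t * (static_cast<float>(i) / steps); // runs from 0 to t
        drawQuadCentredAt(start.x + dx * s, start.y + dy * s, thickness);
    }
}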


No, it sounded to me like you want to draw the line gradually over time. I might be misunderstanding. If this is what you mean, what I am saying is to linearly interpolate the line between the two endpoints at each time step. So, the first time step you'd draw the first end point. The next time step, you'd draw the first interpolated point between the two end points, etc. Each time you draw, don't draw a single pixel (unless that is what you want), but a quad of the thickness you desire.

Hmmm... so you mean have the line sort of split into specified chunks, somewhat like a tile system almost? Such that you'd have a line with equally spaced points dotted along it and as the line went on you'd draw a line "tile" with its centre point on the next point?

 

Only problem I see with that is surely it wouldn't work very well for any arbitrary angle? Or are we talking about computing the textures for these quads on the fly depending on the line's slope etc.?

Though if that's right... would it not be just as easy to use 1 textured quad, with the entire completed line segment rendered to the texture?

 

 

The way lines are typically animated as being “drawn in real time” is to have an image with the whole line-art fully completed and then to add slowly decreasing alpha values along the line from start-to-finish. Then you draw the whole image while decreasing the alpha-testing value (which is a discard in a shader), causing more and more of the line to appear over time.
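
A minimal fragment-shader sketch of that technique, assuming the finished line art lives in a texture (lineTex here) with the alpha ramp from 1 at the start down to 0 at the end already baked in, and cutoff is a uniform you lower from 1 to 0 over time:

#version 330 core

uniform sampler2D lineTex; // finished line art with the baked alpha ramp
uniform float     cutoff;  // 1.0 = nothing shown yet, 0.0 = whole line shown

in  vec2 uv;
out vec4 fragColour;

void main()
{
    vec4 texel = texture(lineTex, uv);
    if (texel.a < cutoff)
        discard;                        // not yet "drawn in"
    fragColour = vec4(texel.rgb, 1.0);
}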

 

 

Interesting... hadn't thought of that.

 

 

Pass to the shader the start point, the end point, and an interpolation value.
For each pixel your shader renders along the line, it is easy to determine where that pixel is between the start and end points, and based on the interpolation value you either discard the pixel or not.
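
In other words, something like this sketch (the uniform and varying names are made up; lineStart/lineEnd are assumed to be in the same space as the position handed down from the vertex shader):

#version 330 core

uniform vec2  lineStart;
uniform vec2  lineEnd;
uniform float cutoff; // lowered from 1.0 to 0.0 over time, as above

in  vec2 pos;         // interpolated position from the vertex shader
out vec4 fragColour;

void main()
{
    vec2  dir   = lineEnd - lineStart;
    float t     = dot(pos - lineStart, dir) / dot(dir, dir); // 0 at start, 1 at end
    float alpha = 1.0 - t; // same ramp as the baked-texture version

    if (alpha < cutoff)
        discard;            // this fragment hasn't been reached yet
    fragColour = vec4(1.0); // placeholder solid colour
}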

 

 

Cool, sounds simple enough.

 

As for a stippling/dashed effect... would I be right in thinking that could be achieved with something like:

- Check if pixel is past interpolation value.

- If pixel is past interpolation value, do something like only draw every 2nd pixel.

 

???

 

Speaking of which, how would you calculate whether or not the sequence of the current fragment in the line was divisible by 2? gl_FragCoord tells you the window position, so does it have to be derived from that? Even still, you can't just look at the value from the previous fragment and compare, as far as I know... is there an easier way?

I know there's a way of getting the vertexID for the vertex shader, but there doesn't seem to be anything similar for fragments.


 

No, it sounded to me like you want to draw the line gradually over time. I might be misunderstanding. If this is what you mean, what I am saying is to linearly interpolate the line between the two endpoints at each time step. So, the first time step you'd draw the first end point. The next time step, you'd draw the first interpolated point between the two end points, etc. Each time you draw, don't draw a single pixel (unless that is what you want), but a quad of the thickness you desire.

Hmmm... so you mean have the line sort of split into specified chunks, somewhat like a tile system almost? Such that you'd have a line with equally spaced points dotted along it and as the line went on you'd draw a line "tile" with its centre point on the next point?
 
Only problem I see with that is surely it wouldn't work very well for any arbitrary angle? Or are we talking about computing the textures for these quads on the fly depending on the line's slope etc.?
Though if that's right... would it not be just as easy to use 1 textured quad, with the entire completed line segment rendered to the texture?

 


I don't think I'm getting my idea across effectively. It works in my head, but I'm having a hard time explaining it. No matter. L. Spiro's idea is far superior! I'm ashamed I didn't think of it myself!


As for a stippling/dashed effect... would I be right in thinking that could be achieved with something like:
- Check if pixel is past interpolation value.
- If pixel is past interpolation value, do something like only draw every 2nd pixel.

No. Virtually all effects you want to apply to it would be done by manipulating the alpha value (the result of (Cur - Start) / (End - Start)).
Rather than branching based on being above or behind the cut-off value, simply manipulate the alpha value creatively (design an algorithm to increase it by some amount in certain cases).

For example, you can find a dithering equation online easily.
The normal result of dithering is to pick either the next color up or the next color down based on screen position.
Instead, where the result would have you pick the next color up, increase the alpha value by 0.1 (testing required) and where it would have you pick the next color down, either do nothing or decrease the alpha (testing required).

This is just an example—it won’t work on perfectly diagonal lines (a typical problem with dithering).
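
As an illustration only, a crude screen-space checkerboard in place of a real dither matrix (the 0.1 bump is a guess that needs the same testing):

// Bump alpha on alternating screen pixels so a stippled preview of the line
// pokes above the cut-off ahead of the solid part. gl_FragCoord is built in.
float ditherAlpha(float alpha)
{
    float checker = mod(floor(gl_FragCoord.x) + floor(gl_FragCoord.y), 2.0);
    return alpha + checker * 0.1;
}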


If you can’t think of a creative algorithm to get the effect you want, then yes you could start manually manipulating the alpha based off selection statements etc.


L. Spiro



Rather than branching based on being above or behind the cut-off value, simply manipulate the alpha value creatively (design an algorithm to increase it by some amount in certain cases).

 

So you mean just writing something like:

alpha = [Formula based on interpolation that results in desired alpha value];

?

 

I'm having a hard time imagining what that would look like to make the alpha flick alternately between 0 and 1 for a specified length of the line, and with an arbitrary pattern... is there a specific reason to avoid if statements here altogether (unless that's not what you meant and I misunderstood), or is it just that it's neater this way?

 

Though actually, if this becomes overly problematic for me to implement, I could probably just replace it with drawing the line in a dark colour like grey and then gradually colouring it in with a bright colour like orange or w/e for a similar visual effect.

It took a while to find this topic again. It should be moved to Graphics Programming and Theory.
 

I'm having a hard time imagining what that would look like to make the alpha flick alternately between 0 and 1

I said you only need to increase the alpha after first calculating your current alpha value.
Every pixel from start-to-end has a fixed alpha value from 1-0 (the original equation should have been (1.0 - (Cur - Start) / (End - Start))), which you are exposing over time by decreasing the cut-off value (from 1-0).

So if your cut-off value is 0.75, only the first 25% of the line will show.

Let’s say the next pixels have alphas of 0.6, 0.5, 0.4, and so on down to 0.0. They wouldn’t get drawn on this pass unless you do some math to increase their alphas above 0.75.

alpha = max( alpha, alpha + sin( alpha * 100.0 ) );
Plug this in for each number:
For alpha = 0.6: max( 0.6, 0.6 + sin( 60.0 ) ) = max( 0.6, 0.2952 ) = 0.6
For alpha = 0.5: max( 0.5, 0.5 + sin( 50.0 ) ) = max( 0.5, 0.2376 ) = 0.5
For alpha = 0.4: max( 0.4, 0.4 + sin( 40.0 ) ) = max( 0.4, 1.1451 ) = 1.1451
For alpha = 0.3: max( 0.3, 0.3 + sin( 30.0 ) ) = max( 0.3, -0.6880 ) = 0.3
For alpha = 0.2: max( 0.2, 0.2 + sin( 20.0 ) ) = max( 0.2, 1.1129 ) = 1.1129
For alpha = 0.1: max( 0.1, 0.1 + sin( 10.0 ) ) = max( 0.1, -0.4440 ) = 0.1
For alpha = 0.0: max( 0.0, 0.0 + sin( 0.0 ) ) = max( 0.0, 0.0000 ) = 0.0


As you can see, 2 of the pixels ahead of the current cut-off will be drawn.
Fancier formulas can make better effects.


or is it just that it's neater this way?

It is more robust and it is niftier.


L. Spiro


It took a while to find this topic again. It should be moved to Graphics Programming and Theory.

 

Yeah, I wasn't too sure where to put it when I was making it... I thought about OpenGL but then I figured it was a bit more general than just that and put it here...

 

Ok, that all makes sense... I'll see what kind of formula I can come up with in practice when I get that far.

 

One more little thing though; I've been thinking about how to implement the lines, and I've come up with a few ideas that I'm wondering about in terms of efficiency and general practicality.

 

- The simplest way of course seems to be just getting 2 vertices that are correctly oriented in relation to each other for forming the line, putting them into a VBO, sending them into the vertex shader as attributes (or rather 1 vec2 attribute...), then probably applying a modelview matrix through a uniform to achieve the final position. Given I might have a lot of lines, it would probably make sense to make a Line class of some kind to represent each line. I could initialize a Line with the vertex data, then draw it to the screen based on a translation via a modelview matrix.

 

Then I had another idea. I'm not sure if it makes sense in terms of ease of use or efficiency... it's probably impractical but I still want to mention it for critique to see if I'm understanding how things work...

- Don't bother with the position attribute at all. A line can be defined using 2 points; we can use an attribute to define these points before drawing, OR what if we didn't use an attribute at all, and instead passed their positions into the vertex shader as uniforms at the time of drawing? There will only ever be 2 vertices so it wouldn't be messy or complicated like if you had many vertices.

The possible advantage I could see is that maybe this would then allow you to create just 1 instance of the Line class, then simply pass the point coordinates into its draw() function to set the uniforms and draw the line, being able to draw as many different lines as you wanted without having to use any more than the 2 vertices in the VBO? Which would hopefully cut down on clutter, in terms of not having to keep a whole list of Line objects.

 

Could that make sense or am I overthinking it and designing something stupid?


I've been thinking about how to implement the lines

There is no reason to approach them differently from how you would normal triangles.
Put them into a VBO, draw using GL_LINES, and transform the vertices in the vertex shader. You can pass vec3 with the Z component set to 0 or you can pass a vec2 and assign 0 (or whatever fixed value you want) to it in the shader.
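
For instance, a sketch of batching every line into one buffer and one call (the positions vector is assumed to be filled elsewhere with x,y pairs, two vertices per line):

#include <vector>

GLuint vao = 0, vbo = 0;
std::vector<float> positions; // x0,y0, x1,y1, x2,y2, ... two vertices per line

glGenVertexArrays(1, &vao);
glBindVertexArray(vao);

glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, positions.size() * sizeof(float),
             positions.data(), GL_DYNAMIC_DRAW);

// One vec2 attribute at location 0; Z can be fixed up in the vertex shader.
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 2 * sizeof(float), nullptr);

// Every line in a single call: GL_LINES consumes two vertices per segment.
glDrawArrays(GL_LINES, 0, static_cast<GLsizei>(positions.size() / 2));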


Don't bother with the position attribute at all.

You can’t render without a position attribute in the vertex buffer.

As for the rest, if you are only going to have a few lines then it is no big deal to take a few shortcuts and draw them one-by-one, but you said you might have a lot of lines, so you need to focus on batch-drawing them.


L. Spiro


There is no reason to approach them differently from how you would normal triangles.
Put them into a VBO, draw using GL_LINES, and transform the vertices in the vertex shader. You can pass vec3 with the Z component set to 0 or you can pass a vec2 and assign 0 (or whatever fixed value you want) to it in the shader.

 

Yeah, I just did them normally in the end... no point overthinking. I figured I'd do them like that at first and then change it later if I decided there was a reason to. Can't see that there is now.

 

 

You can’t render without a position attribute in the vertex buffer.

 

 

Really?... Oh wait that makes sense... OpenGL would have no way of knowing how many vertices there were, d'oh.

 

 

As for the rest, if you are only going to have a few lines then it is no big deal to take a few shortcuts and draw them one-by-one, but you said you might have a lot of lines, so you need to focus on batch-drawing them.

 

 

Well come to think of it... while obviously my game isn't complete yet and the design is a bit sketchy, I'd be surprised if I even needed 50 lines in the end with my current ideas.

 

You'd need thousands or at least hundreds before there was a problem in any normal situation, right? As long as you weren't doing something really costly with all of them or w/e.
