
OpenGL GIMP Brush Blending

EVIL_ENT    276
I've got a line texture with soft edges.
If I render two of these lines on top of each other using
[code]glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);[/code]
there's too much alpha where the lines intersect.

My blend function should do something like this:
[code]new color = old color (or brush color, both are the same)
if (old alpha > brush alpha) new alpha = old alpha
else new alpha = brush alpha[/code]
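On the CPU, the per-pixel rule above would look something like this (just a sketch of the math, not the OpenGL solution; `blend_alpha` is my own name):

```c
/* Desired rule: the color stays the brush color, and the destination
 * alpha becomes the maximum of the two alphas instead of accumulating. */
static float blend_alpha(float old_alpha, float brush_alpha)
{
    return old_alpha > brush_alpha ? old_alpha : brush_alpha;
}
```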
So I could render the individual brush strokes onto a framebuffer object.
Then I could blend all these layers with [code]glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);[/code].

In the end it should look like in this video:
[media]http://www.youtube.com/watch?v=9JfbnnloTNM[/media]

I've found several functions which deal with alpha:
[code]glBlendEquation
glBlendEquationSeparate
glBlendFunc
glBlendFuncSeparate
glAlphaFunc[/code]
but I couldn't get any of them to do what I want.
Test environment:
http://pastebin.com/HVPxtNa6

If it is possible with shaders, some example code would be very nice because I haven't worked much with them yet.
Of course I could do this on the CPU without OpenGL, but that would be slower, and I need it to be fast.

V-man    813
What do you mean by stack?
The blend operation is just a mathematical operation. If you want a different mathematical operation, then just type here what you want.

From your right pic, it looks like you don't want blending.

Nanoha    2682
Paint to a separate layer: if a pixel is not coloured, colour it with the paint value; otherwise do nothing (unless your brush has some falloff). You can render this layer over the top with alpha blending. Once you release the mouse, you apply the layer to the layer you're actually trying to draw on. It's sort of the "big array" thing you didn't want, but done slightly differently. Blending alone won't solve your problem: the effect you have is blending, and the effect you want is not blending (it's logical). It sounds like you're doing render-to-texture, so just get two render targets: one you render to while the brush is down; once it's released, you render it onto the "canvas" target and clear it, ready for the next time you start painting.
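Roughly like this in C (a sketch with made-up names `paint`/`commit` and a tiny 4x1 layer; a brush with falloff would replace the simple maximum):

```c
#define W 4
#define H 1

/* While the mouse is down, paint into a scratch layer, keeping the
 * maximum alpha per pixel so repainting never accumulates. */
static void paint(float layer[H][W], int x, int y, float a)
{
    if (a > layer[y][x])
        layer[y][x] = a;
}

/* On mouse release, "over"-blend the stroke layer onto the canvas
 * and clear it for the next stroke. */
static void commit(float canvas[H][W], float layer[H][W])
{
    for (int y = 0; y < H; y++)
        for (int x = 0; x < W; x++) {
            canvas[y][x] = layer[y][x] + canvas[y][x] * (1.0f - layer[y][x]);
            layer[y][x] = 0.0f;
        }
}
```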

EVIL_ENT    276
[quote name='Hodgman' timestamp='1310214244' post='4833073']
If you don't want to see one quad underneath the other, why are you using blending?
[/quote]

I still want the blending.
Imagine a brush which is not solid but fades out smoothly.
As long as I hold the left mouse button I can go over the same pixel as often as I want, but it won't get higher alpha values.
But if I release the left mouse button and do the same again, it is supposed to blend as usual.

[quote name='Nanoha' timestamp='1310216527' post='4833076']
Paint it to a seperate layer, if pixel is not coloured then colour it with the paint value, otherwise do nothing (unless your brush has some falloff). You can render this layer over the top with alpha blending. Once you release the mouse then you apply the layer to the layer your actually trying to draw on. Its sort of doing the "big array" thing you didn't want but slightly differently. Blending alone won't solve your problem as the effect you have is blending, the effect you want is not blending (its logical). Sounds like your doing render to texture, just get 2 render targets. One you render to while the brush is down, once its released you render it onto the "canvas" target and clear it ready for the next time you start painting.
[/quote]

I've just found a function called "glBlendEquation".
If I do this:

[code]glBlendEquation(GL_FUNC_ADD);
glColor4f(red, green, blue, 1.0);
//render brush
glBlendEquation(GL_FUNC_REVERSE_SUBTRACT);
glColor4f(0.0, 0.0, 0.0, 1.0-alpha);
//render quad which covers the whole screen[/code]

I can cap what I am drawing at the target alpha, because the framebuffer value can't go beyond 1.0, and afterwards I subtract the difference everywhere.
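As a sanity check of that idea on the CPU (a sketch assuming the brush writes full alpha, so overlaps simply saturate; `stroke_alpha` and `clampf` are my own names):

```c
static float clampf(float v, float lo, float hi)
{
    return v < lo ? lo : (v > hi ? hi : v);
}

/* Additive pass saturates at 1.0 regardless of how many times the
 * brush overlaps; the reverse-subtract pass then pulls everything
 * down by (1 - target_alpha). */
static float stroke_alpha(int overlaps, float target_alpha)
{
    float accumulated = clampf((float)overlaps * 1.0f, 0.0f, 1.0f);
    return clampf(accumulated - (1.0f - target_alpha), 0.0f, 1.0f);
}
```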

Unfortunately this also affects what has been drawn before, so I would have to save that first, render the brush, save the brush, restore the old content and blend the brush on top of it.
I guess this would be too slow after a few brushes, so I don't think I'll do it like that.

I remember reading that it is possible to render directly to textures, so I could make a bunch of quads which blend as usual and render onto those quads while subtracting the alpha.
I have not done this before, so I don't know whether it works as intended.
Maybe rendering to many textures would consume just as much memory or have a big impact on performance.

What would be the fastest way to render the single brush strokes and blend them onto each other?

edit:
Seems to be called "Frame Buffer Object".
So I would:
1. Create one of these
2. Render a brush stroke on it
3. Blend content of it over the main rendering context
4. Clear it
and repeat 2.-4. (up to, let's say, 1000 times) until everything is drawn?
Would that still be possible at a decent framerate?
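As a sketch of steps 1-4 in GL calls (untested; `fbo` and `fbo_texture` are hypothetical handles created elsewhere, the actual brush and quad drawing are omitted, and depending on the driver the `EXT`-suffixed variants may be needed instead):

```c
#include <GL/gl.h>

void draw_stroke(GLuint fbo, GLuint fbo_texture)
{
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);   /* 1. bind the FBO */
    glClear(GL_COLOR_BUFFER_BIT);             /* 4. clear it for this stroke */
    /* 2. ... render the brush stroke here ... */

    glBindFramebuffer(GL_FRAMEBUFFER, 0);     /* back to the main context */
    glBindTexture(GL_TEXTURE_2D, fbo_texture);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    /* 3. ... draw a fullscreen quad textured with fbo_texture ... */
}
```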

EVIL_ENT    276
Apparently I can do this:
-render a triangle to the FBO
-render the FBO onto a quad in the window context
-render another triangle to the FBO
-render the FBO onto another quad in the window context
and get one quad with one triangle and another quad with two triangles, although both use the same texture, render buffer object, etc.
So I can save lots of everything :)
It is also pretty fast: switching contexts, clearing, rendering and switching back only takes 0.00005 seconds on my laptop.
And if I need more speed for some reason I'll just make another FBO to buffer it.
Thanks everyone

EVIL_ENT    276
*Bump*
Unfortunately the blending still does not work.

Updated first post, clarified question, added a video and more images.

Edit:
After lots of trial and error I finally found something which looks the way I want.
FBO:
[code]glBlendFuncSeparate(GL_ONE, GL_ZERO, GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glBlendEquationSeparate(GL_FUNC_ADD, GL_MAX);[/code]
Main window:
[code]glBlendEquation(GL_FUNC_ADD);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);[/code]
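To check my understanding of what the FBO equations do per pixel, a little CPU model (`fbo_blend` is my own name; note that for GL_MIN/GL_MAX the blend factors are ignored, so only the equations matter for the alpha channel):

```c
typedef struct { float r, g, b, a; } Pixel;

/* Model of the FBO pass: color uses GL_ONE/GL_ZERO with GL_FUNC_ADD,
 * so the source simply replaces the destination; alpha uses GL_MAX,
 * so repainting the same pixel can never raise its alpha beyond the
 * brush's maximum. */
static Pixel fbo_blend(Pixel dst, Pixel src)
{
    Pixel out;
    out.r = src.r;
    out.g = src.g;
    out.b = src.b;
    out.a = dst.a > src.a ? dst.a : src.a;
    return out;
}
```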


