# OpenGL: Accelerating Interleaved Particle Systems

## Recommended Posts

aleks_1661    188
Hey All,

------------------------------------------------------------
Quick apology - I decided to move this to OpenGL rather than the 'Graphics' forum, since I am using OpenGL and I need info that is specific to the API.
------------------------------------------------------------

I am wondering how to render particles where two (or more) systems are intersecting. I was going to implement my system using vertex buffers with one buffer per material, but I have thought of a use case in my game where I am likely to have a number of particle systems intersecting, and unfortunately some of the effects will not be possible with additive blending alone. A particular example is a vehicle explosion (the game is a C&C-style 3D affair) where there might be a fireball, a central smoke column, and smoke trails for the various parts blown off.

While trying to think of a solution I did consider just leaving it as is, whereby I sort each particle system back to front and render the buffers one by one; the actual order of rendering the individual buffers is arbitrary. The problem is that, for example, flames on the floor several yards behind a column of heavy smoke should not be visible (or should at least be partially obscured) from the camera, but depending on the order the vertex buffers are rendered, the flame material may end up being rasterised on top of the smoke particles.

I realise that if I detect the intersection of two particle systems with a bounding-box test, I can then prepare them differently so that the two systems are interleaved, sorted and rendered together, but I believe this would prevent me from using any sort of fast rendering. The only way I can think to achieve this is glBegin ... glEnd, but there is no way that would give me any sort of acceptable performance. Does anyone know of a way to either get round the situation or render the interleaved systems at a decent speed?

I have been having a look around for info but have not been able to find much that addresses this problem. I looked back through the forums, but I apologise if this has been addressed and I missed it. Thanks, Alex

Basiror    241
cross post

aleks_1661    188
Hence the apology at the beginning...

I am beginning to think that the only way to get good results is to render occluded systems individually and then render the remaining systems grouped by material, building the scene up from back to front. Or just give it up as a bad idea and hope that the resulting scene won't look too bad with a few overlaps.

----------------------------------------------

Just found a way of doing it nicely, http://www.gamedev.net/community/forums/topic.asp?topic_id=216773

Basically, if I use premultiplied-alpha images then I don't have to change blend modes, and if I place all the particle images into a single texture (either at run-time or beforehand) then I can pass all the particles as a single vertex array. Cushty!
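To see why a single blend mode is enough, here is a CPU-side sketch of the premultiplied-alpha trick (the `Color` struct and function names are illustrative, not from the linked thread). Everything is blended as `result = src + dst * (1 - src.a)`, i.e. `glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA)`: conventional alpha blending comes from premultiplying a texture's RGB by its alpha, and additive blending comes from authoring the texture with alpha zero so the destination is never attenuated.

```cpp
#include <cassert>
#include <cmath>

// One RGBA colour, channels in [0, 1].
struct Color { float r, g, b, a; };

// The single blend mode used with premultiplied alpha:
//   result = src * 1 + dst * (1 - src.a)
// which is glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA) in OpenGL.
Color blendPremultiplied(const Color& src, const Color& dst) {
    float k = 1.0f - src.a;
    return { src.r + dst.r * k,
             src.g + dst.g * k,
             src.b + dst.b * k,
             src.a + dst.a * k };
}

// Conventional alpha blending: bake the alpha into RGB ahead of time.
Color premultiply(const Color& c) {
    return { c.r * c.a, c.g * c.a, c.b * c.a, c.a };
}

// Additive blending: keep RGB as-is and force alpha to zero, so the
// blend equation degenerates to result = src + dst.
Color additive(const Color& c) {
    return { c.r, c.g, c.b, 0.0f };
}
```

So smoke textures get the `premultiply` treatment and fire/spark textures the `additive` one, and both kinds of particle can then sit in the same vertex array under one blend state.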

[Edited by - aleks_1661 on October 5, 2004 5:05:03 AM]

Ilici    862
Why not have a particle render manager do the rendering? The particle systems at work (smoke, flames) would just pass the positions, sizes and textures of their billboards to the particle renderer. The renderer would then sort the particles by depth value (since you're inputting the particles one by one you can use insertion sort, or in any case some fast sorting method) and render them.
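The insertion-sort idea above can be sketched like this (the `Billboard` fields and function names are placeholders for illustration, not from the post): each system submits its particles one at a time, and the renderer keeps one list sorted back-to-front, so particles from different systems end up correctly interleaved.

```cpp
#include <cassert>
#include <vector>

// Hypothetical billboard record handed to the particle renderer.
struct Billboard {
    float depth;    // distance from the camera (e.g. eye-space -z)
    int   material; // which texture/blend setup this particle uses
};

// Insert one billboard into a list kept sorted back-to-front
// (largest depth first). Because particles arrive one by one,
// insertion into an already-sorted list is cheap.
void insertSorted(std::vector<Billboard>& list, const Billboard& b) {
    auto it = list.begin();
    while (it != list.end() && it->depth >= b.depth) ++it;
    list.insert(it, b);
}
```

After all systems have submitted, walking the list front-to-back of the vector draws everything farthest-first, regardless of which system each particle came from.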

aleks_1661    188
All that is implemented, but the problem I want to address is what to do when two particle systems are intersecting. Unless you use additive blending (as most systems do for coronas, sparks etc.) you have to mix the two sets of particles together to avoid one type incorrectly overdrawing the other.

i.e. imagine looking vertically down at a set of billboarded polys:

```
     --   --      --
  ==  ==             ==  == --
    ==  ==    --      --   --   --
                                       -- particle type 1
  ==    ==                             == particle type 2
         ^
         |
       Camera

  =-====--==-----    < this shows what type would be visible
                       in each column
```

Without rendering the particles all mixed into one vertex array, you will essentially be doing the following:

```
     --   --      --         --        <= particle type 1 rendered first
       --      --   --   --
  ==  ==     ==  ==                    then render the second particle type:
    ==  ==                             <= particle type 2 will overdraw type 1
  ==    ==
         ^
         |
       Camera

  ======-===-==--    < the visible particles
  =-====--==-----    < this is what should be seen
```

If using vertex arrays you can only have one material per buffer, which means I cannot change material mid-render. And using glBegin/glEnd is slow.

If I could combine a number of particle textures into a single texture then, by varying the texture coords, I can effectively render particles with more than one 'material' in a single buffer. Extending that with the premultiplied-alpha technique mentioned above, I can actually give the appearance of more than one material and more than one blend mode per buffer. Though I have read up on premultiplied alpha, and it can give particles a dark edge, because the colours become darker as the alpha value of a pixel decreases.
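Varying the texture coords per particle amounts to a simple atlas lookup. A minimal sketch, assuming a square atlas with row-major tile numbering (the `tilesPerSide` layout and names are assumptions for illustration, not details from the post):

```cpp
#include <cassert>

// UV rectangle for one tile inside the atlas.
struct UVRect { float u0, v0, u1, v1; };

// Map a tile index in an NxN atlas to its UV rectangle.
// Tile 0 is the top-left tile; tiles are numbered row by row.
UVRect atlasTileUV(int tileIndex, int tilesPerSide) {
    float size = 1.0f / tilesPerSide;      // width/height of one tile in UV space
    int   col  = tileIndex % tilesPerSide; // tile column in the atlas
    int   row  = tileIndex / tilesPerSide; // tile row in the atlas
    return { col * size, row * size, (col + 1) * size, (row + 1) * size };
}
```

Each particle then just stores its tile index, and the quad's four UVs come from this rectangle, so smoke and fire quads can live in one vertex array bound to one texture.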

OK, I think that ASCII diagram is going to confuse people; I'll have to try to make an image to show what I mean.


### Similar Content

• I'm trying to get some legacy OpenGL code to run with a shader pipeline.
The legacy code uses glVertexPointer(), glColorPointer(), glNormalPointer() and glTexCoordPointer() to supply the vertex information.
I know that it should be using glVertexAttribPointer() etc. to clearly define the layout, but that is not an option right now since the legacy code can't be modified to that extent.
I've got a version 330 vertex shader to somewhat work:

```glsl
#version 330

uniform mat4 osg_ModelViewProjectionMatrix;
uniform mat4 osg_ModelViewMatrix;

layout(location = 0) in vec4 Vertex;
layout(location = 2) in vec4 Normal;   // Velocity
layout(location = 3) in vec3 TexCoord; // TODO: is this the right layout location?

out VertexData {
    vec4 color;
    vec3 velocity;
    float size;
} VertexOut;

void main(void)
{
    vec4 p0 = Vertex;
    vec4 p1 = Vertex + vec4(Normal.x, Normal.y, Normal.z, 0.0f);
    vec3 velocity = (osg_ModelViewProjectionMatrix * p1 - osg_ModelViewProjectionMatrix * p0).xyz;
    VertexOut.velocity = velocity;
    VertexOut.size = TexCoord.y;
    gl_Position = osg_ModelViewMatrix * Vertex;
}
```

What works is the Vertex and Normal information that the legacy C++ OpenGL code seems to provide in layout locations 0 and 2. This is fine.
What I'm not getting to work is the TexCoord information that is supplied by a glTexCoordPointer() call in C++.
Question:
What layout location is the old standard pipeline using for glTexCoordPointer()? Or is this undefined?

Side note: I'm trying to get an OpenSceneGraph 3.4.0 particle system to use custom vertex, geometry and fragment shaders for rendering the particles.

• Hi, I am new to this forum and wanted to ask for help: I want to generate real-time terrain using a 32-bit heightmap. I am good at C++ and have started learning OpenGL, as I am very interested in making landscapes. I have looked around the internet for help on this topic, but I am not getting the hang of the concepts and what they are doing. Can someone here suggest some good resources for building a terrain engine (for example tutorials, books etc.) so that I can understand the whole concept of terrain generation?

• By KarimIO
Hey guys. I'm trying to get my application to work on my Nvidia GTX 970 desktop. It currently works on my Intel HD 3000 laptop, but on the desktop, every time I bind textures, specifically ones from framebuffers, I get half a second of lag. This is done four times, as I have three RGBA textures and one 32F depth buffer. I tried to use debugging software for the first time - RenderDoc only shows SwapBuffers() and no OGL calls, while Nvidia Nsight crashes upon execution, so neither is helpful. Without the binds it runs regularly, and this does not happen with non-framebuffer binds.
```cpp
GLFramebuffer::GLFramebuffer(FramebufferCreateInfo createInfo) {
    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);

    textures = new GLuint[createInfo.numColorTargets];
    glGenTextures(createInfo.numColorTargets, textures);
    GLenum *DrawBuffers = new GLenum[createInfo.numColorTargets];

    for (uint32_t i = 0; i < createInfo.numColorTargets; i++) {
        glBindTexture(GL_TEXTURE_2D, textures[i]);
        GLint internalFormat;
        GLenum format;
        TranslateFormats(createInfo.colorFormats[i], format, internalFormat); // returns GL_RGBA and GL_RGBA
        glTexImage2D(GL_TEXTURE_2D, 0, internalFormat, createInfo.width, createInfo.height, 0, format, GL_FLOAT, 0);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
        DrawBuffers[i] = GL_COLOR_ATTACHMENT0 + i;
        glBindTexture(GL_TEXTURE_2D, 0);
        glFramebufferTexture(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0 + i, textures[i], 0);
    }

    if (createInfo.depthFormat != FORMAT_DEPTH_NONE) {
        GLenum depthFormat;
        switch (createInfo.depthFormat) {
        case FORMAT_DEPTH_16:           depthFormat = GL_DEPTH_COMPONENT16; break;
        case FORMAT_DEPTH_24:           depthFormat = GL_DEPTH_COMPONENT24; break;
        case FORMAT_DEPTH_32:           depthFormat = GL_DEPTH_COMPONENT32; break;
        case FORMAT_DEPTH_24_STENCIL_8: depthFormat = GL_DEPTH24_STENCIL8;  break;
        case FORMAT_DEPTH_32_STENCIL_8: depthFormat = GL_DEPTH32F_STENCIL8; break;
        }

        glGenTextures(1, &depthrenderbuffer);
        glBindTexture(GL_TEXTURE_2D, depthrenderbuffer);
        glTexImage2D(GL_TEXTURE_2D, 0, depthFormat, createInfo.width, createInfo.height, 0, GL_DEPTH_COMPONENT, GL_FLOAT, 0);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
        glBindTexture(GL_TEXTURE_2D, 0);
        glFramebufferTexture(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, depthrenderbuffer, 0);
    }

    if (createInfo.numColorTargets > 0)
        glDrawBuffers(createInfo.numColorTargets, DrawBuffers);
    else
        glDrawBuffer(GL_NONE);

    if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
        std::cout << "Framebuffer Incomplete\n";

    glBindFramebuffer(GL_FRAMEBUFFER, 0);
    width = createInfo.width;
    height = createInfo.height;
}

// ...

// FBO Creation
FramebufferCreateInfo gbufferCI;
gbufferCI.colorFormats = gbufferCFs.data();
gbufferCI.depthFormat = FORMAT_DEPTH_32;
gbufferCI.numColorTargets = gbufferCFs.size();
gbufferCI.width = engine.settings.resolutionX;
gbufferCI.height = engine.settings.resolutionY;
gbufferCI.renderPass = nullptr;
gbuffer = graphicsWrapper->CreateFramebuffer(gbufferCI);

// Bind
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, fbo);

// Draw here...

// Bind to textures
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, textures[0]);
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, textures[1]);
glActiveTexture(GL_TEXTURE2);
glBindTexture(GL_TEXTURE_2D, textures[2]);
glActiveTexture(GL_TEXTURE3);
glBindTexture(GL_TEXTURE_2D, depthrenderbuffer);
```

Here is an extract of my code. I can't think of anything else to include. I've really been butting my head against a wall trying to think of a reason, but I can think of none and all my research yields nothing. Thanks in advance!

• Hi everyone, I've shared my 2D game engine source code. It's the result of 4 years working on it (and I still continue improving its features) and I want to share it with the community. You can see some videos on YouTube and some demo gifs on my Twitter account.
This engine was developed as an End-of-Degree Project and is coded in JavaScript, WebGL and GLSL. The engine is written from scratch.
This is not a professional engine, but it's for learning purposes, so anyone can review the code and learn the basics of graphics, physics or game engine architecture. Source code is on this GitHub repository.
I'm available for a good conversation about Game Engine / Graphics Programming
• By C0dR
I would like to introduce the first version of my physically based camera rendering library, written in C++, called PhysiCam.
PhysiCam is an open source OpenGL C++ library which provides physically based camera rendering and parameters. It is designed to be used as either a static or dynamic library and can be integrated into existing applications.

The following features are implemented:
- Physically based sensor and focal length calculation
- Autoexposure
- Manual exposure
- Lens distortion
- Bloom (influenced by ISO, shutter speed, sensor type etc.)
- Bokeh (influenced by aperture, sensor type and focal length)
- Tonemapping
You can find the repository at https://github.com/0x2A/physicam

I would be happy about feedback, suggestions or contributions.
