Alpha blending with OpenGL


Recommended Posts

Hello, I've been working on this little engine of mine. It uses OpenGL! I haven't used OpenGL in a long time; I've mostly used Direct3D, but I feel like a change and want to code with a different API.

Back on topic: I've decided that there are two types of polygons I need to render, one for models and one for special effects. The model polygons don't really use any blending, but the special-effect polygons do; they are used for particles and similar things. Anyway, I figured I would use only one vertex format for everything so I would only have to work with one type of vertex structure, and I've come to the conclusion that all I need is: position, normal, color, and texture coordinates.

Each model polygon may use two different textures: one for the base image and the other for detail (sort of like a bump map, but not quite). Because the detail texture will be the same size as the base texture, I don't need another set of texture coordinates; I just reuse the base ones. I can blend them with multi-pass rendering or multi-texturing; either way it's the same vertex structure I use for everything.

So my question is: what blending operations do I need to set up for source and destination? I should mention that all textures will have an alpha channel. The base textures will be RGBA and the detail textures will be 16-bit grayscale with an alpha channel. I use the alpha channels to blend the textures; that way, if I want special effects to be blended, I only specify the blend factor with the alpha. That being said, I want the alpha value to dictate how the polygons and their textures are blended. I'm using this: glBlendFunc( GL_SRC_ALPHA, GL_ONE )! I haven't tested it with any polygons yet because I'm still in the design stage, just writing out the basic framework.

I want to set all the render states for OpenGL once and not have to touch them again, unless of course I have to refresh the application because the user task-switched out of it. But you get the basic idea: one-time initialization.

Now that everything is done with polygons (triangles), I only have three types of primitives to draw: single triangles, triangle strips, and triangle fans. That's a pretty easy way of doing things, I think. I really don't need (or want) a whole lot to make a complex scene, and using one vertex structure for everything in the engine seems really efficient, at least in my eyes.

So are my blending operations correct? Should I not use one type of structure for all vertex information? How do I use the alpha channel to govern the transparency/translucency of my polygons? I think I had it right, did I? Just simple questions for all you pros out there. I hope you can help me.
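To make the plan concrete, here is roughly the one-time setup I have in mind (just a sketch, nothing tested yet):

// One-time render-state initialization.
glEnable( GL_DEPTH_TEST ) ;
glDepthFunc( GL_LEQUAL ) ;
glEnable( GL_TEXTURE_2D ) ;
glEnable( GL_BLEND ) ;
glBlendFunc( GL_SRC_ALPHA, GL_ONE ) ;   // the blend function in question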

Quote:
I've decided that there are two types of polygons I need to render, one for models and one for special effects.

Separate the model from the material, i.e. use

struct MM {
    int modelID ;
    int materialID ;
} ;

and not

struct model {
    struct vertex *verts ;   /* vertex data, etc. */
    int textureID ;
} ;

Quote:
I want to set all the render states for OpenGL once and not have to touch them again, unless of course I have to refresh the application because the user task-switched out of it. But you get the basic idea: one-time initialization.
Unless everything is drawn with the same material (OpenGL states, textures, etc.) you will have to change something at some point; otherwise trees would end up using the same material as cars.

Quote:
Now that everything is done with polygons (triangles), I only have three types of primitives to draw: single triangles, triangle strips, and triangle fans. That's a pretty easy way of doing things, I think.
Even easier: just use GL_TRIANGLES for everything. A lot of apps do this, e.g. Quake 3.

Quote:
So are my blending operations correct? Should I not use one type of structure for all vertex information? How do I use the alpha channel to govern the transparency/translucency of my polygons? I think I had it right, did I?
Often-used blend methods are (GL_ONE, GL_ONE) and (GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA), i.e. one size does NOT fit all.
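Something like this, chosen per material rather than once at startup:

glEnable( GL_BLEND ) ;

// Additive: can only brighten what's behind it (fire, glows).
glBlendFunc( GL_ONE, GL_ONE ) ;

// Standard transparency: source alpha interpolates source over destination.
glBlendFunc( GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA ) ;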

I really didn't mean that the actual model and material data would be combined in the same structure. Sorry, I should have clarified better.

What I meant was one vertex format that is used for all polygons; hence the world mesh, model meshes, and particles will all use the same vertex format. The actual polygon is just a simple structure, like so:

// Vertex definition
//
typedef struct _GLVERTEX {
    FLOAT X, Y, Z ;
    BYTE  R, G, B, A ;
    FLOAT S, T ;
} GLVERTEX, *LPGLVERTEX ;

// Polygon definition
//
typedef struct _GLPOLYGON {
    DWORD dwVertices[ 3 ] ;
    UINT  uBaseTexID ;
    UINT  uBumpTexID ;
} GLPOLYGON, *LPGLPOLYGON ;

That's it! Basically I use these same structures throughout the entire engine for every 3D object.
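Since everything shares the interleaved GLVERTEX layout, one vertex-array setup works for every mesh. Roughly (untested; pVerts stands for whichever mesh is being drawn):

glEnableClientState( GL_VERTEX_ARRAY ) ;
glEnableClientState( GL_COLOR_ARRAY ) ;
glEnableClientState( GL_TEXTURE_COORD_ARRAY ) ;

// Same stride everywhere, because every mesh uses GLVERTEX.
glVertexPointer( 3, GL_FLOAT, sizeof( GLVERTEX ), &pVerts[ 0 ].X ) ;
glColorPointer( 4, GL_UNSIGNED_BYTE, sizeof( GLVERTEX ), &pVerts[ 0 ].R ) ;
glTexCoordPointer( 2, GL_FLOAT, sizeof( GLVERTEX ), &pVerts[ 0 ].S ) ;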

My real question is how I would set up blending. The textures will all have an alpha channel. Also, alpha testing as well as alpha blending will be enabled; I use both so that the edges that pass the test can still look smooth.
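Something like this is what I'm picturing (a sketch; the 0.1 cutoff is just a guess on my part):

// The alpha test rejects nearly transparent texels outright...
glEnable( GL_ALPHA_TEST ) ;
glAlphaFunc( GL_GREATER, 0.1f ) ;

// ...while blending smooths the semi-transparent edge texels that remain.
glEnable( GL_BLEND ) ;
glBlendFunc( GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA ) ;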

Anyway, I'll stick with GL_TRIANGLES then, as you suggested. About your material idea: I'm already doing something similar with the polygon structure; the only difference is that it contains vertex indices too. The 3D object or entity will be represented very differently from those two simple structures. I'm sorry if you misunderstood me and thought I meant representing 3D objects (data- and logic-wise) with the same structure; I meant the same as in the same primitive geometry.

I will explain a little further so that you can better understand what I'm trying to accomplish here.

Basically, special effects (i.e. particles and such) will be blended, and I want to control their blending using the alpha channel. Also, characters and other 3D objects will appear translucent at times, so I figured I would control that with the alpha channel too.

So my setup uses the color component to blend color with the textures, and the alpha component for translucency. I use the alpha channels in the textures to clip out areas that I don't want to be seen; I do this so that I can get better-looking 3D objects with fewer polygons. Meanwhile, the alpha component of the vertex color will blend the whole polygon if needed.

My problem is that I'm seeing that I will not be able to use one blending operation for everything. I may be able to use the alpha channel to blend particles and such, but blending models to make them look ghostly might not work the same way. I figured I would be able to blend specific parts of a texture if its alpha were, say, blurred around the edges of the areas I needed to clip out; this would make the edges look smoother and less jaggy, and it would do so because of the alpha components in the texture. But I also want the alpha component of the vertex color to override or add to that effect when I decrease it. I'm guessing that this isn't possible.
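For what it's worth, the fixed-function default GL_MODULATE texture environment does multiply the vertex-color alpha with the texture alpha, so fading the vertex color scales the texture's alpha mask rather than replacing it. A minimal sketch:

glTexEnvi( GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE ) ;   // the default

// Fragment alpha = vertex alpha * texture alpha, so this fades the whole
// mesh to 50% while the texture's alpha mask keeps clipping the edges.
glColor4f( 1.0f, 1.0f, 1.0f, 0.5f ) ;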

typedef struct _GLPOLYGON {
    DWORD dwVertices[ 3 ] ;
    UINT  uBaseTexID ;   // <- this doesn't belong here, but in the material
    UINT  uBumpTexID ;   // <- ditto
} GLPOLYGON, *LPGLPOLYGON ;

Keep the mesh and material structures totally separate. Say the polygon/mesh suddenly changes material, e.g. the player wears a green shirt instead of a polka-dot one: with your layout you would have to go through all the polygons changing the various IDs.

Quote:
My real question is how I would set up blending. The textures will all have an alpha channel. Also, alpha testing as well as alpha blending will be enabled; I use both so that the edges that pass the test can still look smooth.
Only enable them when they're needed, otherwise performance will suffer.

Okay, I think I see what you mean now: use a reference to a material instead of raw texture identities. That way, if I want to change textures on an object, all I have to do is reassign the material. Better yet, I only associate materials with objects and don't tie them to the polygons at all. So then I would have:

// New polygon definition
//
typedef DWORD GLPOLYGON[ 3 ] ;      // three vertex indices
typedef GLPOLYGON *LPGLPOLYGON ;

// Material definition
//
typedef struct _GLMATERIAL {
    UINT uBaseTextureID ;
    UINT uBumpTextureID ;
    UINT uBlendFlags ;
} GLMATERIAL, *LPGLMATERIAL ;

// Basic object definition
//
typedef struct _OBJECT {
    LPGLPOLYGON pMesh ;
    GLMATERIAL  glMaterial ;

    ...

} OBJECT, *LPOBJECT ;

That looks a lot better! Then all I have to do is set the material and draw the GL_TRIANGLES. And by keeping the blend flags with the material, I can use them to set the blend operations.
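"Set the material" could then look something like this (a sketch; the flag values and the SetMaterial name are made up for illustration):

// Hypothetical blend-flag values stored in uBlendFlags.
#define BLEND_NONE      0
#define BLEND_ALPHA     1   // standard transparency
#define BLEND_ADDITIVE  2   // additive (particles, glows)

void SetMaterial( const GLMATERIAL *pMaterial )
{
    glBindTexture( GL_TEXTURE_2D, pMaterial->uBaseTextureID ) ;

    switch ( pMaterial->uBlendFlags ) {
    case BLEND_ALPHA :
        glEnable( GL_BLEND ) ;
        glBlendFunc( GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA ) ;
        break ;
    case BLEND_ADDITIVE :
        glEnable( GL_BLEND ) ;
        glBlendFunc( GL_SRC_ALPHA, GL_ONE ) ;
        break ;
    default :
        glDisable( GL_BLEND ) ;
        break ;
    }
}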

By the way, the bump texture ID is used as a detail texture (more or less a bump map). It's blended over the polygon to give it more detail, a little bump of sorts, without doing expensive bump-mapping calculations. Also, the normal is used for specular highlights and a little lighting as well, but the real lighting comes from dynamic lighting with light maps and such.

So I'll take your advice and use the material object. I guess I wasn't really thinking on that one, huh? Or I just wasn't thinking ahead. Maybe I should extend the material structure a little so that I can also specify which colors to alpha-test for.

Thanks!

Share this post


Link to post
Share on other sites
Actually, I'm going to change something again: I will change the polygon structure to just an array of three LPGLVERTEX pointers. If I use an array of three DWORD indices, I'm limited in the number of polygons I can represent; with vertex pointers I'm not limited at all, besides system resources. I can have huge worlds with something like 12 million polygons as opposed to a little over 4 million with a DWORD.
