# OpenGL 2D lighting

## Recommended Posts

I'm working on a 2D isometric game in OpenGL (using textured quads as sprites), and I'm currently playing around with ideas for enhancing my lighting system. Right now I use dynamic lighting, subdividing quads and lighting them, but I'm wondering if there is any way I could implement a more advanced form of lighting that takes into account the angle between the surface and the light source. I'm fairly new to the whole subject of lighting, so I'm basically just looking for some direction as to whether this would be feasible given the limitations of my project, and what kind of result it might produce. Here are a couple of screens to illustrate my current lighting method. Thanks.

##### Share on other sites
How are you doing the lighting currently? Are you just attenuating by distance? Are you computing the lighting values yourself, or using OpenGL's lighting system?

##### Share on other sites
I do not use OpenGL's lighting; I calculate the attenuation myself by distance. Each light has two variables, center_radius and falloff_radius: within the center radius the light is totally bright (1.0), and over the falloff radius the brightness linearly interpolates between the center radius's brightness (1.0) and whatever the ambient light in the scene may be.
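
As a minimal sketch, that two-radius falloff might look like this (the function and parameter names here are illustrative, not from the post):

```cpp
// Two-radius falloff: fully bright inside center_radius, linear fade down to
// the ambient level at falloff_radius, clamped to ambient beyond that.
float attenuation(float distance, float center_radius, float falloff_radius, float ambient)
{
    if (distance <= center_radius)
        return 1.0f;                    // fully lit
    if (distance >= falloff_radius)
        return ambient;                 // only ambient light remains
    float t = (distance - center_radius) / (falloff_radius - center_radius);
    return 1.0f + t * (ambient - 1.0f); // linear blend from 1.0 down to ambient
}
```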

##### Share on other sites
Quote:
Original post by ari556655: I do not use OpenGL's lighting; I calculate the attenuation myself by distance. Each light has two variables, center_radius and falloff_radius: within the center radius the light is totally bright (1.0), and over the falloff radius the brightness linearly interpolates between the center radius's brightness (1.0) and whatever the ambient light in the scene may be.
Lighting calculations can get more or less arbitrarily complex, but if you just want to add direction into the mix, that's relatively straightforward.

For a point light, what you're interested in is the dot product of the vertex normal and the normalized vector from the vertex to the light's position, e.g.:
```
// direction from the vertex toward the light
vector3 v = normalize(light.position - vertex.point);
// cosine of the angle between the surface normal and the light direction
float d = dot(v, vertex.normal);
```
If the result of the dot product is negative, there is no contribution from the light; otherwise, you can use the value (which should be in the range [0, 1], more or less) to attenuate the contribution of the light.

(Note: The above is off the top of my head, and I can't guarantee its correctness. This stuff is well documented online though - just Google e.g. 'lighting equations' and you should find plenty of references.)
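
Spelled out in C++ with a hand-rolled vector type, the clamped version might look like this (every name below is illustrative):

```cpp
#include <algorithm>
#include <cmath>

struct Vector3 { float x, y, z; };

Vector3 operator-(const Vector3& a, const Vector3& b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
float dot(const Vector3& a, const Vector3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

Vector3 normalize(const Vector3& v)
{
    float len = std::sqrt(dot(v, v));
    return {v.x / len, v.y / len, v.z / len};
}

// Clamped N.L term: surfaces facing away from the light contribute nothing.
float diffuse_factor(const Vector3& point, const Vector3& normal, const Vector3& light_pos)
{
    Vector3 to_light = normalize(light_pos - point);
    return std::max(0.0f, dot(to_light, normal));
}
```

You would multiply this factor into the light's distance attenuation before adding the light's contribution to the vertex color.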

Once you get that working, you might also try out some different attenuation functions for distance. A linear function will work ok, but it's common for the distance attenuation function to be non-linear.
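
For example, the classic fixed-function OpenGL falloff is a constant/linear/quadratic curve, which is easy to reproduce in a software implementation (the coefficient names here are mine):

```cpp
// 1 / (kc + kl*d + kq*d^2): with kq > 0 this approximates an
// inverse-square falloff, which tends to look more natural than linear.
float attenuate(float distance, float kc, float kl, float kq)
{
    return 1.0f / (kc + kl * distance + kq * distance * distance);
}
```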
