OpenGL Implementing 2D bump maps - is this correct?


Nick of ZA    177
Hi All

I'm thinking of making a 2D space shooter that uses fake 3D lighting, by utilising 2D bump maps. I saw this technique in a game called Starport GE (which you can google and download freely if you wish to see it in action; you'll need to sign up, which takes 30 seconds). I want to know how to implement the 2D bump-mapping effect they have.

What I do know is that this method only requires a couple of bitmap files: the basic image to be bump-mapped, and a heightmap image OR a normals image (the latter is more efficient, AFAICT). Similar examples of 2D bump mapping can be found here (viewable in your browser):

http://www.unitzeroone.com/blog/2006/03/28/flash-8-example-3d-bumpmapping-using-filters-source-included/
http://www.bytearray.org/?p=74

This is the kind of effect I want to get in C++, except that the light source will remain fixed and the objects will move... which is an arbitrary distinction.

Right, before I ask questions, let me give you a scene from my proposed game to put this into perspective.

The scene: a top-down view of an arbitrary star system (meaning you are looking directly down at the solar/gravitational/orbital plane, in top-down shooter style). You are a pirate starship captain, coasting about along the solar plane, just itching for your next bit of loot. Suddenly, your sensors indicate a cargo ship on the far side of the star. Thrusting along the plane, you shoot past the star and head for your opponent.

In terms of game functionality, here's what happens. As you cruise through space, the star (i.e. the only light source of any consequence) sheds light on your ship. As you begin approaching the star, the light strikes your ship more for'ard. At the moment you pass the star (very closely) to your right, the light strikes your ship directly on its starboard side. And as the star recedes behind you on the way to your prey, the highlight on your ship recedes aft (first quickly, and then more slowly as the distance between your ship and the sun increases). (It's important to imagine this example from the top down, as per the game's viewpoint; think any top-down shooter, e.g. my faves, StarControl or StarScape.)

I have done some research on this but am still hazy in a number of areas. So here is the process I presume is needed to implement the above scenario. Please correct me at every turn. (Let's assume we have a simple game set up which shows a starfield with a sun in the centre of it, and a basic starship that we can fly around near the sun. Now we wish to extend the functionality to a dynamically lit 2D ship as described above.)

1. Create the textured model. Align it facing some default direction, e.g. north.

2. Render the model with ambient/diffuse lighting to get the underlying "colour template" for your end result.

3. Render the model with a light source set up at a default direction (I assume that if the original model in step 1 is facing a cardinal direction, then this light source must be placed at some NON-cardinal direction, e.g. NE/NW/SE/SW, so as to get an accurate idea of the object's shape in the x and y axes).

4. Convert this to greyscale and back to RGB so that it uses brightness to show pixel topography. As shades of grey, the three colour channels of each pixel carry the same information, so in reality one could read just one channel for this step(?). If you didn't do this, you'd need to calculate luminance, meaning your code would have to interpret the HSL colour model (not really worthwhile...). Anyhow, if you choose to skip step 5, you will have to calculate the surface normals for your pixels at runtime.

5. (OPTIONAL STEP) Use some tool(?) to scan the greyscale image and generate the normal map from it. This stores each pixel's normal vector in the RGB channels: R=x, G=y, B=z. (See the sketch at the bottom of this post for roughly how I picture this conversion.)

6. Combine the colour template (the underlying, neutrally lit image) with the normal map using a fixed light source, to get a 3D-looking object. The specifics of this are based on various bits of vector maths and pixel-blending calcs. If anyone can elaborate on this part it would be great; I've viewed a few tutorials but I don't follow step by step what is being done. I guess this step needs to be broken down a lot more for me to be able to follow it.

Am I (sort of) on the right track with all of this?

I am thinking of perhaps using OpenGL (on top of SFML) to do the bump mapping. Would it be fairly straightforward to do this in OpenGL, remembering of course that I only need it as a 2D effect? Or would I be better off implementing it by manipulating pixels directly, excluding OpenGL completely for these ops?

P.S. Does OpenGL even support direct pixel-level manipulation? I found this on another gamedev thread, someone said:
Quote:
if you thought pixel manipulation was slow in SDL it gets worse in opengl however there are shaders and other workarounds that can make this multiples faster.
*phew* Thanks for reading. Any and all responses appreciated. -Nick [Edited by - Nick of ZA on April 19, 2008 1:14:13 PM]
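PPS. For anyone following along, here is a minimal C++ sketch of how I picture step 5, the heightmap-to-normal-map conversion. The central-differences slope estimate and the 0-255 packing are my own assumptions, not taken from any particular tool:

#include <algorithm>
#include <cmath>
#include <cstdint>
#include <vector>

struct RGB { uint8_t r, g, b; };

// Convert an 8-bit greyscale heightmap into a packed normal map
// (R=x, G=y, B=z, each component remapped from -1..1 to 0..255).
std::vector<RGB> heightmapToNormalMap(const std::vector<uint8_t>& height,
                                      int w, int h, float strength = 2.0f)
{
    std::vector<RGB> normals(w * h);
    // Sample the heightmap with edge clamping so neighbours always exist.
    auto at = [&](int x, int y) {
        x = std::max(0, std::min(w - 1, x));
        y = std::max(0, std::min(h - 1, y));
        return height[y * w + x] / 255.0f;
    };
    for (int y = 0; y < h; ++y) {
        for (int x = 0; x < w; ++x) {
            // Central differences approximate the surface slope at (x, y).
            float dx = (at(x + 1, y) - at(x - 1, y)) * strength;
            float dy = (at(x, y + 1) - at(x, y - 1)) * strength;
            // Build a normal perpendicular to that slope; z points out of the screen.
            float nx = -dx, ny = -dy, nz = 1.0f;
            float len = std::sqrt(nx * nx + ny * ny + nz * nz);
            nx /= len; ny /= len; nz /= len;
            normals[y * w + x] = { uint8_t((nx * 0.5f + 0.5f) * 255.0f),
                                   uint8_t((ny * 0.5f + 0.5f) * 255.0f),
                                   uint8_t((nz * 0.5f + 0.5f) * 255.0f) };
        }
    }
    return normals;
}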

Gage64    1235
There's a tutorial on 2D bump-mapping here. I don't know if it uses the technique you want, but you might find it interesting anyway.

Nick of ZA    177
Thanks for that; I'm reading it now to see if there's anything new...

BTW, if anyone isn't sure what I mean by a "normals image", see about two thirds of the way down this page:

http://www.katsbits.com/htm/tutorials/bumpmaps_from_images_pt2.htm

...or search for the word "distinctive"; the image is just beside it, and you will see what I mean.

MOOORRRE INFO, PLEASE!

-Nick

ahayweh    164
Bump-mapping in OpenGL is a pretty trivial thing to accomplish (2D and 3D use the same methods; 2D just uses an orthographic projection), and there are endless tutorials out there to show you how (google-me-baby). Now, if you are trying to accomplish it in a software-rendering way, then that's a different story altogether. So which way is it?
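For the hardware route, the 2D part really is just the projection setup. A minimal sketch using the old fixed-function calls; the 800x600 window size is an assumption, and context creation (SFML, GLUT, whatever) is omitted:

// Orthographic projection: draw everything in plain 2D pixel
// coordinates (top-left origin, y increasing downward).
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(0.0, 800.0, 600.0, 0.0, -1.0, 1.0);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();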

Nick of ZA    177
I guess the question is: which is faster?

I plan to have a fairly high number of bump-mapped sprites onscreen at once, maxing somewhere around a hundred (maybe less -- whatever is possible once AI, collision detection and other expensive operations are thrown into the mix).

What would the speed of something like this be in OpenGL, I wonder, using 64x64 textures? Wikipedia claims that bump mapping is computationally cheap (though in context it is being compared with adding real geometric complexity to 3D environments).

I am thinking that if OpenGL can do this, I'd rather let it do so and focus my time on optimizing other things as mentioned above.

(I have to learn how to use matrices/transforms either way, so I may as well accept the helping hand of OpenGL.)

I'll be back. For help. I assure you.

PS. I would still appreciate hearing how this is done in software, though; i.e. are the methods I outlined above correct?

-Nick

kiwibonga    183
You compute the normal at a given point based on the height differences in your bump map. The normals then serve to compute the illumination of each pixel, just as a face's normal does in flat shading or a vertex's normal does in smooth shading: by comparing the normal vector to the direction of the light (or lights).

This can be done with a shader, so the GPU takes care of the majority of the work. Doing it in software would probably not be a problem on today's computers, but it's still a lot of overhead that can be easily avoided... I don't think anyone will recommend that you do it in software.
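To make that concrete, here is a minimal software sketch of the per-pixel step (reusing the RGB struct from the sketch in the first post; the packed 0-255 normal format and a single directional light are assumptions):

#include <algorithm>
#include <cstdint>

struct Vec3 { float x, y, z; };

// Diffuse ("N dot L") lighting of one pixel. lightDir must be
// normalized and point from the surface toward the light.
RGB litPixel(RGB albedo, RGB packedNormal, Vec3 lightDir)
{
    // Unpack the normal from 0..255 back to -1..1.
    Vec3 n = { packedNormal.r / 255.0f * 2.0f - 1.0f,
               packedNormal.g / 255.0f * 2.0f - 1.0f,
               packedNormal.b / 255.0f * 2.0f - 1.0f };
    // Lambert's law: brightness is the cosine of the angle between the
    // normal and the light direction, clamped at zero so pixels facing
    // away from the light go dark rather than negative.
    float ndotl = n.x * lightDir.x + n.y * lightDir.y + n.z * lightDir.z;
    float diffuse = std::max(0.0f, ndotl);
    return { uint8_t(albedo.r * diffuse),
             uint8_t(albedo.g * diffuse),
             uint8_t(albedo.b * diffuse) };
}

A fragment shader does exactly this arithmetic per pixel on the GPU, which is why the hardware route is so much faster.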

Nick of ZA    177
Excellent, thanks for the tips!

PS. For anyone interested in getting a thorough overview of bump-mapping techniques, this is the best information I've found:

http://www.delphi3d.net/articles/viewarticle.php?article=bumpmapping.htm

[Edited by - Nick of ZA on April 20, 2008 7:30:49 AM]

Fingers_    410
If you're creating your sprites in a 3D modelling package, you may be able to render an object-space normal map directly from the 3D model. This should give you more accurate results than rendering a greyscale heightmap and then deriving the normals from that.

Basic normal mapping on the GPU is pretty cheap. You should be able to cover the whole screen with normal mapped sprites.
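One caveat worth flagging (my own note, not something from the thread so far): object-space normals are fixed to the model, so when a top-down ship sprite rotates on screen, its normals must be rotated about z by the same angle before lighting (or, cheaper, counter-rotate the light direction once per sprite). A sketch, reusing the Vec3 from the earlier sketch:

#include <cmath>

// Rotate an object-space normal about the z axis (the axis pointing
// out of the screen) to match the sprite's on-screen heading.
Vec3 rotateNormalZ(Vec3 n, float angleRadians)
{
    float c = std::cos(angleRadians);
    float s = std::sin(angleRadians);
    return { n.x * c - n.y * s,  // standard 2D rotation of the x/y part
             n.x * s + n.y * c,
             n.z };              // z (out of the screen) is unchanged
}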

edit: Here's how to directly render a normal map in Blender - most other modelling packages probably have a similar feature, as it's a very popular way to create normal mapped textures for games.

[Edited by - Fingers_ on April 21, 2008 4:01:29 PM]

