Nick of ZA

OpenGL Implementing 2D bump maps - is this correct?


Hi all

I'm thinking of making a 2D space shooter that uses fake 3D lighting via 2D bump maps. I saw this technique in a game called Starport GE (which you can google and download freely if you want to see it in action; you'll need to sign up, which takes 30 seconds). I want to know how to implement the 2D bump mapping effect they use. What I do know is that this method only requires a couple of bitmap files: the basic image to be bump mapped, plus either a heightmap image or a normals image (the latter is more efficient, AFAICT). Similar examples of 2D bump mapping, viewable in your browser, can be found here:

http://www.unitzeroone.com/blog/2006/03/28/flash-8-example-3d-bumpmapping-using-filters-source-included/
http://www.bytearray.org/?p=74

This is the kind of effect I want to get in C++, except that the light source will remain fixed and the objects will move... which is an arbitrary distinction.

Right, before I ask questions, let me give you a scene from my proposed game to put this into perspective.

The scene: a top-down view of an arbitrary star system (you are looking directly down at the solar/gravitational/orbital plane, in top-down shooter style). You are a pirate starship captain, coasting along the solar plane, just itching for your next bit of loot. Suddenly, your sensors indicate a cargo ship on the far side of the star. Thrusting along the plane, you shoot past the star and head for your opponent.

In terms of game functionality, here's what happens. As you cruise through space, the star (i.e. the only light source of any consequence) sheds light on your ship. As you begin approaching the star, the light strikes your ship more for'ard. At the moment you pass the star (very closely) to your right, the light strikes your ship directly on its starboard side. And as the star recedes behind you while you approach your prey, the highlight on your ship recedes aft (first quickly, then more slowly as the distance between your ship and the sun grows). (It's important to imagine this example from the top down, as per the game's viewpoint; think any top-down shooter, e.g. my faves, StarControl or StarScape.)

I have done some research on this but am still hazy in a number of areas. So here is the process I presume is needed to implement the above scenario. Please correct me at every turn. (Let's assume we have a simple game set up which shows a starfield with a sun in the centre of it, and a basic starship that we can fly around near the sun. Now we wish to extend the functionality to have a dynamically lit 2D ship as described above.)

1. Create the textured model. Align it facing some default direction, e.g. north.

2. Render the model with ambient/diffuse lighting to get the underlying "colour template" for your end result.

3. Render the model with a light source set up at a default direction. (I assume that if the original model in step 1 faces a cardinal direction, then this light source must be placed at some NON-cardinal direction, e.g. NE/NW/SE/SW, so as to capture the object's shape in both the x and y axes.)

4. Convert this render to greyscale and back to RGB so that brightness encodes pixel topography. Since every pixel is a shade of grey, any of the three colour channels provides the same information; in reality, one could use a single channel for this step(?). If you didn't convert to greyscale, you'd need to calculate luminance, meaning your code would have to interpret the HSL colour model (not really worthwhile...). If you choose to skip step 5, you will instead have to calculate the per-pixel surface normals at runtime.

5. (OPTIONAL STEP) Use some tool(?) to scan the greyscale image and generate a normal map from it, storing each normal's vector coordinates in the RGB channels: R=x, G=y, B=z.

6. Combine the colour template (the underlying, neutrally lit image) with the normal map and a fixed light source to get a 3D-looking object. The specifics involve various bits of vector maths and per-pixel blending calculations. If anyone can elaborate on this part it would be great; I've viewed a few tutorials but I can't follow step by step what is being done. I guess this step needs to be broken down a lot more for me to be able to follow it.

Am I (sort of) on the right track with all of this? I am thinking of perhaps using OpenGL (on top of SFML) to do the bump mapping. Would it be fairly straightforward to do this in OpenGL, remembering of course that I only need it as a 2D effect? Or would I be better off implementing it by manipulating pixels directly, excluding OpenGL completely for these ops?

P.S. Does OpenGL even support direct pixel-level manipulation? I found this on another gamedev thread, where someone said:
Quote:
if you thought pixel manipulation was slow in SDL it gets worse in opengl however there are shaders and other workarounds that can make this multiples faster.
*phew* Thanks for reading. Any and all responses appreciated. -Nick [Edited by - Nick of ZA on April 19, 2008 1:14:13 PM]
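To make steps 4 and 5 above concrete, here is a minimal software sketch of deriving a packed R=x, G=y, B=z normal map from an 8-bit greyscale heightmap using central differences. All the names here are hypothetical (not from any particular library), and the `strength` factor is just a tunable assumption:

```cpp
#include <algorithm>
#include <cmath>
#include <cstdint>
#include <vector>

// Hypothetical 8-bit greyscale heightmap, row-major, width*height pixels.
struct Heightmap {
    int width, height;
    std::vector<uint8_t> pixels;
    uint8_t at(int x, int y) const {
        // Clamp to the edges so border pixels still get sensible normals.
        x = std::max(0, std::min(x, width - 1));
        y = std::max(0, std::min(y, height - 1));
        return pixels[y * width + x];
    }
};

struct Rgb { uint8_t r, g, b; };

// Step 5: turn height differences into a per-pixel normal and pack it
// into RGB (R=x, G=y, B=z) with the usual -1..+1 to 0..255 encoding.
Rgb normalAt(const Heightmap& hm, int x, int y, float strength = 2.0f) {
    // Central differences give the surface slope in x and y.
    float dx = (hm.at(x + 1, y) - hm.at(x - 1, y)) / 255.0f;
    float dy = (hm.at(x, y + 1) - hm.at(x, y - 1)) / 255.0f;
    float nx = -dx * strength, ny = -dy * strength, nz = 1.0f;
    float len = std::sqrt(nx * nx + ny * ny + nz * nz);
    nx /= len; ny /= len; nz /= len;
    // Map -1..+1 to 0..255; a flat pixel packs to (128, 128, 255).
    auto pack = [](float v) { return (uint8_t)std::lround(v * 127.5f + 127.5f); };
    return { pack(nx), pack(ny), pack(nz) };
}
```

A flat region packs to (128, 128, 255), which is why raw normal maps have that characteristic light-blue look.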

There's a tutorial on 2D bump-mapping here. I don't know if it uses the technique you want, but you might find it interesting anyway.

Thanks for that, I'm reading it now to see if there's anything new...

BTW if anyone isn't sure of what I mean by a "normals image", see about 2/3 down this page:

http://www.katsbits.com/htm/tutorials/bumpmaps_from_images_pt2.htm

...or search for the word "distinctive"; the image is right beside it, and you will see what I mean.

MOOORRRE INFO, PLEASE!

-Nick

Bump-mapping in OpenGL is a pretty trivial thing to accomplish (2D and 3D use the same methods, 2D just uses an orthographic projection), and there are endless tutorials out there to show you how (google-me-baby). Now, if you are trying to accomplish it in a software-rendering way, then that's a different story altogether. So which way is it?

I guess the question is: which is faster?

I plan to have a fairly high number of bump-mapped sprites onscreen at once, maxing somewhere around a hundred (maybe less -- whatever is possible once AI, collision detection and other expensive operations are thrown into the mix).

What would the speed of something like this be in OpenGL, I wonder, using 64x64 texels? Wikipedia claims that bump mapping is computationally cheap (though, taken in context, they are comparing it against geometric complexity in 3D environments as the alternative).

I am thinking that if OpenGL can do this, I'd rather let it do so and focus my time on optimizing other things as mentioned above.

(I have to learn how to use matrices/transforms either way, so I may as well accept the helping hand of OpenGL.)

I'll be back. For help. I assure you.

PS. I would still appreciate hearing how this is done in software, though, i.e. are the methods I outlined above correct?

-Nick

You compute the normal at a given point based on the height differences in your bump map. The normal then serves to compute the illumination of each pixel, just as it would compute the illumination of a face in flat shading, or that of a vertex in smooth shading: by comparing the normal vector to the light(s)' direction(s).
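That comparison can be sketched in software like this (a minimal example; the names and the ambient term are hypothetical, not from any particular engine). Decode the packed normal, dot it with the normalised light direction, clamp at zero, and scale the colour-template texel:

```cpp
#include <algorithm>
#include <cmath>
#include <cstdint>

struct Vec3 { float x, y, z; };

// Decode a packed normal-map texel (0..255 per channel) back to -1..+1.
Vec3 decodeNormal(uint8_t r, uint8_t g, uint8_t b) {
    return { r / 127.5f - 1.0f, g / 127.5f - 1.0f, b / 127.5f - 1.0f };
}

// Lambertian (N dot L) diffuse term: how directly the surface faces the light.
float diffuse(Vec3 n, Vec3 lightDir) {
    float len = std::sqrt(lightDir.x * lightDir.x + lightDir.y * lightDir.y
                          + lightDir.z * lightDir.z);
    Vec3 l{ lightDir.x / len, lightDir.y / len, lightDir.z / len };
    float d = n.x * l.x + n.y * l.y + n.z * l.z;
    return std::max(0.0f, d);   // surfaces facing away from the light stay unlit
}

// Scale one colour-template channel (0..255) by ambient + diffuse light.
uint8_t shade(uint8_t albedo, float diffuseTerm, float ambient = 0.2f) {
    return (uint8_t)(albedo * std::min(1.0f, ambient + diffuseTerm));
}
```

Run `shade` over each RGB channel of every sprite pixel, with the light direction recomputed from the sun's position relative to the ship each frame, and you get the moving highlight described in the original post.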

This can be done with a shader, so the GPU takes care of the majority of the work. Doing it in software would probably not be a problem on today's computers, but it's still a lot of overhead that can be easily avoided... I don't think anyone will recommend that you do it in software.

Excellent, thanks for the tips!

PS. For anyone interested in getting a thorough overview of bump-mapping techniques, this is the best information I've found:

http://www.delphi3d.net/articles/viewarticle.php?article=bumpmapping.htm

[Edited by - Nick of ZA on April 20, 2008 7:30:49 AM]

If you're creating your sprites in a 3D modelling package, you may be able to directly render an object space normal map based on the 3D model. This should give you more accurate results than converting into a heightmap and then deriving normals from that.

Basic normal mapping on the GPU is pretty cheap. You should be able to cover the whole screen with normal mapped sprites.

edit: Here's how to directly render a normal map in Blender - most other modelling packages probably have a similar feature, as it's a very popular way to create normal mapped textures for games.

[Edited by - Fingers_ on April 21, 2008 4:01:29 PM]
