Nick of ZA

OpenGL Implementing 2D bump maps - is this correct?


Hi all,

I'm thinking of making a 2D space shooter that uses fake 3D lighting via 2D bump maps. I saw this technique in a game called Starport GE (which you can google and download freely if you wish to see it in action; you'll need to sign up, which takes 30 seconds). I want to know how to implement the same 2D bump mapping effect they have. What I do know is that this method only requires a couple of bitmap files: the basic image to be bump mapped, and either a heightmap image or a normals image (the latter is more efficient, AFAICT). Similar examples of 2D bump mapping can be found here (viewable in your browser): This is the kind of effect I want to get in C++, except that the light source will remain fixed and the objects will move... which is an arbitrary distinction.

Right, before I ask questions, let me give you a scene from my proposed game to put this into perspective.

The scene: a top-down view of an arbitrary star system (meaning you are looking directly down at the solar/gravitational/orbital plane, in top-down shooter style). You are a pirate starship captain, coasting along the solar plane, just itching for your next bit of loot. Suddenly, your sensors indicate a cargo ship on the far side of the star. Thrusting along the plane, you shoot past the star and head for your opponent.

In terms of game functionality, here's what happens: as you cruise through space, the star (i.e. the only light source of any consequence) sheds light on your ship. As you approach the star, the light strikes your ship more for'ard. At the moment you pass the star (very closely) to your right, the light strikes your ship directly on its starboard side. And as the star recedes behind you while you close on your prey, the highlight on your ship recedes aft (first quickly, and then more slowly as the distance between your ship and the sun grows).
(It's important to imagine this example from the top down, as per the game's viewpoint; think any top-down shooter, e.g. my faves, StarControl or StarScape.)

I have done some research on this but am still hazy in a number of areas. So here is the process I presume is needed to implement the above scenario. Please correct me at every turn. (Let's assume we have a simple game set up which shows a starfield with a sun in the centre of it, and a basic starship that we can fly around near the sun. Now we wish to extend the functionality to have a dynamically lit 2D ship as described above.)

1. Create the textured model. Align it facing some default direction, e.g. north.
2. Render the model with ambient/diffuse lighting to get the underlying "colour template" for your end result.
3. Render the model with a light source set up at a default direction (I assume that if the original model in step 1 is facing a cardinal direction, then this light source must be placed at some NON-cardinal direction, e.g. NE/NW/SE/SW, so as to get an accurate idea of the object's shape in the x and y axes).
4. Convert this to greyscale and back to RGB so that it uses brightness to show pixel topography. As shades of grey, each pixel carries the same information in all three colour channels, so in reality one could read a single channel for this step(?). If you didn't do this, you'd need to calculate luminance, meaning your code would have to interpret the HSL colour model (not really worthwhile...). Anyhow, if you choose to skip step 5, you will have to calculate the surface normals for your pixels at runtime.
5. (OPTIONAL STEP) Use some tool(?) to scan the greyscale image and generate the normal map from it. This stores the normal vector's coordinates in the RGB channels: R=x, G=y, B=z.
6. Combine the colour template (the underlying, neutrally-lit image) with the normal map and a fixed light source to get a 3D-looking object.
The specifics of step 6 come down to various bits of vector math and pixel blending calculations. If anyone can elaborate on this part it would be great; I've viewed a few tutorials but I don't follow step-by-step what is being done. I guess this step needs to be broken down a lot more for me to be able to follow it.

Am I (sort of) on the right track with all of this? I am thinking of perhaps using OpenGL (on top of SFML) to do the bump mapping. Would it be fairly straightforward to do this in OpenGL, remembering of course that I only need it as a 2D effect? Or would I be better off manipulating pixels directly, excluding OpenGL completely for these operations?

P.S. Does OpenGL even support direct pixel-level manipulation? I found this on another gamedev thread, where someone said:
"if you thought pixel manipulation was slow in SDL it gets worse in OpenGL; however, there are shaders and other workarounds that can make this many times faster."
*phew* Thanks for reading. Any and all responses appreciated. -Nick [Edited by - Nick of ZA on April 19, 2008 1:14:13 PM]
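For what it's worth, the moving highlight in the scene above reduces to recomputing, each frame, the unit vector from the ship to the star and feeding it into the lighting step. A minimal C++ sketch (the `Vec2` type and function name are my own, not from SFML or OpenGL):

```cpp
#include <cmath>

// Hypothetical 2D vector for world positions (not an SFML/OpenGL type).
struct Vec2 { float x, y; };

// Unit vector pointing from the ship towards the sun: this is the
// incoming light direction for the per-pixel lighting step. Recomputed
// each frame as the ship moves, it makes the highlight sweep around
// the hull exactly as described in the scene above.
Vec2 lightDirection(Vec2 sun, Vec2 ship) {
    Vec2 d{sun.x - ship.x, sun.y - ship.y};
    float len = std::sqrt(d.x * d.x + d.y * d.y);
    return {d.x / len, d.y / len};
}
```

So whether "the light moves" or "the objects move" really is just a matter of which positions you plug in.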

Thanks for that, I'm reading it now to see if there's anything new...

BTW if anyone isn't sure of what I mean by a "normals image", see about 2/3 down this page:

...or search for the word "distinctive"; the image is just beside it, and you will see what I mean.



Bump mapping in OpenGL is a pretty trivial thing to accomplish (2D and 3D use the same methods, with 2D using an orthographic projection), and there are endless tutorials out there to show you how (google-me-baby). Now, if you are trying to accomplish it in a software-rendering way, then that's a different story altogether. So which way is it?

I guess the question is, which is faster.

I plan to have a fairly high number of bump-mapped sprites onscreen at once, maxing somewhere around a hundred (maybe less -- whatever is possible once AI, collision detection and other expensive operations are thrown into the mix).

What would the speed of something like this be in OpenGL, I wonder, using 64x64 texels? Wikipedia claims that bump mapping is computationally cheap (though taken in context, they are comparing it to the alternative of adding geometric complexity in 3D environments).

I am thinking that if OpenGL can do this, I'd rather let it do so and focus my time on optimizing other things as mentioned above.

(I have to learn how to use matrices/transforms either way, so I may as well accept the helping hand of OpenGL.)

I'll be back. For help. I assure you.

PS. I would still appreciate hearing how this is done in software, though; i.e., are the methods I outlined above correct?


You compute the normals at a given point based on the height differences in your bump maps. The normals will then serve to compute the illumination of each pixel, just like it would compute the illumination of a face in flat shading, or that of a vertex in smooth shading, by comparing the normal vector to the light(s)' direction(s).
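To make "height differences" concrete, here is one common way to derive a normal map from an 8-bit greyscale heightmap using central differences, as a plain C++ sketch (function and type names are my own):

```cpp
#include <cmath>
#include <cstdint>
#include <vector>

struct RGB { std::uint8_t r, g, b; };

// Derive a per-pixel normal from a greyscale heightmap using central
// differences, then pack it into RGB the usual way (R=x, G=y, B=z,
// each remapped from [-1,1] to [0,255]). "strength" scales the bumps.
std::vector<RGB> heightToNormals(const std::vector<std::uint8_t>& h,
                                 int w, int hgt, float strength = 1.0f) {
    auto at = [&](int x, int y) {
        // Clamp at the edges so every pixel has neighbours.
        x = x < 0 ? 0 : (x >= w ? w - 1 : x);
        y = y < 0 ? 0 : (y >= hgt ? hgt - 1 : y);
        return h[y * w + x] / 255.0f;
    };
    std::vector<RGB> out(w * hgt);
    for (int y = 0; y < hgt; ++y)
        for (int x = 0; x < w; ++x) {
            // Slope of the height field in x and y.
            float dx = (at(x + 1, y) - at(x - 1, y)) * strength;
            float dy = (at(x, y + 1) - at(x, y - 1)) * strength;
            // Normal = normalize(-dx, -dy, 1): flat areas point straight up.
            float len = std::sqrt(dx * dx + dy * dy + 1.0f);
            float nx = -dx / len, ny = -dy / len, nz = 1.0f / len;
            out[y * w + x] = {std::uint8_t((nx * 0.5f + 0.5f) * 255),
                              std::uint8_t((ny * 0.5f + 0.5f) * 255),
                              std::uint8_t((nz * 0.5f + 0.5f) * 255)};
        }
    return out;
}
```

You'd typically run this once offline (or at load time) and keep the resulting normal map, rather than recomputing differences per frame.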

This can be done with a shader, so the GPU takes care of the majority of the work. Doing it in software would probably not be a problem on today's computers, but it's still a lot of overhead that can be easily avoided... I don't think anyone will recommend that you do it in software.

Excellent, thanks for the tips!

PS. For anyone interested in getting a thorough overview of bump-mapping techniques, this is the best information I've found:

[Edited by - Nick of ZA on April 20, 2008 7:30:49 AM]

If you're creating your sprites in a 3D modelling package, you may be able to directly render an object space normal map based on the 3D model. This should give you more accurate results than converting into a heightmap and then deriving normals from that.

Basic normal mapping on the GPU is pretty cheap. You should be able to cover the whole screen with normal mapped sprites.

edit: Here's how to directly render a normal map in Blender - most other modelling packages probably have a similar feature, as it's a very popular way to create normal mapped textures for games.

[Edited by - Fingers_ on April 21, 2008 4:01:29 PM]

