
keelx

Member Since 22 May 2011
Offline Last Active May 14 2012 07:04 PM

#4891343 Valve techniques in OpenGL

Posted by keelx on 06 December 2011 - 10:51 PM

I read in a Valve paper that in HL2 and many other games they used what they call an 'ambient cube' to achieve somewhat global-illumination-like effects on animated models (for static geometry they used radiosity light mapping). It looked really good, and I'd like to see it in action. Problem is, I'm on a Linux computer, and the article used DirectX with HLSL shaders.

The syntax of HLSL and GLSL is almost the same. Replace float4 with vec4 and you're most of the way there.
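A few of the most common substitutions (the right-hand column assumes GLSL 1.30+; older GLSL uses texture2D/textureCube instead of texture):

// HLSL                      // GLSL
float2 / float3 / float4     vec2 / vec3 / vec4
float4x4                     mat4
tex2D(s, uv)                 texture(s, uv)
texCUBE(s, dir)              texture(s, dir)
lerp(a, b, t)                mix(a, b, t)
saturate(x)                  clamp(x, 0.0, 1.0)
mul(m, v)                    m * v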

You can implement an ambient cube either with a cube-map whose six faces are a single texel each, which you sample using the world-space normal as a texture coordinate, or you can pass six colour values into the shader and sum them, weighted by the normal:
vec3 colorXP = ...; // +x color
vec3 colorXN = ...; // -x color
vec3 colorYP = ...; // +y color
vec3 colorYN = ...; // -y color
vec3 colorZP = ...; // +z color
vec3 colorZN = ...; // -z color
vec3 ambient = colorXP * clamp( normal.x, 0.0, 1.0)
             + colorXN * clamp(-normal.x, 0.0, 1.0)
             + colorYP * clamp( normal.y, 0.0, 1.0)
             + colorYN * clamp(-normal.y, 0.0, 1.0)
             + colorZP * clamp( normal.z, 0.0, 1.0)
             + colorZN * clamp(-normal.z, 0.0, 1.0);
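For comparison, the one-texel-per-face cube-map version collapses to a single fetch inside the fragment shader. A minimal sketch, assuming GLSL 1.30+; the sampler and variable names are placeholders:

uniform samplerCube ambientCube; // six faces, one texel each

// Sample the ambient cube in the direction of the world-space normal.
vec3 ambient = texture(ambientCube, normalize(worldNormal)).rgb;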

Also, they used a specular cube map for rendering specular highlights. I don't quite understand this one: how do they combine the specular cubemap with the normal map?

Convert the tangent-space normal map into a world-space normal, then reflect the eye-direction around this normal to get the reflection direction. Use the reflection direction as a texture-coordinate to sample the cube map.
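In GLSL that boils down to something like the following sketch. The names (normalMap, TBN, envCube, etc.) are assumptions for illustration; TBN is a matrix taking tangent-space vectors to world space:

vec3 nTan    = texture(normalMap, uv).rgb * 2.0 - 1.0; // unpack the tangent-space normal
vec3 nWorld  = normalize(TBN * nTan);                  // tangent -> world
vec3 viewDir = normalize(cameraPos - worldPos);        // surface -> eye
vec3 r       = reflect(-viewDir, nWorld);              // eye ray reflected about the normal
vec3 spec    = texture(envCube, r).rgb;                // sample the specular cube-map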

Then they compute the specular component based on the nearest specular cube map in the world. Are these various cube maps calculated and updated dynamically?

No, the cube-maps aren't dynamic in HL2 (though they could be implemented dynamically).

And then we get to model shading. So they approximate radiosity using an ambient cube. Do they use this for the actual ambient component? Or do they use it for diffuse/normal mapping as well? Or do they normal map an ambient component (which doesn't make any sense in my mind)? What exactly do they do?

After calculating the surface normal (which may or may not involve reading from a normal-map), they use the normal to determine the ambient colour in that direction, as above.

So normal mapping is done first, to determine the actual normal of a pixel.
After that, you can use the normal to find the ambient colour in that direction, use it to calculate the diffuse lighting (using Phong etc. for models, or lightmaps for the world), and, if required, reflect the eye direction around the normal to calculate a specular reflection direction.
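Putting those pieces together in that order, a minimal fragment shader sketch might look like this. Every uniform and varying name here is an assumption for illustration, not a Source-engine identifier, and the Phong-style diffuse term stands in for whatever per-light model you use:

#version 330 core

uniform sampler2D   albedoMap;
uniform sampler2D   normalMap;
uniform samplerCube envCube;        // nearest specular cube-map
uniform vec3        ambientCube[6]; // +x, -x, +y, -y, +z, -z colours
uniform vec3        lightDir;       // world-space direction towards the light (normalized)
uniform vec3        lightColor;
uniform vec3        cameraPos;

in vec2 uv;
in vec3 worldPos;
in mat3 TBN;                        // tangent -> world basis

out vec4 fragColor;

// The six-colour blend from the earlier snippet.
vec3 sampleAmbientCube(vec3 n)
{
    return ambientCube[0] * clamp( n.x, 0.0, 1.0)
         + ambientCube[1] * clamp(-n.x, 0.0, 1.0)
         + ambientCube[2] * clamp( n.y, 0.0, 1.0)
         + ambientCube[3] * clamp(-n.y, 0.0, 1.0)
         + ambientCube[4] * clamp( n.z, 0.0, 1.0)
         + ambientCube[5] * clamp(-n.z, 0.0, 1.0);
}

void main()
{
    // 1. Normal-mapping first: this normal drives everything else.
    vec3 n = normalize(TBN * (texture(normalMap, uv).rgb * 2.0 - 1.0));

    vec3 albedo = texture(albedoMap, uv).rgb;

    // 2. Ambient colour in the direction of the normal.
    vec3 ambient = sampleAmbientCube(n) * albedo;

    // 3. Diffuse from the same normal (Phong-style, for models).
    vec3 diffuse = albedo * lightColor * max(dot(n, lightDir), 0.0);

    // 4. Specular: reflect the eye ray and look up the cube-map.
    vec3 viewDir  = normalize(cameraPos - worldPos);
    vec3 specular = texture(envCube, reflect(-viewDir, n)).rgb;

    fragColor = vec4(ambient + diffuse + specular, 1.0);
}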

It seems that they do offline radiosity lightmapping for the world geometry, and normal-map it. So there they have both 'ambient' and diffuse lighting, computed statically.

The lightmapping is quite similar to traditional lightmapping, but instead of baking out a single lightmap, they produce three lightmaps for the world.
All the lightmaps are generated without normal-mapping: only the geometric normals are used.
The first lightmap is generated as if all of the normals were bent slightly in a certain direction (say, slightly north).

The second lightmap is generated as if all of the normals were bent slightly in a different direction (say, slightly south-east).

The third lightmap is generated as if all of the normals were bent slightly in another direction (say, slightly south-west).

When rendering the world geometry at runtime, all three lightmaps are read and mixed together with different weights, which are determined by the normal map.
e.g. If the normal-mapped normal is pointing slightly north, more weight is given to the first lightmap; if it's pointing slightly south, more weight is given to the second and third lightmaps, and so on.
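As a sketch of that runtime blend: the three tangent-space basis directions below are the ones published in Valve's paper, while the squared-and-renormalised weighting follows shader code commonly attributed to the HL2 SDK, so treat the falloff as an assumption. Texture and varying names are placeholders, GLSL 1.30+ assumed:

const vec3 bumpBasis[3] = vec3[3](
    vec3(-0.40825,  0.70711, 0.57735),  // (-1/sqrt6,  1/sqrt2, 1/sqrt3)
    vec3(-0.40825, -0.70711, 0.57735),  // (-1/sqrt6, -1/sqrt2, 1/sqrt3)
    vec3( 0.81650,  0.0,     0.57735)); // ( 2/sqrt6,  0,       1/sqrt3)

vec3 n = texture(normalMap, uv).rgb * 2.0 - 1.0;      // tangent-space normal
vec3 w = vec3(clamp(dot(n, bumpBasis[0]), 0.0, 1.0),
              clamp(dot(n, bumpBasis[1]), 0.0, 1.0),
              clamp(dot(n, bumpBasis[2]), 0.0, 1.0));
w *= w;                                               // sharpen the falloff
w /= (w.x + w.y + w.z);                               // weights sum to 1
vec3 light = w.x * texture(lightmapA, lmUV).rgb       // lightmap baked for basis 0
           + w.y * texture(lightmapB, lmUV).rgb       // ... basis 1
           + w.z * texture(lightmapC, lmUV).rgb;      // ... basis 2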


Wow, that cleared up everything. Thanks for the explanation!

Also, I must say I'm astounded by the ingenuity here...




#4874543 Player move while jumping

Posted by keelx on 19 October 2011 - 09:13 PM

Well, consider this.
YOU are running. You suddenly decide to jump. While you are running, you jump, but you continue to move forward even though you aren't on the ground. It's called inertia. Some non-realistic games let you steer freely in mid-air; others bring you to a dead halt the moment your feet leave the ground. Realistic games (think Mario if you're going 2D) use inertia, friction, gravity vectors, terminal velocity, etc. to make a realistic, yet enjoyable, gaming experience.


#4873275 Choosing your game's technology

Posted by keelx on 16 October 2011 - 06:14 PM


If you want to make anything cross-platform, then these are completely out of the picture

  • XNA
  • Unity


That would depend on your definition of "cross-platform." XNA runs on Windows, Xbox 360, and Windows Phone. Unity runs on Windows, Mac, and web browsers, and, if you're willing to buy the licenses, on iOS, Android, Wii, PS3, and Xbox 360.


You say Windows, Xbox 360, and Windows Phone; running only on Microsoft devices is FAR from cross-platform. And as for Unity, it still doesn't support any form of Linux or Unix other than the Linux kernel on the Wii and the one on Android. Though with that, I can see why: there are hundreds of distros and configurations.


If you want to make anything cross-platform, then these are completely out of the picture:

  • Anything DirectX or related to DirectX
  • Anything Crytek
  • UDK / Unreal Engine 3
  • XNA
  • Unity

I recommend that, if you are building a game from scratch, you go with SDL and OpenGL. They are cross-platform well beyond Windows, Mac, and Linux. Or, if you really want a game engine, I recommend either OGRE or Crystal Space. The id Tech engines are also REALLY good, though I'm not sure about the difficulties and/or restrictions involved in using them.

The thing about those commercial game engines is that they're usually, if not always, Windows-only, using things like the Win32 API, DirectX, etc. My advice is to stay away from them.


This also depends on how cross-platform you need to be. The reality is that the API-specific portions of the rendering code (specifically referencing your mention of OpenGL here) are going to be quite a small portion of the overall codebase, and you still need to deal with file I/O, memory management, sound, input, networking, the main event loop, and potentially many other components where a platform dependency may exist. Just focussing on OpenGL is not enough to guarantee cross-platform compatibility, I'm afraid.


Hence my mention of SDL: it handles input, networking, an event loop, and sound, while file I/O and memory management are covered by standard C++ alone. I personally don't see any reason why you'd ever want to use a library that isn't cross-platform; there's no need.


#4873239 Ogre Vs Irrlicht

Posted by keelx on 16 October 2011 - 04:24 PM

Oh boy, it's this thread again...

Anyway, it really comes down to what you are trying to do. For a simple game where you don't need many advanced features and want to spend more time on design, go for Irrlicht. It is very stable, and it is a full game engine, meaning input, collision, an event loop, etc. It can also be extended with plugins, and you can add next-gen effects with shaders. OGRE is JUST a graphics engine, nothing more. Its examples, however, use OIS for input, because the two fit well together and are made by the same people. Also, OGRE supports only one model format, and you have to write a config file for almost everything you can think of; that, however, makes it more maintainable for large projects.

In the field of documentation, both are documented decently, though Irrlicht comes with standalone, comprehensible examples, while OGRE's examples use its own framework/architecture, plus a plugin system where all the examples are plugins to the SampleBrowser. That's quite confusing if you're just starting out. Also, it has NO tutorials whatsoever about how to get your own application up and running.

All in all, it comes down to what you are making: Irrlicht is easier to learn and handles more things for you, while OGRE is much harder to pick up and is ONLY a rendering engine, but it is more maintainable and MUCH more powerful.

