
keelx

Member Since 22 May 2011
Offline Last Active May 14 2012 07:04 PM

Posts I've Made

In Topic: Unity on the Wii with devkitPro?

21 December 2011 - 01:33 PM

I don't think the "normal" Unity allows you to create Wii builds.
Their requirements state "You must be an Authorized Developer for the Wii console and obtain a Wii development kit", so I don't think you can get a version that can do it.
And even if you could, there's a very high probability of it not being compatible with devkitPro.
If you really want to make games for the Wii, you'll have to use devkitPro and write C code. Search the net; I think I've seen some free graphics engines for the Wii before, made by someone in the homebrew community.

Yes, I've seen those. The problem is that there is little to no documentation, tutorials, or anything of the like.


I don't think the "normal" Unity allows to create Wii builds.
[--snip--]

Indeed it does not allow that. To use Unity for Wii you'll need a separate build that you can only acquire once you've shown proof that you are a legit authorized developer. In other words: there is no way to use Unity for Wii without actually doing it the 'normal' way. :)


Well that sucks. Oh well.

In Topic: Valve techniques in OpenGL

06 December 2011 - 10:51 PM

I read in a Valve paper that in HL2 and many other games they used what they call an 'ambient cube' to achieve somewhat global-illumination-like effects on animated models (for static geometry they used radiosity light mapping). It looked really good, and I'd like to see it in action. Problem is, I'm on a Linux computer, and the article used DirectX with HLSL shaders.

The syntax between HLSL and GLSL is almost the same. Replace float4 with vec4 and you're most of the way there.

You can implement an ambient cube either with a cube-map whose faces are each a single pixel, which you sample using the world-space normal as a texture-coordinate, or you can pass 6 colour values into the shader and add them, weighted by the normal:
vec3 colorXP = ...; // +x color
vec3 colorXN = ...; // -x color
vec3 colorYP = ...; // +y color
vec3 colorYN = ...; // -y color
vec3 colorZP = ...; // +z color
vec3 colorZN = ...; // -z color
vec3 ambient = colorXP * clamp( normal.x, 0.0, 1.0)
             + colorXN * clamp(-normal.x, 0.0, 1.0)
             + colorYP * clamp( normal.y, 0.0, 1.0)
             + colorYN * clamp(-normal.y, 0.0, 1.0)
             + colorZP * clamp( normal.z, 0.0, 1.0)
             + colorZN * clamp(-normal.z, 0.0, 1.0);
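
For comparison, a minimal sketch of the cube-map variant mentioned above, assuming a samplerCube uniform named ambientCube whose six one-pixel faces each hold one ambient colour:
uniform samplerCube ambientCube; // six 1x1 faces, one ambient colour each
vec3 ambient = textureCube(ambientCube, normalize(worldNormal)).rgb; // worldNormal is a placeholder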

Also, they used a specular cube map for rendering of specular highlights. I don't quite understand this one- how do they combine the specular cubemap with the normal map?

Convert the tangent-space normal map into a world-space normal, then reflect the eye-direction around this normal to get the reflection direction. Use the reflection direction as a texture-coordinate to sample the cube map.
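
For example, a minimal GLSL sketch of that lookup (worldPos, eyePos, worldNormal and envMap are placeholder names):
vec3 I = normalize(worldPos - eyePos); // incident direction, eye towards surface
vec3 R = reflect(I, normalize(worldNormal)); // reflection direction
vec3 specular = textureCube(envMap, R).rgb;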

Then, they compute the specular component based on the nearest specular cube map in the world. Dynamically calculated. Are these various cube maps updated dynamically?

No, the cube-maps aren't dynamic in HL2 (though they could be implemented dynamically).

And then we get to model shading. So they approximate radiosity using an ambient cube. Do they use this for the actual ambient component? Or do they use it for diffuse/normal mapping as well? Or do they normal map an ambient component (which doesn't make any sense in my mind)? What exactly do they do?

After calculating the surface normal (which may or may not involve reading from a normal-map), they use the normal to determine the ambient colour in that direction, as above.

So normal mapping is done first, to determine the actual normal of a pixel.
After that, you can use the normal to find the ambient colour in that direction, and you can also use the normal to calculate the diffuse lighting (using phong/etc for models, or lightmaps for the world), and if required, you can reflect the eye-direction around the normal to calculate a specular reflection direction.
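
Putting those steps together, a rough GLSL sketch for a model (ambientCubeColor stands in for the six-colour blend shown earlier; the other uniform/varying names are placeholders too):
vec3 n = normalize(tbn * (texture2D(normalMap, uv).rgb * 2.0 - 1.0)); // world-space per-pixel normal
vec3 ambient = ambientCubeColor(n); // ambient cube lookup
vec3 diffuse = lightColor * max(dot(n, lightDir), 0.0); // per-light diffuse
vec3 eyeDir = normalize(worldPos - eyePos); // eye-to-surface direction
vec3 specular = textureCube(envMap, reflect(eyeDir, n)).rgb; // cube-map specular
gl_FragColor = vec4(albedo * (ambient + diffuse) + specular * specMask, 1.0);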

It seems that they do offline radiosity lightmapping for world geometry, and normal map it. There they have both 'ambient' and diffuse lighting. Statically, that is.

The lightmapping is quite similar to traditional lightmapping, but instead of baking out a single lightmap, they produce 3 lightmaps for the world.
All the lightmaps are generated without normal-mapping -- only the geometric normals are used.
The first lightmap is generated as if all of the normals were bent slightly in a certain direction (say, slightly north).

The 2nd lightmap is generated as if all of the normals were bent slightly in a different direction (say, slightly south-east).

The 3rd lightmap is generated as if all of the normals were bent slightly in another different direction (say, slightly south-west).

When rendering the world-geometry at runtime, all 3 light-maps are read from, and are mixed together with different weights, which are determined by the normal map.
e.g. If the normal-mapped normal is pointing slightly north, then more weight will be given to the 1st lightmap, or if it's pointing slightly south, more weighting will be given to the 2nd/3rd lightmaps, etc....
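
Here is a rough GLSL sketch of that blend, using the tangent-space basis directions from Valve's paper (the sampler and coordinate names are placeholders, and the exact weighting scheme varies between implementations):
// Valve's basis: (sqrt(2/3), 0, 1/sqrt(3)) plus two directions rotated 120 degrees around Z
const vec3 basis0 = vec3( 0.8165,  0.0,    0.5774);
const vec3 basis1 = vec3(-0.4082,  0.7071, 0.5774);
const vec3 basis2 = vec3(-0.4082, -0.7071, 0.5774);

vec3 n = texture2D(normalMap, uv).rgb * 2.0 - 1.0; // tangent-space normal
vec3 w = max(vec3(dot(n, basis0), dot(n, basis1), dot(n, basis2)), 0.0);
w /= (w.x + w.y + w.z); // normalise the weights

vec3 light = texture2D(lightmap0, lmUV).rgb * w.x
           + texture2D(lightmap1, lmUV).rgb * w.y
           + texture2D(lightmap2, lmUV).rgb * w.z;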


Wow that cleared up everything. Thanks for the explanation!

Also, I must say I'm astounded by the ingeniousness here...


In Topic: Valve techniques in OpenGL

06 December 2011 - 10:30 PM

The HLSL code should be translatable to GLSL, they don't do anything fancy. For the ambient cube, they just do a weighted lookup based on the normal. I would recommend just going with spherical harmonics instead, which is a more general solution that can provide better quality. For the specular cubemaps, they just compute a reflection vector per pixel based on the normal. You can do this easily with the reflect() intrinsic. To combine this with normal mapping, you just compute the reflection using the normal map normal rather than the interpolated vertex normal. They use the specular cubemaps for dynamic objects, and also have the option of adding analytical specular from a small number of dynamic light sources.
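
For reference, a minimal sketch of the 2-band spherical-harmonics alternative (not from the paper; the sh[] uniform is a placeholder, assumed to hold coefficients pre-convolved with the cosine lobe):
uniform vec3 sh[4]; // DC term, then the linear Y, Z, X terms
vec3 shAmbient(vec3 n) // n = world-space unit normal
{
    return sh[0] + sh[1] * n.y + sh[2] * n.z + sh[3] * n.x;
}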

BTW the paper is here: http://www.valvesoft...ourceEngine.pdf


I'll look into spherical harmonics; I've heard of them before.


Reading this and another paper (can't find it, title is 'directX tutorial 10, half-life 2 shading'), I still don't quite get the concepts. It seems that they do offline radiosity lightmapping for world geometry, and normal map it. There they have both 'ambient' and diffuse lighting. Statically, that is. Then, they compute the specular component based on the nearest specular cube map in the world. Dynamically calculated. Are these various cube maps updated dynamically? And if they are to be normal mapped, then wouldn't the cube map have to be precomputed separately for every surface that has a specular term, using the normals of the surface (provided by a normal map)?

And then we get to model shading. So they approximate radiosity using an ambient cube. Do they use this for the actual ambient component? Or do they use it for diffuse/normal mapping as well? Or do they normal map an ambient component (which doesn't make any sense in my mind)? What exactly do they do? Then there's the specular cube map. The models apparently use the same cube maps as the world, and my questions about the world lighting still remain here.

So, all in all, I don't really understand what's going on here. If someone could explain this a bit more clearly and in more depth than the paper does, that would be great. Preferably in somewhat of a step-by-step fashion.

In Topic: Tilemap collision nowadays

19 October 2011 - 09:13 PM

I was wondering, what collision method do people use nowadays on sidescrolling platformers? Is it pixel-perfect or rectangle box?


I use neither. I usually have an array of ints for the tilemap, and then I do a four-corner overlap check, using the player's position divided by the tile size. I then check whether the index in the array at that position is a passable value or not.
The method is described in detail here:
http://www.parallelrealities.co.uk/

It works wonders and is blazing fast. The only drawback is that all the tiles need to be the same size, unless you manipulate the code a bit.
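
A minimal C sketch of that four-corner check (TILE_SIZE, the map array, and the bounding-box parameters are all assumptions):
#define TILE_SIZE 32
#define MAP_W 64
#define MAP_H 48

int map[MAP_H][MAP_W]; /* 0 = passable, anything else = solid */

/* returns non-zero if any corner of the (x, y, w, h) box sits in a solid tile */
int box_collides(float x, float y, float w, float h)
{
    int left   = (int)( x          / TILE_SIZE);
    int right  = (int)((x + w - 1) / TILE_SIZE);
    int top    = (int)( y          / TILE_SIZE);
    int bottom = (int)((y + h - 1) / TILE_SIZE);

    return map[top][left]    || map[top][right] ||
           map[bottom][left] || map[bottom][right];
}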

In Topic: Player move while jumping

19 October 2011 - 09:13 PM

Well, consider this.
YOU are running. You suddenly decide to jump. While you are running, you jump, but you continue to move forward even though you aren't on the ground. It's called inertia. Non-realistic games let you change direction freely in mid-air, or bring you to a dead halt the moment your feet leave the ground. Realistic games (think Mario if you're going 2D) use inertia, friction, vectors with gravity, terminal velocity, etc. to make a realistic, yet enjoyable gaming experience.
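
A tiny C sketch of that idea (all names and constants are hypothetical): horizontal velocity persists while airborne, and friction only applies on the ground:
typedef struct { float x, y, vx, vy; } Player;

#define GRAVITY     1800.0f /* px/s^2 */
#define TERMINAL_VY  900.0f /* px/s */
#define JUMP_SPEED   600.0f /* px/s */
#define FRICTION       0.85f

void update_player(Player *p, float dt, int on_ground, int jump_pressed)
{
    p->vy += GRAVITY * dt; /* gravity always pulls down */
    if (p->vy > TERMINAL_VY) p->vy = TERMINAL_VY; /* terminal velocity */

    if (on_ground) {
        p->vx *= FRICTION; /* friction only on the ground */
        if (jump_pressed) p->vy = -JUMP_SPEED; /* launch upward */
    }
    /* vx is NOT zeroed in mid-air -- that's the inertia */
    p->x += p->vx * dt;
    p->y += p->vy * dt;
}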
