That would be really nice. I am at work so couldn't check the video. But I can see how that would work: treat the directional light as a point light, calculate the coefficients, and then only use the coefficients appropriate to the direction of the light. Thanks, obhi
There's no need to assume it is a point light at all! Spherical harmonics are great at encoding complex lighting environments (not as great as Haar wavelets apparently, but I haven't looked into them). Think of it as compressing a full environment map into just a few numbers (massively lossy, of course). Another way to think of an environment map is "what colour/brightness is the incoming light from each possible direction?". So you can reverse this and instead encode into an environment map, for a single point (vertex or texel), what colour/brightness that point is when a directional light is cast on it from each possible direction. In some directions it will be lit, in some it will be shadowed by other geometry, and in some it will have secondary illumination from ambient lighting and light bounces.

Then you can encode this environment into an SH with a limited number of coefficients, and hard-code it into vertex data or textures. When you want to simulate a directional light, you encode the light into the same number of SH coefficients and simply multiply all the environment coefficients by these, like a mask, in your shaders. The directional light can be created by taking a cardinal-axis SH and rotating it (there is a fairly easy way to rotate SH) to the direction of the light. If you want, you can also create much more complex lighting environments and apply them instead. Google for precomputed radiance transfer (PRT) and spherical harmonics and it throws up a few papers.
I started working on a project just like this for Android, and by far the most time-consuming things were the AI and the GUI. If you can find a handy library that will manage most UI components for you, that should make the task a lot easier. If you use a cheating AI then that part gets a lot easier as well, but I was restricting my NPCs to using the same ship physics model and inputs as the player, which were simply a thrust button and left and right turning. Obviously the actual graphics side of things is pretty easy when everything is sprites and in 2D, as is simple collision (which is good enough for a game like this, imo). It is still worth using a grid for spatial partitioning, to speed up rendering and collision detection.
As far as learning C++ is concerned, it's difficult to say. I'm tempted to say do it in C#, but I can't vouch for Linux C# support. I know there is a .NET implementation for Linux, but not how good it is, or what there is in the way of IDE support for C#. I guess Java is another option, but personally I hate it, so I can't really recommend it myself.
In Civilization V there are no "sides" to the map (except for the top and bottom, of course), so if you keep moving your camera, let's say to the left, you keep circling around the map. How is this achieved? I was going to wrap my terrain around a cylinder, but I don't think that would produce the same effect, as the map in Civilization looks flat.
It's called "wrapped" or "toroidal" indexing. Basically, take the absolute xy coordinate of the tile you want to draw (which can be any value, including outside the range of valid tile indices), then use an unsigned mod operation with the map width and height to determine the "wrapped" index. e.g.:
x = -47
width = 20
wrappedx = x % width; // -47 % 20 == -7 in C/C++
if(wrappedx < 0) wrappedx += width; // -7 + 20 == 13
Sounds right to me. But if you are going to rescale z, don't use a matrix in the fragment program; just rescale the z value (*2-1). But why not just adjust your projection matrix instead? If you are writing custom fragment depth (which you will need to, as you are transforming z in your fragment program) you are losing your early-z check.
I don't agree with those two guides; my opinion is you should keep GLEW in its own folder with an include and a lib folder, and copy the DLL to the same directory as your program's exe file. So:
create a directory called glew wherever you usually put SDKs/libraries (e.g. programming/libs or programming/sdks), create a directory inside it called include with one called GL inside that (so glew/include/GL), and one called lib (glew/lib)
copy the .h file to the include/GL directory, and the .lib file to the lib directory.
copy the DLL file to the same directory as your exe file.
in your project settings for your exe project add the lib directory to Configuration Properties->Linker->General->Additional Library Directories, and add the include directory to Configuration Properties->C/C++->General->Additional Include Directories.*
in the project settings still: add glew32.lib to Configuration Properties->Linker->Input->Additional Dependencies. Alternatively use #pragma comment(lib, "glew32.lib") in a code file in your exe project.
* You should try to organise your code base so that you can use relative directories rather than full paths, as it makes moving your code around and changing your directory structure easier. e.g.: main directory C:\Programming. Under that you have SDKs and src. Under SDKs you have glew and other libraries that you haven't written yourself, i.e. external dependencies of your code. Under src you have all your own projects. So if under src you have a project called "hello_world", you can add the glew directories using $(SolutionDir)..\..\SDKs\glew\include and $(SolutionDir)..\..\SDKs\glew\lib. Then if you were to copy your entire Programming directory to another location, or rename it, these relative directories would still work.
This isn't a nice design. For one thing, dynamic_casts are costly, and you certainly don't want one every time you call any function in your entire game. Don't add runtime complexity just to avoid typing some extra text! And don't add complex design to avoid it either. If you want your Object class to be able to access these different systems, either put them into a globally accessible object or pass them to the Object in its constructor. But my opinion is that it is better to separate your operations from your data:
Object contains the specification for an Object (i.e. the data: position, colour etc.)
Another class (e.g. Renderer) contains the methods to draw an Object.
Another class (e.g. Scene) contains the set of object instances you wish to draw.
So you create a bunch of Objects, add them to a Scene, and then pass the Scene to the Renderer to actually do the drawing.
I may be missing something here, but fundamentally what makes floors any different from anything else your ray caster will be drawing?
If he means a ray caster as in Doom, then the walls and floors use different techniques. In a Doom-style engine the walls are drawn by intersecting a ray in 2D with a wall section, then scaling a 1D slice of texture onto the screen. Obviously drawing the floor needs a completely different technique. My guess is that you simply project each pixel onto a plane and then mod the plane coordinates by the texture size, and that's your pixel colour.
Can't see anything immediately wrong (assuming the various matrix*normal calculations have the .w component set to 0, to prevent translation). As usual, deconstruct your shader, outputting each stage as colour until you find where the values aren't correct. Start by making sure your material and light parameters are getting into the shader correctly by just outputting their colours to all pixels.
Yeah, I don't see anything wrong with the atmospheric scattering part either. I think you should go back and double-check that the camera position really is what it is supposed to be. Just because your lighting works doesn't mean everything is definitely correct as far as the scattering code expects it to be. In fact some input MUST be wrong, because the scattering code is copy-pasted from a working example. You just need to diagnose until you find the incorrect parameters.
How are you ensuring that your input data is definitely within the inner-to-outer atmosphere range? If, for instance, you are using the default or improved Perlin noise algorithm, they do not produce numbers clamped between -1 and 1; they can exceed those bounds. How about doing some shader diagnostics? Output height / (OuterRadius - InnerRadius) as the colour. Add branches to output red if the value escapes the 0 to 1 bounds. Simplify the shader, then build it back up, adding each stage and testing values against what they should be by outputting them as colour.
I don't know anything about the Zhang method, but I have implemented this method before and found it works quite well. I think you would have to treat the moving voxels as a separate problem, and from what I can tell nobody wants to share their techniques for modifiable spatial partitioning (I have never seen any talk of how to do it efficiently for different partitioning schemes).