My idea with max distance is for the attenuation to be 0 at maxDistance and 1 at the centre, clamped between the two. Then an additional parameter, which I call "FalloffFactor", controls how steep the falloff is between the centre and maxDistance.
At the moment I'm ready to accept the simplest formula just to get the actual light volume working properly before trying a better one. As shown in the pictures in the previous post, it doesn't work properly yet; something is broken and I don't know what.
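To make the idea above concrete, here's a minimal sketch of one way such a falloff could look: 1.0 at the light centre, 0.0 at maxDistance, with a power curve whose exponent plays the role of the FalloffFactor. The function name and the choice of a power curve are my assumptions for illustration, not the poster's actual formula.

```cpp
#include <algorithm>
#include <cmath>

// Hypothetical attenuation: returns 1.0 at the light centre and 0.0 at
// maxDistance, clamped between the two. falloffFactor controls how steep
// the curve is; 1.0 gives plain linear falloff, higher values fall off faster.
float attenuation(float distance, float maxDistance, float falloffFactor)
{
    // Normalise distance into [0, 1] and clamp so the result stays in range.
    float t = std::clamp(distance / maxDistance, 0.0f, 1.0f);
    return std::pow(1.0f - t, falloffFactor);
}
```

With falloffFactor = 1 this degenerates into the "simplest formula" case (straight linear falloff), so the steeper curves can be layered on later without changing the light-volume code.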
Start coding ASAP and then go back and refactor once you've got a naive implementation. It might be disheartening to know that much of your first implementation will probably get scrapped, but when actually coding you'll stumble into all sorts of problems you could never have dreamt of while designing. Even if you spent days on a solid design, the first implementation is hardly watertight no matter how much you prepare.
Refactor and iterate is king, especially in hobby projects where you can go back as many times as you want to find and correct your past mistakes. You'll quickly progress into a better programmer overall.
I strongly recommend http://www.arcsynthesis.org/gltut/index.html. I'm using it and it's far more in-depth than anything else I've found on the internet. I bought the OpenGL Superbible, but I never really use it anymore.
+1 for refactoring. It's a little disheartening to write code you know you'll probably refactor soon, but it's impossible to get it right on the first attempt. It gets your brain working, though, and that's important; I do it all the time in my hobby projects. It's also mentally pleasing to clean up old code. There's a good book called "The Pragmatic Programmer" (http://www.amazon.com/Pragmatic-Programmer-Journeyman-Master/dp/020161622X) that I highly recommend.
It's a little harder to convince people at work, especially non-programmers, of the value of spending some time going back to refactor old code instead of adding new features, though...
I can recommend http://unity3d.com/. It's very easy to use, powerful, and you can become productive quickly. It supports multiple platforms, including iOS/Android. I believe you can make games for those platforms even with the free license (although with restrictions).
I'm not sure whether you can access the accelerometer directly from their engine, but I'm certain they let you write wrappers to access the underlying OS features. I know of a project where we had a native cross-platform library that we could load, regardless of platform, through the same C# wrapper code in Unity, and through it we could use the Android/iOS libraries directly.
If I were an indie who just wanted to create a game, that would be my engine of choice any day of the week.
So far I've used a static camera which simply sits at Vec3(0.0f) and stares down the negative Z axis. I've got geometry like cubes behaving correctly when translating, scaling, rotating, etc., but now I want to try to get the camera moving correctly.
I am using glm, and to build the view matrix I am using glm::lookAt with the Vec3 data provided by my camera struct.
I basically try to move it like this between renders:
// move camera
if (evnt.KeySymbol == LEFT)
    activeScene->GetSceneCamera().CameraPosition -= Vec3(0.1f, 0.0f, 0.0f);
if (evnt.KeySymbol == RIGHT)
    activeScene->GetSceneCamera().CameraPosition += Vec3(0.1f, 0.0f, 0.0f);
if (evnt.KeySymbol == UP)
    activeScene->GetSceneCamera().CameraPosition -= Vec3(0.0f, 0.0f, 0.1f);
if (evnt.KeySymbol == DOWN)
    activeScene->GetSceneCamera().CameraPosition += Vec3(0.0f, 0.0f, 0.1f);
but this creates some strange results. For example, instead of the 'strafing' effect I would expect when moving the camera along the X axis, it's as if the camera is rotating in place. And when I try to move it along the Z axis, I can't quite describe how it looks; kind of like 'zooming' down the Z axis or something..
// front vertices
1.0f, -1.0f, 1.0f, // bottom right
-1.0f, -1.0f, 1.0f, // bottom left
-1.0f, 1.0f, 1.0f, // top left
1.0f, 1.0f, 1.0f, // top right
// back vertices
1.0f, -1.0f, -1.0f, // bottom right
-1.0f, -1.0f, -1.0f, // bottom left
-1.0f, 1.0f, -1.0f, // top left
1.0f, 1.0f, -1.0f, // top right
I strongly recommend learning modern, shader-driven OpenGL. The vast majority of OpenGL resources on the net cover the classic fixed-function pipeline, but there are a few good tutorials out there to help get you off the ground. I really like the Learning Modern 3D Graphics Programming tutorial by Jason McKesson. I've been around OpenGL for years but only recently got into the modern stuff, and that tutorial is what I used; I also know a few newcomers to OpenGL who found it useful.
You might also find the 5th edition of the OpenGL Superbible useful to start with. I first learned OpenGL from the 2nd edition some years ago (still have it on my shelf) and picked up the Kindle version of the 5th edition earlier this year to help me along with the tutorial above. I think it's perfect for beginners: the author shields you from the nitty-gritty details of shaders for the first few chapters via a utility library he put together, which is a great way to get started with the concepts without getting bogged down in technical details. He gets into the shaders around Chapter 6.
Once you're comfortable with simple OpenGL stuff, another book you might find useful is the 6th edition of Edward Angel's Interactive Computer Graphics book. It teaches some graphics theory and algorithms specifically using shader-based OpenGL.
All three of these books together should go a long way toward getting you where you want to be. Of course, there are other great books out there that any graphics programmer should have on his shelf, but these are good to get started with.