Posted by solenoidz
on 20 September 2013 - 10:34 AM
I'm not sure if I understand the question correctly, but can't you place your probes around your level in your engine? Then, for every probe, render a cube map at the probe's position, convert the cube map to a set of SH coefficients, and store them. At render time, render spheres (for example) at each probe's position, pass that probe's SH coefficients to the shader, and for every pixel, get its world position and compute the pixel-to-probe vector, which can be used as a normal to extract the light coming from that direction out of the SH and shade that pixel with "global light".
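The per-pixel evaluation step described above could be sketched like this on the CPU side for one colour channel; the constants are the standard real order-2 SH basis factors, but the function names (`shBasis`, `shEvaluate`) are made up for the sketch, and a real implementation would do this per channel in the pixel shader:

```cpp
#include <cmath>

// Evaluate the 9 order-2 SH basis functions for a unit direction (x, y, z).
// The constants are the usual real spherical-harmonic normalization factors.
void shBasis(float x, float y, float z, float out[9]) {
    out[0] = 0.282095f;                         // Y(0, 0)
    out[1] = 0.488603f * y;                     // Y(1,-1)
    out[2] = 0.488603f * z;                     // Y(1, 0)
    out[3] = 0.488603f * x;                     // Y(1, 1)
    out[4] = 1.092548f * x * y;                 // Y(2,-2)
    out[5] = 1.092548f * y * z;                 // Y(2,-1)
    out[6] = 0.315392f * (3.0f * z * z - 1.0f); // Y(2, 0)
    out[7] = 1.092548f * x * z;                 // Y(2, 1)
    out[8] = 0.546274f * (x * x - y * y);       // Y(2, 2)
}

// Reconstruct the light stored in one channel's 9 SH coefficients
// for a normalized pixel-to-probe direction.
float shEvaluate(const float coeffs[9], float x, float y, float z) {
    float basis[9];
    shBasis(x, y, z, basis);
    float result = 0.0f;
    for (int i = 0; i < 9; ++i)
        result += coeffs[i] * basis[i];
    return result;
}
```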
I'm not sure what you actually mean (your statements sound a bit contradictory to me), but I'm afraid you can't reuse anything across different instances of a mesh when it comes to lighting them with a lightmap. They occupy different positions and orientations in space, so they must be lit differently by the lights in the scene.
Posted by solenoidz
on 22 November 2011 - 03:41 AM
You have a particle system that renders batched particles. You draw them in one call, I guess, and you need to light every single one of them separately, based on its distance from the averaged sum of the lights in the current scene?
Something like this (excuse my photoshop skills)
As opposed to this.
Can you store (on the CPU side) an attribute in every particle vertex describing how close it is to the averaged sum of the scene lights, so you can adjust the brightness of that individual particle in the pixel shader?
Maybe you can even skip the CPU side: pass the average light position (and color, why not) to the particle vertex shader, and then for each processed particle vertex pass its distance from the averaged lighting down to the pixel shader.
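A minimal CPU-side sketch of the idea: average the scene lights into one virtual light, then derive a per-particle brightness from the distance to it. The `1 / (1 + d²)` falloff is an arbitrary choice for the sketch; whatever attenuation curve the engine already uses would do.

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

// Average the positions of all scene lights into one "virtual" light
// that the whole particle batch is lit by.
Vec3 averageLightPosition(const Vec3* lights, int count) {
    Vec3 sum{0.0f, 0.0f, 0.0f};
    for (int i = 0; i < count; ++i) {
        sum.x += lights[i].x;
        sum.y += lights[i].y;
        sum.z += lights[i].z;
    }
    return {sum.x / count, sum.y / count, sum.z / count};
}

// Per-particle brightness from the squared distance to the averaged
// light; this value would be written into a vertex attribute.
float particleBrightness(Vec3 particle, Vec3 avgLight) {
    float dx = particle.x - avgLight.x;
    float dy = particle.y - avgLight.y;
    float dz = particle.z - avgLight.z;
    float d2 = dx * dx + dy * dy + dz * dz;
    return 1.0f / (1.0f + d2); // arbitrary falloff for the sketch
}
```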
According to my tests.
If implemented right, occlusion culling doesn't stall the rendering and actually helps.
Culling objects behind terrain is simple. You don't even need to sort anything; just draw the terrain first.
Occlusion culling conflicts with other speed-up techniques such as instancing, so doing it on individual objects isn't the perfect way.
You should, for example, occlusion-cull octree nodes against one another and against the terrain, and inside those nodes keep instanced objects that you draw in one call. It depends on the scene: some nodes may contain lots of different meshes, so instancing won't help much, and in that situation you may be better off with per-object culling rather than per-node culling, especially if the nodes are big and the objects are complex. I think there's always a compromise, and there isn't an ideal solution for every situation.
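The per-node decision described above could be sketched like this: after the terrain is drawn, each node's bounding box is tested with a hardware occlusion query, and a node whose query reports (almost) zero visible samples is skipped; otherwise its contents go out in one instanced call. The `visibleSamples` field here stands in for what `glGetQueryObjectuiv` or a D3D occlusion query would return; the types and threshold are made up for the sketch.

```cpp
#include <vector>

// One octree node with the result of its bounding-box occlusion query.
struct OctreeNode {
    int id;
    unsigned visibleSamples; // samples that passed the depth test
};

// Return the ids of the nodes whose query passed the threshold; each of
// these would then be drawn with one instanced call.
std::vector<int> nodesToDraw(const std::vector<OctreeNode>& nodes,
                             unsigned sampleThreshold) {
    std::vector<int> result;
    for (const OctreeNode& n : nodes)
        if (n.visibleSamples > sampleThreshold) // occluded nodes are culled
            result.push_back(n.id);
    return result;
}
```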
You should at least try OpenGL or Direct3D once in your life. Actually, you should try them all the time, even if those attempts are complete failures and you keep returning to those engines, crying.
Eventually, at some point, you will feel like you actually understand those APIs and feel comfortable and free with them. You will start to question certain implementation details and approaches in the full-blown engines, and in the end you will scrap them and start your own engine the way you like it. At that point, only the sky is the limit.
In fact, if you want to make a game NOW, for whatever reason, use engines. If you are just interested in 3D graphics and can dedicate time and effort, start learning a lower-level API. You will eventually go further, and the programming experience you gain during the learning process will be priceless for writing the game itself. If you can write reasonably complex 3D graphics using a low-level API, you most probably can write reasonably complex gameplay and the rest.
Otherwise, you could spend your life using other people's work and fooling yourself that you are a professional game developer, no matter how good the results look and how fast those results are achieved.
I just wanted to tell you: try some OpenGL / Direct3D examples first. Some of them are simpler, and better looking, than setting up an Ogre development environment and creating a game loop.
Writing a WYSIWYG level editor alongside the game engine is a wise step, IMHO.
It could accelerate your development by exposing the 3D world in a visual way.
It takes some effort to write and maintain, but at the end of the day it repays every single hour you spend working on it. You could start by setting up a viewport and moving the camera around, then load objects; implement picking and selecting objects or sets of objects; move, rotate, scale, clone them, and so on. Having a rich GUI tool-set for RAD applications is a handy thing. Everything in the world can be made visual and tweakable: materials, shader parameters, transformations, physics, etc.
Your editor could be one big uber-editor that does everything you can think of, or you could have a set of small editors for particle systems, physics, the GUI interface, etc.
Writing a 3D editor that uses your engine forces you to implement the interface to your graphics engine in a usable way. You will feel the need to make it useful and simple, and you will think about every function and parameter you export from your engine, testing and debugging it right away in the editor. Writing an editor that uses the engine is similar to writing a game that uses the engine; you could even write and play your game inside the editor. Once it's tested, make an exe that calls the same functions and you can release a game demo. You can also attach your debugger to editor.exe, which hosts your engine DLL, and step into and debug your engine code.
Calculate the dot product between the light direction and the normal in the vertex shader and pass that float to the pixel shader. In the pixel shader, sample your bump-map texture, extract a normal from it, add its contribution to the dot value passed from the vertex shader, and use the result in the final color calculation of the pixel. That works for "detail" normal mapping, in conjunction with vertex normals that provide the overall lighting. That is just one way of doing it; for normal maps in object space, such as those exported from L3DT, you don't need vertex normals, I think.
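The math above could be sketched in plain C++ (a real implementation would live in the shaders); the bump-map decode and the 0.5 detail weight are illustrative assumptions, not from the post:

```cpp
#include <algorithm>
#include <cmath>

struct Vec3 { float x, y, z; };

float dot3(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// "Vertex shader" part: the coarse Lambert term from the vertex normal,
// passed to the pixel shader as a single float.
float vertexLambert(Vec3 normal, Vec3 lightDir) {
    return dot3(normal, lightDir);
}

// Decode a tangent-space normal stored as 0..1 per channel in the
// bump-map texel back to the -1..1 range.
Vec3 texelToNormal(Vec3 texel) {
    return {texel.x * 2.0f - 1.0f,
            texel.y * 2.0f - 1.0f,
            texel.z * 2.0f - 1.0f};
}

// "Pixel shader" part: nudge the interpolated vertex term with the
// detail normal's own Lambert term, then clamp, as described above.
float shadePixel(float vertexDot, Vec3 bumpTexel, Vec3 lightDirTangent) {
    Vec3 detail = texelToNormal(bumpTexel);
    float detailDot = dot3(detail, lightDirTangent);
    return std::clamp(vertexDot + 0.5f * detailDot, 0.0f, 1.0f);
}
```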