The point is that even if you're avoiding shaders in order to support prehistoric hardware, you may well have already committed your hardware requirements to something more modern elsewhere - shaders aren't the only feature of more modern hardware and it's quite easy to trip over that line and thereby invalidate your reasons for avoiding shaders.
no worries there. the most radical thing i did recently was implementing QueryPerformanceCounter. it's nice having a real high-rez timer again, like back when reprogramming the timer chip was standard operating procedure for a game. graphics-wise, it's all directx8-compatible code, basically. other than wanting to draw more stuff (it's always "more stuff!" with games), i'm only using dx8 capabilities.
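(the timer is nothing fancy - roughly the usual win32 pattern, something like this; just a generic sketch, not the exact gamelib code:)

#include <windows.h>

// counts / frequency = seconds since boot, good enough for frame timing
double SecondsNow()
{
    static LARGE_INTEGER freq = { 0 };
    if (freq.QuadPart == 0)
        QueryPerformanceFrequency(&freq);   // ticks per second, constant at runtime

    LARGE_INTEGER now;
    QueryPerformanceCounter(&now);          // current tick count
    return (double)now.QuadPart / (double)freq.QuadPart;
}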
so if i add shaders to my gamedev graphics library, or switch it over entirely, i can write a vertex shader for basic transforms and 3 pixel shaders - regular, alphatest, and alphablend - and that's it? i'm done?
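on the c++ side i picture it as roughly this (a d3d9-style sketch, assuming a working device and already-compiled shaders; the function and variable names are just placeholders i made up):

#include <d3d9.h>

enum BlendMode { BLEND_NONE = 0, BLEND_ALPHATEST = 1, BLEND_ALPHABLEND = 2 };

// one vertex shader for the basic transform, one of three pixel shaders per batch,
// plus the matching blend/test render states (still render states in d3d9, even with shaders)
void SetBatchShaders(IDirect3DDevice9* dev,
                     IDirect3DVertexShader9* vsTransform,
                     IDirect3DPixelShader9* psForMode[3],
                     BlendMode mode)
{
    dev->SetVertexShader(vsTransform);
    dev->SetPixelShader(psForMode[mode]);

    dev->SetRenderState(D3DRS_ALPHATESTENABLE,  mode == BLEND_ALPHATEST);
    dev->SetRenderState(D3DRS_ALPHAREF,         0x80);
    dev->SetRenderState(D3DRS_ALPHAFUNC,        D3DCMP_GREATEREQUAL);
    dev->SetRenderState(D3DRS_ALPHABLENDENABLE, mode == BLEND_ALPHABLEND);
    dev->SetRenderState(D3DRS_SRCBLEND,         D3DBLEND_SRCALPHA);
    dev->SetRenderState(D3DRS_DESTBLEND,        D3DBLEND_INVSRCALPHA);
}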
that will speed up the transform and texture stages, but i'm still sending 500 batches of 20 triangles.
i've done a bit of basic testing and it appears i'm cpu bound due to the large number of batches and the small batch sizes.
right now my approach to drawing most scenes is to assemble the scene from basic parts like a ground quad, rock meshes 1 & 2, and plant meshes 1-4, then texture, scale, rotate, translate, and height-map them, one quad, rock, or plant at a time.
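in other words, the frame ends up looking roughly like this one-draw-call-per-object loop (d3d9/d3dx-style sketch for brevity; Part, rotY, and the rest are stand-ins for whatever the gamelib actually uses):

#include <d3d9.h>
#include <d3dx9.h>
#include <vector>

struct Part {
    IDirect3DTexture9* texture;
    D3DXVECTOR3        pos;                          // y already height-mapped
    float              scale, rotY;
    unsigned           startIndex, numTris, numVerts;
};

void DrawSceneOneAtATime(IDirect3DDevice9* dev, const std::vector<Part>& parts)
{
    for (size_t i = 0; i < parts.size(); ++i)
    {
        const Part& p = parts[i];

        // scale * rotate * translate, rebuilt per object, per frame
        D3DXMATRIX s, r, t, world;
        D3DXMatrixScaling(&s, p.scale, p.scale, p.scale);
        D3DXMatrixRotationY(&r, p.rotY);
        D3DXMatrixTranslation(&t, p.pos.x, p.pos.y, p.pos.z);
        world = s * r * t;

        dev->SetTransform(D3DTS_WORLD, &world);
        dev->SetTexture(0, p.texture);

        // ~20 triangles per call, ~500 calls per frame: the per-call overhead adds up
        dev->DrawIndexedPrimitive(D3DPT_TRIANGLELIST, 0, 0, p.numVerts, p.startIndex, p.numTris);
    }
}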
i take it that the two alternative approaches in use are:
1. chunks: bigger meshes containing entire sections of a level
2. dynamic buffers: the potentially visible mesh(es) assembled on the fly (rough sketch below)
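for #2, the usual d3d9-era pattern i've read about is a round-robin dynamic vertex buffer filled with NOOVERWRITE/DISCARD locks. a minimal sketch, assuming a buffer created with D3DUSAGE_DYNAMIC | D3DUSAGE_WRITEONLY and my own made-up Vertex layout:

#include <d3d9.h>
#include <cstring>

struct Vertex { float x, y, z, nx, ny, nz, u, v; };   // assumed layout, match it to your decl/FVF

// append 'count' vertices to a dynamic VB and return the first vertex index.
// NOOVERWRITE while there's room, DISCARD when the buffer wraps.
// assumes count <= vbCapacity.
UINT AppendVertices(IDirect3DVertexBuffer9* vb, UINT vbCapacity, UINT& cursor,
                    const Vertex* src, UINT count)
{
    DWORD flags = D3DLOCK_NOOVERWRITE;
    if (cursor + count > vbCapacity) { cursor = 0; flags = D3DLOCK_DISCARD; }

    void* dst = 0;
    vb->Lock((UINT)(cursor * sizeof(Vertex)), (UINT)(count * sizeof(Vertex)), &dst, flags);
    std::memcpy(dst, src, count * sizeof(Vertex));
    vb->Unlock();

    UINT first = cursor;
    cursor += count;
    return first;
}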
is it just me, or is it weird that what games want to do (draw lots of small meshes) is just what vidcards suck at?
or did they evolve with a specific type of game and way of doing graphics in mind? or was it another case of non-gamedevs doing what they thought might help, and might be a way to make some $ at the game of making games?
overall, i'm looking for general solutions for basic graphics capabilities - stuff i can build, plop into the gamelib, and forget about, so i can get back to building games, not components and modules.
but it does look like the time has come to move on to a new way of doing things, if i want the level of scene complexity i want - and probably need - to be competitive in today's market.
i only sell in low/no-competition markets. when you're the best or only one out there, you can get away with less-than-bleeding-edge graphics. but things like applying a normal lighting equation and some simple scaled mipmaps with CORRECT alpha test wouldn't be that big a deal. pretty much all of that i've done before, or something similar.
so i guess i'd be looking for a generalized shader-based approach for drawing indoor and outdoor scenes for games like shooters, fps/rpgs, and ground, air, and water vehicle sims.
at the GPU end, you want to set a texture, draw a batch of all the triangles that use that texture and are at least partially in the frustum, then do the next texture, and so on, touching each texture exactly once. that's what the card likes the most, right?
the question is what the data should look like on the game end for proper "care and feeding" of the GPU in that manner, or whether it's even 100% possible or practical to do so.
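my rough guess at the game-side data is something like this (plain c++ sketch, all names made up): collect the visible draw requests each frame, sort them by texture, then walk the sorted list, setting each texture once and drawing everything that uses it.

#include <algorithm>
#include <vector>

// one visible thing to draw this frame: which texture, and where its triangles
// live in the shared vertex/index buffers (fields are just a guessed-at layout)
struct DrawItem {
    int      textureId;    // index into the texture list
    unsigned startIndex;   // first index in the shared index buffer
    unsigned triCount;     // number of triangles
};

void FlushFrame(std::vector<DrawItem>& items)
{
    // sort so all items sharing a texture end up contiguous
    std::sort(items.begin(), items.end(),
              [](const DrawItem& a, const DrawItem& b) { return a.textureId < b.textureId; });

    int currentTexture = -1;
    for (size_t i = 0; i < items.size(); ++i)
    {
        if (items[i].textureId != currentTexture)
        {
            currentTexture = items[i].textureId;
            // SetTexture(textures[currentTexture]) goes here - once per texture
        }
        // DrawIndexedPrimitive(..., items[i].startIndex, items[i].triCount) goes here,
        // ideally with adjacent items sharing a texture merged into one call
    }
    items.clear();
}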
i'm doing all this with randomly generated levels and environments in mind, so pre-processed and hard-coded data are pretty much out of the question.