index buffers, D3DPOOL_DEFAULT, and lost device

16 comments, last by Norman Barrows 11 years ago

Round about now I should mention how much I despise mayonnaise. :)

Anyway, typical requirements for a game coming out maybe 5 or 6 years ago would have been D3D9/programmable pipeline. I'm going to confidently predict that based on what you've just said, once you get over the learning curve you'll love it.

Anyway, the following seems a reasonably simple introduction: http://www.two-kings.de/tutorials/dxgraphics/dxgraphics18.html - it does a few things in a slightly weird way (specifically, the GetTransform stuff), but other than that it should be enough to get you up and running with a basic program.

Direct3D has need of instancing, but we do not. We have plenty of glVertexAttrib calls.

Very roughly:
You can rank a GPU's compatibility/power level by the Shader Models that it supports - these are the 'Asm' instruction sets that your HLSL shader code will be compiled into.

Fixed-function (FF) was OK until around 2003.
Then Dx9 SM2 popped up - this is Unity's minimum requirement.
In around 2004-06, Dx9 SM3 took off - this is PlayStation 3/Xbox 360-level hardware (old now).
Then in 2007-09, Dx10 SM4 appeared.
Then in the past 2 to 4 years, Dx11 SM5 has been taking over, but it's still relatively new ground.

If you want to keep support for older versions of Windows, then you'll have to use Dx9, but you can choose which hardware era to target with SM2 or SM3 shaders (SM2 has more limitations, but buys you a few more years of compatibility).

If you're ok with ditching WinXP support, then you can go straight to Dx11. It has "feature levels", which allows it to run on earlier hardware (not just on SM5-era hardware). You can choose between SM2, SM4 or SM5 (they don't support PS3-era SM3 for some strange reason...).
Most modern games at the moment probably use Dx11 with SM4 for hardware compatibility (and a few SM5 optional code paths for the latest eye candy).
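To make the "feature levels" idea concrete, here's a rough sketch (illustrative only, not from this thread) of creating a Dx11 device that falls back through whatever the hardware supports; the level you actually get back tells you whether to load SM2-, SM4- or SM5-class shaders:

#include <d3d11.h>
#pragma comment(lib, "d3d11.lib")

// Sketch: ask for the best feature level available, falling back from
// Dx11/SM5-class hardware down to Dx9/SM2-class hardware.
HRESULT CreateDeviceWithFallback(ID3D11Device** device, ID3D11DeviceContext** context)
{
    const D3D_FEATURE_LEVEL requested[] =
    {
        D3D_FEATURE_LEVEL_11_0,  // SM5
        D3D_FEATURE_LEVEL_10_1,  // SM4.x
        D3D_FEATURE_LEVEL_10_0,  // SM4
        D3D_FEATURE_LEVEL_9_3,   // SM2.x-class hardware (no SM3 level, as noted above)
        D3D_FEATURE_LEVEL_9_1,
    };

    D3D_FEATURE_LEVEL obtained;
    return D3D11CreateDevice(nullptr,                   // default adapter
                             D3D_DRIVER_TYPE_HARDWARE,
                             nullptr, 0,
                             requested, ARRAYSIZE(requested),
                             D3D11_SDK_VERSION,
                             device, &obtained, context);
}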

It's worth adding here that GPU capabilities tend to evolve hand-in-hand (despite the impression that D3D caps or OpenGL extensions may give you), so if your code depends on certain other non-shader capabilities, you've probably already got a requirement for hardware that supports shaders anyway. For example, you mentioned a 10-texture blend earlier - if you're blending 10 textures on the GPU then you're already into SM3-class hardware territory. That's not all. Are you, for example, using any non-power-of-two textures? Or textures sized 2048x2048 or more? These will all raise your minimum hardware requirements to something that's also going to support shaders anyway.

The point is that even if you're avoiding shaders in order to support prehistoric hardware, you may well have already committed your hardware requirements to something more modern elsewhere - shaders aren't the only feature of more modern hardware and it's quite easy to trip over that line and thereby invalidate your reasons for avoiding shaders.
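To see where a given card actually sits on those non-shader capabilities, a rough Dx9 caps-check sketch (illustrative only) covering the features mentioned above - simultaneous textures, power-of-two restrictions, maximum texture size - next to the shader versions it reports:

#include <cstdio>
#include <d3d9.h>

void PrintRelevantCaps(IDirect3D9* d3d)
{
    D3DCAPS9 caps;
    d3d->GetDeviceCaps(D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, &caps);

    // Fixed-function multitexture limit - a 10-texture blend blows past this on old cards.
    printf("simultaneous textures: %lu\n", caps.MaxSimultaneousTextures);

    // Power-of-two restriction (POW2 set without the conditional escape hatch = pow2 only).
    bool pow2Only = (caps.TextureCaps & D3DPTEXTURECAPS_POW2) &&
                   !(caps.TextureCaps & D3DPTEXTURECAPS_NONPOW2CONDITIONAL);
    printf("pow2-only textures: %s\n", pow2Only ? "yes" : "no");

    // Maximum texture size - 2048x2048 or more already implies shader-capable hardware.
    printf("max texture size: %lux%lu\n", caps.MaxTextureWidth, caps.MaxTextureHeight);

    // And the shader models themselves, for comparison.
    printf("vs %lu.%lu / ps %lu.%lu\n",
           D3DSHADER_VERSION_MAJOR(caps.VertexShaderVersion),
           D3DSHADER_VERSION_MINOR(caps.VertexShaderVersion),
           D3DSHADER_VERSION_MAJOR(caps.PixelShaderVersion),
           D3DSHADER_VERSION_MINOR(caps.PixelShaderVersion));
}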

Direct3D has need of instancing, but we do not. We have plenty of glVertexAttrib calls.

Anyway, the following seems a reasonably simple introduction

thanks. i appreciate that!

two-kings! good stuff! been a long time.

Norm Barrows

Rockland Software Productions

"Building PC games since 1989"

rocklandsoftware.net

PLAY CAVEMAN NOW!

http://rocklandsoftware.net/beta.php

Very roughly:
You can rank a GPU's compatibility/power level by the Shader Models that it supports - these are the 'Asm' instruction sets that your HLSL shader code will be compiled into.

Fixed-function (FF) was OK until around 2003.
Then Dx9 SM2 popped up - this is Unity's minimum requirement.
In around 2004-06, Dx9 SM3 took off - this is PlayStation 3/Xbox 360-level hardware (old now).
Then in 2007-09, Dx10 SM4 appeared.
Then in the past 2 to 4 years, Dx11 SM5 has been taking over, but it's still relatively new ground.

If you want to keep support for older versions of Windows, then you'll have to use Dx9, but you can choose which hardware era to target with SM2 or SM3 shaders (SM2 has more limitations, but buys you a few more years of compatibility).

If you're ok with ditching WinXP support, then you can go straight to Dx11. It has "feature levels", which allows it to run on earlier hardware (not just on SM5-era hardware). You can choose between SM2, SM4 or SM5 (they don't support PS3-era SM3 for some strange reason...).
Most modern games at the moment probably use Dx11 with SM4 for hardware compatibility (and a few SM5 optional code paths for the latest eye candy).

this is EXACTLY the type of info i need.

looks like i should be at dx11 sm4-5. even if i don't need all the capabilities, it'll be easier to port to DX12 when the time comes.

Norm Barrows

Rockland Software Productions

"Building PC games since 1989"

rocklandsoftware.net

PLAY CAVEMAN NOW!

http://rocklandsoftware.net/beta.php

Anyway, the following seems a reasonably simple introduction: http://www.two-kings.de/tutorials/dxgraphics/dxgraphics18.html

i take it that in dx11, shader code is required for both the vertex and pixel stages? and that code similar to that on two-kings (mat mul, and tex lookup) would be the basics to get me started? and then i have to add gouraud and phong and mips to get the rest of the standard fixed function pipeline? the baseline capabilities i'm looking for are aniso, mipmaps, and T&L. the special capabilities i need beyond that are alphatest and alpha blend (for now). the shader code itself seems very straightforward. having written my own poly engine once probably helps (anyone remember the sutherland-hodgman clipping algo? <g>). i hope i don't get addicted to writing shader code!
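For orientation, a minimal Dx11 vertex + pixel shader pair covering just that - one matrix multiply and one texture lookup, with clip() standing in for alphatest - might look roughly like this; entry-point names, register slots and the D3DCompile wrapper are made up for illustration:

#include <cstring>
#include <d3dcompiler.h>
#pragma comment(lib, "d3dcompiler.lib")

// A bare-bones "fixed-function replacement": transform in the vertex stage,
// texture lookup (plus optional alpha test) in the pixel stage.  Gouraud/Phong
// lighting, fog, etc. would be layered on top of this.
static const char* g_basicShaders = R"(
cbuffer PerObject : register(b0)
{
    float4x4 worldViewProj;   // plays the role of SetTransform(WORLD/VIEW/PROJECTION)
};

Texture2D    diffuseTex : register(t0);
SamplerState diffuseSmp : register(s0);   // aniso + mip settings live in the sampler state

struct VSIn  { float3 pos : POSITION; float2 uv : TEXCOORD0; };
struct VSOut { float4 pos : SV_POSITION; float2 uv : TEXCOORD0; };

VSOut VSMain(VSIn input)
{
    VSOut o;
    o.pos = mul(float4(input.pos, 1.0f), worldViewProj);   // the "T" in T&L
    o.uv  = input.uv;
    return o;
}

float4 PSMain(VSOut input) : SV_Target
{
    float4 c = diffuseTex.Sample(diffuseSmp, input.uv);    // the texture lookup
    clip(c.a - 0.5f);                                      // alphatest equivalent
    return c;
}
)";

// Compile one entry point, e.g. CompileStage("VSMain", "vs_4_0") or ("PSMain", "ps_4_0").
ID3DBlob* CompileStage(const char* entry, const char* profile)
{
    ID3DBlob* bytecode = nullptr;
    D3DCompile(g_basicShaders, strlen(g_basicShaders), nullptr, nullptr, nullptr,
               entry, profile, 0, 0, &bytecode, nullptr);   // error blob omitted for brevity
    return bytecode;
}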

ok, here i go already....

with the HLSL instruction set, would it be possible to implement real time raytracing?

i'm thinking probably not, unless you used it as a big mathco (math coprocessor), like all that non-graphics GPU stuff you hear about. it's more of a specialized processor/hardware stage in a poly engine.

too bad they don't make cards that accelerate ray tracing. then again, they wouldn't be much different to program i'd imagine. vertex stuff would get replaced with ray stuff, pixel stuff would be analogous to color calculations at each ray/surface collision.

Norm Barrows

Rockland Software Productions

"Building PC games since 1989"

rocklandsoftware.net

PLAY CAVEMAN NOW!

http://rocklandsoftware.net/beta.php

For example, you mentioned a 10-texture blend earlier - if you're blending 10 textures on the GPU then you're already into SM3-class hardware territory. That's not all. Are you, for example, using any non-power-of-two textures? Or textures sized 2048x2048 or more? These will all raise your minimum hardware requirements to something that's also going to support shaders anyway.

the texture blender was for the previous version of caveman. it did its work in ram with the cpu, then memcpy'd the result to a dynamic texture in (i guess it was) D3DPOOL_DEFAULT.
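For reference, that CPU-blend-then-upload path usually looks something like this rough Dx9 sketch (function names made up): a D3DUSAGE_DYNAMIC texture in D3DPOOL_DEFAULT, locked with DISCARD - and being in D3DPOOL_DEFAULT is also exactly why it has to be handled across a lost device and Reset():

#include <cstring>
#include <d3d9.h>

// Sketch: dynamic texture that CPU-generated pixels get copied into each frame.
// D3DPOOL_DEFAULT resources are lost on a lost device and must be released
// before Reset() and recreated afterwards.
IDirect3DTexture9* CreateBlendTarget(IDirect3DDevice9* dev, UINT w, UINT h)
{
    IDirect3DTexture9* tex = nullptr;
    dev->CreateTexture(w, h, 1, D3DUSAGE_DYNAMIC, D3DFMT_A8R8G8B8,
                       D3DPOOL_DEFAULT, &tex, nullptr);
    return tex;
}

void UploadBlendResult(IDirect3DTexture9* tex, const void* pixels, UINT w, UINT h)
{
    D3DLOCKED_RECT lr;
    if (SUCCEEDED(tex->LockRect(0, &lr, nullptr, D3DLOCK_DISCARD)))
    {
        // Copy row by row in case the locked pitch doesn't match the source width.
        const BYTE* src = static_cast<const BYTE*>(pixels);
        BYTE*       dst = static_cast<BYTE*>(lr.pBits);
        for (UINT y = 0; y < h; ++y)
            memcpy(dst + y * lr.Pitch, src + y * w * 4, w * 4);   // 4 bytes per A8R8G8B8 texel
        tex->UnlockRect(0);
    }
}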

all textures in the titles i'm working on now are 256x256. i experimented with sizes up to 4096x4096, but was able to get decent results with 256x256. i spent a lot of time playing around with quad sizes, # of times the texture is repeated across the quad, seamless textures, real world size of the image on the texture, etc, trying to get textures at the correct image scale, seamless, with little or no moire patterns, low pixelization, and 256x256 textures for speed. some of the stuff turned out really nice. no bump maps or anything fancy. it appears that a high quality texture can make all the difference.

Norm Barrows

Rockland Software Productions

"Building PC games since 1989"

rocklandsoftware.net

PLAY CAVEMAN NOW!

http://rocklandsoftware.net/beta.php

The point is that even if you're avoiding shaders in order to support prehistoric hardware, you may well have already committed your hardware requirements to something more modern elsewhere - shaders aren't the only feature of more modern hardware and it's quite easy to trip over that line and thereby invalidate your reasons for avoiding shaders.

no worries there. the most radical thing i did recently was i implemented QueryPerformanceCounter. it's nice having a real high-rez timer again, like back when you used to reprogram the timer chip as part of standard operating procedure for a game. graphics-wise, it's all directx8-compatible code basically. other than wanting to draw more stuff (it's always "more stuff!" with games), i'm only using dx8 capabilities.
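For completeness, a minimal QueryPerformanceCounter timer looks roughly like this (illustrative sketch):

#include <windows.h>

// Sketch of a QueryPerformanceCounter-based timer: seconds since first call.
double SecondsSinceStart()
{
    static LARGE_INTEGER freq = {}, start = {};
    if (freq.QuadPart == 0)
    {
        QueryPerformanceFrequency(&freq);   // counts per second, fixed at boot
        QueryPerformanceCounter(&start);
    }
    LARGE_INTEGER now;
    QueryPerformanceCounter(&now);
    return double(now.QuadPart - start.QuadPart) / double(freq.QuadPart);
}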

so if i add to / switch my gamedev graphics library to shaders, i can write a vertex shader for basic transforms, and 3 pixel shaders: regular, alphatest, and alphablend, and that's it? i'm done?
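One aside on that list, as a rough sketch: in Dx11 the "alphablend" case usually isn't a separate pixel shader at all - blending is still fixed-function render state, set up through a blend state object and bound with OMSetBlendState:

#include <d3d11.h>

// Sketch: standard src-alpha / inv-src-alpha blending as Dx11 render state.
ID3D11BlendState* CreateAlphaBlendState(ID3D11Device* device)
{
    D3D11_BLEND_DESC desc = {};
    desc.RenderTarget[0].BlendEnable           = TRUE;
    desc.RenderTarget[0].SrcBlend              = D3D11_BLEND_SRC_ALPHA;
    desc.RenderTarget[0].DestBlend             = D3D11_BLEND_INV_SRC_ALPHA;
    desc.RenderTarget[0].BlendOp               = D3D11_BLEND_OP_ADD;
    desc.RenderTarget[0].SrcBlendAlpha         = D3D11_BLEND_ONE;
    desc.RenderTarget[0].DestBlendAlpha        = D3D11_BLEND_ZERO;
    desc.RenderTarget[0].BlendOpAlpha          = D3D11_BLEND_OP_ADD;
    desc.RenderTarget[0].RenderTargetWriteMask = D3D11_COLOR_WRITE_ENABLE_ALL;

    ID3D11BlendState* state = nullptr;
    device->CreateBlendState(&desc, &state);
    return state;   // bind with context->OMSetBlendState(state, nullptr, 0xffffffff)
}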

that will speed up the transform and texture stages, but i'm still sending 500 batches of 20 triangles.

i've done a bit of basic testing and it appears i'm cpu bound due to the high batch count and small batch sizes.

right now my approach to drawing most scenes is to assemble the scene from basic parts like ground quad, rock meshes 1 & 2, and plant meshes 1-4, then texturing, scaling, rotating, translating, and height mapping them, one quad, rock, and plant at a time.

i take it that two alternate approaches used are:

1. chunks: bigger meshes with entire sections of a level

2. dynamic buffers where the possibly visible mesh(es) are assembled on the fly (see the sketch just below)
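A rough Dx11-flavoured sketch of option 2 (all names made up): one big dynamic vertex buffer, refilled each frame with whatever passed frustum culling, using the usual NO_OVERWRITE-until-full / DISCARD-on-wrap pattern:

#include <cstring>
#include <d3d11.h>

struct Vertex { float pos[3]; float uv[2]; };

// One large dynamic vertex buffer shared by everything drawn this frame.
ID3D11Buffer* CreateDynamicVB(ID3D11Device* dev, UINT maxVerts)
{
    D3D11_BUFFER_DESC desc = {};
    desc.ByteWidth      = maxVerts * UINT(sizeof(Vertex));
    desc.Usage          = D3D11_USAGE_DYNAMIC;
    desc.BindFlags      = D3D11_BIND_VERTEX_BUFFER;
    desc.CPUAccessFlags = D3D11_CPU_ACCESS_WRITE;

    ID3D11Buffer* vb = nullptr;
    dev->CreateBuffer(&desc, nullptr, &vb);
    return vb;
}

// Append 'count' vertices and return the first vertex index to draw from.
UINT AppendVerts(ID3D11DeviceContext* ctx, ID3D11Buffer* vb, UINT maxVerts,
                 UINT& cursor, const Vertex* verts, UINT count)
{
    // NO_OVERWRITE while there's room; DISCARD (get a fresh buffer) when we wrap.
    D3D11_MAP mapType = D3D11_MAP_WRITE_NO_OVERWRITE;
    if (cursor + count > maxVerts) { cursor = 0; mapType = D3D11_MAP_WRITE_DISCARD; }

    D3D11_MAPPED_SUBRESOURCE mapped;
    ctx->Map(vb, 0, mapType, 0, &mapped);
    memcpy(static_cast<Vertex*>(mapped.pData) + cursor, verts, count * sizeof(Vertex));
    ctx->Unmap(vb, 0);

    UINT first = cursor;
    cursor += count;
    return first;   // then IASetVertexBuffers(...) and Draw(count, first)
}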

is it just me, or is it weird that what games want to do (draw lots of small meshes) is just what vidcards suck at?

or did they evolve with a specific type of game and way of doing graphics in mind? or was it another case of non-gamedevs doing what they thought might help, and be a way to make some $ at the game of making games?

overall, i'm looking for general solutions for basic graphics capabilities. stuff where i can build it, plop into the gamelib, and forget about it. and get back to building games, not components and modules.

but it does look like the time has come when i need to move on to a new way of doing things, if i want to have the level of scene complexity i want and probably need to be competitive in today's market.

i only sell in low/no competition markets. when you're the best or only one out there, you can get away with less than bleeding edge graphics. but things like applying a normal lighting equation and some simple scaled mipmap with CORRECT alpha test wouldn't be that big a deal. pretty much all of that i've done before or something similar.

so i guess i'd be looking for a generalized shader based approach for drawing indoor and outdoor scenes for games like shooters, fps/rpgs, and ground, air, and water vehicle sims.

at the GPU end, you want to set a texture, and draw a batch of all the triangles that use that texture and are at least partially in the frustum, then do the next texture, and so on, touching each texture exactly once. that's what the card likes the most, right?

the question is, what should the data look like on the game end for proper "care and feeding" of the GPU in such a manner. or if it's even 100% possible or practical to do so.
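One common shape for that data (purely illustrative, names made up): keep a per-frame list of visible draw items, sort them by texture so each texture gets bound exactly once, and draw each item's index range; packing all of a texture's triangles into one contiguous range is the further step that gets it down to literally one draw per texture:

#include <algorithm>
#include <vector>
#include <d3d11.h>

// A visible chunk of geometry: which texture it uses and where its indices live.
struct DrawItem
{
    ID3D11ShaderResourceView* texture;
    UINT indexCount;
    UINT startIndex;
};

void DrawSortedByTexture(ID3D11DeviceContext* ctx, std::vector<DrawItem>& items)
{
    // Sort so all items sharing a texture are adjacent.
    std::sort(items.begin(), items.end(),
              [](const DrawItem& a, const DrawItem& b) { return a.texture < b.texture; });

    ID3D11ShaderResourceView* current = nullptr;
    for (const DrawItem& item : items)
    {
        if (item.texture != current)            // bind each texture exactly once
        {
            current = item.texture;
            ctx->PSSetShaderResources(0, 1, &current);
        }
        ctx->DrawIndexed(item.indexCount, item.startIndex, 0);
    }
}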

i'm doing all this with drawing randomly generated levels and environments in mind. so pre-processed and hard coded data are sort of out of the question.

Norm Barrows

Rockland Software Productions

"Building PC games since 1989"

rocklandsoftware.net

PLAY CAVEMAN NOW!

http://rocklandsoftware.net/beta.php

This topic is closed to new replies.
