The effects framework is now open-source. I haven't looked at the code yet, but surely you could copy the relevant part that parses effect files?
EDIT: Oh, but the compiler is not open-source. In that case, the best I can think of is that you write your own parser that reads the file and generates sampler states. Painful.
Now actually, this makes me think. A lot of guns with M in the name are not product names but military designations, i.e. the US government takes a look at a gun manufactured by Colt, Bushmaster etc., decides it wants to buy some for the army, and renames it. Perhaps it's different if you're only using the army names:
M16, M4 - both refer to variants of the AR15
M82 / M107 - you may be familiar with the Barrett .50 cal from Modern Warfare 2?
M9 - adopted name of the Beretta M92F(s)
Then you get the 'X' variety, as in XM8, XM29.
RPG of course should be fine - it's just an acronym of Rocket Propelled Grenade (surely you can't be sued for this?)
I remember playing a football (soccer for you silly 'murricans) game on an old mobile phone once, and you could choose the 'real world' player to take free kicks with. I always wondered why Backhim, Owan and Zidann's names were all misspelled
Disclaimer: I don't know much about law, but I think the point about military designations should perhaps be considered by those in this thread who do
You can blow the limbs off a character and still draw him in one DrawIndexed call.
1. Give him a skeleton and assign the vertices to different bones.
2. Separate the elements of the mesh that you want blown off. Eg, make sure there are no faces connecting vertices from the arm to vertices of the shoulder etc.
3. When the arm gets shot (or chopped with an axe, samurai sword etc), mark the arm's skeleton joint (or bone) as "Separated"
4. When you collapse the bone transforms, any bones marked as "Separated" should have their world transforms calculated separately. Eg you could use a physical simulation to provide the transforms from then on.
I understand this is a lot of conceptually difficult stuff. The main point to take away from it is that you can easily have parts of a mesh use different world transforms - all you have to do is give each vertex a component that indexes the world transform from an array (preferably in the vertex shader). You can provide this array in several different ways - in D3D10 the options are a constant buffer, a shader resource buffer, a Texture1D, a Texture2D... Once you understand this technique there is a whole range of effects you can achieve with it - characters having their limbs separated, instancing, etc.
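As a minimal CPU-side sketch of the per-vertex transform index idea (the type and function names here are mine, not from any engine, and a real implementation would use full 4x4 matrices and do the lookup in the vertex shader):

```cpp
// Hypothetical minimal types -- a real engine would use 4x4 bone matrices.
struct Transform { float offsetX, offsetY, offsetZ; };

struct Vertex {
    float x, y, z;
    unsigned boneIndex; // indexes into the transform array, like BLENDINDICES in HLSL
};

// Apply each vertex's indexed transform -- the same lookup a vertex shader
// would do against a constant buffer / texture of bone transforms.
Vertex transformVertex(const Vertex& v, const Transform* bones)
{
    const Transform& t = bones[v.boneIndex];
    return { v.x + t.offsetX, v.y + t.offsetY, v.z + t.offsetZ, v.boneIndex };
}
```

When the arm is marked "Separated", only its entry in the `bones` array changes (e.g. driven by the physics sim); the mesh itself is untouched and still drawn in one call.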
Now if you're making a game with a Minecraft style world, instead of drawing every single block as a cube you can split the world into chunks - Minecraft uses 16x128x16 blocks per chunk. Then draw the chunks near the player using the instancing method - this allows you to create and destroy blocks near the player cheaply - and combine all blocks in further away chunks into smaller meshes.
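One small detail worth getting right in a chunked world is mapping a block coordinate to its chunk; a plain integer division breaks for negative coordinates, so you need a floor-division (a sketch, using the 16-block horizontal chunk size from the post):

```cpp
// Chunk dimensions from the post (Minecraft-style: 16 x 128 x 16 blocks).
constexpr int kChunkSizeX = 16;
constexpr int kChunkSizeZ = 16;

// Floor-divide so that negative block coordinates map to the correct chunk
// (plain '/' truncates toward zero, which puts block -1 in chunk 0 by mistake).
int chunkCoord(int block, int chunkSize)
{
    return (block >= 0) ? block / chunkSize
                        : (block - chunkSize + 1) / chunkSize;
}
```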
Well the first thing to do is cap the number of frames you show per second, because nobody gets any extra benefit from 2500 fps as opposed to 100 fps, and if you give your (or your player's) graphics card some breathing space it will run cooler and probably live longer.
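A simple way to cap the frame rate is to sleep off whatever is left of the frame's time budget (a sketch using std::chrono; a real game loop would also consider vsync and OS timer granularity):

```cpp
#include <chrono>
#include <thread>

// Sleep off the remainder of the frame so we never exceed targetFps.
void capFrameRate(std::chrono::steady_clock::time_point frameStart, int targetFps)
{
    using namespace std::chrono;
    const auto frameBudget = duration<double>(1.0 / targetFps);
    const auto elapsed = steady_clock::now() - frameStart;
    if (elapsed < frameBudget)
        std::this_thread::sleep_for(frameBudget - elapsed);
}
```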
The next thing to do is reduce the polygon count. Ideally I wouldn't go for more than 1 million vertices per frame, even with the most efficient state changing systems. 150k poly for a tree (!) is completely ridiculous, and unless it's a game about tree surgery, nobody's gonna notice the difference between 150k polys and 8k polys.
Next, the sample XNA draw model code looked terrible when I last saw it a few years ago. Whenever you change the current pixel shader (by changing the effect), the GPU has to finish drawing all models with the current shader before it can change, which causes a delay. The better approach is to draw everything that uses the same shader in one go, then switch to the next shader and repeat, changing the constants/parameters as you go. If you can then reduce the number of changes of texture, it will run even faster.
Lastly, group as many things as possible into one draw call. Nvidia recommends about 300 draw calls per frame in total (this might be a bit out of date though).
In most cases I tried to create a hierarchical model, so I have classes that interact directly with D3D (basic classes) and classes that only use those basic classes and do not interact with D3D. Is that what you mean?
Yes, that's what I meant. So the lower level of your hierarchy, which deals directly with D3D9, will have to be entirely re-written. Large parts of it may be similar in the D3D11 rewrite, but none of it will likely be reusable without modification.
I would recommend separating your D3D9 and D3D11 code via #ifdefs, and only compiling for one at a time (which means you've got two different EXEs - one for people on WinXP, and one for people on Win7). However, you could alternatively make your "wrapper" into abstract-base-classes with virtual methods if that's more attractive to you.
The good thing about the second option (virtual methods) is that I only have one .exe and I can decide whether I use D3D9 or D3D11 at run-time, right?
Actually, I was looking in the Crysis 2 installation folder (Crysis 2 lets you use both D3D9 and D3D11) and I found only one .exe (Crysis2.exe) and two DLLs: one for D3D9 (CryRenderD3D9.dll) and another for D3D11 (CryRenderD3D11.dll). So I suppose they're using something similar to the "virtual method" approach. Am I right?
Actually, if you think about it, this means they used the #ifdef approach, because there are 2 different DLLs. I expect this is because there is overhead with virtual functions (very, very small), and they want their graphics to run as quickly as possible.
Seeing as this is a game related website, I think a shader language should be on that list as well. But the way things are at the moment, the shader language that gets used is dependent on the graphics API...
Normalize - divides each component of the vector by the vector's total length (the length of a vector v = (x, y, z) is computed by sqrt(x*x + y*y + z*z), which can also be written as |v|). The length of the new vector is then equal to 1.
TransformCoord - transforms the vector using the transformation defined by the matrix argument. The w component is assumed to be equal to 1. Read about matrix maths if you aren't sure what this means.
Dot and Cross are two different ways of multiplying vectors together. Dot returns a single number, whereas Cross returns a vector
Dot - calculates the dot product of 2 vectors, which is the younger sibling of the cross product, mathematically speaking. If v1 = (a, b, c) and v2 = (d, e, f), then Dot(v1, v2) returns (a*d + b*e + c*f), the sum of the products of the components from each vector. This is also equal to |v1||v2|cos(t), where t is the angle between the two vectors in 3D space. Also notice that Dot(v1, v1) = |v1||v1|cos(0) = |v1|² - the dot product of a vector with itself is equal to its length squared; if you look at the two formulas this is easy to see. This is useful for many, many things.
Cross - calculates the cross product of 2 vectors. The returned vector is, as you said, perpendicular to the two vectors. If I set v3 = Cross(v1, v2), then the length of v3 is equal to: |v3| = |v1||v2|sin(t), where t is the angle between v1 and v2. Now if v1 and v2 are parallel then notice that t = 0, therefore the returned vector is (0, 0, 0).
Now the really interesting thing about Cross is that the answer is different if you pass in the two vectors in different orders. In other words, Cross(v1, v2) is not equal to Cross(v2, v1). (To be precise, Cross(v1, v2) = -Cross(v2, v1).) If you think about it, if v1 and v2 are (non-parallel) vectors, there are 2 possible unit directions v3 and v4 perpendicular to v1 and v2 - they are the negative version of each other (v3 = -v4). For example, if v1 = (1, 0, 0) and v2 = (0, 1, 0) then you can easily see that v3 = (0, 0, 1) is perpendicular to both v1 and v2, but also v4 = (0, 0, -1) is as well! The cross product of two vectors 'decides' which direction to return based on the order, so if you pass in the two arguments the other way around you will get the opposite direction.
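The three operations above are only a few lines each if you ever need to write them yourself (a sketch with a hypothetical minimal `Vec3` type, not D3DX's own implementation):

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

// Dot(v1, v2) = a*d + b*e + c*f = |v1||v2|cos(t)
float Dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// |v| = sqrt(Dot(v, v))
float Length(const Vec3& v) { return std::sqrt(Dot(v, v)); }

// Scale the vector so its length becomes 1.
Vec3 Normalize(const Vec3& v)
{
    float len = Length(v);
    return { v.x / len, v.y / len, v.z / len };
}

// Note the order dependence: Cross(a, b) == -Cross(b, a).
Vec3 Cross(const Vec3& a, const Vec3& b)
{
    return { a.y*b.z - a.z*b.y,
             a.z*b.x - a.x*b.z,
             a.x*b.y - a.y*b.x };
}
```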
Ok that was quite a lecture, if you still don't understand something then speak now or forever hold your peace
OK, I found the problem. My TCP implementation did not use blocking sockets - this version does, but did not check whether recvfrom had aborted to avoid blocking - which meant the receiving of messages was a pure experiment in probability. Without the call to ioctlsocket() the program runs lightning fast, as I would expect. Thanks for replying though!
Unfortunately, static members can be fiddly like this. Once that's in place, you need to pass the vertex elements to the CloneMesh function, making sure that you have initialised VertexCol::Decl:
D3DVERTEXELEMENT9 elements[MAX_FVF_DECL_SIZE]; // MAX_FVF_DECL_SIZE is the largest number of elements any vertex declaration can have
// fill 'elements' with the new declaration (e.g. copy it from VertexCol::Decl) before cloning
oldMesh->CloneMesh(0, elements, device, &newMesh); // instead of 0, consider using D3DXMESH_MANAGED as the first arg
// now you have the new mesh with the vertex colors; you can release the old one if you no longer need it