Thank you all for your answers!
My own engine uses interleaved vertex buffers.
An average mesh has about 1,300 vertices / 4,000 indices.
The program I reverse-engineered does not render shadows, so normals are always in use there.
After reading all the answers, my first thought was:
- Modify the content pipeline and the engine's mesh cache.
- Measure performance.
- Choose the best approach and remove the unused code.
It seems that separating the buffers makes a lot of sense in my case, because of shadows.
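To make the idea concrete, here is a minimal sketch of splitting a position-only stream out of an interleaved buffer at mesh-build time, so a depth-only shadow pass can bind the smaller buffer while the main pass keeps the full interleaved one. The `Vertex` layout and `ExtractPositions` name are hypothetical, just for illustration:

```cpp
#include <vector>

// Hypothetical interleaved layout: position (3 floats),
// normal (3 floats), UV (2 floats) -- 8 floats per vertex.
struct Vertex {
    float px, py, pz;
    float nx, ny, nz;
    float u, v;
};

// Build a tightly packed position-only stream from the interleaved
// buffer. A shadow/depth pass binds only this stream, so the GPU
// fetches 12 bytes per vertex instead of 32.
std::vector<float> ExtractPositions(const std::vector<Vertex>& verts) {
    std::vector<float> positions;
    positions.reserve(verts.size() * 3);
    for (const Vertex& v : verts) {
        positions.push_back(v.px);
        positions.push_back(v.py);
        positions.push_back(v.pz);
    }
    return positions;
}
```

Whether the reduced vertex-fetch bandwidth actually pays off would still have to be measured per GPU, which is exactly the problem below.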
But after thinking a while, I came up with more fundamental questions about optimizations:
Suppose I want to ship a game in 4-5 years.
I have no idea right now what the next AMD/NVIDIA/Intel architectures and their performance guidelines will be by then.
Measuring timings today and hard-coding the winning choice may well be outdated by release (for my GTX 660 it certainly will be).
And I am not talking only about this particular case, but about many others like it.
A similar story: a friend of mine remembered a bug in Visual Studio 5 (a very old version).
Seven years later, in a code review, he asked me to insert that workaround in VS2003, because he remembered that "this should be done this way".
Question 1: What is the best way to keep an optimization made in an engine up to date with current hardware and assumptions?
Maintaining two code paths for every optimization, even with #ifdef's, is not sexy enough :)
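One alternative to compile-time #ifdef's that keeps both paths alive is to benchmark the candidates on the actual machine at startup and pick the winner at runtime. A rough sketch (the `PickFaster` helper and the two toy kernels are made-up names, not from any real engine):

```cpp
#include <chrono>
#include <functional>
#include <numeric>
#include <vector>

using Kernel = std::function<long long(const std::vector<long long>&)>;

// Two interchangeable implementations of the same operation.
// Both must give identical results, so the engine is free to pick either.
long long SumLoop(const std::vector<long long>& v) {
    long long s = 0;
    for (long long x : v) s += x;
    return s;
}

long long SumAccumulate(const std::vector<long long>& v) {
    return std::accumulate(v.begin(), v.end(), 0LL);
}

// Time each candidate once on representative data and return the faster
// one. Run at engine startup (or on first launch, caching the result)
// instead of baking the choice in at compile time with #ifdef.
Kernel PickFaster(const std::vector<Kernel>& candidates,
                  const std::vector<long long>& sample) {
    Kernel best;
    auto bestTime = std::chrono::steady_clock::duration::max();
    for (const Kernel& k : candidates) {
        auto t0 = std::chrono::steady_clock::now();
        volatile long long sink = k(sample);  // keep the call from being optimized away
        (void)sink;
        auto dt = std::chrono::steady_clock::now() - t0;
        if (dt < bestTime) {
            bestTime = dt;
            best = k;
        }
    }
    return best;
}
```

In a real engine the "kernels" would be whole rendering strategies (e.g. interleaved vs. separated buffers), and a single timed run is too noisy to trust, but the shape of the approach is the same: keep both paths compiled, select per machine.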
Question 2: I plan to buy an i7-7700K + GTX 1070 and make that my reference platform, on the assumption that in 4-5 years it will be the average gaming PC.
I will make all optimizations for this platform (and probably put the GTX 660 in the second PCIe slot as the low-end video card).
Is this a good idea?