P.S. Yes, SlimDX will be supporting D3D 12. We're going to build a modern version of the library that is streamlined to D3D 12 plus utilities/math only, supporting all Win10/D3D 12 hardware platforms. It will be slick.
You don't need to master 3D art creation. You SHOULD learn to do it. It's part of the process, and it's myopic to imagine a game as a program blindly consuming art assets that appeared by magic. Understanding what is involved will give you greater perspective on how to design for it, what issues can be created or fixed by engine design decisions, and what possibilities there are for inputs to your game code. It's also quite useful to be able to create your own test assets that are built a particular way, especially when you and the artist aren't getting the tools and the game to match properly.
I need something like this too, but with mobile support (iOS at least). TBB unfortunately doesn't have official iOS support and I am wary of random ports. I looked at libdispatch last year, which isn't exactly a task system but includes enough of the right ingredients to finish the job. I should check and see how their various OS ports are doing.
I've usually used Doxygen in my own projects... but on every professional game team I've ever worked on, there's usually been no code documentation... and it's often not that bad!
Sometimes wikis are used, but they often become outdated, and wrong/outdated documentation is worse than none. Same goes for Doxygen/etc. comments...
Yep, same here. There's usually a wiki that drifts, and really only exists for the new hires to get some clue of how things were set up originally. After that, it's all about learning your way around the system.
Different for actual engines in regular use across wide groups of people, though, or other middleware projects. You don't get to skimp on the documentation there... for SlimDX we used the built-in VS doc system, run through Sandcastle with a custom front end bolted on. For pure C++ I'd just use Doxygen, but I'd only bother for public APIs.
Device Lost scenarios are a well-known phenomenon in Direct3D 9. They generally occur when you lose exclusive access to the device, such as when Alt+Tabbing out of full screen. If you're developing a D3D9 application, you need to specifically handle these cases by releasing resources, testing whether you have device access, and, when you get it back, resetting the device and reloading all GPU resources. If you want to go down this path, search the documentation for the proper way to handle device loss.
That process is at least as annoying as it sounds; you should consider whether you actually *need* Direct3D 9 support. You can support the same classes of hardware with the much newer Direct3D 11 API, which does not require handling of device loss and reset. The downside is that D3D11 is not supported on Windows XP, but since that OS is almost completely dead at this point, that's probably not a huge loss for you.
On WDDM systems (Vista+), a device loss can only be generated when the device driver crashes or the GPU resets. The OP mentioned using a six-year-old graphics driver. OTOH, VS 2012. Something's not right...
You don't need to do anything for C++, as DirectX is now part of the core Windows SDK (which ships with VS). Just #include <d3d11.h> and go. For .NET based languages you will need a wrapper such as SlimDX or SharpDX.
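For reference, the "include and go" part looks roughly like this. This is Windows-only and shown as a sketch, not a full app; the no-adapter/default-feature-level arguments are one reasonable set of choices, not the only ones.

```cpp
#include <d3d11.h>
#pragma comment(lib, "d3d11.lib") // or add d3d11.lib to the linker inputs

int main() {
    ID3D11Device* device = nullptr;
    ID3D11DeviceContext* context = nullptr;
    D3D_FEATURE_LEVEL level;

    // Create a hardware device with the default feature level list.
    HRESULT hr = D3D11CreateDevice(
        nullptr, D3D_DRIVER_TYPE_HARDWARE, nullptr, 0,
        nullptr, 0,                 // default feature levels
        D3D11_SDK_VERSION, &device, &level, &context);

    if (SUCCEEDED(hr)) {
        context->Release();
        device->Release();
    }
    return SUCCEEDED(hr) ? 0 : 1;
}
```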
Reading the API, I got a bit confused: do I need to internally change my image to have this RGBA structure, or would just setting the internal format to GL_RGBA and the format to GL_RGB suffice?
Yes - in fact some (most?) hardware will actually do best with GL_BGRA. But since this tends to be a guessing game and depends somewhat on the actual hardware in question, I prefer to simply send RGB and let the driver sort it out. Sometimes it'll quietly convert internally, and as long as you don't try to read the data back, it's no big deal. But if you're more comfortable aligning it yourself, use BGRA.
Another question I wonder about: what if I have grayscale data, i.e. just a single channel? Would it also be beneficial to convert that to an RGBA structure at the cost of more memory, or would I avoid any performance hit by keeping it as GL_RED, for example?
Nah, I prefer to just leave it as-is. The extra bandwidth costs you more than whatever alignment potentially gains, and the drivers are well versed in handling single-channel data.
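If it helps, a single-channel upload can stay single-channel all the way and still look like a four-component texture to shaders via a texture swizzle. This is a fragment, not a full program (it needs a GL 3.3+ context for GL_TEXTURE_SWIZZLE_RGBA); `w`, `h`, and `gray` are placeholders for your image data.

```cpp
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);        // rows are w * 1 bytes
glTexImage2D(GL_TEXTURE_2D, 0, GL_R8, w, h, 0,
             GL_RED, GL_UNSIGNED_BYTE, gray);

// Optional: have shaders sample (r, r, r, 1) without expanding the
// data on the CPU.
GLint swizzle[4] = { GL_RED, GL_RED, GL_RED, GL_ONE };
glTexParameteriv(GL_TEXTURE_2D, GL_TEXTURE_SWIZZLE_RGBA, swizzle);
```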
On the flip side, I am a bit concerned about sync issues. Sync between CPU and GPU (or even the GPU with itself) can lead to some really awful, hard-to-track-down bugs. It's bad because you might think you're doing it right, but then you make a small tweak to a shader and suddenly you have artifacts. It's hard enough dealing with that for one hardware configuration, so it's a little scary to imagine what could happen for PC games that have to run on everything. Hopefully there will be some good debugging/validation functionality available for tracking this down; otherwise we will probably end up with drivers automatically inserting sync points to prevent corruption (and/or removing unnecessary syncs for better performance). Either way, beginners are probably in for a rough time.
Don't worry, a variety of shipping professional games will somehow make a complete mess of it in their final builds too.