As you can see, the first and last results are almost the same. So my question is: why do we need sRGB backbuffers plus a modified final-output pixel shader if we can simply use a non-sRGB texture? The result is almost the same:
I'd like to point out that only your second image is "correct"; that is because you performed the "manual" conversion incorrectly. Maybe an older post of mine can clear up any lingering confusion. It's not complicated, but it trips people up often. To answer the question: the advantage of an sRGB renderbuffer is that it performs the actual linear-to-sRGB conversion, not a gamma approximation, and by using it instead of a pow() instruction you are less likely to make a mistake, as you did.
Erm, if the backbuffer is sRGB then you shouldn't be making ANY changes to the values you are writing out; you should be writing linear values and letting the hardware do the conversion to sRGB space when it writes the data.
The correct versions are either:
linear maths in shader => sRGB buffer
linear maths in shader => pow(2.2) => non-sRGB buffer 8bit/channel image
Anything else is wrong.
(Also, keep in mind sRGB isn't just a pow(2.2) curve; it has a toe at the low end to 'boost' the dark colours.)
That should be:
linear maths in shader => pow(1 / 2.2) => non-sRGB buffer 8bit/channel image
And that is only correct insofar as it is a close-ish approximation to sRGB.
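For reference, here's what the exact curve looks like next to the approximation; a minimal C++ sketch, with the constants taken from the sRGB specification (the function names are mine):

```cpp
#include <cmath>

// Exact linear -> sRGB encode per the sRGB spec: a linear "toe"
// segment below the cutoff, and a 2.4-exponent power curve above it.
float linear_to_srgb(float x) {
    if (x <= 0.0031308f)
        return 12.92f * x;
    return 1.055f * std::pow(x, 1.0f / 2.4f) - 0.055f;
}

// The common gamma approximation discussed above -- close overall,
// but it diverges most visibly in the darks, where the real curve
// is linear rather than a power function.
float linear_to_gamma22(float x) {
    return std::pow(x, 1.0f / 2.2f);
}
```

With an sRGB backbuffer (e.g. glEnable(GL_FRAMEBUFFER_SRGB) in OpenGL, or an *_SRGB swap-chain format in D3D11) the hardware applies the exact piecewise curve on write, which is precisely the advantage described above.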
After a quick search online I'm left with the impression that double-checked locking is not an issue for x86 or x86-64 and that it can be implemented safely (at a high level) in C++11
Right, if you *assume* that your high-level double-checked locking pattern code will never be compiled for a weakly-ordered system, it should work. But of course the trouble with high-level code is that any fool can unknowingly do just that, and then be subjected to strange and intermittent bugs. That's why it's an anti-pattern.
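For completeness, the commonly cited portable C++11 formulation uses std::atomic with acquire/release ordering, so it doesn't depend on x86's strong memory model at all. A minimal sketch (class and member names are illustrative):

```cpp
#include <atomic>
#include <mutex>

class Widget {
public:
    static Widget* instance();
private:
    static std::atomic<Widget*> s_instance;
    static std::mutex s_mutex;
};

std::atomic<Widget*> Widget::s_instance{nullptr};
std::mutex Widget::s_mutex;

Widget* Widget::instance() {
    // First (unlocked) check: the acquire load guarantees that if we
    // see a non-null pointer, we also see the fully constructed object,
    // even on weakly-ordered hardware (ARM, POWER), not just x86.
    Widget* p = s_instance.load(std::memory_order_acquire);
    if (p == nullptr) {
        std::lock_guard<std::mutex> lock(s_mutex);
        // Second check under the lock, in case another thread won the race.
        p = s_instance.load(std::memory_order_relaxed);
        if (p == nullptr) {
            p = new Widget();
            // Release store publishes the construction before the pointer.
            s_instance.store(p, std::memory_order_release);
        }
    }
    return p;
}
```

That said, for a plain singleton a function-local static (thread-safe since C++11) or std::call_once gives you the same guarantee with far less room for error.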
Each transistor on a processor consumes power while it's active -- a certain amount while open, less while closed. There's also energy consumed in switching, but even a transistor that isn't switching drains some power.
Are you sure about that? I thought transistors in a CMOS configuration use negligible energy when in a stable state.
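For what it's worth, the usual first-order model separates the two effects (a textbook approximation, not a vendor figure):

```latex
P_{\text{total}} \approx \underbrace{\alpha\, C\, V_{dd}^{2}\, f}_{\text{dynamic (switching)}}
                 + \underbrace{V_{dd}\, I_{\text{leak}}}_{\text{static (leakage)}}
```

where \alpha is the activity factor, C the switched capacitance, and f the clock frequency. You're both partly right: an ideal CMOS pair in a stable state draws essentially nothing, but real transistors leak, and at modern process nodes that static leakage term is far from negligible.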
Although something like 1366x768 is the average or most common screen resolution right now
My intuition would have been 1920x1080, and that is confirmed by Steam's hardware survey. More than a third of Steam users have a 1080p (or better) display. My guess is that the majority of the people with less than that fall into the category of casual gamers and are less likely to be playing the "good-looking PC games". 1080p monitors are so cheap and ubiquitous today that if you were going to buy or build a computer with the intention of playing video games, there would be no sense in getting anything less.
I was playing around with sprite rendering a few weeks ago. I used OpenGL, but the same technique should be supported by DirectX 11.
Basically I have 32 different sprite images, all the same format and resolution, so I create a 128x128x32 2D texture array. You only have to create the texture array once, as it should contain all the sprites you may want to use during any frame. Next I create a shader storage buffer (I think they're called UAVs in DX11, but a constant buffer could also work depending on the number of sprites). This buffer contains a transformation matrix to handle the position, scale, and rotation of each sprite, plus an integer that acts as an index into my texture array. All you have to do each frame is update this buffer. For rendering I use instancing, so that all the sprites can be drawn with a single draw call; in the shader you index into the memory buffer using the instance id.
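A minimal sketch of that setup in OpenGL/C++ (the names, the std430 layout, and the GL 4.3-era calls are my assumptions, not the poster's actual code):

```cpp
#include <GL/glew.h>
#include <cstdint>

// Per-sprite data, matching an std430 shader storage block. The int
// padding rounds the struct up to a 16-byte multiple, since std430
// aligns the array stride of a struct containing a mat4 to 16 bytes.
struct SpriteData {
    float   transform[16]; // column-major 4x4: position, scale, rotation
    int32_t layer;         // index into the 2D texture array
    int32_t pad[3];
};

GLuint createSpriteTextureArray() {
    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D_ARRAY, tex);
    // One allocation for all 32 sprites: 128x128, 32 layers, created once.
    glTexStorage3D(GL_TEXTURE_2D_ARRAY, 1, GL_RGBA8, 128, 128, 32);
    // Then, for each sprite image, upload it into its layer:
    // glTexSubImage3D(GL_TEXTURE_2D_ARRAY, 0, 0, 0, layer,
    //                 128, 128, 1, GL_RGBA, GL_UNSIGNED_BYTE, pixels);
    return tex;
}

void drawSprites(GLuint ssbo, const SpriteData* sprites, int count) {
    // Re-upload the per-sprite buffer each frame...
    glBindBuffer(GL_SHADER_STORAGE_BUFFER, ssbo);
    glBufferData(GL_SHADER_STORAGE_BUFFER, count * sizeof(SpriteData),
                 sprites, GL_DYNAMIC_DRAW);
    glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 0, ssbo);

    // ...then draw every sprite in one instanced call. This assumes a
    // vertex shader that expands a 4-vertex quad and indexes the buffer
    // with gl_InstanceID to fetch its transform and texture-array layer.
    glDrawArraysInstanced(GL_TRIANGLE_STRIP, 0, 4, count);
}
```

In D3D11 terms the rough equivalents would be a Texture2DArray, a structured buffer (bound as a read-only SRV rather than a UAV, since the shader only reads it) or a constant buffer, and DrawInstanced with SV_InstanceID in the shader.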
I can't answer that question, but I feel compelled to warn you about using NeHe. OpenGL has changed a lot over the years. NeHe should be thought of as historical information at this point. If your goal was to become a chemist, you probably wouldn't start with alchemy books.