God damn it, I had a super long post typed up and Chrome crashed because I hit CTRL+S.
You don't really see bitfields in desktop applications because there's no need to conserve memory. You'll find them in code that makes heavy use of bit masking (crypto?) or memory conservation (networking code?). I program microcontrollers for my day job and we make heavy use of bitfields.
[EDIT] Just to be clear, and as mentioned in the posts below: bitfield layout is implementation-defined, so this is not portable.
Here's an example. Imagine you had to pack a "move" instruction for a chess piece into as little space as possible, say because you want to transmit the move over a network and save bandwidth. You could encode this using a "from" and "to" coordinate. Seeing as a chess board is 8x8, each coordinate fits into 6 bits. You could write something like this:
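(A sketch; the struct and field names are mine, and as the [EDIT] above says, the exact layout is implementation-defined.)

#include <stdint.h>
#include <stdio.h>

/* A chess square index is 0..63, so 6 bits per coordinate suffice.
 * Both coordinates fit into a single 16-bit value.
 * Note: uint16_t as a bit-field type is a common compiler extension. */
struct chess_move {
    uint16_t from : 6; /* source square, 0..63 */
    uint16_t to   : 6; /* destination square, 0..63 */
};

int main(void)
{
    struct chess_move m;
    m.from = 12; /* e2 */
    m.to   = 28; /* e4 */
    printf("sizeof(struct chess_move) = %zu\n", sizeof m); /* usually 2 */
    return 0;
}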
Once that 64 byte buffer is loaded, anything you do within that 64 byte buffer is very nearly free. Thanks to the magic of the Out Of Order core and processor caches, doing one operation on one item in that block is very nearly the same clock time as doing one operation on sixteen items in that block.
Would it actually be beneficial to pack our variables more tightly together (use char/short where possible) in order to reduce block loads?
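A rough sketch of the idea (the types and field names are made up): narrowing the fields shrinks the struct, so more instances fit into a single 64-byte cache line. Whether that wins overall depends on alignment and access patterns.

#include <stdio.h>

/* four logical values stored in full-width ints */
struct loose {
    int flag;
    int mode;
    int counter;
    int id;
}; /* typically 16 bytes -> 4 per 64-byte cache line */

/* the same values in the narrowest types that still fit the data */
struct tight {
    unsigned char  flag;
    unsigned char  mode;
    unsigned short counter;
    unsigned short id;
}; /* typically 6 bytes -> 10 per 64-byte cache line */

int main(void)
{
    printf("loose: %zu bytes, tight: %zu bytes\n",
           sizeof(struct loose), sizeof(struct tight));
    return 0;
}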
It's a silly suggestion. As you've been saying over and over, it works fine with VS8 but not with VS10. Seeing as I know nothing of VS, I can only suggest trying to disable/enable options until it starts working.
If you scroll down to page 20 (printed page number 16), Chapter 5.1, getaddrinfo(), there's a complete example which does exactly what you asked for.
You may have to adjust a few things here and there to get it working on Windows (e.g. call WSAStartup() and WSACleanup() and include the right headers), but by and large it works the same way on Windows.
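A sketch of just those Windows adjustments (not the complete example from the guide; most error handling omitted, and you also need to link against ws2_32):

#ifdef _WIN32
#   include <winsock2.h>
#   include <ws2tcpip.h> /* getaddrinfo(), freeaddrinfo() */
#else
#   include <sys/types.h>
#   include <sys/socket.h>
#   include <netdb.h>
#endif
#include <stdio.h>
#include <string.h>

int main(void)
{
#ifdef _WIN32
    WSADATA wsa;
    if (WSAStartup(MAKEWORD(2, 2), &wsa) != 0)
        return 1; /* Winsock must be initialized before any socket call */
#endif

    struct addrinfo hints, *res;
    memset(&hints, 0, sizeof hints);
    hints.ai_family   = AF_UNSPEC;   /* IPv4 or IPv6 */
    hints.ai_socktype = SOCK_STREAM;

    if (getaddrinfo("www.example.com", "80", &hints, &res) == 0) {
        /* ... walk the result list, socket(), connect() ... */
        freeaddrinfo(res);
    }

#ifdef _WIN32
    WSACleanup();
#endif
    return 0;
}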
anyone who considers compiling on MinGW to be "ported to Windows" should be thrown out of a window
Can you elaborate some more on why you take this stance?
I've come to the conclusion that writing cross platform software is a lot easier for everyone if you drop support for the MSVC compiler. It's non-compliant in many areas and is a pain to configure properly. As such it only causes headaches and additional overhead when writing your build scripts.
I find it disappointing that porting an entire POSIX compliant environment and toolchain to Windows is easier to do than supporting the MSVC compiler, but that's how it is.
Just got the code from the PDF file I linked to compile and run using MSVC.
In this thread we share stories of some of the worst hotfixes we've seen and/or applied.
In a game I had to present, I was experiencing an extremely obscure bug where, after some time, pointers would randomly point to garbage values, crashing the game. I had an hour to get it working before the presentation.
I spent 45 minutes trying to reproduce it with no success. It happened at seemingly random times, but for some reason it was always the same two pointers that were modified.
Seeing as I was running out of time, I ended up inserting checks which would replace the garbage value (when it occurred) with the correct value again; I knew the correct value because I saw it in the debugger, and it seemed to remain consistent.
if(game->settings_doc != (void*)0x63e1b0)
    game->settings_doc = (void*)0x63e1b0; /* from debugger */
After the presentation I sat down with valgrind and found the problem. A buffer overrun was writing into memory it wasn't supposed to.
Also note that if you have more than one heap allocation in the initializer list and one of them fails (new throws std::bad_alloc), or you throw an exception in the constructor body, you've got yourself a nice memory leak, because the destructor of a partially constructed object is never called:
Test() : model1(new Model),
         model2(new Model) // this allocation fails
{}
~Test() // this will never get called
{
    if(model1) delete model1; // model1 is leaked
    if(model2) delete model2;
}
You're on the safe side if you use smart pointers.
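For instance (a sketch; Model here is a stand-in type): with std::unique_ptr members, if the second allocation throws, the already-constructed model1 is destroyed automatically, so nothing leaks and no manual destructor is needed.

#include <memory>

struct Model {};

class Test
{
public:
    Test()
        : model1(new Model)
        , model2(new Model) // if this throws, model1's destructor still
                            // runs and frees its Model
    {
    }
    // no user-defined destructor required

private:
    std::unique_ptr<Model> model1;
    std::unique_ptr<Model> model2;
};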