
formalproof

Member
  • Content count: 96
  • Joined
  • Last visited

Community Reputation: 165 Neutral

About formalproof

  • Rank: Member
  1. I recently managed to get a cheap copy of Charles Petzold's Programming Windows, 5th Edition (1998). I haven't really kept up with the times all that well, so my question is: are this book, and the Win32 API in general, still relevant and useful?

     I'm pretty sure that programs using the Win32 API described in the book are still compatible with modern versions of Windows. But has the API changed much since 1998? Will I get suboptimal performance or miss important operating system features if I use the API calls described in the book?

     The reason for my interest in the Win32 API is that I'd like to be able to program with a minimal amount of abstraction layers (such as MFC, .NET, or the standard C/C++ I/O libraries). That way, I get to know what my program is really doing under the hood. I am also involved in some research projects where speed and efficiency are critical, and I suspect I may have to write a very high-performance Windows program in the future. For this purpose, I suppose C and Win32 are still the way to go? (A minimal sketch of the kind of program in question appears after these posts.)
  2. OK, it seems I made an embarrassing mistake in understanding what little endian means. So in a little endian system, val = 0xAABBCCDD is stored in memory as:

         address:  0  1  2  3
         byte:    DD CC BB AA

     That would explain why vec[0] == 0xDD, etc. Thanks!
  3. I'm working on a project where I need to convert the bit pattern in an integer type to an array of chars (bytes), and vice versa. I'm wondering why this conversion always seems to "reverse" the order of the bytes. For example, when converting val = 0xAABBCCDD to an array of chars, the first element of the array will be 0xDD, the second 0xCC, etc., although I'm using a little endian system. To me, the logical order would be 0xAA, 0xBB, 0xCC, 0xDD, because that is the order in which the bytes of val are stored in memory, isn't it? Here is the code:

     #include <iostream>
     #include <vector>
     using namespace std;

     int main()
     {
         // Check endianness by inspecting the first byte of an int:
         // on a little endian system the low-order byte comes first.
         int one = 0x1;
         cout << "The system is "
              << ((*reinterpret_cast<char*>(&one) == 1) ? "little endian" : "big endian")
              << endl; // This will output "little endian" on my system

         vector<char> vec;
         vec.push_back(0xAA);
         vec.push_back(0xBB);
         vec.push_back(0xCC);
         vec.push_back(0xDD);
         // Note: this assumes sizeof(unsigned long) == 4; if it is
         // larger, the dereference reads past the vector's four bytes.
         unsigned long* ul = reinterpret_cast<unsigned long*>(&vec[0]);
         cout << hex << *ul << endl; // Why does this output 0xDDCCBBAA?

         unsigned long n = 0xAABBCCDD;
         vector<char> w;
         w.assign(reinterpret_cast<char*>(&n),
                  reinterpret_cast<char*>(&n) + sizeof(unsigned long));
         for (size_t i = 0; i < w.size(); i++)
         {
             // Cast so we print the bit pattern of w[i], not the character.
             cout << hex << (int)(unsigned char)w[i];
         }
         cout << endl; // This will output in the "reverse" order as well: 0xDDCCBBAA0000
     }

     So, can someone explain why the order of the bytes after conversion is 0xDDCCBBAA and not 0xAABBCCDD? By the way, if you know a better way to convert between an integer type and a vector of chars (one alternative is sketched after these posts), or notice any other mistakes in my code, please let me know.
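For reference alongside post 1, here is a minimal sketch of the classic Win32 skeleton that Petzold's book is built around: register a window class, create a window, and run the message loop. This is a generic illustration rather than an excerpt from the book; the class name "MinimalWindow" and the window title are made up for the sketch, and the ANSI (-A) entry points are used so it compiles regardless of UNICODE settings.

#include <windows.h>

// The window procedure: every message sent to the window lands here.
LRESULT CALLBACK WndProc(HWND hwnd, UINT msg, WPARAM wParam, LPARAM lParam)
{
    switch (msg)
    {
    case WM_DESTROY:        // the window was closed: stop the message loop
        PostQuitMessage(0);
        return 0;
    }
    return DefWindowProcA(hwnd, msg, wParam, lParam); // default handling
}

int WINAPI WinMain(HINSTANCE hInstance, HINSTANCE, LPSTR, int nCmdShow)
{
    // Register a window class that routes messages to WndProc.
    WNDCLASSA wc = {0};
    wc.lpfnWndProc   = WndProc;
    wc.hInstance     = hInstance;
    wc.lpszClassName = "MinimalWindow"; // hypothetical name for this sketch
    RegisterClassA(&wc);

    // Create and show a top-level window of that class.
    HWND hwnd = CreateWindowA("MinimalWindow", "Hello, Win32",
                              WS_OVERLAPPEDWINDOW,
                              CW_USEDEFAULT, CW_USEDEFAULT, 640, 480,
                              NULL, NULL, hInstance, NULL);
    ShowWindow(hwnd, nCmdShow);

    // The message loop: fetch, translate, and dispatch until WM_QUIT.
    MSG msg;
    while (GetMessageA(&msg, NULL, 0, 0) > 0)
    {
        TranslateMessage(&msg);
        DispatchMessageA(&msg);
    }
    return (int)msg.wParam;
}

This skeleton has been essentially stable since the book was written, which is one reason the 5th edition has aged as well as it has.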
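And for post 3, a sketch of two common alternatives for the integer/byte conversion, assuming nothing beyond standard C++. memcpy copies the object representation as-is, so it avoids the aliasing pitfalls of reinterpret_cast but still yields the machine's native byte order; explicit shifts yield a fixed order on any platform, which is usually what a file format or network protocol wants.

#include <cstring>
#include <iostream>
#include <vector>

int main()
{
    unsigned long n = 0xAABBCCDD;

    // Integer -> bytes with memcpy: the bytes come out in the
    // machine's native order (DD CC BB AA on a little endian system).
    std::vector<unsigned char> bytes(sizeof n);
    std::memcpy(&bytes[0], &n, sizeof n);

    // Bytes -> integer: the reverse copy restores the value exactly.
    unsigned long back = 0;
    std::memcpy(&back, &bytes[0], sizeof n);
    std::cout << std::hex << back << std::endl; // prints aabbccdd

    // For a fixed, platform-independent order (most significant
    // byte first here), extract the bytes with shifts instead.
    unsigned char msbFirst[4];
    msbFirst[0] = (unsigned char)((n >> 24) & 0xFF); // AA
    msbFirst[1] = (unsigned char)((n >> 16) & 0xFF); // BB
    msbFirst[2] = (unsigned char)((n >>  8) & 0xFF); // CC
    msbFirst[3] = (unsigned char)( n        & 0xFF); // DD
    for (int i = 0; i < 4; i++)
        std::cout << std::hex << (int)msbFirst[i];
    std::cout << std::endl; // prints aabbccdd, now in memory order too
}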