
# Need understanding of DWORD


## Recommended Posts

I'm trying to understand why Dx has DWORD instead of just a regular data type. With DWORDs being 32 bits, I would assume it's just acting like a float, but why? Why not just have a float? Is it something to do with DirectX being a COM interface? Are data types different sizes in different languages? If so then, eh, I'm so confused; someone please explain it to me.

##### Share on other sites
Quote:
 Are data types different size with different languages?

Almost.

Rather, the size of a data type depends on the system and environment. For example, there is no guarantee that an "int" will be the same size on every single system it is compiled for. This is a problem if we require data of a certain size.

##### Share on other sites
DWORD is a 32-bit unsigned integer. On 32-bit machines it's the same as unsigned long. Since the Windows OS runs on various platforms (x86, x86-64, PowerPC, ARM, MIPS, XScale, SH3 (the last few are for PocketPC with Windows Mobile)), and software such as Office once ran on 680x0 systems (old Macs), it's beneficial to define types that do what you expect, to ensure your code works as expected across a wide variety of platforms.

The DWORD name probably stems from Intel x86 assembly, which had BYTE, WORD, DWORD, FWORD, QWORD, etc.

##### Share on other sites
Thanks to both of you, my question was answered, and I'm well satisfied and content to start DirectX :)

##### Share on other sites
Quote:
 Original post by Namethatnobodyelsetook
DWORD is a 32 bit unsigned integer.

Sort of.

Quote:
 The DWORD name probably stems from Intel x86 assembly, which had BYTE, WORD, DWORD, FWORD, QWORD, etc.

Nope. x86 assembly has no types; everything's either an immediate integer value (literal), a register or a memory address.

DWORD simply stands for "double word." A word is the natural unit of computing used by a particular computer design. When Windows was originally developed, the system word size on Intel x86 processors was 16 bits. Consequently, WORD was defined as being 16 bits and DWORD as 32. Obviously, today's x86 machines have 32- and 64-bit word sizes, meaning that the DWORD definition of being 32 bits wide is wrong, or, more correctly, an anachronism.

Windows technologies are full of such anachronisms. WPARAM and LPARAM, for example.

##### Share on other sites
Quote:
 Original post by Oluseyi
Nope. x86 assembly has no types; everything's either an immediate integer value (literal), a register or a memory address.

Sure it does. When declaring data, or when indirectly accessing memory, the assembler needs to know how large an operation you're performing. For example, FWORD is a 16:32 segment-selector and offset pair. Without specifying the type, you could be pointing to a plain 32-bit address.

##### Share on other sites
Quote:
Original post by Oluseyi
Quote:
 Original post by Namethatnobodyelsetook
DWORD is a 32 bit unsigned integer.

Sort of.

Quote:
 The DWORD name probably stems from Intel x86 assembly, which had BYTE, WORD, DWORD, FWORD, QWORD, etc.

Nope. x86 assembly has no types; everything's either an immediate integer value (literal), a register or a memory address.

DWORD simply stands for "double word." A word is the natural unit of computing used by a particular computer design. When Windows was originally developed, the system word size on Intel x86 processors was 16 bits. Consequently, WORD was defined as being 16 bits and DWORD as 32. Obviously, today's x86 machines have 32- and 64-bit word sizes, meaning that the DWORD definition of being 32 bits wide is wrong, or, more correctly, an anachronism.

Windows technologies are full of such anachronisms. WPARAM and LPARAM, for example.

Thanks! :) I understand it now. My friends and I were having this argument at the lunch table today when I told them I was going to start learning DirectX. Oh, they refuted the statement, but finally realized why I got headaches every time they opened their mouths about how they want "their ultimate game" done one day. Yup, a good crack open of Introduction to 3D Game Programming with DirectX 9.0 by Frank Luna made them say, "Oh, I think we can just have a regular slash or cut instead of defining multiple angles, depths, and angles for a cut. Yes, a single cut will do just fine."

Ha, and now I understand Dx a little better. However, when clearing the screen or changing it to a different color, is it OK to have these hexadecimal color codes, or whatever they are, defined in a header? I mean, is there any way I can define things in Dx so it doesn't look like I'm in Hell trying to decipher a way out each night?

I mean like let's say

[sourcecode]#define black 0x000000[/sourcecode]is all well and good, but are there ways to simplify and ease Dx, and if so, what are they?

##### Share on other sites
MDX/SlimDX for the .NET languages is probably as simple as you're likely to get. DirectX is a low-level API and is the foundation for many other technologies (e.g. WPF) so you'll find it very hard to escape things like binary arithmetic and old C-style constants/enumerations.

If you really find that challenging then I'd posit that you might want to use middleware to deal with the actual multimedia implementation. There'd be nothing wrong with this - learning the gaming and multimedia concepts from a higher level language (I learnt it via VB6 [cool]) allows you to worry about the fiddly little details at a later date...

hth
Jack

##### Share on other sites
Well, I've had that option in my mind once or twice, but I remember when I was real young, I'm talking my first couple of months of middle school, my grandfather saw that I had taken apart the computer and tried to upgrade the RAM. However, not knowing the pins or the different types of RAM back in those days, I ended up making nothing instead, and he asked me, "Would you rather have a computer that works, or figure out how it works and why?"

When I program, I don't really program to make a game, for I've realized the odds are about 1/1000000000000000 that I'll make anything anyone would call excellent or unique and sell it. Sure, I've made my fun games, but I'm truly in it for the technical understanding.

How does this work, why does it work, and can I find some nifty loophole, or some programming trick or technique, that solves this problem more efficiently and makes it more exciting?

It's not that I don't understand color codes like 0x000000. Sure, I might not know what they're called right now, but I know they make color, and it makes a lot more damn sense to define some word in some header that replaces that junk.

##### Share on other sites
Quote:
Original post by Oluseyi
Quote:
 Original post by Namethatnobodyelsetook
DWORD is a 32 bit unsigned integer.

Sort of.

Quote:
 The DWORD name probably stems from Intel x86 assembly, which had BYTE, WORD, DWORD, FWORD, QWORD, etc.

Nope. x86 assembly has no types; everything's either an immediate integer value (literal), a register or a memory address.

DWORD simply stands for "double word." A word is the natural unit of computing used by a particular computer design. When Windows was originally developed, the system word size on Intel x86 processors was 16 bits. Consequently, WORD was defined as being 16 bits and DWORD as 32. Obviously, today's x86 machines have 32- and 64-bit word sizes, meaning that the DWORD definition of being 32 bits wide is wrong, or, more correctly, an anachronism.

Windows technologies are full of such anachronisms. WPARAM and LPARAM, for example.

Kind of. In the case of DWORD, its meaning is still correct. The word size in even the latest x86-based CPU architectures remains 16 bits, because these CPUs all maintain backwards compatibility.

Since DWORD is typedef'd as an unsigned long, it correctly remains a double word (32 bits) in both 32- and 64-bit applications, since Microsoft uses the LLP64 model, where int and long stay at 32 bits.