
### robert.leblanc

Posted 14 October 2012 - 02:54 PM

OK, so just to make sure I'm getting this right:

    D3DXCOLOR::operator UINT () const
    {
        UINT dwR = r >= 1.0f ? 0xff : r <= 0.0f ? 0x00 : (UINT)(r * 255.0f + 0.5f);
        UINT dwG = g >= 1.0f ? 0xff : g <= 0.0f ? 0x00 : (UINT)(g * 255.0f + 0.5f);
        UINT dwB = b >= 1.0f ? 0xff : b <= 0.0f ? 0x00 : (UINT)(b * 255.0f + 0.5f);
        UINT dwA = a >= 1.0f ? 0xff : a <= 0.0f ? 0x00 : (UINT)(a * 255.0f + 0.5f);
        return (dwA << 24) | (dwR << 16) | (dwG << 8) | (dwB << 0);
    }


The return statement essentially produces a 32-bit value that looks like:

    | --byte1=Alpha-- | --byte2=Red-- | --byte3=Green-- | --byte4=Blue-- |

Given that x86 Intel machines are little-endian, does it make sense to think of this as:

    | --byte1=Alpha-- | --byte2=Red-- | --byte3=Green-- | --byte4=Blue-- |
    LOW ------------------------------------------------------------ HIGH

-----------------------------------------------------------------------------------------
The function that I used was given by Luna as:

    D3DX10INLINE UINT ARGB2ABGR(UINT argb)
    {
        BYTE A = (argb >> 24) & 0xff;
        BYTE R = (argb >> 16) & 0xff;
        BYTE G = (argb >>  8) & 0xff;
        BYTE B = (argb >>  0) & 0xff;
        return (A << 24) | (B << 16) | (G << 8) | (R << 0);
    }


Which to me seems to imply:

    | --byte1=Alpha-- | --byte2=Blue-- | --byte3=Green-- | --byte4=Red-- |
    LOW ------------------------------------------------------------ HIGH

This works, but my understanding of the endianness is obviously incorrect, because the format I am using is:
DXGI_FORMAT_R8G8B8A8_UNORM

So the expected byte order is Red, Green, Blue, Alpha, which makes sense if you read my diagram above in the opposite direction. To me it looks like the bytes are ordered backwards in memory (ABGR); maybe I'm misunderstanding the shifting operation.

Does (A << 24) | (B << 16) | (G << 8) | (R << 0) create something like the following?

    AAAAAAAA000000000000000000000000  (A bits shifted 24 to the left) OR'd with
    00000000BBBBBBBB0000000000000000  (B bits shifted 16 to the left) OR'd with
    0000000000000000GGGGGGGG00000000  (G bits shifted  8 to the left) OR'd with
    000000000000000000000000RRRRRRRR  (R bits shifted  0 to the left), which results in
    --------------------------------
    AAAAAAAABBBBBBBBGGGGGGGGRRRRRRRR

Is it just a convention that I should start at the far right and call that byte 1, or am I missing something else?
Also, does the + 0.5f in (r * 255.0f + 0.5f) and the like cause rounding upwards, or is it doing something else?
