
## Recommended Posts

Is there a reasonably fast way to copy N dwords from one memory location to another, truncating them to N words? So say we copied 2048 dwords = 8192 bytes from memory location a; memory location b would then contain 2048 words = 4096 bytes (the low 16 bits of each dword, actually, though I don't think that detail matters in itself). And vice versa: copying 2048 words into 2048 dwords, padding with 0s. (The real numbers are more in the millions, but 2048 is easier to read, of course.) ??? Thanks -Scott

##### Share on other sites
In what programming language? On what platform? Are you using traditional CS meanings of word/dword or an API typedef meaning?

##### Share on other sites
Oh... right, sorry. Win32 C++. And those were just an example. Basically I might have an array with elements of an arbitrary bit depth, 32 bits for example, as a source, and another array with elements of a different arbitrary bit depth, 16 bits for example, as a destination (the depths could also be equal, in which case your favorite memcpy method works). When copying from the source to the destination, the 32-bit elements need to be truncated. Being arbitrary, it could be the other way around: the 16-bit array might be the source, in which case the 32-bit array elements would be zero-padded. Regardless... is there a reasonably fast way to do the first case (larger elements to smaller elements with truncation)?

For bonus points, since I imagine they would be somewhat different, what about the second case (smaller to larger)?

Cheers
-Scott

##### Share on other sites
A simple for() loop reading from and writing to pointers, casting int to short (or the other way around) is reasonably fast in C++.
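For concreteness, a minimal sketch of what such loops could look like (function names are made up for illustration, not the poster's code):

```cpp
#include <cstddef>
#include <cstdint>

// Narrow: copy n 32-bit elements into 16-bit elements, keeping the low 16 bits.
void narrow_copy(const std::uint32_t* src, std::uint16_t* dst, std::size_t n) {
    for (std::size_t i = 0; i < n; ++i)
        dst[i] = static_cast<std::uint16_t>(src[i]); // truncates to the low 16 bits
}

// Widen: copy n 16-bit elements into 32-bit elements, zero-padding the high bits.
void widen_copy(const std::uint16_t* src, std::uint32_t* dst, std::size_t n) {
    for (std::size_t i = 0; i < n; ++i)
        dst[i] = src[i]; // unsigned assignment zero-extends
}
```

A decent optimizer will typically vectorize loops this simple on its own, which is another reason to try them first.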

If it is still not fast enough, and your processor supports SSE2, and you have both arrays aligned to 16 bytes, you can do two aligned fetches, two shuffles, and a non-temporal store to truncate 32 to 16, and you could use PUNPCKHWD/PUNPCKLWD instead of shuffles for the other way around. Also, don't forget prefetching on the read side.
This obviously isn't "real" C++ (nor very portable) any more, but it will be faster.

##### Share on other sites
Interesting... yeah it's win32 specific software, and sse2 is good. Also my arrays are 16-byte (octword???) aligned because I knew I would be getting myself into sse2 stuff eventually :) I just don't wanna spend time with it except where most beneficial. This will have to be one of those cases though...

Sounds like some research but I can probably google the topics you spoke of... but I'm still open to any other thoughts! Thanks

-Scott

##### Share on other sites
You should first try if the simple for() loop is fast enough, though. Most likely, it is.

SSE is all good and cool, but intrinsics or inline assembly aren't nearly as easy to read, debug, maintain, or pretty much anything.

##### Share on other sites
For the specific examples, you can use SSE2 to pack two registers of four i32 each into eight i16 and blast those out to memory in one instruction. [It is probably better to unroll 4x and thus fill the write-combining buffer.] The opposite is also easily possible.

Note the integer types, though - only very new (as in this year) CPUs have pack instructions that avoid the signed 16-bit saturation.

Anyway, you still haven't given enough information. Of course it matters WHICH and HOW MANY bits you want to truncate (e.g. applicability of the SSE pack instructions, whether saturation is OK). And what are the requirements on "reasonably fast"?

##### Share on other sites
I would keep the low bits. Let's assume everything is unsigned too. So if one of the elements was 0xff003311, the output would be 0x3311. Etc. Right now I've got a super-super-general copy just to get the algorithm working. I will probably need a few special-case algorithms... but this one always works, since these Intel CPUs are little-endian:
```cpp
void ImageBlitter::CopyGeneric_(UInt32 SrcX, UInt32 SrcY, UInt32 Width, UInt32 Height,
                                Image &DestImageObj, UInt32 DestX, UInt32 DestY) {
  // Get some byte counts
  UInt8 CopyBitCount = min(DestImageObj.GetChannelDepth(), ImageObj_->GetChannelDepth());
  UInt8 CopyByteCount = CopyBitCount / 8;
  UInt32 SkipByteCountIn = (ImageObj_->GetChannelDepth() - CopyBitCount) / 8;
  UInt32 SkipByteCountOut = (DestImageObj.GetChannelDepth() - CopyBitCount) / 8;
  UInt32 NextLineBytesIn = (ImageObj_->GetBPP() / 8) * (ImageObj_->GetWidth() - Width);
  UInt32 NextLineBytesOut = (DestImageObj.GetBPP() / 8) * (DestImageObj.GetWidth() - Width);
  // [Get our UInt8 *SrcPtr and UInt8 *DestPtr...]
  // Do the super-general, always-works, but pretty slow copy.
  for (UInt32 YI = SrcY; YI < Height + SrcY; YI++) {
    for (UInt32 XI = SrcX; XI < Width + SrcX; XI++) {
      for (UInt32 Bytes = 0; Bytes < CopyByteCount; Bytes++) {
        *DestPtr++ = *SrcPtr++;
      }
      SrcPtr += SkipByteCountIn;
      DestPtr += SkipByteCountOut;
    }
    SrcPtr += NextLineBytesIn;
    DestPtr += NextLineBytesOut;
  }
}
```

(EDIT - just copied the whole function for clarity's sake...)

So yeah... just looking for some ideas on the matter. I didn't want to waste time looking into one method if another was more appropriate. I realize most optimized versions will need special cases, but there aren't that many supported bit depths. This is one of the most important and widely used functions, though, so it must go faster than it currently does (meaning anything faster = reasonably fast).

Thanks
-Scott

##### Share on other sites
std::copy should be up to the task. If not, try std::transform, using a unary operation which performs a static_cast and returns the result.

But.

Why do you assume your existing copy method is too slow for you?

And for that matter, why do you need to do this copying yourself? "Image blitting" sounds like a library task.

##### Share on other sites
:)
Matrox changed their licensing terms and their lib doesn't contain some of the stuff we are starting to need anyway... (like support for >2GB images on 32 bit hardware)... nor leadtools...
I assume it's too slow because the Matrox lib does it faster... and that lib is like 10 years old or something. I know they use MMX optimizations where possible, and I figured there might be some good ideas around here before delving into it myself.

And yeah... I had wondered if the std library might suffice... I'll have to try it too.

Cheers
-Scott
