# C++ Xor contents of an object with virtual function

## Recommended Posts

```cpp
#include <iostream>

struct Base
{
    virtual void func() = 0;
};

struct A : Base
{
    void func() override
    {
        std::cout << "a: " << a << "\n";
    }
    int a;
};

int main()
{
    A a;
    a.a = 1;
    A a2 = a;

    A a3;
    char * ap  = reinterpret_cast<char*>(&a);
    char * ap2 = reinterpret_cast<char*>(&a2);
    char * ap3 = reinterpret_cast<char*>(&a3);

    for (size_t i = 0; i < sizeof(A); ++i)
        *(ap3 + i) = *(ap + i) ^ *(ap2 + i);

    Base * b = &a3;
    b->func(); // crashes: the identical vptrs xor'd to zero
}
```

I want to bitwise-XOR the contents of an object that has virtual functions, as above.

But I get a crash in the above scenario, I think because I xor'd the vptr too.

Is there a way to achieve what I want?

Edited by lride

##### Share on other sites

What are you trying to accomplish with this?

##### Share on other sites

I want to be able to call a virtual function on the xor'd object.

The reason is I'm trying to implement delta compression for compressing snapshots for a multiplayer game.

I need to xor the contents of two objects and then call a virtual function to pack the bits.

##### Share on other sites

I think you need to answer, for us and for yourself: what does an XOR of the bits implementing an object in memory have to do with delta-compressing the concepts and values which the object contains?

You don't measure the difference between two 2D vectors implemented by `struct Vector { double x, y; }` by taking the XOR of the bits which implement the two IEEE 754 double-precision floating-point values... you calculate a new vector whose components are the differences between the components of the two vectors.

Edited by Wyrframe
typo

##### Share on other sites

This is how I do delta compression:

All of the objects' information is represented as integers.

I xor the two objects and get lots of 0s. Then I do run-length encoding.

I think I can solve this problem by adding virtual `void * data()` and virtual `int size()` methods.

##### Share on other sites

C++ objects are not just a pile of bits - you can't just cast them to another type and do bitwise operations on them. This is undefined behaviour. You may be able to get it to work by accounting for all the details such as vtables, but it will still be undefined behaviour and your compiler will still be allowed to fuck you over.

Either use plain old data structures that can safely be bitwise-modified, or implement a "delta compare" virtual function.

##### Share on other sites
```cpp
#include <functional>
#include <iostream>
#include <vector>

struct BaseNetObj
{
    std::vector<char> data;
    std::function<void()> func;
};

struct NetPerson
{
    void say()
    {
        std::cout << a << b << c << "\n";
    }
    int a;
    int b;
    int c;
};

template <typename T>
BaseNetObj makeBase()
{
    BaseNetObj obj;
    obj.data.resize(sizeof(T));
    // Bind to the vector's heap buffer; moving BaseNetObj keeps the buffer
    // alive, but copying it would leave this pointer dangling.
    obj.func = std::bind(&T::say, (T*)obj.data.data());
    return obj;
}

int main()
{
    BaseNetObj obj0 = makeBase<NetPerson>();
    BaseNetObj obj1 = makeBase<NetPerson>();
    BaseNetObj obj2 = makeBase<NetPerson>();

    NetPerson * p0 = (NetPerson*)obj0.data.data();
    p0->a = p0->b = p0->c = 1;

    NetPerson * p1 = (NetPerson*)obj1.data.data();
    p1->a = p1->b = p1->c = 1;

    obj0.func(); // 111
    obj1.func(); // 111

    for (size_t i = 0; i < obj0.data.size(); ++i)
    {
        obj2.data[i] = obj0.data[i] ^ obj1.data[i];
    }
    obj2.func(); // 000
}
```

I think the above should do it. No inheritance, no virtual functions. The base class holds the raw data in a vector, and I can still emulate polymorphism with std::function.

##### Share on other sites

Again: what are you trying to accomplish? Sure, that code probably compiles, but what problem is it solving?

##### Share on other sites

The claim above was that it was an attempt to improve compression of delta encoding.

The only thing I can come up with, based on the talk of xor'ing delta-encoded snapshots, is that he's trying to use a frame-of-reference encoding like the ones used in some video and audio formats, where you've got a small set of integer values with only slight variation and a limited number of available bits. You can isolate the number of bits actually used, apply zig-zag encoding to combine the bits, and xor in the zig-zag to avoid subtractions with negative numbers, adding the xor'ed value instead.

The other reason I suspect that's the purpose is that Google's "protocol buffers" uses that as a tiny piece of their encoding. Since their protocol buffer encoding is rapidly becoming more popular, people read about it and then attempt to implement it without fully understanding the details.

Or it could be some other reason that he's trying to do it, something else about delta-encoded snapshots that doesn't immediately come to mind.

Anyway, for a situation like the one described, the best approach is probably for the objects themselves to implement a delta system, and to use a good bit-packer library to encode the values.  Thus a system could say there are zero bits different, or store a 2-bit pattern if it is different within 4, or store a 3-bit pattern if it is different within 8, etc., so small changes require even fewer bytes in the stream.  That stream can then be compressed with any of the major general-purpose stream-based compression libraries like gzip or zlib or similar.
