ZLib compression from UE4 to Boost?


Hello,

I am sending compressed JSON data from a UE4 client to a C++ server built with Boost.
I am using ZLib to compress and decompress all of the JSON, but it doesn't work. I am now encoding the data in base64 to avoid some issues, but that doesn't change anything.

For now I have stopped trying to send the data over the network; instead I write it to a file from the client, then read the file and decompress it on the server side.
When the server tries to decompress it, I get an error from ZLib: zlib error: iostream error.

My question is the following: has anyone managed to compress and decompress data between a UE4 client and a C++ server?
I cannot really configure anything on the server side (Boost provides its own ZLib compressor), and I don't know what is wrong with the decompression.

Any ideas?

rXp

Edited by rip-off
[mod edit: please do not mark posts as resolved]


The problem is not with zlib, and not with UE4; you are quite likely using them wrong. And why would you base64-encode anything when you're using raw C++ Boost sockets?

Without knowing exactly how you're using it, and exactly what problems you're seeing, there's no way to give you better help.

Print out how many bytes you send in each "go" and when you send them.

Print out how many bytes are received in each "go" and when they are received.

Correlate with failures in decompression. For example, do you initialize the zlib stream for each thing you send? If so, how do you separate each individual packet sent; is there a length prefix field of some sort that you send?

There are so many things that could be going wrong, that you need to take a step back, and figure out each individual small piece of what you want to do, and then do only that one thing. Verify it works. Then add one more thing, and verify that the combination works. If it doesn't, debug that one thing. Repeat, until you've gotten to where you want to go.

 


I know my packet sending/receiving is not the problem, since everything is transferred fine when I don't compress.
I just have the size as a uint32 header and the rest is the message itself.
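
For illustration, a minimal sketch of that framing (a uint32 size header followed by the payload), assuming a blocking boost::asio TCP socket; these are hypothetical helper names, not the project's actual code:

#include <cstdint>
#include <vector>
#include <boost/asio.hpp>

// Send one framed message: a uint32 size header followed by the payload bytes.
// (Both ends here run on the same machine, so the prefix uses host byte order;
//  a cross-platform protocol would pin the byte order explicitly.)
void sendMessage(boost::asio::ip::tcp::socket &socket, const std::vector<char> &payload)
{
    uint32_t size = static_cast<uint32_t>(payload.size());
    boost::asio::write(socket, boost::asio::buffer(&size, sizeof(size)));
    boost::asio::write(socket, boost::asio::buffer(payload));
}

// Receive one framed message: read the size header, then exactly that many payload bytes.
std::vector<char> receiveMessage(boost::asio::ip::tcp::socket &socket)
{
    uint32_t size = 0;
    boost::asio::read(socket, boost::asio::buffer(&size, sizeof(size)));
    std::vector<char> payload(size);
    boost::asio::read(socket, boost::asio::buffer(payload));
    return payload;
}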

When a message is fully received it is handed to my MessageManager to be consumed and decompressed. I tested every part of the client and the server; if I disable the ZLib part, everything works.

Server-side compression (the decompression is pretty similar: use zlib_decompressor and read from the right place, etc.):

void MessageManager::compressString(const std::string &data, std::vector<char> &compressedData)
{
    std::stringstream compressed;
    std::stringstream decompressed(data);

    // Run the input through Boost's zlib compressor filter.
    boost::iostreams::filtering_streambuf<boost::iostreams::input> in;
    in.push(boost::iostreams::zlib_compressor());
    in.push(decompressed);
    boost::iostreams::copy(in, compressed);

    // Base64-encode the compressed bytes (only for the file-based tests) and copy them out.
    std::string str = encode64(compressed.str());
    compressedData.assign(str.begin(), str.end());
}

Client-side compression (the decompression code is easy to infer from this):

void MessageManager::compressString(const FString &json, TArray<uint8> &compressedData)
{
	// Serialize the FString into a byte array...
	FBufferArchive binaryArrayArchive;
	FString strData = FString(json);
	binaryArrayArchive << strData;
	TArray<uint8> dataArray(binaryArrayArchive.GetData(), binaryArrayArchive.GetAllocatedSize());
	// ...then run it through UE4's ZLib proxy archive.
	FArchiveSaveCompressedProxy compressor =
		FArchiveSaveCompressedProxy(compressedData, ECompressionFlags::COMPRESS_ZLIB);
	compressor << dataArray;
	compressor.Flush();
	compressor.FlushCache();
}
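
For reference, the client-side decompression hinted at above would look roughly like this; a sketch only, assuming UE4's FArchiveLoadCompressedProxy and FMemoryReader (this code is not from the original post):

FString MessageManager::decompressString(const TArray<uint8> &compressedData)
{
	// Feed the compressed bytes through UE4's ZLib proxy archive.
	FArchiveLoadCompressedProxy decompressor(compressedData, ECompressionFlags::COMPRESS_ZLIB);
	TArray<uint8> dataArray;
	decompressor << dataArray;

	// Read the FString back out of the decompressed byte array.
	FMemoryReader reader(dataArray, true);
	FString strData;
	reader << strData;
	return strData;
}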

If I compress with the server I can decompress with the server, and the same holds for the client.

So now I am starting to read the UE4 and Boost source code to see what default parameters they pass to ZLib; they might differ a lot. I'm also looking into UE4's FString, because std::stringstream might not support FString's encoding.

Oh, and yes, the base64 encoding is not necessary at all, but I did it since I am testing through files and wanted to be able to check that every character is written to the file.



You should take out the base64 again -- it serves no purpose.

You should also put breakpoints in the debugger on the sender, and check the length/data actually passed to the socket sending function.

Then you should put a breakpoint in the debugger on the receiver, and check the length/data actually received from the socket.

My guess is that you're using the stringstreams or compressors wrong somehow, although it may also be something like the decompressor not getting a big enough working buffer.

Also, when you say decompression "doesn't work," what actually happens? Do you get corrupt data? Do you get too little, or too much data? Does it throw an exception? Crash?


It is usually wrong in C/C++ to use a string as the target for binary data, because strings are often treated as ending at the first occurrence of '\0', while binary data may continue after that first 0. I see the target being a stringstream, and I know that zlib output may contain 0 bytes, so I expect problems for exactly that reason. The base64 encoding, besides being needless overhead as hplus0603 has already written, does not change anything, because it is applied after the mishap has happened.

I've never used exactly that stuff, so I may be wrong, but I'd definitely try to avoid strings for binary data. Couldn't you copy from the filtering_streambuf to the vector directly?
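
To illustrate that last suggestion (a sketch under the assumption that Boost.Iostreams' back_inserter device is acceptable here; this is not code from the thread):

#include <string>
#include <sstream>
#include <vector>
#include <boost/iostreams/copy.hpp>
#include <boost/iostreams/device/back_inserter.hpp>
#include <boost/iostreams/filter/zlib.hpp>
#include <boost/iostreams/filtering_streambuf.hpp>

// Compress `data` and append the raw compressed bytes straight into `out`,
// without ever storing the binary result in a std::string.
void compressToVector(const std::string &data, std::vector<char> &out)
{
    std::stringstream source(data);
    boost::iostreams::filtering_streambuf<boost::iostreams::input> in;
    in.push(boost::iostreams::zlib_compressor());
    in.push(source);
    boost::iostreams::copy(in, boost::iostreams::back_inserter(out));
}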


On the server side I get "zlib error: iostream error" and the thread stops. It seems to be an iostream error, and it only happens when the compressed data comes from the client. The client says the header is wrong.

I agree with that, and I removed it, but std::string can contain '\0'; it is not delimited by it (although I will look into putting the data in a char vector).

I already checked the size on the client and on the server, and I checked the byte array data to verify that the bytes are the same (they are).

The problem is that the two headers are different. The header the server produces when compressing matches what the ZLib RFC (RFC 1950) describes.

But when the client compresses, the header does not correspond to what it should be; the first two bytes are C1 83.
It is not an endianness problem, since both are running on the same machine, and the swapped 1C 38 would not correspond to the RFC either.
I am looking at how UE4 compresses when it uses ZLib, or I will import ZLib myself and call it directly.
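
For background (my addition, not from the thread): per RFC 1950 the first byte is CMF, whose low nibble must be 8 (deflate), the second byte is FLG, and the pair taken as CMF*256 + FLG must be a multiple of 31; output from zlib's default settings typically starts with 78 9C. A quick check, as a sketch:

#include <cstdint>

// Rough validity check for the two-byte zlib header described in RFC 1950 (sketch).
bool looksLikeZlibHeader(uint8_t cmf, uint8_t flg)
{
    const bool methodIsDeflate = (cmf & 0x0F) == 8;            // CM field must be 8 (deflate)
    const bool checkBitsOk = ((cmf * 256u + flg) % 31u) == 0;  // FCHECK makes the pair a multiple of 31
    return methodIsDeflate && checkBitsOk;
}

// Example: 0x78, 0x9C passes; the 0xC1, 0x83 seen from the client fails both tests.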


Ah! When you said you were using Boost's zlib, I thought you were using it on both sides.

If you use two different library implementations, then, yes, it's totally possible that they make different choices about how to encode the data.

For all we know, perhaps Unreal had a bug ten years ago, there's a lot of data they have to stay compatible with, and thus they haven't fixed it. The best place to find insight into that would be the Unreal developer forums.


Well, I got it to work by linking and using the raw zlib library in the UE4 client. I also had to encode into ANSI characters so that my server can convert the data into a std::string (I will need to look into UTF-8 on the server, but not for now, since I don't send any text messages yet).

For those interested, here is my solution for the compression/decompression; you can tweak it from there.

Client (UE4):

First of all, build the zlib library. I chose the same version as the one shipped with my current version of UE4 (4.18), so I used zlib 1.2.8.

I built it manually on Windows, but since it uses CMake it won't be much different on Linux or macOS:
 

mkdir C:\Builds\zlib; cd C:\Builds\zlib
cmake -G "Visual Studio 15 2017" -A x64 C:\local\zlib-x.x.x
cmake --build .

Then you need to tell UE4 to link it into your project (in *.Build.cs):
 

PublicAdditionalLibraries.Add(@"PATHTOPROJECT/Binaries/Win64/zlibd.lib");

(if anyone knows how to get the path to the project dynamically, that would be nice)

Compress on the client:

void MessageManager::compressString(const FString &json, TArray<uint8> &compressedData)
{
	// Convert to ANSI for now, so the server can cast the data to a std::string.
	auto jsonANSI = StringCast<ANSICHAR>(*json);
	TArray<uint8> UncompressedBinaryArray((uint8*)jsonANSI.Get(), jsonANSI.Length());

	// Generously over-allocate the output buffer; it is shrunk to the real size below.
	compressedData.SetNum(UncompressedBinaryArray.Num() * 1023, true);

	z_stream strm;
	strm.zalloc = Z_NULL;
	strm.zfree = Z_NULL;
	strm.opaque = Z_NULL;

	strm.avail_in = UncompressedBinaryArray.Num();
	strm.next_in = (Bytef *)UncompressedBinaryArray.GetData();
	strm.avail_out = compressedData.Num();
	strm.next_out = (Bytef *)compressedData.GetData();

	// The actual compression work.
	deflateInit(&strm, Z_DEFAULT_COMPRESSION);
	deflate(&strm, Z_FINISH);
	deflateEnd(&strm);

	// Shrink the array to the actual compressed size.
	compressedData.RemoveAt(strm.total_out, compressedData.Num() - strm.total_out, true);
}
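
As a side note (my sketch, not part of the original solution): raw zlib's compressBound() gives a much tighter output-buffer size than Num() * 1023, and the return codes can be checked; for one-shot compression, compress2() wraps the deflate calls:

#include "zlib.h"

// Sketch: size the destination with compressBound() and check the zlib return code.
bool CompressBuffer(const TArray<uint8> &src, TArray<uint8> &dst)
{
	uLong bound = compressBound((uLong)src.Num());
	dst.SetNum((int32)bound, true);

	uLongf dstLen = (uLongf)bound;
	// compress2 wraps deflateInit/deflate/deflateEnd for a single buffer.
	int ret = compress2(dst.GetData(), &dstLen, src.GetData(), (uLong)src.Num(), Z_DEFAULT_COMPRESSION);
	if (ret != Z_OK)
	{
		return false;
	}

	// Shrink to the actual compressed size.
	dst.SetNum((int32)dstLen, true);
	return true;
}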

Decompress on the client:

FString MessageManager::decompressString(TArray<uint8> &compressedData)
{
	// Generously over-allocate the output buffer for the decompressed data.
	TArray<uint8> UncompressedBinaryArray;
	UncompressedBinaryArray.SetNum(compressedData.Num() * 1032);

	z_stream strm;
	strm.zalloc = Z_NULL;
	strm.zfree = Z_NULL;
	strm.opaque = Z_NULL;

	strm.avail_in = compressedData.Num();
	strm.next_in = (Bytef *)compressedData.GetData();
	strm.avail_out = UncompressedBinaryArray.Num();
	strm.next_out = (Bytef *)UncompressedBinaryArray.GetData();

	// The actual DE-compression work.
	inflateInit(&strm);
	inflate(&strm, Z_FINISH);
	inflateEnd(&strm);

	// Relies on the zero-initialized buffer providing a null terminator for the ANSI string.
	return FString((ANSICHAR*)UncompressedBinaryArray.GetData());
}

On the server (C++ using Boost and the standard library), this is how to compress:

void MessageManager::compressString(const std::string &data, std::vector<char> &compressedData)
{
    std::stringstream compressed;
    std::stringstream decompressed(data);

    // Run the input through Boost's zlib compressor filter.
    boost::iostreams::filtering_streambuf<boost::iostreams::input> in;
    in.push(boost::iostreams::zlib_compressor());
    in.push(decompressed);
    boost::iostreams::copy(in, compressed);

    // Copy the compressed bytes into the output vector.
    std::string str = compressed.str();
    compressedData.assign(str.begin(), str.end());
}

And decompress:

std::string MessageManager::decompressString(std::vector<char> &compressedData)
{
    std::stringstream compressed;
    // Use write() so that copying does NOT stop at a '\0' byte inside the compressed data.
    compressed.write(compressedData.data(), compressedData.size());
    std::stringstream decompressed;

    // Run the compressed bytes through Boost's zlib decompressor filter.
    boost::iostreams::filtering_streambuf<boost::iostreams::input> in;
    in.push(boost::iostreams::zlib_decompressor());
    in.push(compressed);
    boost::iostreams::copy(in, decompressed);

    std::string str(decompressed.str());
    return str;
}
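
As a quick sanity check (my own example, not from the thread), the two server-side functions above can be round-tripped like this:

#include <cassert>
#include <string>
#include <vector>

void roundTripTest(MessageManager &manager)
{
    const std::string original = "{\"type\":\"ping\",\"value\":42}";

    std::vector<char> compressed;
    manager.compressString(original, compressed);

    // Decompressing the compressed bytes should give back the original JSON.
    std::string restored = manager.decompressString(compressed);
    assert(restored == original);
}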

 

I hope it will help someone out there :)



Quote: "I also had to encode into ANSI char for my server to be able to convert it into std::string ..."

Good that you got it working, but that's still doing it wrong.

std::string is not for binary data. You should be using std::vector<char> or some other binary buffer format, rather than std::string. Encoding to ASCII neutralizes the entire idea of compression, because it expands the representation of the data again before it goes over the wire.

 

I encoded into ANSI before the compression, not after.

15 hours ago, hplus0603 said:

It still seems like you're needlessly adding size to your data.

How? I have 1 byte per character, and if I used UTF-8 I would get between 1 and 4 bytes.

Knowing that, for now, I only have letters, numbers, and basic characters, how am I adding size?


If you "encode into ansi" then any binary byte that gets encoded into more than one output character will be bigger than it needs to be.

You should send raw binary data. It sounds in your posts above as if you're trying to wedge your binary data into a std::string, and encoding as ansi/text/characters/utf8 to somehow avoid problems with std::string representations. This is the wrong way around. Marshal binary data into a std::vector<> instead. (Or some other binary buffer class of your choice.)
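
To illustrate that point on the UE4 side (my sketch, not hplus0603's code): the FString could be converted to UTF-8 bytes with FTCHARToUTF8 and those bytes fed straight to the compressor, with no ANSI or std::string step; this assumes the raw-zlib compressString above is adapted to take a byte array as input:

// Sketch: turn the JSON FString into UTF-8 bytes so the compressor only ever sees binary data.
TArray<uint8> ToUtf8Bytes(const FString &json)
{
	FTCHARToUTF8 converter(*json);
	return TArray<uint8>((const uint8 *)converter.Get(), converter.Length());
}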

 

This is what I am doing:
preparing my message, serializing it as JSON (which is UTF-8), making the JSON string ANSI, compressing it, sending it, receiving it, decompressing it, and putting it back into a string so I can inflate my Message object.


