
C++ Writing char arrays (binary)


Hi all,

For practice, I'm trying to write 3D mesh data to a binary format, with the eventual goal of also reading meshes back from binary files (provided by the asset pipeline).

I understand how to write floats, vectors of floats, ints, etc. by calling write((char*)&myVar, sizeof(type)) on the file, for which I'm using std::ofstream.

My goal is to write a 3D vector or another type (struct) to the file and be able to read it back the same way. From what I understand, I need to create a function that returns a char array combining all members of the struct.

So, in short: how can I create this char array in a "safe" way?

So far I've come up with this approach:

- calculate the number of needed bytes/chars, i.e. 12 bytes for a 3D vector with 3 floats

- char result[12]

- cast the 3 individual floats to char[4]s, then strcpy the first one into result and strcat the others

Is this the way to go or would you suggest other approaches?

Question 2: how would I read/convert the char array back into the 3 floats when reading it from the binary file?
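To make it concrete, this is roughly what I'm picturing for the pack/unpack pair (just a sketch; Vec3 stands in for my 3D vector struct, I'm assuming 4-byte floats, and I'm using memcpy instead of strcpy/strcat since the float bytes can contain zeros):

#include <cstring>  // memcpy

struct Vec3 { float x, y, z; };

// pack the 3 floats into a 12-byte buffer
void PackVec3(const Vec3& v, char out[12])
{
    std::memcpy(out + 0, &v.x, sizeof(float));
    std::memcpy(out + 4, &v.y, sizeof(float));
    std::memcpy(out + 8, &v.z, sizeof(float));
}

// question 2: the reverse, copy the bytes back into the floats
Vec3 UnpackVec3(const char in[12])
{
    Vec3 v;
    std::memcpy(&v.x, in + 0, sizeof(float));
    std::memcpy(&v.y, in + 4, sizeof(float));
    std::memcpy(&v.z, in + 8, sizeof(float));
    return v;
}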

Any input is appreciated.


I'm not even sure what your problem is here. Let's start from the basics: the only thing that truly exists is memory and bits. As long as you're going with 'basic' types such as floats, the following will do:

unsigned char blobby[12];            // 'ubyte' isn't a standard type; unsigned char (or std::uint8_t) works
memcpy(blobby, src, sizeof(blobby)); // src points at 12 bytes worth of floats; memcpy needs <cstring>
float *bruh = (float*)blobby;

Hoping, of course, that `blobby` is properly aligned.

First you have to ensure your `int` is the same size as the serialized `int`, and that's why you have fixed-width types such as `__int32` or `std::int64_t`. And of course you have endianness problems, but let's assume you'll be loading on 'coherent' machines.
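If you do want to deal with byte order, the usual trick is to write the bytes explicitly. A rough sketch that stores 32-bit values as little-endian regardless of the host:

#include <cstdint>

// write a 32-bit value as little-endian bytes, whatever the host endianness is
void StoreU32LE(std::uint32_t v, unsigned char out[4])
{
    out[0] = (unsigned char)( v        & 0xFF);
    out[1] = (unsigned char)((v >> 8)  & 0xFF);
    out[2] = (unsigned char)((v >> 16) & 0xFF);
    out[3] = (unsigned char)((v >> 24) & 0xFF);
}

std::uint32_t LoadU32LE(const unsigned char in[4])
{
    return  (std::uint32_t)in[0]
         | ((std::uint32_t)in[1] << 8)
         | ((std::uint32_t)in[2] << 16)
         | ((std::uint32_t)in[3] << 24);
}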

When a `struct` enters the picture, `memcpy` goes out due to the possibility of different compiler packing.
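To illustrate (Example here is just a made-up struct): a char followed by a float is typically padded to 8 bytes in memory, so memcpy'ing the whole struct bakes the padding into the file, while copying member by member does not:

#include <cstddef>
#include <cstring>

struct Example { char tag; float value; };  // typically padded to 8 bytes in memory

// member-wise copy sidesteps whatever padding the compiler inserted
std::size_t StoreExample(const Example& e, unsigned char* out)
{
    std::size_t offset = 0;
    std::memcpy(out + offset, &e.tag,   sizeof(e.tag));   offset += sizeof(e.tag);
    std::memcpy(out + offset, &e.value, sizeof(e.value)); offset += sizeof(e.value);
    return offset;  // 5 bytes on disk, even though sizeof(Example) is usually 8
}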

There are various options for (de)serialization, I'd suggest pupping all the things.

That, or Google protobuffers, but in my experience they're not quite convenient for game purposes due to their 'everything can be optional' nonsense.

 

 


Thanks guys.

I've been playing around using this input. Unfortunately, writing the struct or array (std::vector) at once doesn't work when casting to (myType*); it does work when I cast it to (char*). See the code samples below; all three cases tested fine.

Would this approach carry any risks depending on the platform (size of float, etc.)?
I'll make sure the struct I use is a POD struct, which is the case in the example below.

// VECTOR3 is a simple POD struct (float x, y, z)

void ReadWriteStruct()
{
	std::ofstream writeFile;
	// note: the mode flags must be combined with |, not passed as separate arguments
	writeFile.open("data.bin", std::ios::out | std::ios::binary);
	writeFile.write((char*)&srcVector, sizeof(VECTOR3));	// srcVector is a VECTOR3 defined elsewhere
	writeFile.close();

	MessageBox(NULL, L"Vector written to binary file - full struct!", L"Done", MB_OK);

	std::ifstream readFile;
	readFile.open("data.bin", std::ios::in | std::ios::binary);

	VECTOR3 myReadVec3;
	readFile.read((char*)&myReadVec3, sizeof(VECTOR3));
	readFile.close();

	char tempText[100];
	sprintf_s(tempText, "Read: %f, %f, %f\n", myReadVec3.x, myReadVec3.y, myReadVec3.z);
	OutputDebugStringA(tempText);
}

void ReadWriteArray()
{
	std::vector<VECTOR3> myVectors(2);
	myVectors[0].x = 0.2f;
	myVectors[0].y = 0.7f;
	myVectors[0].z = 0.95f;
	myVectors[1].x = 5.2f;
	myVectors[1].y = 4.7f;
	myVectors[1].z = 7.75f;

	std::ofstream writeFile;
	writeFile.open("data.bin", std::ios::out | std::ios::binary);
	writeFile.write((char*)&myVectors[0], myVectors.size() * sizeof(VECTOR3));
	writeFile.close();

	MessageBox(NULL, L"Vector written to binary file - full struct!", L"Done", MB_OK);

	std::ifstream readFile;
	readFile.open("data.bin", std::ios::in | std::ios::binary);

	std::vector<VECTOR3> readVectors(2);
	readFile.read((char*)&readVectors[0], sizeof(VECTOR3));
	readFile.read((char*)&readVectors[1], sizeof(VECTOR3));
	readFile.close();

	char tempText[100];
	sprintf_s(tempText, "Read 1: %f, %f, %f\n", readVectors[0].x, readVectors[0].y, readVectors[0].z);
	OutputDebugStringA(tempText);
	sprintf_s(tempText, "Read 2: %f, %f, %f\n", readVectors[1].x, readVectors[1].y, readVectors[1].z);
	OutputDebugStringA(tempText);
}

 

17 hours ago, cozzie said:

Thanks guys.

I've been playing around using this input. Unfortunately, writing the struct or array (std::vector) at once doesn't work when casting to (myType*); it does work when I cast it to (char*). See the code samples below; all three cases tested fine.

Would this approach carry any risks depending on the platform (size of float, etc.)?
I'll make sure the struct I use is a POD struct, which is the case in the example below.


I made a mistake; it should have been char*, not myType*. I don't see a structure in your post, but usually a vector struct is a POD.

You can read the vector array in a single call as well.
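Something along these lines, assuming the vector is already sized to the element count (a sketch based on your snippet):

std::ifstream readFile("data.bin", std::ios::in | std::ios::binary);

std::vector<VECTOR3> readVectors(2);  // must already have the right size
readFile.read((char*)readVectors.data(), readVectors.size() * sizeof(VECTOR3));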

Krohm already posted the problems you may face.


Thanks. The VECTOR3 is my struct, so it's working both for an individual struct and for a std::vector of them.


On 10/11/2017 at 12:57 AM, Krohm said:

There are various options for (de)serialization, I'd suggest pupping all the things.

I second this approach.

Break it down so that you're only serializing fundamental types. This allows you to avoid all issues with alignment and padding, properly handle endianness, and trivially support composites of serializable types.

There's nothing you gain from trying to read/write entire buffers of typed data at once, other than the potential for hard-to-find bugs.


A little bit off topic, but I always write the length of the char array first and then write the char array itself anyway.

memcpy is your pal; you just memcpy(&floatvar, &chartable[index], 3);

Something like this: you copy some amount of memory to a float pointer.


Do you mean first writing everything to a char array (aka buffer) and then write that buffer to file?

That could work, but then I'd have to calculate the index manually, which could be a risk if some type has a different byte size on another platform (writing per variable and using sizeof prevents this, I think). In your example, shouldn't 3 be 4 (bytes for a float)?

But perhaps there's a way to use a buffer while keeping this in mind (or don't manually calculate the index of each variable).
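Maybe a small helper like this would avoid tracking the offsets by hand (just an idea, untested against my real code):

#include <cstddef>
#include <cstdint>
#include <cstring>
#include <vector>

// grows as you append, so there's no manual index bookkeeping
struct ByteBuffer
{
    std::vector<char> bytes;

    void AppendRaw(const void* src, std::size_t size)
    {
        const char* p = (const char*)src;
        bytes.insert(bytes.end(), p, p + size);
    }

    void AppendFloat(float f)          { AppendRaw(&f, sizeof(f)); }
    void AppendUInt32(std::uint32_t i) { AppendRaw(&i, sizeof(i)); }
};

// usage: fill the buffer per variable, then flush it in one go, e.g.
// writeFile.write(buf.bytes.data(), buf.bytes.size());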


I don't understand the difficulty that you have; aren't you just over-thinking stuff? I mean:

class Writer {
  ...
public:
  void WriteInt(int n) { Store((const char *)&n, sizeof(n)); }
  void WriteFloat(float n) { Store((const char *)&n, sizeof(n)); }
  // etc

  void Store(const char *base, size_t sz) {
    for (size_t i = 0; i < sz; i++) { /* store or write base[i] */ }
  }
};


struct SomeStuff {
  int a;
  float b;

  void Write(Writer &w) {
    w.WriteInt(a);
    w.WriteFloat(b);
  }
};

Make a Writer class that has a function for each type of data; for each structure you have, add a "Write" method that writes itself to the Writer. Reading is just the other way around: you have a Reader class with functions that produce the various elementary values, and each struct has a Read method that assigns fields in the same order as the Write method.
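For completeness, the reading side could be sketched like this (here I simply pull bytes from an std::istream; substitute whatever source you actually read from):

#include <cstddef>
#include <istream>

class Reader {
public:
  explicit Reader(std::istream &in) : in_(in) {}

  int ReadInt()     { int n;   Fetch((char *)&n, sizeof(n)); return n; }
  float ReadFloat() { float n; Fetch((char *)&n, sizeof(n)); return n; }
  // etc

private:
  void Fetch(char *base, size_t sz) { in_.read(base, sz); }
  std::istream &in_;
};

// and SomeStuff gets the matching Read, assigning fields in the same order as Write:
struct SomeStuff {
  int a;
  float b;

  void Read(Reader &r) {
    a = r.ReadInt();
    b = r.ReadFloat();
  }
};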

 


Be aware though that the above is not portable. (There are at least 4 different byte patterns that WriteInt could generate on relatively modern systems, for example, although you're unlikely to see 2 of them.)

I have no idea what the previous 2 posts were about, however. Perhaps the typos caused confusion. Writing variable-length arrays or lists of anything is usually best achieved by prepending the length, obviously.

50 minutes ago, Alberth said:

Make a Writer class that has a function for each type of data; for each structure you have, add a "Write" method that writes itself to the Writer. Reading is just the other way around: you have a Reader class with functions that produce the various elementary values, and each struct has a Read method that assigns fields in the same order as the Write method.

 

Or go with pupping and have a single function to read, write and compute size. No repeating yourself, but I feel like I'm beating a dead horse at this point.
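Roughly the shape of the idea, not the exact API of the article I linked (just a sketch):

#include <cstddef>
#include <cstring>
#include <vector>

struct Pupper {
    enum Mode { WRITE, READ, SIZE } mode;
    std::vector<unsigned char> bytes;   // used in WRITE/READ modes
    size_t cursor = 0;                  // read position / byte count
};

// one function per fundamental type; direction depends on the pupper's mode
inline void pup(Pupper& p, float& value)
{
    switch (p.mode) {
    case Pupper::WRITE: {
        const unsigned char* src = (const unsigned char*)&value;
        p.bytes.insert(p.bytes.end(), src, src + sizeof(value));
        break;
    }
    case Pupper::READ:
        std::memcpy(&value, p.bytes.data() + p.cursor, sizeof(value));
        p.cursor += sizeof(value);
        break;
    case Pupper::SIZE:
        p.cursor += sizeof(value);
        break;
    }
}

// composite types just pup their members
struct Vec3 { float x, y, z; };
inline void pup(Pupper& p, Vec3& v) { pup(p, v.x); pup(p, v.y); pup(p, v.z); }

The same Vec3 function then serves for saving, loading and size computation, depending on the mode you set.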


(Just an aside: I don't like the use of the term 'pupping' because that was just invented for that article. The idea of writing out the fields individually is not new, nor is the idea of using a single function for both reads and writes, e.g. https://gafferongames.com/post/serialization_strategies/ . Apart from that however, the linked article is good.)


Ohh I did this a while ago, I ended up posting a stackoverflow question because I was still learning many of the fundamentals to C++ at the time. https://stackoverflow.com/questions/20198002/how-is-a-struct-stored-in-memory

I'll synthesize the important stuff for you though: all you really need to know is that an object of any struct is stored in memory as an array of bytes (and let's be honest, even primitive types are, they're just much shorter). Binary files are essentially really long arrays of bytes.

So one of my favorite reasons to use a union is to have char arrays lying around which represent some data that is quite likely going to be pushed into a file. You need to know how many bytes your data is, but that is quite easy.

struct POD
{
	float X,Y,Z;
};

union quickie_data
{
	POD data_obj;
	char data_raw[sizeof(POD)];
};

Totally unnecessary though. Somebody has probably already said this, but you can just say `write( (char*)my_struct, sizeof(MyStruct));`

16 hours ago, coope said:

Totally unnecessary though. Somebody has probably already said this, but you can just say `write( (char*)my_struct, sizeof(MyStruct));`

Yes, it was literally in the 3rd paragraph of the 1st post. Much of the rest of the thread is pointing out why this is a bad thing to do.


A bit late to the party, but have a look at this pattern; I think it's a really elegant way to serialize binary data. Also have a read of section 7.4 of Beej's guide to network programming about data serialization and how to make binary data portable, here: http://beej.us/guide/bgnet/output/html/singlepage/bgnet.html#serialization

struct Serializer
{
    // To be implemented by files, buffers, network connections, or whatever stream you want to write to
    virtual size_t write(void* data, size_t size) = 0;

    bool writeUByte(unsigned char byte) { return write(&byte, sizeof(byte)) == sizeof(byte); }
    bool writeUInt(uint32_t i) { return write(&i, sizeof(i)) == sizeof(i); }
    bool writeFloat(float f) {
        uint32_t buf = htonf(f);  // Convert to portable representation
        return writeUInt(buf);
    }
    bool writeVector3(const Vector3& v) {
        bool success = true;
        success &= writeFloat(v.x);
        success &= writeFloat(v.y);
        success &= writeFloat(v.z);
        return success;
    }
    // Etc.
};

Basically, you write methods for all of the different types of data you want to serialize in your program (maybe you have buffers, strings, quaternions, ...). Then you can implement the serializer for a file like so:

struct File : public Serializer {
    size_t write(void* data, size_t size) override {
        return fwrite(data, 1, size, fp_);  // fwrite(ptr, elementSize, count, stream)
    }
    // open()/close() wrappers around fopen()/fclose() omitted for brevity
private:
    FILE* fp_;
};

Or you could implement the serializer for a buffer:

struct Buffer : public Serializer {
    size_t write(void* data, size_t size) override {
        for (size_t i = 0; i != size; ++i)
            buffer_.push_back(((char*)data)[i]);  // horribly slow way
        return size;
    }
private:
    std::vector<char> buffer_;
};

You get the idea. The point is, the code writing the data doesn't have to care about what it's writing to, and it also doesn't have to care about endianness or type sizes, because that's handled by the Serializer.

Then, similarly, you write a Deserializer to handle reading the data back:

struct Deserializer
{
    virtual size_t read(void* dest, size_t size) = 0;

    unsigned char readUByte() {
        unsigned char buf;
        read(&buf, sizeof(buf));
        return buf;
    }

    uint32_t readUInt() {
        uint32_t buf;
        read(&buf, sizeof(buf));
        return buf;
    }

    float readFloat() {
        uint32_t buf = readUInt();
        return ntohf(buf);
    }

    Vector3 readVector3() {
        Vector3 v;
        v.x = readFloat();
        v.y = readFloat();
        v.z = readFloat();
        return v;
    }
};

Example usage would be:

struct Vector3
{
    float x, y, z;
};

int main()
{
    Vector3 v = {1.0f, 4.0f, 7.0f};

    File ser;
    ser.open("whatever.dat");
    ser.writeVector3(v);
    ser.close();

    File des;
    des.open("whatever.dat");
    v = des.readVector3();
    des.close();
}

 


Thanks guys, and sorry for the late response. I've added the ReadXxx and WriteXxx helpers; the code's much cleaner now. I decided to stick with writing/casting only the individual standard types (float, uint, etc.).

What I didn't figure out yet is how I can read a full chunk of data and "place" that in a std::vector. For example, say that I know that there are 100 floats in the file, would it be possible to read them all at once and place them in a std::vector<float> (with a size of 100)?


Thanks Kylotan. The .data() approach works great. I've decided to stick with reading and writing uints, size_t and floats. The first two I write and read as uint32_t.
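For reference, the count-plus-bulk-read roughly looks like this on my side now (simplified, using the file streams and myFloats vector from the earlier snippets):

// writing: store the count as a fixed-width uint32_t, then the raw float data
uint32_t count = (uint32_t)myFloats.size();
writeFile.write((char*)&count, sizeof(count));
writeFile.write((char*)myFloats.data(), count * sizeof(float));

// reading: count first, then size the vector and read the whole block into .data()
readFile.read((char*)&count, sizeof(count));
std::vector<float> readFloats(count);
readFile.read((char*)readFloats.data(), count * sizeof(float));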

On 30/10/2017 at 8:04 AM, cozzie said:

Thanks guys, and sorry for the late response. I've added the ReadXxx and WriteXxx helpers; the code's much cleaner now. I decided to stick with writing/casting only the individual standard types (float, uint, etc.).

What I didn't figure out yet is how I can read a full chunk of data and "place" that in a std::vector. For example, say that I know that there are 100 floats in the file, would it be possible to read them all at once and place them in a std::vector<float> (with a size of 100)?

I'd add methods for doing that to your serializer/deserializer classes:

class Deserializer {
    /* ... */

    void readFloatArray(std::vector<float>* v) {
        size_t arraySize = readUInt();  // we saved the size so we know how much to read
        v->reserve(arraySize);          // reserve (not resize), since we push_back below
        for (size_t i = 0; i < arraySize; ++i)
            v->push_back(readFloat());
    }

    /* ... */
};

class Serializer {
    /* ... */

    void writeFloatArray(const std::vector<float>& v) {
        writeUInt((uint32_t)v.size());  // save the size, for reading back
        for (size_t i = 0; i < v.size(); ++i)
            writeFloat(v[i]);
    }

    /* ... */
};

Then, to read the 100 floats:

File file;
file.open("whatever.dat");

std::vector<float> myFloats;
file.readFloatArray(&myFloats);

 


Minor niggle: you may want to clear the vector before pushing data into it, just in case.




