# FantasyVII

1. ## Bin packing

So my tree looks right? The only thing I have to do is go to node 1 instead of node 0 if node 0 fails?
2. ## Bin packing

Hi everyone,

I'm having trouble understanding this algorithm: http://blackpawn.com/texts/lightmaps/default.html

Here is what I understood from it. First, let's assume my large texture is 128 x 128, and the first texture I want to add is 32 x 32 pixels. According to the algorithm, I have to split my big texture into two regions, and then I add the 32 x 32 blue texture. At this point my tree and texture look something like this.

Next, let's assume the texture I want to pack is a 32 x 96 green texture. So my tree would look something like this.

Finally, let's try to add another 32 x 32 pink texture. Here is my problem. When I look at nodes 0 and 1, I can see that the 32 x 32 pink texture fits in both, but since node 0 is first, I will pick that node. But if you go down the tree, you can see that the last node's size is 0, 0, which means there is no space left. My question is: now what do I do? If I just go back to the root node and do the same thing, I will end up in the same situation. Unless I have to somehow update node 0 every time I add a new texture? But if that is the case, the article never explains how that would work.

I don't understand this algorithm. Any help would be appreciated.

Cheers.
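For reference, here is a minimal sketch of the insertion step as I read the article: the key point is that when the recursion into child 0 returns null (no space anywhere in that subtree), the caller simply falls through to child 1, so you never restart from the root and never need to "update" node 0. The names and layout choices here are mine, not the article's:

```cpp
#include <memory>

struct Rect { int x, y, w, h; };

struct Node {
    Rect rect{};
    bool used = false;               // leaf already holds a texture
    std::unique_ptr<Node> child[2];  // both null for a leaf

    // Returns the node the w x h texture was placed in,
    // or nullptr if this subtree has no room for it.
    Node* Insert(int w, int h) {
        if (child[0]) {                                   // internal node:
            if (Node* n = child[0]->Insert(w, h)) return n;
            return child[1]->Insert(w, h);                // fall through to the sibling
        }
        if (used || w > rect.w || h > rect.h) return nullptr;
        if (w == rect.w && h == rect.h) { used = true; return this; }

        // Split the leaf: child 0 exactly fits the request along the
        // axis with the larger leftover, child 1 keeps the remainder.
        child[0] = std::make_unique<Node>();
        child[1] = std::make_unique<Node>();
        int dw = rect.w - w, dh = rect.h - h;
        if (dw > dh) {
            child[0]->rect = { rect.x,     rect.y, w,  rect.h };
            child[1]->rect = { rect.x + w, rect.y, dw, rect.h };
        } else {
            child[0]->rect = { rect.x, rect.y,     rect.w, h  };
            child[1]->rect = { rect.x, rect.y + h, rect.w, dh };
        }
        return child[0]->Insert(w, h);
    }
};
```

With your example (128 x 128 atlas; insert 32 x 32, then 32 x 96, then 32 x 32), the third insert fails in node 0's left subtree, falls through to its sibling, and lands at (32, 0) without any backtracking to the root.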
3. ## Texture artifacts when rendering multiple textures on a single mesh

Well, my engine supports both DirectX 11 and OpenGL 4+, so if you have advice for either API that would be great.

I will check out Array Textures and see if I can make them work. What is the equivalent of that in D3D11?

I did read somewhere that Array Textures in OpenGL are evil. I forgot why, though. That was a long time ago.
4. ## Texture artifacts when rendering multiple textures on a single mesh

man... I can't believe I wasted all day just to solve this problem.

Adding this code to my pixel shader fixed the issue:

```glsl
if (texID == 0) color = texture(textures[0], shared_data.UV);
if (texID == 1) color = texture(textures[1], shared_data.UV);
if (texID == 2) color = texture(textures[2], shared_data.UV);
```

I don't understand why the above code works but this creates artifacts:

```glsl
color = texture(textures[texID], shared_data.UV);
```

They are basically the same thing, aren't they?
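For what it's worth, this matches how GLSL specifies sampler array indexing: since GLSL 4.00, the index into a sampler array must be dynamically uniform, and a per-fragment `texID` interpolated from a vertex attribute is not, so `textures[texID]` is undefined behavior that merely happens to work on some drivers. The if-chain works because each branch indexes with a compile-time constant, which is always legal. A `switch` expresses the same workaround a bit more tidily (a sketch, assuming the same `textures[3]` array and `shared_data` block as in the post above):

```glsl
// Each case indexes the sampler array with a constant, which is always
// well-defined; textures[texID] with a non-uniform texID is not.
vec4 sampleByID(int texID, vec2 uv)
{
    switch (texID) {
        case 0:  return texture(textures[0], uv);
        case 1:  return texture(textures[1], uv);
        default: return texture(textures[2], uv);
    }
}
```

Array textures (or bindless textures) avoid the problem entirely, since the layer index of an array texture is allowed to be non-uniform.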
5. ## Texture artifacts when rendering multiple textures on a single mesh

Hi,

I'm trying to render multiple textures on a single mesh. What I'm doing is binding three different textures using texture units, and I'm adding those texture IDs to my vertex buffer. So my vertex buffer looks something like this:

```
[vec3,            vec4,         vec2,     vec3,   float    ]
[vertex position, vertex color, texcoord, normal, textureID]
```

In my pixel shader I have an array of texture samplers, and I render the correct texture using textureID as an index into the array.

This works great; my only problem is that I'm getting some texture artifacts. From the looks of it, some textures are partly being drawn on top of other textures.

https://www.youtube.com/watch?v=yQiRgVCbewE

I don't know if this is a good way of drawing multiple textures on a mesh. If you guys know a more efficient way of doing this, I would love to know about it.

Here is how I draw everything:

```cpp
void init()
{
    glGenBuffers(1, &buffer);
    glBindBuffer(GL_ARRAY_BUFFER, vertexBuffer);
    glBufferData(GL_ARRAY_BUFFER, size, data, GL_DYNAMIC_DRAW);
    glBindBuffer(GL_ARRAY_BUFFER, 0);
}

void Draw()
{
    //I think the problem is this block of code
    glActiveTexture(GL_TEXTURE0 + 0);
    glBindTexture(GL_TEXTURE_2D, texture1);
    glUniform1i(glGetUniformLocation(glshader->GetProgramID(), "textures[0]"), 0);

    glActiveTexture(GL_TEXTURE0 + 1);
    glBindTexture(GL_TEXTURE_2D, texture2);
    glUniform1i(glGetUniformLocation(glshader->GetProgramID(), "textures[1]"), 1);

    glActiveTexture(GL_TEXTURE0 + 2);
    glBindTexture(GL_TEXTURE_2D, texture3);
    glUniform1i(glGetUniformLocation(glshader->GetProgramID(), "textures[2]"), 2);

    vertexBuffer->Bind();
    indexBuffer->Bind();
    glDrawElements(GL_TRIANGLES, indexBuffer->GetIndicesCount(), GL_UNSIGNED_INT, nullptr);
    indexBuffer->Unbind();
    vertexBuffer->Unbind();
}
```

Here is my vertex shader:

```glsl
#version 450 core

layout(location = 0) in vec3 inPosition;
layout(location = 1) in vec4 inColor;
layout(location = 2) in vec2 inUV;
layout(location = 3) in vec3 inNormals;
layout(location = 4) in float inTextureID;

struct data
{
    vec2 UV;
    float textureID;
};

out data shared_data;

void main()
{
    shared_data.UV = inUV;
    shared_data.textureID = inTextureID;
}
```

And here is my pixel shader:

```glsl
#version 450 core

struct data
{
    vec2 UV;
    float textureID;
};

in data shared_data;

out vec4 color;

uniform sampler2D textures[3];

void main()
{
    int texID = int(shared_data.textureID);
    color = texture(textures[texID], shared_data.UV);
}
```
6. ## Odd spritebatch rendering order

I'm such a dummy :P

So in my pixel shader I have my samplers set up as an array:

```glsl
uniform sampler2D textures[5];
```

And the way I was passing each texture was by doing this:

```cpp
sprite->texture2D->Bind("textures", textureID);

//this function is equal to this
glActiveTexture(GL_TEXTURE0 + textureID);
glBindTexture(GL_TEXTURE_2D, texture);
glUniform1i(glGetUniformLocation(glshader->GetProgramID(), "textures"), textureID);
```

which is totally wrong, because this just keeps overwriting the first element of the array with the last textureID.

The correct way of doing this is:

```cpp
if (textureID == 0.0f)
    sprite->texture2D->Bind("textures[0]", textureID);
else if (textureID == 1.0f)
    sprite->texture2D->Bind("textures[1]", textureID);
```

Of course I can do some string manipulation and get rid of the if statements, so effectively it will look something like this:

```cpp
sprite->texture2D->Bind("textures[" + textureID + "]", textureID);
```
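The string-manipulation version sketched at the end could look like this in C++, assuming `Bind` takes the uniform name as a `std::string` (the helper name here is illustrative, not from the engine). Note that plain `"textures[" + textureID` would be pointer arithmetic on a string literal, so the index has to go through `std::to_string`:

```cpp
#include <string>

// Builds the GLSL uniform name for one element of a sampler array,
// e.g. ("textures", 3) -> "textures[3]".
std::string ArrayUniformName(const std::string& base, unsigned int index)
{
    return base + "[" + std::to_string(index) + "]";
}
```

Then the bind call becomes `sprite->texture2D->Bind(ArrayUniformName("textures", textureID), textureID);`.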
7. ## Odd spritebatch rendering order

Hi everyone,

I wrote a 2D spritebatch and I'm having trouble understanding whether its behavior is normal or not.

My spritebatch has a single vertex buffer. I have a Submit function that takes a "sprite", which is basically a quad, and maps it into the vertex buffer. When you first submit a sprite, the FindTexture function checks if the sprite's texture exists in a list. If it does, it returns an index number that is sent to the shader to tell it which texture to render. If it doesn't find it, it pushes the texture back and returns the last index in the list.

All sprites submitted have a Z value of 0.0f and my depth test is disabled.

My problem is that when I submit sprite1 and then sprite2, sprite2 is rendered behind sprite1. That makes no sense. What I expect is that sprite2 should be rendered in front of sprite1, since sprite1 is placed first in the vertex buffer and sprite2 second. Of course, if I submit sprite2 first and then sprite1, sprite1 is drawn behind sprite2, which is the behavior I expect when I submit sprite1 then sprite2.

Here is how I call and submit sprites to my spritebatch:

```cpp
spriteRenderer->Begin();
//Submit(Sprite(Texture, Position, Color));
spriteRenderer->Submit(new Sprite(sprite1, Vector3(500.0f, 200.0f, 0.0f), Vector4(1.0f, 1.0f, 1.0f, 1.0f)));
spriteRenderer->Submit(new Sprite(sprite2, Vector3(500.0f, 200.0f, 0.0f), Vector4(1.0f, 1.0f, 1.0f, 1.0f)));
spriteRenderer->End();
spriteRenderer->Draw();
```

And here is my implementation of the spritebatch:

```cpp
void SpriteRenderer::Begin()
{
    spriteBuffer = (SpriteBuffer*)vertexBuffer->Map();
}

void SpriteRenderer::Submit(Sprite* sprite)
{
    unsigned int textureID = FindTexture(sprite->texture2D);
    sprite->texture2D->Bind("textures", textureID);

    Vector2 topLeftUV, topRightUV, bottomRightUV, bottomLeftUV;
    CalculateUV(sprite, &topLeftUV, &topRightUV, &bottomRightUV, &bottomLeftUV);

    //Top Left
    spriteBuffer->position = Vector3(sprite->position.x + sprite->rectangle.x, sprite->position.y + sprite->rectangle.y, 0.0f);
    spriteBuffer->color = sprite->color;
    spriteBuffer->UV = topLeftUV;
    spriteBuffer->textureID = textureID;
    spriteBuffer++;

    //Top Right
    spriteBuffer->position = Vector3(sprite->position.x + sprite->rectangle.x + sprite->rectangle.width, sprite->position.y + sprite->rectangle.y, 0.0f);
    spriteBuffer->color = sprite->color;
    spriteBuffer->UV = topRightUV;
    spriteBuffer->textureID = textureID;
    spriteBuffer++;

    //Bottom Right
    spriteBuffer->position = Vector3(sprite->position.x + sprite->rectangle.x + sprite->rectangle.width, sprite->position.y + sprite->rectangle.y + sprite->rectangle.height, 0.0f);
    spriteBuffer->color = sprite->color;
    spriteBuffer->UV = bottomRightUV;
    spriteBuffer->textureID = textureID;
    spriteBuffer++;

    //Bottom Left
    spriteBuffer->position = Vector3(sprite->position.x + sprite->rectangle.x, sprite->position.y + sprite->rectangle.y + sprite->rectangle.height, 0.0f);
    spriteBuffer->color = sprite->color;
    spriteBuffer->UV = bottomLeftUV;
    spriteBuffer->textureID = textureID;
    spriteBuffer++;

    indexCount += 6;
}

void SpriteRenderer::End()
{
    vertexBuffer->Unmap();
}

void SpriteRenderer::Draw()
{
    vertexBuffer->Bind();
    indexBuffer->Bind();
    context->Draw(indexCount);
    indexBuffer->Unbind();
    vertexBuffer->Unbind();

    for (unsigned int i = 0; i < textures.size(); i++)
        textures[i]->Unbind();

    textures.clear();
    indexCount = 0;
}

unsigned int SpriteRenderer::FindTexture(API::Texture2D* texture)
{
    //textures is a list of Texture2D pointers.
    for (unsigned int i = 0; i < textures.size(); i++)
    {
        if (textures[i] == texture)
            return i;
    }

    textures.push_back(texture);
    return (unsigned int)(textures.size() - 1);
}
```

And here is my pixel shader:

```glsl
#version 450 core

in vec4 fragmentColor;
in vec2 UV;
flat in int textureID;

out vec4 color;

uniform sampler2D textures[5];

void main()
{
    color = fragmentColor * texture(textures[textureID], UV);
}
```
8. ## Understanding the view matrix

I'm trying to understand how the view matrix works and why you would create a lookAt matrix.

Right now my view matrix is set up as the inverse of the model matrix. So if I want to move left in the world, I move the entire scene to the right; if I want to move forward, I move the entire scene backwards; and so on. Essentially my view matrix is just a simple translation matrix and rotation matrix multiplied together.

That seems simple to understand, but going through every article on the internet, I see people creating a lookAt matrix and I don't understand why. What is the point of the lookAt matrix? Why not just translate and rotate the world like I do?
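For what it's worth, lookAt isn't a different approach from inverting the camera's transform; it's a convenient way to build exactly that inverse translate/rotate matrix directly from an eye position, a target point, and an up vector, without constructing the camera's model matrix and inverting it. A minimal right-handed, row-major sketch (my own names, not from any particular library):

```cpp
#include <array>
#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3  sub(Vec3 a, Vec3 b)  { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
static float dot(Vec3 a, Vec3 b)  { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3  cross(Vec3 a, Vec3 b) {
    return { a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x };
}
static Vec3  normalize(Vec3 v) {
    float len = std::sqrt(dot(v, v));
    return { v.x / len, v.y / len, v.z / len };
}

// Right-handed lookAt view matrix, row-major. Rows 0-2 hold the camera's
// right/up/forward axes (the transposed = inverted rotation), and the last
// column undoes the camera translation: exactly inverse(translate * rotate).
std::array<float, 16> LookAt(Vec3 eye, Vec3 target, Vec3 up)
{
    Vec3 f = normalize(sub(eye, target));   // camera looks down -Z
    Vec3 r = normalize(cross(up, f));
    Vec3 u = cross(f, r);
    return {
        r.x, r.y, r.z, -dot(r, eye),
        u.x, u.y, u.z, -dot(u, eye),
        f.x, f.y, f.z, -dot(f, eye),
        0,   0,   0,   1
    };
}
```

For a camera at (0, 0, 5) looking at the origin, this produces an identity rotation with a -5 Z translation, i.e. the same "move the whole scene backwards" matrix you are already building by hand. The lookAt form just saves you from composing and inverting the rotation yourself when the camera is described by a point it should face.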
9. ## OpenGL Texture mapping coordinates for OpenGL and DirectX

good idea. Thx  ^_^
10. ## OpenGL Texture mapping coordinates for OpenGL and DirectX

> No. OpenGL uses {0,0} for bottom-left, D3D uses {0,0} for top-left. However, when you're loading data to a texture, OpenGL begins loading at bottom-left and D3D begins loading at top-left. So what happens in practice is that the differences cancel each other out and you can use the same texcoords for both.

Ooh, alright, that makes sense. Thanks!
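A side note on the loading half of that: if you ever use an image loader whose row order does not match your API's convention (so the differences no longer cancel), the fix is a vertical row flip before upload. A sketch under that assumption; `FlipRows` is my own helper, not a FreeImage or GL call:

```cpp
#include <algorithm>
#include <vector>

// Flips an image top-to-bottom in place. pixels holds `height` rows of
// `rowBytes` bytes each, tightly packed; rows are swapped pairwise from
// the outside in.
void FlipRows(std::vector<unsigned char>& pixels, int height, int rowBytes)
{
    for (int top = 0, bottom = height - 1; top < bottom; ++top, --bottom)
        std::swap_ranges(pixels.begin() + top * rowBytes,
                         pixels.begin() + (top + 1) * rowBytes,
                         pixels.begin() + bottom * rowBytes);
}
```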
11. ## OpenGL Texture mapping coordinates for OpenGL and DirectX

So both APIs use 0,0 UV for the bottom-left corner? This feels like the row/column-major matrix misconception all over again :P

But the image is not flipped in either API. It is correctly displayed as long as I use the UV coordinate system with 0,0 at the bottom-left corner.

Are you saying that the image should be flipped in one of the APIs if I load the image the same way for both? Because that is not the case here.
12. ## OpenGL Texture mapping coordinates for OpenGL and DirectX

I'm using FreeImage to load the image for both APIs. I believe FreeImage loads the image from the bottom left to the top right. Do you think that is why the UV coordinates are the same for both APIs? Does the way you load the image matter?
13. ## OpenGL Texture mapping coordinates for OpenGL and DirectX

I'm trying to map a texture onto a quad in DirectX 11 and OpenGL 4.5. From my understanding, this is how the texture mapping coordinates for OpenGL and DirectX should look.

However, in my case both OpenGL and DirectX appear to use OpenGL's way of mapping, and I don't understand why that is happening.

This is my UV setup for both DirectX and OpenGL. The image looks perfect in both. This way of mapping should only work in OpenGL's coordinate system, and in DirectX the image should be upside down. However, that is not the case.

```cpp
//create vertices clockwise
//Top Left
tri1Vertices[0].position = Vector3(-1.0f, 1.0f, 0.0f);
tri1Vertices[0].UV = Vector2(0.0f, 1.0f);

//Top Right
tri1Vertices[1].position = Vector3(1.0f, 1.0f, 0.0f);
tri1Vertices[1].UV = Vector2(1.0f, 1.0f);

//Bottom Right
tri1Vertices[2].position = Vector3(1.0f, -1.0f, 0.0f);
tri1Vertices[2].UV = Vector2(1.0f, 0.0f);

//Bottom Left
tri1Vertices[3].position = Vector3(-1.0f, -1.0f, 0.0f);
tri1Vertices[3].UV = Vector2(0.0f, 0.0f);

unsigned int indices[6] { 0, 1, 2, 2, 3, 0 };
```

If I switch the V values to match DirectX's way of mapping, the image appears upside down in both OpenGL and DirectX.

```cpp
//Top Left
tri1Vertices[0].UV = Vector2(0.0f, 0.0f);
//Top Right
tri1Vertices[1].UV = Vector2(1.0f, 0.0f);
//Bottom Right
tri1Vertices[2].UV = Vector2(1.0f, 1.0f);
//Bottom Left
tri1Vertices[3].UV = Vector2(0.0f, 1.0f);
```

I don't understand why this is happening.
14. ## Cleanup and return from main in case of a crash or just display error message and exit?

Alright, thanks everyone.  ^_^
15. ## Cleanup and return from main in case of a crash or just display error message and exit?

So in your opinion, which one would you do? Throw an exception or return an error code?