CirdanValen

Members
  • Content count

    213
Community Reputation

378 Neutral

About CirdanValen

  • Rank
    Member

Personal Information

  • Interests
    Programming
  1. 3D Orthographic weirdness

    After some poking around, it looks like my matrix multiplication was screwy. I changed this:

    ```cpp
    // Negate the position so the matrix moves the world opposite the camera
    Mat4f translation_matrix(Vec3f position)
    {
        Mat4f translationMatrix = identity_matrix();
        translationMatrix.E[3][0] = -position.x;
        translationMatrix.E[3][1] = -position.y;
        translationMatrix.E[3][2] = -position.z;
        return translationMatrix;
    }

    // Then reversed the matrix multiplication order
    Mat4f cameraMatrix = translation_matrix(Vec3f{0.f, 32.f, 9.f});
    Mat4f mvp = cameraMatrix * projectionMatrix;
    ```

    Seems to be working as expected so far, though this feels like a duct-tape fix rather than the proper way to do things. It sucks being bad at math, lol.
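For what it's worth, the negated translation is actually the proper fix, not duct tape: a view matrix is the inverse of the camera's world transform, so moving the camera to +p is the same as moving the world by -p. A minimal sketch under the same row-vector convention as the code above (Mat4f and the multiply are re-declared here so the snippet stands alone):

```cpp
#include <cassert>

struct Mat4f { float E[4][4]; };

// Same convention as the post: row vectors, translation stored in row 3,
// and a * b applies a first, then b.
Mat4f mul(const Mat4f& a, const Mat4f& b)
{
    Mat4f r = {};
    for(int row = 0; row < 4; row++)
        for(int col = 0; col < 4; col++)
            for(int k = 0; k < 4; k++)
                r.E[row][col] += b.E[k][col] * a.E[row][k];
    return r;
}

Mat4f translation(float x, float y, float z)
{
    Mat4f m = {};
    m.E[0][0] = m.E[1][1] = m.E[2][2] = m.E[3][3] = 1.f;
    m.E[3][0] = x; m.E[3][1] = y; m.E[3][2] = z;
    return m;
}

// The camera's transform multiplied by its inverse (the view matrix,
// i.e. the negated translation) must come out as the identity.
bool view_is_inverse(float x, float y, float z)
{
    Mat4f id = mul(translation(x, y, z), translation(-x, -y, -z));
    for(int i = 0; i < 4; i++)
        for(int j = 0; j < 4; j++)
            if(id.E[i][j] != (i == j ? 1.f : 0.f)) return false;
    return true;
}
```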
  2. 3D Orthographic weirdness

    I'm trying to set up a sort of 2.5D projection, like the 45-degree angle that old 2D RPGs have. The scene will be 3D, with walls perpendicular to the floor, and an orthographic camera on a 45-degree tilt. Something like this: That way I get all the benefits of being 3D (depth buffer, deferred lighting, etc.). I have a prototype set up in Unity and it works out well; however, when I try to imitate it in my own code, I'm getting different results.

    Math code:

    ```cpp
    struct Mat4f
    {
        real32 E[4][4];
        Mat4f operator*(const Mat4f& right);
    };

    Mat4f Mat4f::operator*(const Mat4f& right)
    {
        Mat4f result = {};
        for(uint32 row = 0; row < 4; row++)
        {
            for(uint32 column = 0; column < 4; column++)
            {
                for(uint32 rc = 0; rc < 4; rc++)
                {
                    result.E[row][column] += right.E[rc][column] * E[row][rc];
                }
            }
        }
        return result;
    }

    Mat4f identity_matrix()
    {
        Mat4f result = {};
        result.E[0][0] = 1.f;
        result.E[1][1] = 1.f;
        result.E[2][2] = 1.f;
        result.E[3][3] = 1.f;
        return result;
    }

    Mat4f ortho_projection(real32 viewWidth, real32 viewHeight, real32 zNear, real32 zFar)
    {
        Mat4f result = {};
        result.E[0][0] = 2.f / viewWidth;                     // 2 / (right - left)
        result.E[1][1] = 2.f / -viewHeight;                   // 2 / (top - bottom)
        result.E[2][2] = -(2.f / (zFar - zNear));             // -2 / (far - near)
        result.E[3][0] = -1.f;                                // -((right + left) / (right - left))
        result.E[3][1] = -(viewHeight / -viewHeight);         // -((top + bottom) / (top - bottom))
        result.E[3][2] = -((zFar + zNear) / (zFar - zNear));  // -((far + near) / (far - near))
        result.E[3][3] = 1.f;
        return result;
    }

    Mat4f translation_matrix(Vec3f position)
    {
        Mat4f translationMatrix = identity_matrix();
        translationMatrix.E[3][0] = position.x;
        translationMatrix.E[3][1] = position.y;
        translationMatrix.E[3][2] = position.z;
        return translationMatrix;
    }
    ```

    Usage:

    ```cpp
    glEnable(GL_DEPTH_TEST);
    glDepthFunc(GL_LESS);

    Mat4f projectionMatrix = ortho_projection(window.width, window.height, 0.01f, 10.f);
    Mat4f cameraMatrix = translation_matrix(Vec3f{0.f, 0.f, 2.f});
    Mat4f mvp = projectionMatrix * cameraMatrix;

    glUniformMatrix4fv(0, 1, false, &mvp.E[0][0]);
    ```

    Then the shader just multiplies that with each vertex. In my test scene I have three triangles: one at z=0, one at z=1, and one at z=-1. When I run the code as it is above, with the camera at z=2, only the triangles at z=0 and z=1 are visible. When I set the camera z to 1, all the triangles are visible. When I set z=0, only the triangle at -1 is visible. The camera.z=0 case makes sense because the depth function is LESS, and I suppose camera.z=1 also makes sense. However, since my z-far is set to 10, shouldn't everything be visible while the camera is between 0.01 and 10? In the case above where the camera z is 2, everything should still be visible because it's within the depth range. I have a prototype set up in Unity and it works with various camera heights:
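Not an answer, but one way to sanity-check the ortho math is to push view-space z values through the projection by hand and see which land inside the NDC range [-1, 1]. A small sketch using the same third-column terms as ortho_projection above (it assumes the usual GL convention where the camera looks down -z, so a point at world z with the camera at z=2 has view-space z = z - 2):

```cpp
#include <cassert>

// Same terms as ortho_projection: z_ndc = z_view * E[2][2] + E[3][2] (w = 1).
float ortho_z_to_ndc(float zView, float zNear, float zFar)
{
    float m22 = -(2.f / (zFar - zNear));
    float m32 = -((zFar + zNear) / (zFar - zNear));
    return zView * m22 + m32;
}

// A vertex survives clipping only if its NDC z lies in [-1, 1].
bool inside_clip(float zNdc) { return zNdc >= -1.f && zNdc <= 1.f; }
```

With zNear=0.01 and zFar=10, the triangles at world z = -1, 0, 1 seen from a camera at z=2 have view z of -3, -2, -1, and all three map inside [-1, 1], so under this convention nothing should be clipped; a point that ends up with *positive* view z (in front of the near plane, or behind the camera) maps outside.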
  3. I'm trying to debug my project, and for some reason MSVC 2017 can't find the source file, even though it says it can and the checksum matches. Sometimes it won't load the source file at all; other times it will load it and I can start stepping through, but then it errors out and says it can't find the code. I'm guessing some sort of optimization is happening that confuses the debugger?

    ```
    Locating source for 'b:\projects\paroikos\src\dev_ui.cpp'. Checksum: MD5 {76 c2 7b ca ab 92 b8 c5 2d 4 10 c8 13 32 26 3f}
    The file 'b:\projects\paroikos\src\dev_ui.cpp' exists.
    Determining whether the checksum matches for the following locations:
    1: b:\projects\paroikos\src\dev_ui.cpp  Checksum: MD5 {76 c2 7b ca ab 92 b8 c5 2d 4 10 c8 13 32 26 3f}  Checksum matches.
    The debugger found source in the following locations:
    1: b:\projects\paroikos\src\dev_ui.cpp  Checksum: MD5 {76 c2 7b ca ab 92 b8 c5 2d 4 10 c8 13 32 26 3f}
    The debugger will use the source at location 1.
    ```

    Build command:

    ```
    cl /GR- /MP /MT /WL /DPAROIKOS_EDITOR /DDEBUG /Z7 /nologo /EHsc /Fobin/ src/platform_windows.cpp "opengl32.lib" "User32.lib" "Gdi32.lib" /Febin/paroikos_editor /Fdbin/paroikos_editor /link
    ```
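One thing that stands out (an observation, not a confirmed diagnosis): the command above compiles with /Z7, which embeds debug info in the .obj files, but nothing is passed after /link. The MSVC linker only writes debug info into a PDB for the resulting binary when /DEBUG appears on the link line, so a variant worth trying is the same command with that one flag added:

```shell
cl /GR- /MP /MT /WL /DPAROIKOS_EDITOR /DDEBUG /Z7 /nologo /EHsc /Fobin/ src/platform_windows.cpp "opengl32.lib" "User32.lib" "Gdi32.lib" /Febin/paroikos_editor /Fdbin/paroikos_editor /link /DEBUG
```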
  4. I'm working through adding texturing to my program by following vulkan-tutorial.com, and I was wondering about row pitch. The tutorial states that images must obey the implementation's row pitch, which makes sense. However, I want to use a VkBuffer for staging rather than a separate staging VkImage, and from what I can tell, the only way to query the needed row pitch is from a linear-tiled staging image. I'm assuming an optimal-tiling image will have a different row pitch, and I can't find anything stating it either way. Can I use the row pitch from the destination optimal-tiling image, or do I not have to worry about the pitch in this case?
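For buffer-to-image copies this turns out not to matter: in vkCmdCopyBufferToImage, the VkBufferImageCopy region's bufferRowLength and bufferImageHeight fields describe the layout of the data *in the buffer*, and setting them to 0 means "tightly packed", so the driver deals with whatever internal pitch the optimal-tiled destination has. A toy sketch of the size math (no Vulkan headers; the struct just mirrors the relevant VkBufferImageCopy fields):

```cpp
#include <cassert>
#include <cstdint>

// Mirrors the VkBufferImageCopy fields that matter for pitch.
// bufferRowLength == 0 tells Vulkan the buffer data is tightly packed,
// i.e. the effective row pitch is texelSize * imageWidth.
struct BufferImageCopyRegion
{
    uint32_t bufferRowLength;   // 0 = tightly packed
    uint32_t bufferImageHeight; // 0 = tightly packed
    uint32_t imageWidth;
    uint32_t imageHeight;
};

// Bytes the staging VkBuffer needs for one mip level, given the region and
// the texel size (e.g. 4 for an RGBA8 format).
uint64_t staging_buffer_size(const BufferImageCopyRegion& r, uint32_t texelSize)
{
    uint32_t rowTexels = (r.bufferRowLength == 0) ? r.imageWidth : r.bufferRowLength;
    uint32_t rows      = (r.bufferImageHeight == 0) ? r.imageHeight : r.bufferImageHeight;
    return uint64_t(rowTexels) * texelSize * rows;
}
```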
  5. Vulkan UI rendering

    I question the need to render only a small portion of the UI at a time. Maybe it was useful back before hardware acceleration, when blitting pixels on the CPU was slow, but with hardware-accelerated rendering I don't really think redrawing the whole UI when something changes is a problem. I wrote a small immediate-mode UI for a basic map editor in OpenGL: I filled an array with all the vertices, copied them to the VBO, and rendered everything, every frame (split into multiple draw calls when the scissor rect changed). There was very little performance hit; everything, including the map itself, was being pushed out in under 1 ms, IIRC. In a UI that isn't limited to the real-time framerate of a video game it's even less of a problem, since the acceptable delay is much more lenient. Even if it takes 5 or 10 ms to render your whole UI, it won't cause noticeable lag in your application's usage.
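The rebuild-everything approach described above can be sketched like this (a toy example; PushQuad and the Vertex layout are made up for illustration, and the GL upload/draw calls are stubbed out as comments):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

struct Vertex { float x, y, u, v; };

// Append the six vertices of one UI quad (two triangles) to the frame batch.
void PushQuad(std::vector<Vertex>& batch, float x, float y, float w, float h)
{
    batch.push_back({x,     y,     0.f, 0.f});
    batch.push_back({x + w, y,     1.f, 0.f});
    batch.push_back({x + w, y + h, 1.f, 1.f});
    batch.push_back({x,     y,     0.f, 0.f});
    batch.push_back({x + w, y + h, 1.f, 1.f});
    batch.push_back({x,     y + h, 0.f, 1.f});
}

// One frame of an immediate-mode UI: rebuild the whole vertex array from
// scratch, then upload it once and draw. No per-widget caching, no dirty
// rectangles.
size_t BuildUiFrame(std::vector<Vertex>& batch)
{
    batch.clear();
    PushQuad(batch, 0.f, 0.f, 200.f, 30.f);   // e.g. a toolbar
    PushQuad(batch, 10.f, 40.f, 80.f, 20.f);  // e.g. a button
    // glBufferData(GL_ARRAY_BUFFER, ...);
    // glDrawArrays(GL_TRIANGLES, 0, (GLsizei)batch.size());
    return batch.size();
}
```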
  6. I think it's a matter of order. You want to position the weapon at the camera, rotate it, then offset it in the direction the camera is facing. Some pseudocode:

    ```cpp
    m_Weapon->Position(m_Camera->GetPosition());
    m_Weapon->OrientY(m_Camera->GetYaw() + 30);
    m_Weapon->OrientZ(m_Camera->GetPitch() + 30);
    weapon_position += m_Camera->getForward() * weapon_offset;
    ```
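The offset step on its own can be sketched like this (a toy Vec3; the camera's forward vector is assumed to be unit length):

```cpp
#include <cassert>

struct Vec3
{
    float x, y, z;
    Vec3 operator+(Vec3 o) const { return {x + o.x, y + o.y, z + o.z}; }
    Vec3 operator*(float s) const { return {x * s, y * s, z * s}; }
};

// Place the weapon at the camera, then push it out along the camera's
// (unit-length) forward vector by `offset` world units.
Vec3 weapon_position(Vec3 cameraPos, Vec3 cameraForward, float offset)
{
    return cameraPos + cameraForward * offset;
}
```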
  7. There have been a few games I've tested where they just plaster an alpha-blended texture over the whole screen with the user ID repeated all over it.
  8. [Game Maker] Top-Down Jumping

    The way I would do it is sort objects by their y value. Assuming +y is downward: if the player's y (probably best to check the player's position at their feet) is less than the object's y (offset a little from the top), the player gets drawn first, so the player appears behind the object. If the player's y is greater than the object's y, the object gets drawn first and the player appears in front. This gives you proper sorting for whether the player is behind or in front of an object.

    For jumping, you would have a "z" value in addition to x and y, representing the player's jump height. When the player's sprite gets drawn, you draw it at (x, y - z), so the sprite rises as the jump height increases. I haven't used GameMaker, but if there is a "depth" value that specifies draw order, just set that depth value to the object's y location. For the player sprite, it's a good idea to use the y position of the feet (y + spriteHeight) for best results.

    For jumping onto objects, you have to think about the third dimension. Normally you just do intersection testing with the object's bounding box to figure out whether the player collided with it. To keep things simple, since this isn't "real" 3D, I would give each object a "height" value. That table, for example, could be 4 units tall. When the collision check happens, I would do something like: if collidedWithObject AND player.z > table.height THEN allow movement ELSE collide. Then as the player falls from gravity and lands on the table, you set the player's "z" to the table's height, and of course set z back to 0 once the player steps off. It is complicated, and there's a lot more work to be done. I don't know how to translate this into GameMaker, but that's the concept behind it.
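The sorting and height check can be sketched like this (a toy example; here z is positive upward, so the sprite is drawn at y - z, and y is assumed to already be at the feet):

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// Toy entity for y-sorted top-down drawing: x/y on the floor plane
// (+y is downward on screen), z is jump height, height is how tall
// the object is for landing checks.
struct Entity
{
    float x, y, z, height;
    float feetY() const { return y; } // y is assumed to be at the feet
};

// Painter's algorithm: smaller feet y draws first, so it appears behind.
void sort_for_drawing(std::vector<Entity>& entities)
{
    std::sort(entities.begin(), entities.end(),
              [](const Entity& a, const Entity& b) { return a.feetY() < b.feetY(); });
}

// Screen-space draw position: jump height lifts the sprite upward on screen.
float draw_y(const Entity& e) { return e.y - e.z; }

// Can the player pass over the object (e.g. jumped high enough to land on
// the table)?
bool passes_over(const Entity& player, const Entity& object)
{
    return player.z >= object.height;
}
```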
  9. Very strange problem with Texture Units

    Oh wow, I can't believe I missed that, though I'm still surprised this problem occurred in this code base and not my previous one. I normally don't unbind anything, since the binds get overwritten anyway; I'll just have to be more careful about texture units. What was happening is that when the window is resized, I recreate the framebuffers (which is what those glTexImage2D calls after binding to the texture units are). So after initializing the framebuffers for the first time, they immediately get recreated, because Windows sends a resize event when the app first starts up. I'll admit this should be changed. I guess I didn't realize that calling glBindTexture while a different texture unit was active would change the texture bound to that unit; it makes sense in retrospect. I'll figure out a way to prevent this from happening in the future. Thanks!
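The gotcha here is that glBindTexture always binds to whichever unit was last selected with glActiveTexture; it has no unit parameter of its own. A toy model of that selector state (not real GL, just an illustration of why recreating the framebuffer textures clobbered the binding on the still-active unit):

```cpp
#include <cassert>

// Toy model of the GL texture-unit selector: BindTexture writes to whichever
// unit ActiveTexture selected last, exactly like glBindTexture does.
struct FakeGL
{
    int activeUnit = 0;
    unsigned bound[8] = {};

    void ActiveTexture(int unit)       { activeUnit = unit; }
    void BindTexture(unsigned texture) { bound[activeUnit] = texture; }
};

// Reproduces the bug: bind a texture to unit 3, then recreate a framebuffer
// texture without re-selecting a unit; the new texture lands on unit 3.
unsigned reproduce_clobber()
{
    FakeGL gl;
    gl.ActiveTexture(3);
    gl.BindTexture(42);   // uiSprites bound to unit 3
    gl.BindTexture(7);    // framebuffer rebuild: bind while unit 3 is still active
    return gl.bound[3];   // 7, not 42: the original binding is gone
}
```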
  10. Very strange problem with Texture Units

    I don't touch the depth texture at all after creating it, aside from clearing it every frame.
  11. Very strange problem with Texture Units

    EDIT: here's the GL command call log http://pastebin.com/0xxVqDQq

    A couple more points to add. Another strange discovery: if I comment out the last texture bind, like this:

    ```cpp
    BindTextureUnit(&editor->devTexture, 0);
    BindTextureUnit(&editor->renderBuffer, 1);
    BindTextureUnit(&editor->lightBuffer, 2);
    //BindTextureUnit(&editor->uiSprites, 3);
    ```

    then the first texture, devTexture at unit 0, renders as black. If I uncomment it, it works fine.

    Secondly, I have a previous revision of this code that works with all four texture units bound. The difference between this code base and the previous one is that this one has been converted to 3D rendering with depth testing. This is how I build the framebuffers with a depth texture:

    ```cpp
    _INTERNAL RenderBuffer CreateRenderBuffer(u32 width, u32 height, b32 hasDepthBuffer = true)
    {
        RenderBuffer result = {};
        result.width = width;
        result.height = height;

        glGenFramebuffers(1, &result.fbo);
        glBindFramebuffer(GL_FRAMEBUFFER, result.fbo);

        glGenTextures(1, &result.texture);
        glBindTexture(GL_TEXTURE_2D, result.texture);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_BASE_LEVEL, 0);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAX_LEVEL, 0);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, NULL);

        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, result.texture, 0);

        if(hasDepthBuffer)
        {
            glGenTextures(1, &result.depth);
            glBindTexture(GL_TEXTURE_2D, result.depth);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_MODE, GL_COMPARE_R_TO_TEXTURE);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_FUNC, GL_LEQUAL);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_BASE_LEVEL, 0);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAX_LEVEL, 0);
            glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT, width, height, 0,
                         GL_DEPTH_COMPONENT, GL_UNSIGNED_BYTE, NULL);

            glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, result.depth, 0);
        }

        GLenum drawBuffers[1] = { GL_COLOR_ATTACHMENT0 };
        glDrawBuffers(1, drawBuffers);

        GLenum fboStatus = glCheckFramebufferStatus(GL_FRAMEBUFFER);
        ASSERT(fboStatus == GL_FRAMEBUFFER_COMPLETE);

        glBindTexture(GL_TEXTURE_2D, 0);
        glBindFramebuffer(GL_FRAMEBUFFER, 0);
        return result;
    }
    ```

    Could the two be somehow related? I've double-checked the generated texture IDs and none of them overlap. There are no GL errors even after multiple frames, and the framebuffers don't trigger the assert.
  12. Very strange problem with Texture Units

    Regular tex parameters:

    ```cpp
    _INTERNAL Texture CreateTexture(Bitmap bitmap, b32 convertToLinear)
    {
        Texture result = {};
        result.handle = UINT_MAX;
        result.width = bitmap.width;
        result.height = bitmap.height;

        glGenTextures(1, &result.handle);
        glBindTexture(GL_TEXTURE_2D, result.handle);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_BASE_LEVEL, 0);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAX_LEVEL, 0);

        GLenum colorType = (convertToLinear) ? GL_SRGB8_ALPHA8 : GL_RGBA;
        glTexImage2D(GL_TEXTURE_2D, 0, colorType, bitmap.width, bitmap.height, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, bitmap.pixels);
        glBindTexture(GL_TEXTURE_2D, 0);
        return result;
    }
    ```
  13. Very strange problem with Texture Units

    Quick update: I went through and did glBindTexture(GL_TEXTURE_2D, 0) after creating the frame buffers. Now instead of being white, the texture comes out as black. Same order dependency as before. Calling glGetError() at the end of the game loop still returns 0.
  14. Very strange problem with Texture Units

    Basically, I store the assigned texture unit in the Texture struct, as seen above. When I go to set the sampler uniform, I use that texture.unit:

    ```cpp
    _INTERNAL void UseTexture(RenderContext* context, Texture texture, u32 loc)
    {
        if(context->cmdBufferSize > 0)
        {
            ASSERT(!"InvalidPath");
        }
        glUniform1i(loc, texture.unit);
    }
    ```

    The render part looks like this:

    ```cpp
    glClearColor(0.0f, 0.0f, 0.0f, 1.f);
    BeginRenderPass(&editor->renderContext, editor->spriteShader, SPRITE_SHADER_UNIFORM_TRANSFORM_MAT, viewMatrix, editor->renderBuffer);
    {
        UseTexture(&editor->renderContext, editor->devTexture, SPRITE_SHADER_UNIFORM_TEXTURE);
        PushVertexBuffer(&editor->renderContext, editor->floorVbo, 6, RenderMode_Triangles);
    }
    EndRenderPass(&editor->renderContext);
    ```

    So all I do to test this is change editor->devTexture to editor->uiSprite. devTexture works, uiSprite doesn't. The same shader is used, so the sampler location doesn't change. RenderDoc doesn't work on my application for some reason (doesn't support OpenGL 4.5?), but I can use GLIntercept to post the GL call log if needed.
  15. starting a new project

    Unreal Engine 4 is probably the most capable 3D engine for getting something going without programming, thanks to its Blueprint visual scripting. UE4 also ships free third-person and first-person shooter example projects; I would definitely take a look at those. The closest existing game to your idea is Skyrim, which can be modded.