About DavidColson
  1. I can see the need to simplify and maybe change my approach to things. It's a little disappointing, but I may have gone into Lua with the wrong mindset.
  2. It's hardly overengineering to want to be able to do this, is it?

     ```lua
     transform.position.x = 10
     ```

     Is there really no way for the position getter to return a reference to the same data?
  3. I tried something like this:

     ```cpp
     static int TransformGet(lua_State* L)
     {
         if (luaL_checkudata(L, 1, "Transform") == NULL)
         {
             luaL_typerror(L, 1, "Transform");
         }
         Transform* trans = (Transform*)lua_touserdata(L, 1);
         const char* key = luaL_checkstring(L, 2);
         if (strcmp(key, "Position") == 0)
         {
             lua_pushlightuserdata(L, (void*)&trans->Position);
             luaL_getmetatable(L, "Vector3");
             lua_setmetatable(L, -2);
         }
         return 1;
     }
     ```

     But sadly it did not work. I get this error from Lua:

     ```
     LuaSource/main.lua:26: bad argument #1 to '__newindex' (Vector3 expected, got userdata)
     ```
  4. So I've been binding some C++ functions to Lua, and all was going well until I came across this problem:

     ```lua
     local transform
     transform = gfx3D.Transform();         -- Works fine (transform is a userdata)
     transform.Position = Vector3(0, 0, 0)  -- Seems to work fine (Position is a userdata)
     transform.Position.z = -80             -- Doesn't work (z is accessed with a __newindex call, Position with an __index call)
     LOG(tostring(transform.Position))      -- Prints X:0 Y:0 Z:0
     ```

     Both transform and Position are userdata, but setting z does not work. I wondered why until I realised exactly the reason. This is the __index method for the transform userdata:

     ```cpp
     static int TransformGet(lua_State* L)
     {
         if (luaL_checkudata(L, 1, "Transform") == NULL)
         {
             luaL_typerror(L, 1, "Transform");
         }
         Transform* trans = (Transform*)lua_touserdata(L, 1);
         const char* key = luaL_checkstring(L, 2);
         if (strcmp(key, "Position") == 0)
         {
             // Creates a new vector for the return value, and so breaks the reference
             Maths::Vec3f* returnValue = (Maths::Vec3f*)lua_newuserdata(L, sizeof(Maths::Vec3f));
             luaL_getmetatable(L, "Vector3");
             lua_setmetatable(L, -2);
             *returnValue = trans->Position;
         }
         else if (strcmp(key, "Rotation") == 0)
         {
             Maths::Vec3f* returnValue = (Maths::Vec3f*)lua_newuserdata(L, sizeof(Maths::Vec3f));
             luaL_getmetatable(L, "Vector3");
             lua_setmetatable(L, -2);
             *returnValue = trans->Rotation;
         }
         else if (strcmp(key, "Scale") == 0)
         {
             Maths::Vec3f* returnValue = (Maths::Vec3f*)lua_newuserdata(L, sizeof(Maths::Vec3f));
             luaL_getmetatable(L, "Vector3");
             lua_setmetatable(L, -2);
             *returnValue = trans->Scale;
         }
         return 1;
     }
     ```

     As you can see, when I access the Position element of the transform, I create a new userdata and push it onto the stack as the return value. But this breaks the reference to the original data, so setting its z element has no effect on the original, and the assignment appears not to work.

     The logical answer is to return a reference to the existing data of the position vector, but it doesn't have its own userdata, since it's simply a member of the transform userdata. You can see the transform userdata being created here:

     ```cpp
     Transform* transform = (Transform*)lua_newuserdata(L, sizeof(Transform));
     luaL_getmetatable(L, "Transform");
     lua_setmetatable(L, -2);
     *transform = Transform();
     ```

     I am completely at a loss as to what to do here. I somehow need to create a userdata that points to the same block of memory the Position element is kept in. I tried experimenting with light userdata, since it's only a pointer, but that threw errors when I tried to access members of the light userdata, which seem not to exist in Lua's mind.

     This seems like an issue other Lua devs must have come across, so I'd like some help dealing with it.

     Thanks.
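     One common pattern for this (a sketch only, not necessarily the answer the thread settled on) is to make the returned Vector3 a full userdata that boxes a *pointer* into the parent's memory, and pin the parent in the registry so the GC can't collect the Transform while a reference to its Position is alive. The names `Vec3Ref`, `"Vector3Ref"`, and `PushVec3Ref` are hypothetical, and `Maths::Vec3f` is stood in with a local struct:

     ```cpp
     // Sketch, assuming a Lua 5.1-style C API.
     extern "C" {
     #include <lua.h>
     #include <lualib.h>
     #include <lauxlib.h>
     }
     #include <cassert>
     #include <cstring>

     namespace Maths { struct Vec3f { float X, Y, Z; }; } // stand-in for the poster's type

     // A full userdata boxing a pointer to someone else's vector, plus a
     // registry reference that keeps the owner alive.
     struct Vec3Ref { Maths::Vec3f* ptr; int parentRef; };

     // Push a reference-style Vector3 pointing at 'member' of the userdata
     // at (positive) stack index 'parentIdx'.
     static void PushVec3Ref(lua_State* L, int parentIdx, Maths::Vec3f* member)
     {
         lua_pushvalue(L, parentIdx);
         int parentRef = luaL_ref(L, LUA_REGISTRYINDEX); // pin the parent against GC
         Vec3Ref* box = (Vec3Ref*)lua_newuserdata(L, sizeof(Vec3Ref));
         box->ptr = member;
         box->parentRef = parentRef;
         luaL_getmetatable(L, "Vector3Ref");
         lua_setmetatable(L, -2);
     }

     // __newindex for "Vector3Ref": writes go through the pointer, so
     // transform.Position.z = -80 mutates the Transform's own data.
     static int Vec3RefSet(lua_State* L)
     {
         Vec3Ref* box = (Vec3Ref*)luaL_checkudata(L, 1, "Vector3Ref");
         const char* key = luaL_checkstring(L, 2);
         float v = (float)luaL_checknumber(L, 3);
         if      (strcmp(key, "x") == 0) box->ptr->X = v;
         else if (strcmp(key, "y") == 0) box->ptr->Y = v;
         else if (strcmp(key, "z") == 0) box->ptr->Z = v;
         else return luaL_error(L, "no such field '%s' on Vector3", key);
         return 0;
     }

     // __gc: release the pin when the reference userdata is collected.
     static int Vec3RefGC(lua_State* L)
     {
         Vec3Ref* box = (Vec3Ref*)lua_touserdata(L, 1);
         luaL_unref(L, LUA_REGISTRYINDEX, box->parentRef);
         return 0;
     }
     ```

     A matching `__index` would read through `box->ptr` the same way, and the "Position" branch of `TransformGet` would become `PushVec3Ref(L, 1, &trans->Position);`. Light userdata can't carry a per-value metatable the way full userdata can, which is consistent with the "Vector3 expected, got userdata" error the light-userdata attempt produced.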
  5. > It's implicit here, but you should enable face culling if you don't need it to be disabled, as well. At least this particular scene does not need it disabled, assuming the cube meshes are defined correctly (triangle vertices defined in counterclockwise order).

     Yeah, I had it on out of laziness. I've disabled it in the last screenshot I posted.
  6.   That did the trick! Thank you so much!  
  7. So I made these changes to the matrix calculations:

     ```cpp
     Maths::Mat4f Model = translate * rotate * scale;
     Maths::Mat4f World = cameraRotate * cameraTranslate * Model;
     Maths::Mat4f MVP = projection * World;

     glUniformMatrix4fv(context->gMVPHandle, 1, GL_TRUE, &MVP.m[0][0]);
     glUniformMatrix4fv(context->gWorldHandle, 1, GL_TRUE, &World.m[0][0]);
     ```

     But it didn't make any difference to my edge artifact. The World matrix should be orthogonal in this case, since the scale matrix is set up with (1, 1, 1).

     Regardless, as a quick test I did this in the shader:

     ```glsl
     Normal0 = (inverse(transpose(gWorld)) * vec4(Normal, 0.0)).xyz;
     ```

     But sadly it did nothing to my artifact. Thanks for the help regardless!
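     As an aside, the inverse-transpose only differs from the plain matrix when the upper 3x3 is non-orthogonal (e.g. non-uniform scale), which is consistent with it changing nothing here. A small self-contained check of why it matters at all, using stand-in 3x3 helpers rather than the poster's Maths library:

     ```cpp
     #include <cassert>
     #include <cmath>
     #include <cstdio>

     // Stand-in types (not the poster's Maths library).
     struct Vec3 { float x, y, z; };
     struct Mat3 { float m[3][3]; };

     static Vec3 Mul(const Mat3& a, const Vec3& v) {
         return { a.m[0][0]*v.x + a.m[0][1]*v.y + a.m[0][2]*v.z,
                  a.m[1][0]*v.x + a.m[1][1]*v.y + a.m[1][2]*v.z,
                  a.m[2][0]*v.x + a.m[2][1]*v.y + a.m[2][2]*v.z };
     }
     static float Dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

     // transpose(inverse(a)) computed via the cofactor matrix:
     // inverse(a)^T = cofactor(a) / det(a).
     static Mat3 InverseTranspose(const Mat3& a) {
         const float (*m)[3] = a.m;
         Mat3 c;
         c.m[0][0] =   m[1][1]*m[2][2] - m[1][2]*m[2][1];
         c.m[0][1] = -(m[1][0]*m[2][2] - m[1][2]*m[2][0]);
         c.m[0][2] =   m[1][0]*m[2][1] - m[1][1]*m[2][0];
         c.m[1][0] = -(m[0][1]*m[2][2] - m[0][2]*m[2][1]);
         c.m[1][1] =   m[0][0]*m[2][2] - m[0][2]*m[2][0];
         c.m[1][2] = -(m[0][0]*m[2][1] - m[0][1]*m[2][0]);
         c.m[2][0] =   m[0][1]*m[1][2] - m[0][2]*m[1][1];
         c.m[2][1] = -(m[0][0]*m[1][2] - m[0][2]*m[1][0]);
         c.m[2][2] =   m[0][0]*m[1][1] - m[0][1]*m[1][0];
         float det = m[0][0]*c.m[0][0] + m[0][1]*c.m[0][1] + m[0][2]*c.m[0][2];
         for (int i = 0; i < 3; ++i)
             for (int j = 0; j < 3; ++j)
                 c.m[i][j] /= det;
         return c;
     }

     int main() {
         Mat3 world = {{{2, 0, 0}, {0, 1, 0}, {0, 0, 1}}}; // non-uniform scale (2, 1, 1)
         Vec3 n = {1, 1, 0};   // surface normal
         Vec3 t = {1, -1, 0};  // tangent lying in the surface, so Dot(n, t) == 0

         // The raw matrix breaks perpendicularity; the inverse-transpose keeps it.
         printf("raw world:         dot = %g\n", Dot(Mul(world, n), Mul(world, t)));                   // 3
         printf("inverse-transpose: dot = %g\n", Dot(Mul(InverseTranspose(world), n), Mul(world, t))); // 0
         return 0;
     }
     ```

     With an orthogonal World matrix (rotation plus translation, scale of 1) the two paths agree exactly, so the edge artifact would have to come from somewhere else.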
  8. I'm not sure why w was set to 2.0; I set it to 1.0 and it didn't make a difference regardless. I also tried the light direction at (1, 0, 0); sadly it didn't make any difference to the artifact either. Thanks though!
  9. Hello all,

     I implemented basic diffuse and ambient lighting in a GLSL shader which I'm using to draw some cubes. However, at a distance there are some strange artifacts on the edges of the objects. It's not aliasing, at least I'm reasonably sure. It's almost like a whole chunk of the edge isn't being lit correctly or something. Here's a picture:

     (screenshot)

     It doesn't occur when the lighting is turned off, so it's definitely the lighting that's causing it.

     Obviously I want this to go away. I'm not quite sure what code needs to be shown, so here's the fragment shader for starters:

     ```glsl
     #version 330

     in vec2 UV;
     in vec3 Normal0;
     out vec4 FragColor;

     uniform sampler2D myTextureSampler;

     void main()
     {
         vec3 DirectionalLightColor = vec3(1.0f, 1.0f, 1.0f);
         float AmbientLightIntensity = 0.4f;
         float DirectionalLightIntensity = 1.2f;
         vec3 DirectionalLightDirection = normalize(vec3(1.0f, 1.0f, 1.0f));

         vec4 AmbientColor = vec4(DirectionalLightColor * AmbientLightIntensity, 1.0f);
         float DiffuseFactor = dot(normalize(Normal0), -DirectionalLightDirection);

         vec4 DiffuseColor;
         if (DiffuseFactor > 0)
         {
             DiffuseColor = vec4(DirectionalLightColor * DirectionalLightIntensity * DiffuseFactor, 1.0f);
         }
         else
         {
             DiffuseColor = vec4(0, 0, 0, 0);
         }

         //FragColor = (texture(myTextureSampler, UV)) * (AmbientColor + DiffuseColor);
         //FragColor = (texture(myTextureSampler, UV));
         FragColor = vec4(0.5, 0.5, 0.5, 1) * (AmbientColor + DiffuseColor);
     }
     ```

     And here's the vertex shader as well, in case it might be the cause of the problem:

     ```glsl
     #version 330

     uniform mat4 gMVP;
     uniform mat4 gWorld;

     layout (location = 0) in vec3 Position;
     layout (location = 1) in vec2 TexCoords;
     layout (location = 2) in vec3 Normal;

     out vec2 UV;
     out vec3 Normal0;

     void main()
     {
         gl_Position = gMVP * vec4(Position, 2.0);
         UV = TexCoords;
         Normal0 = (gWorld * vec4(Normal, 0.0)).xyz;
     }
     ```

     I'm not that experienced with OpenGL, so I'm not even sure how to go about debugging this. I observed that the effect lessens as you get closer to the objects. Don't know if that's relevant.

     Oh, here's the code that submits the draw call as well:

     ```cpp
     void DrawCube(State* state, Maths::Vec3f Position, Maths::Vec3f Rotation, Maths::Vec3f Scale)
     {
         glEnableVertexAttribArray(0);
         glBindBuffer(GL_ARRAY_BUFFER, cube.VertexPositionsBuffer);
         glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, 0);

         glEnableVertexAttribArray(1);
         glBindBuffer(GL_ARRAY_BUFFER, cube.TexCoordsBuffer);
         glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, 0, 0);

         glEnableVertexAttribArray(2);
         glBindBuffer(GL_ARRAY_BUFFER, cube.VertexNormalsBuffer);
         glVertexAttribPointer(2, 3, GL_FLOAT, GL_FALSE, 0, 0);

         glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, cube.ElementBuffer);

         glActiveTexture(GL_TEXTURE0);
         glBindTexture(GL_TEXTURE_2D, cube.Texture);
         glUniform1i(state->TextureUniformHandle, 0);

         Maths::Mat4f scale = Maths::MakeScale(Scale);
         Maths::Mat4f translate = Maths::MakeTranslate(Position);
         Maths::Mat4f rotate = Maths::MakeRotate(Rotation);

         Maths::Mat4f cameraTranslate = Maths::MakeTranslate(Maths::Vec3f(-state->Camera.Position.X, -state->Camera.Position.Y, -state->Camera.Position.Z));
         Maths::Mat4f cameraRotate = Maths::MakeLookAt(state->Camera.Forward, state->Camera.Up);
         Maths::Mat4f projection = Maths::MakePerspective(800, 450, 0.01, 100, 30);

         Maths::Mat4f Model = translate * rotate * scale;
         Maths::Mat4f MVP = projection * cameraRotate * cameraTranslate * Model;

         glUniformMatrix4fv(state->gMVPHandle, 1, GL_TRUE, &MVP.m[0][0]);
         glUniformMatrix4fv(state->gWorldHandle, 1, GL_TRUE, &Model.m[0][0]);

         glDrawElements(GL_TRIANGLES, cube.NumIndices, GL_UNSIGNED_INT, 0);

         glDisableVertexAttribArray(0);
         glDisableVertexAttribArray(1);
         glDisableVertexAttribArray(2);
     }
     ```
  10. OpenGL View Matrix that rotates the camera

      Thank you all for your input; this is most fascinating. I had some success with combining the rotation and translation matrices when taking the inverse. However, I still want to learn more about the underlying maths, so I'm going to try implementing my own simple maths library to experiment with. And I'll get watching those Khan Academy videos.

      Thank you!
  11. OpenGL View Matrix that rotates the camera

      So I did the thing with an inverse rotation matrix, but it just rotates the objects in the scene inversely, as opposed to rotating the camera. That is, if I set the pitch to 90 degrees I can still see the objects, just rotated. I should be looking straight down and therefore see nothing, since my scene is just a couple of cubes.

      ```cpp
      glm::mat4 cam = glm::rotate(glm::mat4(1.0), renderingData.CameraRotation.x, glm::vec3(1, 0, 0));
      cam = glm::rotate(cam, renderingData.CameraRotation.y, glm::vec3(0, 1, 0));
      cam = glm::rotate(cam, renderingData.CameraRotation.z, glm::vec3(0, 0, 1));

      glm::mat4 camMove = glm::translate(glm::mat4(1.0), renderingData.CameraPosition);

      glm::mat4 mvp = glm::perspective(30.0f, 800.0f / 450.0f, 0.20f, 1000.0f) * camMove * glm::inverse(cam) * model;
      ```
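      The underlying rule for a snippet like this: the view matrix is the inverse of the camera's *whole* world transform, so the translation must be inverted too, and the order of the factors swaps: inverse(T * R) = inverse(R) * inverse(T). A minimal, self-contained sketch with stand-in matrix helpers (not glm, and not necessarily the answer the thread arrived at):

      ```cpp
      #include <cassert>
      #include <cmath>
      #include <cstdio>

      // Tiny stand-in matrix helpers: m[row][col], column vectors on the right.
      struct Mat4 { float m[4][4]; };
      struct Vec4 { float x, y, z, w; };

      static Mat4 Identity() {
          Mat4 r = {};
          for (int i = 0; i < 4; ++i) r.m[i][i] = 1.0f;
          return r;
      }
      static Mat4 Mul(const Mat4& a, const Mat4& b) {
          Mat4 r = {};
          for (int i = 0; i < 4; ++i)
              for (int j = 0; j < 4; ++j)
                  for (int k = 0; k < 4; ++k)
                      r.m[i][j] += a.m[i][k] * b.m[k][j];
          return r;
      }
      static Vec4 Mul(const Mat4& a, const Vec4& v) {
          float in[4] = { v.x, v.y, v.z, v.w }, out[4] = {};
          for (int i = 0; i < 4; ++i)
              for (int k = 0; k < 4; ++k)
                  out[i] += a.m[i][k] * in[k];
          return { out[0], out[1], out[2], out[3] };
      }
      static Mat4 Translate(float x, float y, float z) {
          Mat4 r = Identity();
          r.m[0][3] = x; r.m[1][3] = y; r.m[2][3] = z;
          return r;
      }
      static Mat4 RotateY(float a) {
          Mat4 r = Identity();
          r.m[0][0] = cosf(a);  r.m[0][2] = sinf(a);
          r.m[2][0] = -sinf(a); r.m[2][2] = cosf(a);
          return r;
      }

      int main() {
          // The camera's world transform: rotate it, then move it to (0, 0, 5).
          float yaw = 0.5f;
          Mat4 camWorld = Mul(Translate(0, 0, 5), RotateY(yaw));

          // View matrix = inverse of the camera transform: both factors invert
          // and their order swaps, inverse(T * R) = inverse(R) * inverse(T).
          Mat4 view = Mul(RotateY(-yaw), Translate(0, 0, -5));

          // Sanity check: the view matrix exactly undoes the camera transform.
          Mat4 check = Mul(view, camWorld);
          for (int i = 0; i < 4; ++i)
              for (int j = 0; j < 4; ++j)
                  assert(fabsf(check.m[i][j] - (i == j ? 1.0f : 0.0f)) < 1e-5f);

          // With no rotation, a camera at (0, 0, 5) sees the world origin at
          // z = -5 in view space (5 units ahead of it, looking down -z).
          Vec4 origin = Mul(Translate(0, 0, -5), Vec4{0, 0, 0, 1});
          printf("origin in view space: z = %g\n", origin.z);
          return 0;
      }
      ```

      Read against the glm snippet above, this suggests the translation also needs inverting (translate by the negated camera position) and moving to the other side of the inverse rotation, or, equivalently, taking glm::inverse of the combined camera transform in one go.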
  12. I've spent the day trying to implement a system in an SDL2 OpenGL app that rotates the actual camera by pitch, yaw and roll. I'm fed up with reading tutorials that use glm::lookat with the up vector set to (0, 1, 0), forgoing the ability to roll the camera, to emulate a first-person camera.

      Let me first define what I'm trying to do. I noticed, naively, that using a glm::rotate matrix as the view matrix just rotates the objects in the scene, not the camera. I actually want the camera to rotate: if pitch is 90 degrees, the camera looks down; if yaw is set to 90 degrees, the camera looks right and therefore cannot see the objects; if roll is set to 90 degrees, the scene is viewed rotated sideways.

      Considering that I'm not afraid of quaternions, and I know they remove gimbal lock if used correctly, what is the best way to create a view matrix that rotates the camera itself by specifying pitch, yaw and roll?

      Thank you for any help!
  13. This is very interesting; I appreciate all this information. One thing I'm taking away is that high-level gameplay logic is not something I really need to multithread. I suppose where it would be most beneficial is when a certain function is called that carries out a low-level, taxing process. That process can then be broken down into "tasks" which can be put in a queue for the thread pool to process. In my head I imagine this giving me much more precise control over which processes are multithreaded and which are not.

      I also like this idea of removing interaction between objects until computation is complete and then sending queued messages to objects. I have learned much!
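      The "tasks in a queue for the thread pool" idea sketched above can look something like this in standard C++; `JobQueue` is a made-up name, and a real engine pool would add things like work stealing and job handles:

      ```cpp
      #include <atomic>
      #include <cassert>
      #include <condition_variable>
      #include <cstdio>
      #include <functional>
      #include <mutex>
      #include <queue>
      #include <thread>
      #include <vector>

      // A minimal task queue: worker threads pull std::function jobs until shutdown.
      class JobQueue {
      public:
          explicit JobQueue(unsigned workers) {
              for (unsigned i = 0; i < workers; ++i)
                  threads.emplace_back([this] { Run(); });
          }
          ~JobQueue() {
              {
                  std::lock_guard<std::mutex> lock(mutex);
                  done = true;
              }
              wake.notify_all();
              for (auto& t : threads) t.join();
          }
          void Push(std::function<void()> job) {
              {
                  std::lock_guard<std::mutex> lock(mutex);
                  jobs.push(std::move(job));
              }
              wake.notify_one();
          }
      private:
          void Run() {
              for (;;) {
                  std::function<void()> job;
                  {
                      std::unique_lock<std::mutex> lock(mutex);
                      wake.wait(lock, [this] { return done || !jobs.empty(); });
                      if (done && jobs.empty()) return; // drain remaining jobs first
                      job = std::move(jobs.front());
                      jobs.pop();
                  }
                  job();
              }
          }
          std::mutex mutex;
          std::condition_variable wake;
          std::queue<std::function<void()>> jobs;
          std::vector<std::thread> threads;
          bool done = false;
      };

      int main() {
          std::atomic<int> counter{0};
          {
              JobQueue pool(4);
              for (int i = 0; i < 100; ++i)
                  pool.Push([&counter] { counter.fetch_add(1); });
          } // destructor drains the queue and joins the workers
          printf("%d\n", counter.load()); // 100
          return 0;
      }
      ```

      The destructor deliberately lets workers finish any queued jobs before exiting, which is the "wait for all tasks to complete, then continue the frame" shape the discussion describes.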
  14. So I have been doing research into multithreading for no reason other than curiosity, and I have noticed something which I'd like to bring up.

      The best way to multithread any software, apparently, is data decomposition: instead of putting subsystems on different threads, you break up the processing of one subsystem into multiple jobs to run concurrently, scaling to n threads.

      This is all fine and dandy, but say you were to do this to the physics system. My logic would be to break it up so that for n threads you divide the number of objects by n and give each thread a group of objects to update. Then you would have a situation where conflicts would occur. Say object A collides with object B, but A and B are being updated at the same time on separate threads. Problem.

      Another example is updating game logic on game objects. What if object A is dependent on the health of object B? But again, they are being updated on different threads.

      Now I know the simple answer is to minimize interaction and communication between objects, but this is not always going to be good enough, especially in physics, since object interaction is critical to the function of the system.

      I have not worked on a game where objects never have to talk to each other, so how do you get around this problem when designing a multithreaded game?
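      One common answer to the conflict described above (a sketch, and only one of several approaches) is double buffering the state: every thread reads *last* frame's state, including other objects', and writes only its own slice of the *next* state, so A can react to B without racing B's update. `Body` and `UpdateRange` are stand-in names:

      ```cpp
      #include <cassert>
      #include <cstdio>
      #include <thread>
      #include <vector>

      // Stand-in game object: position and velocity along one axis.
      struct Body { float x, vx; };

      // Each thread updates [begin, end): reads only from 'prev' (any object),
      // writes only its own entries of 'next', so there are no data races.
      static void UpdateRange(const std::vector<Body>& prev, std::vector<Body>& next,
                              size_t begin, size_t end, float dt) {
          for (size_t i = begin; i < end; ++i) {
              next[i].vx = prev[i].vx;
              next[i].x  = prev[i].x + prev[i].vx * dt;
          }
      }

      int main() {
          std::vector<Body> prev(1000), next(1000);
          for (size_t i = 0; i < prev.size(); ++i) prev[i] = { 0.0f, float(i) };

          // Data decomposition: split the object list into one chunk per thread.
          unsigned n = 4;
          size_t chunk = prev.size() / n;
          std::vector<std::thread> threads;
          for (unsigned t = 0; t < n; ++t) {
              size_t begin = t * chunk;
              size_t end = (t == n - 1) ? prev.size() : begin + chunk;
              threads.emplace_back(UpdateRange, std::cref(prev), std::ref(next),
                                   begin, end, 0.5f);
          }
          for (auto& t : threads) t.join();
          prev.swap(next); // publish the new state for the next frame

          printf("body 10 moved to x = %g\n", prev[10].x); // 5
          return 0;
      }
      ```

      Interactions (collisions, damage, messages) are then queued during the parallel pass and resolved serially, or in a later parallel pass, after the join, which matches the "queue messages until computation is complete" idea from the replies.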
  15.   Yes, yes it is. One of the very few in fact.