Runcible

  • Member
  • Content Count: 7
  • Community Reputation: 145 Neutral
  • Rank: Newbie
  1. Hi everyone, there was an interesting post on reddit the other day about a "fake 3D" style of graphics used notably in NIUM (gif), whereby you draw "slices" of your object and render them with an offset to create a kind of stacked-layer 3D effect (explained here). I've been trying to reproduce this style in LibGDX, and drawing the images to the screen with a Y-offset is simple enough. What I'm really struggling with is creating the sort of camera shown in the example NIUM gif.

     I'm using the OrthographicCamera LibGDX provides to render the scene, and each object in the world has its own x and y world coordinates, as does the movable player character. From what I can understand, either the objects in the world rotate around the center point of the camera's view when the "camera" is moved (but that means changing their world coordinates, which is bad), or the "camera" itself rotates its view of the world, preserving each object's relative x and y location.

     LibGDX's camera does have a rotate method, but that flips the items around entirely, and I'm not managing to get each object's own scaling and rotation correct when using it, although it seems like it should be the correct approach. It also rotates the axes, which complicates things: the "up" direction by which the layers are offset, and the Y-axis on which they are scaled for perspective, need to always be up relative to the screen and ignore the camera rotation. So if I rotate the camera itself and not the objects, "up" (adjusting the value of Y) is now relative to that rotation.

     I've been trying to use LibGDX's Affine2 class for the translations, rotation, scaling etc. of the "layered objects", along with some attempts at using the rotation of the camera itself, but I think I'm just not fully understanding what I actually need to rotate/translate and in what order, so I'd really appreciate any pointers on the correct way to do this (see the sketch below). Thanks!
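     For illustration, a minimal LibGDX sketch of one way this can fit together, assuming the slices are TextureRegions, the view is rotated with camera.rotate(degrees) before camera.update(), and the class name, method name, and LAYER_OFFSET are made up for the example. The per-layer offset follows camera.up rather than the world Y-axis, so the stack keeps pointing toward the top of the screen however the view is rotated (per-slice Y scaling for perspective is omitted):

         import com.badlogic.gdx.graphics.OrthographicCamera;
         import com.badlogic.gdx.graphics.g2d.SpriteBatch;
         import com.badlogic.gdx.graphics.g2d.TextureRegion;

         public class LayeredObject {
             static final float LAYER_OFFSET = 1f; // world units between slices (illustrative)

             // Assumes batch.begin() has already been called this frame.
             public void render(SpriteBatch batch, OrthographicCamera camera,
                                TextureRegion[] slices,
                                float worldX, float worldY, float rotationDeg) {
                 batch.setProjectionMatrix(camera.combined);
                 for (int i = 0; i < slices.length; i++) {
                     // Offset along the camera's up vector, not the world Y-axis,
                     // so "up" stays screen-relative under camera rotation.
                     float ox = camera.up.x * i * LAYER_OFFSET;
                     float oy = camera.up.y * i * LAYER_OFFSET;
                     float w = slices[i].getRegionWidth();
                     float h = slices[i].getRegionHeight();
                     batch.draw(slices[i],
                             worldX + ox - w / 2f, worldY + oy - h / 2f, // position
                             w / 2f, h / 2f,                             // rotation origin
                             w, h, 1f, 1f,                               // size, scale
                             rotationDeg);                               // slice's world facing
                 }
             }
         }

     With this split, the slices themselves still rotate with the world when the camera turns, which is what produces the orbiting fake-3D look; only the stacking direction is pinned to the screen.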
  2. Thanks for the pointer swiftcoder. I think I'm a little mixed up on the terms then; the things I've been reading use "world space" to mean coordinates after transformation by the model matrix, so that an object's world position / rotation / scale are taken into account, but it sounds like that's not the correct way to think about it? Could you elaborate on what you meant with your second point, about setting up the camera transform in a way that avoids later confusion? Thanks!
  3. Thanks so much, that was exactly the explanation I needed to understand what was going on, and of course it makes total sense as soon as I read over your post :) Seems to be working great.
  4. Hi all, hopefully you can help me understand how to approach this line drawing question. I have a basic 3D demo set up drawing lines with OpenGL using VAOs and shaders, and a first-person camera to move around. Key snippets:

         // Example vertices
         float[] verts = { 1f, 1f, 1f, 0f, 0f, 0f };

         // Loading to VAO etc. as standard and enabling / loading matrix to shader here...

         // Draw call
         glBindVertexArray(vaoID);
         glDrawArrays(GL_LINES, 0, 2);

     The vertex shader is basic, with no model matrix in use at the moment:

         gl_Position = projectionMatrix * viewMatrix * vec4(position, 1.0);

     This draws the line properly in the 3D world, but the start/end points come from the vertex positions only, which isn't very useful. My question is: how can I specify the start and end locations in world coordinates and draw a line between those points? My first thought was to transform each vertex by a different model matrix, but I asked this earlier on Stack Overflow as well and got a comment saying:

     "You'd apply the same transformation to both points. The transformation matrix needs to contain a scaling to (p2-p1) first and then a translation to p1(x,y,z). Oh and you chose the vertices of (0,0,0) and (1,1,1) correctly."

     Unfortunately that reply just left me even more confused about how to do this. How would the scaling work with the original vertices? If I need to move vertex 1 to 20x, 20y, 20z and vertex 2 to 20x, 100y, 100z, I'm not clear how this would work, or how the original vertices relate to the final world coordinates I want. I'd really appreciate any help with what I hope is a simple question. I'm using Java and LWJGL if that affects anything, but I think the concept is the same regardless of language. Thanks!
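     For illustration, a sketch of what that Stack Overflow comment describes, assuming LWJGL 3's GL20 binding and JOML for the matrix math (commonly paired with LWJGL); the uniform name "modelMatrix" and the shaderProgramId variable are assumptions from an example setup, not anything confirmed in the thread:

         import java.nio.FloatBuffer;
         import org.joml.Matrix4f;
         import org.joml.Vector3f;
         import org.lwjgl.BufferUtils;
         import static org.lwjgl.opengl.GL20.*;

         // Desired world-space endpoints (the values from the question).
         Vector3f p1 = new Vector3f(20f, 20f, 20f);
         Vector3f p2 = new Vector3f(20f, 100f, 100f);

         // The VBO keeps its unit line from (0,0,0) to (1,1,1). A single model
         // matrix moves both vertices: the scale stretches (1,1,1) onto the
         // vector p2 - p1, then the translate drops (0,0,0) onto p1, so that
         // (1,1,1) lands exactly on p2. A zero component in p2 - p1 (like x
         // here) just flattens that axis, which is harmless for GL_LINES.
         Matrix4f model = new Matrix4f()
                 .translate(p1)
                 .scale(p2.x - p1.x, p2.y - p1.y, p2.z - p1.z);

         // Upload before the draw call ("modelMatrix" / shaderProgramId assumed).
         int modelMatrixLocation = glGetUniformLocation(shaderProgramId, "modelMatrix");
         FloatBuffer buf = BufferUtils.createFloatBuffer(16);
         glUniformMatrix4fv(modelMatrixLocation, false, model.get(buf));

     The vertex shader would then gain the model matrix: gl_Position = projectionMatrix * viewMatrix * modelMatrix * vec4(position, 1.0); note the order in the JOML chain: the rightmost operation applies to the vertex first, so each vertex is scaled and then translated.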
  5. Thanks both for your replies. Sorry for the confusing second picture in the OP; that was just to demonstrate that the hidden side faces of each wall were being lit, which is what I suspected was leaking through the pixels.

     I had a go at pre-calculating the positions, but that didn't seem to lessen the effect unfortunately. I also tried rounding to the nearest 0.0125 in the shader, which didn't fix the artifacts either.

     One test I did was to purposely overlap the tiles by half their width, so they were positioned at x=0, y=0, z=0, .5, 1, 1.5 etc. instead of the previous 0, 1, 2, 3. With this setup there would be no gaps between the models at all, and I still see the artifacts. This leads me to believe it might well be some sort of z-fighting issue between the side faces and the front faces at the corner edge.

     Everything I've read about z-fighting previously refers to two faces along the same plane having the issue, so I'm not entirely sure what terminology to search for in this case. One "workaround" I came up with was to simply delete the side faces of the model entirely, and the issue goes away. That isn't ideal though, and I'd much rather have a programmatic solution.
  6. Hey there. I don't think it's backfaces, but rather the front-facing hidden side edges, or at least that's my impression. Here's an image where I've spaced the wall tiles out so you can see better: http://i.imgur.com/KVqsFCC.png. To me it looks like the lighting there is somehow just bleeding through and creating the artifacts.

     The wall tiles are all placed at x=0, y=0, with varying z. I tested with a single light placed at x=20, y=20, z=-20 and the artifacts still appear, most noticeably when viewing at an angle that makes the sides the brightest.

     Good suggestion about AA, but I checked and it wasn't enabled for this testing, and nothing's being forced by the graphics card either. Is there some way to not light these faces at all? Or would I then just be getting black pixel artifacts instead?
  7. Hi all. I've been going through some OpenGL tutorials and have implemented a fairly basic ambient / diffuse / specular system based on this tutorial. I have a test scene (using Java/LWJGL) which you can see here. I'm generating a wall out of "wall tiles", with each wall model placed exactly next to the previous one in a line. As you can see in that picture with the helpful arrows, there are artifacts from the lights along the wall. I assume this is because the hidden side faces of each wall tile are being lit up (seen here) by the lights.

     Making a single model would of course solve this, but I'd like to be able to generate maps using various pre-made tiles, so it would be really great if I could get around this problem. Is there some usual way this kind of thing is solved, or a way to prevent these faces from being lit up like this? Thanks!
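     For reference, the usual first lever for keeping hidden faces out of the framebuffer is backface culling; here is a minimal LWJGL sketch, assuming the tile meshes use consistent counter-clockwise winding. Whether it removes these particular artifacts depends on which faces are actually showing through, so treat it as a starting point rather than a confirmed fix:

         import static org.lwjgl.opengl.GL11.*;

         // Skip rasterizing triangles wound away from the camera, so interior
         // faces between adjacent tiles are never shaded at all.
         glEnable(GL_CULL_FACE);
         glCullFace(GL_BACK);   // discard back-facing triangles
         glFrontFace(GL_CCW);   // counter-clockwise winding counts as front-facing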