# chrisprij


1. ## Cubic Mesh Simplification Algorithms?

Hey! So I've been working on a voxel-mesh engine and it's turning out great (it turns voxels into a cubic mesh for rendering, like Minecraft). I started by rendering chunks one block at a time, then switched to rendering each chunk as a single mesh, only rendering the "shell" (the outside) of the mesh. The problem I'm having now is this:

The mesh is made up of one material. Adjacent quads (voxel faces) on the mesh, being made of the same material, should be able to be combined into a rectangle, reducing the triangle and vertex count. How can I do this?

I've found a few sources online, like this and this, but for some reason I'm not understanding them. Well... I get the idea, but I have no idea how to implement it.

As for my Mesh class, I store the following data:

- a Vertex array
- a VertexNormal array
- a vertexColor array (materials are colors, not textures, in my game)
- a triangleIndices array (for rendering)
- a quadIndices array (for simplification)

and I can add to and remove from these as I need to, using the proper bookkeeping (don't remove a vertex without removing the triangles that use it, etc.).

Is there something else I'd need to add to my mesh data? Any idea where to start algorithm-wise for triangle/quad reduction? As chunks reach the top of the terrain (where a chunk contains both air and blocks, not just blocks), I have trouble coming up with a working scheme for storing the quadIndices in a way that lets me simplify the mesh easily...

Help please! I feel like I could be close, but I feel more lost than ever right now :/

--CP
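For what it's worth, the merging described here (usually called "greedy meshing") can be sketched in 2D over one slice of a chunk. This is a minimal sketch, assuming you can build a per-material boolean mask of visible faces for one face direction; the `Quad` record and `merge` name are illustrative, not from the engine described above:

```java
import java.util.ArrayList;
import java.util.List;

public class GreedyMesh {
    public record Quad(int x, int y, int w, int h) {}

    // mask[y][x] is true where a visible face of the same material exists.
    public static List<Quad> merge(boolean[][] mask) {
        int rows = mask.length, cols = mask[0].length;
        boolean[][] used = new boolean[rows][cols];
        List<Quad> quads = new ArrayList<>();
        for (int y = 0; y < rows; y++) {
            for (int x = 0; x < cols; x++) {
                if (!mask[y][x] || used[y][x]) continue;
                // Grow the rectangle as far right as possible...
                int w = 1;
                while (x + w < cols && mask[y][x + w] && !used[y][x + w]) w++;
                // ...then extend downward while every cell in the next row fits.
                int h = 1;
                outer:
                while (y + h < rows) {
                    for (int i = 0; i < w; i++) {
                        if (!mask[y + h][x + i] || used[y + h][x + i]) break outer;
                    }
                    h++;
                }
                // Mark the merged cells so they aren't emitted twice.
                for (int dy = 0; dy < h; dy++)
                    for (int dx = 0; dx < w; dx++)
                        used[y + dy][x + dx] = true;
                quads.add(new Quad(x, y, w, h));
            }
        }
        return quads;
    }
}
```

Run once per material, per face direction, per slice; each returned rectangle becomes one quad (four vertices, two triangles) instead of w * h separate quads.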
2. ## Problems with clipping lines

I think I did that right... see the code here:

```cpp
// Replace whichever endpoint lies outside, then recompute its outcode.
if (outcodeOut == code1) {
    v1.x = x;
    v1.y = y;
    code1 = computeOutcode(v1);
} else {
    v2.x = x;
    v2.y = y;
    code2 = computeOutcode(v2);
}
```

I've been spending a few days just flipping v1 and v2 around to see if I made a mistake, but... so far no luck.

It's not my code; you can find it on Wikipedia under the Cohen-Sutherland line clipping algorithm.
3. ## Problems with clipping lines

Hello! Working on a software renderer here. Pardon the poorly made application; this is my "quick and dirty" startup for this renderer. Recently I've been working on clipping projected lines to the screen. Unfortunately, it's not working very well... for some sides of the screen.

Attached is a video clip showing the clipping problem on the left and top of the screen. The bottom and right work correctly... Here's my clipping code:

```cpp
void Application::clipAndDrawLine(Vector3 &v1, Vector3 &v2)
{
    //const int INSIDE = 0;
    const int LEFT = 1;
    const int RIGHT = 2;
    const int BOTTOM = 4;
    const int TOP = 8;

    int code1 = computeOutcode(v1);
    int code2 = computeOutcode(v2);
    bool render = true;

    while (true) {
        if (!(code1 | code2)) {
            break;
        } else if (code1 & code2) {
            render = false;
            break;
        } else {
            float x, y;
            int outcodeOut = code1 ? code1 : code2;

            if (outcodeOut & TOP) {
                x = v1.x + (v2.x - v1.x) * ((HEIGHT - 1) - v1.y) / (v2.y - v1.y);
                y = HEIGHT - 1;
            } else if (outcodeOut & BOTTOM) {
                x = v1.x + (v2.x - v1.x) * (0 - v1.y) / (v2.y - v1.y);
                y = 0;
            } else if (outcodeOut & RIGHT) {
                y = v1.y + (v2.y - v1.y) * ((WIDTH - 1) - v1.x) / (v2.x - v1.x);
                x = WIDTH - 1;
            } else if (outcodeOut & LEFT) {
                y = v1.y + (v2.y - v1.y) * (0 - v1.x) / (v2.x - v1.x);
                x = 0;
            }

            if (outcodeOut == code1) {
                v1.x = x;
                v1.y = y;
                code1 = computeOutcode(v1);
            } else {
                v2.x = x;
                v2.y = y;
                code2 = computeOutcode(v2);
            }
        }
    }

    if (render) {
        renderer.renderLine(v1.x, v1.y, v2.x, v2.y,
                            SDL_MapRGB(screen->format, 255, 255, 255));
    }
}

int Application::computeOutcode(Vector3 &v)
{
    const int INSIDE = 0;
    const int LEFT = 1;
    const int RIGHT = 2;
    const int BOTTOM = 4;
    const int TOP = 8;

    int code = INSIDE;
    if (v.x < 0)       { code |= LEFT; }
    if (v.x >= WIDTH)  { code |= RIGHT; }
    if (v.y < 0)       { code |= BOTTOM; }
    if (v.y >= HEIGHT) { code |= TOP; }
    return code;
}
```

I can't find the issue here! Does anything look wrong to any of you?
I believe the problem is with the clipping, seeing as that's where the problem occurs...

-CP
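One way to debug this kind of thing is to re-sketch the math away from the renderer and test it in isolation. Here is a small standalone sketch (in Java rather than the C++ above) of the outcode test plus one left-edge intersection; WIDTH/HEIGHT are placeholder values and `clipLeft` is a hypothetical helper covering only the LEFT case:

```java
public class ClipCheck {
    static final int WIDTH = 640, HEIGHT = 480;
    static final int LEFT = 1, RIGHT = 2, BOTTOM = 4, TOP = 8;

    // Same outcode scheme as the post: one bit per screen edge the point is past.
    static int outcode(double x, double y) {
        int code = 0;
        if (x < 0) code |= LEFT;
        if (x >= WIDTH) code |= RIGHT;
        if (y < 0) code |= BOTTOM;
        if (y >= HEIGHT) code |= TOP;
        return code;
    }

    // Intersect segment (x1,y1)-(x2,y2) with the left edge x = 0.
    static double[] clipLeft(double x1, double y1, double x2, double y2) {
        double y = y1 + (y2 - y1) * (0 - x1) / (x2 - x1);
        return new double[] { 0, y };
    }
}
```

Feeding hand-picked segments through each edge case like this makes it much easier to see which edge's intersection formula misbehaves.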
4. ## Planned Terrain Generation

BCullis: Thank you so much for your fast response! I'll have to get to making these tools as soon as my finals are over :)

> Now you're talking about procedural mesh generation and even topology algorithms. This is a huge can of worms that I'm happy to call "my current brain candy". I can't give you any really good advice yet as I'm still sifting through it all. Keep in mind, though, this is for things like "I want to grow the terrain here, but keep the vertex density the same" or "I want to cut a hole in the terrain to stick a cave mouth prefab into, and then retriangulate the edges to fix the seam". If you just want to add separate objects to the scene, that'll depend on the definition of your "level". My current solution is just to keep a list of all prop (what I call my static meshes) IDs and their world transform information (location, scale, rotation). When the level loads in the game, it reads through that data and populates it accordingly with actual prop instances.

I totally agree with you there; procedural generation has been all over my mind as I've been toying with the idea of voxels, but I wanted to root myself in 3D graphics, polygons, and procedural generation before I move on to that. I'm hoping to get into procedural generation after this first project of mine... though I might want to start reading up on it if it can help me develop some parts of the terrain in my current project. One step at a time, though; I'll have to get on to creating some level editing tools first :D

Again, thanks for the sound advice, you really helped clarify things in my situation!
5. ## Planned Terrain Generation

Hello everybody!

Currently I'm working on a project where I'm making a semi-clone of an existing game so I can grasp the technical aspects of creating the 3D backbone from scratch in C++ (using SDL to create the window and load images, but that's about it; I want to learn most of the game-coding stuff from scratch, which is fun for me), and I already have a template to follow for how the game, map, events, etc. should be planned out. Right now I'm pretty well into making the engine, but I'm catching one snag: I'm working on creating mesh-making tools, which isn't so bad right now because most things are squares/boxes, so I made tools to place squares, select textures, etc. to make those run smoothly.

My current problem comes when trying to do terrain involving mountains, caves, and anything more complex than a flat town with box buildings. I sort of understand how to make terrain based on noise or height maps, but I was wondering what ideas you all have for developing mostly pre-planned terrain.

I want to take the terrain from the game I'm semi-cloning and recreate it in 3D (the original game is 2D). So I have a bit of room to maneuver with the change from 2D to 3D, but I still want the land to roughly follow the land in the 2D game. Any ideas?

Basically, think of making a clone of Zelda or Harvest Moon from one of their old Game Boy games (a 2D top-down game with an expansive world). It's that type of conversion. Instead of "ladders" and scene changes (screen goes black when you enter a building, then you reappear inside), I want everything to work as one continuous 3D world; nothing should need those transitions.

NOTE: Sorry for being so secretive. I just want to keep my project a surprise for when I release it for people to test out and enjoy!
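One way to sketch the "pre-planned terrain" idea: hand-author a coarse height grid traced from the 2D map's tiles, then interpolate it up to mesh resolution so the 3D land roughly follows the 2D layout. A minimal bilinear sampler, with all names assumed rather than taken from the project above:

```java
public class PlannedHeightmap {
    // coarse[row][col] holds hand-authored control heights (e.g., traced
    // from the 2D map). sample() bilinearly interpolates between them,
    // so the mesh stays smooth while still following the planned layout.
    static double sample(double[][] coarse, double u, double v) {
        int x0 = (int) u, y0 = (int) v;
        int x1 = Math.min(x0 + 1, coarse[0].length - 1);
        int y1 = Math.min(y0 + 1, coarse.length - 1);
        double fx = u - x0, fy = v - y0;
        // Blend along x on both rows, then blend the rows along y.
        double top = coarse[y0][x0] * (1 - fx) + coarse[y0][x1] * fx;
        double bot = coarse[y1][x0] * (1 - fx) + coarse[y1][x1] * fx;
        return top * (1 - fy) + bot * fy;
    }
}
```

Noise can then be layered on top of the sampled height for small-scale detail without disturbing the planned large-scale shape.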
6. ## Planned Terrain Generation

Yes, exactly. I still have freedom to do what I want while roughly following the layout of the caves and land from the 2D game. This is exactly what I wanted to get from this question.

Thanks for the narrowed-down answer, you hit it right on the head! I'm pretty sure you clarified the whole level-editor concept for me. For clarification: is it easiest to start with a flat mesh, or some other easily computed mesh, and then apply these changes to it? Once I know where to start, I think creating these tools will be much easier to accomplish.

And one last question, if I may: what if you want to add vertices or make the scene more complex? I know that should be just another tool, but are there any ideas or topics that can get me started there as well?
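On the flat-mesh starting point: a flat grid is trivial to generate, and editor tools then only need to displace vertex heights. A sketch of the triangle-index generation for a flat grid of (w+1) x (h+1) vertices (names assumed, not from any particular engine):

```java
import java.util.ArrayList;
import java.util.List;

public class FlatGrid {
    // Builds triangle indices for a flat (w+1) x (h+1) vertex grid laid out
    // row-major. Two triangles per grid cell, counter-clockwise winding.
    static List<Integer> indices(int w, int h) {
        List<Integer> idx = new ArrayList<>();
        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                int i = y * (w + 1) + x; // top-left vertex of this cell
                idx.add(i); idx.add(i + 1); idx.add(i + w + 1);
                idx.add(i + 1); idx.add(i + w + 2); idx.add(i + w + 1);
            }
        }
        return idx;
    }
}
```

Because the topology never changes when you only raise or lower vertices, "sculpting" tools can edit heights freely; only tools that cut holes or add vertices need to touch the index list.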
7. ## Control Structure for Animating Billboards

ifthen,

That makes sense! But how exactly does that fit with a character that is mid-attack-animation switching to the dying animation? Does the update() method then interpolate between the current attack frame and the first dying frame, and then continue with the dying frames, or something along those lines? Should I make another method for this? I might be getting too deep into the details here...
8. ## Control Structure for Animating Billboards

Currently I'm working on a 2.5D FPS somewhat like Doom graphics-wise. I decided that before getting into polygon-based enemies (especially in a Java software renderer), I would deal with 2D billboard sprites that emulate the behavior of a 3D character (animations for running sideways, toward the camera, away, etc.), just like in Doom. I can already render sprites in the game and have the animations face the correct way using vector math (the "look" vector).

Now I'm thinking about how to control the animations for the enemies; an attack or death animation each have multiple frames. If I wanted to stop an attack animation because the enemy dies, would I add a simple check for that, or is there a more elegant solution? I'm just trying to create a good structure for the animations in my game...

As of right now, I have an enum called AnimationFrames, which loads each frame of an animation into a variable, so there's an AttackFrame1, AttackFrame2, etc., as well as an endFrame to know when the animation is over. The renderFrame() method sends the frame to the rendering pipeline. The nextFrame() method decides what the next frame should be (hopefully all this is self-explanatory). Would I structure the animation control schemes like this:

```java
public void attackAnimation(/*stuff*/) {
    AnimationFrame frame = AnimationFrame.attackFrame1;
    while (!dead && frame != AnimationFrame.endFrame) {
        renderFrame(frame);
        frame = nextFrame("attack", frame);
    }
}

public void dieAnimation(/*stuff*/) {
    AnimationFrame frame = AnimationFrame.dieFrame1;
    while (frame != AnimationFrame.endFrame) {
        renderFrame(frame);
        frame = nextFrame("die", frame);
    }
}
```

or is there a more elegant or obvious solution? Am I missing something? I'm not even exactly sure what to ask; I'm just wondering if this will work as a control scheme... I would start programming this into my game to test it out now, but I'm unfortunately further behind code-wise than I am idea/planning-wise.

On a side note, does anyone have any useful links, books, or articles that deal with controlling the flow of animation in a game? That seems to be the topic I'm currently addressing.

--Chris
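A common alternative to per-animation loops like the ones above is a small state machine: events switch the state, and update() advances exactly one frame per tick, so the render loop never blocks inside an animation and a death can interrupt an attack at any frame. A hedged sketch; the state names, frame counts, and methods here are made up for illustration, not taken from the game above:

```java
public class SpriteAnimator {
    enum State { IDLE, ATTACK, DIE }

    State state = State.IDLE;
    int frame = 0;
    // Hypothetical frame counts; real values would come from the sprite sheets.
    static final int ATTACK_FRAMES = 4, DIE_FRAMES = 6;

    void onAttack() { if (state != State.DIE) { state = State.ATTACK; frame = 0; } }
    void onDeath()  { state = State.DIE; frame = 0; }  // interrupts anything

    // Called once per game tick; advances at most one frame.
    void update() {
        switch (state) {
            case ATTACK -> { if (++frame >= ATTACK_FRAMES) { state = State.IDLE; frame = 0; } }
            case DIE    -> { if (frame < DIE_FRAMES - 1) frame++; }  // hold last frame
            case IDLE   -> frame = 0;
        }
    }
}
```

The renderer then just asks the animator for its current (state, frame) pair each tick and picks the matching sprite, so rendering and animation control stay decoupled.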
9. ## Question about 3D render in Java

I've been watching a 3D tutorial on YouTube about doing 3D graphics in Java without any external libraries, just standard Java. I've been curious about how 3D graphics work at the lower levels; I've used Unity and OpenGL to make applications, and now I'm aiming to go a step deeper in my understanding of 3D graphics (I'm also very intrigued by the mathematics behind it all). And here is where my question arises.

Below is a function from the tutorial's Render3D class. It uses two for loops to go through each pixel in the pixels array, and ends up drawing the green and blue checkered pattern shown here:

[sharedmedia=core:attachments:13636]

```java
public void floor() {
    for (int y = 0; y < height; y++) {
        double ceiling = (y - height / 2.0) / height;
        double z = 8 / ceiling;
        for (int x = 0; x < width; x++) {
            double depth = (x - width / 2.0) / height;
            depth *= z;
            int xx = (int) depth & 15;
            int yy = (int) z & 15;
            pixels[x + y * width] = (xx << 4) | (yy << 4) << 8;
        }
    }
}
```

Basically, I'm wondering if anyone has a good explanation of how this works, or what the name of the algorithm is. I understand how it mechanically chooses what color to put in each pixel (by stepping through the code and printing out values), but how does this relate to rendering a 3D environment later on?

Thanks in advance! Let me know if more information is needed. --Chris
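The loop above can be repackaged as a pure function of one pixel, which makes the structure easier to see: z = 8 / ((y - h/2) / h) is the distance of the floor plane at screen row y (a perspective divide in reverse), and x is scaled by that same z, so rows nearer the horizon sample a wider strip of the floor. A standalone sketch; the constant 8 and the & 15 tiling are taken directly from the code above, the function packaging is mine:

```java
public class FloorDemo {
    // Color of one floor pixel, per the tutorial's floor() loop.
    static int floorColor(int x, int y, int width, int height) {
        double ceiling = (y - height / 2.0) / height; // normalized row offset
        double z = 8 / ceiling;                       // floor distance at this row
        double depth = (x - width / 2.0) / height;
        depth *= z;                                   // floor x-coordinate at depth z
        int xx = (int) depth & 15;                    // tile coordinate, mod 16
        int yy = (int) z & 15;
        // Low byte (blue) comes from xx, the next byte (green) from yy,
        // which is what produces the green/blue checkering.
        return (xx << 4) | (yy << 4) << 8;
    }
}
```

Printing floorColor for a few rows shows yy changing only with y (rings of constant distance) while xx sweeps across each row, which is exactly the checker pattern in the screenshot.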
10. ## Question about 3D render in Java

Aha! I found where he (the guy I was watching on YouTube) got the algorithm: Notch used it in a Ludum Dare competition a while back! It starts here if anyone is interested in seeing what's up.
11. ## Question about 3D render in Java

The videos are here. However, he has a really weird way of explaining things... he'll cover stuff like what an array is and so on (really basic Java fundamentals), but he doesn't explain how his own code works very well, hence my question!

-Chris
12. ## Question about 3D render in Java

Shadowisadog:

Now that you bring that up, would you be able to point out where the algorithm above performs a 3D projection? I'm guessing it has something to do with the whole x' = x/z * zoom, y' = y/z * zoom idea...
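That x' = x/z * zoom idea is the standard perspective projection: divide world coordinates by depth, then scale. A trivial sketch, where `focal` is an assumed constant playing the role of the zoom:

```java
public class Project {
    // Perspective projection: points twice as far away land half as far
    // from the screen center. focal controls the field of view.
    static double[] project(double x, double y, double z, double focal) {
        return new double[] { x / z * focal, y / z * focal };
    }
}
```

The floor() code effectively runs this in reverse: for each screen row y it solves for the depth z whose floor point would project onto that row, which is why z = 8 / ceiling appears as a division by the row offset.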
13. ## Question about 3D render in Java

Satharis, "rendering" was the wrong word; I meant: how does this set the pixels that later render and display a 3D environment? However, I now understand vaguely how this algorithm works. The pixels array is a reference to a Java Swing screen's pixels, allowing me to set the pixels for the screen (through the buffers, that is). This render method gets called twice, once for the back buffer and once for the third buffer, and every tick the buffers get pushed up one, with the back buffer pushed onto the screen.

Anyway, I spent a few hours yesterday playing around with this algorithm and changing variables, and I found out how it works: it uses the y pixel value to determine the depth on the screen, then uses ANDs and shifts to determine the color. He later uses this same algorithm, with some minor changes, to draw textures onto the 3D environment, by changing the last line to:

```java
pixels[x + y * width] = Textures.floor.pixels[(xx & 7) + ((yy & 7) << 3)];
```

which uses a 64-element array (an 8 x 8 texture) to color the pixels with the texture's colors, giving the world a textured floor.

I've used OpenGL a few times before, and that's what I usually use, but I wanted to understand through practice what OpenGL might do under the hood, or see other ways of rendering. My question was this: what does this algorithm have to do with any regular algorithm for setting the pixels (as done by OpenGL) given a 3D environment? I know this is still vague, but I'm not sure how else to ask it. How does this method relate, if at all, to anything OpenGL does? (Any better? haha) Later in the episodes he starts creating blocks and walls, and they keep their x-y-z coordinates, but I'm not there yet. Maybe there are some words of wisdom by then?

I also have a couple of 3D graphics books from my university's library now that cover the underlying details of how OpenGL works, and those should open up some knowledge about how this might work. I'm starting to think this was a premature question; I'll throw myself into this for a couple of weeks and see what I get from it.

Thanks for your answer, though! --Chris
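For reference, the texture lookup in that line can be sketched on its own. One detail worth noting: in Java, `+` binds tighter than `<<`, so the row shift needs its own parentheses for the expression to index an 8 x 8 texture as row * 8 + column:

```java
public class TextureLookup {
    // Maps floor-space coordinates into an 8x8 texture atlas tile:
    // (xx & 7) wraps the column, ((yy & 7) << 3) multiplies the wrapped
    // row by 8 (the row stride of the 64-element pixel array).
    static int texIndex(int xx, int yy) {
        return (xx & 7) + ((yy & 7) << 3);
    }
}
```

This is the same wrap-and-stride addressing an OpenGL texture sampler performs for a GL_REPEAT texture, just done by hand with bit masks because the texture size is a power of two.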