
About vexe

  1. Aww dude, thanks, greatly appreciated. That's neat! I'll show it to my artist.
  2. @Scouting Ninja Glad I gave you something you needed. I just meant some sort of quad-view editor like they had for Doom or Quake (example). I just love wireframe/ortho diagrams in quad views, nerdy stuff, it looks so retro. I was able to get quad views in Blender though. For collision we're using a simplified level mesh for static collision geometry, and an ellipsoid for the player/enemies. For items/pickups etc. it could just be a sphere or a box. I'm thinking of using Blender for all of those too. Custom exporter, yes, eventually. Also a way to automate the work my artist has to do, going through each camera in the scene to spit out a color and depth map. I just never enjoyed working with the Blender API in Python. Good point on the names, I forgot I could just change them in the preprocessor. This is why you discuss things with people! I don't think "CameraFacingFrontDoor" is a good idea in my case, because cameras don't do anything special, they're just a position/rotation in space with a FOV to render from. And "FacingFrontDoor" means you have to have some sort of predefined actions somewhere that those names could follow. Now you have to parse more stuff and compare it to your symbols to get a match. I'd much rather have a level meta file that holds that extra information instead of baking it into the name. With the Camera_N convention, N is just an id that you can associate things with easily. Less parsing too, just a number. You read that id and index into some camera data structure that maybe has FacingFrontDoor=true, ShakeAndBakeIntensity=3.14 and etc=false
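A minimal sketch of the Camera_N id lookup described above, assuming the name always ends in "_N" (the `camera_meta` fields are hypothetical, just mirroring the example in the post):

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

// Hypothetical per-camera metadata from the level meta file,
// keyed by the N in "Camera_N".
typedef struct
{
    int   FacingFrontDoor;
    float ShakeAndBakeIntensity;
} camera_meta;

// Parse the trailing id out of a name like "Camera_12".
// Returns -1 if the name doesn't follow the convention.
static int CameraIdFromName(const char *Name)
{
    const char *Underscore = strrchr(Name, '_');
    if (!Underscore || !Underscore[1])
        return -1;
    return atoi(Underscore + 1);
}
```

The level meta file then boils down to an array of `camera_meta` indexed by that id, with no symbol matching anywhere.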
  3. Thanks for the reply @Zipster. Blender does seem the most reasonable approach. So I went ahead last night and just did it. Here's a clip for cameras, and here's text triggers. I haven't run into the issue you mentioned yet, I will keep an eye on it, thanks for the heads-up! For now I'm just doing a simple point-in-AABB test to see which box I'm in, and I render the camera associated with that box. I guess I'll try and make hysteresis happen. Never knew this thing had an actual term, cool. To answer your question, the boxes are associated with their cameras by naming. In Blender, I name my cameras Camera_1, Camera_2, etc. So the boxes are Box_C_1, Box_C_2, etc. As you see in the video, there can be an instance where you need multiple boxes to cover a shot, and for that you get Box_C_3_0 and Box_C_3_1; it's pretty lame that Blender doesn't let you name two objects the same, so I had to use another postfix. The way I import things is via Assimp. I save the .blend directly, preprocess it into an internal engine format and use that. I will write a file watcher of some sort to keep an eye on the input files; as soon as it detects a change it will invoke the preprocessor and reload the .blend on the fly, making edits more practical because you can just Ctrl+S from Blender and see the change take effect.
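The point-in-AABB test plus the hysteresis mentioned above could be sketched roughly like this (struct and field names are made up for illustration): the trick is to prefer the camera you're already on for as long as the player is still inside any of its boxes, and only re-search when they've left.

```c
#include <assert.h>

typedef struct { float x, y, z; } vec3;
typedef struct { vec3 Min, Max; int CameraId; } trigger_box;

// Inclusive point-in-AABB containment test.
static int PointInAABB(vec3 P, trigger_box *B)
{
    return P.x >= B->Min.x && P.x <= B->Max.x &&
           P.y >= B->Min.y && P.y <= B->Max.y &&
           P.z >= B->Min.z && P.z <= B->Max.z;
}

// Hysteresis: keep the current camera while the player is still inside
// one of its boxes; only switch once they've actually left.
static int PickCamera(vec3 PlayerP, trigger_box *Boxes, int Count, int CurrentId)
{
    for (int i = 0; i < Count; ++i)
        if (Boxes[i].CameraId == CurrentId && PointInAABB(PlayerP, &Boxes[i]))
            return CurrentId; // still in the current shot, don't flip

    for (int i = 0; i < Count; ++i)
        if (PointInAABB(PlayerP, &Boxes[i]))
            return Boxes[i].CameraId;

    return CurrentId; // outside every box: hold the last camera
}
```

With overlapping boxes this never flip-flops on the boundary, because the overlap region resolves in favor of whichever camera was already active.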
  4. Greetings, we're making a 90s-style game with prerendered backgrounds and static camera angles, using a custom software-based engine (sample WIP video). I was wondering what's the best way to go about setting up camera triggers. In the video I was switching cameras manually. I was thinking of just using OBBs (or AABBs): every camera is associated with a bounding box, and if you're in box A then camera A renders. - Is this a good approach, or is there a simpler/more automated way of doing it? How did old games like Resident Evil or Final Fantasy do it? - Doing it this way I'd have to use some sort of editor to set up the boxes. We're using Blender, so I guess I could use that, although I'd prefer a more specialized editor. Is there any good third-party editor more suitable for this stuff? (The same editor would be used for door triggers, item/enemy spawns, text triggers, etc.) I thought about writing my own editor but that's a bit luxurious at the moment, I'm still setting the core things up. Any ideas/help is greatly appreciated. Thanks -vexe
  5. Greetings all, I have a working software renderer but I'm trying to understand what makes a FOV vertical vs horizontal. First, a bit of background on my setup. My camera calculations are as follows: I assume a normalized projection plane with a height of 2. So from the middle going up is 1, and from the middle going down is 1 too, total 2. We know aspect_ratio = width/height = width/2, thus width = 2*aspect_ratio. So the size of the projection plane (or viewplane, whatever you want to call it) is 2*ar x 2. From similar-triangles relationships when calculating the projected coordinates, we get that d (the distance to the projection plane) is 1/Tan(Fov/2), which is how much we need to scale the unprojected y coordinate by. Divide that by aspect_ratio and that's how much we need to scale x by (because the width is 2*ar, so we divide by ar to normalize it). All of that makes sense to me and it works. Here's my camera setup function (not using matrices):

      void CameraInit(camera *Camera, vec WorldP, fixed Aspect, fixed FOV)
      {
          Camera->Position = WorldP;
          Camera->FOV = FOV;

          // tan(fov/2)
          Camera->Tan = Math_Tan(Camera->FOV / 2);

          // distance to the projection plane (which is of size 2*ar x 2)
          Camera->ScaleY = 1 / Camera->Tan;

          // we divide x by ar because we want to normalize things on the x too,
          // otherwise we'd still be left in the range [-ar, +ar]
          Camera->ScaleX = Camera->ScaleY / Aspect;

          // near/far clipping planes
          Camera->NearZ = (fixed)0.01f;
          Camera->FarZ = (fixed)100.0f;
      }

  And here's my perspective projection function:

      INLINE vec CameraToPerspective(vec CameraP, camera *Camera)
      {
          vec Result;

          Result = CameraP;
          fixed ZInv = 1 / Result.z;
          Result.x = (Result.x * Camera->ScaleX) * ZInv;
          Result.y = (Result.y * Camera->ScaleY) * ZInv;

          return(Result);
      }

  My question is: what is this 'FOV' that I'm using? Is it a vertical field of view or a horizontal one? What makes a FOV vertical vs horizontal? Can someone provide a better explanation than "it's because of how you calculate stuff" or "it's because that's how you write it in the matrix" (as given to me here, see the comments in the answer...)? Throughout this entire software renderer there is not a single line of code that I don't understand deeply and thoroughly (except for quaternions, which are still new to me), and I'd like to keep it that way. Thanks a lot in advance!
  6. Greetings, I have a character with aim-straight and fire-handgun-straight animations. I want him to also be able to aim down/up and fire from those poses too. Think Resident Evil 1/2/3. I'm trying to find a way to do it dynamically, to save time and effort for my animator. I was thinking I could dynamically have the upper part of the character's body aim up/down and blend the fire/shooting animation in additively. I'm not sure how to go about having him aim up/down. Do I have to write an IK system for this, or is there a more straightforward solution? Any books, links, hints or pointers on the subject are appreciated. Thanks
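One common shortcut short of full IK is a procedural aim: distribute the desired aim pitch across a few upper-body bones (spine, chest, shoulder) on top of the base pose, then add the fire animation additively. A rough sketch of just the pitch-distribution step, with a made-up bone representation:

```c
#include <assert.h>
#include <math.h>

// Spread AimPitch (radians, + up / - down) across a chain of bones,
// each taking a weighted share, so no single joint bends unnaturally.
// Weights are expected to sum to 1.
static void ApplyAimPitch(float *BonePitch, const float *Weights, int Count,
                          float AimPitch)
{
    for (int i = 0; i < Count; ++i)
        BonePitch[i] += Weights[i] * AimPitch;
}
```

After this layer, an additive fire animation (fire pose minus aim pose, added per frame) plays the same regardless of where the character is aiming.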
  7. Hey Promit, thanks for the tutorial. It helped me remove the dependency on the font rendering libs I was using. I was just wondering what it takes to include the font size in the vertex calculations, to set the size of the printed text? Do I just multiply each line of the vertex calculations by the font's set size, or do I just multiply by a scale matrix? Also, there are other values in the font format such as "aa", "stretchH", "lineHeight", "padding", "spacing", "scaleH" and "scaleW". I'm not sure where exactly each one of those fits into the picture. Here's my rendering code, maybe someone will find it useful - or could suggest a better way/improvements. Currently I'm dynamically allocating the vertices/uv buffers, maybe I should just set a max value instead, but anyway...

      void FontRender(font_renderer *Renderer, font_set *Font)
      {
          u32 NumChars = StringLength(Renderer->Text);
          u32 BufferSize = NumChars * 12 * sizeof(r32);

          if (!Renderer->Initialized)
          {
              glGenBuffers(1, &Renderer->VBO);
              glBindBuffer(GL_ARRAY_BUFFER, Renderer->VBO);
              glBufferData(GL_ARRAY_BUFFER, BufferSize * 2, 0, GL_DYNAMIC_DRAW);
              glGenVertexArrays(1, &Renderer->VAO);
              glBindVertexArray(Renderer->VAO);
              glEnableVertexAttribArray(0);
              glVertexAttribPointer(0, 2, GL_FLOAT, 0, 0, 0);
              glEnableVertexAttribArray(1);
              glVertexAttribPointer(1, 2, GL_FLOAT, 0, 0, (void *)BufferSize);
              glBindBuffer(GL_ARRAY_BUFFER, 0);
              glBindVertexArray(0);
              Renderer->Initialized = 1;
          }

          r32 *VertPos = Calloc(NumChars * 12, r32);
          r32 *VertUV = Calloc(NumChars * 12, r32);

          For(u32, i, NumChars)
          {
              font_character Character = Font->Characters[Renderer->Text[i] - 32];
              r32 X = Character.X;
              r32 Y = Character.Y;
              r32 XOffset = Character.XOffset;
              r32 YOffset = Character.YOffset;
              r32 Width = Character.Width;
              r32 Height = Character.Height;

              // Triangle 1
              {
                  // Top left
                  VertPos[i * 12] = Renderer->CurrentX + XOffset;
                  VertPos[i * 12 + 1] = YOffset;
                  // Bottom left
                  VertPos[i * 12 + 2] = Renderer->CurrentX + XOffset;
                  VertPos[i * 12 + 3] = YOffset + Height;
                  // Bottom right
                  VertPos[i * 12 + 4] = Renderer->CurrentX + XOffset + Width;
                  VertPos[i * 12 + 5] = YOffset + Height;
              }

              // Triangle 2
              {
                  // Bottom right
                  VertPos[i * 12 + 6] = VertPos[i * 12 + 4];
                  VertPos[i * 12 + 7] = VertPos[i * 12 + 5];
                  // Top right
                  VertPos[i * 12 + 8] = Renderer->CurrentX + XOffset + Width;
                  VertPos[i * 12 + 9] = YOffset;
                  // Top left
                  VertPos[i * 12 + 10] = VertPos[i * 12];
                  VertPos[i * 12 + 11] = VertPos[i * 12 + 1];
              }

              // UV 1
              {
                  // Top left
                  VertUV[i * 12] = X / Font->Width;
                  VertUV[i * 12 + 1] = Y / Font->Height;
                  // Bottom left
                  VertUV[i * 12 + 2] = X / Font->Width;
                  VertUV[i * 12 + 3] = (Y + Height) / Font->Height;
                  // Bottom right
                  VertUV[i * 12 + 4] = (X + Width) / Font->Width;
                  VertUV[i * 12 + 5] = (Y + Height) / Font->Height;
              }

              // UV 2
              {
                  // Bottom right
                  VertUV[i * 12 + 6] = VertUV[i * 12 + 4];
                  VertUV[i * 12 + 7] = VertUV[i * 12 + 5];
                  // Top right
                  VertUV[i * 12 + 8] = (X + Width) / Font->Width;
                  VertUV[i * 12 + 9] = Y / Font->Height;
                  // Top left
                  VertUV[i * 12 + 10] = VertUV[i * 12];
                  VertUV[i * 12 + 11] = VertUV[i * 12 + 1];
              }

              Renderer->CurrentX += Character.XAdvance;
          }

          glBindBuffer(GL_ARRAY_BUFFER, Renderer->VBO);
          u32 Offset = 0;
          glBufferSubData(GL_ARRAY_BUFFER, Offset, BufferSize, VertPos);
          Offset += BufferSize;
          glBufferSubData(GL_ARRAY_BUFFER, Offset, BufferSize, VertUV);

          m4 FontProjection = Orthographic(0, 800, 600, 0, -1, +1);
          glDisable(GL_DEPTH_TEST);
          ShaderUse(Renderer->Shader);
          glBindVertexArray(Renderer->VAO);
          TextureBind(Font->Atlas);
          ShaderSetV3(Renderer->Shader, "Color", Renderer->Color);
          ShaderSetM4(Renderer->Shader, "Projection", &FontProjection);
          glDrawArrays(GL_TRIANGLES, 0, NumChars * 6);

          free(VertPos);
          free(VertUV);
          glBindBuffer(GL_ARRAY_BUFFER, 0);
          glBindVertexArray(0);
      }
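On the font-size part of the question above: one simple option (rather than a scale matrix) is a uniform factor, desired pixel size divided by the size the font was exported at, multiplied into every pixel-space glyph metric before building the vertices; the UVs stay untouched since they're already normalized to the atlas. A sketch with hypothetical field names mirroring the code above:

```c
#include <assert.h>

typedef struct { float XOffset, YOffset, Width, Height, XAdvance; } glyph;

// Scale all pixel-space metrics by DesiredSize / ExportedSize.
// UVs (atlas X/Y divided by atlas Width/Height) don't change.
static glyph ScaleGlyph(glyph G, float DesiredSize, float ExportedSize)
{
    float S = DesiredSize / ExportedSize;
    G.XOffset  *= S;
    G.YOffset  *= S;
    G.Width    *= S;
    G.Height   *= S;
    G.XAdvance *= S;
    return G;
}
```

Scaling XAdvance along with the quad keeps the letter spacing proportional at any size.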
  8. Check out Handmade Hero if you're into game engine programming. It builds the exact mentality you're aiming for: https://handmadehero.org/ - I learned a TON from Casey. https://www.youtube.com/watch?v=fQeqsn7JJWA&feature=youtu.be
  9. I've always been fascinated by prerendered technology. There's just this awesome 90s vibe about it, it takes us back to our favorite childhood games. One of those games for me was Resident Evil. I was talking to a friend today who happens to be in the business of ripping RE animations from ISOs and other stuff. He showed me an interesting 'mask' image that left me wondering... (he ripped it using the Rice Video Plugin.) Please see the attached images. Now, I don't know much about PS1 development or its graphics API, so I don't know for sure how they implemented their illusion of depth. I'm almost certain (correct me if I'm wrong) that they didn't have programmable shaders at the time, but I guess that doesn't necessarily mean they didn't have access to a depth buffer. If that's the case and they used depth maps, well, depth maps are usually greyscale images, which leaves me wondering what those colors mean in that 'mask' image... They definitely look like they're somehow related to depth, but why colored? Couldn't they have just used, say, white and magenta? Any idea what those colors mean? (And if anybody knows some detail about how they created the illusion of depth, please share.)
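Whatever the original PS1 trick was, the modern way to get the same effect is per-region compositing: each mask region carries a single depth value, and any mask pixel whose depth is nearer than the character's depth at that pixel gets drawn over the character. A minimal per-pixel sketch of that idea (an illustration of the general technique, not a claim about how RE actually did it):

```c
#include <assert.h>

// Decide the final color of one screen pixel given the prerendered
// background, the rendered 3D character, and an optional mask region
// that carries one depth value for the whole foreground prop.
static unsigned PixelComposite(unsigned BackgroundColor, unsigned CharColor,
                               int CharCovers, float CharDepth,
                               int MaskCovers, float MaskDepth)
{
    if (!CharCovers)
        return BackgroundColor;          // nothing rendered here
    if (MaskCovers && MaskDepth < CharDepth)
        return BackgroundColor;          // foreground prop occludes the character
    return CharColor;                    // character is in front
}
```

With this scheme the mask never needs smooth greyscale depth, only one value per region, which is one plausible reason a ripper's visualization would show flat colored blobs rather than a gradient.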
  10. Hi, thanks for the great post! I also didn't know about NavMesh going free. About the article: I'm not sure, but in my browser none of the jpgs and gifs are appearing. They show up as links, and when I click on them they lead to broken pages. Any idea?
  11. Thanks for your fast reply. Yes, I also read that you can plug an FSM into a BT, but I'm wondering if an FSM can be represented as a BT - more specifically, whether the tiger statue puzzle I mentioned in my original question can somehow be a BT instead of an FSM. Does that make sense? Or is an FSM more suited for these types of things? (In my game I have many more triggers similar to that one: they change state - you interact with them once, they do A; interact with them in a different way, they do B; etc. So I'm wondering if a BT is better here than an FSM...) Thank you.
  12. Hi all, so I was recently learning about behavior trees and found them interesting, so I decided to implement them. But what I don't understand is: are BTs a replacement for FSMs? In other words, can BTs do anything FSMs can do? I have asked the question here (no answers) with a concrete example that I would love to see represented as a BT. Your help is greatly appreciated, thanks!
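On the BT-vs-FSM question above: in general a BT can express what an FSM does as long as it can read and write shared state (a blackboard). The "interact once → A, interact again → B" trigger becomes condition-guarded actions under a selector, with the state living in the blackboard instead of in the tree. A tiny sketch (node representation and names are made up):

```c
#include <assert.h>

typedef enum { BT_FAILURE = 0, BT_SUCCESS = 1 } bt_status;
typedef struct { int StatueRotated; } blackboard;
typedef bt_status (*bt_fn)(blackboard *BB);

static int LastAction; // records which branch ran, for illustration only

// Selector: tick children left to right, succeed on the first success.
static bt_status Selector(bt_fn *Children, int Count, blackboard *BB)
{
    for (int i = 0; i < Count; ++i)
        if (Children[i](BB) == BT_SUCCESS)
            return BT_SUCCESS;
    return BT_FAILURE;
}

// Branch A: fires on the first interaction and flips the blackboard flag.
static bt_status RotateStatue(blackboard *BB)
{
    if (BB->StatueRotated)
        return BT_FAILURE;
    BB->StatueRotated = 1;
    LastAction = 'A';
    return BT_SUCCESS;
}

// Branch B: only applicable once the statue has been rotated.
static bt_status TakeGem(blackboard *BB)
{
    if (!BB->StatueRotated)
        return BT_FAILURE;
    LastAction = 'B';
    return BT_SUCCESS;
}
```

Ticking the selector with {RotateStatue, TakeGem} does A on the first interaction and B afterwards; the FSM's "state" is just the blackboard flag the conditions check.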
  13. I don't have a problem getting Unity Pro if that covers the LOD thing you mentioned - but I just need to make sure Unity can do it all - for example, how to set up the pre-rendered backgrounds with 3D character models and have them cast shadows. Thanks for the answer, but I was hoping for something with a bit more detail.
  14. You have to provide some more details, like the specs of your computer, the error message, etc.