
About xycsoscyx

  1. How to design linear forest levels in videogames?

    What about flipping the canyon around: have it on a ridge with sheer cliffs to the side. That lets you still have a dense forest on top, but the player can't go too far to the side without falling off. For navigation, even with a dense forest you can still see bright spots through the trunks, especially if the path is flatter and there isn't a lot of dense brush or vegetation on the ground. If you set up the destinations with large lights/fires/etc., that still gives the player a waypoint without feeling artificial or ruining the atmosphere. Using dense vegetation that makes sense in that setting (thick sticker bushes) also gives you a way of blocking the player without resorting to the old "oh look, there is a fern, well I guess I can't get past it".
  2. GetRawInputData actually gives you the mouse motion in the X/Y directions; you can call it in response to a WM_INPUT message, after making sure to call RegisterRawInputDevices so that your window actually receives the events. https://msdn.microsoft.com/query/dev15.query?appId=Dev15IDEF1&l=EN-US&k=k(WINUSER%2FGetRawInputData);k(GetRawInputData);k(SolutionItemsProject);k(DevLang-C%2B%2B);k(TargetOS-Windows)&rd=true
  3. If you wanted to let the user decide how to handle the memory, then you should also let them decide how to allocate it. You're really just letting the user decide when to free the memory, but not how. A user could have a string with a custom allocator, so if you templatized this function to accept any string, then the user could truly manage it themselves. You'd end up using their allocator inside your function, and they could manage everything themselves.
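    A minimal sketch of this idea, with hypothetical names: templating the function on the string type means the caller's allocator travels with the string, so the caller controls both allocation and deallocation.

```cpp
#include <cassert>
#include <string>

// Sketch (hypothetical function): templating on the string type lets any
// std::basic_string instantiation through, including one with a custom
// allocator. The result is built with the input's own allocator, so the
// caller truly manages the memory end to end.
template <typename String>
String MakeGreeting(const String& name)
{
    // Construct the result with the caller's allocator instance.
    String result(name.get_allocator());
    result.assign("Hello, ");
    result += name;
    return result;
}
```

    With a plain `std::string` this behaves like any string function; with a pool- or arena-allocated string type, no allocation escapes the caller's allocator.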
  4. Getting the near/far values from the projection matrix is basically done by reversing the projection matrix equation. In particular, you take the perspective projection equation, then solve for the near/far values. There's a plethora of information online about the perspective projection formula, and plenty more on how to solve for the near/far values (including caveats), so I'll leave that to you to look up. As for XMVECTOR, the plane equation is just a 4D set of values (a, b, c, d), so anything that holds 4D values can be used to store a plane. You can store this inside an XMVECTOR if you like, then operate on it while it's loaded into the vector. DirectXMath has a wide range of functions to set/get values inside an XMVECTOR.
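    As a worked example of "reversing the projection equation", here is a sketch assuming a standard left-handed D3D-style perspective matrix (the kind XMMatrixPerspectiveFovLH builds), where in row-major terms m22 = f/(f-n) and m32 = -n*f/(f-n); other conventions need the matching derivation.

```cpp
#include <cassert>
#include <cmath>

// Sketch: recover near/far from the two relevant elements of a left-handed
// D3D-style projection matrix. From m22 = f/(f-n) and m32 = -n*f/(f-n):
//   n = -m32 / m22
//   f =  m22 * n / (m22 - 1)
void ExtractNearFar(float m22, float m32, float& nearZ, float& farZ)
{
    nearZ = -m32 / m22;
    farZ  = m22 * nearZ / (m22 - 1.0f);
}
```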
  5. I haven't looked at it extensively, but it seems to be more about authoring content in FBX files and working with them than about loading the data for rendering. Triangulation and mesh optimization are useful parts of Assimp, and it loads everything into a more straightforward structure that makes it easy to handle and put into vertex/index buffers.
  6. Quick search shows this: https://stackoverflow.com/questions/7736845/can-anyone-explain-the-fbx-format-for-me Note the explanation for PolygonVertexIndex: negative values denote the end of the polygon, the actual index being the absolute value minus one. You should always have the same number of indices between vertices and UV coordinates, and your normal count should always be your index count times 3. Beyond that, what are you using to actually load the FBX file? Have you considered using something like Assimp to load the format for you (along with a range of other formats)? The end result of the Assimp scene/mesh is a lot easier to work with, and it can handle auto-generation of things like normals and UV coordinates, and auto-triangulation of polygons.
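    The PolygonVertexIndex rule above can be sketched in a few lines: a negative value both marks the last vertex of the current polygon and encodes the real index as the absolute value minus one (equivalently, the bitwise complement ~v).

```cpp
#include <cassert>
#include <vector>

// Sketch: split a raw FBX PolygonVertexIndex stream into polygons.
// A negative entry closes the current polygon; its real index is -v - 1.
std::vector<std::vector<int>> DecodePolygons(const std::vector<int>& raw)
{
    std::vector<std::vector<int>> polygons;
    std::vector<int> current;
    for (int v : raw)
    {
        if (v < 0)
        {
            current.push_back(-v - 1); // decode the final index
            polygons.push_back(current); // polygon is complete
            current.clear();
        }
        else
        {
            current.push_back(v);
        }
    }
    return polygons;
}
```

    For example, the stream 0, 1, 2, -4 describes a single quad with indices 0, 1, 2, 3.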
  7. Tone Mapping

    Typically you'd apply it at the end of your pipeline, since that's when you want to map from your higher source range to your lower target range (i.e., the monitor). It really depends on your pipeline though: if the UI is part of the pipeline, then you may end up doing tone mapping before that, only on your lit scene, then rendering the UI over that directly.
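    As the simplest concrete example of that end-of-pipeline mapping, here is the classic Reinhard operator; real pipelines usually pick a fancier curve and apply gamma correction afterwards, but the shape of the step is the same.

```cpp
#include <cassert>
#include <cmath>

// Sketch: Reinhard tone mapping compresses an HDR value in [0, inf)
// into [0, 1), applied per channel or on luminance at the end of the
// lighting pipeline, before (or instead of) rendering UI over the result.
float ReinhardTonemap(float hdr)
{
    return hdr / (1.0f + hdr);
}
```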
  8. 3D I like to learn about BSP rendering

    BSP trees are mostly used for the initial partitioning (typically done beforehand as a preprocessing step), and at runtime for visibility or collision detection. Your vertices don't change during rendering; you just have a set of data per leaf of the BSP that, when determined to be visible, is pushed to your rendering system. So in answer to your question, BSP has nothing to do with the rendering pipeline. It's simply a way of partitioning the world into more manageable spaces, and of determining spatial relationships between those spaces. If you look at engines that used BSP, like Quake, it was also used during preprocessing to determine whether the world was closed or had gaps, at runtime to get correct front-to-back and back-to-front sorting of faces, and to do collision detection by checking whether you're going from an empty area to a solid area or vice versa.
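    The front-to-back sorting mentioned above can be sketched with hypothetical structures: classify the viewer against each splitting plane and recurse into the near side first, so leaves come out in front-to-back order (visiting far side first gives back-to-front).

```cpp
#include <cassert>
#include <vector>

// Sketch (2D for brevity): each node splits space with a plane a*x + b*y = d.
// Child index -1 means that side is a leaf, identified by frontLeaf/backLeaf.
struct BspNode
{
    float a, b, d;
    int front = -1, back = -1;
    int frontLeaf = 0, backLeaf = 0;
};

void VisitFrontToBack(const std::vector<BspNode>& nodes, int node,
                      float x, float y, std::vector<int>& order)
{
    const BspNode& n = nodes[node];
    float side = n.a * x + n.b * y - n.d;
    // The side containing the viewer is visited first.
    int first      = (side >= 0) ? n.front     : n.back;
    int second     = (side >= 0) ? n.back      : n.front;
    int firstLeaf  = (side >= 0) ? n.frontLeaf : n.backLeaf;
    int secondLeaf = (side >= 0) ? n.backLeaf  : n.frontLeaf;
    if (first >= 0) VisitFrontToBack(nodes, first, x, y, order);
    else order.push_back(firstLeaf);
    if (second >= 0) VisitFrontToBack(nodes, second, x, y, order);
    else order.push_back(secondLeaf);
}
```

    Note this is just the traversal; nothing here touches vertex data, which is exactly why BSP sits outside the rendering pipeline proper.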
  9. Level editing for 3D platformer

    I'm currently working on some type of level editor myself, but I've opted to forgo abstract shapes and just use existing meshes for editing. Just consider grid-aligned/snapped placement of existing meshes, and use those as building blocks for a larger level. If you want to support flood fill, you can still do that using meshes: select a start/end of a box, then fill that in with as many meshes as it takes. It's not one single cube that you're creating, it's a set of X by Y instances of a single block. This is similar to how TES/Fallout levels are made: they have a set of models for the different parts of a level (corners, walls, halls, etc.), then they place those on a grid to create the entire level. After that you can support non-aligned/snapped placement to add clutter (furniture, debris, etc.), or even rotate the walls to create curved halls, sloped floors, etc. The advantage of all this is that you're likely to reuse the same models for different cells, which means you can use instancing to draw a single mesh X times. You can also use a separate model editor for better control of the individual meshes and texturing, instead of trying to support things like texturing and random shapes in the level editor. For the physics, I'm not sure how Bullet does things, but in Newton you can create a single collision object, then use that object for any number of bodies. This is basically the same as instancing with rendering: you have a single set of source data (authored separately, depending on how you store/load data), and each body has its own orientation that uses that single source collision data.
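    The "fill a box with instances" idea above is small enough to sketch directly (hypothetical types): the fill produces one transform per grid cell, all referencing the same block mesh, which is exactly what an instanced draw call wants.

```cpp
#include <cassert>
#include <vector>

// Sketch: flood-filling a selected box on the grid. Each entry is one
// placement of the same source mesh; the renderer can submit all of them
// in a single instanced draw.
struct Instance { int cellX, cellY; };

std::vector<Instance> FillBox(int x0, int y0, int x1, int y1)
{
    std::vector<Instance> instances;
    for (int y = y0; y <= y1; ++y)
        for (int x = x0; x <= x1; ++x)
            instances.push_back({x, y}); // one instance per cell, same mesh
    return instances;
}
```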
  10. Favourite cocktails?

    I like a good Manhattan or an Old Fashioned, really anything with Bourbon, Whiskey, or Whisky. swiftcoder, why rum+rum+rum+other when you can have rum+mint+lime+simple syrup+club soda (man, math is hard after so much rum)? I'll take a Mojito over a Hurricane any day of the week.
  11. At what time do you sleep and wake up?

    I've shifted my sleep schedule to accommodate less traffic before and after work. I tend to get up around 4AM now, but I still get to bed around 11PM. I've been doing this for a while now (well over a year), and the lack of sleep hasn't caught up to me yet, but I'm predicting that I'll hit a wall sooner or later and have to start going to bed earlier.
  12. OpenGL Quake3 ambient lighting

    Here's a good description of the light volume part of the level format: http://www.mralligator.com/q3/#Lightvols
    Here's an interesting article talking about actually trying to use the light volumes for rendering: http://www.sealeftstudios.com/blog/blog20160617.php
  13. It looks like TransformFrustum only takes a single value as a scale, so even though the matrix supports full XYZ 3D scaling, this breaks it down to 1D scaling: the X scale is used for all XYZ values. Oddly, if you look inside the TransformFrustum call, it actually creates a full XYZ 3D vector from the single scale value, then uses that with its internal transformations. It seems like the function should just take an XYZ scale to begin with, instead of converting back and forth.
  14. OpenGL Quake3 ambient lighting

    Quake 3 used a light grid: a 3D grid of precomputed lights that it creates at map compile time, then it finds the closest N lights for the dynamic models it is rendering. It then just uses simple vertex lighting from that set of lights for that model. This is just for dynamic models though; the scene itself is all precomputed lightmaps (though they are toggled/adjusted to achieve more dynamic effects, like a rocket's light trail). The Quake 3 code is readily available online, so you can look it over yourself to get an idea of how they did it.
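    The lookup side of a light grid can be sketched with hypothetical structures (Quake 3's actual grid layout differs in detail): map the model's world position into a uniform grid cell, then read the precomputed lighting stored there.

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>

// Sketch: a uniform 3D light grid. CellIndex maps a world position to the
// flat index of its cell, clamping so positions outside the grid use the
// nearest border cell (the first step before fetching stored light data).
struct LightGrid
{
    float originX, originY, originZ; // world-space corner of the grid
    float cellSize;                  // uniform cell spacing
    int nx, ny, nz;                  // cell counts along each axis
};

int CellIndex(const LightGrid& g, float x, float y, float z)
{
    int ix = std::min(g.nx - 1, std::max(0, (int)std::floor((x - g.originX) / g.cellSize)));
    int iy = std::min(g.ny - 1, std::max(0, (int)std::floor((y - g.originY) / g.cellSize)));
    int iz = std::min(g.nz - 1, std::max(0, (int)std::floor((z - g.originZ) / g.cellSize)));
    return ix + g.nx * (iy + g.ny * iz);
}
```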
  15. 2D Sprite SDF Tool

    Just to note, none of the steps in the link that Hodgman posted are specific to Photoshop; you can do the same thing with free software like GIMP. If you just want a command-line method, you could perform the same steps via ImageMagick as well, potentially all from a single command line.