xycsoscyx

GDNet+ Basic
  • Content count: 461

Community Reputation

1153 Excellent

About xycsoscyx

  • Rank
    GDNet+

Personal Information

  • Interests
    Business
    Design
    Production
    Programming
  1. A quick search shows this: https://stackoverflow.com/questions/7736845/can-anyone-explain-the-fbx-format-for-me Note the explanation of PolygonVertexIndex: negative values denote the end of a polygon, and the actual index is the absolute value minus one. You should always have the same number of indices for vertices and UV coordinates, and your normal count should always be your index count times 3 (three floats per index). There's a rough sketch of the decoding below. Beyond that, what are you using to actually load the FBX file? Have you considered using something like Assimp to load the format for you (along with a range of other formats)? The resulting Assimp scene/mesh is a lot easier to work with, and it can handle things like auto-generating normals and UV coordinates and auto-triangulating polygons.
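    As a rough illustration of that decoding (a minimal sketch, assuming the raw PolygonVertexIndex values have already been parsed into an int array; names are made up):

```cpp
#include <cstdint>
#include <vector>

// Sketch of decoding FBX PolygonVertexIndex data: a negative value marks the
// last index of a polygon, and the real index is abs(value) - 1 (equivalently
// -value - 1, since FBX stores the last index as -(index + 1)).
std::vector<std::vector<int32_t>> DecodePolygons(const std::vector<int32_t>& polygonVertexIndex)
{
    std::vector<std::vector<int32_t>> polygons;
    std::vector<int32_t> current;
    for (int32_t raw : polygonVertexIndex)
    {
        if (raw < 0)
        {
            current.push_back(-raw - 1); // decode the terminator and close the polygon
            polygons.push_back(current);
            current.clear();
        }
        else
        {
            current.push_back(raw);
        }
    }
    return polygons;
}
```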
  2. Tone Mapping

    Typically you'd apply it at the end of your pipeline, since that's when you want to map from your higher source range down to your lower target range (i.e. the monitor). It really depends on your pipeline, though: if the UI is part of the pipeline, you may end up doing tone mapping before it, applying it only to your lit scene and then rendering the UI directly over that. There's a rough sketch of that ordering below.
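    A minimal sketch of that ordering, assuming a simple Reinhard-style operator (plain C++ standing in for whatever passes/shaders you actually use; names are illustrative):

```cpp
struct Color { float r, g, b; };

// Map an HDR value into 0..1; any curve works here, Reinhard is just the simplest.
Color ToneMapReinhard(const Color& hdr)
{
    return { hdr.r / (1.0f + hdr.r),
             hdr.g / (1.0f + hdr.g),
             hdr.b / (1.0f + hdr.b) };
}

void RenderFrame()
{
    // 1. Render the lit scene into an HDR target.
    // 2. Tone map HDR -> LDR (e.g. ToneMapReinhard per pixel) into the backbuffer.
    // 3. Draw the UI directly into the LDR backbuffer, untouched by tone mapping.
    // 4. Present.
}
```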
  3. 3D I like to learn about BSP rendering

    BSP trees are mostly used for the initial partitioning (typically done beforehand as a preprocess step), or at runtime for visibility or collision detection. Your vertices don't change during rendering; you just have a set of data per leaf of the BSP that, when determined to be visible, is pushed to your rendering system. So in answer to your question, BSP has nothing to do with the rendering pipeline. It's simply a way of partitioning the world into more manageable spaces and of determining the spatial relationships between those spaces. If you look at engines that used BSP, like Quake, it was also used during preprocessing to determine whether the world was closed or had gaps, and at runtime to get correct front-to-back and back-to-front sorting of faces, and to do collision detection by checking whether you're going from an empty area to a solid area or vice versa. There's a small sketch of the front-to-back traversal below.
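    A minimal sketch of how front-to-back ordering falls out of the tree (the structures are made up for illustration, not taken from any particular engine):

```cpp
#include <vector>

struct Vec3  { float x, y, z; };
struct Plane { float nx, ny, nz, d; };

struct BspNode
{
    Plane    split;
    BspNode* front = nullptr;   // child on the positive side of the plane
    BspNode* back  = nullptr;   // child on the negative side
    int      leafIndex = -1;    // >= 0 if this node is a leaf
};

float SideOf(const Plane& p, const Vec3& v)
{
    return p.nx * v.x + p.ny * v.y + p.nz * v.z - p.d;
}

// Visit leaves nearest-first relative to the camera; a renderer would push each
// visible leaf's precomputed face data instead of just collecting leaf indices.
void TraverseFrontToBack(const BspNode* node, const Vec3& camera, std::vector<int>& outLeaves)
{
    if (!node)
        return;
    if (node->leafIndex >= 0)
    {
        outLeaves.push_back(node->leafIndex);
        return;
    }
    const bool inFront = SideOf(node->split, camera) >= 0.0f;
    const BspNode* nearSide = inFront ? node->front : node->back;
    const BspNode* farSide  = inFront ? node->back  : node->front;
    TraverseFrontToBack(nearSide, camera, outLeaves);
    TraverseFrontToBack(farSide,  camera, outLeaves);
}
```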
  4. Level editing for 3D platformer

    I'm currently working on some type of level editor myself, but I've opted to forgo abstract shapes and just use existing meshes for editing. Just consider grid-aligned/snapped placement of existing meshes, and use those as building blocks for a larger level. If you want to support flood fill, you can still do that using meshes: select the start/end of a box, then fill that in with as many meshes as it takes. It's not one single cube that you're creating, it's a set of X by Y instances of a single block (there's a rough sketch of that fill below). This is similar to how TES/Fallout levels are made: they have a set of models for the different parts of a level (corners, walls, halls, etc.), then they place those on a grid to create the entire level. After that you can support non-aligned/snapped placement to add clutter (furniture, debris, etc.), or even rotate the walls to create curved halls, sloped floors, and so on.

    The advantage of all this is that you're likely to reuse the same models for different cells, which means you can use instancing to draw a single mesh X times. You can also use a separate model editor for better control of the individual meshes and texturing, instead of trying to support things like texturing and arbitrary shapes in the level editor.

    For the physics, I'm not sure how Bullet does things, but in Newton you can create a single collision object, then use that object for any number of bodies. This is basically the same as instancing with rendering: you have a single set of source data (authored separately, depending on how you store/load data), and each body has its own orientation that uses that single shared collision data.
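    A rough sketch of the box-fill idea (types and grid layout are illustrative, not tied to any particular engine or editor):

```cpp
#include <utility>
#include <vector>

// Hypothetical placement record: one entry per instance of the shared block mesh.
struct BlockInstance { int x, y, z; };

// Fill the axis-aligned box between two snapped grid corners with one instance
// per cell; the renderer can then draw the single block mesh instanced N times.
std::vector<BlockInstance> FillBox(int x0, int y0, int z0, int x1, int y1, int z1)
{
    if (x0 > x1) std::swap(x0, x1);
    if (y0 > y1) std::swap(y0, y1);
    if (z0 > z1) std::swap(z0, z1);

    std::vector<BlockInstance> instances;
    for (int z = z0; z <= z1; ++z)
        for (int y = y0; y <= y1; ++y)
            for (int x = x0; x <= x1; ++x)
                instances.push_back({ x, y, z });
    return instances;
}
```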
  5. Favourite cocktails?

    I like a good Manhattan or an Old Fashioned, really anything with Bourbon, Whiskey, or Whisky. swiftcoder, why rum+rum+rum+other when you can have rum+mint+lime+simple syrup+club soda (man, math is hard after so much rum)? I'll take a Mojito over a Hurricane any day of the week.
  6. At what time do you sleep and wake up?

    I've shifted my sleep schedule to accommodate less traffic before and after work. I tend to get up around 4AM now, but I still get to bed around 11PM. I've been doing this for a while now (well over a year), and the lack of sleep hasn't caught up to me yet, but I'm predicting that I'll hit a wall sooner or later and have to start going to bed earlier.
  7. OpenGL Quake3 ambient lighting

    Here's a good description of the light volume part of the level format: http://www.mralligator.com/q3/#Lightvols Here's an interesting article talking about actually trying to use the light volumes for rendering: http://www.sealeftstudios.com/blog/blog20160617.php
  8. It looks like TransformFrustum only takes a single value as a scale, so even though the matrix supports full XYZ 3D scaling, this breaks it down to 1D scaling and the X scale is used for all XYZ values. Oddly, if you look inside the TransformFrustum call, it actually creates a full XYZ 3D vector from the single scale value, then uses that with its internal transformations. It seems like the function should just take an XYZ scale to begin with, instead of converting back and forth.
  9. OpenGL Quake3 ambient lighting

    Quake 3 used a light grid: a 3D grid of precomputed lights that it creates at map compile time. It then finds the closest N lights for the dynamic models it is rendering, and just uses simple vertex lighting from that set of lights for each model. This is just for dynamic models, though; the scene itself is all precomputed lightmaps (though they are toggled/adjusted to achieve more dynamic effects, like a rocket's light trail). The Quake 3 code is readily available online, so you can look it over yourself to get an idea of how they did it. There's a rough sketch of the grid lookup below.
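    A very rough sketch of the grid lookup (this is not the actual Quake 3 code, just the general shape: snap the model position to a cell and use that cell's precomputed terms; the real thing interpolates between neighbouring cells):

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Hypothetical precomputed cell: ambient color plus one dominant direction/color.
struct GridCell { float ambient[3]; float direction[3]; float directed[3]; };

struct LightGrid
{
    float origin[3];      // world-space position of cell (0,0,0)
    float cellSize[3];    // spacing between cells on each axis
    int   dims[3];        // number of cells per axis
    std::vector<GridCell> cells;

    const GridCell& Sample(const float pos[3]) const
    {
        int idx[3];
        for (int i = 0; i < 3; ++i)
        {
            const int c = static_cast<int>((pos[i] - origin[i]) / cellSize[i]);
            idx[i] = std::clamp(c, 0, dims[i] - 1);
        }
        const std::size_t flat = static_cast<std::size_t>(idx[2]) * dims[1] * dims[0]
                               + static_cast<std::size_t>(idx[1]) * dims[0]
                               + static_cast<std::size_t>(idx[0]);
        return cells[flat];
    }
};
```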
  10. 2D Sprite SDF Tool

    Just to note, none of the steps in the link that Hodgman posted are specific to Photoshop; you can do the same thing with free software like GIMP. If you just want a command-line method, you could perform the same steps via ImageMagick as well, potentially all from a single command line.
  11. http://www.xsquawkbox.net/xpsdk/mediawiki/XPWidgetFunc_t   It looks like you need intptr_t instead of long for the parameters; most likely with whatever compiler settings you are using (64-bit?), intptr_t is not the same type as long. There's a rough sketch of the callback shape below.
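    From the linked page, the callback shape is roughly this (double-check the exact typedef against the headers in your SDK version; the names below are just an example):

```cpp
#include <cstdint>
#include "XPWidgets.h"   // X-Plane SDK header declaring XPWidgetMessage/XPWidgetID

// Note the intptr_t parameters rather than long; on 64-bit builds (LLP64 Windows
// in particular) those are not the same type.
int MyWidgetCallback(XPWidgetMessage inMessage,
                     XPWidgetID     inWidget,
                     intptr_t       inParam1,
                     intptr_t       inParam2)
{
    (void)inMessage; (void)inWidget; (void)inParam1; (void)inParam2;
    return 0;   // 0 = message not handled, let the default handling continue
}
```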
  12. What's actually going on with the screenshot?  It's a little hard to tell because of the saturation, but it looks as though you are doing lighting first, then subtracting the shadows over top of that?  If that's the case, then you need to flip it around a bit.  Shadows should restrict what a light contributes to the scene, not subtract from it after doing all lights.  Each shadow casting light source will only add to the non-shadowed area of the scene, so the shadowed areas still get the full light from other sources affecting that area (instead of looking like the shadow is dulling down the additional light sources).
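    A small sketch of the difference (made-up types; the point is that each light's shadow term masks only that light's contribution):

```cpp
#include <cstddef>
#include <vector>

struct Color { float r = 0, g = 0, b = 0; };
struct Light { Color color; };

// Hypothetical per-light inputs: the unshadowed diffuse term and a 0..1 shadow
// factor (0 = fully shadowed, 1 = fully lit) for that light only.
Color ShadePixel(const std::vector<Light>& lights,
                 const std::vector<float>& diffuse,
                 const std::vector<float>& shadow)
{
    Color result;
    for (std::size_t i = 0; i < lights.size(); ++i)
    {
        // Right: the shadow only restricts this light's contribution...
        const float lit = diffuse[i] * shadow[i];
        result.r += lights[i].color.r * lit;
        result.g += lights[i].color.g * lit;
        result.b += lights[i].color.b * lit;
    }
    // ...wrong would be summing every light unshadowed and subtracting a shadow
    // term afterwards, which darkens the other lights' contributions as well.
    return result;
}
```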
  13. Keep in mind that a rendering engine is far removed from a game engine. Typically, the game engine will contain a rendering engine, but that should only be a small part of the whole. Beyond that, a software rendering engine really isn't necessary for actual rendering; it's great for learning how things work under the hood, but learning D3D/OGL/Vulkan/etc. is going to be more applicable to modern game development than writing a software engine. A software renderer is great for doing CPU occlusion culling, though, but there you're working entirely with depth, visibility, and raw optimization for performance.
  14. Moiré patterns

    Instead of assigning each tile one of the images at random, what about using all 4 and having a vertex weight for each? This lets all 4 tiles surrounding a vertex use the same weight (to prevent seams), but then you randomly distribute the weights per vertex (or use noise, etc.) to break up the look per quad. The downside is that you're now doing a lot more texture fetches: 4 blended together vs 1 that varies per tile. I just realized you mentioned texture splatting, which is basically what I'm suggesting. Instead of using a separate texture for the splat, though, you could just use additional vertex data for the blend weights (even just a single RGBA value). There's a rough sketch of the blend below.
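    Roughly what the per-vertex blend amounts to (a CPU-side sketch; in practice the pixel shader would do this with the interpolated per-vertex RGBA weights):

```cpp
#include <algorithm>

struct Color   { float r, g, b; };
struct Weights { float a, b, c, d; };   // stands in for the per-vertex RGBA weights

// Blend the four tile textures' samples by the interpolated weights; shared
// vertices carry the same weights, so neighbouring quads meet without seams.
Color BlendTiles(const Color& t0, const Color& t1, const Color& t2, const Color& t3,
                 const Weights& w)
{
    const float sum = std::max(w.a + w.b + w.c + w.d, 1e-6f);
    const float na = w.a / sum, nb = w.b / sum, nc = w.c / sum, nd = w.d / sum;
    return { t0.r * na + t1.r * nb + t2.r * nc + t3.r * nd,
             t0.g * na + t1.g * nb + t2.g * nc + t3.g * nd,
             t0.b * na + t1.b * nb + t2.b * nc + t3.b * nd };
}
```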
  15. ground quad and ground texture sizes

    What about detail mapping: cover the large terrain with one texture, then add a repeating detail map on top of that. The larger texture should prevent any obvious patterns, while the detail map mostly improves the detail up close so that things aren't as blurry. Rastertek has a good series of terrain articles, including one specifically about detail mapping. There's a rough sketch of the combine step below.
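    A rough sketch of the combine step, assuming a common modulate-2x style blend (the exact formula in the Rastertek article may differ):

```cpp
#include <algorithm>

struct Color { float r, g, b; };

// Multiply the broad terrain texture by a tiling greyscale detail sample,
// scaled by 2 so a mid-grey (0.5) detail value leaves the base color unchanged.
Color ApplyDetailMap(const Color& base, float detail)
{
    auto combine = [](float c, float d) { return std::clamp(c * d * 2.0f, 0.0f, 1.0f); };
    return { combine(base.r, detail), combine(base.g, detail), combine(base.b, detail) };
}
```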