About xycsoscyx


  1. Keep in mind that GDI is just the interface used to draw to the screen; the software rasterizer is built on top of it, which means you can replace the presentation layer with anything you want. If you found good articles explaining how to create a software rasterizer and they happen to use GDI to draw the result to the screen, just swap out the GDI for something else (mode 13h, perhaps?).
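For illustration, here's a minimal sketch (my own, not from any particular article) of that separation: the rasterizer writes into a plain pixel buffer, and a tiny Presenter interface is the only part that knows about GDI, mode 13h, SDL, or whatever backend you swap in.

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// The rasterizer only ever touches this plain pixel buffer.
struct Framebuffer {
    int width, height;
    std::vector<uint32_t> pixels;  // 0xAARRGGBB
    Framebuffer(int w, int h) : width(w), height(h), pixels(w * h, 0) {}
    void put(int x, int y, uint32_t color) {
        if (x >= 0 && x < width && y >= 0 && y < height)
            pixels[y * width + x] = color;  // clip out-of-bounds writes
    }
};

// The only platform-specific piece: implement this once per backend
// (a GDI BitBlt, a mode 13h copy, an SDL texture upload, ...).
struct Presenter {
    virtual void present(const Framebuffer& fb) = 0;
    virtual ~Presenter() = default;
};
```

The rasterizer code never changes when you switch backends; only the Presenter implementation does.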
  2. xycsoscyx

    How to make a retro game today?

    Note that there's no reason you couldn't take Unreal or CryEngine or Unity and make a retro-looking game. Given the rendering systems in those engines, the result might not actually run on retro hardware, but you can easily tailor the models/textures/gameplay to have that retro feel.
  3. xycsoscyx

    Van houtte coffee dilemma...

    Yes, the whole point is the employee becoming a master of different skills; it's the same actor dressed as the different masters, including dressed as JCVD mastering the splits. I think it's pretty obvious that it's NOT actually him, though; while they do a good job on a lot of it, the hair and face are still quite obviously the same guy throughout the rest of the commercial.
  4. xycsoscyx

    Projective Texture Mapping

    Projective textures are basically just cubemaps projected out from the light source. You can create a line from the current point you are lighting back to the light (which you'll already have for your lighting equation), and use that vector as a cubemap texture coordinate. This lets you do a simple cubemap lookup using that coordinate to get your projected texture's color. The idea is that the cubemap adds a color filter over the area, but isn't affected by distance: anything along that vector, away from the light source, will wind up with the same color. You can also do the same thing with just a simple 2D texture instead of a cubemap, which lets you project a texture on just one side, such as with a spot light. The math behind that gets a little more complicated, but a quick search shows that there are quite a few examples online.
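As a sketch of what that cubemap lookup does under the hood (names are mine): the face is picked by the direction's dominant axis, so any point along the light-to-point vector lands on the same face and texel, which is why the filter isn't affected by distance.

```cpp
#include <cassert>
#include <cmath>

enum Face { PosX, NegX, PosY, NegY, PosZ, NegZ };

// Pick the cubemap face for a direction vector, the same selection
// the hardware does for a cubemap sample. (x, y, z) is the vector
// from the light to the point being lit; length is irrelevant.
Face cubemapFace(float x, float y, float z) {
    float ax = std::fabs(x), ay = std::fabs(y), az = std::fabs(z);
    if (ax >= ay && ax >= az) return x >= 0 ? PosX : NegX;
    if (ay >= az)             return y >= 0 ? PosY : NegY;
    return z >= 0 ? PosZ : NegZ;
}
```

Scaling the vector doesn't change the result, matching the "same color anywhere along the vector" behavior described above.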
  5. xycsoscyx

    How to design linear forest levels in videogames?

    What about flipping the canyon around: put the path on a ridge with sheer cliffs to the sides. That lets you still have a dense forest on top, but the player can't go too far to either side without falling off. For navigation, even in a dense forest you can still see bright spots through the trunks, especially if the path is flatter and there isn't a lot of dense underbrush or ground vegetation. If you set up the destinations with large lights/fires/etc., they can still give the player a waypoint without feeling artificial or ruining the atmosphere. Using dense vegetation that makes sense in that setting (thick sticker bushes, say) also gives you a way of blocking the player without resorting to the old "oh look, there is a fern, well I guess I can't get past it".
  6. GetRawInputData actually gives you the mouse motion in the X/Y directions. You can call it in response to a WM_INPUT message, after calling RegisterRawInputDevices so that your window actually receives those messages. https://msdn.microsoft.com/query/dev15.query?appId=Dev15IDEF1&l=EN-US&k=k(WINUSER%2FGetRawInputData);k(GetRawInputData);k(SolutionItemsProject);k(DevLang-C%2B%2B);k(TargetOS-Windows)&rd=true
  7. If you wanted to let the user decide how to handle the memory, then you should also let them decide how to allocate it. You're really just letting the user decide when to free the memory, but not how. A user could have a string with a custom allocator, so if you templatized this function to accept any string, then the user could truly manage it themselves. You'd end up using their allocator inside your function, and they could manage everything themselves.
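A minimal sketch of that idea (the function name is hypothetical): templatizing on basic_string means every allocation inside the function goes through whatever allocator the caller's string type uses.

```cpp
#include <cassert>
#include <string>

// Accept any basic_string specialization, so the caller's allocator
// (and therefore their memory management) is used for all growth here.
template <class CharT, class Traits, class Alloc>
void appendGreeting(std::basic_string<CharT, Traits, Alloc>& out) {
    out.append("hello");  // reallocation, if any, uses the caller's Alloc
}
```

A plain std::string works, but so does a basic_string with a pool or arena allocator; the function never needs to know which.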
  8. Getting the near/far values from the projection matrix is basically done by reversing the projection equation: take the perspective projection equation and solve for the near/far values. There's a plethora of information online about the perspective projection formula, and plenty more on how to solve for the near/far values (including the caveats), so I'll leave that to you to look up. As for XMVECTOR, the plane equation is just a 4D set of values (a, b, c, d), so anything that holds four values can be used to store a plane. You can store it inside an XMVECTOR if you like, then operate on it while it's loaded into the vector. DirectXMath has a wide range of functions to set/get values inside an XMVECTOR.
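As a sketch of the algebra, assuming a left-handed D3D-style projection where m33 = zf/(zf-zn) and m43 = -zn*zf/(zf-zn) (other conventions need the same idea with different signs, one of the caveats mentioned above):

```cpp
#include <cassert>
#include <cmath>

// Recover near/far from the two projection matrix terms.
// From m43 = -zn * m33          =>  zn = -m43 / m33
// From m33 = zf / (zf - zn)     =>  zf = m33 * zn / (m33 - 1)
void nearFarFromProjection(float m33, float m43, float& zn, float& zf) {
    zn = -m43 / m33;
    zf = m33 * zn / (m33 - 1.0f);
}
```

Note the zf formula divides by (m33 - 1), which cancels badly in float for large far/near ratios; that's one reason the caveats matter.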
  9. I haven't looked at it extensively, but it seems to be more about loading content into FBX files and working with them than about loading the data for rendering. Triangulation and mesh optimization are useful parts of Assimp, and it loads everything into a (more straightforward?) structure that makes it easy to handle and put into vertex/index buffers.
  10. A quick search shows this: https://stackoverflow.com/questions/7736845/can-anyone-explain-the-fbx-format-for-me Note the explanation for PolygonVertexIndex: negative values denote the end of a polygon, and the actual index is the absolute value minus one. You should always have the same number of indices between vertices and UV coordinates, and your normal array should be your index count times 3 (three floats per index). Beyond that, what are you using to actually load the FBX file? Have you considered using something like Assimp to load the format for you (along with a range of other formats)? The resulting Assimp scene/mesh is a lot easier to work with, and it can handle auto-generation of things like normals and UV coordinates, and auto-triangulation of polygons.
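A sketch of that decoding rule, with a simple fan triangulation bolted on (which assumes convex polygons, as Assimp's triangulation step also does by default):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Decode FBX PolygonVertexIndex: a negative value marks the last corner
// of a polygon, and the real index is (-value - 1), i.e. ~value.
// Each finished polygon is fan-triangulated into the output index list.
std::vector<int> decodeAndTriangulate(const std::vector<int>& polyVertIndex) {
    std::vector<int> tris;
    std::vector<int> poly;
    for (int v : polyVertIndex) {
        poly.push_back(v < 0 ? ~v : v);   // ~v == -v - 1
        if (v < 0) {                      // polygon finished: fan it
            for (size_t i = 2; i < poly.size(); ++i) {
                tris.push_back(poly[0]);
                tris.push_back(poly[i - 1]);
                tris.push_back(poly[i]);
            }
            poly.clear();
        }
    }
    return tris;
}
```

For example, the quad {0, 1, 2, -4} decodes to corners 0,1,2,3 and fans into triangles (0,1,2) and (0,2,3).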
  11. xycsoscyx

    Tone Mapping

    Typically you'd apply it at the end of your pipeline, since that's when you want to map from your higher source range to your lower target range (i.e. the monitor). It really depends on your pipeline though, if the UI is part of the pipeline then you may end up doing tone mapping before that, only on your lit scene, then rendering the UI over that directly.
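A minimal sketch of that end-of-pipeline step, using the simple Reinhard operator (one of many curves you could pick) followed by gamma encoding; a real shader would run this per channel or on luminance:

```cpp
#include <cassert>
#include <cmath>

// Map an HDR value into [0, 1): Reinhard compression, then gamma
// encoding for the monitor's lower target range.
float toneMap(float hdr, float gamma = 2.2f) {
    float ldr = hdr / (1.0f + hdr);      // Reinhard: c / (1 + c)
    return std::pow(ldr, 1.0f / gamma);  // encode for display
}
```

Since UI assets are usually authored in display space already, running them through this curve would wash them out, which is why you'd tone map the lit scene first and composite the UI afterwards.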
  12. xycsoscyx

    I like to learn about BSP rendering

    BSP trees are mostly used for the initial partitioning (typically done beforehand as a preprocess step), or at runtime for visibility or collision detection. Your vertices don't change during rendering; you just have a set of data per leaf of the BSP that, when determined to be visible, is pushed to your rendering system. So in answer to your question, BSP has nothing to do with the rendering pipeline. It's simply a way of partitioning the world into more manageable spaces, and of determining spatial relationships between those spaces. If you look at engines that used BSP, like Quake, it was also used during preprocessing to determine whether the world was closed or had gaps, and at runtime to get correct front-to-back and back-to-front sorting of faces, and to do collision detection by checking if you're going from an empty area to a solid area or vice versa.
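A sketch of the runtime sorting part (structure names are mine, not Quake's): a recursive walk that visits the viewer's side of each splitting plane first yields front-to-back order, and swapping the two recursive calls yields back-to-front, as painter's-algorithm renderers used it.

```cpp
#include <cassert>
#include <vector>

// One node of a BSP stored in a flat array. Inner nodes carry a
// splitting plane (nx*x + ny*y + nz*z + d = 0); leaves carry a payload.
struct BspNode {
    float nx, ny, nz, d;        // splitting plane
    int front = -1, back = -1;  // child indices into the array (-1 = none)
    int leafId = -1;            // >= 0 marks a leaf
};

// Visit leaves in front-to-back order as seen from point (px, py, pz).
void walkFrontToBack(const std::vector<BspNode>& nodes, int idx,
                     float px, float py, float pz, std::vector<int>& order) {
    if (idx < 0) return;
    const BspNode& n = nodes[idx];
    if (n.leafId >= 0) { order.push_back(n.leafId); return; }
    float side = n.nx * px + n.ny * py + n.nz * pz + n.d;
    int first  = side >= 0 ? n.front : n.back;   // viewer's side first
    int second = side >= 0 ? n.back  : n.front;
    walkFrontToBack(nodes, first,  px, py, pz, order);
    walkFrontToBack(nodes, second, px, py, pz, order);
}
```

The same plane-side test is what the collision query uses: instead of recursing both ways, you follow the side the moving point is on and check whether you cross from empty into solid.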
  13. xycsoscyx

    Level editing for 3D platformer

    I'm currently working on some type of level editor myself, but I've opted to forgo abstract shapes and just use existing meshes for editing. Just consider grid-aligned/snapped placement of existing meshes, and use those as building blocks for a larger level. If you want to support flood fill, you can still do that using meshes: select a start/end of a box, then fill that in with as many meshes as it takes. It's not one single cube that you're creating, it's a set of X by Y instances of a single block. This is similar to how TES/Fallout levels are made; they have a set of models for the different parts of a level (corners, walls, halls, etc.), then place those on a grid to create the entire level. After that you can support non-aligned/unsnapped placement to add clutter (furniture, debris, etc.), or even rotate the walls to create curved halls, sloped floors, etc. The advantage of all this is that you're likely to reuse the same models across different cells, which means you can use instancing to draw a single mesh X times. You can also use a separate model editor for better control of the individual meshes and texturing, instead of trying to support things like texturing and arbitrary shapes in the level editor. For the physics, I'm not sure how Bullet does things, but in Newton you can create a single collision object, then use that object for any number of bodies. This is basically the same as instancing in rendering: you have a single set of source data (authored separately, depending on how you store/load data), and each body has its own orientation that uses that single source collision data.
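The snapped-placement step is trivial but it's the core of the grid idea; a sketch (my own):

```cpp
#include <cassert>
#include <cmath>

// Quantize a dropped position to the cell grid, so repeated blocks
// line up like a TES/Fallout level kit. Applied per axis.
float snapToGrid(float v, float cellSize) {
    return std::round(v / cellSize) * cellSize;
}
```

Flood fill then just iterates cell coordinates between the snapped start and end corners, emitting one instance of the chosen block per cell.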
  14. xycsoscyx

    Favourite cocktails?

    I like a good Manhattan or an Old Fashioned, really anything with Bourbon, Whiskey, or Whisky. swiftcoder, why rum+rum+rum+other when you can have rum+mint+lime+simple syrup+club soda (man, math is hard after so much rum)? I'll take a Mojito over a Hurricane any day of the week.
  15. xycsoscyx

    At what time do you sleep and wake up?

    I've shifted my sleep schedule to accommodate less traffic before and after work. I tend to get up around 4AM now, but I still get to bed around 11PM. I've been doing this for a while now (well over a year), and the lack of sleep hasn't caught up to me yet, but I'm predicting that I'll hit a wall sooner or later and have to start going to bed earlier.