xycsoscyx

GDNet+

  • Content Count: 477
  • Joined
  • Last visited

Community Reputation

1165 Excellent

About xycsoscyx

  • Rank: GDNet+

Personal Information

  • Role: Creative Director, Game Designer, Programmer
  • Interests: Business, Design, Production, Programming

  1. Have you tried not adjusting the sizes you pass to System_SetState? As was mentioned earlier, it seems like the engine accounts for window styles itself, since the whole point is to abstract the OS-dependent code away. Try passing in the exact size you want your viewport to be: if you want 800x600, pass in 800x600 directly, with no adjustment. It seems like there's still some discrepancy between your client area, the render target, and/or your viewport/projection calculations. These all need to be set consistently and kept equal to prevent the kind of aliasing you are seeing (there's a small sketch of this after the list below).
  2. Are you sure your window is just WS_OVERLAPPEDWINDOW? Have you tried querying the style flags from the window itself first, to make sure you are adjusting for the correct ones? (See the sketch after this list.)
  3. https://docs.microsoft.com/en-us/windows/uwp/gaming/glsl-to-hlsl-reference is a good reference for OpenGL <-> D3D. SV_Position is the equivalent semantic in D3D, though it's only supported in D3D10+ (so not D3D9). It's been a while since I really played around with the values of the semantics, but it should contain the Z value as expected (it's a float4, so it's actually the XY, depth, and homogeneous W coordinate if I recall).
  4. Where does the parent come into play in any of this? I see you storing it, but never actually using it to transform the child by the parent's transformation; shouldn't each bone be relative to its parent? That doesn't entirely explain the screenshot, because it would typically leave everything imploded at the center, but it should still be used somewhere (see the sketch after this list). Which document are you following for the MD5 model and animation format? http://tfc.duke.free.fr/coding/md5-specs-en.html?
  5. Can you post a screenshot of what you mean by strange? Strange as in all points have collapsed into the center, strange as in the arms are backwards, strange as in...? Also, I second vinterberg's suggestion to try float4 for the position; you might have some packing issues going on using float3 (I don't recall offhand how arrays of float3 values get handled, but see the packing sketch after this list).
  6. xycsoscyx

    Can't get antialiased picture from Direct3D

    Kudos for fixing the window scaling issue, I didn't even notice that; I thought you meant the minutiae (forest for the trees and all that jazz). For anisotropic filtering, you'll need to increase the sample count to more than 1 to notice the difference. I think a single sample is actually the same as having it disabled (don't quote me on that), since without multiple samples it can't even things out over the surface; at the least, a single sample won't offer nearly the same improvement as more samples. Note that increasing the sample count does decrease performance, but that's the price of enabling the filtering. This is why it's typically left as a user option: if users are OK with the stippling, or have lower-end hardware and need those few extra cycles, they can decrease or disable the filtering. (There's a sampler-setup sketch after this list.)
  7. xycsoscyx

    Can't get antialiased picture from Direct3D

    Looking at the first picture, it does have MSAA enabled; it's just subtle, and the blockiness is still noticeable because of the extreme change in contrast at the edge, as well as the lower resolution being rendered (which just makes everything more noticeable overall). Note the shades of red/gray along the lines when you zoom in; you wouldn't get that if MSAA weren't enabled. You may need to increase the MSAA level, or look at a different antialiasing method, if this isn't enough for your needs (see the MSAA sketch after this list). For the second image, mipmaps wouldn't fix that and aren't intended to address that type of aliasing. Have you tried enabling anisotropic filtering, though? That takes into account more than just the depth and does actually help reduce the kind of aliasing you are seeing from the texture filtering.
  8. Keep in mind that GDI is just the interface used to draw to the screen; the software rasterizer is built on top of that, which means you can replace the presentation with anything you want. If you found good articles explaining how to create a software rasterizer, but they happen to use GDI to get the result onto the screen, just swap out the GDI part for something else (mode 13h, perhaps?). There's a sketch of that separation after this list.
  9. xycsoscyx

    How to make a retro game today ?

    Note that there's no reason you couldn't take Unreal or CryEngine or Unity and make a retro-looking game. Given the rendering systems in those engines, it might not actually run on retro hardware, but you can easily tailor the models/textures/gameplay to have that retro feel.
  10. xycsoscyx

    Van houtte coffee dillema...

    Yes, the whole point is that the employee becomes a master of different skills; it's the same actor dressed as the different masters, including dressed as JCVD mastering the splits. I think it's pretty obvious that it's NOT actually him, though; while they do a good job on a lot of it, the hair and face are still quite obviously the same guy throughout the rest of the commercial.
  11. xycsoscyx

    Projective Texture Mapping

    Projective textures are basically just cubemaps projected out from the light source. You can create a vector from the point you are currently lighting back to the light (which you'll already have for your lighting equation) and use that vector as a cubemap texture coordinate. This lets you do a simple cubemap lookup with that coordinate to get the projected texture's color. The idea is that the cubemap adds a color filter over the area but isn't affected by distance; anything along that vector, away from the light source, winds up with the same color. You can also do the same thing with a simple 2D texture instead of a cubemap, which lets you project a texture on just one side, such as with a spot light. The math behind that gets a little more complicated (there's a sketch of the projection step after this list), but a quick search shows quite a few examples of it online.
  12. xycsoscyx

    How to design linear forest levels in videogames?

    What about flipping the canyon around and putting the path on a ridge with sheer cliffs to the sides? That lets you still have a dense forest on top, but the player can't go too far to the side without falling off. For navigation, even with a dense forest you can still see bright spots through the trunks, especially if the path is flatter and there isn't a lot of dense underbrush or vegetation on the ground. If you set up the destinations with large lights/fires/etc., they can still give the player a waypoint without feeling artificial or ruining the atmosphere. Using dense vegetation that makes sense in that setting (dense sticker bushes) also gives you a way of blocking the player without resorting to the old "oh look, there is a fern, well I guess I can't get past it".
  13. GetRawInputData actually gives you the mouse motion in the X/Y directions. You can call it in response to a WM_INPUT event, after making sure to call RegisterRawInputDevices so that your window actually receives those events (see the sketch after this list). https://msdn.microsoft.com/query/dev15.query?appId=Dev15IDEF1&l=EN-US&k=k(WINUSER%2FGetRawInputData);k(GetRawInputData);k(SolutionItemsProject);k(DevLang-C%2B%2B);k(TargetOS-Windows)&rd=true
  14. If you want to let the user decide how to handle the memory, then you should also let them decide how to allocate it. Right now you're really just letting the user decide when to free the memory, but not how. A user could have a string with a custom allocator, so if you templatized this function to accept any string type, the user could truly manage the memory themselves: you'd end up using their allocator inside your function, and they could manage everything on their side (see the sketch after this list).
  15. Getting the near/far values from the projection matrix is basically done by reversing the projection equation: take the perspective projection formula, then solve for the near/far values. There's a plethora of information online about the perspective projection formula, and plenty on how to solve for the near/far values (including the caveats), so I'll leave the details for you to look up; a short sketch follows the list below. As for XMVECTOR, the plane equation is just a 4D set of values (a, b, c, d), so anything that holds four values can be used to store a plane. You can keep it inside an XMVECTOR if you like and operate on it while it's loaded into the vector; DirectXMath has a wide range of functions to set/get values inside an XMVECTOR.
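A minimal sketch of what item 1 describes: keeping the client area, back buffer, and viewport equal. This assumes D3D11; the swapChain and context parameters are hypothetical, already-created objects, and the engine's own System_SetState may well do the equivalent of this internally.

```cpp
#include <windows.h>
#include <d3d11.h>

// Resize the back buffer and viewport to exactly match the window's client area.
// Any render target views onto the back buffer must be released before calling this.
void MatchSizes(HWND hwnd, IDXGISwapChain* swapChain, ID3D11DeviceContext* context)
{
    RECT rc;
    GetClientRect(hwnd, &rc);                       // the actual drawable area, no borders or caption
    UINT width  = rc.right - rc.left;
    UINT height = rc.bottom - rc.top;

    // Back buffer matches the client area...
    swapChain->ResizeBuffers(0, width, height, DXGI_FORMAT_UNKNOWN, 0);

    // ...and so does the viewport; any mismatch here shows up as stretching/aliasing.
    D3D11_VIEWPORT vp = {};
    vp.Width    = static_cast<FLOAT>(width);
    vp.Height   = static_cast<FLOAT>(height);
    vp.MaxDepth = 1.0f;
    context->RSSetViewports(1, &vp);
}
```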
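For item 2, a sketch of querying the window's actual styles before adjusting, instead of assuming WS_OVERLAPPEDWINDOW. These are standard Win32 calls; the function name is illustrative.

```cpp
#include <windows.h>

// Compute the outer window size needed for a desired client size, using the
// styles the window actually has rather than a hard-coded style constant.
void ResizeToClient(HWND hwnd, int clientWidth, int clientHeight)
{
    DWORD style   = static_cast<DWORD>(GetWindowLongPtr(hwnd, GWL_STYLE));
    DWORD exStyle = static_cast<DWORD>(GetWindowLongPtr(hwnd, GWL_EXSTYLE));
    BOOL  hasMenu = (GetMenu(hwnd) != nullptr);

    RECT rc = { 0, 0, clientWidth, clientHeight };
    AdjustWindowRectEx(&rc, style, hasMenu, exStyle);   // grows rc to include borders/caption

    SetWindowPos(hwnd, nullptr, 0, 0, rc.right - rc.left, rc.bottom - rc.top,
                 SWP_NOMOVE | SWP_NOZORDER | SWP_NOACTIVATE);
}
```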
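For item 4, a sketch of how each bone's transform is usually accumulated against its parent. It uses DirectXMath rather than whatever math types the original code has, and the Joint layout here is illustrative, not taken from the MD5 loader in question.

```cpp
#include <DirectXMath.h>
using namespace DirectX;

// One joint of a skeleton; position/orientation are stored relative to the parent.
struct Joint
{
    int      parent;        // -1 for the root
    XMVECTOR localPos;      // translation relative to the parent
    XMVECTOR localOrient;   // quaternion relative to the parent
    XMVECTOR worldPos;      // filled in below
    XMVECTOR worldOrient;   // filled in below
};

// Joints are assumed to be ordered so that a parent always precedes its children.
void BuildWorldTransforms(Joint* joints, int count)
{
    for (int i = 0; i < count; ++i)
    {
        if (joints[i].parent < 0)
        {
            joints[i].worldPos    = joints[i].localPos;
            joints[i].worldOrient = joints[i].localOrient;
        }
        else
        {
            const Joint& p = joints[joints[i].parent];
            // Rotate the local offset by the parent's orientation, then add the parent's position.
            joints[i].worldPos    = XMVectorAdd(p.worldPos,
                                        XMVector3Rotate(joints[i].localPos, p.worldOrient));
            // Combined rotation: local rotation followed by the parent's rotation.
            joints[i].worldOrient = XMQuaternionNormalize(
                                        XMQuaternionMultiply(joints[i].localOrient, p.worldOrient));
        }
    }
}
```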
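For item 5, one way float3 arrays can bite if the positions are uploaded through a constant buffer: HLSL pads each array element out to a 16-byte slot, so a tightly packed C++ array won't line up. This is only a guess at the cause; the names and sizes are illustrative.

```cpp
#include <DirectXMath.h>

// HLSL side (for comparison):  cbuffer Bones { float3 bonePos[64]; };
// Each of those 64 elements occupies a full 16-byte register slot.

struct BonePositionsTightlyPacked      // 64 * 12 = 768 bytes, does NOT match the cbuffer layout
{
    float bonePos[64][3];
};

struct BonePositionsPadded             // 64 * 16 = 1024 bytes, matches HLSL packing
{
    DirectX::XMFLOAT4 bonePos[64];     // w is unused padding (or can carry extra data)
};
```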
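For item 6, a sketch of creating an anisotropic sampler in D3D11, where MaxAnisotropy is the sample count being discussed. The device parameter is assumed to be a valid, already-created ID3D11Device.

```cpp
#include <d3d11.h>

// Create an anisotropic sampler; a maxAnisotropy of 1 behaves essentially like plain filtering,
// while 4/8/16 trade performance for quality.
ID3D11SamplerState* CreateAnisoSampler(ID3D11Device* device, UINT maxAnisotropy)
{
    D3D11_SAMPLER_DESC desc = {};
    desc.Filter         = D3D11_FILTER_ANISOTROPIC;
    desc.AddressU       = D3D11_TEXTURE_ADDRESS_WRAP;
    desc.AddressV       = D3D11_TEXTURE_ADDRESS_WRAP;
    desc.AddressW       = D3D11_TEXTURE_ADDRESS_WRAP;
    desc.MaxAnisotropy  = maxAnisotropy;
    desc.ComparisonFunc = D3D11_COMPARISON_NEVER;
    desc.MinLOD         = 0.0f;
    desc.MaxLOD         = D3D11_FLOAT32_MAX;

    ID3D11SamplerState* sampler = nullptr;
    device->CreateSamplerState(&desc, &sampler);
    return sampler;
}
```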
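For item 7, a sketch of picking an MSAA sample count the device actually supports before plugging it into the swap chain or render target description (D3D11 assumed).

```cpp
#include <d3d11.h>
#include <dxgi.h>

// Returns a sample description for the requested MSAA count (e.g. 4 or 8),
// falling back to single-sampled if the device doesn't support it for this format.
DXGI_SAMPLE_DESC PickMsaa(ID3D11Device* device, DXGI_FORMAT format, UINT requestedCount)
{
    DXGI_SAMPLE_DESC sd = { 1, 0 };                    // default: no MSAA
    UINT quality = 0;
    if (SUCCEEDED(device->CheckMultisampleQualityLevels(format, requestedCount, &quality)) &&
        quality > 0)
    {
        sd.Count   = requestedCount;
        sd.Quality = quality - 1;                      // highest supported quality level
    }
    return sd;
}
```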
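For item 8, a sketch of keeping the rasterizer's output in a plain memory framebuffer and treating GDI purely as the presentation step, so that step can later be swapped for something else. The function name is illustrative.

```cpp
#include <windows.h>
#include <cstdint>
#include <vector>

// The software rasterizer writes 32-bit pixels into 'pixels'; only this present
// function touches GDI, so replacing the presentation path means replacing just this.
void PresentWithGdi(HWND hwnd, const std::vector<uint32_t>& pixels, int width, int height)
{
    BITMAPINFO bmi = {};
    bmi.bmiHeader.biSize        = sizeof(bmi.bmiHeader);
    bmi.bmiHeader.biWidth       = width;
    bmi.bmiHeader.biHeight      = -height;             // negative height = top-down rows
    bmi.bmiHeader.biPlanes      = 1;
    bmi.bmiHeader.biBitCount    = 32;
    bmi.bmiHeader.biCompression = BI_RGB;

    HDC dc = GetDC(hwnd);
    StretchDIBits(dc, 0, 0, width, height, 0, 0, width, height,
                  pixels.data(), &bmi, DIB_RGB_COLORS, SRCCOPY);
    ReleaseDC(hwnd, dc);
}
```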
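For item 11, the projection math for the 2D (spot light) case, written CPU-side with DirectXMath so it's self-contained; in practice the same math runs in a shader, with the lit point transformed by the light's view-projection matrix.

```cpp
#include <DirectXMath.h>
using namespace DirectX;

// Transform the lit point into the light's clip space, divide by w, and remap
// the resulting NDC x/y into texture UVs for the projected texture lookup.
XMFLOAT2 ProjectiveUv(FXMVECTOR worldPos, FXMMATRIX lightViewProj)
{
    // TransformCoord applies the matrix and performs the divide by w.
    XMVECTOR ndc = XMVector3TransformCoord(worldPos, lightViewProj);

    XMFLOAT3 p;
    XMStoreFloat3(&p, ndc);

    // NDC x,y are in [-1, 1]; texture UVs are [0, 1], with V flipped in D3D.
    return XMFLOAT2(p.x * 0.5f + 0.5f, -p.y * 0.5f + 0.5f);
}
```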
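For item 13, a sketch of registering for raw mouse input and reading the relative motion from WM_INPUT. These are standard Win32 calls; error handling is omitted.

```cpp
#include <windows.h>

// Register for raw mouse input (call once, e.g. right after creating the window)...
void RegisterRawMouse(HWND hwnd)
{
    RAWINPUTDEVICE rid = {};
    rid.usUsagePage = 0x01;        // HID_USAGE_PAGE_GENERIC
    rid.usUsage     = 0x02;        // HID_USAGE_GENERIC_MOUSE
    rid.hwndTarget  = hwnd;
    RegisterRawInputDevices(&rid, 1, sizeof(rid));
}

// ...then read the relative motion in the window procedure when WM_INPUT arrives.
// For mouse data a single RAWINPUT struct is large enough to hold the payload.
void OnWmInput(LPARAM lParam, LONG& dx, LONG& dy)
{
    RAWINPUT raw = {};
    UINT size = sizeof(raw);
    GetRawInputData(reinterpret_cast<HRAWINPUT>(lParam), RID_INPUT,
                    &raw, &size, sizeof(RAWINPUTHEADER));
    if (raw.header.dwType == RIM_TYPEMOUSE)
    {
        dx = raw.data.mouse.lLastX;    // relative motion since the last event
        dy = raw.data.mouse.lLastY;
    }
}
```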
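For item 14, a sketch of templatizing on the full basic_string type so the caller's allocator is the one used inside the function. The function itself is only a placeholder to show the mechanics.

```cpp
#include <string>

// Accepting any basic_string instantiation means the caller's allocator is used
// for the result, so allocation and deallocation both stay under their control.
template <class CharT, class Traits, class Alloc>
std::basic_string<CharT, Traits, Alloc>
Duplicate(const std::basic_string<CharT, Traits, Alloc>& input)
{
    // Construct the result with the caller's allocator; all memory for the work
    // below comes from it, and the caller frees it however they like.
    std::basic_string<CharT, Traits, Alloc> result(input.get_allocator());
    result = input + input;
    return result;
}
```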
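For item 15, a sketch of recovering near/far from a projection matrix built with XMMatrixPerspectiveFovLH (other conventions put the terms in different slots), plus a plane kept in an XMVECTOR and used with the XMPlane* helpers.

```cpp
#include <DirectXMath.h>
using namespace DirectX;

// For a matrix from XMMatrixPerspectiveFovLH:
//   m33 = far / (far - near),   m43 = -near * far / (far - near)
// Solving that pair of equations for near and far:
void NearFarFromProjection(FXMMATRIX proj, float& nearZ, float& farZ)
{
    XMFLOAT4X4 m;
    XMStoreFloat4x4(&m, proj);
    nearZ = -m._43 / m._33;
    farZ  =  m._43 / (1.0f - m._33);
}

// A plane is just (a, b, c, d), so an XMVECTOR holds it directly and can be
// operated on while loaded; here, the signed distance from a point to the plane.
float DistanceToPlane(FXMVECTOR plane, FXMVECTOR point)
{
    return XMVectorGetX(XMPlaneDotCoord(XMPlaneNormalize(plane), point));
}
```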