Hodgman

Moderator
  • Content count

    14747
  • Days Won

    3

Hodgman last won the day on March 26

Hodgman had the most liked content!

Community Reputation

51865 Excellent

About Hodgman

  • Rank
    Moderator - APIs & Tools

Personal Information

Social

  • Twitter
    @BrookeHodgman
  • Github
    hodgman

  1. HLSL's snorm and unorm

    I've only seen these used with typed UAV loads: https://msdn.microsoft.com/en-us/library/windows/desktop/dn903947(v=vs.85).aspx#using_unorm_and_snorm_typed_uav_loads_from_hlsl
  2. Do indie game designers play their own game?

    Because you're still a small business and have bills to pay
  3. One feature that might be useful to you here is "Lua environments". http://lua-users.org/wiki/EnvironmentsTutorial https://www.lua.org/pil/14.html These allow each file/function to operate with a different set of global variables, so that your globals from one file won't "leak" into another file.
  4. Normal encoding/decoding

    The typical sphere-map transforms that I've used only require sqrt, not atan, e.g. https://aras-p.info/texts/CompactNormalStorage.html#method04spheremap Spheremap or angle-based methods tend to have very obvious precision biasing though -- e.g. some latitudes look great while others look awful. It works fine in world space, but if you know where the good/bad parts are, you could use view-space normals and try to position the bulk of them in a good part of the transform...

    At 8_8_8, the only good format I found was Crytek's BFN (page 38), which is great, but requires a LUT fetch during encoding. The best format that I found for 8_8 was octahedral encoding (1, 2), but it still shows major quantization artifacts for mirror-like surfaces -- roughly equal in quality to naive/unpacked 8_8_8 x/y/z SNORM storage of normals. It's fairly easy to reason about -- the 8_8 domain gives 65536 possible encodings; octahedral encoding splits the sphere into two hemispheres, and each hemisphere into four triangles, at which point there's only 8k possible normals per triangle, roughly equal to a 90x90 grid.

    At 16_16, octahedral is perfect though (roughly a 23k x 23k grid per triangle). If you're doing manual G-buffer packing within 32-bit integer channels, you could use something in the middle, such as 12_12 octahedral.
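    To make the octahedral reasoning concrete, here's a minimal C++ sketch of the encode/decode transform (the helper names are mine, not from the linked references; see those for shader versions). It projects the unit normal onto the octahedron |x|+|y|+|z| = 1 and folds the lower hemisphere over the diagonals; `snorm8RoundTrip` simulates 8_8 storage.

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };
struct Vec2 { float x, y; };

static float signNotZero(float v) { return v >= 0.0f ? 1.0f : -1.0f; }

// Project the unit normal onto the octahedron |x|+|y|+|z| = 1, then fold
// the lower (z < 0) hemisphere over the diagonals into the unit square.
Vec2 octEncode(Vec3 n) {
    float l1 = std::fabs(n.x) + std::fabs(n.y) + std::fabs(n.z);
    Vec2 p = { n.x / l1, n.y / l1 };
    if (n.z < 0.0f) {
        p = { (1.0f - std::fabs(p.y)) * signNotZero(p.x),
              (1.0f - std::fabs(p.x)) * signNotZero(p.y) };
    }
    return p;
}

// Inverse: unfold the lower hemisphere, reconstruct z, renormalize.
Vec3 octDecode(Vec2 p) {
    Vec3 n = { p.x, p.y, 1.0f - std::fabs(p.x) - std::fabs(p.y) };
    if (n.z < 0.0f) {
        float px = n.x, py = n.y;
        n.x = (1.0f - std::fabs(py)) * signNotZero(px);
        n.y = (1.0f - std::fabs(px)) * signNotZero(py);
    }
    float len = std::sqrt(n.x * n.x + n.y * n.y + n.z * n.z);
    return { n.x / len, n.y / len, n.z / len };
}

// Simulate 8_8 storage: snap a [-1,1] value to SNORM8 steps.
float snorm8RoundTrip(float v) { return std::round(v * 127.0f) / 127.0f; }
```

    Round-tripping through octEncode/octDecode is lossless up to float precision; quantising the encoded pair with snorm8RoundTrip before decoding shows the 8_8 artifacts described above.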
  5. HLSL structs

    Packing only really matters when the structures are exposed to the outside world, such as in a buffer/etc. Structures that are just passed between functions are just syntactic sugar (AFAIK) and the compiler can inline everything however it likes. You can't put Texture2D, SamplerState, etc., into a buffer (right?), so they fall into the latter category, and the layout gets optimized away by the compiler anyway.
  6. Does ID3D12Device_CreateDescriptorHeap succeed or fail?
  7. Why some games look orange?

    https://en.wikipedia.org/wiki/Color_grading
  8. Generally, yes, but you can still use that to tune your LOD distances in-engine. More importantly, it helps you spot texel density issues. If different parts of the model have different scales in the UV layout, or different levels of distortion (more of an issue for organic objects than hard-surface models like this), then different parts of the model may transition mips at different rates.
  9. Snapshot Interpolation

    This depends highly on the game genre. If you've got a race car going 300 km/h, that's about 8 cm per millisecond, which could show up as very noticeable jitter if that's your timer precision. Another way to look at it: a 1 ms imprecision at 60Hz is 6% of a frame, which is a decently high error rate. Hardware timers will likely be closer to nanosecond precision than microsecond. So you're definitely capable of under 0.006% per-frame error if you use ticks (or time in seconds as a double-precision float or 64-bit fixed point), whereas with 32-bit floats it only takes an hour to reach ~0.25 ms quantisation (1.5% error). If someone leaves their PC on overnight, floating-point time in seconds will quantise to more like half a frame.
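    The quantisation argument above is easy to verify with std::nextafterf (`ulpAt` is my own helper name): it returns the smallest representable increment of a 32-bit float at a given magnitude, i.e. how coarse "time in seconds as float" becomes as the app keeps running.

```cpp
#include <cmath>

// Smallest representable step of a 32-bit float at the given value,
// i.e. the quantisation of "elapsed seconds" stored as a float.
float ulpAt(float seconds) {
    return std::nextafterf(seconds, INFINITY) - seconds;
}
// ulpAt(3600.0f)  -> 0.000244140625 s (~0.25 ms after one hour)
// ulpAt(86400.0f) -> 0.0078125 s (~half a 60 Hz frame after a full day)
```

    The step size doubles every time the elapsed time crosses a power of two, which is why the error quietly grows the longer the process stays up.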
  10. My tips are: make sure Mip0 of the texture is actually used, and make sure you're not using many pixel-sized triangles.

    If you can create a texture in your engine with hand-authored mip-maps, make Mip0 a 1024px green texture, Mip1 a 512px blue texture and Mip2 a 256px red texture. Then, when looking at your model, you can instantly see if you're wasting texels. If the mesh shows up green, you're making good use of your 1024 res... but if parts show up red, it means you could've just used a 256px texture and it would've looked the same!

    For the second test, you need to be able to turn on the wireframe view in your engine, with 1px-thick non-AA lines. Any areas that are solid, because there's too many lines right next to each other, are too high-poly. Ideally most lines would be 4+ pixels apart, and each polygon would be above around 64 pixels in area. The polygon count itself doesn't matter too much, as long as they're put to good use -- but pixel-sized (or sub-pixel-sized) polygons are very, very wasteful.
  11. Nintendo Switch is the only console that uses mainstream graphics APIs (GL/Vulkan supported). Every other console uses/used a custom API. PS4 has GNM, which is lower level than Vulkan, plus a wrapper around that called GNMX which makes it look a little closer to a D3D11-style API, and then a semi-unofficial wrapper around that to make it look like a GLES-style API. Those wrappers are only recommended to get started, with the recommendation to eventually port to raw GNM. Xbone has D3D11.x and D3D12.x, which are very similar to their PC counterparts, while also being very different in some key areas. PS3 had GCM, Xb360 had D3D9.x (again, very different to PC), Wii had GX. Everything earlier than that was even more fragmented, as the concept of a GPU hadn't solidified yet... An indie dev who shall remain unnamed started a rumour that GL was the fastest API on PC and that it was used by the PS3 years ago, and for some reason many people still regurgitate this as fact...

    If you're making a cross-platform game, you've always needed to have multiple graphics API back-ends. Even if "cross platform" just means Win/Linux/Mac to you and you believe in "OpenGL everywhere" -- that's at least 7 different OpenGL implementations that you need to test your code against and almost certainly make code tweaks/fixes for (every manufacturer implements the entirety of GL from scratch, with differing core behaviour, extension support, performance characteristics and shader-code parsing abilities). It's quite likely cheaper to use D3D and Metal rather than doing the extra GL QA work on your Windows/Mac ports!

    The SDK was rolled into the Windows Platform SDK. The toolkit is the equivalent of the old D3DX library -- very useful utilities that most apps will need, but aren't "core" enough to be part of the D3D API itself. "Practical Rendering and Computation with Direct3D 11" is my go-to reference for D3D11.
  12. Help disassembling laptop

    Hard to tell from the video, but it looks like they might have U.FL connectors on the ends of them?
  13. Delta compression.

    Quake 3 servers keep a snapshot history separately for each client, and only send a sub-set of the gamestate based on what's visible to each client. So, as long as each client is in a different part of the world, Q3 client-side bandwidth would be the same regardless of whether there's 10 players on the server or 1000. Of course the server-side upload bandwidth scales with the number of clients, but that's not as much of an issue -- it's cheap to rent servers in real data centres with 100mbps upload bandwidth and data caps high enough to keep it saturated.

    If all the players are able to congregate in the same area, then yeah, you might need some extra measures, such as changing the update rate per entity (the client themselves and their closest opponents at full rate, other opponents at half rate, etc...), changing the quantisation per entity (nearby opponents with 24-bit positions, further opponents with 16-bit positions, etc...), or something else...
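    The per-entity quantisation idea can be sketched like this. The world range and helper names are assumptions for illustration: with positions in [-4096, 4096) metres, 16 bits gives ~12.5 cm steps (fine for far-away opponents) while 24 bits gives ~0.5 mm steps (for nearby ones).

```cpp
#include <cmath>
#include <cstdint>

// Hypothetical world bounds for this sketch.
const float kWorldMin  = -4096.0f;
const float kWorldSize =  8192.0f;

// Map a position into [0,1], then scale to the integer range for 'bits'.
uint32_t quantise(float x, int bits) {
    const uint32_t maxVal = (1u << bits) - 1u;
    float norm = (x - kWorldMin) / kWorldSize;
    return (uint32_t)std::lround(norm * (float)maxVal);
}

// Inverse mapping: integer code back to a world-space position.
float dequantise(uint32_t q, int bits) {
    const uint32_t maxVal = (1u << bits) - 1u;
    return ((float)q / (float)maxVal) * kWorldSize + kWorldMin;
}
```

    The server would pick `bits` per entity based on distance to the receiving client, so nearby movement stays smooth while distant entities cost fewer bytes.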
  14. I use CMake to generate my VS projects with filter layouts that match my on-disk folder layouts. I was quite surprised when I started a C# project and realized that VS would do this automatically there! Surely there's some hidden option to turn it on for C++ projects (/ off for C# projects)?
  15. This might be of interest to you : https://www.gamedev.net/forums/topic/695942-is-graphics-programming-still-relevant-in-the-game-industry/