
Hodgman

Moderator
  • Content count

    14841
  • Joined

  • Last visited

  • Days Won

    8

Hodgman last won the day on June 17

Hodgman had the most liked content!

Community Reputation

51962 Excellent

About Hodgman

  • Rank
    Moderator - APIs & Tools

Personal Information

  • Website
  • Role
    Game Designer
    Programmer
    Technical Artist
    Technical Director
  • Interests
    Programming

Social

  • Twitter
    @BrookeHodgman
  • Github
    hodgman

Recent Profile Visitors

86488 profile views
  1. What are you actually asking for?
  2. Hodgman

    Market Confusion, Video Game Industry?

    Yeah I didn't mean that you have to actually spend/have this much money. That's just a valuation of what the work/time is worth to a business. I'm also not in the US. This can also be useful in other areas - for example, my state government will provide up to 50% funding for creative projects, and they will allow valuations of existing work to be counted as part of your project budget. So if you've done "$50k worth" of work on a game yourself already, and it's half finished, they will consider granting you $50k in cash to finish it.
  3. Hodgman

    Market Confusion, Video Game Industry?

    A high quality Android game can fetch a whopping 5c-$1 per user. That means 2.5M downloads = $125k to $2.5M gross retail revenue. This kind of game might cost a few hundred thousand dollars worth of time to create though (2 experienced devs for 2 years is already over $250k wages at market rates 😉). Hard for a new company to break into. I do know plenty of indies in this space though (indie = independent teams of only a few individuals). As a new developer making anything less than high quality titles though... Yes, the numbers don't stack up to an easy living (or a living at all).
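    The arithmetic above can be checked in integer cents to avoid rounding (figures are the ones from the post; the function name is just for illustration):

    ```cpp
    #include <cstdint>

    // Gross retail revenue in cents: downloads * per-user revenue in cents.
    int64_t grossRevenueCents(int64_t downloads, int64_t centsPerUser) {
        return downloads * centsPerUser;
    }
    ```

    2.5M downloads at 5c each comes to $125k; at $1 each, $2.5M.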
  4. In Quake 3, they use this method (copy, not XOR), but they still need to keep a history of old states on the client and server. This is because Quake 3 sends the deltas via UDP, which means they can be lost or arrive out of order -- and if you naively apply deltas to the client's current state in this situation (where some don't arrive, or arrive in the wrong order), then the client won't correctly reconstruct the server's state. To get rid of the history-buffer concept, you need to use a reliable protocol like TCP, to ensure no deltas are lost and the order of changes is preserved. This is the same regardless of whether you do straight copies or the XOR trick. The history buffer idea is really the key innovation of Quake 3, which allows them to use both UDP and delta encoding reliably.

Another alternative to the XOR method is to store differences. e.g. if health was 100 and now it's 80, then you send a delta value of -20 across the network, and the client ADDs this onto their value. This can also help when you've got an extra compression step, as if the changes are small, then the high bits of the deltas will probably contain a long string of zeros too.

Google's protobuf system for reading/writing bitstreams is also optimized for situations like this -- IIRC, when writing an integer, the process is something like the following:
* if it's smaller than 128, they write a 0 followed by the 7bit value
* else they write a 1, and then:
** if it's smaller than 16384, they write a 0, followed by the 14bit value
** else they write a 1, and then (...repeat the pattern...)
This causes small values to take up less space in the bit-stream, while large values pay an overhead of a few extra bits. If most of your values are small, then this can add up to a massive space saving.
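The widening-flag scheme described above could be sketched like this (a simplification with hypothetical helper names; protobuf's actual wire format encodes 7 bits per byte with a continuation bit, but the space-saving behaviour is similar):

```cpp
#include <cstdint>
#include <vector>

// Append 'count' bits of 'value', least significant bit first.
void writeBits(std::vector<bool>& bits, uint64_t value, int count) {
    for (int i = 0; i < count; ++i)
        bits.push_back((value >> i) & 1);
}

// Variable-length integer as described above: each 1 flag bit widens the
// value field by 7 more bits; a 0 flag bit says "the value fits, here it is".
void writeVarInt(std::vector<bool>& bits, uint32_t value) {
    int width = 7;
    while (value >= (1ull << width)) {
        bits.push_back(1);   // too big for 'width' bits -- widen and retry
        width += 7;
    }
    bits.push_back(0);       // fits in 'width' bits
    writeBits(bits, value, width);
}
```

With this scheme, 100 costs 8 bits, 1000 costs 16, and so on -- small values dominate the saving.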
Another trick I've heard of is, when sending deltas that are against some previously known state (like in Quake 3), you use the previous state as the "dictionary" in LZ-style compression. These kinds of compressors build up a list of bit patterns that they reference ("the dictionary"), so if there are a lot of common patterns between the previous state and the new one, then they will be able to compress the data really well. It's a kind of automatic delta encoding. Normally this wouldn't be that great, because the dictionary normally has to be stored alongside the compressed file -- but in this situation, both parties already have the dictionary, so it doesn't need to be sent!
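A toy illustration of that preset-dictionary idea (greedy matching against the previous state only; real implementations, e.g. zlib's deflateSetDictionary, are far more sophisticated):

```cpp
#include <cstdint>
#include <string>
#include <vector>

// A token is either a literal byte, or an (offset, length) reference into the
// shared dictionary (here: the previous state, which both sides already have).
struct Token { bool isMatch; uint8_t literal; size_t offset, length; };

// Toy LZ-style encoder: greedily replace runs that also appear in 'dict'.
std::vector<Token> encode(const std::string& dict, const std::string& data) {
    std::vector<Token> out;
    size_t i = 0;
    while (i < data.size()) {
        size_t bestLen = 0, bestOff = 0;
        for (size_t j = 0; j < dict.size(); ++j) {
            size_t len = 0;
            while (i + len < data.size() && j + len < dict.size() &&
                   data[i + len] == dict[j + len]) ++len;
            if (len > bestLen) { bestLen = len; bestOff = j; }
        }
        if (bestLen >= 3) { // a reference only pays off past some threshold
            out.push_back({true, 0, bestOff, bestLen});
            i += bestLen;
        } else {
            out.push_back({false, (uint8_t)data[i], 0, 0});
            ++i;
        }
    }
    return out;
}

// The receiver expands references using its own copy of the dictionary.
std::string decode(const std::string& dict, const std::vector<Token>& toks) {
    std::string out;
    for (const Token& t : toks)
        if (t.isMatch) out += dict.substr(t.offset, t.length);
        else out += (char)t.literal;
    return out;
}
```

Because the old state ("health=100;ammo=50;") shares long runs with the new one, most of the new state collapses into a couple of back-references.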
  5. Ideas:
- tone map to LDR, color correct, then inverse tonemap back to HDR, then tonemap to HDR10. I guess this is what you're doing now though... Maybe it wouldn't fail as badly with a better tonemapper?
- HDR10 has a fixed range of 0-10000 nits, where ~200 is traditional white (and a displayable range of more like 0-1000 on current TVs), so you could simply tonemap to the HDR10 range, divide by 1000, color correct, then multiply by 1000.
- use scRGB as your post-processing color space and port color correction to this new workflow. Tonemap from full-range HDR to the 0-125 scRGB range (corresponds to 0-10k nits). Then color correct. Then, if not doing HDR10, tonemap a second time down to the 0-1 sRGB range. But this violates your request to not change the color correction workflow.
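The second idea (tonemap into the HDR10 nit range, rescale into the grading tool's accustomed range, grade, rescale back) might be sketched like this. All names are hypothetical, the Reinhard curve is just a stand-in tonemapper, and the PQ transfer function / Rec.2020 conversion that real HDR10 output needs is omitted:

```cpp
#include <algorithm>

// Hypothetical tone mapper: scene-referred luminance -> nits, clamped to the
// HDR10 ceiling of 10000 nits (here targeting a ~1000-nit display).
float tonemapToNits(float scene) {
    float mapped = scene / (1.0f + scene);        // simple Reinhard curve
    return std::min(mapped * 1000.0f, 10000.0f);
}

// The rescaling trick: bring nits into the grading tool's accustomed ~0-1
// range, grade, then restore. 'grade' stands in for the existing LDR
// color-correction step.
float gradeInNits(float nits, float (*grade)(float)) {
    return grade(nits / 1000.0f) * 1000.0f;
}
```

With an identity grade the pipeline passes values through unchanged, so the existing color-correction assets keep seeing roughly the range they were authored for (~0.2 for traditional white).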
  6. In my implementation I write the deltas to a bitstream. When writing a property (except bools, which are 1 bit) I either write a 1 followed by the delta, or a 0 indicating that there's no change. For arrays, I either write an array of bits (1 bit for each array element), followed by just the elements that have changed, OR write an array containing the indices of the changed elements. These two strategies are selected depending on which would be smaller, and the array is preceded by a single bit indicating which strategy is in use.

When writing a delta for a property, you can either write out just the new value, or you can write "oldValue XOR newValue". In the first case, the client simply copies the values from the delta packets over its own values, and in the second case, the client XORs the delta packet's values with its own to recover the new values. This second method is much more complex / slow / fragile, but, if you apply generic compression to your packets before sending them, this XOR process is likely to produce long strings of zeros, which compress quite well. Doom 3 didn't use a general purpose compressor like zlib, but instead wrote a simple RLE compressor that only looked for repeated 0's in the bitstream and replaced them with a single zero followed by a 3bit repetition count.
  7. Hodgman

    DirectInput GUID compare

    I'm not familiar with this part of DirectInput, but here are some issues to fix first: you have to fill in propRange's header (its dwSize, dwHeaderSize, dwObj and dwHow fields) before you pass it to GetProperty, or else it will fail, and you have to check whether GetProperty fails or not.
  8. 1. I use the UBO/CBV abstraction, even on APIs that don't natively support them. Matrices are placed in a constant buffer, which is then represented with a 16-bit CB ID.
2. Ideally I compile draws once and then reuse them many times. My generic model renderer does this, and it saves a lot of work per frame. Some other, more dynamic systems generate new, temporary draw items each frame, which are thrown away after submission.
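A minimal sketch of that compile-once / submit-many idea, with hypothetical names and the real API binding calls stubbed out as comments:

```cpp
#include <cstdint>
#include <vector>

// A "compiled" draw item: small POD handles resolved once, reused every frame.
struct DrawItem {
    uint16_t pipeline;   // pipeline state ID, resolved at compile time
    uint16_t constants;  // 16-bit constant-buffer (UBO/CBV) ID, as above
    uint32_t vertexCount;
};

// Per-frame submission is just iterating small structs -- state lookup and
// validation already happened when the items were compiled.
uint64_t submit(const std::vector<DrawItem>& items) {
    uint64_t verticesDrawn = 0;
    for (const DrawItem& d : items) {
        // bindPipeline(d.pipeline); bindConstants(d.constants); draw(...);
        verticesDrawn += d.vertexCount;
    }
    return verticesDrawn;
}
```

The dynamic systems mentioned above would build a fresh std::vector<DrawItem> each frame and throw it away after submit.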
  9. Hodgman

    DirecX9 Clipping Planes

    D3D 9 has custom clipping planes as part of the fixed function state: https://msdn.microsoft.com/en-us/library/windows/desktop/bb174426(v=vs.85).aspx
  10. Hodgman

    Allocator design issues

    If your allocation interface is at the C++ object level, instead of the blob-of-bytes level, then it's easier to support a common interface. A linear allocator can deallocate a C++ object (by calling the destructor), it just doesn't actually free up any address space until a later unwind operation takes place. If the allocation interface is templated by object type (i.e. Allocate<T>(arraySize) instead of Allocate(sizeof(T)*arraySize) ) then the pool can also implement the common interface (except probably only for a single T per pool instance, instead of any T). If you're going to go down this path, consider a regular function pointer and a single void* parameter. std::function is pretty heavyweight for such a low-level concern as tracking individual allocations.
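A sketch of a linear allocator with an object-level interface along those lines (hypothetical names; Create runs the constructor, Destroy runs only the destructor, and address space is reclaimed solely by Unwind):

```cpp
#include <cstddef>
#include <memory>
#include <new>

// A linear (bump) allocator with an object-level interface.
class LinearAllocator {
public:
    LinearAllocator(void* buffer, size_t size)
        : m_begin((char*)buffer), m_cursor((char*)buffer),
          m_end((char*)buffer + size) {}

    // Bump the cursor, then construct a T in place.
    template <class T, class... Args>
    T* Create(Args&&... args) {
        size_t space = m_end - m_cursor;
        void* p = m_cursor;
        if (!std::align(alignof(T), sizeof(T), p, space)) return nullptr;
        m_cursor = (char*)p + sizeof(T);
        return new (p) T(static_cast<Args&&>(args)...);
    }

    // "Deallocation" runs the destructor but reclaims no address space...
    template <class T>
    void Destroy(T* object) { object->~T(); }

    // ...until the whole region is unwound in one go.
    void Unwind() { m_cursor = m_begin; }

private:
    char* m_begin; char* m_cursor; char* m_end;
};
```

A typed pool allocator could expose the same Create/Destroy shape, just restricted to one T per pool instance.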
  11. Hodgman

    Why are enums broken

    I wouldn't really say that bitset is a clean/perfect solution to this problem because it's basically the same as using an int to represent an enumeration or collection of flags. You lose the benefits that come with the actual enum-language-feature.
  12. Hodgman

    Why are enums broken

    D3D12 uses some macro magic to allow you to use bitwise operators on their "flag enums", e.g.

    typedef enum D3D12_HEAP_FLAGS
    {
        D3D12_HEAP_FLAG_NONE                           = 0,
        D3D12_HEAP_FLAG_SHARED                         = 0x1,
        D3D12_HEAP_FLAG_DENY_BUFFERS                   = 0x4,
        D3D12_HEAP_FLAG_ALLOW_DISPLAY                  = 0x8,
        D3D12_HEAP_FLAG_SHARED_CROSS_ADAPTER           = 0x20,
        D3D12_HEAP_FLAG_DENY_RT_DS_TEXTURES            = 0x40,
        D3D12_HEAP_FLAG_DENY_NON_RT_DS_TEXTURES        = 0x80,
        D3D12_HEAP_FLAG_ALLOW_ALL_BUFFERS_AND_TEXTURES = 0,
        D3D12_HEAP_FLAG_ALLOW_ONLY_BUFFERS             = 0xc0,
        D3D12_HEAP_FLAG_ALLOW_ONLY_NON_RT_DS_TEXTURES  = 0x44,
        D3D12_HEAP_FLAG_ALLOW_ONLY_RT_DS_TEXTURES      = 0x84
    } D3D12_HEAP_FLAGS;
    DEFINE_ENUM_FLAG_OPERATORS( D3D12_HEAP_FLAGS );

    and the magic:

    // Define operator overloads to enable bit operations on enum values that are
    // used to define flags. Use DEFINE_ENUM_FLAG_OPERATORS(YOUR_TYPE) to enable these
    // operators on YOUR_TYPE.
    // Moved here from objbase.w.
    // Templates are defined here in order to avoid a dependency on C++ <type_traits> header file,
    // or on compiler-specific contructs.
    #ifdef __cplusplus
    extern "C++" {
        template <size_t S> struct _ENUM_FLAG_INTEGER_FOR_SIZE;
        template <> struct _ENUM_FLAG_INTEGER_FOR_SIZE<1> { typedef INT8  type; };
        template <> struct _ENUM_FLAG_INTEGER_FOR_SIZE<2> { typedef INT16 type; };
        template <> struct _ENUM_FLAG_INTEGER_FOR_SIZE<4> { typedef INT32 type; };
        template <> struct _ENUM_FLAG_INTEGER_FOR_SIZE<8> { typedef INT64 type; };

        // used as an approximation of std::underlying_type<T>
        template <class T> struct _ENUM_FLAG_SIZED_INTEGER
        { typedef typename _ENUM_FLAG_INTEGER_FOR_SIZE<sizeof(T)>::type type; };
    }

    #define DEFINE_ENUM_FLAG_OPERATORS(ENUMTYPE) \
    extern "C++" { \
    inline ENUMTYPE operator | (ENUMTYPE a, ENUMTYPE b) throw() { return ENUMTYPE(((_ENUM_FLAG_SIZED_INTEGER<ENUMTYPE>::type)a) | ((_ENUM_FLAG_SIZED_INTEGER<ENUMTYPE>::type)b)); } \
    inline ENUMTYPE &operator |= (ENUMTYPE &a, ENUMTYPE b) throw() { return (ENUMTYPE &)(((_ENUM_FLAG_SIZED_INTEGER<ENUMTYPE>::type &)a) |= ((_ENUM_FLAG_SIZED_INTEGER<ENUMTYPE>::type)b)); } \
    inline ENUMTYPE operator & (ENUMTYPE a, ENUMTYPE b) throw() { return ENUMTYPE(((_ENUM_FLAG_SIZED_INTEGER<ENUMTYPE>::type)a) & ((_ENUM_FLAG_SIZED_INTEGER<ENUMTYPE>::type)b)); } \
    inline ENUMTYPE &operator &= (ENUMTYPE &a, ENUMTYPE b) throw() { return (ENUMTYPE &)(((_ENUM_FLAG_SIZED_INTEGER<ENUMTYPE>::type &)a) &= ((_ENUM_FLAG_SIZED_INTEGER<ENUMTYPE>::type)b)); } \
    inline ENUMTYPE operator ~ (ENUMTYPE a) throw() { return ENUMTYPE(~((_ENUM_FLAG_SIZED_INTEGER<ENUMTYPE>::type)a)); } \
    inline ENUMTYPE operator ^ (ENUMTYPE a, ENUMTYPE b) throw() { return ENUMTYPE(((_ENUM_FLAG_SIZED_INTEGER<ENUMTYPE>::type)a) ^ ((_ENUM_FLAG_SIZED_INTEGER<ENUMTYPE>::type)b)); } \
    inline ENUMTYPE &operator ^= (ENUMTYPE &a, ENUMTYPE b) throw() { return (ENUMTYPE &)(((_ENUM_FLAG_SIZED_INTEGER<ENUMTYPE>::type &)a) ^= ((_ENUM_FLAG_SIZED_INTEGER<ENUMTYPE>::type)b)); } \
    }
    #else
    #define DEFINE_ENUM_FLAG_OPERATORS(ENUMTYPE) // NOP, C allows these operators.
    #endif

    P.S. don't go copy & pasting this into your own projects.
    Copyright (c) Microsoft Corporation. All rights reserved.

    But this does go to show that it's pretty easy to support this in C++.
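For comparison, the same idea can be written for a single scoped enum using std::underlying_type instead of the macro (a minimal sketch with a made-up HeapFlags enum, not Microsoft's code):

```cpp
#include <cstdint>
#include <type_traits>

enum class HeapFlags : uint32_t {
    None        = 0,
    Shared      = 0x1,
    DenyBuffers = 0x4,
};

// Bitwise operators defined only for this enum, so unrelated enums
// keep their type safety.
inline HeapFlags operator|(HeapFlags a, HeapFlags b) {
    using U = std::underlying_type<HeapFlags>::type;
    return HeapFlags(U(a) | U(b));
}
inline HeapFlags operator&(HeapFlags a, HeapFlags b) {
    using U = std::underlying_type<HeapFlags>::type;
    return HeapFlags(U(a) & U(b));
}

// Scoped enums don't convert to bool, so a small helper is handy for tests.
inline bool Any(HeapFlags f) { return f != HeapFlags::None; }
```

Unlike the int/bitset approach, flags from different enum types can't be mixed by accident.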
  13. Also, forgot to address the actual question in my last post... SM5 plus vendor extensions is the only way to use these new features in D3D11. In D3D12, you have the choice of the ugly/complex vendor extensions (and up to 4 versions of shaders that use them - 1 for each vendor, plus one for the case where no extensions are supported!) or using SM6 (and hopefully more like 2 versions of your complex shaders - with wave optimisations and without). SM6 is still new. I haven't used it yet because I haven't had time to experiment, but I assume it is the future of D3D shader programming, so anything not shipping today should probably look into it.
  14. Everything in D3D/GL is a high level abstraction that each GPU vendor can map to HW in their own way. AMD has special HW for append buffer counters in D3D11 and D3D12, while NV uses general purpose HW to implement them. NV uses special HW to accelerate the Input Layout / Input Assembler, while AMD uses general purpose HW to implement it. Neither of those features is "fake".

This could mean two things:
- they use a normal set of shaders plus another set based on AMD extensions.
- they did all their shader optimisation work while using AMD GPUs and AMD's profiling tools.

This means that the whole point of this demo is to show off an AMD-specific extension. The demo will run, but tell you that the extension is not available (and thus not 'work').
  15. Hodgman

    Core Game Loop & Core Mechanic Loop?

    Often there are two loops to a user's experience. To use a collectable card combat game as an example:
* the mechanics loop involves choosing a card to play, waiting for the right time to play a card, and reacting/watching what cards the opponent has played.
* the meta game loop then might involve entering into battles with opponents (entering the above loop), gaining rewards from those battles, unlocking new cards using rewarded currency, curating your decks of cards, etc.