
SeraphLance

Member
  • Content Count

    700
  • Joined

  • Last visited

Community Reputation

2613 Excellent

About SeraphLance

  • Rank
    Advanced Member

Personal Information

  • Role
    Programmer
  • Interests
    Programming


  1. SeraphLance

    The Han Solo Movie and the Star Wars Franchise's Direction

    I didn't know GotG was a superhero story at all until after I finished watching it (which I guess shows how little of a comic book fan I am), and enjoyed it immensely, presumably for just that reason. I knew it was based on a comic, but thought it was some isolated comic and not a piece of the Marvel universe. You can't really achieve that kind of disconnect with something like the Avengers, even if you wanted to.
  2. SeraphLance

    WHO recognising 'gaming disorder'

    It depends on what they mean by "other activities". If they mean other forms of entertainment, that's a normal thing to see anywhere. Some people watch Netflix all day. Some people spend all their hobby time reading. Some people spend all their hobby time drinking, etc. Frankly, I find people with a cornucopia of dabbling hobbies to be boring and fake, without any real interest. As you develop yourself, it's natural to trend towards some hobbies over others.

    If by "other activities" they mean work, that's just plain hedonism, and it's something I've suffered from greatly over my life, to the point that I initially dropped out of college due to too many games and anime. So, armchair psychology in the lounge aside, it's a concept I'm intimately familiar with. However, it sounds like the WHO is conflating those two things, which are very distinct and should in no way be combined.

    Now, MMORPG addiction is a different beast, because that's a social addiction, which is a real, distinct phenomenon that manifests itself in many places.
  3. SeraphLance

    Allocator design issues

    By compositing allocators, I can allocate from one arena into another. For example, I might have a stack allocator that grabs N bytes of stack memory, and then use that allocator to create a pool of objects. It also lets me do something like this (using the first method in my OP):

        template <typename BackingAllocator>
        class DebugAllocator : BackingAllocator {
            static constexpr unsigned char sentinelValue = 0xFE;
            static constexpr size_t sentinelSize = 4;
        public:
            AllocBlock Allocate(size_t size) {
                // Insert sentinels at the beginning and end of the allocation range.
                auto block = BackingAllocator::Allocate(size + sentinelSize * 2);
                ::memset(block.ptr, sentinelValue, sentinelSize);
                ::memset(block.ptr + sentinelSize + size, sentinelValue, sentinelSize);
                block.ptr += sentinelSize;  // hand back the region between the sentinels
                block.size = size;
                return block;
            }
            // ...
        };

    So I can inject behavior like memory region guards. I guess in your parlance, I've got a stack allocator, a heap allocator (which I call a "mallocator"), and everything else is a block allocator that uses memory doled out by the two aforementioned allocators. I could call them "memory adaptors", but that just sounds weird.

    I'm not sure what you mean by this. I want to be able to show myself something like:

        Current Memory Usage:
          Networking:    2 KB   (budget 4 KB)
          Textures:    120 MB   (budget 512 MB)
          Scripts:      37 MB   (budget 64 MB)
          Game Objects: 27 MB   (budget 25 MB -- OVER BUDGET --)

    ...and so on. That naturally can't be done statically, other than defining the budget values. In order to know that sort of information, it has to hook into the allocators somehow, right? Even the bitsquid article you cite uses "proxy allocators" to achieve the same thing. I considered doing it that way too, and haven't entirely ruled it out. In fact, I read that article while I was trying to come up with this stuff, and it was an influence on the "case 2" example in my OP. Re-reading it now that you posted it, I might actually go back to using the proxy idea instead of what I had in mind, but I deliberately left it out because there are a number of ways to get that information that don't impact the base design much.

    My current plan of attack with regard to threading is to just make my allocators all thread-local. Mallocator is currently stateless, and malloc itself is thread-safe, so that should be fine; everything else operates on either stack memory or pre-allocated heap memory, so I don't think I can have any issues with that as long as I don't pass raw memory pointers around between threads. I haven't really thought of a use case for sharing an allocator between threads that isn't questionable at some level. If I do end up calling the Virtual* functions instead of using mallocator, I'll have to revisit things a bit, and that will definitely have to be thread-aware to some degree, I'm sure.

    That's an interesting observation. So for untyped allocators (i.e. stack/linear allocators), just templatize the allocate function? I like it, thanks.

    Yeah, I just wanted to write something up to express an idea, and it was faster for me to remember the syntax of std::function than the function pointer syntax. That said, I've become increasingly convinced over the last few days that this third route of letting allocated blocks free themselves is a poor one, because for stuff like stack allocators I kind of need the context of neighboring allocations to know whether they can be freed at all, and adding the plumbing for that sounds like more work than it's worth.
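    The proxy-allocator idea discussed above could be sketched roughly like this. This is a minimal sketch, not the poster's actual code: ProxyAllocator, its Report format, and the budget parameter are all illustrative names loosely following the bitsquid idea of per-subsystem proxies.

```cpp
#include <cassert>
#include <cstddef>
#include <cstdio>
#include <cstdlib>

struct AllocBlock { std::byte* ptr; std::size_t size; };

// Minimal heap allocator matching the interfaces used in the thread.
class Mallocator {
public:
    AllocBlock Allocate(std::size_t size) {
        return { static_cast<std::byte*>(std::malloc(size)), size };
    }
    void Deallocate(AllocBlock blk) { std::free(blk.ptr); }
};

// Hypothetical proxy: tags a backing allocator with a subsystem name and
// tracks bytes currently outstanding against a budget, without changing
// the allocation behavior itself.
template <typename BackingAllocator>
class ProxyAllocator : BackingAllocator {
    const char* name;
    std::size_t budget;
    std::size_t used = 0;
public:
    ProxyAllocator(const char* name, std::size_t budget)
        : name(name), budget(budget) {}
    AllocBlock Allocate(std::size_t size) {
        used += size;
        return BackingAllocator::Allocate(size);
    }
    void Deallocate(AllocBlock blk) {
        used -= blk.size;
        BackingAllocator::Deallocate(blk);
    }
    std::size_t Used() const { return used; }
    void Report() const {
        std::printf("%s: %zu bytes (budget %zu)%s\n", name, used, budget,
                    used > budget ? " -- OVER BUDGET --" : "");
    }
};
```

    A per-subsystem report then falls out of calling Report() on each proxy, e.g. `ProxyAllocator<Mallocator> net("Networking", 4096);`.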
  4. SeraphLance

    Size of C++ polymorphic objects.

    If by "polymorphic object" you mean "derived type accessed through a base type pointer", then no, of course you can't get that without virtual function dispatch or a dynamic_cast. C++ has no way of knowing which type you actually want the size of, even with RTTI. You either have to tell it directly (through dynamic_cast) or indirectly (by providing the plumbing in the virtual interface).
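    The "plumbing in the virtual interface" route might look like the sketch below (SizeOf is an illustrative name I made up, not a standard facility): sizeof through the base pointer is always the static size of the base, while the virtual call dispatches to the derived override.

```cpp
#include <cassert>
#include <cstddef>

// Each derived class reports its own size through a virtual function.
struct Base {
    virtual ~Base() = default;
    virtual std::size_t SizeOf() const { return sizeof(Base); }
};

struct Derived : Base {
    double payload[4] = {};  // makes Derived strictly larger than Base
    std::size_t SizeOf() const override { return sizeof(Derived); }
};
```

    Given `Base* p = new Derived;`, `sizeof(*p)` is compile-time `sizeof(Base)`, but `p->SizeOf()` returns `sizeof(Derived)`.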
  5. I've spent quite a while (probably far longer than I should have) trying to design an allocator system. I've bounced ideas around to various people in the past, but never really gotten something satisfactory. Basically, the requirements I'm trying to target are:

     - Composability -- allocators that seamlessly allocate from memory allocated by other allocators. This helps me do things like write an allocator that pads allocations from its parent allocator with bit patterns to detect heap corruption. It also allows me to easily create spillovers, or optionally assert on overflow with specialized fallbacks.
     - Handling, in an elegant way, the fact that some allocators have different interfaces than others. For example, a regular allocator might have Allocate/Deallocate, but a linear allocator can't do itemized deallocation (though it can deallocate everything at once).
     - Being able to tell how much I've allocated, and how much of that is actually being used. I also want to be able to bucket that by subsystem, but as far as I can tell, that doesn't really impact the design beyond adding a new parameter to allocate calls.

     Note: I'm leaving allocation buckets and alignment out of this, since they're largely orthogonal to what I'm asking and can be done with any of the designs.

     To meet those three requirements, I've come up with the following solutions, all of which have significant drawbacks.

     Static Policy-Based Allocators

     I originally built this off of this talk. Examples:

         struct AllocBlock {
             std::byte* ptr;
             size_t size;
         };

         class Mallocator {
             size_t allocatedMemory;
         public:
             Mallocator();
             AllocBlock Allocate(size_t size);
             void Deallocate(AllocBlock blk);
         };

         template <typename BackingAllocator, size_t allocSize>
         class LinearAllocator : BackingAllocator {
             AllocBlock baseMemory;
             char* ptr;
             char* end;
         public:
             LinearAllocator() : baseMemory(BackingAllocator::Allocate(allocSize)) { /* stuff */ }
             AllocBlock Allocate(size_t size);
         };

         template <typename BackingAllocator, size_t allocSize>
         class PoolAllocator : BackingAllocator {
             AllocBlock baseMemory;
             char* currentHead;
         public:
             PoolAllocator() : baseMemory(BackingAllocator::Allocate(allocSize)) { /* stuff */ }
             void* Allocate();  // note the different signature.
             void Deallocate(void*);
         };

         // ex:
         auto allocator = PoolAllocator<Mallocator, size>{};

     Advantages:
     - SFINAE gives me a pseudo-duck-typing thing. I don't need any kind of common interface, and I'll get a compile-time error if I try to do something like create a LinearAllocator backed by a PoolAllocator.
     - It's composable.

     Disadvantages:
     - Composability is type composability, meaning every allocator I create has an independent chain of compositions. This makes tracking memory usage pretty hard, and presumably can cause me external fragmentation issues. I might be able to get around this with some kind of singleton kung-fu, but I'm unsure, as I don't really have any experience with them.
     - Owing to the above, all of my customization points have to be template parameters, because the concept relies on empty constructors. This isn't a huge issue, but it makes defining allocators cumbersome.

     Dynamic Allocator Dependency

     This is probably just the strategy pattern, but then again, everything involving polymorphic type composition looks like the strategy pattern to me. 😃 Examples:

         struct AllocBlock {
             std::byte* ptr;
             size_t size;
         };

         class Allocator {
         public:
             virtual AllocBlock Allocate(size_t) = 0;
             virtual void Deallocate(AllocBlock) = 0;
         };

         class Mallocator : public Allocator {
             size_t allocatedMemory;
         public:
             Mallocator();
             AllocBlock Allocate(size_t size);
             void Deallocate(AllocBlock blk);
         };

         class LinearAllocator {
             Allocator* backingAllocator;
             AllocBlock baseMemory;
             char* ptr;
             char* end;
         public:
             LinearAllocator(Allocator* backingAllocator, size_t allocSize)
                 : backingAllocator(backingAllocator) {
                 baseMemory = backingAllocator->Allocate(allocSize);
                 /* stuff */
             }
             AllocBlock Allocate(size_t size);
         };

         class PoolAllocator {
             Allocator* backingAllocator;
             AllocBlock baseMemory;
             char* currentHead;
         public:
             PoolAllocator(Allocator* backingAllocator, size_t allocSize)
                 : backingAllocator(backingAllocator) {
                 baseMemory = backingAllocator->Allocate(allocSize);
                 /* stuff */
             }
             void* Allocate();  // note the different signature.
             void Deallocate(void*);
         };

         // ex:
         auto allocator = PoolAllocator(&someGlobalMallocator, size);

     There's an obvious problem with the above: namely, that PoolAllocator and LinearAllocator don't inherit from the generic Allocator interface. They can't, because their interfaces provide different semantics. There are two ways I can solve this:
     - Inherit from Allocator anyway and assert on unsupported operations (this delegates composition failure to runtime errors, which I'd rather avoid).
     - As above: don't inherit, and just deal with the fact that some composability is lost (not ideal, because it means you can't do things like back a pool allocator with a linear allocator).

     Advantages:
     - Memory usage tracking is easy, since I can use the top-level mallocator(s) to keep track of total memory allocated, and all of the leaf allocators to keep track of used memory. How to do that in particular is outside the scope of what I'm asking about, but I've got some ideas.
     - I still have composability.

     Disadvantages:
     - The interface issues above. There's no duck-typing-like mechanism to help here, and I'm strongly of the opinion that programmer errors in construction like that should fail at compile time, not runtime.

     Composition on Allocated Memory instead of Allocators

     This is probably going to be somewhat buggy and poorly thought out, since it's just an idea rather than something I've actually tried. Examples:

         struct AllocBlock {
             void* ptr;
             size_t size;
             std::function<void()> dealloc;
         };

         class Mallocator {
             size_t allocatedMemory;
         public:
             Mallocator();
             AllocBlock Allocate(size_t size) {
                 void* ptr = malloc(size);
                 return {ptr, size, [ptr]() { free(ptr); }};
             }
         };

         class LinearAllocator {
             AllocBlock baseMemory;
             char* ptr;
             char* end;
         public:
             LinearAllocator(AllocBlock baseMemory) : baseMemory(baseMemory) {
                 end = ptr = static_cast<char*>(baseMemory.ptr);
             }
             AllocBlock Allocate(size_t);
         };

         class PoolAllocator {
             AllocBlock baseMemory;
             char* head;
         public:
             PoolAllocator(AllocBlock baseMemory) : baseMemory(baseMemory) { /* stuff */ }
             void* Allocate();
         };

         // ex:
         auto allocator = PoolAllocator(someGlobalMallocator.Allocate(size));

     I don't really like this design at first blush, but I haven't really tried it.

     Advantages:
     - "Composable", since we've delegated most of what composition entails to the memory block rather than the allocator.
     - Tracking memory is a bit more complex, but I *think* it's still doable.

     Disadvantages:
     - Makes the interface more complex, since we have to allocate first and then pass that block into our "child" allocator.
     - Can't do specialized deallocation (i.e. stack deallocation), since the memory blocks don't know anything about their parent allocation pool. I might be able to get around this, though.

     I've done a lot of research against all of the source-available engines I can find, and it seems like most of them either have very small allocator systems or simply don't try to make them composable at all (CryEngine, for example). That said, it seems like something that should have a lot of good examples, but I can't find a whole lot. Does anyone have any good feedback/suggestions on this, or is composability in general just a pipe dream?
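     The compile-time failure the first design relies on can also be made explicit rather than left to SFINAE. Below is a sketch using the C++17 detection idiom; HasItemizedAllocate and PoolLike are illustrative names of my own, not part of the design above.

```cpp
#include <cassert>
#include <cstddef>
#include <type_traits>

struct AllocBlock { std::byte* ptr; std::size_t size; };

// True if T exposes AllocBlock Allocate(size_t), i.e. it can serve as a
// backing allocator for arbitrary-sized requests.
template <typename T, typename = void>
struct HasItemizedAllocate : std::false_type {};

template <typename T>
struct HasItemizedAllocate<T, std::void_t<
    decltype(std::declval<T&>().Allocate(std::size_t{}))>>
    : std::is_same<decltype(std::declval<T&>().Allocate(std::size_t{})),
                   AllocBlock> {};

// A mallocator-style allocator satisfies the trait...
struct Mallocator {
    AllocBlock Allocate(std::size_t size);
    void Deallocate(AllocBlock blk);
};

// ...while a pool-style allocator (no size parameter) does not, so a
// misuse like LinearAllocator<PoolLike, N> fails at compile time with a
// readable message instead of a wall of template errors.
struct PoolLike {
    void* Allocate();
    void Deallocate(void* p);
};

template <typename BackingAllocator, std::size_t N>
class LinearAllocator : BackingAllocator {
    static_assert(HasItemizedAllocate<BackingAllocator>::value,
                  "backing allocator must provide AllocBlock Allocate(size_t)");
    // ...
};
```

     The static_assert does the same job SFINAE does in the policy-based design, but turns the constraint into documentation.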
  6. SeraphLance

    Using D with C++

    Note that you can annotate a function with @nogc to tell it not to use the GC. Some people avoid the GC by just annotating main, because the attribute propagates to everything the annotated function calls (which in the case of main is basically your whole program). You don't necessarily have to go that far, but I'd recommend controlling things tightly through that.
  7. SeraphLance

    Using D with C++

    You probably won't see any performance drawbacks as long as you're clever about avoiding the GC in general. Granted, that's easier said than done, but them's the breaks. IMO, the biggest advantage of D over C++ is the compile-time reflection/introspection, which makes serialization much easier, along with stuff like binding functions to a developer console. You might also find that, while D can "talk C++", you're going to lose a lot of that functionality across language boundaries.

    I originally started writing a game with a similar approach (except using wrappers around C++ rendering/math APIs rather than writing the subsystems in C++ directly), and I found that, while my productivity went through the roof for a lot of the typical drudgery glue code and engine business logic:

    - The terrible state of library support got to be a problem -- a lot of the bindings to stuff like UI libraries and graphics APIs are in various states of disrepair, some requiring me to dig in and fix them.
    - The build system for D is absolutely atrocious; they build everything around a package manager, but like most package managers it makes for a terrible build tool, and none of the larger build systems people have extended for D support the aforementioned package manager (but the library ecosystem requires it, so good luck with that).
    - If you have any inkling at all of supporting 32-bit Windows builds, walk away from D immediately. To be fair, 32-bit isn't something many people care to support nowadays, but if you do, you've got to worry about the fact that the backend for the mainline D compiler (which is the only one that works on Windows without POSIX shenanigans of some sort) is built around OMF-format object files, so you need to make sure all of your library dependencies are built in that format. This is a pretty significant pain point for your build pipeline, and I'm not even sure it's sustainable if you require any closed-source libraries.

    If you can deal with those issues, D is a great language, and while I eventually migrated to pure C++, I was probably twice as productive in D overall with a similar performance profile, despite having several years more experience with C++. However, the language suffers from the classic chicken-and-egg problem (no tooling/libraries = no users = no tooling/libraries) that most new languages face, and most new languages never really get out of that and die. That said, if you want to dogfood everything and don't want to take in any external dependencies at all, or if all your external dependencies are things you plan to write C/C++ modules around, then sure, go for it. Just be careful of the GC. You might be able to get most of the advantages of D with none of the disadvantages if you do the above and use this: https://dlang.org/spec/betterc.html However, I don't really have much experience with it.
  8. SeraphLance

    Using Perlin Noise to turn a vector

    Perlin noise is an N-to-1 mapping, meaning that whatever dimensionality your noise is, you end up with a one-dimensional result (usually represented as a grayscale color). So if you want to displace coordinates with Perlin noise, you're going to have to displace each coordinate individually. You could do something like this:

        Vector3 YawPitchRollValues = new Vector3(
            Perlin1D(oldVec.x + offset1),
            Perlin1D(oldVec.y + offset2),
            Perlin1D(oldVec.z + offset3));
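    In C++ terms, the per-component sampling could be sketched as below. Note the noise function here is my own cheap hash-based 1D value noise standing in for a real Perlin1D implementation, just to make the example self-contained; the structure (three independent, offset 1D samples) is the point.

```cpp
#include <cmath>
#include <cstdint>

// Stand-in for a real 1D Perlin function: hash-based value noise in (-1, 1].
static float hashNoise(int n) {
    std::uint32_t h = static_cast<std::uint32_t>(n);
    h = (h << 13) ^ h;
    h = h * (h * h * 15731u + 789221u) + 1376312589u;
    return 1.0f - static_cast<float>(h & 0x7fffffffu) / 1073741824.0f;
}

static float noise1d(float x) {
    const int i = static_cast<int>(std::floor(x));
    const float f = x - static_cast<float>(i);
    const float t = f * f * (3.0f - 2.0f * f);  // smoothstep blend
    return hashNoise(i) + t * (hashNoise(i + 1) - hashNoise(i));
}

struct Vector3 { float x, y, z; };

// Displace each component independently; the distinct offsets decorrelate
// the axes so the three samples don't come from the same noise location.
Vector3 NoiseDisplace(const Vector3& v, float offset1, float offset2, float offset3) {
    return { noise1d(v.x + offset1),
             noise1d(v.y + offset2),
             noise1d(v.z + offset3) };
}
```

    Each returned component lies in (-1, 1], so you'd typically scale it by a displacement amplitude before applying it.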
  9. I do virtually everything on the loading thread. The simplest answer to what you're asking is to just load all your assets before any other part of your engine needs them. So, for your UI textures, that entails loading them at program startup, before your UI is actually initialized. This is one of those things where just doing everything in the right order (if possible) can massively reduce the complexity you need to worry about. If you're worried about changing sizes on reloaded textures, you don't need size data immediately to begin with.
  10. Why do billions of unskilled people need to be able to do it? No education whatsoever was standard only a few hundred years ago, and now most developed nations have at least ten years. Yes, it's likely that in another two hundred years, something resembling a modern 4-5 year college curriculum will also be mandatory. Yes, if you lose your job you might be screwed, because you can't easily retrain within your lifetime. However, that's a short-term loss. You're going to grow old and die, and your kids, if you have them, will be educated in something else more relevant and employable. That's an actual economic problem to deal with; the rate of automation can cause (relatively) short-term labor issues. However, that makes discussion of a Star Trek post-scarcity utopia the wrong problem to talk about. But yeah, if your problem is "what do I do if my white-collar job gets automated", then we're talking past one another. However, I don't think such a problem necessitates considering entirely new economic models, but rather a shorter working life until retirement, or continuing education (especially relevant in our field).
  11. That's a small part of the job pool created by technology. There are more musicians, actors, and writers today than during any era of human history. It's not because they were enabled by new technology, but because they were freed by technology from having to do other things -- like, say, growing food. This is why major inventions and developments lead to revolutions across many unrelated fields. This is analogous to asking, before the agricultural revolution, "If we're not all hunting for food, what are we going to do all day?" The answer is: something else.
  12. SeraphLance

    Why A.I is impossible

    If I created a human being from whole cloth in a lab, I would have created an artificial intelligence. Whether an AI is "organic" or "mechanical" has no bearing on whether or not it is AI. Aren't you an atheist? I would think that, outside of spiritual beliefs, everyone ought to think creating an AI is possible this way, even if we can't realistically do it now. I don't think it's a terribly interesting observation, honestly. It's kind of axiomatic, even.
  13. Yeah, I'm still not seeing much of a case here. We like to split "blue collar" and "white collar" up as if there's some fundamental binary difference, but it's a continuum. "Smarter" technology just broadens the range of white-collarness that machines can cover. In response, human labor opens up on the white end of the spectrum, as we no longer need people doing the now-obsolete jobs. Someone earlier responded to my statement that the growth of automation will outpace the creation of jobs to develop and maintain said automation. That's semi-fallacious. It's true that technology opens technician jobs, and it's necessarily true that fewer technician jobs will open than are closed by the automation, but that's the entire point: it's not just work centered around the new technology, but new work in general, as we are allowed new bandwidth for labor. We should all be aware of this phenomenon, given that we're arguing about job loss on a freaking game development site. It seems to me that the fear is that automation will cover the entire blue/white spectrum, to the point where there's literally no work that can't be done by an automaton. That folds back into the original discussion on the other thread about whether an AI that can totally replace a human, but doesn't simultaneously have human (or human-like) needs, can exist. As for CGP Grey's video, he's got a lot of specious reasoning. The "horses" shtick is a false equivalence, because horses didn't invent cars; they're not the species that the economy in question is centered around. Now, the short-term consequences of automation are observable. You can't just retrain a barista into a writer, and if your job is replaced by a machine, you may have difficulty re-entering the workforce. However, the idea that automation will reach the point where we as a species have nothing to do? That's going to take more convincing.
  14. It is. In fact, it's a huge problem that's hotly debated practically every election cycle. The problem is that curtailing freedoms is seen as a big no-no, and a lot of campaign spending isn't just politicians spending money, but non-profits spending money on their behalf. Blocking that is blocking endorsement, which is an infringement of freedom of speech. That's enshrined in our constitution, making it political suicide to attack in any serious manner (and rightly so, IMO). It goes hand-in-hand with the controversial "businesses are effectively people" Supreme Court decisions. It's not always about what is pragmatic or fair, but what the constitution guarantees (or what the Supreme Court thinks the constitution guarantees).
  15. If you had to stick them somewhere in the left-right scale, I'd call them left-wing because their political structure bears much more in common with practical socialism (i.e. Soviets) than it does anything resembling capitalism. It's also the logical extreme of "big government", which isn't exactly a right-wing ideology. However, that's for the modern left and right, at least within the United States. It's important to remember that the Republicans prior to The New Deal era were collectivists in a sense. Now they're somewhat individualist, but the left was the individualist side of the coin once upon a time. Blame the Bull Moose people, the "neoliberals" of the early 20th century. Personally though, I think trying to shoehorn everything into left/right is a poor decision. Modern parties are much more syncretic these days. Even the whole 2-axis model has pretty significant limitations.