Community Reputation

6123 Excellent

1 Follower

About Oberon_Command

  1. Why A.I is impossible

    It seems reasonable to assume that in a world where the US actually implements UBI, it has also implemented universal healthcare, just like every other developed nation, so the costs would be significantly lower. I don't think anyone would disagree that universal healthcare is politically more likely than UBI. (Seriously, $30k for an ER visit if you're uninsured? In Canada, even if for some reason you don't have provincial health insurance, an ER visit would cost at most $600. What the hell is your country smoking?) Anyway, this thread isn't just about the US, and it's myopic to pretend that assumptions that apply to the US as it is right now invalidate a concept that would be applied worldwide in the future. I also doubt that UBI would actually be implemented at much above poverty-level amounts, regardless of the quality of life that would provide.
  2. Why A.I is impossible

    13 trillion / 323 million is a bit over 40k. I'm not aware of any UBI concept where participants are to be paid $40k per year. That seems like a lot to me, especially given how low the cost of living can be outside of the major cities. Pilot programs typically pay something in the $16-24k range (e.g., the one currently underway in Ontario), and I would assume that a full-scale program would be similar. As described here, the typical poverty line is in that range even for families with children. $40k is therefore easily double the poverty line for a lot of people - hardly "just above" - so if your aim is to be just over the poverty line, $40k is far overkill. I don't know where you got that $40k number, but I suggest that your dismissal of the concept is founded on this assumption and not on any realistic conception of how UBI would actually be implemented. You're also assuming that every person living in the US would get UBI, which I strongly doubt - do non-citizens get UBI? Do children below working age? What is your source for these assumptions?
  3. Why A.I is impossible

    I don't believe this has been proven, has it?
  4. I'm not Bregma, but one thing that comes to mind is that it allows you to use the enum value as a "type code" at runtime. That has a number of use cases, but the one that comes to mind is converting between ID types, where you want to validate at runtime that your ID points at what you think it's pointing at. For example, you might have an object hierarchy where Animal and Tree both inherit from Actor; you have ActorID, AnimalID, and TreeID. AnimalID and TreeID can be converted to ActorID and vice versa. Now you have a potential case where an ActorID that actually points at an Animal could be converted to a TreeID, which you probably want to validate at runtime! You don't strictly NEED to use an enum for the type code, but it simplifies the problem of having unique IDs per type, especially in an environment where RTTI is disabled. This article illustrates an example of how to set up a handle type that uses "indices with type code" handles: http://gamesfromwithin.com/managing-data-relationships

    Another thought that comes to mind is that you might want to use the type code itself as an index or key. For example, suppose you have some content that specifies a table mapping object type onto some parameter that's uniform across all objects of that type, because the designers don't want to set that parameter on every single object definition of that type in content. If you have a type code that's mappable to a simple array index, you can have a flat array in the content and just index into it to get the value you're looking for.

    Another option that hasn't been mentioned is specializing your handle type on the actual type it's supposed to refer to, eg.

    struct Animal { /* ... */ };
    using AnimalID = IDType<Animal, unsigned int>;

    This obviates the need for an extraneous type, used only as a tag, that never gets instantiated.
  5. Why A.I is impossible

    But why?
  6. Why A.I is impossible

    I'm gonna call [citation needed] on that one. We don't really know what consciousness is yet. Not all of us believe in souls or the supernatural, incidentally. From my point of view, dismissing AI on the grounds that it can't possibly have something that we haven't demonstrated to even exist, never mind form a fundamental aspect of consciousness, seems... premature. This looks like an attempt to have a religion thread...
  7. No - but you need to construct it when the class that owns the sprite is constructed, in the initializer list, like so:

    SpriteContainer::SpriteContainer()
        : singleSprite(nullptr)
    {
    }

    // or
    SpriteContainer::SpriteContainer(Texture* texture)
        : singleSprite(texture)
    {
    }

    You could try making the base class's constructors private, or use C++11's "= delete" syntax to explicitly delete them. Yes, in fact the Windows SDK provides smart pointers specifically for working with COM pointers. The usual approach is to just use a raw pointer or a reference and ensure that the non-owning class never outlives the owner of the resource. If ownership is being shared, then use shared_ptr for the owners, and weak_ptr (if they can outlive the owners) or raw pointers (if they can't) for the non-owners. Most of the time you're not likely to want shared ownership, though.
  8. Cant find how to do this asm code in c++...

    What is this function trying to accomplish, exactly? edit: Never mind, read the comment above it.
  9. How to learn from Quake source code

    No, we do not all agree. Windows without DirectX is still very much usable. In fact, the parts of DirectX that aren't Direct3D are deprecated, so really "DirectX" nowadays means "Direct3D". I don't know of anyone using DirectInput, DirectSound was replaced with XAudio2 (which is theoretically succeeded by WASAPI), and DirectMusic doesn't even have complete documentation available anymore! Even if we didn't have Direct3D, we'd have OpenGL, Vulkan, and Mantle (on AMD cards). And this is just talking about hardware access APIs - never mind the support for multicore computing, virtual memory, bigger address spaces, and other such nice things. Using a 32-bit (now, for the most part, 64-bit) OS confers advantages well beyond those provided by DirectX. I'm a little fuzzy on what you think DOS does better, because from where I'm standing, running Windows 10 in November 2017, it does nothing better. I'm furthermore not sold on the relevance of this discussion to Quake, apart from "Quake originally used DOS" - which is true, but also meaningless, since it's been ported to multiple OSs since then, including Windows (twice!).
  10. How to learn from Quake source code

    We have the tools now. We shouldn't keep around practices that are built around obsolete development paradigms. We certainly shouldn't teach them to beginners.

    There is no such thing as "holy" code. There are no perfect codebases. That's just the nature of software "engineering." I would never tell a beginner that any particular codebase represents the canonical way to structure and implement a game. There is no such thing. I would go so far as to suggest that anyone who tells you they have a perfect codebase is selling you something. If such a thing even existed, it certainly wouldn't be from the mid-'90s.

    It has been 20 years since Quake came out. Approaches that were optimal then are no longer optimal on modern hardware, for modern games. Our understanding of what constitutes "good design" has evolved considerably. I strongly doubt anybody would write a modern game the way Quake was written.

    There may well be instances where I could recommend that a beginner look over part of a larger codebase for inspiration, or as a reference for implementing a particular algorithm. I struggle to think of a specific example offhand, and I definitely wouldn't recommend an entire codebase as an example. I recall encountering someone on the GDNet chat who structured their code in a complex, boilerplate-y way (for what they were doing) and who, when asked why they chose that approach, said "that's how Unreal/CryEngine does it." That's not an attitude I want to encourage.

    Broadly speaking, I would agree that beginners should start off by learning to solve problems, but we should also actively discourage bad habits from forming while they're still learning. Overuse of global state (or use of any global state, depending on whom you ask) is widely considered to be a bad habit. So are deep inheritance hierarchies and the use of inheritance for code reuse, which are widely caricatured features of a lot of late-'90s codebases.
  11. How to learn from Quake source code

    I've read that post before and I stand by my points against the practice of excessive inlining, on the grounds that it doesn't scale all that well and can obscure where things happen when navigating the code. It's worth noting that this post is from 2007, with an addendum from 2014 - quite some time after Quake shipped and still some time before C++11 and C++14 went mainstream. Quite a few of the points he makes actually have nothing to do with code inlining per se - they're more to do with architectural choices. The example he used was an aerospace thing, as well, and I'm not confident that what works for aerospace would work all that well for games.

    For perspective, a few months ago I spotted a function that was multiple thousands of lines long that went something like this:

    void foo()
    {
        // 500 lines
        switch (thing)
        {
        case a:
            // 30 lines follow
            break;
        case b:
            // 1000 lines follow
            break;
        case c:
            // 300 lines
            break;
        case d:
            // 10 lines
            break;
        default:
            // 150 lines
            break;
        }
        // 30 lines
    }

    I ended up extracting each of the case blocks into its own function, because following what was happening when was proving too difficult, even though it had little to do with my task at hand. Carmack's email on inlining would seem to agree that this is a good idea, actually:
  12. How to learn from Quake source code

    "Harder to read" is quite subjective. Having worked on code written this way, and having refactored said code to use multiple smaller functions, I found it much easier to follow when the blocks were shorter. It is claimed that large, monolithic functions are better because they don't introduce unnecessary boilerplate, which is true - but this ignores the fact that the "boilerplate" can be hugely useful when actually navigating the code after the fact, which matters because on large legacy projects we tend to spend more time navigating code than actually writing it. The cost of writing the boilerplate is negligible over the lifetime of the project, and in the case of simply breaking a function out into multiple functions, it does not generally contribute to code bloat if done at the proper granularity, unlike certain other kinds of boilerplate that do little more than add layers of abstraction. Breaking a monolithic function into smaller functions is a transformation that adds value in the long term.

    Modern languages also allow us to make "local functions" - in C++, through the use of local lambdas with captures. One can contain any "clutter" that results from breaking a monolithic function down into sub-functions within the monolithic function itself, rather than polluting the enclosing namespace with external functions, so that particular argument (which I know wasn't mentioned here, but I've heard it argued) doesn't really apply anymore.

    I fundamentally disagree. Commented-out code is, at best, an inferior way to duplicate the functionality of your version control system, and outright misleading at worst. Pray tell, do you actually maintain your commented-out code through refactorings and bug fixes, "just in case"? If you really need that code back, you can just go through your version control's commit/checkin history for the file that contained it. In my experience, chances are good that if you need the same functionality again later, the context will have changed sufficiently that the original code would not work as written anyway. I'd also argue that leaving commented-out code around "just in case it's useful" could contribute to perpetuating a culture of copy-and-paste programming, which I feel most of us can agree is not a culture conducive to producing maintainable code.

    "Good code" has come to mean all of what you've said, plus "code that can be maintained long term without your successors wanting to shoot you in the face." Does the Quake engine meet that specification? If your code works, is efficient, and is bug-free, but is difficult to navigate and modify without breaking stuff, you have not written "good" code. You have at best written "throwaway code that gets the job done." In a world where the industry is moving towards games as a service, and where at the very least most major games get patches, extra content, or even outright sequels built on the same codebase, I would say that throwaway code of any sort is no longer acceptable. I confess I've been forced to write throwaway code to solve a problem at the last minute, like I imagine most programmers have, but I felt really dirty doing it and was constantly aware of how terrible it was that I was doing it.

    edit: I was going to say something on the subject of global state and how it can hurt you in an increasingly parallel world, but others have made that argument in the past and I don't think we need to rehash it for the nth time. Having maintained large C++ codebases where global state was the standard way of doing things, even in a single-threaded environment it caused me more pain than it was worth.
  13. Is Phil Fish a Jerk?

    So you're okay with game developers getting death threats and having their lives potentially destroyed when they piss off internet trolls? And it's the fault of the game developers that they're receiving the death threats, and the people sending them are not culpable at all? I really don't see how this isn't victim blaming. In case you couldn't tell, I am strongly of the opinion that we shouldn't endorse toxic behavior in our culture or our players'. Blaming game developers for being on the receiving end of toxicity, rather than condemning that toxicity, constitutes an implicit endorsement of toxic behavior. It frankly frightens me a little that people who hang out in a game developer forum seem to think that the kind of hate Fish received is okay, or that Fish is culpable for his treatment...
  14. Is Phil Fish a Jerk?

    In other words, an insecure person shouldn't show themselves to the public because they might piss someone off enough that the person they pissed off sends them death threats? That's... still victim blaming. I mean, are you really trying to say that Phil Fish is the aggressor, and the trolls who sent him death threats are the victims?
  15. Is Phil Fish a Jerk?

    It sounds to me like you already made up your mind on the subject of Phil Fish before you made this thread.