About MaulingMonkey

  1. MaulingMonkey

    Non-fatal, undocumented return code

    Exact behavior in practice depends on what debug flags are passed to the compiler, which version of the standard libraries you're linking against (debug or release), whether or not you started the program with the debugger attached, and other stuff I'm likely forgetting. Many of these configurations will use magic numbers, not zero, explicitly to help you track down what would be considered bugs: cases where the C++ standard does not require initialization, and may not initialize at all in Release builds.

    In Release builds, the compiler may make extremely aggressive optimizations around uninitialized memory, such as evaluating two mutually exclusive if statements as both being true, even though common sense says uninitialized memory has only one value and such a thing is impossible.

    There are some circumstances where C++ guarantees data gets zero-initialized (global data, memory returned by calloc ("clear-alloc"), etc.); use those if you want your memory to be zero. Debug patterns like 0xCCCCCCCC help make sure you're not accidentally using something that just happens to sometimes return zero-initialized memory and expecting it to always be zero-initialized (when it may not be in Release builds, on other compilers, when the allocator starts to reuse freed memory, etc.)

    EDIT: There are also more involved tools which cause your program to actually crash when reading uninitialized memory, specifically so you can easily find and fix it, instead of leaving strange bugs in your program that are hard to get to the bottom of. (Additionally, there are ways to have Visual Studio use clang, gcc, and other non-Microsoft compilers - so it doesn't hurt to be specific about what you're talking about :))
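    A minimal sketch of those zero-initialization guarantees (the function and variable names here are mine, invented for illustration, not from the post):

```cpp
#include <cstdlib>

// Static storage duration: the standard guarantees zero-initialization.
static int g_counters[4];

// calloc ("clear-alloc") returns zeroed bytes; malloc makes no such promise.
int first_calloc_int() {
    int* p = static_cast<int*>(std::calloc(4, sizeof(int)));
    int v = p[0];  // guaranteed to be 0
    std::free(p);
    return v;
}

// Value-initialization with {} zeroes a local; a bare `int x;` does not,
// and reading such an x is undefined behavior the optimizer may exploit.
int value_initialized() {
    int x{};  // guaranteed 0
    return x;
}
```

    Anything outside these guaranteed cases is where the debug-pattern fills (0xCCCCCCCC and friends) earn their keep.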
  2. MaulingMonkey

    Nobody Wants A Cybergod?

    This will just cause you to fail the class, which you will blame on the teachers, school, and classmates instead of your behavior. Don't bother.

    You share so little of the actual fundamentals of your ideas that it's impossible for us to evaluate them. You share so many tangential ramblings as to bury what little you have shared. Of course nobody can sanely conclude that the ideas of Rube are worth their time. At best they might try to extrapolate from the metaramblings about and around the ideas of Rube. "I would have written a shorter letter, but I did not have the time." Between this and your attitude, you're a poor communicator here, either by choice, by skill, or both. You manage to gain a little attention with it, but largely the negative kind - little proper interest.

    I'm baffled by your supposed board game bona fides - isn't this the antithesis of a good rule set? Short and concise where possible makes things easier to pick up and quicker to play. Complexity must earn its keep in fun or interest, or be cut. On the pen-and-paper side - sure, my library has thousands of pages of GURPS books alone, covering lore, worldbuilding, settings, mechanics, etcetera. But these are meticulously organized, divided into books, cross-referenced, and edited down to their essential contents, organized into compelling themes, indexed, summarized. None of them ramble on about the affronts of the publishing industry against them, unless we somehow count the Secret Service raid on Steve Jackson Games way back when over GURPS Cyberpunk - ironically topical to the contents of the book. And GURPS Lite is enough to get you started, at 32 pages. For an entire game. The other thousands of pages in my library? Optional extra fluff to keep things going once you've already started. By stark contrast, in the decade since you started posting here, you've started to... more consistently use paragraphs. Which, don't get me wrong, I appreciate, but isn't quite enough.

    It's frustrating to see you squandering your potential like this. Here we are, getting to make games, enjoying the creative process, seeing our creations come to fruition. There you are, blinded by ideas that shine more brilliantly than the sun in your eyes, struggling to execute on them or get others to, trapped by your own nature, your own behavior, unable to see the way forward, only able to see your ideas, labeling large chunks of your life a "waste" a decade later. You're clearly frustrated too. That sucks. Nearly everything has already been said by those better than I at communicating the way forward, however. No sense generating another 500 pages retreading where we've been.

    Close - none of us actually know what you are talking about (and even if we did, it'd be no guarantee of our interest). Learn to communicate your ideas to mere mortals, build your ideas using mere mortal means, pivot to something that will use your ideas in different mere mortal ways, or accept that your ideas are in fact a waste - fool's gold blinding you from action and building things. Take some goddamn responsibility for your own life and what you can do with it instead of blaming some strangers on the internet for not having done things with it for you. If your entire life is wasted, that is because of you.

    If you pivot - maybe fiction. NaNoWriMo? You clearly like to write. You have imagination, and love metaphor to the point of it being a problem in design and technical discussions. You love language charged with your own meanings and vocabulary - again, problematic, when the rest of us are on a completely different wavelength where your terms already had very different definitions at times. Science fantasy can get away with not making technical sense to the reader, although plot and narrative structure will be important things to learn. You may end up screaming bloody murder when an editor gets their hands on your work. Learn to accept this when they are perhaps "ruining" your work.

    If you don't pivot or abandon your ideas, you must learn to communicate better on design and technical topics. The most genius and experienced among us will go entirely to waste if they cannot communicate. And are they truly genius if they cannot learn this? And even a fool can be a valuable member of a team if they can communicate.

    My apologies for not taking the time to make this shorter. And good luck. I mean it.
  3. MaulingMonkey

    HLSL banding - precision issue?

    Fire up an image editor and use the dropper tool. Most of the banding I'm seeing is single color-level changes (e.g. #373737 -> #383838) without enough texturing to obscure the effect - one band per drop. I think I found *one* drop out of 10+ where it was 2 levels instead. You *are* hitting the precision limits of what 8-bit buffers and monitors can give you. Much of the scene obscures the banding effect with textured backgrounds, which have a much greater range and effectively drown out the banding with texture. The places where this doesn't happen are where you have your single, solid, untextured light as the dominant or only contribution to the color channel. Aside from having the background contribute more (that concrete wall has easily +/-32 levels of color difference just pixels apart, easily drowning out the effects of banding), you could also try texturing your volumetric lighting - think dust particles catching the light, etc. And as Unbird mentions, dithering would help as well (at least if you've got e.g. HDR render buffers that you can get more than 8 bits of precision out of in the first place to dither *from*).
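    For illustration, a CPU-side sketch of the dithering idea - ordered (Bayer) dithering from a high-precision value down to 8 bits. A shader would do the same per pixel; the function name and the 4x4 Bayer matrix choice are my own, not from the thread:

```cpp
#include <cmath>
#include <cstdint>

// Standard 4x4 Bayer thresholds, normalized to [0, 1).
static const float kBayer4[4][4] = {
    { 0/16.f,  8/16.f,  2/16.f, 10/16.f},
    {12/16.f,  4/16.f, 14/16.f,  6/16.f},
    { 3/16.f, 11/16.f,  1/16.f,  9/16.f},
    {15/16.f,  7/16.f, 13/16.f,  5/16.f},
};

// Quantize a linear intensity (0..1) to 8 bits, offsetting each pixel by
// less than one quantization step before rounding so that flat gradients
// break up into a noise-like pattern instead of visible bands.
uint8_t dither_to_8bit(float intensity, int x, int y) {
    float step   = 1.0f / 255.0f;
    float offset = (kBayer4[y & 3][x & 3] - 0.5f) * step;
    float v      = intensity + offset;
    if (v < 0.f) v = 0.f;
    if (v > 1.f) v = 1.f;
    return static_cast<uint8_t>(std::lround(v * 255.0f));
}
```

    A mid-gray like 0.5 then lands on a mix of neighboring levels (127 and 128) across a 4x4 tile, which averages out to the intended value - but only if the source really has more than 8 bits of precision to begin with.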
  4. I treated matrices as a black box until I had to debug a d3d9 -> opengl es 2 porting bug involving them, at which point I built myself a solid geometric understanding of 3x3 and 4x3 matrices, and at least enough of an understanding of 4x4 matrices to debug them.  (The bug was not realizing glUniformMatrix4fv ignored the transpose parameter in ES2.) As a single reject reason, it seems rather picky for a junior position.  If you've got a flood of well qualified candidates, it might be reasonable to be this picky (especially if they're expected to start working on graphics code right away).  If you've got a trickle and several slots urgently in need of filling, it may be *very* worthwhile to give them a shot - you can always try and mitigate some of your risk by trying them out as a contractor first.  If they work out, you can always offer them an upgrade to full time employment.
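    The ES2 wrinkle above comes down to: glUniformMatrix4fv must effectively be given transpose = GL_FALSE, so a row-major matrix has to be transposed on the CPU before upload. A sketch of that workaround (GL calls shown only in a comment so this compiles standalone; names assumed):

```cpp
// Convert a row-major 4x4 matrix to the column-major layout OpenGL expects.
// out[col * 4 + row] walks columns fastest, matching GL's memory convention.
void transpose4x4(const float in[16], float out[16]) {
    for (int row = 0; row < 4; ++row)
        for (int col = 0; col < 4; ++col)
            out[col * 4 + row] = in[row * 4 + col];
}

// Usage sketch (GLES2):
//   float colMajor[16];
//   transpose4x4(rowMajor, colMajor);
//   glUniformMatrix4fv(loc, 1, GL_FALSE, colMajor);
```

    Debugging exactly this kind of layout mismatch is much easier with the geometric understanding mentioned above: the first three columns of a column-major 4x3/4x4 transform are the basis vectors, the fourth is the translation.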
  5. MaulingMonkey

    Collision with deforming meshes

    To recap some points from discord:
      • Low poly meshes are still a thing (you don't use your high-poly render mesh for bullet hitboxes or nav meshes either; you use a lower-poly equivalent)
      • Tracking your current triangle means you don't need a line-mesh intersection test against the whole mesh every frame (just scan neighboring triangles instead)
      • Cheat and limit what your animators can do, thereby simplifying your options by letting you assume more - such as having them not flip the model, letting you cull the entire stomach mesh, etc.
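    The "track your current triangle" point can be sketched roughly like this (all names invented; `inside` stands for whatever point-vs-triangle test the collision system uses):

```cpp
#include <vector>

// Per-triangle adjacency: indices of the triangles sharing each edge, -1 if none.
struct Tri { int neighbors[3]; };

// Re-test the cached triangle first, then its immediate neighbors, and only
// fall back to a full mesh scan when the object moved further in one frame.
template <typename Inside>
int locate(const std::vector<Tri>& mesh, int current, Inside inside) {
    if (inside(current)) return current;        // common case: didn't leave it
    for (int n : mesh[current].neighbors)       // next: one triangle over
        if (n >= 0 && inside(n)) return n;
    for (int i = 0; i < (int)mesh.size(); ++i)  // rare fallback: full scan
        if (inside(i)) return i;
    return -1;                                  // left the mesh entirely
}
```

    The fallback scan keeps the sketch correct for teleports and fast movers; in the steady state each frame costs one or a handful of triangle tests instead of a whole-mesh intersection.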
  6. MaulingMonkey

    "Don't read from a subresource mapped for writing" ...

    > That's just ridiculous (discount the example about generated assembly).

    It gets worse: there are a couple of additional strong recommendations - always write sequentially, and don't leave any holes. IIRC violating the latter rule can even result in a cache-line read by the CPU on at least one arch, although I can't find a proper source/docs for that.

    My golden rule now is: always use memcpy. Always. For the entire buffer. *Always*.
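    The golden rule can be sketched without real D3D so it compiles anywhere; here plain memory stands in for the Map()'d write-combined buffer, and the function name is invented:

```cpp
#include <cstdint>
#include <cstring>
#include <vector>

// Build the frame's data in an ordinary cached staging buffer first, then
// write it into the "mapped" region with one sequential, hole-free memcpy -
// never read from, or poke scattered holes into, write-combined memory.
std::vector<uint8_t> upload_copy(const std::vector<uint8_t>& staging) {
    std::vector<uint8_t> mapped(staging.size());  // stand-in for Map()'d memory
    if (!staging.empty())
        std::memcpy(mapped.data(), staging.data(), staging.size());
    return mapped;
}
```

    With the real API you'd memcpy into the pointer returned by Map() and then Unmap(); the point is that the CPU-side writes happen once, forward, and cover the whole buffer.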
  7. The basic tradeoff here is merge early/often vs merge late/rarely. If features A-D are small, I'd argue "who cares" - cherry-picking or temporarily disabling changes is easy enough that I wouldn't bother optimizing my workflow around it. So let's assume they're large.

    "Feature D" doesn't get merged with the changes introduced by "Feature A" and "Feature B" until the last minute in your workflow. Refactoring conflicts will be a real mess to resolve, near impossible to properly review, and impossible to bisect for problems. I have repeatedly, when faced with such problems in the past, rebased "Feature D" atop "QA" or incrementally merged "QA" into "Feature D". This makes it much easier to highlight refactoring conflict resolutions that need proper vetting in code reviews, makes the merge much more reviewable, and makes the history bisectable in a useful manner. Unfortunately, this is basically turning your workflow back into the original one - with all its problems. If we suddenly decide Feature B is out, I'm still SOL. Still, I'd generally rather have those problems than megamerge problems. Maybe your situation is different enough that you'd rather make a different tradeoff somehow? An alternative option possibly worth mentioning here: feature toggles.

    > If someone in QA notices a bug on the live site, do you create a Hotfix branch from your current master/development branch or the old dead-end release branch?

    You don't actually solve this conundrum with your workflow any better ;). Hotfixes should generally always be branched from whatever commit is live ("master" here?) - or the previous unshipped hotfix - to minimize the scope of the hotfix, and thus reduce the risk that you'll introduce additional bugs that won't be caught by the typical vetting process of soaking in the Dev/QA branch for a while (which you're mostly skipping) - and then merged back into "Development" and/or "QA" at your earliest convenience.

    Speaking of broken CSS: it's impossible to see what text I've selected for copy/paste on Chrome on that site.

    > Release day! We get news Feature C is being removed.

    I don't think I've had this happen once in my years of game development. Occasionally we'll roll back a recent change for stability, but that's about it. Having *this* late notice sounds weird to me. Having it happen regularly enough to base your branching strategy around it - well, hey, maybe you do have a use case, but I don't think this is a "closed source" issue per se ;)
  8. Does seem to be broken.  A working download can be found here:   (Broken links are @ )
  9. MaulingMonkey

    Tiny Serialization Library

    Most of the low-level I/O, if it fails, returns uninitialized data at best (which makes debugging nice and nondeterministic). There's no sane bounds checking - it appears trivial to create a tiny file which will eat all your memory when using read_string. If you're lucky, it will throw std::bad_alloc or dereference null (depending on whether exceptions are available or not). If you're unlucky, you'll exhaust all available memory and have OOM crashes later elsewhere.

    I wouldn't consider this usable even in a non-hostile environment currently - savegame corruption happens, and I don't want it crashing my titles on startup. In a hostile environment, your attacker will use l=0xFFFFFFFF; the call to new char[0xFFFFFFFF+1] will succeed - it's the same as new char[0], which returns a unique non-null pointer. The resulting read() call *without* the +1 will then be the start of the buffer overflow, likely probed for possible use in code injection attacks...

    Throw SDL MiniFuzz or other fuzzing tools at this if you want to harden it up...
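    For contrast, a sketch of the kind of bounds checking being asked for. The signature is invented, not the library's actual API, and it assumes a little-endian length prefix as on x86:

```cpp
#include <cstdint>
#include <cstring>
#include <string>

// Reject lengths that exceed what the stream can still provide *before*
// allocating, so a corrupt or hostile l=0xFFFFFFFF fails cleanly instead of
// triggering a huge allocation or a buffer overflow.
bool read_string(const uint8_t* data, size_t size, size_t& pos, std::string& out) {
    if (size - pos < 4) return false;   // need a 4-byte length prefix
    uint32_t len;
    std::memcpy(&len, data + pos, 4);   // little-endian host assumed
    pos += 4;
    if (len > size - pos) return false; // length can't outrun remaining bytes
    out.assign(reinterpret_cast<const char*>(data + pos), len);
    pos += len;
    return true;
}
```

    Returning a bool (or an error code) also fixes the "uninitialized data on failure" problem: the caller is forced to notice the failure instead of reading garbage.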
  10. MaulingMonkey

    Floating Point Constants

    Of course, do note that simple decimal expressions like "0.3" have an infinite number of digits in binary fraction form and will suffer additional rounding if coerced to float.
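    A quick check of that extra rounding step (standard C++, assuming the usual IEEE-754 float/double; the function names are mine):

```cpp
// 0.3 has an infinite binary expansion, so the nearest double and the nearest
// float to 0.3 are different numbers, and coercing the double literal to
// float rounds it a second time.
bool float_matches_coerced_double() {
    return static_cast<float>(0.3) == 0.3f;   // both round to the same float
}
bool float_equals_double() {
    return static_cast<double>(0.3f) == 0.3;  // false: 0.3f widened != 0.3
}
```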
  11. MaulingMonkey

    Software for at-a-glance overview / organizing of 9001 Projects?

    Nothing out there is quite like what I want. Rolling my own pile of hacks: needs more icons, a better sorting metric, more accurate metrics in general, and better handling of branches.
  12. MaulingMonkey

    Software for at-a-glance overview / organizing of 9001 Projects?

    Zol on #gamedev suggests some sort of lightweight sourceforgelet stack that you run locally to quickly share local projects (or their information) with others.
  13. I've been digging into my projects\dead\ folder recently, a dumping ground of projects I've stopped working on. There's some stuff well worth salvaging for use in new projects in here. This process is daunting, despite 'only' containing 31,705 files out of the 64,473 files in my projects\ folder.

    I want a way to better document my hundreds of throwaway projects. I'm going through, renaming project folders based on language and branch count, adding short descriptions, taking screenshots, etc. - but this is inefficient, slow, and isn't all that ideal for browsing. I'm after a better at-a-glance view of all my projects. VCS websites like github and sourceforge tend to at least have a project list with names and descriptions - I'm looking for something similar and more advanced that works locally.

    Before I roll my own, is there anything already existing out there? If so, my google-fu is failing me. Failing that, what are some features that could be useful? Screenshots, short descriptions, tagging, statistics (language, loc/commit/branch count), pending changes, and VCS and website tracking all come to mind.
  14. > Not to question your ultimate authority, but if it happens with the consent of the OP, and with good manners, what's the problem exactly?

    A long, sordid history of language flamefests degenerating into complete and utter shite on this forum, as they attract every lurker who doesn't know what they're talking about but has strong (and wrong) "opinions" and "facts" about programming languages. This derails and prematurely ends thread after thread, either because the concentrated stupid drives away the rest of the participants, or baits them into participating, or gets bad enough that a moderator has to close it. Premature thread death is seen as bad for some reason, so effort is taken to avoid it.

    Wack's bringing up lists of C#, Java, and C++ "pros/cons". Without reading too closely (as, like many members, I've taken to skimming past this stuff when it comes up, as it comes up so often), that certainly smells like it's heading toward the rather tangential. And - let's assume I'm full of shit and it's completely on topic (since I am skimming, after all) - it's still exactly the kind of thing that will attract the wrong sort of people and conversation.

    So what can be done? Well, if you're Promit, you can tell the wrong sort to fuck off before they even start, and remind the right sort not to fall into the trap again. And now you know (tm)
  15. MaulingMonkey

    Are dirty coding techniques okay to use?

    Even modifying 3rd party headers is preferable to the complete evil that is random address offsets. I don't care if Jesus smacked you with a bible-by-4 and said "thou shalt use this address instead of modifying mine headers" -- I'd still prefer it over this nonsense. No code is perfect, and practicality takes precedence over trying to be perfect -- but even if you're going to do the wrong thing, don't use that as an excuse to not even try. Asserts, sanity checking, a few comments warning about how you're expecting it to break and why... try to make it so that, when it inevitably breaks, that's okay because the how and why are clear. Random address offsets aren't going to break cleanly nor clearly. Down that path lies memory corruption, heisenbugs, and crashes 2 hours down the line in a completely unrelated part of your program.
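    One cheap way to apply the "make it break cleanly" advice when you do depend on someone else's layout: pin your assumptions at compile time, so a header update fails the build instead of corrupting memory at runtime. The struct here is a made-up stand-in for a vendor type you can't edit:

```cpp
#include <cstddef>

// Stand-in for a third-party struct whose layout you're forced to rely on.
struct ThirdPartyThing {
    int   id;
    float health;
};

// Expected layout, asserted rather than silently assumed: if the vendor
// header changes, compilation stops with a message explaining why.
static_assert(offsetof(ThirdPartyThing, health) == 4,
              "vendor struct layout changed: update dependent offsets");
static_assert(sizeof(ThirdPartyThing) == 8,
              "vendor struct size changed: dependent offsets are stale");
```

    The static_asserts cost nothing at runtime, and they turn "crashes 2 hours later in unrelated code" into a build error with a comment explaining the dependency.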