Agony

Members
  • Content count

    3298
  • Joined

  • Last visited

Community Reputation

3452 Excellent

About Agony

  • Rank
    Contributor
  1.   Yeah, but I'd argue that there are far more cases in which the auto keyword helps to avoid unintended silent conversions than cases in which it causes them.  That's been my personal experience, anyway.
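
    A quick illustration of the kind of silent conversion auto avoids; this is a hypothetical snippet, not code from the thread:

```cpp
#include <vector>

void Example(const std::vector<int>& values)
{
    int countA  = values.size();   // silent conversion: size_t narrowed to int
    auto countB = values.size();   // deduced as std::size_t; no conversion at all

    // Iterator types are another common case: auto always matches exactly
    // what begin() returns, so nothing is converted behind your back.
    auto it = values.begin();

    (void)countA; (void)countB; (void)it;
}
```
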
  2. OpenGL Vulkan is Next-Gen OpenGL

      There was some discussion about this back when Apple's Metal API came out.  I think the opinion of the administrators at the time was to wait and see what the posting patterns are like once all the new APIs are available and in common use, rather than preemptively reorganizing the forums based on speculation about what those patterns will be.
  3. I struggled with this recently myself.  The recommendation to only reject when all vertices are on the negative side of the same plane bothered me because it allows some false positives.  But it's also a lot more efficient than more complex solutions, depending on how costly false positives are.  Here's an example of a false positive that was tripping me up.  It gets even worse in 3D, where none of the box's corners are within the frustum, none of the frustum's corners are within the box, at least some corners are on the positive side of every plane, and it's still unclear whether or not the volumes intersect.  If you want perfect clarity, with no false positives or false negatives, you'll need something more.

     My solution was the following (a sketch of the edge-clipping step follows below):

     • Check all eight corners of the box against the six planes of the frustum.
     • If any corner is in the positive region of all six planes, immediately accept the box and skip the rest of the algorithm.
     • If all eight corners are in the negative region of any single plane, immediately reject the box and skip the rest of the algorithm.
     • (Optionally do the inverse, checking the eight corners of the frustum against the six planes of the box.)
     • Find the equations for the twelve line segments that make up the frustum's edges.  I found it convenient to use a ray structure (starting point and non-normalized direction vector).
     • For each line segment, geometrically subtract all six planes of the box from it, noting if it gets subtracted away completely.  (Each ray starts as parameterized from t = 0 to 1, and I just increase the minimum t or decrease the maximum t depending on how and where the ray intersects the plane.  When min >= max, the line is zero length.)
     • If even a single line out of the twelve still has a positive length, then the box is accepted; otherwise the box is rejected.

     I'm pretty certain that the above is "perfect" in that there are no false positives or false negatives; it's an exact intersection test that handles all edge cases.  (And conveniently, it should work for any pair of convex polyhedra, not just a rectangular box and a frustum.)  But I have no clue whether it's optimal or close to optimal for that generic problem set.

     And after having written all of the above, I'm now reassessing whether it's really worth doing all those computations just to eliminate false positives.  I guess the only way to be sure is profiling, eh?
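
    Here's a minimal C++ sketch of the edge-clipping step, assuming a plane convention where points p with dot(n, p) + d >= 0 are on the positive (inside) side; the Vec3/Plane/Ray types are my own stand-ins, not code from the thread:

```cpp
#include <array>

struct Vec3  { float x, y, z; };
struct Plane { Vec3 n; float d; };  // dot(n, p) + d >= 0 means p is inside
struct Ray   { Vec3 start, dir; };  // segment parameterized over t in [0, 1]

float Dot(const Vec3& a, const Vec3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Clip the segment [tMin, tMax] against one plane, keeping the inside part.
// Returns false if the segment has been subtracted away completely.
bool ClipSegment(const Ray& ray, const Plane& plane, float& tMin, float& tMax)
{
    float startDist = Dot(plane.n, ray.start) + plane.d;  // signed distance at t = 0
    float delta     = Dot(plane.n, ray.dir);              // change in distance per unit t

    if (delta == 0.0f)
        return startDist >= 0.0f;  // parallel: fully inside or fully outside

    float t = -startDist / delta;  // t at which the ray crosses the plane
    if (delta > 0.0f)
        tMin = (t > tMin) ? t : tMin;  // heading inward: raise the minimum
    else
        tMax = (t < tMax) ? t : tMax;  // heading outward: lower the maximum

    return tMin < tMax;  // min >= max means zero length
}

// True if any part of a frustum edge survives clipping against all six box planes.
bool EdgeIntersectsBox(const Ray& edge, const std::array<Plane, 6>& boxPlanes)
{
    float tMin = 0.0f, tMax = 1.0f;
    for (const Plane& plane : boxPlanes)
        if (!ClipSegment(edge, plane, tMin, tMax))
            return false;
    return true;
}
```
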
  4. Sometimes you can get away with dropping the requirement for bit-for-bit identical results in the later stages of generation, as long as the early stages are perfectly identical.  Depending on the particular kind of procedural generation you're doing, this can make things much easier.  Perhaps most of the numerical instability exists during the early/mid stages of generation, where you can deal with it by using fixed-point math, while much of the heavy number crunching happens during the later stages, where you can switch to floating point and perhaps offload some of the work to the GPU.  Even if different GPUs produce different results, if the late-stage algorithms are numerically stable, they might still be close enough to work.
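
    For reference, a bare-bones 16.16 fixed-point sketch of the sort those deterministic early stages could use; the format and helper names are my own assumptions:

```cpp
#include <cstdint>

// 16.16 fixed point: 16 integer bits, 16 fractional bits.  Integer math like
// this is bit-for-bit reproducible across platforms, unlike floating point.
using Fixed = std::int32_t;

constexpr Fixed FixedFromInt(int v) { return static_cast<Fixed>(v) * 65536; }

constexpr Fixed FixedAdd(Fixed a, Fixed b) { return a + b; }

constexpr Fixed FixedMul(Fixed a, Fixed b)
{
    // Widen to 64 bits so the intermediate product can't overflow.
    return static_cast<Fixed>((static_cast<std::int64_t>(a) * b) >> 16);
}
```
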
  5. Random Generation Issues

    You need to create a single instance of Random(), and then call Next() on it multiple times, once for each tile.  That's the appropriate way to use a PRNG: seed it once, and pull numerous random numbers from it.   My quick recommendation: add an int parameter to the Tile constructor and to the Tile.Generate() method, pass that parameter from the constructor to Generate(), and use it within Generate() in place of x.  Then, from within the TileGenerator.Populate() method, create a single new instance of Random() and pass a call to Next() as the additional parameter that you just added to the Tile constructor.
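
    The thread is C#, but the same seed-once, draw-many pattern sketched in C++ looks like this (the Tile and Populate names just mirror the post; the value range is arbitrary):

```cpp
#include <random>
#include <vector>

// Hypothetical Tile mirroring the thread's C# class: the constructor takes
// the per-tile random value instead of creating its own generator.
struct Tile
{
    explicit Tile(int randomValue) { Generate(randomValue); }
    void Generate(int randomValue) { type = randomValue % 4; }
    int type = 0;
};

std::vector<Tile> Populate(int tileCount)
{
    // Seed one generator once, then draw from it repeatedly -- the same
    // advice as creating a single C# Random and calling Next() per tile.
    std::mt19937 rng(std::random_device{}());
    std::uniform_int_distribution<int> dist(0, 999);

    std::vector<Tile> tiles;
    tiles.reserve(tileCount);
    for (int i = 0; i < tileCount; ++i)
        tiles.emplace_back(dist(rng));  // one draw per tile
    return tiles;
}
```
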
  6. C# has a similar behavior, which I thought was odd given how much less undefined behavior .NET allows.  But accepting that as the way it is, I was recently looking into people's recommendations for what type of exception to throw in C# for the same situation, and I encountered the suggestion to throw a NotImplementedException.  The reasoning is that in the future you (or someone else) might add a new value to the enum, but forget to update every last location in code that switches on that enum.  In that scenario, complaining loudly at runtime that a path hasn't yet been implemented seems appropriate.   Granted, having the compiler complain at compile time that not all paths return a value (or that a variable is used before being initialized, in C#) is even better, but not all uses of a switch statement would produce that warning/error.  Even if they would, the problematic switch could be in an already compiled executable or library file which is still binary compatible with the expanded enum.  With no chance for the compiler to warn the developer, a noisy runtime error would still be welcome.
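
    The C++ analogue of that defensive pattern might look like this; the Direction enum and ToName function are made-up names for illustration:

```cpp
#include <stdexcept>
#include <string>

enum class Direction { North, East, South, West };

std::string ToName(Direction d)
{
    switch (d)
    {
        case Direction::North: return "north";
        case Direction::East:  return "east";
        case Direction::South: return "south";
        case Direction::West:  return "west";
    }
    // Reached only if a new enumerator is added (or an invalid value is cast
    // in) without this switch being updated -- complain loudly at runtime.
    throw std::logic_error("ToName: unhandled Direction value");
}
```
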
  7. I could see this going in two radically different directions.  The first would be to focus on humor and parody, poking fun at cults and playing up caricatures and stereotypes.  The second would be more serious in nature, allowing a player to explore the social dynamics, psychology, and ethical issues surrounding cults (with or without trying to send a particular message).   I imagine that the game mechanics, aesthetics, and marketing would all be heavily dependent on which of these routes you were meaning to pursue.  There might be some other options too, which would likely have similar effects on the production process.  My biggest warning is to not get stuck between two or more substantially different routes, because then it won't be clear what you're trying to achieve with the game, what type of experience you're attempting to generate among your audience, or even who your target audience is.
  8. Controls in Scrolling Windows

    Which particular UI framework are you using?  WinForms, WPF, something else?  And have you looked at the relevant documentation for the position properties?  It might explicitly tell you whether the property returns the position relative to the parent's virtual area (pre-scroll), relative to the parent's visible area (post-scroll), relative to the screen, et cetera.  And if it doesn't provide the value you want, there's likely another set of properties that does, or at least some methods that will convert from one frame of reference to another.  (Client-to-screen and screen-to-client are common.  And note that the client area typically refers to the visible area of the control, so any position that is relative to the client area of the parent will produce the effect that you see; it is a position relative to the visible origin of the parent.)   After a bit of research: if you're using System.Windows.Forms (WinForms), this MSDN page might help:  ScrollableControl.AutoScrollPosition
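
    The thread is WinForms-specific, but for illustration, the raw Win32 conversions underneath those wrappers look like this (hwnd is assumed to be a valid window handle):

```cpp
#include <windows.h>

void ConvertBothWays(HWND hwnd)
{
    POINT pt = { 10, 20 };       // a point relative to hwnd's client area
    ClientToScreen(hwnd, &pt);   // pt is now relative to the screen origin
    ScreenToClient(hwnd, &pt);   // and back to client coordinates again
}
```
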
  9. C++ VirtualFunctions [SOLVED]

      An observation: this could be seen as related to what is known as the ABA problem, where a piece of data changes from the value A to B and then back to A.  Any bit of code that needs to be aware of changes to this data, but only detects changes by comparing the current value to the last value it saw, might miss that a change occurred at all.   The similarity is that one might naively hope to look at the current value of that piece of data and easily determine the correct course of action, but in reality there might have been multiple ways that the value became what it currently is, and the correct course of action depends not just on the current value but also on how that value came to be.  With the double-deleted pointer, as you noted, there are indeed ways for the pointer to technically point to legitimately allocated memory even though it shouldn't be construed as pointing to anything valid at all, because it had already been deleted.
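
      A contrived sketch of the pointer flavor of this; whether the allocator actually reuses the address is implementation-dependent, and comparing a dangling pointer is itself dubious, but it illustrates the trap:

```cpp
#include <cstdio>

int main()
{
    int* first = new int(1);
    int* stale = first;   // a second copy of the same address
    delete first;         // both copies are now dangling

    int* second = new int(2);  // the allocator may hand back the same address
    if (stale == second)
        std::puts("stale compares equal to a live allocation, yet was never reassigned");

    delete second;
}
```
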
  10. C++ VirtualFunctions [SOLVED]

    Even then, I'd highly recommend writing your own set of smart pointers and using those as often as possible, if you're not already doing so.  Smart pointers as a concept are quite simply awesome tools perfectly suited for just about all memory management roles not already satisfied by the stack itself or the various standard containers, and it'd be a shame not to take advantage of the benefits they offer.   I've implemented various smart pointers a few times myself (as well as written my own string, vector, and map classes), and the learning experience alone is valuable.  These days I use the standard utilities in almost all cases, but that knowledge helps me understand why I'm doing so, how to use them correctly, the costs (if any) I'm paying for the convenience, and so on.  It's rare that I feel the need to roll my own, now that I've done so and understand most of the internals (or at least understand enough to know that the experts can take the standard implementations well beyond what I would easily be able to do).
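
    For flavor, here's a minimal move-only smart pointer in the spirit of std::unique_ptr; a learning sketch, not a replacement for the standard one:

```cpp
template <typename T>
class ScopedPtr
{
public:
    explicit ScopedPtr(T* p = nullptr) : ptr_(p) {}
    ~ScopedPtr() { delete ptr_; }

    ScopedPtr(const ScopedPtr&) = delete;             // exactly one owner:
    ScopedPtr& operator=(const ScopedPtr&) = delete;  // no copying allowed

    ScopedPtr(ScopedPtr&& other) noexcept : ptr_(other.ptr_) { other.ptr_ = nullptr; }
    ScopedPtr& operator=(ScopedPtr&& other) noexcept
    {
        if (this != &other)
        {
            delete ptr_;        // release whatever we currently own
            ptr_ = other.ptr_;  // steal the other pointer's resource
            other.ptr_ = nullptr;
        }
        return *this;
    }

    T* get() const { return ptr_; }
    T& operator*() const { return *ptr_; }
    T* operator->() const { return ptr_; }

private:
    T* ptr_;
};
```
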
  11. All the numbers you listed are presented in a decimal representation, easy for humans to comprehend.  But they were originally stored in binary, and so each decimal representation is almost certainly an approximation, not an exactly identical value.   The result is that when you add 2.475 and 29.3135 to get 31.7885, you're not actually doing the exact same math that the computer is doing.  It is adding two numbers that are really close to 2.475 and 29.3135 and getting a result that is really close to 31.7885, but different enough to matter at the computational level.  The same goes for the other set of numbers you presented.  I wouldn't be surprised if the two additions were producing identical results in the binary representation, meaning that your less-than comparison returns false no matter which way it is done, and the consumer of the comparison is free to pick either one as the best.  It's even possible that the decimal representations lose enough accuracy due to rounding that the second pair of numbers, when added in their binary form, actually sums larger than the first pair.  That is, if the first two numbers were both rounded up and the second two were both rounded down, then you might incorrectly expect the first two summed to be larger than the second two, when that's not actually the case.   In the end, it's usually best to structure your use of floating point numbers such that very minor differences of this sort don't really matter.  Is it really a problem that the comparator thinks that #1 ranks higher than #2 in this case?  The path finder ought to spit out nearly identical, nearly optimal paths either way.   And if you want to dive deep into some research and gain some very valuable knowledge on the subject, I'd recommend What Every Computer Scientist Should Know About Floating-Point Arithmetic and similar articles.
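
    You can see the approximation directly by printing more digits than the decimal literals contain (the exact digits shown are typical for IEEE 754 doubles, hence the "e.g."):

```cpp
#include <cstdio>

int main()
{
    double a = 2.475;
    double b = 29.3135;

    // 17 significant digits are enough to round-trip a double exactly,
    // exposing the stored values behind the tidy decimal literals.
    std::printf("a     = %.17g\n", a);      // e.g. 2.4750000000000001
    std::printf("b     = %.17g\n", b);
    std::printf("a + b = %.17g\n", a + b);  // close to, but rarely exactly, 31.7885
}
```
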
  12. I haven't worked with UE4 much, and am only beginning to feel comfortable with Unity, but one thing I've noticed about Unity's framework is that the entities and components are structured very nicely, yet they hide too many details of the underlying systems for my liking.  How is each component stored in memory?  How does each system iterate over its specific component instances, or its combination of multiple interrelated components?  Ideally, not every component would be stored identically, or iterated over identically, or looked up identically, et cetera.   It feels like the Unity team has done a pretty good job of adhering to the data-oriented design philosophy in some regards, but they've made many of the design choices already and hidden the details within the private portions of the engine.  Which means that they've had to make generic design choices that work well enough for as many game elements and as many types of games as possible.  Which then severely weakens a developer's ability to exercise data-oriented design effectively.  The design depends on the data, and different games and different systems have and use data in vastly different ways, but a Unity developer can only do so much when trying to design appropriately.  It's like Unity is so close to awesome, and yet so far away.   So I don't think that Unity has problems because of its purity with the ECS concept, but because of the genericity forced by the closed nature of the engine.  Despite that, and as implied above, I do find it to be adequate.  It's good enough, but it doesn't achieve the awesomeness that I think its overall design grants it the potential to become.
  13. OpenGL Vulkan is Next-Gen OpenGL

    News:  It won't be done in 2015 (no surprise), but soon.  The details are annoyingly still vague; I'm not sure who that benefits or how, but apparently someone thinks keeping a lid on as much info as possible is good for those who control the info, or for some of their partners.  From www.khronos.org/vulkan:
  14. User defined functions?

    If you're making a dynamically loaded library, it doesn't really make sense to have the ordinary entry point.  Your user's program is going to load your library and call into it to start it.  (And your user's program will therefore need to have the appropriate entry point for the target platform.)   This means that you can define whatever initialization functions you want your users to call, and require that they pass whatever data or function pointers you need to call back into their code for custom behavior.   If you want to keep your current architecture of defining the entry point yourself, then it seems to me you should provide your framework as an executable, and then it would be your user's code which is a dynamic library, and you would load their library yourself, finding it based on path or program parameter or whatever other technique you choose.  You'd then need to use the platform-appropriate calls to search for the proper initialization functions within your user's library and call those.
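
    A bare-bones sketch of the first approach, where the library exports initialization functions and the user registers callbacks; every name here is hypothetical, not a real API:

```cpp
#include <cstdio>

struct FrameworkCallbacks
{
    void (*onStartup)(void* userData);
    void (*onUpdate)(void* userData, float dt);
    void* userData;  // handed back to every callback unchanged
};

// --- Library side (would normally live in the shared library) ---
static FrameworkCallbacks g_callbacks = {};

extern "C" void FrameworkInitialize(const FrameworkCallbacks* callbacks)
{
    g_callbacks = *callbacks;
    if (g_callbacks.onStartup)
        g_callbacks.onStartup(g_callbacks.userData);
}

extern "C" void FrameworkRunFrame(float dt)
{
    if (g_callbacks.onUpdate)
        g_callbacks.onUpdate(g_callbacks.userData, dt);
}

// --- User side: the user's program owns the real entry point ---
static void MyStartup(void*)          { std::puts("game startup"); }
static void MyUpdate(void*, float dt) { std::printf("update, dt = %f\n", dt); }

int main()
{
    FrameworkCallbacks cb = { &MyStartup, &MyUpdate, nullptr };
    FrameworkInitialize(&cb);
    FrameworkRunFrame(0.016f);
}
```
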
  15. Signed Rotation

    The purpose of the first constant and the & with value is to get the sign bit and nothing else (every other bit is forced to 0).  The second constant does the opposite: it makes sure that the half of the expression after the | does not contain the sign bit (that bit is always forced to 0).  Then, when the two halves are or-ed together, their bits are guaranteed not to overlap or interfere.  So we're basically merging the original sign bit with the newly rotated non-sign bits.   I forgot to mention it yesterday, but keep in mind that negative numbers are almost always represented using two's complement, which means that flipping the sign bit does not simply negate the number while leaving its magnitude unchanged.  Depending on how you use the results of a sign-preserving rotation, this might fail to produce the values you are looking for.
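
    Putting the two masks together in C++, assuming 32-bit values (so the constants are 0x80000000 and 0x7FFFFFFF):

```cpp
#include <cstdint>

// Rotate the 31 non-sign bits left by 'count' while leaving the sign bit alone.
std::uint32_t RotateLeftKeepSign(std::uint32_t value, unsigned count)
{
    const std::uint32_t signMask    = 0x80000000u;  // isolates the sign bit only
    const std::uint32_t nonSignMask = 0x7FFFFFFFu;  // isolates the other 31 bits

    count %= 31;  // a full 31-bit rotation is a no-op

    std::uint32_t bits    = value & nonSignMask;
    std::uint32_t rotated = ((bits << count) | (bits >> (31 - count))) & nonSignMask;

    // Merge the untouched sign bit with the rotated non-sign bits; the masks
    // guarantee the two halves can't overlap.
    return (value & signMask) | rotated;
}
```
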