
Ravyne

Member Since 26 Feb 2007
Last Active: Today, 04:34 PM

#5261406 Armour penetration and firearms

Posted by Ravyne on 10 November 2015 - 02:17 PM


Quote:
So your simplified system might cater to group one (which may or may not be the majority of players anyway), but will certainly drive away group 2. To them, the metagame and number crunching IS the game; playing the game and winning against less-informed players is just collecting the prize for their hard work.

 

I don't have numbers, but I think it's probably safe to assume that if your game leans toward simulation, you want more (and more detailed) choices, because the crunchers will be a significant portion of the player base (though I would contend that crunchers are not, in fact, the majority in any but the most hardcore/niche simulations). Looking at racing games, the likes of Forza, Gran Turismo, or F1 are deep simulators where the details are necessary -- still, Forza and Gran Turismo, and to a lesser degree F1, have a majority audience who are not crunchers. Even when those non-crunchers make choices -- because they're forced to, or because they think they should -- they don't fully understand the choices they're making and will often choose in ways that don't benefit them.

 

Arcade-style racing games go the other direction, with very few choices and configuration details. Things are not so granular as to be overwhelming, and I can relate each of the limited details to something that means something to me. I know, for instance, that if I'm not a great racing-game player I should look for a car with a better turning radius and acceleration, even if I give up top-end speed and braking, because that will better suit my style of play or skill level -- I know that top-end speed won't be of use to me unless I'm a deft driver.

 

If you haven't looked at that link, one of the examples she used was a study of how customers engage with a car-customization program (so, making many of the same decisions as in a deep simulator, though not at the same level of granularity). They found that even when the number of details couldn't be reduced, just the order in which choices were presented had a strong impact on engagement and on satisfaction with the choices made. The trick was starting with simpler choices to build investment and confidence (and, to a lesser degree, to teach how the choosing apparatus works), withholding the more subtle or difficult choices until later (though no later than the point where they affect subsequent choices).

 

I think in the end the number of choices is largely irrelevant, and that what you're really chasing is customers who are satisfied with their choices and confident those choices have the impact they think they do. In another experiment they describe, people were shown either 600 magazines organized into 10 categories, or 400 magazines organized into 20 categories, where the set of 600 contained the entire set of 400 and then some. People felt the set of 400 magazines offered more variety than the set of 600, even though that was objectively false; the reason being that the greater number of categories let them connect with what made each magazine different from the others. I'm not a cruncher myself, so I'm probably biased, but in the end I think all these super-detailed choices with barely discernible impacts are just statistical masturbation.




#5261280 Can a full program be stored in a photo image

Posted by Ravyne on 09 November 2015 - 10:45 PM

There are programs that do this already, for archival purposes and for resilience against technological obsolescence.

Things like Blu-ray discs aren't necessarily stable over hundreds or thousands of years, nor is magnetically-stored information. Likewise, our computers even 25 years from now likely won't have the interfaces to read the information from them, let alone in 200 years or more.

Paper is low-tech and low-density, but it's a safe bet that computers will have image sensors for as long as human beings have eyeballs. That makes optically-retrievable information in the visible color spectrum a good bridge to even far-distant futures.

As I recall, the program could store something approaching 128k with the consumer-grade printers of the day (1200dpi or so), and it used unmodified printers. I imagine one could do better with a specially-prepared printer or by modifying the firmware/drivers of a standard one. The results could, at the time, be retrieved with a consumer-level desktop scanner; today they could probably be read at short distances with even a mobile phone's camera.

It did work more or less like a QR code. You have to have an encoding scheme, though -- for example, even if your printer could accurately produce all ~16.7 million shades of an rgb888 color value, it's impossible to say that another printer would produce the same results, or that a given image sensor would agree and read back what you thought was written -- the differences are too subtle. You have to make the data more discrete: a black-and-white encoding is very discrete but not information-dense. You have to find a balance; maybe a few levels of grey are reliably discernible, for instance. Then you also want to build in error correction so that a page can withstand smudging or pieces being damaged or torn (a QR code can still work even with a significant portion missing -- which is also the mechanism that allows the creation of some of the fancier vanity QR codes).
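To make the quantization idea concrete, here's a minimal sketch (my own illustration, not the actual program's scheme) that packs each byte into four grey cells of 2 bits apiece -- a real system would layer error correction such as Reed-Solomon on top:

#include <cstdint>
#include <vector>

// Pack each byte into four grey cells of 2 bits apiece. Four levels spread
// across 0..255 stay distinguishable despite printer/scanner drift.
std::vector<std::uint8_t> encodeToGreyCells(const std::vector<std::uint8_t>& data) {
    static const std::uint8_t levels[4] = { 0, 85, 170, 255 };
    std::vector<std::uint8_t> cells;
    cells.reserve(data.size() * 4);
    for (std::uint8_t byte : data)
        for (int shift = 6; shift >= 0; shift -= 2)
            cells.push_back(levels[(byte >> shift) & 0x3]);
    return cells;
}

// Decoding snaps a scanned intensity back to the nearest level (0..3).
std::uint8_t nearestLevel(std::uint8_t intensity) {
    return static_cast<std::uint8_t>((intensity + 42) / 85);
}

Decoding by snapping to the nearest level is what buys tolerance to drift: an intensity has to wander more than ~42 shades before it flips to a neighboring level.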


#5260559 Armour penetration and firearms

Posted by Ravyne on 04 November 2015 - 02:05 PM

Quote:
One thing though: While I am aware that Magnum pistol rounds do have quite high potential energy because of their weight and length, are you sure they can keep up with AR ammunition like the small 5.56mm NATO round or the larger AK-47 round? Both are fired at much higher muzzle velocities than Magnum pistol rounds, AFAIK, and both are pointed versus the Magnum ammo's rounded design.

Can Magnum ammunition penetrate a military-class Kevlar vest without the plates? I am asking because I really don't know; Magnum rounds are extremely rare in Europe, where pistols almost exclusively come in 9mm Parabellum.

Or did you mean "at optimum range", which would be quite close for Magnum rounds but farther away for AR rounds (especially the 5.56mm NATO round)?


I think in part the design of a revolver makes for good gameplay as much as anything -- you get a lot more punch, but naturally the trade-off is lower capacity between reloads.

But in real life, the larger rounds found in popular revolvers are usually high-pressure -- the design and function of a revolver is inherently better suited to withstanding those forces because there are no springs, slides, or breech-locking mechanisms like a semi-auto pistol has. If you've ever seen an automatic pistol chambered in one of those revolver calibers, the first thing you notice is how cartoonishly large (and heavy) they are, because of how much beefier all those mechanisms need to be.

So a magnum can deliver energy similar to small/medium rifle calibers because you're typically talking about a projectile that's 3-5 times the weight, even if it's moving at only around 2/3rds the velocity.
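A rough back-of-the-envelope makes the point -- the load figures below are approximate published numbers, not gospel:

#include <cstdio>

// E = 0.5 * m * v^2, converting grains and feet-per-second to SI units.
// The load figures in main() are rough published numbers, not authoritative.
double kineticEnergyJoules(double grains, double fps) {
    const double kg  = grains * 0.0000648; // 1 grain = 64.8 mg
    const double mps = fps * 0.3048;       // 1 ft/s = 0.3048 m/s
    return 0.5 * kg * mps * mps;
}

int main() {
    std::printf(".44 Magnum, 240 gr @ 1350 fps: ~%.0f J\n",
                kineticEnergyJoules(240, 1350)); // ~1300 J
    std::printf("5.56 NATO,   62 gr @ 3000 fps: ~%.0f J\n",
                kineticEnergyJoules(62, 3000));  // ~1700 J
}

Same ballpark of energy, delivered by a projectile nearly four times the weight at half the speed.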

Now, the blunter projectile might not literally break through the armor, but even so it will cause massive, likely debilitating tissue and/or bone damage. It's certainly better not to have a perforated heart or lung, and better not to experience the internal damage caused by hydrostatic shock from a fast-moving projectile entering your body -- but it's no fun having 4 broken ribs and a collapsed lung or bruised heart either; that person's no longer in the fight either way. Games simplify by treating both parties as dead, that's all.


#5260240 Armour penetration and firearms

Posted by Ravyne on 02 November 2015 - 08:26 PM

In real-world terms, the penetrative force of a projectile is a function of the energy it carries at the time of impact, combined with its diameter, shape, and deformation properties. Kinetic energy is half the projectile's mass times its velocity squared. Common handgun cartridges typically produce low-to-mid-range velocities with medium-to-heavy projectiles that are relatively blunt; common rifle cartridges typically produce high-to-very-high velocities with light-to-medium projectiles that are usually more pointed; common shotgun ammunition produces low velocities and includes slugs or large shot ("buckshot") -- either a single very heavy, blunt projectile, or a handful of medium-weight round projectiles. Small shot ("birdshot") would certainly be unpleasant to experience, and could be lethal at short ranges, but is not worth considering for combat purposes (no military or police force would issue birdshot, except for pest control).

 

Because the velocity term in the energy equation is squared, lighter, faster-moving projectiles deliver more energy on target. Blunt projectiles made from soft metals like lead dump a lot of energy as they deform on impact. More pointed projectiles more easily puncture hard armor because the energy is (at least initially) concentrated at the point, as do projectiles made from hard metals (hardened steel or tungsten), which don't deform and so don't lose energy. There are also projectiles made entirely from mild steel, or including mild-steel 'penetrator' elements, which give better penetration than soft lead; these are not "armor piercing" in the sense of defeating hard armor (including that worn on the body), but they do better defeat soft armor and barriers like wood or drywall.

 

I would put straight-walled pistol-caliber cartridges (the straight wall being a good indicator of the projectile's relative mass/velocity, for physics reasons) of less than 9mm in the low-penetration camp regardless of what kind of firearm they're fired from (it's true that a longer barrel lets a projectile gain velocity over a shorter one, but not enough to make much difference with categories as coarse as yours; you could give SMGs or pistol-caliber carbines a bump if you wanted). I'd put .45 Auto here too -- it's heavy but slow, and doesn't have significantly more energy than a hot 9mm load.

 

I would put certain larger pistol calibers (10mm, .44 Magnum, .357 Magnum, .40 S&W) in the medium-penetration camp.

 

I would put buckshot in the low-penetration camp at distances greater than about 15 yards, and in the high-penetration camp at ranges less than 3 yards; medium penetration otherwise; effective range limited to 40-50 yards.

 

I would put slugs in the low-penetration camp at distances greater than 50 yards, and in the high-penetration camp at ranges less than 10 yards; medium penetration otherwise; effective range limited to 75 or 100 yards.

 

I would put rifles of .223 caliber or 7.62x39 in the medium-penetration camp (to include similar hunting calibers like 30/30) out to an effective range of 300 yards; anything larger in common military use (somewhere around 30-06 or .308 -- NATO designation 7.62x51) in the high-penetration camp out to an effective range of 800 yards -- though I suppose all these effective ranges aren't much use in your top-down 2D game... One way to encode these categories as game data is sketched just below.
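Something like the following is one way to turn those recommendations into a data table -- all names are placeholders of my own, and I've left effective range at 0 where the post above doesn't give one:

enum class Penetration { Low, Medium, High };

struct AmmoClass {
    const char* name;
    Penetration baseCategory;
    int effectiveRangeYards; // 0 = not specified above
};

// Baseline categories from the post; range-dependent cases (buckshot, slugs)
// are better resolved at query time, as shown below for buckshot.
const AmmoClass kAmmoTable[] = {
    { "straight-walled pistol < 9mm (.45 Auto too)", Penetration::Low,    0   },
    { "large pistol (10mm, .44/.357 Mag, .40 S&W)",  Penetration::Medium, 0   },
    { ".223 / 7.62x39 class rifle",                  Penetration::Medium, 300 },
    { ".308 / 30-06 class rifle",                    Penetration::High,   800 },
    { "shotgun slug",                                Penetration::Medium, 100 },
    { "buckshot",                                    Penetration::Medium, 50  },
};

// Buckshot: high inside 3 yards, low beyond ~15 yards, medium in between.
Penetration buckshotPenetration(int yards) {
    if (yards < 3)  return Penetration::High;
    if (yards > 15) return Penetration::Low;
    return Penetration::Medium;
}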

 

Grenades and explosive devices kill mostly by concussive force inside 3 yards; armor or not, you're dead if you're that close. Shrapnel is a factor outside that distance -- very high penetration if you're hit, but the likelihood of being hit falls off drastically with distance. Assuming a uniform distribution of shrapnel, that likelihood can be reasonably estimated as 1 / (2 * pi * distance^2) for ground detonations (the ground reflects fragments into a hemisphere), or 1 / (4 * pi * distance^2) for detonations in the air or resting on a surface the shrapnel will penetrate rather than ricochet from (a full sphere).
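That estimate is trivial to drop into code as a sketch (function and parameter names are mine):

// Fragment density per unit area at a given range, assuming uniform spread.
// Ground bursts ricochet fragments into a hemisphere (2*pi*r^2); air bursts,
// or bursts on a penetrable surface, fill a full sphere (4*pi*r^2).
double shrapnelHitLikelihood(double distanceYards, bool groundBurst) {
    const double pi = 3.14159265358979;
    const double area = (groundBurst ? 2.0 : 4.0) * pi * distanceYards * distanceYards;
    return 1.0 / area;
}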




#5260235 Should I wait for Directx 12 to buy Luna's book

Posted by Ravyne on 02 November 2015 - 07:16 PM

As a practical matter (that is: if your reason for learning is to write your own complex rendering engine for production use), you probably shouldn't learn either. Nowadays there are plenty of very capable, very inexpensive (and mostly free-to-start) game engines such as Unity or Unreal Engine 4. If your goal is to get products out the door, whether you're an experienced team or a lone newbie, one of these options (or the many others) is your most expedient and cost-effective route. Many major studios today (perhaps most) license their engine technology from others because they don't have the experience on staff, can't afford the effort, or can better spend the resources they do have elsewhere; or because they can't take on the risk of an unproven engine; or because, even if they succeed, they can't simply hire new people experienced in their proprietary technology -- in short, even game studios have a hard time making the economics work out.

 

Especially if your math/graphics background is shallow, as you say, whatever benefit you might get from going directly to OpenGL or Direct3D (regardless of version) is beyond you, and will likely remain so for years. OpenGL and Direct3D are not the language you want to be speaking when what you really want are recipes for getting things done -- they're the wrong level of abstraction for that end. They're the language you speak to write rendering engines, not the language you speak to write games.

 

 

If you're doing it to educate yourself or to satisfy a curiosity, then by all means go right ahead. Just be aware that by all practical and tangible measures, rolling your own complex engine is almost certainly foolhardy. For my money, rolling your own engine is really only viable when you know that existing solutions don't meet your needs: either A) your needs are actually so straightforward that the complexity or philosophy of existing solutions would work against you, or B) your needs are so complex/novel/stringent that no existing solution meets (or could reasonably be made to meet) them.

 

I say all this somewhat begrudgingly, as someone who takes pride in building my own solutions and who holds extremely high standards for any solution I'd simply adopt.




#5260024 How do I re-size a binary file?

Posted by Ravyne on 01 November 2015 - 02:12 PM

Quote:
Why are you trying to resize "Batman Arkham Asylum" assets?


Haven't you heard? It has a 12gb RAM requirement. Clearly OP is trying to fix that.

/troll


#5259516 How did Spelunky not get sued

Posted by Ravyne on 28 October 2015 - 11:19 PM

Perhaps unfortunately, perhaps not, gameplay by-and-large is not protectable by IP law. I argue that this is a good thing -- what a world we would live in if Nintendo owned the patent to the modern 2D platformer and forbade anyone from deriving key elements -- no Sonic, no Megaman, none of the countless others -- at least not without paying up for the privilege.

It's true there are serious abuses. There are lone developers in places like China or India who simply clone (or even repackage) software other developers have created. There are more than a few large Western companies who are only slightly less blatant about it. This is bad.

On the other hand, if stronger IP protections for gameplay were enacted, what body would be responsible for deciding whether a new game has added enough novelty to get a pass, versus simply appropriating an old one? What if you created 'Super Dario Brothers' and it had the same gameplay we know and love, but with all-new original characters, settings, level design, and design analogies? What if you made one obvious change? What if you made a dozen subtle changes?

I think it's probably right, if imperfect, for game design and derivative works thereof to operate more like books. There are only between 3 and 17-or-so unique stories at the macro level in the entire world (depending on whom you subscribe to), and yet there are millions of individual, and individually worthwhile, expressions of those base stories. This is because a story has its own character, built up from its components, characters, and their interactions, all filtered through the unique voice of its writer. Romeo and Juliet has been told 1000 times, but there's still only one Romeo and Juliet (which is itself highly derivative of an earlier version of the story).

TL;DR -- Don't worry about it. Keep on keepin' on. Make the games you want to make and feel good about making. Set your own boundaries for what's right and moral, and make sure what you create expresses your own unique voice.


#5259316 The wrong way to count lines of code

Posted by Ravyne on 27 October 2015 - 03:09 PM


Quote:
Lines of code deleted is clearly a useful metric

 

I don't think even that is good beyond question -- you can add lines of code and still reduce complexity, and likewise take lines away and still increase complexity, at least if we're talking about complexity as the mental load required to understand the code. If reducing the number of lines in a program is good, I suggest the increased quality has less to do with the number of lines left and everything to do with the fact that while we trust any junior developer to *add* significant amounts of code, usually only more-experienced devs are entrusted to *remove* significant amounts of it -- or, at the very least, that the act of removing source code necessarily requires a more complete understanding of both the problem and the existing solution. In other words, the second whack at the problem is better because of better understanding; that it might result in fewer lines is a symptom rather than the cause (same as it might result in more lines).

 

We reason about code at several levels: at the systems scope, where we reason about how our processes interact with other processes; at the global scope, where we reason about how each module in our code interacts with the rest of the whole; at the module level, where we reason about a module's sub-components working together, or how our modules interact with specific other modules in isolation from the rest of the system; and at the class, function, or even smaller levels. The goal of good code is that, at each level, exactly the information you need to reason about the relevant things is clear -- not more, not less -- that's the ideal. I don't want more code, I don't want less code -- I want exactly the right amount of code, functions, classes, modules, and binaries to facilitate that understanding.

 

That said, sudden swings toward what seems like too many or too few lines of code, especially at inappropriate times in a program's life-cycle, can certainly indicate code- and design-quality issues. A sudden ballooning of source code might indicate, for instance, an over-reliance on inheritance versus composition -- but that's really indicated by the first-order derivative of LOC, not LOC as a purely quantitative measure, and it's usually better tracked and reasoned about at the module level, not the whole-program level. I submit that a graph of LOC per module over time is far more informative than knowing the total line count at any given moment. Furthermore, you need to know where to look for best results -- scoped too broadly, it's really difficult to separate things that should concern you from totally normal background noise; too narrowly, and you'll miss issues altogether.

 

 

[EDIT] I should probably soften my stance a little -- what I think we all mean to say, in one way or another, is that all quantitative measures of source code -- LOC, numbers of macros, loops, functions, classes, modules, dependencies, etc. -- can provide insight into what's going on with your code base if you look at the data in the right way, and with knowledge of the design transformations happening in step with those measurements. These and more can be useful *metrics* that inform hypotheses about potential code/design smells, to be validated or debunked through investigation or testing. What none of them is, is any kind of quota from which we should derive badges of honor.




#5259226 The wrong way to count lines of code

Posted by Ravyne on 26 October 2015 - 09:44 PM

LOC isn't a useful measure at all. Developers are hired to produce a certain quality of code, not a certain quantity of code.

 

Highly skilled programmers both add and remove lines of code (where we'll say a 'line' is any independent statement in curly-brace languages) when it makes the code simpler or more functional. Less skillful programmers, on one hand, tend to write more lines than they ought to (e.g. because they're unaware of language facilities or library functions that would simplify their solution) or, on the other, write too little code because they haven't fully identified the breadth of things that can go wrong and don't handle them. The number of lines someone writes doesn't even tell you the quality of their own code, much less its quality relative to anyone else's.

 

A productive day programming might mean adding 500 lines of code, or just 10. It might also mean taking away 10, 500, or 3000 lines of code. The measure of productive programming is not monotonic.

 

 

And that's not even to mention that writing code at a keyboard is only 10 percent or so of what a programmer's job actually is. A programmer's job is more than typing -- it's the ability to consume and distill requirements within a holistic framework of the problem space and existing codebase; it's knowing when the requirements are misleading or under-informed and getting to the bottom of it; it's breaking down and solving complex, often novel problems; it's understanding the time and space complexity of potential solutions; it's knowing that Big-O time/space complexity doesn't matter if the solution diverges more than a little from what the hardware does best; it's being able to communicate ideas, file bugs, and work with teammates without putting others off.

 

Lines of text typed was a great way to measure office-worker productivity back in the '50s, before copiers, computers, and printers, but it's no measure of a programmer. If programmers were just typists, everyone would be doing it.




#5259199 C++ Passing an unknown class as an argument to a function

Posted by Ravyne on 26 October 2015 - 03:53 PM

One thing to mention is that it's not *always* bad to duplicate code. Yes, odds are that one of the options presented by various people above is the right solution for your problem; that said, simply avoiding code duplication is no excuse to contort your program's structure to suit one of those solutions. Use one of them if and when it fits. If it doesn't fit, ask yourself whether some more basic flaw in your design causes the issue, or causes your inability to apply one of these solutions. Finally, if it really doesn't fit and there's no other design flaw, a little code duplication isn't the worst thing you could do.
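For reference, the template route (likely among the options mentioned above) looks something like this -- Describe() and the types here are hypothetical stand-ins for whatever members your function actually needs:

#include <iostream>

// process() accepts any type that provides the members its body uses.
template <typename T>
void process(const T& obj) {
    std::cout << obj.Describe() << '\n';
}

struct Goblin { const char* Describe() const { return "a goblin"; } };
struct Chest  { const char* Describe() const { return "a chest";  } };

int main() {
    process(Goblin{}); // one definition in source...
    process(Chest{});  // ...instantiated per type by the compiler
}

There's a single definition in the source even though the compiler may generate several instantiations behind the scenes.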

 

Now, that's a rare case, and maybe I'm arguing issues and subtleties you aren't ready for; take from this thread that there are lots of ways to solve this problem, but don't take from it that the problem must always be solved in one of these ways (or at all).




#5258007 Does calling ~myclass() (destructor) delete the object?

Posted by Ravyne on 19 October 2015 - 04:48 PM


Quote:
You should never call the destructor directly. Always use delete.

 

That's not so universally true as to say "never do this, always do that". There are exceptions, though those exceptions are rare.

 

Some examples: using placement new (for e.g. low-level memory management, or perhaps to represent a memory-mapped I/O device), or C++11's generalized unions (members with ctors/dtors are newly allowed, and you must destroy the active member before switching to another).
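A minimal sketch of the placement-new case, where the explicit destructor call is the correct teardown (Device is a stand-in type of my own):

#include <new> // placement new

struct Device { int reg = 0; };

alignas(Device) static unsigned char storage[sizeof(Device)];

void example() {
    Device* d = new (storage) Device(); // construct into storage we manage
    d->reg = 42;
    d->~Device(); // delete would be wrong here -- the storage didn't come
                  // from new -- so the explicit destructor call is correct
}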

 

It would be fair enough, however, to say it's almost certainly the wrong thing to do unless you know for certain it's exactly the right thing to do.




#5257979 Confusion with smart pointers

Posted by Ravyne on 19 October 2015 - 02:49 PM

[EDIT] Adding this note to be clear about what the others are saying: this is undefined behavior. However, it's likely to work as (and why) I describe below, and as others have alluded to, because it simply falls out of how a compiler does dispatch. The compiler isn't doing any special work to make this work, and doesn't care that it does. If a compiler vendor found a better way to do dispatch that broke the behavior you're seeing, they would be free to adopt it and still maintain standards conformance. In practice, they might choose not to, or might put one or the other behavior behind a compiler switch, precisely because there probably exists code that relies on this behavior even though, strictly speaking, it shouldn't.

 

In SomeFunc, you're not actually dereferencing 'this' at any point. You examine its value (to print it), and that's fine -- you're allowed to look at a null pointer; it's only dereferencing it that's undefined.

 

Functions themselves aren't part of an object instance, so calling one doesn't require a dereference to dispatch (unless it's virtual) -- they exist in your executable always (presuming they're called at least once, or you've instructed the toolchain to keep them even if not). Calling a virtual function changes that, because there's an indirection through a virtual-function-table pointer stored in the object (hence you'd have to dereference the object first). This, I believe, is only a problem when the virtual member function is called through a pointer to a base class, though; if the compiler can statically determine which virtual function to call, I'd imagine it elides the indirection (but I'm not 100% certain).
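Here's a stripped-down illustration of those mechanics. To be clear, both calls are undefined behavior -- this only demonstrates why the non-virtual one tends to appear to work:

#include <cstdio>

struct Widget {
    void plain() { std::printf("this = %p\n", static_cast<void*>(this)); }
    virtual void virt() { std::printf("dispatched via vtable\n"); }
};

int main() {
    Widget* w = nullptr;
    w->plain();   // UB, but typically "works": dispatch is static and the
                  // body never reads *this, so a null this just gets passed
    // w->virt(); // UB that typically crashes: the call must first read the
                  // vtable pointer stored inside the (null) object
}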

 

Also:

 

printf("SP : %X\n", SP.get());
printf("SP2 : %X\n", SP.get());     <-- You're not printing SP2.get() here.



#5256043 Drawing graphics in C++ w/out APIs?

Posted by Ravyne on 07 October 2015 - 11:27 AM


Quote:
Without an OS getting in the way, talking to hardware still requires poking interrupt handlers and hardware registers and DMA transfers and so on, none of which can be done with pure C++.

 

Well, that's not quite true. You certainly *could* do those things if you had access to (and understood) the bare metal. It's true, of course, that going through at least a BIOS is common, and more than enough "to the metal" for most everyone's tastes, but there's nothing inherently special about a BIOS; it's just machine code implementing an abstraction layer over the barest hardware. All those interrupt handlers and hardware registers, including the ones that control DMA, can be reached from the CPU, so I don't understand how that would prohibit C++.
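For flavor, bare-metal register access from C++ is usually nothing more exotic than a volatile pointer at an address the datasheet gives you (the address and register name below are made up for illustration):

#include <cstdint>

// On bare metal, a hardware register is just an address from the datasheet.
volatile std::uint32_t* const GPIO_OUT =
    reinterpret_cast<volatile std::uint32_t*>(0x40020014);

void setPinHigh(unsigned pin) {
    *GPIO_OUT |= (1u << pin); // volatile read-modify-write straight to hardware
}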

 

Agreed, though, that it's entirely impractical to attempt talking to modern hardware and getting modern features out of it. If that's your hobby, take up driver development -- but even that's done at a substantially higher level of abstraction (with the OS providing IOCTL interfaces and other necessary or useful primitives).




#5255928 Drawing graphics in C++ w/out APIs?

Posted by Ravyne on 06 October 2015 - 07:28 PM

You can write a software rasterizer -- basically, you create a region of memory that's an array of pixels, and then you write individual pixels into it yourself. When you're finished, you can use your host's windowing system to display it, or you can shuffle it off to OpenGL/Direct3D through the usual layers without using any of their drawing routines. In the old days of DOS that's how graphics were done, except DOS was single-user, so you could take direct ownership of the graphics adapter's memory and write straight into it.
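The skeleton of that is tiny -- something like the following sketch, with names of my own choosing; the display step at the end is whatever your platform offers:

#include <cstddef>
#include <cstdint>
#include <vector>

// A software "framebuffer" is just memory you own.
struct Canvas {
    int width, height;
    std::vector<std::uint32_t> pixels; // 0xAARRGGBB

    Canvas(int w, int h) : width(w), height(h), pixels(std::size_t(w) * h, 0) {}

    void setPixel(int x, int y, std::uint32_t color) {
        if (x >= 0 && x < width && y >= 0 && y < height)
            pixels[std::size_t(y) * width + x] = color;
    }
};
// When a frame is finished, hand pixels.data() to the host to display --
// e.g. StretchDIBits on Win32, SDL_UpdateTexture, or a GL texture upload.

Lines, circles, and triangles are then just loops that decide which (x, y) pairs to pass to setPixel.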

 

Drawing pixels, lines, circles, ellipses, and bitmaps this way is pretty typical of a 100-level course in computer graphics. A 150- or 200-level course often extends this to 3D, where you first transform the 3D geometry in a software rendering pipeline and then rasterize shaded, texture-mapped triangles into the bitmap.

 

On most platforms you can't easily talk directly to the GPU, and GPU vendors aren't terribly open about their drivers' API surfaces or hardware command registers (though they've lately started being more open, with the call for open-source drivers) -- and even if they were, you're talking about 2000 pages or more of datasheet to get your head around.

 

If you want to talk directly to a GPU, your best bet would be something like the Dreamcast or the Raspberry Pi, but understand that even those "simpler" systems are vastly complicated, and talking to them at a low level doesn't look much like graphics programming, if that's the part that interests you.




#5255925 Using a physics engine on the server

Posted by Ravyne on 06 October 2015 - 06:57 PM


Quote:
This reminds me of a statement I once read in an Internet RFC (Request for Comments) document:
 
Quote:
Be permissive in what you accept, but strict in what you send.

This statement holds true in any protocol, including games.

 

Sure -- actually, a good application of that mantra would be something like "clients and servers should be able to deal with bad data (corrupted or maliciously crafted), and send only good data". For example, when I said earlier that the client shouldn't ask the server to move through a wall: the server should never trust a client not to ask. Lots of hacks for different games involve sending deliberately misleading messages to the server, so the server needs to validate whatever the client attempts to do. On the flip side, non-compromised clients -- and especially the server -- should be very strict about what they send and how it's sent (for example, don't let unused bits/bytes in a message go out uninitialized).
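A server-side sanity check can be as simple as this sketch (names and limits are hypothetical; a real check would also sweep the path against world geometry so the move can't pass through walls):

// Never trust the client: reject any move request that claims to cover more
// ground than the speed cap allows in the elapsed time.
bool validateMove(float oldX, float oldY, float newX, float newY,
                  float maxSpeed, float dt) {
    const float dx = newX - oldX;
    const float dy = newY - oldY;
    const float maxDist = maxSpeed * dt;
    if (dx * dx + dy * dy > maxDist * maxDist)
        return false; // claims to move faster than physically possible
    return true;
}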

 

[Edited to add] Defending against malicious packets is super important. If you recall the Heartbleed security vulnerability from a couple of years ago, that was a maliciously-crafted packet in which the client requested a response longer than it knew the data to be, and the server simply trusted the client to be honest (though I don't believe the trust was intentional -- more of a logic bug). Bad idea in any event: it compromised tons of services of all sizes -- webmail, banking, even Facebook, IIRC... I remember changing basically all of my passwords because of it.
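In miniature, the lesson is to check any client-supplied length against the bytes that actually arrived before acting on it -- this sketch uses a made-up message layout (1-byte type, 2-byte claimed length), not OpenSSL's:

#include <cstddef>
#include <cstdint>

// Only echo a payload if the claimed length fits inside what was received.
bool safeToEcho(const std::uint8_t* msg, std::size_t received) {
    if (received < 3) return false;                  // header didn't even arrive
    const std::size_t claimed = (std::size_t(msg[1]) << 8) | msg[2];
    return 3 + claimed <= received;                  // never read past what arrived
}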





