
Ravyne

Member Since 26 Feb 2007
Online Last Active Today, 04:13 AM

#5283808 Fast way to cast int to float!

Posted by Ravyne on 28 March 2016 - 12:49 AM

I would expect the fastest way is the most direct -- C or C++ style casts, assuming C++ as your language.

 

It's always hard to tell in Coding Horrors whether people are serious or trying to be funny -- if serious, what kind of profiling leads you to believe this is faster, and what other methods are you testing against?




#5281374 how could unlimited speed and ram simplify game code?

Posted by Ravyne on 15 March 2016 - 01:30 PM

You can't have real infinite -- only seemingly infinite. Information can only move at the speed of light, information storage must reside in a physical thing even if only a particle. Setting aside a discussion of what relativistic/quantum challenges might exist sooner than later, those limits are the absolute upper bounds -- so sayeth the universe, as far as we know.

So if we do not have true infinite then we still have limits, and more importantly we still have latency -- which is usually what we're fighting -- we can do a first approximation of "anything" today if we give our hardware enough time. We optimize because the result comes too slowly to be useful or interesting.

Someone said earlier they'd predict the future and become infinitely rich, but *so will everyone else*, and whoever has the fastest, smartest code and nearest proximity to act on information will still win -- quantitative stock-trading practices already confirm this.

So I think the things that would be adopted wouldn't be "throw caution to the wind" things like eschewing broad-phases, or culling, or other strategies for reducing the amount of work to do. I think they would be things where a "purer" more "universal" solution could reduce the amount of code and code-systems such that they are easier for us humans to reason about. Ray-tracing is a great example of this. I also think we'd put far more on the compiler to get right and that's where things like declarative-style programming languages start to come in.

I don't think we'd have uber-objects or careless, naive code everywhere. Even if we had real infinite, those things are harmful for very human reasons.


#5281231 Any good DirectX 12 "2D Platform" Tutorials?

Posted by Ravyne on 14 March 2016 - 11:42 AM

The only exception I can imagine are mobile games where low-overhead APIs could potentially improve battery life a lot even on 2D games


That's a valid point to consider, though I don't know how much simpler games stand to gain.

One thing to consider more generally is that Direct3D 12 and Vulkan won't raise your frame rate if you are (or would be) GPU-bound. 90% of the benefit is on the CPU side, and the biggest gains there only come if you're willing and able to multithread your engine. In fact, most naive ports from 11 to 12 perform 10-25% *worse* to start, and it takes some moderate refactoring to get back to rough parity in frame rate (with the advantage usually being a higher, more stable minimum frame rate and lower CPU load). It takes significant rework to get the big gains that were promised, and they're mostly only there to be had if you were CPU-bound in the first place.


#5280986 Any good DirectX 12 "2D Platform" Tutorials?

Posted by Ravyne on 13 March 2016 - 12:28 AM

The general approach to creating a 2D game using a 3D API is the same as ever -- textured, screen-aligned quads, batching, etc... But the directions in which D3D 12 reached -- utmost performance by getting closer to the metal -- aren't really things that 2D games needed. If this is a learning venture, have fun; if it's a business venture, you'll have a lot more to gain by taking advantage of the larger user base of more-established APIs.


#5279390 "Modern C++" auto and lambda

Posted by Ravyne on 03 March 2016 - 07:58 PM

If you're writing an OS kernel, your language's standard library is unavailable.
If you're trying to make a reasonably complex program with a total size in the KB, you can't link to the bloated standard library.


Of course if you're doing something special then a specialized approach might be better. And if that's you, you probably know it. But that's not most people; even in non-AAA games you probably ought to start with the standard library (or some other well-known library) if it's available on your platform, unless and until experience shows you that you'll need to rip it out for something more tailored.

If those libraries are not available for whatever reason, you should still be implementing the algorithms yourself and building on them, not sewing raw loops hither and yon. This advice really is not about "use the standard library" per se (though it should be your first, best starting point unless you have a good reason why it can't be); this advice is about not proliferating one-off loops.

Again, I advise anyone who thinks they disagree with this premise to watch some of Sean Parent's presentations that are available on YouTube. He demonstrates several real-world examples in which he takes three slides' worth of hideous loops and reduces them to a mere handful of algorithm calls -- not only is the code shorter, he also eliminates tons of temporary state and conditionals (read: sources of bugs), and makes it possible to reason about what the code does so that it can be evolved. And he reduces the big-O complexity to boot.

Honestly, your loops are not a special snowflake. If they are not themselves a known algorithm or combination of known algorithms, then they must be novel -- and no one invents novel algorithms at the rate at which they throw down loops. Novel algorithms go in ACM papers, earn you PhDs, secure patents, and win you prestigious awards. They can make your career.

The honest truth is that 95% of programmers are ignorant of their algorithms, such that they can't spot them in their own code. They don't write raw loops because they're faster, smarter, or better -- they write raw loops because they haven't developed the mental faculties to think about those kinds of problems any other way. And they're worse off for it.


#5279081 How C++ Programs Are Compiled (A Brief Look)

Posted by Ravyne on 02 March 2016 - 02:36 PM

That being addressed now, we don't need to keep piling on OP's rep now, do we?




#5279080 Are there any modern desktops(Windows, Linux, OS X) that use big endian?

Posted by Ravyne on 02 March 2016 - 02:27 PM


Windows has historically had versions that ran on other chipsets (MIPS, Alpha, PowerPC, ia-64/Itanium -- which is different from "x64" -- and ARM as recently as Windows RT), but I believe it currently runs only on the x86 family. Most of those were bi-endian chips, but some were big-endian only. The ARM platform has mostly moved over to Windows Phone, or the now-discontinued tablet version Windows RT, but both of these were also bi-endian chips running in little-endian mode. Windows supports both 32-bit ia32 (aka "x86") and 64-bit ia32e (aka "x64" and "x86-64") modes.

 

Windows does still run on ARM, as the core OS for PC, Phone, IoT/Embedded, etc is all converging. However, modern ARM processors are bi-endian for the most part and I'm 99% certain Windows' ARM ABI specifies little-endian, so for all intents and purposes you can consider Windows to be a little-endian platform.

 

 

OP -- if you're concerned, you might at least mark code that has endianness concerns even if you only implement the little-endian codepath for now; or you might encapsulate those operations in macros or functions (again, even if you leave the big-endian code out for now). Better to leave yourself some breadcrumbs if you're concerned you might need to add support in the future.




#5278762 Right way to handle people complaining about price?

Posted by Ravyne on 29 February 2016 - 06:10 PM

Lots of angles, and I haven't had time to read the entire thread so forgive me if I'm retreading ground --

 

I think the very first thing to do is to understand the reasons behind the pricing complaints, and those can be myriad. For brevity, I'll refer to those being vocal about your "high prices" as complainers, though that has a more negative connotation than I mean it to.

 

What we're really talking about when we say that something is too expensive is that there's a disparity between perceived value (utility, entertainment, usability, subjective quality, etc.) and actual cost; sometimes these traits are measured qualitatively in isolation (thus not tying them to a price point), and often they are measured against competing products (thus tying them to a competitive price point).

There are really only two permanent ways to deal with this perceived disparity -- raise the perceived value to match the actual cost, or lower the actual cost to match the perceived value. Temporary measures can also be employed, such as sales, freebies, or other promotions. Permanent solutions are costly (in blood/sweat/tears/time/opportunity and/or financially), but done well they can actually increase revenue on a longstanding basis. Temporary measures are often inexpensive, "costing" only as much as the unit cost of the promotion eats into your margins -- it can be hard not to look at lost margin as lost revenue, but the two do not have a 1:1 relationship, and retail is not usually a zero-sum game. Regardless, I would assert that any reasonable promotion is *always* going to cost you less than implementing a permanent solution. What you should take from that: when you get complaints that can be addressed by a temporary solution, it's not cheating to take it and move on. Permanent solutions are an investment, with all the usual costs and risks -- don't invest in rectifying fleeting complaints.

 

Because it's an investment on your part, you really need to understand the problem and how to solve it; I don't mean having a formula in hand for the solution, but I do mean verifying, to whatever extent you can, that the solution path you'll embark on is the one that will increase customer satisfaction overall, however you measure that. Form hypotheses and test them; do A/B tests or betas if you can. But before getting the right solution, make sure you have the right problem!

Maybe the complainers are dissatisfied because they aren't seeing the value that's there -- perhaps because bad UI obscures it, or because the difficulty curve discourages them from ever reaching the really great content (or because your early content isn't engaging enough). Adding value is great, but it's the most costly thing you can do; bringing out latent value by polishing what you have can be equally effective and usually is not so costly. Sometimes perceived value is judged "unfairly", as when an otherwise great gameplay experience is marred by poor aesthetics (graphics, sound, etc.), or more fairly marred by bugs -- which goes back to bringing out latent value. If you can invest in better art assets, music, or sound effects, or spend a week or two bug-bashing, it can be a relatively cost-effective way of making sure the true value of your product shines through unclouded.

 

Even temporary solutions don't always have to cut into margins. For example, if you truly are certain that your product provides fair value but also have people complaining about your high price, consider a pay-what-you-want sale -- this gives the would-be complainers an opportunity to pay a price they think is fair, is good press, and also gives those who are sure of its value a chance to throw you additional support (and you can even incentivise people to prefer the higher tiers). If you're confident of your own value proposition, there are probably people out there who are convinced you've undercharged, and they can help offset those who choose to pay less. Make those tiers human -- say your regular price is $1.99: have tiers at "$.99 -- Discount: Thank you for being our customer", "$1.99 -- Regular price: Thank you for supporting our game", and "$2.99 -- Booster price: Thank you for helping us make more great games". Valve's own Steam statistics have shown that revenues for high-quality games stay mostly the same regardless of price point, with no appreciable impact on the long tail of lifetime sales -- that is, outside the launch window, a sales surge now does not generally imply a corresponding sales decline later (though I think that's mostly broad-appeal games; games with very niche appeal might perform differently, but I'm not sure whether they have that data or broke it out as such).

 

Last but not least, don't be afraid to walk away from a battle you're bound to lose from the outset. Some people just believe that no casual or indie game should ever cost more than $1.99 (or even $.99 -- or even that all games should be free-to-play with the option to grind through never paying a thing). The only thing you can do to appease these folks is to cater to their specific demands -- and even then most still won't buy your game, because their indignation is usually more about their soapbox than what actually affects them personally. That is, they want all games to be $.99, but that does not mean they'll buy your game if it were $.99. They've probably already pirated it, and they've certainly moved on to complaining about another game. There are some people who will never be appeased, and all efforts you expend in their service are wasted. IMHO, they're certainly not worth any more than a marginally personalized copy/paste response, and it's highly questionable whether they are worth even a fully-automated one.




#5278375 "Modern C++" auto and lambda

Posted by Ravyne on 26 February 2016 - 05:24 PM

So, to be clear, loops for control flow are sometimes unavoidable, and more often would become unnecessarily opaque  -- that said, a lot of programming challenges that are solved with control flow loops can be made into transformation loops without confusion, and will often become composable as a result.

 

To wit:

 

 

 


I think an example of the type of modification I am talking about is in order. Let's say I have this loop:
for (float x : vector_of_floats) {
  std::cout << x << ' ';
}
std::cout << '\n';

 

So, you've got a bit of a bug (feature?) here with a trailing ' ' between the last value and the newline, but neither are good for composability in any case. I'd re-write this as follows:

auto it = vector_of_floats.begin();

if (it != vector_of_floats.end())
{
    std::cout << *it;

    std::for_each(++it, vector_of_floats.end(), [](float f) {
        std::cout << ' ' << f;
    });
}

The big thing here is that we aren't relying on indexing.

 

 

 

 


Now I want to format it differently, like this: "(4, 3, -1)". Now I need to do something different either for the first element or for the last one, so it's probably easiest to change the loop to indices.
 
std::cout << '(';
for (size_t i = 0; i < vector_of_floats.size(); ++i) {
  if (i > 0)
    std::cout << ", ";
  std::cout << vector_of_floats[i];
}
std::cout << ")\n";

 

Since mine above is basically equivalent to this already, let me take the composability to its logical conclusion:

template <class I>
void print_separated_values(I it, I end, const std::string& separator)
{
    if (it == end)
        return;

    std::cout << *it;

    // Note: the lambda must capture separator to use it.
    std::for_each(++it, end, [&separator](float f) {
        std::cout << separator << f;
    });
}

// usage:
std::cout << "("; print_separated_values(vector_of_floats.begin(), vector_of_floats.end(), ", "); std::cout << ")";

If I then need to have a mask that indicates which indices are active at this time and I only need to print those, I would do
std::cout << '(';
bool some_element_printed = false;
for (size_t i = 0; i < vector_of_floats.size(); ++i) {
  if (!active[i])
    continue;
  if (some_element_printed)
    std::cout << ", ";
  else
    some_element_printed = true;
  std::cout << vector_of_floats[i];
}
std::cout << ")\n";

 

All I need in my solution at this point is a container which only contains active elements, or one which is partitioned with the desired elements up front. The algorithm std::stable_partition gives us that, with a slightly odd predicate -- since you're passing in the active array (I assume you mean that elements are boolean, and true if that index is to be included) we need to walk it within the predicate, so we init-capture a starting index which will be incremented each time the predicate is called, and the predicate returns what it finds at that index, selecting the active values from vector_of_floats so that they appear in order at the beginning of the vector.

 

Altogether that gives:

template <class I>
void print_separated_values(I it, I end, const std::string& separator)
{
    if (it == end)
        return;

    std::cout << *it;

    std::for_each(++it, end, [&separator](float f) {
        std::cout << separator << f;
    });
}

// usage: (note: lambda init capture is a C++14 feature)
auto last = std::stable_partition(vector_of_floats.begin(), vector_of_floats.end(), [&active, i = 0](float /*ignore*/) mutable { return active[i++]; });

std::cout << "("; print_separated_values(vector_of_floats.begin(), last, ", "); std::cout << ")";

Now, we didn't save any lines of code (and I've not run that through a compiler), but I argue that the code in my solution is better for not mixing all those responsibilities inside a single loop, and because we're left with a generalized function for printing a list of values separated by arbitrary characters.

 

It would take some time to attack your NegaMax, but I bet it can be done. As an aside, there aren't all that many algorithms in the standard library, but it's also valid to implement your own in the same style (many of which can be implemented as combinations of those provided).

 

 

Sean Parent has given several great talks on this very subject, such as C++ Seasoning (youtube).




#5278317 "Modern C++" auto and lambda

Posted by Ravyne on 26 February 2016 - 10:47 AM

Respectfully, I disagree with your objections.

 


 * The call to the algorithm doesn't tell me easily what argument means what. Perhaps in a language with named arguments this wouldn't be a problem.

Intellisense? Documentation? Being familiar with your language's standard library (admittedly, many or most non-expert programmers seem unfamiliar with the algorithms -- heck, I'm not yet as familiar with them myself as I preach people should be; I'm working on it)?

 

In any case, between "look it up every time until it sticks" and "decipher a bespoke loop each and every time", the former seems to be the clear winner to me.

 


 * If you have to come back at some point and modify the loop in any way, it's much easier to do in a raw loop than if you used an algorithm.

If you have to come back and modify an algorithm-based loop, you either chose the wrong algorithm to begin with (or the requirements changed), or the predicate/function you passed in was incorrect or incomplete -- in either case, you save having to worry about whether the mechanics of the loop itself are correct. The algorithms are robust, correct, and fast in ways many bespoke loops are not -- e.g. with respect to use of swap, copy/move semantics, and exceptions -- and are sometimes specialized on container types for greater performance.

 


It's probably easier to follow the flow in a debugger if you used a raw loop (this might depend on how good your tools are).

Maybe, but again, you don't need to follow the mechanics of the loop at all if you use algorithms and can be assured of their robustness, thus reducing the mental load involved in grokking the bug; raw loops always have to be ruled out as contributory to the buggy behavior. But of course good familiarity with the algorithms is prerequisite. 




#5278316 "Modern C++" auto and lambda

Posted by Ravyne on 26 February 2016 - 10:23 AM


Yeah, but I'd argue that there are far more cases in which the auto keyword helps to avoid unintended silent conversions than cases in which it causes them. That's been my personal experience, anyway.

 

QFE. I can't make a clear judgment one way or the other based on my own experiences, but it's certainly true that explicit typing isn't immune to unintentional conversions. In that regard it's no different from auto, other than whether the conversion happens on the way in or the way out.

 

 

I'm in the camp that tends to prefer auto, but not to excess. My rules of thumb for auto are:

 

You must use auto when the name of the type is unutterable.

 

You should* use auto when type adaptivity is desirable (because it makes refactoring easier).

You should* use auto when type inference saves you repeating yourself.

You should* use auto when context is clear (e.g. range-for, iterators).

 

You shouldn't* use auto when you rely on a specific type, regardless of whether you can get that type via inference (e.g. for struct packing, math tricks).

You shouldn't* use auto to excuse yourself from thinking about types in your program.

 

 

It can seem tempting -- to green programmers especially, I think, or those weaned on dynamically-typed languages such as JavaScript or Ruby -- to use auto as if it were equivalent to those languages' concept of variables and associated keywords, but it's important to remember that types in those languages are not just inferred, they're actually fluid -- a variable X can be bound to an integer when created and later become bound to a double; an important consequence of this is that variables migrate to types with greater precision as needed. This is not true in C++ or other statically-typed languages -- type inference or no -- and instead you get unexpected (again, from the standpoint of mistaking auto for dynamic typing) conversions, resulting in a potentially accumulating loss of precision.

 

 

* should/shouldn't -- as ever, there will be uncommon cases in which the rule is best stood on its head. 




#5278224 "Modern C++" auto and lambda

Posted by Ravyne on 25 February 2016 - 10:20 PM

Nice language features are great for productivity, but sometimes they lead to terrible code when someone doesn't grok what they're writing.

For example in C#:
 

someList.ForEach(x => Process(x, someOtherLocalVariable));
Who would write such a monstrosity? This is what you should write:
 
foreach (var x in someList)
{
    Process(x, someOtherLocalVariable);
}
Some people don't know that the first case is going to generate a hidden class with a member variable and method just to capture the variables it needs. The ForEach method is going to invoke that generated method each iteration of the loop, adding call overhead.

C++ would probably know how to optimize such code out, but in C# we can't be so confident about the optimizations we'll get.

 

 

In C++, a lambda with no captures is equivalent to a plain-old function, accessed via function pointer (one which is constant and known to the compiler) and from there all the usual optimizations apply. It must behave that way per the standard (which doesn't dictate further optimizations itself, but in doing so effectively ensures that it benefits from those kinds of optimizations already in place). VC++ at least is also able to optimize certain other lambdas when they don't have a life outside of the scope in which they're defined -- basically when certain conditions hold true, its possible to fold even non-trivial lambdas into the stack/code-block of the scope they're defined in.

 

That said, for that kind of case C++ has range-for, and if what you need is a simple iteration over a collection, you'd use that instead, much like your second example -- though it's spelled differently in C++. But when you want to do other algorithmically common things -- accumulate, transform, etc. -- then lambdas make for better code than hand-written, one-off loops. This goes for searches -- find*, remove*, etc. -- pretty much always, since the name of the find function combined with the predicate lambda is almost always clearer than a solution using range-for, iterators, or raw loops.

 

In general, best practice now is to avoid raw loops whenever possible -- as long as they satisfy your needs, range-for is better, and the algorithms are better still. Everyone should familiarize themselves with the algorithms.




#5278182 "Modern C++" auto and lambda

Posted by Ravyne on 25 February 2016 - 04:03 PM

Anecdotally, I once worked in a role at Microsoft where my job was to run major Xbox 360 titles through static analysis tools -- lambdas had just become a thing, and one of those titles was an early adopter. Unfortunately, the static analysis tool worked against a previous version of the compiler and didn't support lambdas, so to get the job done I had to 'backport' all those lambdas -- around 500, as I recall.

 

Now, this is a pretty mechanical translation; once you know what the compiler does under the hood, it's just a matter of following the recipe. Still, a very short lambda (and short lambdas are overwhelmingly common) goes from one line at the exact place you need it to a 10+ line function object, often located somewhere awkward, and sometimes requiring intermediate variables (depending upon what it captures and how, and where those things are defined and last touched). I gained a certain appreciation of what lambdas actually do for you.

 

As a result, lambdas make the standard library and similar interfaces much more practical -- "it's easier to just write my own quick loop" is no longer an excuse, which encourages better-debugged, more robust code to be used.

 

 

Also, for what it's worth, it's far from just Microsoft pushing these things. I haven't the foggiest idea where you got that impression.




#5277557 Releasing game with server source code?

Posted by Ravyne on 23 February 2016 - 02:03 AM

I wouldn't worry about preventing cheat servers -- I'd worry about preventing them from masquerading as an official server or participating in any sort of official game-rank/leaderboard/matchmaking. You can accomplish this by tying that sort of traffic to public-private key cryptography on your web server (keeping the private key private and out of any source control, and protecting it by securing your own game and back-end servers) -- that's the gist of it, anyway. If you do that, you can even track different key pairs for different community-made flavors of your game -- each key pair would represent its own universe of stats, etc. It's just a matter of whether you want to provide that service to the community, and you might not for purely business reasons; maybe you want to offer it on a case-by-case basis through an application/registration process.

 

Let cheaters play by their own rules if they want; it's only a problem if they mix with those looking for a fair game.




#5277491 Free painter program with transperent color

Posted by Ravyne on 22 February 2016 - 02:26 PM

Paint.NET is usually what I install to take over duties from Windows' in-box Paint. It's not a fancy program, it's just a better Paint, and it's free.

 

Gimp is a free Photoshop replacement, so it's got things like layers and tons of photo-manipulation tools. The interface of stock Gimp is good these days, but different from Photoshop's. If you're at all familiar with Photoshop already: I know there was a fork of Gimp made specifically to mimic Photoshop's interface, but I forget its name and don't know whether it's maintained and up to date with standard Gimp.

 

Inkscape is a free program for creating vector-based art (like Adobe Illustrator, and as commonly used in Flash-based games) that's pretty good.

 

For creating low-resolution and/or low-color graphic assets I find that more specialized tools serve me better -- things like GraphicsGale or Tiled. Cosmigo ProMotion remains my favorite; it's not free ($59 to buy in, $29 to upgrade to the next major version), but it's well worth the money. ProMotion is still fairly popular for 2D sprite games on mobile devices and handheld consoles (though less so on newer devices capable of high-color graphics) and was used to make Shovel Knight.

 

 

But there are a multitude of competitors -- free, cheap, and otherwise -- in any of these categories, and even the paid ones have free trials. It never hurts to download and test-drive them until you find the one you like.





