
Ravyne

Member Since 26 Feb 2007

#5290997 Custom editor undo/redo system

Posted by Ravyne on 10 May 2016 - 12:12 PM

The command pattern approach is an oft-cited solution.
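A minimal sketch of what that looks like (all names here are illustrative, not from any particular engine): each edit is an object that knows how to apply and revert itself, and the editor keeps undo and redo stacks of them.

```cpp
#include <memory>
#include <string>
#include <vector>

struct Document {
    std::string text;
};

// Each undoable edit is a command that can apply and revert itself.
struct Command {
    virtual ~Command() = default;
    virtual void apply(Document& doc) = 0;
    virtual void revert(Document& doc) = 0;
};

struct AppendText : Command {
    std::string added;
    explicit AppendText(std::string s) : added(std::move(s)) {}
    void apply(Document& doc) override { doc.text += added; }
    void revert(Document& doc) override {
        doc.text.erase(doc.text.size() - added.size());
    }
};

class History {
    std::vector<std::unique_ptr<Command>> done;
    std::vector<std::unique_ptr<Command>> undone;
public:
    void execute(Document& doc, std::unique_ptr<Command> cmd) {
        cmd->apply(doc);
        done.push_back(std::move(cmd));
        undone.clear();  // a fresh edit invalidates the redo stack
    }
    void undo(Document& doc) {
        if (done.empty()) return;
        done.back()->revert(doc);
        undone.push_back(std::move(done.back()));
        done.pop_back();
    }
    void redo(Document& doc) {
        if (undone.empty()) return;
        undone.back()->apply(doc);
        done.push_back(std::move(undone.back()));
        undone.pop_back();
    }
};
```

The point Sean Parent's talk makes is that you can go further than this classic inheritance-based shape by using value semantics, but the above is the baseline most people mean by "the command pattern approach."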

 

If you're interested, Sean Parent gave a talk entitled "Inheritance is the Base Class of Evil" (Channel 9 link) which is a brief 24 minutes. It's all about the benefits of preferring composition over inheritance and value semantics over reference semantics -- these things are fundamental to his overhauling of Photoshop's undo/redo, and he gets more specific about how that works by the end (I think from about the midpoint on, but it's been a while since I've watched it). Regardless, I recommend watching the whole thing -- it's short, and it's informative enough that I've watched it a handful of times over the 30 months it's been available.

 

Here's a YouTube link as well in case that's more convenient, but I think the Channel 9 video is better quality; the YouTube video is a third-party upload.

 

Also, Sean's presentations are always great, and never a poor way to spend a lunch break.




#5290874 Will Unity or Unreal Engine 4 suit better for me?

Posted by Ravyne on 09 May 2016 - 02:48 PM

The biggest difference between them, IMO, is that Unreal comes from an AAA lineage and has relatively recently started extending its reach down to mobile and indies, while Unity comes from a mobile (iOS) / indie lineage, and has been steadily extending its reach towards greater and greater AAA ambitions.

 

What this means for users is that they're really both converging towards similar capabilities, but they come at it from different beginnings. Both companies have a huge staff dedicated to ongoing engine development, very capable people all around, so you shouldn't make the mistake of assuming that Unreal is somehow more legitimate. In practice, Unreal has put a lot of effort into friendlier tooling with UE4, but there are still more and sharper rough edges than in Unity's tooling. Unity is more friendly for the casual developer, but sometimes the fact that they assume less of the average Unity user can get in the way -- usually you can get around it, but it sometimes seems like more work than it ought to be, or that what you need is more hidden.

 

Licensing is also a big difference -- both in terms of access to the C++ source code (which you might come to need for performance tuning) and in cost to you to license either engine for commercial use. Unreal offers up C++ source code access for free, while Unity charges ~$50,000 last I checked. For usage, Epic wants 5% of your gross revenue above $3000 per product, per year, but there's no seat license -- this is nice and simple; it's also entirely free if you're using it to make CG films, IIRC. Unity wants a $75/month subscription or a $1500 one-time fee per seat, per platform-package (e.g. extra iOS or Android features, consoles -- which I think are a higher fee) for the Professional Edition, but they don't take a cut of your sales after that. There's a Personal Edition license for Unity that's basically free all up -- no royalties, no seat license fees -- and the engine is feature-complete; however, you lose some really nice non-engine features, can't get C++ source without a professional license, and the personal licenses aren't available to any team that's made more than $100,000 in the previous year, or that's currently funded to more than $100,000 -- it's a viable option for a small team working on little or no budget, though (and if it's relevant to your plans, keep in mind that if you did something like Kickstarter and collected more than $100k during a given year, that's going to count and you'll need to pay up).

 

Depending on what platforms you target, how many developer seats you're licensing, and how many sales you expect to make, one of these options will save you money. If you make a lot of sales, Unity works out to be less expensive in the end -- the break-even point moves lower or higher as a function of how many seats and platforms you license, and whether you need C++ source -- but you pay Unity up front, regardless of whether you make any sales at all. Unreal costs more when you're successful, but it doesn't penalize you if you have a commercial failure -- 5% is really never a burden. When I worked it out once, basically if you make less than a couple hundred thousand in sales, Unreal is the cheaper option; if you make more than that, Unreal costs you more, but making "too much money" is a wonderful problem to have and you'll probably be overjoyed to give them their 5%. That 5% is definitely cheaper than a team of high-caliber engine developers.
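To make the trade-off concrete, here's the back-of-envelope arithmetic under the figures quoted above, assuming a single seat, a single platform, and one year (the numbers were current as of this post and will certainly change):

```cpp
#include <algorithm>

// Unreal: 5% of gross revenue above $3,000 per product, per year; no seat fee.
double unreal_cost(double gross) {
    return 0.05 * std::max(0.0, gross - 3000.0);
}

// Unity Pro: flat per-seat, per-platform fee up front; no royalty afterwards.
double unity_cost(double /*gross*/) {
    return 1500.0;
}
```

With one seat and one platform the break-even sits around $33,000 gross; each additional seat or platform package raises Unity's up-front side and pushes that break-even higher, which is how you get to the "couple hundred thousand" figure for a small team.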

 

That said, whichever is most comfortable and has learning resources and a community that suits you is probably the way to go. Your game is always more important than the engine, and these engines and toolsets are already close enough to parity that neither will block you from achieving your vision.




#5290589 Is it C# Territory?

Posted by Ravyne on 07 May 2016 - 05:04 PM

 

 

 

- Save time by going with C#

- Use saved time for pushing compute-heavy algorithms to GPU instead

 

This is exactly what is happening at my job currently (MRI scanner). Most of the user-facing code is moving from C++ to C#. More and more of the high-performance code is moving from C++ to CUDA and the likes.

 

How would C# save my time? What about Java? Does it save time too? Python?

 

I may save some time and effort, but then I will hit the C# limitations I mentioned above!

 

 

It's an important distinction that we're not talking about core code here; we're talking about the stuff that lets the user drive the application and then displays what they've done, about interacting with networks and services, files, etc. Generally, those things are much more painful in C++ than in C#, either because of standard language features (e.g. delegates, threading, and more) or because of the much larger standard library -- C++ has many libraries, but it's a hodge-podge that can sometimes be an extra burden to make work together. C# is as good at this kind of thing as C++ is at the kind of things you first mentioned.

 

CAD is certainly a high-performance graphics application, like games, but it's also a good deal more regular. With a smaller set of problems, there's been a lot of concentrated effort in studying CAD techniques. That helps CAD developers push more and more of the computationally-heavy stuff onto the GPU, making the absolute performance of the CPU-side code less and less critical, as it is given less difficult responsibilities and simultaneously more time to do them in. I can easily imagine that a modern CAD application could be written basically in C# from the UI all the way down to a job queue, which would run job kernels either on the CPU written in C++, or on the GPU written in whatever GPGPU language flavor -- and maybe supporting data structures also in C++.

 

 

No sir, I'm not talking about reading files or parsing strings, or even accessing system services. I'm talking about using something like CGAL high performance CSG libraries, and ray tracing SDKs.

 

 

Fine -- which you don't need in the layers that interact with a human, or the OS, or networks and files; and probably most of whose applications could be tucked into those jobs I talked about, or surrounding data structures, or maybe just the whole core of your app. I'm not saying to eschew C++ entirely, I'm saying you could do as much of the heavy lifting as you'd like in C++ or on the GPU, with the *rest* of the application in C#. I'm saying it could save you time not to have to wrestle with C++ in those human/OS/networks-and-files issues, and that there's no performance argument there to justify the effort. I'm saying that the hodge-podge of C++ libraries makes it harder to hire a dev who's already familiar with your technology stack than centering around C# standards for the same. I'm saying you could hire three C# programmers to focus on those layers for the price of just two C++ programmers. I'm saying C# and its standard library are very well suited to those areas, and are a viable avenue to save you time in the long run over what setting up a hybrid C#/C++ project costs up front.

 

And despite all I'm saying in favor of this arrangement, I'm still only saying it's a consideration and a viable approach -- not a mandate, nor even a best practice. If you and your shop are happy with C++, and you can hire enough good C++ people, and your margins can support the salaries they demand as you and your local competition bid up offers competing for a scarcer and scarcer resource, then cool, too. You do you. What people are taking issue with is your seeming insistence that full-stack C++ is the only way to achieve performance -- it's not, and I think C++ need not go as far up the stack as you'd guess if you're leveraging GPUs.




#5290469 Is it C# Territory?

Posted by Ravyne on 06 May 2016 - 02:12 PM

 

- Save time by going with C#

- Use saved time for pushing compute-heavy algorithms to GPU instead

 

This is exactly what is happening at my job currently (MRI scanner). Most of the user-facing code is moving from C++ to C#. More and more of the high-performance code is moving from C++ to CUDA and the likes.

 

How would C# save my time? What about Java? Does it save time too? Python?

 

I may save some time and effort, but then I will hit the C# limitations I mentioned above!

 

 

It's an important distinction that we're not talking about core code here; we're talking about the stuff that lets the user drive the application and then displays what they've done, about interacting with networks and services, files, etc. Generally, those things are much more painful in C++ than in C#, either because of standard language features (e.g. delegates, threading, and more) or because of the much larger standard library -- C++ has many libraries, but it's a hodge-podge that can sometimes be an extra burden to make work together. C# is as good at this kind of thing as C++ is at the kind of things you first mentioned.

 

CAD is certainly a high-performance graphics application, like games, but it's also a good deal more regular. With a smaller set of problems, there's been a lot of concentrated effort in studying CAD techniques. That helps CAD developers push more and more of the computationally-heavy stuff onto the GPU, making the absolute performance of the CPU-side code less and less critical, as it is given less difficult responsibilities and simultaneously more time to do them in. I can easily imagine that a modern CAD application could be written basically in C# from the UI all the way down to a job queue, which would run job kernels either on the CPU written in C++, or on the GPU written in whatever GPGPU language flavor -- and maybe supporting data structures also in C++.




#5290185 Is it C# Territory?

Posted by Ravyne on 04 May 2016 - 08:24 PM

 

Your good reason is "We have hundreds of thousands of man hours invested in our giant aging C++ code base, thus we'll be keeping that around. kthxbye."

 

Sunk cost fallacy...

 

A good reason would involve comparing the expected results of the new product against the quality of the existing product to see if the value of improvement exceeds the implementation cost and risks.

 

 

Disagree with this application -- at some point it's been far too long since the train left the station, the train's designer died or retired years ago (no one really knows which), the conductor and engineering staff have completely turned over 5 times, and no one remembers any of the bugs that were squashed into the tracks further than about a mile back.

 

It's almost never a good idea to go all green-field over your existing product when the product is large and complex, and when your user-base is entrenched -- and especially when they're entrenched in your product's quirks and bugs. You want to talk about a nightmare... reaching feature-parity with a large and complex product is a nightmare -- but reaching bug-parity with the kinds of bugs people come to rely on (which, by definition, stayed around long enough to be relied upon precisely because no one could ever figure them out) is more than a nightmare, that's some Hellraiser shit right there. You know that weird for-scope bug from Visual C++ 6? The one with the option to turn it back on in every version of Visual C++ since it was fixed? There's a small set of regression tests somewhere in Microsoft that makes sure no parser change or bug fix messes with that, and there's an engineer whose job includes making sure those regressions never go red. 20 years later that's still earning someone their bread, because the cost to the business is less than the cost of losing clients who can't or won't fix their code, even though fixing it would be a good idea. And that's a trivial example.

 

Generally you would only make that kind of leap when the market forces your hand -- in recent years there's been a good deal of effort to get off of old systems written in COBOL and Fortran that very possibly run on old hardware you can't buy any more (and which even repairing is difficult and costly), mostly because there aren't any new COBOL and Fortran engineers who understand the intersection of those languages, machines, and operating systems, and they can't persuade enough old coders to come out of retirement and into consulting -- the ones they can convince are A) billing at rates somewhere between a-charity-date-with-the-queen-of-England and who-wants-to-hunt-the-last-African-rhino, and B) not unlikely to drift away at any moment due to death or dementia.




#5289950 Is making game with c possible?

Posted by Ravyne on 03 May 2016 - 02:46 PM

The modern pseudo-argument in favor of C is that it's "more transparent" -- namespaces, and classes, and templates, and operator overloading, and virtual member functions, and overload resolution taking all of those into account, and lambdas, and exceptions, and rules of 5 and 3 and 1, and on and on are all compiler voodoo that some believe is too opaque to be trusted -- or, at the very least, that it short-circuits critical thinking regarding platform-level details, and sometimes that this allows lesser-skilled programmers ("undesirables") to sneak in and wreak havoc in places they are not welcome. This is the basic position that Linus Torvalds takes with Linux kernel development (unapologetically, as is his custom) -- though there are good reasons for not wanting a mix of C and C++ in the kernel, and C is already entrenched for historical reasons.

 

But I don't consider that a good reason, for or against, myself. Many patterns that C++ makes easy (e.g. classes, virtual member functions) are useful in C, and can be replicated to a reasonable degree with some work -- for example, a virtual member function in C++ is basically a function pointer with a convenient syntax for declaring, defining, and calling it. C code that does this sort of thing looks different, but you can find examples of it in the Linux kernel, in Quake, and in Halo's engine.
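For illustration, here's the function-pointer shape that kind of C code takes -- an explicit table of function pointers standing in for the vtable the C++ compiler would generate for you. All of the names here are invented for the example (written C-style, but it compiles as C++ too):

```cpp
struct Entity;  // forward declaration so the table can mention it

// The hand-rolled "vtable": one function pointer per "virtual" operation.
struct EntityVTable {
    int (*update)(Entity* self, int dt);
};

struct Entity {
    const EntityVTable* vtable;  // what the C++ compiler normally hides
    int position;
};

// One concrete "subclass": an entity that moves two units per tick.
static int runner_update(Entity* self, int dt) {
    self->position += 2 * dt;
    return self->position;
}

static const EntityVTable runner_vtable = { runner_update };

// The "virtual call" is just an indirect call through the table.
static int entity_update(Entity* e, int dt) {
    return e->vtable->update(e, dt);
}
```

A second "subclass" is just another table and another set of functions; the caller of entity_update never needs to know which one it has, which is exactly the property virtual dispatch gives you in C++.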

 

C++ is generally more convenient and productive than C, IMHO, but there is some truth to the argument that the compiler does things behind your back, and it's true that this can in some cases cause trouble. But it's not terribly difficult to learn where most of the dragons are, and to guide yourself to a comfortable subset of C++ and a way of speaking it that avoids them.




#5289820 Is using the Factory or Builder pattern nessesary?

Posted by Ravyne on 02 May 2016 - 08:37 PM

Yeah, that ^^ :D

 

Patterns should never be a play-book for writing code. Patterns are, well, patterns in code that just so happen to have been repeatedly reinvented, at which point people recognised the pattern and assigned a name to them to make discussion easier. You should look at lists of patterns more like a traveller's dictionary, and less like a cookbook.

 

A software engineer should strive to be comparable to a chef who writes cookbooks -- not a machine that only knows how to follow them.

 

I don't want to be too contrarian on this point because I don't want to undermine the strength of the general sentiment -- I agree that it's closer to "correct" practice than the recipe-book alternative.

 

That said, I think a better approximation is that patterns are useful pre and post-mortem, so long as you treat them as ideas rather than things. As usually comes up in these discussions, there is no "The" in the Grand Book of Software Patterns -- there is no "The Visitor Pattern", no "The Factory Pattern", no "The Builder Pattern". That word "The" (with a capital-T) implies singularity -- it implies that there is one implementation to rule them all and that everyone agrees it is the best implementation for all circumstances.

 

We all know that this is not and cannot be true, and we have mostly all experienced solving our own problems in our own way, and later coming to realize we've done it in a way that we can recognize as one of these patterns. In this way, we've participated in the emergent definitions of these patterns.

 

At the same time, what makes patterns useful in post-mortem analysis and discussion is that these patterns imply general properties of the solutions and of general properties of the problems themselves. We know what a Factory does, and can surmise the rough shape of it, what its responsibilities probably are, and to roughly what problem it proved naturally useful in solving. Once we know those things, it becomes possible to work forward from the other direction -- we can say that if we have a familiar kind of problem, and if the surrounding responsibilities are like those we face now, then there's a strong chance that a solution in the shape of, say, the Factory pattern will prove a good solution. Even if it proves not to fit as well as we thought, beginning there can be useful as you think through the implications that choice would have for the rest of the code systems, which you might also think about in terms of the pattern you expect them to be shaped like. Done well, this is an effective tool for working out potential design problems before writing lots of code.

 

But you probably shouldn't attempt to define your whole system up front in terms of patterns; this usually doesn't scale much past textbook exercises. Even still, thinking in these terms during the design phase can help abstract away details that might be irrelevant and distract from what's necessary -- and likewise, if applied too aggressively, can obscure details that are relevant and necessary, and will go overlooked. Be wary of this. The worst happens when you are both too aggressive and too rigid in your application of patterns as design tools, which can cause you to crystallize concrete details before you have really thought them through and let your design come to rely upon them, spreading the tendrils of your gaffe beyond a simplex refactoring (the term of art for this is "painting yourself into a corner") -- "simplex", by the way, is a word you should come to know and love; its meaning is the opposite of "complex", whose own definition has roots meaning something like "to braid or blend together" -- simplex is best plex.

 

Further, those code systems which aren't naturally a strong fit to a pattern usually represent the greatest unknowns in your code systems -- which can be a good indicator that you should consider whether they are candidates for early prototyping. Many times these code systems turn out to be complex sorts of black-boxes and that's fine if they simply are -- games and many applications just have this kind of code in them that you can't really design until you get there. But other times it turns out that these code systems are opaque to you because it turns out that you don't understand the problem, the responsibilities, or the interactions with surrounding systems well enough to recognize the patterns or algorithms hidden inside.

 

Patterns are useful both before and after writing code -- just don't let them become the tail that wags the dog.




#5289466 Should i learn Win32 or UWP?(C++, Directx)

Posted by Ravyne on 30 April 2016 - 03:15 PM

The decision mostly comes down to platform support. UWP lets you reach all Windows 10 based platforms -- PCs, Xbox One, Surface Hub, even ARM-based devices like Windows Phone and Raspberry Pi (models 2 and 3, through the Windows IoT platform). It's a single, mostly-uniform API that fairly smoothly lets you target different device characteristics with a single application. It's also the preferred format for the Windows Store, and it's the only way you can get your app on Xbox without a bigger company backing you (FYI, UWP on Xbox One is D3D11, at least for now).

Win32 works on all versions of Windows for PCs, but isn't otherwise relevant to any devices you're likely to care about or have access to. The main benefit from a platform perspective is that Windows 7 and even Windows XP are still relevant in some parts of the world, and also in some corporate environments (mostly mid-sized companies tied down for legacy reasons, or small companies who can't afford to upgrade so frequently, and lots of small mom-and-pops who don't replace anything until it breaks).

Honestly, you probably should learn both to at least a reasonable degree. And you probably will, even if you don't want to. But you can use the information here to decide where to invest first.


#5289359 Is inheritance evil?

Posted by Ravyne on 29 April 2016 - 07:21 PM

A good rule of thumb is to solve any problem with the least-powerful tool it can be (reasonably) solved with. Try not to think of that in a negative light -- by using the least-powerful tool, what we really mean is the one with the least unnecessary dangers attached.

 

There are a few general power-progressions you should try to observe:

  • Prefer composition over inheritance -- that is, use inheritance only when it supports precisely the relationship semantics you want, not for reasons of convenience.
  • Prefer interfaces (interface/implements, pure-virtual classes) over inheriting concrete classes (extends, "plain" inheritance).
  • In C++, prefer non-member, non-friend functions over member functions, member functions over non-member friend functions, and friend functions over friend classes.
  • Know the differences between private, protected, and public inheritance in C++, and use the appropriate one.
  • Keep things in the smallest reasonable scope.
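As a tiny example of the first point (with made-up types): inheritance claims an "is-a" relationship, so when what you really mean is "has-a", compose instead of inherit.

```cpp
#include <string>

// A small utility class another class wants to use.
class Logger {
public:
    std::string last;
    void log(const std::string& msg) { last = msg; }
};

// "class Engine : public Logger" would claim "an Engine is-a Logger",
// which isn't the intended semantics -- an Engine merely *uses* logging.
class Engine {
    Logger logger;  // composed, not inherited: "Engine has-a Logger"
public:
    void start() { logger.log("engine started"); }
    const std::string& lastLog() const { return logger.last; }
};
```

The composed version also keeps Logger's interface out of Engine's public surface, which is the "least-powerful tool" idea in miniature: nothing is exposed that the relationship doesn't require.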

 

Those are just a few examples. Being a good engineer doesn't mean being the one who smugly wields tools of great power, confident you'll not fuck up; it's great when one can do that when there's no other reasonable choice -- and you'll still fuck up -- but a trait of a good engineer is that they seek out the solutions which are exposed to the minimum set of potential hazards while meeting requirements of (in flexible order) safety, performance, maintainability, usability, and ease of engineering.

 

Language features are not inherently evil (not even goto), but they are sometimes misapplied, and the more commonly misapplied they are, or the worse the repercussions, the worse their reputation becomes. Sometimes this is exacerbated by the way that languages are taught, as is the case with how inheritance has come to have such a poor reputation. Sometimes it's exacerbated by the mistranslation of programming skills from one language to another; in general, a Java programmer (or a C# programmer, to a somewhat lesser degree) will *way* abuse inheritance if tasked to write C++ (and they'll probably leak memory like a sieve too  :) ).

 

TL;DR: Know thy tools, and program.




#5289205 RPG item system: storing definitions

Posted by Ravyne on 29 April 2016 - 01:59 AM

It somewhat depends on how homogeneous the set of properties is among objects of a certain category (where category means, say, shields/chestplates/blades) -- if the properties are homogeneous then each category maps well to a database table or, more simply, a spreadsheet page.

If individual objects are non-homogeneous but the range of options is well-defined, then something like XML can be a good fit because it allows variance within a well-structured and verifiable format.

If individual objects are non-homogeneous and the range of options is more ad-hoc (in the sense that objects might have properties unique to themselves), then something like JSON or YAML can be a good fit; these formats are semi-structured -- that is, the grammar and parsing rules are well-defined, but there's no formal data schema as with XML. For good and for bad, there's nothing stopping you from putting any data you want anywhere, so long as your program's parsing logic can cope.


#5289135 Returning by value is inevitable?

Posted by Ravyne on 28 April 2016 - 02:01 PM

Also, some compilers can have a lot of trouble in the presence of references, to the point that they fail to make seemingly simple optimizations. E.g. a math function that takes two parameters by reference can't easily tell that those parameters don't alias each other without a more complex post-inlining alias-analysis pass in the optimizer, and so might generate poorer code than you'd get if the parameters were passed as value types (and then preferably in registers).

 

Just wanted to note that this is another point in favor of that more-or-less canonical function-call signature pattern (first parameter non-const, by value, to be used as the return value (and hopefully in a register); second parameter by const reference) -- it's trivial for the compiler to know that the arguments don't alias. The same is true of passing both arguments by value (again, hopefully in registers), but if you can't or don't want to (maybe the object is too large, or is non-POD and requires a deep copy), the pattern I showed sidesteps the aliasing issue while mitigating at least one of the copies (if you can afford to pay it more attention, other signatures or techniques might do better, but the canonical pattern is effortless and a good default).
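Sketched concretely (the Vec4 type and add function are just illustrative): the first parameter arrives by value, so the compiler knows it aliases nothing, and the same object doubles as the return value; the second comes in by const reference.

```cpp
#include <array>

using Vec4 = std::array<float, 4>;

// Canonical signature: first parameter non-const by value (no aliasing
// possible, and small enough to travel in registers), second by const
// reference; the by-value copy is mutated in place and returned.
Vec4 add(Vec4 a, const Vec4& b) {
    for (int i = 0; i < 4; ++i)
        a[i] += b[i];
    return a;  // NRVO / implicit move keeps the return cheap
}
```

If add took both parameters by reference instead, the compiler would have to prove `&a != &b` before vectorizing or reordering the loads and stores; here that proof is free.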

 

 

I also want to say quickly that the 'inline' keyword doesn't actually do what most people think it does -- it doesn't force the function to be inlined, and it doesn't even directly "suggest" that the compiler should inline it (which is what most people think it does). The 'inline' keyword only exists to tell the compiler that the function is being defined inline, and to basically not complain about finding multiple definitions, as it potentially will when the definition lives in a header. Having been defined inline, the function becomes more available for the compiler to perform inlining, so it's a sort-of suggestion in a kind of heuristic sense, but the 'inline' keyword is not itself an expression of intent for something to be inlined by the compiler -- many programmers believe that's what they're saying, but that's not what the compiler understands from it. "forceinline" is closer to what people think they're saying, and depending on compiler settings even forceinline is not really forced, but just a suggestion.




#5289129 what is meant by Gameplay?

Posted by Ravyne on 28 April 2016 - 01:31 PM

But in my defense, I wasn't trying to say it beats C++ every time; I was saying it is "POSSIBLE" for a scripting language to outperform C++.

 

But it's not possible, not even once, in a fair fight -- with the deck stacked against it, with poor C++ programming, with improper or incomplete library use, sure, you can come up with micro-benchmarks that show C++ at a disadvantage -- but there are lies, damn lies, and statistics, right? Doing the same work in C or C++ will always be as fast or faster than essentially any language, "real" or script. You might, though, have to do some work that's not readily available in the language or in common libraries, and if a technique is readily available in another language that yields more performance per effort-unit, then that's a point in favor of that language -- however, that's a productivity argument, not a performance argument.

 

And productivity is a damn fine argument for a scripting language. A much better argument than performance, frankly -- which is the point I've been driving at the whole time.




#5288970 OOP and DOD

Posted by Ravyne on 27 April 2016 - 01:37 PM

That's somewhat untrue -- in general, the most broadly-applicable DOD-type data transformations will benefit other platforms even if they are not absolutely optimal. In part this is because details of e.g. cache-line sizes, number of cache levels, associativity of said caches, and relative latency of each cache level through to main memory don't, in practice, have a lot of variance. Cache lines on application processors are 16 words everywhere I'm aware of. L1 data cache is 16 or 32 KB everywhere, latency of about 3 cycles, usually 4-way set-associative. L2 caches are 256-512 KB per core, latency around 10-12 cycles, 4- or 8-way set-associative. L3 caches are 2-4 MB shared among 2-4 cores on simpler/slower cores (like PS4/XBONE) or 6-8 MB shared among 4 fast, wide superscalar cores (e.g. Intel i3/i5/i7), 8-way or sometimes full associativity, about 36-cycle latency; memory latency is about 90 cycles if the page mapping is already cached, more if not. The prefetcher acts like an infinite L4 cache if your access patterns are well-predicted (linear forwards/backwards is best, consistent non-contiguous strides next-best), with latency not much worse than L3. Real L4 caches, where you find them, are typically a victim cache. So on and so forth.

 

But even if there were greater variance, the transformations you make to make good use of any kind of cache are similarly beneficial to any other kind of cache, simply because caches and memory hierarchies are universally more similar than they are different, whatever the fine details may be.
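A sketch of the kind of transformation meant here (names and fields invented for the example): splitting an array-of-structs into a struct-of-arrays so the hot loop streams only the fields it actually touches, which pays off on any cached architecture regardless of the exact line size or level count.

```cpp
#include <cstddef>
#include <vector>

// SoA layout: the fields the hot loop reads sit in their own contiguous
// arrays, so every fetched cache line is full of useful data.
struct ParticlesSoA {
    std::vector<float> x, vx;  // hot fields, accessed every tick
    std::vector<int>   hp;     // cold field, kept out of the hot lines
};

// Integrate positions and return their sum; access is linear and stride-1,
// which is exactly the pattern hardware prefetchers predict best.
float advance(ParticlesSoA& p, float dt) {
    float sum = 0.0f;
    for (std::size_t i = 0; i < p.x.size(); ++i) {
        p.x[i] += p.vx[i] * dt;
        sum += p.x[i];
    }
    return sum;
}
```

The equivalent AoS layout (one struct with x, vx, hp per particle) would drag the cold hp field through the cache on every iteration; the SoA version wastes none of the line, whatever the line size happens to be on a given machine.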

 

The PS3 is notable in particular for the SPUs in its Cell processor, which provided essentially all of the PS3's computational power -- these were streaming processors, like DSPs, with no real "caches" to speak of (each SPU's local store had similar access properties to a cache, but was all the memory that an SPU could see; DMA was the only way to speak to main memory, other SPUs, or the rest of the system), and as such they essentially required DOD practices to achieve reasonable computational throughput. But developers also found that these transformations benefited scalar/AltiVec code on the PPU, and in cross-platform titles even benefited Xbox 360 and PC targets. The changes that were necessary and crucial to make the PS3 work as well as it was designed to were good for other platforms as well, even when those platforms weren't strictly reliant on such transformations in the way that the PS3's SPUs were.




#5288956 what is meant by Gameplay?

Posted by Ravyne on 27 April 2016 - 12:24 PM

I don't know much about the speed of programming languages and such, but I know that the scripting language SkookumScript is, on its own, not faster than C++ -- but with some optimizations, certain tasks that are completed in "human time" (basically meaning completed over a couple of frames, and that don't have to be refreshed every tick) can in theory perform 100 times better in SkookumScript than C++.  http://forum.skookumscript.com/t/skookumscript-performance/500

 

so it is possible for scripting languages to outperform real languages, but apart from some certain parts of certain languages real languages should always perform better

 

 

From the thread you linked:


Fundamentally, well-written C++ will be of course faster than any executed scripting language, but in practice SkookumScript (which itself is written in C++) can beat naive C++ in performance due to its ability to easily time slice operations (meaning code doesn't run every frame but only every few frames).

 

You have to be careful how you define "outperform" -- one of the creators of SkookumScript, which I have no doubt is very performant for a scripting language, is saying right here (bold) that it's a fundamental truth that well-written C++ will beat SkookumScript (he does not even say "highly optimized"), and (italics) goes on to explain that SkookumScript can beat naive C++ (I take that to mean C++ that is neither architecturally, algorithmically, nor locally optimized) because it has built-in time-slicing such that it does less work. While that built-in time-slicing is a nice feature (it's an example of those kinds of programming models beneficial for scripting that I mentioned before) and it's great to have ready at hand, it's not an apples-to-apples comparison; you can do time-slicing in C++, you just have to write it (C++ coroutines -- which unfortunately landed in a Technical Specification rather than C++17 proper, but are already shipping as a preview in VS2015 Update 2 -- make this almost trivial), and it sounds like he's not even ruling out that non-time-sliced but optimized C++ could best them.
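For instance, a hand-rolled time-slicer needs no coroutines at all -- the essence is just "run at most a fixed budget of queued tasks per frame instead of all of them." This is a deliberately minimal sketch with invented names, not SkookumScript's actual design:

```cpp
#include <cstddef>
#include <functional>
#include <vector>

// Spread a batch of queued tasks across multiple frames: each tick() runs
// at most `budget` tasks, so no single frame absorbs the whole workload.
class SlicedQueue {
    std::vector<std::function<void()>> tasks;
    std::size_t next = 0;
public:
    void add(std::function<void()> t) { tasks.push_back(std::move(t)); }

    // Run up to `budget` pending tasks; returns how many actually ran.
    int tick(int budget) {
        int ran = 0;
        while (ran < budget && next < tasks.size()) {
            tasks[next++]();
            ++ran;
        }
        if (next == tasks.size()) {  // batch finished; reset for reuse
            tasks.clear();
            next = 0;
        }
        return ran;
    }
};
```

Call tick() once per frame from the game loop; five queued tasks with a budget of two finish over three frames instead of one, which is exactly the "does less work per frame" effect being credited to the scripting language.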

 

That you can write straight-forward SkookumScript that will beat naive C++ is certainly noteworthy, and a valuable feature -- but you shouldn't take from that that it "outperforms" even average C++ -- its creators are not boasting that claim.




#5288839 what is meant by Gameplay?

Posted by Ravyne on 26 April 2016 - 04:35 PM

 

thats actually what led me to asking this question on gameplay as i was wondering why I cant just do it in c++ and not a scripting language

You can but it costs you a 10 to 50 times more lines of code.

 

Not necessarily -- it's true that many scripting languages are compact or have features (e.g. an actor model, a prototypal inheritance model) that lend themselves to scripting game entities and game interactions, but that does not mean that writing gameplay code in C++ has to be more difficult or more verbose for the people "scripting" the gameplay elements, albeit in C++.

 

If you were to use C++ for gameplay code, you might not take any special effort if you or the entire team are fluent in C++ (and the engine); if you have less-experienced people "scripting" gameplay through C++, then your goal as an engine programmer would be much like any other task -- provide an API, or indeed an embedded domain-specific language, that makes it easy for client-code "scripts" to express high-level intent while encapsulating the low-level details away. In practice, this ends up being not much different than the work of integrating a scripting language with your engine, though you might be providing more of the cogs and widgets yourself. The payoff is that, done well, your "scripting" staff gets many of the productivity, expressivity, sandboxing, and hand-holding benefits that stand-alone scripting languages are known for.





