
Ravyne

Member Since 26 Feb 2007
Online Last Active Today, 04:39 PM

#5291960 [GBA] Kingdom of Twilight a retro rom

Posted by Ravyne on 16 May 2016 - 04:17 PM

Retro/Homebrew is always intellectually interesting, but it's something people do more for the love and experience of doing it than in hopes of commercial success. It's interesting hardware, with features and limitations that can force your hand into some really inspired solutions -- the GBA and the Dreamcast are usually the platforms I recommend for those interested in retro homebrew. Either platform is limited enough to pose a challenge, but capable enough to do interesting things; old enough to communicate the essence of old-school, to-the-metal development, yet not so archaic or downright weird that the lessons you learn won't benefit you in contemporary times -- they will.

 

Good luck, the GBA is an excellent platform for retro-styled RPGs.




#5291958 Difference Between 2D Images And Textures?

Posted by Ravyne on 16 May 2016 - 04:07 PM

 

Is mipmaps the only difference between textures and images?

 

As he wrote, it depends on the context.

 

For some contexts they are the same thing.

 

For other contexts they refer to data formats. Highly compressed formats like PNG and JPEG require a good deal of memory and time to be decoded before going to the video card, while some formats such as S3TC/DXTC and PVRTC are supported directly by various video cards -- so some systems call the former "pictures" and the latter "textures".

 

Building on this, and on Josh's previous reply, the compression format or lack thereof is often driven by the image content itself. Textures representing real-life images, or "material" textures used in games, often compress very well, even to lossy formats, without a great deal of apparent visual fidelity loss. However, what you might typically call a "sprite" -- such as an animated character in a 2D fighting game, or any sort of "retro" look -- usually suffers too much quality loss from being converted to those kinds of compressed formats; instead, such images might be left in an uncompressed format where their exact, pixel-perfect representation is preserved. On-disk formats like GIF or PNG can serve those images well, but GPUs don't understand them; so it's common to see those formats on disk, with a conversion to raw (A)RGB before hitting the GPU. An 8.8.8.8 ARGB image is 32 bits/pixel, while a highly-compressed texture format can get as low as 2 bits/pixel if the image content is suitable.
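
To put rough numbers on that, here's a quick sketch of the per-image storage cost at those bit depths (names are illustrative; the figures follow directly from bits-per-pixel):

```cpp
#include <cstddef>

// Storage for one image/mip level at a given bit depth. 8.8.8.8 ARGB is
// 32 bits/pixel; a block-compressed format like DXT1 comes to 4 bits/pixel,
// and PVRTC has a 2 bits/pixel mode.
std::size_t textureBytes(std::size_t width, std::size_t height,
                         std::size_t bitsPerPixel)
{
    return width * height * bitsPerPixel / 8;
}

// A 1024x1024 image: 4 MiB uncompressed, 512 KiB at 4 bpp, 256 KiB at 2 bpp.
```

So a suitable image can shed over 90% of its GPU memory footprint by trading raw ARGB for a hardware-compressed format.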




#5291462 What is Camera space ?

Posted by Ravyne on 13 May 2016 - 04:33 PM

Camera space (or view space) is the space of the entire world with the camera or viewpoint at the origin -- every coordinate of everything in the world is measured in units relative to the camera or viewpoint, but it's still a full 3D space.

 

Screen space is basically the pixels you see on your screen. It's a 2D space, but you might also have buffers other than the color buffer (the pixels you see), like the depth buffer. You can reconstruct a kind of 3D space, since you have an implicit (x,y) point that has an explicit depth (essentially z), and that's what gets used in these SSAO techniques, if I understand them correctly.
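
That reconstruction can look something like the sketch below, assuming an OpenGL-style symmetric perspective projection with NDC depth in [-1, 1]; the names and conventions here are illustrative, and other APIs differ in depth range and y direction:

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

// Reconstruct a view-space position from a screen pixel plus its
// depth-buffer value -- the building block SSAO relies on.
Vec3 viewFromScreen(float px, float py, float ndcDepth,
                    float width, float height,
                    float fovY, float aspect, float zNear, float zFar)
{
    float f = 1.0f / std::tan(fovY * 0.5f);       // cot(fovY / 2)
    float a = (zFar + zNear) / (zNear - zFar);    // projection z terms
    float b = 2.0f * zFar * zNear / (zNear - zFar);

    float ndcX = 2.0f * px / width - 1.0f;        // pixel -> NDC
    float ndcY = 1.0f - 2.0f * py / height;       // (y flipped)

    float z = -b / (a + ndcDepth);                // invert ndcZ = (a*z + b) / -z
    return { ndcX * -z * aspect / f,              // invert ndcX = (f/aspect)*x / -z
             ndcY * -z / f,                       // invert ndcY = f*y / -z
             z };
}
```

Feed it each pixel's depth sample and you get the view-space positions an SSAO pass compares against its neighbors.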




#5291030 Data-oriented scene graph?

Posted by Ravyne on 10 May 2016 - 03:34 PM

And I'll add: nothing says you must DOD'ify everything in your program. If OOP suits a problem, and the difficulty-to-benefit ratio of DOD is unclear, and the OOP solution is not holding back beneficial applications of DOD at higher levels, then it is probably wisest not to replace a principled OOP solution with a poor DOD one just for DOD's sake.




#5291028 Data-oriented scene graph?

Posted by Ravyne on 10 May 2016 - 03:30 PM

So, I can't go into deep specifics in the time available to me now, and I probably am not the right person to do so anyway -- but in general, it's not usually the case that design patterns (of which the scene graph is one) survive a transformation from OOP to DOD. You end up with something that serves basically the same purpose, but the reorganization necessary to make the transformation renders it unrecognizable as the original thing. DOD is really a call to re-imagine a solution from first principles, taking into account the machine's organization -- it is not a call to take your OOP patterns and architecture and just "reproject" them into the new paradigm.

 

But I'm speaking generally, and not to whether the scene graph specifically has a straightforward-ish transformation. My gut says that DOD and graphs of most kinds are at odds, and nothing immediately comes to mind when I try to imagine a system that is logically a graph, but employs serious DOD underneath, while still achieving the performance benefits one would expect of DOD. You can do relatively straightforward things, maybe, like making sure your scene-graph nodes compose only "hot" data, and while that would be of some benefit, it doesn't fundamentally change the graph representation.
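
For what it's worth, that "straightforward thing" usually looks something like flattening the hierarchy into parallel arrays of hot data, with parent links as indices and nodes sorted so every parent precedes its children. A sketch, using a toy 1-D "transform" (an x offset) to keep it short; all names are mine:

```cpp
#include <cstddef>
#include <vector>

// Hot data only: local transforms and parent indices in flat arrays.
// Nodes are pre-sorted so parent[i] < i for every non-root node, which
// lets world transforms be computed in one cache-friendly linear pass.
std::vector<float> worldTransforms(const std::vector<float>& local,
                                   const std::vector<int>& parent)
{
    std::vector<float> world(local.size());
    for (std::size_t i = 0; i < local.size(); ++i)
        world[i] = (parent[i] < 0) ? local[i]                // -1 marks a root
                                   : world[parent[i]] + local[i];
    return world;
}
```

It's still logically a tree, but the traversal has become a plain array walk -- which is the DOD-flavored benefit, even if the representation is no longer recognizable as node-and-pointer scene graph code.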

 

That said, I'm not expert enough myself to believe that no one here is going to come along and disprove me :)




#5291002 My game ends up being boring

Posted by Ravyne on 10 May 2016 - 12:34 PM

Fun is a very hard thing to pin down -- because it's composed of "fuzzy" sorts of concepts like challenge, choice, reward, punishment, variety, novelty, emergence, pace, flow, feel, and so much more.

 

Take a simple jump in a platformer, for instance. You can have a nice parabolic jump that mimics gravity, or you can have a linear up-down jump that doesn't. Both can otherwise have the same properties (like max height and max distance), so they perform basically the same way as far as level-design possibilities go. But the parabolic jump just feels nicer -- it has weight and gravity -- and that makes it fun; the linear jump feels cheap -- and that makes it boring. That's not to say that realistic physics are fun -- Super Meat Boy completely throws off realism to pursue "feel" 110%, which its creators have spoken about several times -- and even in the original Super Mario Bros. for the NES, Mario and Luigi's jump arc is not physically realistic (they can jump several times their own height). It doesn't even follow a single parabolic motion -- the arc they follow on the rising half of the jump is different (it "floats" more) than on the falling half -- which gives their jump, and indeed the entire series, a very distinct feel.
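
That "floats more on the way up" arc needs nothing fancier than two gravity constants -- a weaker one while rising and a stronger one while falling. The constants and names below are illustrative, not measured from any actual game:

```cpp
// One Euler integration step of an asymmetric, Mario-style jump:
// gravity is chosen by whether the character is still rising.
void jumpStep(float& y, float& vy, float dt,
              float gravityUp, float gravityDown)
{
    vy -= (vy > 0.0f ? gravityUp : gravityDown) * dt;  // pick gravity by phase
    y  += vy * dt;                                     // integrate position
}
```

With gravityDown greater than gravityUp, the ascent takes visibly longer than the descent, which is a large part of what reads as "floaty" on the way up and "weighty" on the way down.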

 

That's just a single concrete example, but it's illustrative of the fact that mechanical equivalence rarely implies a similar amount of fun to be had.

 

 

There is a wealth of videos on YouTube where games, design elements, and game mechanics are broken down in very critical and analytical ways. I recall being very impressed with one video that broke down how the camera tracking in Super Mario Bros. evolved throughout its 2D incarnations, which I wish I could find and link to right now. I'd be remiss not to plug my coworker's excellent game design channel, Game Design Wit, but there are a lot of content creators doing these kinds of videos.

 

Edit: Found it -- How Cameras in Side-Scrollers Work




#5290997 Custom editor undo/redo system

Posted by Ravyne on 10 May 2016 - 12:12 PM

The command pattern approach is an oft-cited solution.
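
In that pattern, each edit is an object that knows how to apply and reverse itself, and undo/redo are just two stacks of those objects. A minimal sketch (all names are illustrative):

```cpp
#include <memory>
#include <string>
#include <utility>
#include <vector>

// Each command knows how to apply itself to the document and reverse itself.
struct Command {
    virtual ~Command() = default;
    virtual void apply(std::string& doc) = 0;
    virtual void revert(std::string& doc) = 0;
};

struct InsertText : Command {
    std::size_t pos; std::string text;
    InsertText(std::size_t p, std::string t) : pos(p), text(std::move(t)) {}
    void apply(std::string& doc) override  { doc.insert(pos, text); }
    void revert(std::string& doc) override { doc.erase(pos, text.size()); }
};

class Editor {
    std::string doc_;
    std::vector<std::unique_ptr<Command>> undo_, redo_;
public:
    const std::string& text() const { return doc_; }
    void run(std::unique_ptr<Command> c) {
        c->apply(doc_);
        undo_.push_back(std::move(c));
        redo_.clear();                       // a new edit invalidates redo
    }
    void undo() {
        if (undo_.empty()) return;
        undo_.back()->revert(doc_);
        redo_.push_back(std::move(undo_.back())); undo_.pop_back();
    }
    void redo() {
        if (redo_.empty()) return;
        redo_.back()->apply(doc_);
        undo_.push_back(std::move(redo_.back())); redo_.pop_back();
    }
};
```

Each new command type (delete, replace, format, ...) slots in by implementing apply/revert, which is what makes the pattern so commonly cited for editors.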

 

If you're interested, Sean Parent gave a talk entitled "Inheritance is the Base Class of Evil" (Channel 9 link), which is a brief 24 minutes. It's all about the benefits of preferring composition over inheritance and value semantics over reference semantics -- these things are fundamental to his overhaul of Photoshop's undo/redo, and he gets more specific about how that works by the end (I think from about the midpoint on, but it's been a while since I've watched it; regardless, I recommend watching the whole thing -- it's short, and it's informative enough that I've watched it a handful of times over the 30 months it's been available).

 

Here's a YouTube link as well in case that's more convenient, but I think the Channel 9 video is better quality; the YouTube video is a third-party upload.

 

Also, Sean's presentations are always great, and never a poor way to spend a lunch break.




#5290874 Will Unity or Unreal Engine 4 suit better for me?

Posted by Ravyne on 09 May 2016 - 02:48 PM

The biggest difference between them, IMO, is that Unreal comes from an AAA lineage and has relatively recently started extending its reach down to mobile and indies, while Unity comes from a mobile (iOS) / indie lineage, and has been steadily extending its reach towards greater and greater AAA ambitions.

 

What this means for users is that they're really both converging toward similar capabilities, but they come at it from different beginnings. Both companies have a huge staff dedicated to ongoing engine development, very capable people all around, so you shouldn't make the mistake of assuming that Unreal is somehow more legitimate. In practice, Unreal has put a lot of effort into friendlier tooling with UE4, but there are still more (and sharper) rough edges than in Unity's tooling. Unity is more friendly for the casual developer, but sometimes the fact that they assume less of the average Unity user can get in the way -- usually you can get around it, but it sometimes seems like more work than it ought to be, or that what you need is more hidden.

 

Licensing is also a big difference -- both in terms of access to the C++ source code (which you might come to need for performance tuning) and in what it costs you to license either engine for commercial use. Unreal offers C++ source access for free, while Unity charges ~$50,000 last I checked. For usage, Epic wants 5% of your gross revenue above $3,000 per product, per year, but there's no seat license -- nice and simple; it's also entirely free if you're using it to make CG films, IIRC. Unity wants a $75/month subscription or a $1,500 one-time fee per seat, per platform package (e.g. extra iOS or Android features, consoles -- which I think are a higher fee) for the Professional Edition, but they don't take a cut of your sales after that. There's a Personal Edition license for Unity that's basically free all up -- no royalties, no seat-license fees -- and the engine is feature-complete; however, you lose some really nice non-engine features, can't get C++ source without a professional license, and the personal licenses aren't available to any team that's made more than $100,000 in the previous year, or that's currently funded to more than $100,000. It's a viable option for a small team working on little or no budget, though (and if it's relevant to your plans, keep in mind that if you did something like Kickstarter and collected more than $100k during a given year, that's going to count and you'll need to pay up).

 

Depending on what platforms you target, how many developer seats you're licensing, and how many sales you expect, one of these options will save you money. If you make a lot of sales, Unity works out to be less expensive in the end -- the break-even point moves lower or higher as a function of how many seats and platforms you license, and whether you need C++ source -- but you pay Unity up front, regardless of whether you make any sales at all. Unreal costs more when you're successful, but it doesn't penalize you if you have a commercial failure -- 5% is really never a burden. When I worked it out once: basically, if you make less than a couple hundred thousand in sales, Unreal is the cheaper option; if you make more than that, Unreal costs you more, but making "too much money" is a wonderful problem to have and you'll probably be overjoyed to give them their 5%. That 5% is definitely cheaper than a team of high-caliber engine developers.
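
The break-even arithmetic is simple enough to sketch. The figures below are the ones quoted above (Epic: 5% of gross beyond $3,000 per product per year; Unity Pro: $1,500 per seat, per platform package); both companies revise their terms over time, so treat the numbers as a snapshot:

```cpp
#include <algorithm>

// Epic's cut: 5% of gross revenue beyond the royalty-free allowance.
double unrealCut(double grossRevenue,
                 double royaltyFree = 3000.0, double rate = 0.05)
{
    return std::max(0.0, (grossRevenue - royaltyFree) * rate);
}

// Unity Pro: a flat per-seat, per-platform-package fee, no royalty.
double unityProCost(int seats, int platformPackages, double perSeat = 1500.0)
{
    return seats * platformPackages * perSeat;
}

// Example: a 2-seat, 1-platform team pays Unity $3,000 up front; Epic's
// cut only reaches that same $3,000 once gross sales hit about $63,000.
```

Below that break-even gross, the royalty arrangement is the cheaper one; above it, the flat fee wins -- which is the "pay up front vs. pay on success" trade described above.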

 

That said, whichever is most comfortable, and has learning resources and a community that suits you, is probably the way to go. Your game is always more important than the engine, and these engines and toolsets are already close enough to parity that neither will block you from achieving your vision.




#5290589 Is it C# Territory?

Posted by Ravyne on 07 May 2016 - 05:04 PM

 

 

 

- Save time by going with C#

- Use saved time for pushing compute-heavy algorithms to GPU instead

 

This is exactly what is happening at my job currently (MRI scanner). Most of the user-facing code is moving from C++ to C#. More and more of the high-performance code is moving from C++ to CUDA and the likes.

 

How would C# save my time? What about Java? Does it save time too? Python?

 

I may save some time and effort, but then I will hit the C# limitations I mentioned above!

 

 

It's an important distinction that we're not talking about core code here; we're talking about the stuff that lets the user drive the application and then displays what they've done, and about interacting with networks and services, files, etc. Generally, those things are much more painful in C++ than in C#, either because of standard language features (e.g. delegates, threading, and more) or because of the much larger standard library -- C++ has many libraries, but it's a hodge-podge that can sometimes be an extra burden to make work together. C# is as good at this kind of thing as C++ is at the kind of things you first mentioned.

 

CAD is certainly a high-performance graphics application, like games, but it's also a good deal more regular. With a smaller set of problems, there's been a lot of concentrated effort in studying CAD techniques. That helps CAD developers push more and more of the computationally heavy stuff onto the GPU, making the absolute performance of the CPU-side code less and less critical, as it is given less difficult responsibilities and simultaneously more time to do them in. I can easily imagine that a modern CAD application could be written basically in C# from the UI all the way down to a job queue, which would run job kernels either on the CPU, written in C++, or on the GPU, written in whatever GPGPU language flavor -- and maybe supporting data structures also in C++.

 

 

No sir, I'm not talking about reading files or parsing strings, or even accessing system services. I'm talking about using something like CGAL high performance CSG libraries, and ray tracing SDKs.

 

 

Fine -- which you don't need in the layers that interact with a human, or the OS, or networks and files; and most of whose applications could probably be tucked into those jobs I talked about, or surrounding data structures, or maybe just the whole core of your app. I'm not saying to eschew C++ entirely; I'm saying you could do as much of the heavy lifting as you'd like in C++ or on the GPU, with the *rest* of the application in C#. I'm saying it could save you time not to have to wrestle with C++ in those human/OS/network-and-file areas, and that there's no performance argument there to justify the effort. I'm saying that the hodge-podge of C++ libraries makes it harder to hire a dev who's already familiar with your technology stack than centering on C# standards for the same. I'm saying you could hire three C# programmers to focus on those layers for the price of just two C++ programmers. I'm saying C# and its standard library are very well suited to those areas, and are a viable avenue to save you time in the long run over what setting up a hybrid C#/C++ project costs up front.

 

And despite all I'm saying in favor of this arrangement, I'm still only saying it's a consideration and a viable approach -- not a mandate, nor even a best practice. If you and your shop are happy with C++, and you can hire enough good C++ people, and your margins can support the salaries they demand as you and your local competition bid up offers competing for a scarcer and scarcer resource, then cool. You do you. What people are taking issue with is your seeming insistence that full-stack C++ is the only way to achieve performance -- it's not, and I think C++ need not go as far up the stack as you'd guess if you're leveraging GPUs.






#5290185 Is it C# Territory?

Posted by Ravyne on 04 May 2016 - 08:24 PM

 

Your good reason is "We have hundreds of thousands of man hours invested in our giant aging C++ code base, thus we'll be keeping that around. kthxbye."

 

Sunk cost fallacy...

 

A good reason would involve comparing the expected results of the new product against the quality of the existing product to see if the value of improvement exceeds the implementation cost and risks.

 

 

Disagree with this application -- at some point it's been far too long since the train left the station: the train's designer died or retired years ago (no one really knows which), the conductor and engineering staff have completely turned over five times, and no one remembers any of the bugs that were squashed on the tracks more than about a mile back.

 

It's almost never a good idea to go all green-field over your existing product when the product is large and complex, and when your user base is entrenched -- and especially when they're entrenched in your product's quirks and bugs. You want to talk about a nightmare... reaching feature-parity with a large and complex product is a nightmare -- but reaching bug-parity, with the kinds of bugs people come to rely on (which, by definition, stayed around long enough to be relied upon precisely because no one could ever figure them out), is more than a nightmare; that's some Hellraiser shit right there. You know that weird for-scope bug from Visual C++ 6? The one with the option to turn it back on in every version of Visual C++ since it was fixed? There's a small set of regression tests somewhere in Microsoft that makes sure no parser change or bug fix messes with that, and there's an engineer whose job includes making sure those regressions never go red. Twenty years later that's still earning someone their bread, because the cost to the business is less than the cost of losing clients who can't or won't fix their code, even though it's a good idea. And that's a trivial example.
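
For anyone unfamiliar, that bug was VC6 leaking the loop variable into the enclosing scope; conformant C++ scopes it to the loop, and (if memory serves) MSVC still exposes the old behavior behind a /Zc:forScope conformance switch. An illustration:

```cpp
// Under the old VC6 behavior, this compiled:
//
//   for (int i = 0; i < 3; ++i) { }
//   use(i);   // 'i' leaked out of the loop
//
// Conformant code must carry the value out explicitly instead:
int lastIndex()
{
    int last = 0;
    for (int i = 0; i < 3; ++i)
        last = i;
    return last;   // 'i' itself is out of scope here
}
```

Code written against the leaky behavior breaks under a conforming compiler, which is exactly the kind of client code that regression suite protects.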

 

Generally you would only make that kind of leap when the market forces your hand -- In recent years there's been a good deal of effort to get off of old systems written in COBOL and Fortran that very possibly run on old hardware you can't buy any more (and even repairing is difficult and costly), but mostly because there aren't any new COBOL and Fortran engineers who understand the intersection of those languages, and machines, and operating systems, and they can't persuade enough old coders to come out of retirement and into consulting -- the ones they can convince are A) billing at rates somewhere between a-charity-date-with-the-queen-of-England and who-wants-to-hunt-the-last-African-rhino, and B) are not unlikely to drift away at any moment due to death or dementia.




#5289950 Is making game with c possible?

Posted by Ravyne on 03 May 2016 - 02:46 PM

The modern pseudo-argument in favor of C is that it's "more transparent" -- namespaces, and classes, and templates, and operator overloading, and virtual member functions, and overload resolution taking all of those into account, and lambdas, and exceptions, and rules of 5 and 3 and 1, and on and on are all compiler voodoo that some believe is too opaque to be trusted -- or, at the very least, that it short-circuits critical thinking about platform-level details, and sometimes that it allows lesser-skilled programmers ("undesirables") to sneak in and wreak havoc in places they are not welcome. This is the basic position Linus Torvalds takes with Linux kernel development (unapologetically, as is his custom) -- though there are good reasons for not wanting a mix of C and C++ in the kernel, and C is already entrenched there for historical reasons.

 

But I don't consider that a good reason, for or against, myself. Many patterns that C++ makes easy (e.g. classes, virtual member functions) are useful in C too, and can be replicated to a reasonable degree with some work -- for example, a virtual member function in C++ is basically a function pointer with a convenient syntax for declaring, defining, and calling it. C code that does this sort of thing looks different, but you can find examples of it in the Linux kernel, in Quake, and in Halo's engine.
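
The C-style replication looks something like this: a struct carrying a function pointer, roughly the one-slot "vtable" hiding behind C++'s convenient syntax. All the names here are illustrative:

```cpp
// C-style stand-in for a virtual member function.
struct Entity;
typedef void (*UpdateFn)(Entity*);

struct Entity {
    UpdateFn update;   // the "virtual function" slot
    int hp;
};

void playerUpdate(Entity* e) { e->hp += 1; }   // player regenerates
void enemyUpdate(Entity* e)  { e->hp -= 1; }   // enemy decays

// Dispatch through the pointer -- the same call site handles every
// "subclass", just like a virtual call would.
void updateAll(Entity* entities, int count)
{
    for (int i = 0; i < count; ++i)
        entities[i].update(&entities[i]);
}
```

What C++ adds on top is mostly automation: the compiler builds and shares the table per class, fills in the right pointers at construction, and hides the indirection behind ordinary member-call syntax.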

 

C++ is generally more convenient and productive than C, IMHO, but there is some truth to the argument that the compiler does things behind your back, and it's true that this can cause trouble in some cases. But it's not terribly difficult to learn where most of the dragons are, and to guide yourself to a comfortable subset of C++, and a way of speaking it, that avoids them.




#5289820 Is using the Factory or Builder pattern nessesary?

Posted by Ravyne on 02 May 2016 - 08:37 PM

Yeah, that ^^ :D

 

Patterns should never be a play-book for writing code. Patterns are, well, patterns in code that just so happen to have been repeatedly reinvented, at which point people recognised the pattern and assigned a name to them to make discussion easier. You should look at lists of patterns more like a traveller's dictionary, and less like a cookbook.

 

A software engineer should strive to be comparable to a chef who writes cookbooks -- not a machine that only knows how to follow them.

 

I don't want to be too contrarian here, because I don't want to undermine the strength of the general sentiment -- I agree it's closer to "correct" practice than the recipe-book alternative.

 

That said, I think a better approximation is that patterns are useful pre- and post-mortem, so long as you treat them as ideas rather than things. As usually comes up in these discussions, there is no "The" in the Grand Book of Software Patterns -- there is no "The Visitor Pattern", no "The Factory Pattern", no "The Builder Pattern". That word "The" (with a capital T) implies singularity -- it implies that there is one implementation to rule them all and that everyone agrees it is the best implementation for all circumstances.

 

We all know that this is not and cannot be true, and we have mostly all experienced solving our own problems in our own way, and later coming to realize we've done it in a way that we can recognize as one of these patterns. In this way, we've participated in the emergent definitions of these patterns.

 

At the same time, what makes patterns useful in post-mortem analysis and discussion is that they imply general properties of the solutions, and of the problems themselves. We know what a Factory does, and can surmise the rough shape of it, what its responsibilities probably are, and roughly what problem it proved naturally useful in solving. Once we know those things, it becomes possible to work from the other direction -- we can say that if we have a familiar kind of problem, and if the surrounding responsibilities are like those we face now, then there's a strong chance that a solution in the shape of, say, the Factory pattern will prove a good one. Even if it proves not to fit as well as we thought, beginning there can be useful as you think through the implications that choice would have for the rest of your code systems -- which you might also think about in terms of the patterns you expect them to be shaped like. Done well, this is an effective tool for working out potential design problems before writing lots of code.
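
To make "the rough shape of it" concrete: one of the many shapes a factory can take is a registry mapping keys to construction functions behind a common interface. Every name here is illustrative:

```cpp
#include <functional>
#include <map>
#include <memory>
#include <string>
#include <utility>

struct Enemy {
    virtual ~Enemy() = default;
    virtual int hitPoints() const = 0;
};
struct Grunt : Enemy { int hitPoints() const override { return 10; } };
struct Boss  : Enemy { int hitPoints() const override { return 100; } };

// A registry-based factory: callers ask for a key (perhaps read from a
// level file) and get back an object behind the interface, without ever
// naming the concrete type.
class EnemyFactory {
    std::map<std::string, std::function<std::unique_ptr<Enemy>()>> makers_;
public:
    void add(const std::string& key,
             std::function<std::unique_ptr<Enemy>()> make)
    {
        makers_[key] = std::move(make);
    }
    std::unique_ptr<Enemy> make(const std::string& key) const
    {
        return makers_.at(key)();   // throws if the key is unknown
    }
};
```

Other shapes -- a switch in a free function, a virtual create() on a per-type object -- are equally "a" factory; which one fits depends on the surrounding responsibilities, which is exactly the point above.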

 

But you probably shouldn't attempt to define your whole system up front in terms of patterns; this usually doesn't scale much past textbook exercises. Even still, thinking in these terms during the design phase can help abstract away details that might be irrelevant and distract from what's necessary -- and likewise, if applied too aggressively, can obscure details that are relevant and necessary, which will then go overlooked. Be wary of this. The worst happens when you are both too aggressive and too rigid in applying patterns as design tools, which can cause you to crystallize concrete details before you have really thought them through, and to let your design come to rely upon them, spreading the tendrils of your gaffe beyond a simplex refactoring (the term of art for this is "painting yourself into a corner") -- "simplex", by the way, is a word you should come to know and love; its meaning is the opposite of "complex", whose own definition has roots meaning something like "to braid or blend together" -- simplex is best plex.

 

Further, those code systems which aren't naturally a strong fit to a pattern usually represent the greatest unknowns in your code systems -- which can be a good indicator that you should consider whether they are candidates for early prototyping. Many times these code systems turn out to be complex sorts of black-boxes and that's fine if they simply are -- games and many applications just have this kind of code in them that you can't really design until you get there. But other times it turns out that these code systems are opaque to you because it turns out that you don't understand the problem, the responsibilities, or the interactions with surrounding systems well enough to recognize the patterns or algorithms hidden inside.

 

Patterns are useful both before and after writing code -- just don't let them become the tail that wags the dog.




#5289466 Should i learn Win32 or UWP?(C++, Directx)

Posted by Ravyne on 30 April 2016 - 03:15 PM

The decision mostly comes down to platform support. UWP lets you reach all Windows 10-based platforms -- PCs, Xbox One, Surface Hub, even ARM-based devices like Windows Phone and the Raspberry Pi (models 2 and 3, through the Windows IoT platform). It's a single, mostly-uniform API that fairly smoothly lets you target different device characteristics with a single application. It's also the preferred format for the Windows Store, and it's the only way you can get your app on Xbox without a bigger company backing you (FYI, UWP on Xbox One is D3D11, at least for now).

Win32 works on all versions of Windows for PCs, but isn't otherwise relevant to any devices you're likely to care about or have access to. Its main benefit from a platform perspective is that Windows 7 and even Windows XP are still relevant in some parts of the world, and also in some corporate environments (mostly mid-sized companies tied down for legacy reasons, small companies who can't afford to upgrade so frequently, and lots of small mom-and-pops who don't replace anything until it breaks).

Honestly, you probably should learn both to at least a reasonable degree. And you probably will, even if you don't want to. But you can use the information here to decide where to invest first.


#5289359 Is inheritance evil?

Posted by Ravyne on 29 April 2016 - 07:21 PM

A good rule of thumb is to solve any problem with the least-powerful tool it can be (reasonably) solved with. Try not to think of that in a negative light -- by using the least-powerful tool, what we really mean is the one with the least unnecessary dangers attached.

 

There are a few general power-progressions you should try to observe:

  • Prefer composition over inheritance -- that is, use inheritance only when it supports precisely the relationship semantics you want, not for reasons of convenience.
  • Prefer interfaces (interface/implements, pure-virtual classes) over inheriting concrete classes (extends, "plain" inheritance).
  • Prefer non-member, non-friend functions over member functions, prefer member functions over non-member friend functions, prefer friend functions over friend classes, in C++.
  • Know the differences between private, protected, and public inheritance in C++, and use the appropriate one.
  • Keep things in the smallest reasonable scope.
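
The first two bullets can be shown in one sketch: a pure-virtual interface instead of a concrete base class, with reuse coming from a composed component rather than from inheritance. All names here are illustrative:

```cpp
#include <string>
#include <utility>

// An interface: pure-virtual, carries no state, and exists only because
// "is renderable" is precisely the relationship we want to express.
struct Renderable {
    virtual ~Renderable() = default;
    virtual std::string render() const = 0;
};

// A reusable component -- shared behavior lives here, with no inheritance.
class Sprite {
public:
    explicit Sprite(std::string image) : image_(std::move(image)) {}
    std::string draw() const { return "draw(" + image_ + ")"; }
private:
    std::string image_;
};

class Player : public Renderable {           // implements the interface...
public:
    std::string render() const override { return sprite_.draw(); }
private:
    Sprite sprite_{"player.png"};            // ...and composes the component
};
```

Had Player instead inherited from a concrete SpriteObject base for convenience, it would be committed to that base's state and implementation details -- the sort of coupling the composition-first rule is meant to avoid.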

 

Those are just a few examples. Being a good engineer doesn't mean being the one who smugly wields tools of great power, confident you'll not fuck up. It's great to be able to do that when you have no other reasonable choice -- and you'll still fuck up -- but a trait of good engineers is that they seek out the solutions exposed to the minimum set of potential hazards while meeting requirements of (in mungable order) safety, performance, maintainability, usability, and ease of engineering.

 

Language features are not inherently evil (not even goto), but they are sometimes misapplied, and the more commonly they're misapplied, or the worse the repercussions, the worse their reputation becomes. Sometimes this is exacerbated by the way languages are taught, as is the case with how inheritance has come to have such a poor reputation. Sometimes it's exacerbated by the mistranslation of programming skills from one language to another; in general, a Java programmer (or a C# programmer, to a somewhat lesser degree) will *way* abuse inheritance if tasked to write C++ (and they'll probably leak memory like a sieve, too :) ).

 

TL;DR: Know thy tools, and program.





