Ravyne
Member Since 26 Feb 2007
Digipen grad and independent game developer in North Seattle, otherwise employed at Microsoft.
- Group GDNet+
- Active Posts 4,794
- Profile Views 23,351
- Submitted Links 0
- Member Title Member
- Age 33 years old
- Birthday June 10, 1983
Aside from game development: Computer Languages, Old school gaming (Particularly RPGs), Embedded Systems, Electronics.
Outstanding Forum Member
Posted by Ravyne on 26 May 2016 - 12:26 PM
Every single one of these has its own point-of-view, quirks, and warts. All of the engines have their own way of doing things that you need to learn to work with. Frameworks are sort of similar, except you're not so locked into doing things "their way," just by virtue of the fact that they do less and have fewer intertwined systems.
"Jack of all trades, master of none" as they say--most people take that to be an insult, but the full quote goes on to end with "--but better than a master of one." These engines are a great value proposition, but they don't fill that need for masterful execution (well, unless you're using UE4 to make a shooter). That's why in-house engines will always be a thing on the high-end.
On the low end, the mental lock-in and quirks of engines can be more hindrance than help for very simple or very unique small-scale games. Especially using those frameworks I mentioned, it can be less headache to roll your own purpose-built engine than to fight against an engine's natural currents to modify it to your needs; or simply to sidestep all those engine abstractions that are more complex than your game needs.
Where engines are worth their while is really in the middle ground -- your game is complex enough that rolling your own tech is more costly (money, time, risk, market opportunity), but not so unusual as to be a poor fit, and also not so complex or boundary-pushing that it risks outgrowing an off-the-shelf solution. Many games great and small fit into that box, and that's why Unity and Epic can staff hundreds of people behind these offerings and make healthy businesses of it.
Another side-effect, for good or ill, is that these engines instill a certain amount of liquidity among developers, and particularly those who aren't ninja-level engine developers. Unity and Unreal are concrete skill sets that you can recruit and hire on--before these engines became popular, every new hire had to spend time picking up the house engine, house scripting language, house toolchain, house pipeline. Nowadays that's still often true--but not at the rate it used to be. Part of the attraction of using Unity or Unreal among larger studios is that they gain a significant hiring pool (even including people who may not have a traditional CS background) and that those people can hit the ground running, more or less.
Posted by Ravyne on 18 May 2016 - 04:43 PM
I hate to rain on the anti-Microsoft parade, but all this advice to avoid Microsoft or vendor lock-in is tangential at best, and at the least seems outdated. But to start from fair ground, I'll throw out the disclaimer that I'm a writer (docs and such) on the Visual Studio team.
If you haven't been following along lately, Microsoft as a whole is really leaving the our-way-or-no-way mentality behind. To be frank, today's devs have more good options than was the case years ago, so there's a lot more mobility in dev tools, platforms, languages, etc. -- they don't accept our-way-or-no-way anymore. Microsoft's continued success and relevance actually requires them to get with that program, and so they have.
Today, Visual Studio is already a damn fine IDE for iOS, Android, Linux, and IoT development, in addition to the usual Microsoft platforms -- even just a couple years ago, Eclipse would have been basically the only "serious" IDE for those scenarios (and it's still got inertia today). For example, you can do your programming in Visual Studio on Windows today, and the build/run/debug commands will talk to a Linux box where your code will be built (using your typical Linux development stack), launched, and hooked to GDB; GDB in turn talks back to Visual Studio and it looks just like a local debugging session of your Windows apps. That's basically the same scenario for Linux-based IoT, Android, and iOS as I've described for Linux on the desktop and server. The Android tooling can target a local emulator running atop Windows virtualization, which is actually considered better than the stock emulators provided by other Android development environments, even if that sounds a bit unbelievable. Soon, you'll be able to run an entire Ubuntu Linux environment right inside Windows 10, so that developers will have all those familiar *nix tools right at hand.
Believe it or not, "old Microsoft" is basically dead and buried, especially in the server and developer tools division. They're pretty hellbent on making sure that Visual Studio is everyone's preferred IDE, regardless of what platform or scenario they're targeting -- and for those who like lighter-weight editors there's Visual Studio Code. Stuff is being open-sourced left and right, all our open-source development happens on GitHub, and a bunch of our docs and samples are already on GitHub too.
By all means, people should find and use whatever tools and platforms they like; they should target whatever platforms they like, and as many as they like. Odds are, Microsoft and Visual Studio are relevant to where you are and where you're going, or will be soon. It's silly to dismiss them just because they're Microsoft. I use lots of tools every day in my work here that came from the *nix world -- Vim, Git, and Clang to name a few -- and they serve me well; partisanship between open/free and proprietary software isn't a very worthwhile thing IMO, unless you're talking about the very philosophy of it all.
Posted by Ravyne on 17 May 2016 - 02:07 PM
Of course if you expect to release on console, and most people only have 1080p televisions, then you'd ship 1080p images for them, and that cuts the storage requirements by 75% immediately -- but even 25MB is still a lot of geometry and textures. Realistically, you probably want to release on PC too, and may only be able to release on PC since getting access to the consoles is not currently wide-open; on PC, 4k is an increasingly common thing -- you don't *have* to support it, but you ought to. And even if you choose not to now, you'll at least want to render and keep the files on hand at a very high resolution, because if you ever need to recover, retouch, or remaster the files, you'll want to start from those. As a rule of thumb, you usually want to keep a copy of all game assets at at least 2x greater fidelity than the greatest fidelity you can imagine shipping -- the basic reason is that you can always downsample without really losing information, while up-sampling always requires a guess, even if it's a really well-informed one.
Also, there's no conflict between pre-rendered/real-time and static images. You can have static backgrounds that are pre-rendered, or you can render them in real-time. In general, a static view of any scene, especially an indoor scene (or more generally, any scene dominated by near occluders) is going to render very quickly -- even if you render it fresh every frame, you're not making any costly changes to it.
Posted by Ravyne on 17 May 2016 - 01:17 PM
There's a more-extensive answer in my previous post, but TL;DR --
Pre-rendered backgrounds will be very large (4k resolution, if not 8k), you'll have as many as a half-dozen of them per scene, and you probably won't be able to apply the lossy compression techniques that give really good compression rates. Let's say you have five 4k buffers (color, depth, specular, normal, and occlusion) and get 60% compression on average -- if we assume that each buffer is 32 bits per element (some will be less, some might be more), that's going to be 5 x 32MB x 0.60 -- right around 100MB per scene. You can fit a *ton* of geometry and texture data into 100MB -- and there's a good chance you can re-use most textures and some geometry elsewhere, which lowers the effective cost of the run-time rendered solution even further.
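To sanity-check that arithmetic (assuming "4k" means 3840x2160 and "60% compression" means the compressed data is 60% of the raw size):

```python
# Back-of-the-envelope storage for one pre-rendered scene
width, height = 3840, 2160      # 4k buffer dimensions (assumed)
bytes_per_element = 4           # 32 bits per element
num_buffers = 5                 # color, depth, specular, normal, occlusion
compressed_fraction = 0.60      # compressed size as a fraction of raw (assumed meaning)

raw_mb_per_buffer = width * height * bytes_per_element / (1024 * 1024)
total_mb = num_buffers * raw_mb_per_buffer * compressed_fraction
print(f"{raw_mb_per_buffer:.1f} MB raw per buffer, {total_mb:.0f} MB per scene")
# -> 31.6 MB raw per buffer, 95 MB per scene
```

So "right around 100MB per scene" holds up, before you even count audio or any animated elements.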
Posted by Ravyne on 17 May 2016 - 12:59 PM
It really depends -- on the one hand, you can render very realistic scenes in realtime, and while this has a runtime cost associated with it, it also gives you the freedom to move the camera around naturally if you like. From a production standpoint, that flexibility means that someone like a designer can move a virtual camera around and get immediate feedback, rather than having to bring an artist or the content pipeline into the mix -- being able to iterate that rapidly is really helpful.
On the other hand, pre-rendered backgrounds can look really great for what's basically a fixed cost, meaning that you can run on a lower spec or pour more power into high-end rendering of characters and other movable objects. If you go back to Resident Evil -- or to Alone in the Dark, before that -- that's basically why they did it that way; they used pre-rendered backgrounds to deliver great scene detail combined with a relatively high number of relatively high-quality 3D models (the models in RE were as good as or better than comparable character models from 3D fighting games of the day, but with potentially many more onscreen).
If you were going to do pre-rendered backgrounds today, such that they mixed well with modern rendering techniques for non-prerendered elements, you would probably do something like a modern deferred renderer does -- you wouldn't have just a bitmap and depth buffer (like RE probably did); you'd have your albedo, normal, depth, specular, etc. buffers, draw your 3D objects into each of them, and then combine them all for the final framebuffer. You could do the static parts offline in your build chain, or you could even do them in-engine at each scene change.
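As an illustrative sketch (the channel names and depth convention here are my own assumptions, not how RE or any shipping engine actually did it), the per-pixel combine of a pre-rendered G-buffer with live-rendered objects boils down to a depth test:

```python
# Merge one pixel of a pre-rendered (static) G-buffer with the same pixel
# rendered live for dynamic objects: keep whichever sample is nearer.
# Each sample holds the G-buffer channels mentioned above.
def composite_gbuffer_pixel(static_sample, dynamic_sample):
    # Smaller depth = closer to the camera (assumed convention)
    if dynamic_sample["depth"] < static_sample["depth"]:
        return dynamic_sample
    return static_sample

background = {"albedo": (90, 80, 70), "normal": (0, 0, 1), "depth": 0.9}
character = {"albedo": (200, 150, 120), "normal": (0, 1, 0), "depth": 0.4}
print(composite_gbuffer_pixel(background, character)["albedo"])
# -> (200, 150, 120): the nearer character sample wins this pixel
```

Lighting then runs over the merged buffers exactly as it would in an ordinary deferred renderer, so the static background picks up the same lighting model as the dynamic objects.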
It's not cut-and-dried which approach (offline or runtime-at-scene-change) would occupy less disk space. Geometry usually isn't a big deal, and if you get a lot of re-use out of your textures and/or if they compress well, you could come out ahead with the runtime approach -- especially if you can utilize procedural textures. Offline (pre-rendered) images will have a fixed and small runtime cost, but will use a lot of disk space, because the buffers will be large (at least 4k, and probably even 8k) and you probably don't want to apply any lossy compression to them either.
Posted by Ravyne on 16 May 2016 - 04:32 PM
Which do you think should I particularly focus on? Where is the most revenue? I am in the view that if I spend my time learning unnecessary things (those which I will understand later are of little or no use in the future) then I will simply waste my time.
Please answer as descriptively and elaborately as possible, and if possible provide further references for statistical information. I'm serious.
Stop. How much time will you waste choosing this ideal platform? How much time have you already wasted? How much time will you have wasted when the decision you make proves wrong? Will you throw it all away to pursue your new choice from a fresh start?
None of us are omniscient. Some of the best minds are tasked with making their best guesses at what things will be like just 5 years from now, and still most of them are wrong most of the time -- 5 years is about the outside limit for what anyone is actually willing to bet serious money or resources on. People will think about 10+ years sometimes, but very rarely are they making any bets -- usually they're just looking for things to keep an eye on.
Take VR -- it was in arcades in the 90s. People were doing it even back then; we've had the basic idea and technical footing to pull it off all this time, but it wasn't clear if or how it could be brought to the mass market. The guys at what's now Oculus bet early and bet big (in blood, sweat, and tears -- not so much money) and showed the way to bring it to the masses -- only after that was anyone with real money or resources willing to place stakes on the table; some of the best technical minds with access to the deepest pockets on the planet didn't see the way on their own. And it's all well and good to say you want to be the next Oculus or the next Mojang, but reality is littered with 1000 wrong guesses for every right one.
Learn, do, and adapt is usually a better strategy than betting it all on a predestined outcome.
Posted by Ravyne on 16 May 2016 - 04:17 PM
Retro/homebrew is always intellectually interesting, but it's something people do more for the love and experience of doing it than in hopes of commercial success. The GBA is interesting hardware, with features and limitations that can force your hand to deliver some really inspired solutions -- it and the Dreamcast are usually the platforms I recommend for those interested in retro homebrew. Either platform is limited enough to prove a challenge, but capable enough to do interesting things; old enough to communicate the essence of old-school, to-the-metal development, yet not so archaic or downright weird that the lessons you'll learn won't benefit you in contemporary times -- they will.
Good luck, the GBA is an excellent platform for retro-styled RPGs.
Posted by Ravyne on 16 May 2016 - 04:07 PM
Is mipmaps the only difference between textures and images?
As he wrote, it depends on the context.
For some contexts they are the same thing.
For other contexts they refer to data formats. Highly compressed formats like png and jpg need much memory and space to be decoded before going to the video card, some formats such as S3/DXTC and PVRTC are supported directly by different video cards, so some systems call one pictures and the other textures.
Building on this, and on Josh's previous reply, the compression format or lack thereof is often driven by the image content itself. Textures representing real-life images, or "material" textures used in games, often compress very well, even to lossy compression formats, without a great deal of apparent visual fidelity loss. However, what you might typically call a "sprite" -- such as an animated character in a 2D fighting game, or any sort of "retro" look -- usually suffers too much quality loss from being converted to those kinds of compressed formats; instead, they might be left in an uncompressed format where their exact, pixel-perfect representation is preserved. On-disk formats like GIF or PNG can serve those images well, but GPUs don't understand them, so it's common to see those formats on disk, with a conversion to raw (A)RGB before hitting the GPU. An 8.8.8.8 ARGB image is 32 bits/pixel, while a highly-compressed texture format can get as low as 2 bits/pixel if the image content is suitable.
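The difference in footprint is easy to quantify; for a hypothetical 1024x1024 image:

```python
# Footprint of a 1024x1024 image at 32 bits/pixel (uncompressed 8.8.8.8 ARGB)
# versus 2 bits/pixel (an aggressively compressed GPU texture format)
width, height = 1024, 1024
argb_bytes = width * height * 32 // 8        # 4 MiB uncompressed
compressed_bytes = width * height * 2 // 8   # 256 KiB compressed
print(argb_bytes // compressed_bytes)        # -> 16, i.e. a 16x size difference
```

That 16x gap is why material-style textures almost always ship compressed, and why pixel-art sprites are the notable exception.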
Posted by Ravyne on 13 May 2016 - 04:33 PM
Camera space (or view space) is the space of the entire world, with the camera or viewpoint at the origin -- every coordinate of everything in the world is measured in units relative to the camera or viewpoint, but it's still a full 3D space.
Screen space is basically the pixels you see on your screen; it's a 2D space, but you might also have buffers other than the color buffer (the pixels you see), like the depth buffer. You can reconstruct a kind of 3D space, since you have an implicit (x, y) point that has an explicit depth (essentially z), and that's what gets used in these SSAO techniques, if I understand them correctly.
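A minimal sketch of that reconstruction (this assumes a standard perspective projection and a depth buffer holding NDC z; real APIs differ in their depth range and y-direction conventions):

```python
import numpy as np

def view_position_from_depth(px, py, ndc_depth, screen_w, screen_h, inv_projection):
    """Recover the view-space position behind pixel (px, py) from its depth value."""
    # Pixel center -> normalized device coordinates in [-1, 1]
    ndc_x = (px + 0.5) / screen_w * 2.0 - 1.0
    ndc_y = 1.0 - (py + 0.5) / screen_h * 2.0   # screen y grows downward
    # Undo the projection, then the perspective divide
    view = inv_projection @ np.array([ndc_x, ndc_y, ndc_depth, 1.0])
    return view[:3] / view[3]
```

Given a pixel's implicit (x, y) and its explicit depth sample, this yields the full view-space point that SSAO-style techniques then sample against.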
Posted by Ravyne on 10 May 2016 - 03:34 PM
And I'll add, nothing says you must DOD'ify everything in your program. If OOP suits a problem, and the difficulty-to-benefit ratio of DOD is unclear, and the OOP solution is not holding back adoption of DOD at higher levels where it would genuinely help, then it is probably wisest not to replace a principled OOP solution with a poor DOD one, just for DOD's sake.
Posted by Ravyne on 10 May 2016 - 03:30 PM
So, I can't go into deep specifics in the time available to me now, and I probably am not the right person to do so anyway -- but in general, it's not usually the case that design patterns (of which the scene graph is one) survive a transformation from OOP to DOD. You end up with something that serves basically the same purpose, but the organization necessary to make the transformation renders it unrecognizable as the original thing. DOD is really a call to re-imagine a solution from first principles, taking into account the machine's organization -- DOD is not a call to take your OOP patterns and architecture and just "reproject" them into the new paradigm.
But I'm speaking generally, and not to whether the scene graph specifically has a straight-forward-ish transformation. My gut says that DOD and graphs of most kinds are at odds, and nothing immediately comes to mind when I try to imagine a system that is logically a graph, but employs serious DOD underneath, while still achieving the performance benefits one would expect of DOD. You can do relatively straight-forward things, maybe, like making sure your scene-graph nodes compose only "hot" data, and while that would be of some benefit, it doesn't fundamentally change the graph representation.
That said, I'm not expert enough myself to believe that no one here is going to come along and disprove me.
Posted by Ravyne on 10 May 2016 - 12:34 PM
Fun is a very hard thing to pin down -- because it's composed of "fuzzy" sorts of concepts like challenge, choice, reward, punishment, variety, novelty, emergence, pace, flow, feel, and so much more.
Take a simple jump in a platformer, for instance. You can have a nice parabolic jump that mimics gravity, or you can have a linear up-down jump that doesn't. Both can otherwise have the same properties (like max height / max distance), and so they perform in basically the same way as far as level-design possibilities go. But the parabolic jump just feels nicer -- it has weight and gravity -- and that makes it fun; the linear jump feels cheap -- and that makes it boring.
That's not to say that realistic physics are automatically fun -- Super Meat Boy completely throws out realism to pursue "feel," something its creators have spoken about several times. Even in the original Super Mario Bros. for the NES, Mario and Luigi's jump arc is not physically realistic (they can jump several times their own height), and it doesn't even follow a single parabolic motion -- the parabolic arc they follow on the rising half of the jump is different (it "floats" more) than the falling half -- which gives their jump, and indeed the entire series, a very distinct feel.
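That two-arc jump is simple to sketch: use a gentler gravity while the character is still rising and a stronger one once they start to fall. The constants below are invented for illustration, not Nintendo's actual values:

```python
# One integration step of a "floaty rise, heavy fall" jump arc.
# Gravity constants are made up for illustration.
RISE_GRAVITY = -18.0   # applied while moving upward (floatier)
FALL_GRAVITY = -40.0   # applied once falling (snappier)

def step_jump(y, vy, dt):
    g = RISE_GRAVITY if vy > 0 else FALL_GRAVITY
    vy += g * dt           # semi-implicit Euler: velocity first,
    y += vy * dt           # then position
    return y, vy

# Simulate one full jump at 240 Hz: the rise takes longer than the fall.
y, vy, t, apex_t = 0.0, 12.0, 0.0, None
while y >= 0.0:
    y, vy = step_jump(y, vy, 1 / 240)
    t += 1 / 240
    if apex_t is None and vy <= 0:
        apex_t = t         # time at which the character starts falling
print(apex_t > t - apex_t)  # -> True: more time rising than falling
```

Tuning those two constants independently (plus, say, cutting to fall gravity when the jump button is released) is exactly the kind of knob that changes "feel" without changing the jump's mechanical envelope much.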
That's just a single concrete example, but it's illustrative of the fact that mechanical equivalence rarely implies an equal amount of fun to be had.
There are a wealth of videos on YouTube where games, design elements, and game mechanics are broken down in very critical and analytical ways. I recall being very impressed with one video which broke down how the camera tracking in Super Mario Bros. evolved throughout its 2D incarnations, which I wish I could find and link to right now. I'd also be remiss not to plug my coworker's excellent game design channel, Game Design Wit, but there are a lot of content creators doing these kinds of videos.
Edit: Found it -- How Cameras in Side-Scrollers Work
Posted by Ravyne on 10 May 2016 - 12:12 PM
The command pattern approach is an oft-cited solution.
If you're interested, Sean Parent gave a talk entitled "Inheritance Is the Base Class of Evil" (Channel 9 link), which is a brief 24 minutes. It's all about the benefits of preferring composition over inheritance and value semantics over reference semantics -- these things are fundamental to his overhauling of Photoshop's undo/redo, and he gets more specific about how that works by the end (I think from about the midpoint on, but it's been a while since I've watched it). Regardless, I recommend watching the whole thing -- it's short, and it's informative enough that I've watched it a handful of times over the 30 months it's been available.
Here's a YouTube link as well in case that's more convenient, but I think the Channel 9 video is better quality; the YouTube video is a third-party upload.
Also, Sean's presentations are always great, and never a poor way to spend a lunch break.
Posted by Ravyne on 09 May 2016 - 02:48 PM
The biggest difference between them, IMO, is that Unreal comes from an AAA lineage and has relatively recently started extending its reach down to mobile and indies, while Unity comes from a mobile (iOS) / indie lineage, and has been steadily extending its reach towards greater and greater AAA ambitions.
What this means for users is that they're really both converging toward similar capabilities, but they come at it from different beginnings. Both companies have a huge staff dedicated to ongoing engine development -- very capable people all around -- so you shouldn't make the mistake of assuming that Unreal is somehow more legitimate. In practice, Unreal has put a lot of effort into friendlier tooling with UE4, but there are still more, and sharper, rough edges than in Unity's tooling. Unity is more friendly for the casual developer, but sometimes the fact that they assume less of the average Unity user can get in the way -- usually you can get around it, but it sometimes seems like more work than it ought to be, or that what you need is more hidden.
Licensing is also a big difference -- both in terms of access to the C++ source code (which you might come to need for performance tuning) and in the cost to license either engine for commercial use. Unreal offers C++ source code access for free, while Unity charges ~$50,000, last I checked.
For usage, Epic wants 5% of your gross revenue above $3,000 per product, per year, but there's no seat license -- nice and simple; it's also entirely free if you're using it to make CG films, IIRC. Unity wants a $75/month subscription or a $1,500 one-time fee per seat, per platform package (e.g. extra iOS and Android features, consoles -- which I think carry a higher fee) for the Professional Edition, but they don't take a cut of your sales after that.
There's also a Personal Edition license for Unity that's basically free all up -- no royalties, no seat license fees -- and the engine is feature-complete. However, you lose some really nice non-engine features, you can't get C++ source without a professional license, and the personal licenses aren't available to any team that's made more than $100,000 in the previous year, or that's currently funded to more than $100,000 -- it's a viable option for a small team working on little or no budget, though. (And if it's relevant to your plans, keep in mind that if you did something like Kickstarter and collected more than $100k during a given year, that counts and you'll need to pay up.)
Depending on what platforms you target, how many developer seats you're licensing, and how many sales you expect, one of these options will save you money. If you make a lot of sales, Unity works out to be less expensive in the end -- the break-even point is lower or higher as a function of how many seats and platforms you license, and whether you need C++ source; but you pay Unity up front, regardless of whether you make any sales at all. Unreal costs more when you're successful, but it doesn't penalize you if you have a commercial failure -- 5% is really never a burden. When I worked it out once, basically if you make less than a couple hundred thousand in sales, Unreal is the cheaper option; beyond that, Unreal costs you more, but making "too much money" is a wonderful problem to have and you'll probably be overjoyed to give them their 5%. That 5% is definitely cheaper than a team of high-caliber engine developers.
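Taking the license terms as I've described them at face value (they may well have changed since), the break-even arithmetic looks like this:

```python
# Rough cost comparison using the terms described above: 5% of gross
# above $3,000 for Unreal; $1,500 one-time per seat per platform
# package for Unity Pro. Figures may be out of date.
def unreal_cost(gross_revenue):
    return 0.05 * max(0.0, gross_revenue - 3000)

def unity_cost(seats, platform_packages):
    return 1500 * seats * platform_packages

# With 2 seats and 1 platform package, Unity costs $3,000 up front.
# Unreal's royalty matches that at $63,000 gross, since
# 0.05 * (63000 - 3000) == 3000.
print(unreal_cost(63_000))  # -> 3000.0
print(unity_cost(2, 1))     # -> 3000
```

Note how the break-even revenue scales with seats and platform packages: a bigger team licensing more platforms pushes the crossover point higher, which is why I'd put it somewhere in the low-to-mid six figures for most small teams.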
That said, whichever is most comfortable and has learning resources and a community that suits you is probably the way to go. Your game is always more important than the engine, and these engines and toolsets are already close enough to parity that neither will block you from achieving your vision.
Posted by Ravyne on 07 May 2016 - 05:04 PM
- Save time by going with C#
- Use saved time for pushing compute-heavy algorithms to GPU instead
This is exactly what is happening at my job currently (MRI scanner). Most of the user-facing code is moving from C++ to C#. More and more of the high-performance code is moving from C++ to CUDA and the likes.
How would C# save my time? What about Java? Does it save time too? Python?
I may save some time and effort, but then I will hit the C# limitations I mentioned above!
It's an important distinction that we're not talking about core code here -- we're talking about the stuff that lets the user drive the application and then displays what they've done; we're talking about interacting with networks and services, files, etc. Generally, those things are much more painful in C++ than in C#, either because of standard language features (e.g. delegates, threading, and more) or because of the much larger standard library -- C++ has many libraries, but it's a hodge-podge that can sometimes be an extra burden to make work together. C# is as good at this kind of thing as C++ is at the kind of things you first mentioned.
CAD is certainly a high-performance graphics application, like games, but it's also a good deal more regular. With a smaller set of problems, there's been a lot of concentrated effort in studying CAD techniques. That helps CAD developers push more and more of the computationally heavy stuff onto the GPU, making the absolute performance of the CPU-side code less and less critical, as it is given less difficult responsibilities and simultaneously more time to do them in. I can easily imagine that a modern CAD application could be written basically in C# from the UI all the way down to a job queue, which would run job kernels either on the CPU (written in C++) or on the GPU (written in whatever GPGPU language flavor) -- and maybe supporting data structures also in C++.
No sir, I'm not talking about reading files or parsing strings, or even accessing system services. I'm talking about using something like CGAL high performance CSG libraries, and ray tracing SDKs.
Fine -- but you don't need those in the layers that interact with a human, or the OS, or networks and files; and probably most of their applications could be tucked into those jobs I talked about, or the surrounding data structures, or maybe just the whole core of your app. I'm not saying to eschew C++ entirely; I'm saying you could do as much of the heavy lifting as you'd like in C++ or on the GPU, with the *rest* of the application in C#. I'm saying it could save you time not to have to wrestle with C++ on those human/OS/networks-and-files issues, and that there's no performance argument there to justify the effort. I'm saying that the hodge-podge of C++ libraries makes it harder to hire a dev who's already familiar with your technology stack than centering on C# standards would. I'm saying you could hire three C# programmers to focus on those layers for the price of just two C++ programmers. I'm saying C# and its standard library are very well suited to those areas, and are a viable avenue to save you time in the long run over what setting up a hybrid C#/C++ project costs up front.
And despite all I'm saying in favor of this arrangement, I'm still only saying it's a consideration and a viable approach -- not a mandate, nor even a best practice. If you and your shop are happy with C++, and you can hire enough good C++ people, and your margins can support the salaries they demand as you and your local competition bid up offers competing for a scarcer and scarcer resource, then cool. You do you. What people are taking issue with is your seeming insistence that full-stack C++ is the only way to achieve performance -- it's not, and I think C++ need not go as far up the stack as you'd guess if you're leveraging GPUs.