
Ravyne

Member Since 26 Feb 2007

#5293857 Do you usually prefix your classes with the letter 'C' or something e...

Posted by Ravyne on 27 May 2016 - 02:07 PM

I've simplified to the essentials over the years. I use an 'I' prefix for interfaces, because C# still rolls that way and I like it. (Non-pure abstract base classes are suffixed with Base, usually.) Single underscore to label _private or _protected class variables*. And g_ for globals, because those should look gross. That's pretty much the extent of it.

 

* The underscore thing also works great in languages that don't HAVE private scoping, or when I don't actually private scope them but they're considered implementation details.

 

I follow this kind of minimalist approach as well. Prefixing classes with C, or variables with their atomic type, or similar things doesn't really tell you anything useful with modern tooling. Even people who work with text-mode editors like Emacs or Vi(m) will have ctags or some kind of IDE-like IntelliSense thing going on.

 

Scope is still something helpful to know so I do 'g_' prefixes for globals and 'm_' (or sometimes, just '_') prefixes for private/protected members. I also like to know things like whether a variable is static or volatile at a glance, so I use 's_' and 'v_' prefixes there -- these are rare though.

 

As far as general naming, I use plural word-forms when dealing with collections of things, and boolean variables/functions are almost always prefixed with a word like 'is' or 'has' to reveal the question being answered -- like 'is_dead' or 'has_children()', among other things similar to what DonaldHays quotes above.

 

In C++ (as are all of my examples above), I defer to my own personal axiom of "do as the standard library does" regarding naming and other externally-visible conventions. For example, this is the reason I now use lower-case-with-underscores style, rather than PascalCase or camelCase as I did at different points in the past. When I'm doing C#, my naming conventions follow the style of its standard libraries. My rationalization for why this axiom of mine is a good approach is that 1) these standard libraries represent the only style that has any credible claim that it "should be" the universal style, and 2) they've seen every odd corner case and combination thereof, and have laid down an answer; I don't have to waste brain-cells thinking it through and then being self-consistent on each rare occasion it comes up, months or years apart.
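
To make that concrete, here's a minimal C++ sketch of the conventions above -- the class and names are invented purely for illustration:

    #include <vector>

    int g_frame_count = 0;                    // g_ marks gross global state

    class entity_base {                       // non-pure base gets the Base-style suffix
    public:
        virtual ~entity_base() = default;

        bool is_dead() const { return _health <= 0; }            // booleans read as questions
        bool has_children() const { return !_children.empty(); }

    protected:
        int _health = 100;                    // leading underscore: implementation detail
        std::vector<entity_base*> _children;  // plural form for a collection
        static int s_live_count;              // s_ for statics (rare)
    };

    int entity_base::s_live_count = 0;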

 

Of course, if the place that makes sure your paychecks don't bounce has a house style -- and most do -- then you follow along with that, because it's part of what you're paid to do.




#5293847 Are Third Party Game Engines the Future

Posted by Ravyne on 27 May 2016 - 01:06 PM

@Ravyne

Though if you look at Unreal Engine 4, you will see a huge variety of different types of games being made with it. UE4 is very different to UE3. It supports large open worlds out of the box and small studios are making very big games with it. For example, Ark: Survival Evolved was created by a virtual team of indie developers. The engine is very flexible and although it may not be optimal for a specific genre, the source can be modified to make it optimal. An indie team starting out will not be making boundary-pushing games. If the team is successful and grows, they can hire more programmers and heavily modify the engine for more ambitious projects. UE4 is a lot more flexible. The Oculus team replaced UE4's renderer with their own for VR optimization and used it in the games they are making. They've also made this branch of the engine publicly available.

 

For sure, and UE4 is worlds better than 3 ever was -- I just meant to say that FPS is more-or-less the only "turn-key" option that Unreal gives you, or at least the one that's most turn-key. For all its flaws and over-engineered inefficiencies (keeping in mind that their "inefficiencies" are still faster than anything you or I would likely crank out without a ton of iteration), no one can deny that Unreal Engine has powered one of the top-tier FPS series. I'd wager there are still a lot of code and design decisions in there that stem from its FPS roots, though, obviously, they're going to be a lot more subtle and a lot less limiting than having an FPS-centric API surface.

 

On the topic of derision, what's popular and what's good are often barely-related axes. When I last interviewed for games positions, before my current full-time gig of 5 years, it was a popular interview practice to show you some piece of shipping code and point out the bugs, or suggest ways it could be improved. After a time, I realized that every single one of these exercises, at several different studios, used source code from UE3. That tells me two things: 1) that UE3 was not held up as any sort of paragon, and 2) that despite this, it was still popular enough that all these studios were familiar with the engine and cared that new hires could follow its source code.




#5293632 Are Third Party Game Engines the Future

Posted by Ravyne on 26 May 2016 - 12:26 PM

I think these engines are here to stay, and will improve over time; Unity is especially dominant in the indie space now, and Unreal is making inroads there (slowly) but has more mindshare among pro studios. CryEngine doesn't seem to have much uptake at all. Also in the indie space are simpler engines and frameworks like Cocos2d, MonoGame, DXTK, and others.

Every single one of these has its own point-of-view, quirks, and warts. All of the engines have their own way of doing things that you need to learn to work with. Frameworks are sort of similar, except you're not so locked into doing things "their way", just by virtue of the fact that they do less and have fewer intertwined systems.

"Jack of all trades, master of none" as they say--most people take that to be an insult, but the full quote goes on to end with "--but better than a master of one." These engines are a great value proposition, but they don't fill that need for masterful execution (well, unless you're using UE4 to make a shooter). That's why in-house engines will always be a thing on the high-end.

On the low end, the mental lock-in and quirks of engines can be more hindrance than help for very simple or very unique small-scale games. Especially using those frameworks I mentioned, it can be less headache to roll your own purpose-built engine than to fight against an engine's natural currents to modify it to your needs; or simply to sidestep all those engine abstractions that are more complex than your game needs.

Where engines are worth their while is really in the middle ground -- your game is complex enough that rolling your own tech is more costly (money, time, risk, market opportunity), but not so unusual as to be a poor fit, and also not so complex or boundary-pushing that it risks outgrowing an off-the-shelf solution. Many games great and small fit into that box, and that's why Unity and Epic can staff hundreds of people behind these offerings and make healthy businesses of it.

Another side-effect, for good or ill, is that these engines instill a certain amount of liquidity among developers, particularly those who aren't ninja-level engine developers. Unity and Unreal are concrete skill sets that you can recruit and hire on -- before these engines became popular, every new hire had to spend time picking up the house engine, house scripting language, house toolchain, house pipeline. Nowadays that's often still true -- but not at the rate it used to be. Part of the attraction of using Unity or Unreal among larger studios is that they gain a significant hiring pool (even including people who may not have a traditional CS background) and that those people can hit the ground running, more or less.


#5292361 Best gaming platform in the future with marketing perspective.

Posted by Ravyne on 18 May 2016 - 04:43 PM

I hate to rain on the anti-Microsoft parade, but all this advice to avoid Microsoft or vendor lock-in is tangential at best, and at the least seems outdated. But to start from fair ground, I'll throw out the disclaimer that I'm a writer (docs and such) on the Visual Studio team.

 

If you haven't been following along lately, Microsoft as a whole is really leaving the our-way-or-no-way mentality behind. To be frank, today's devs have more good options than was the case years ago, so there's a lot more mobility in dev tools, platforms, languages, etc. -- they don't accept our-way-or-no-way anymore. Microsoft's continued success and relevance actually requires them to get with that program, and so they have. Today, Visual Studio is already a damn fine IDE for iOS, Android, Linux, and IoT development, in addition to the usual Microsoft platforms -- even just a couple years ago, Eclipse would have been basically the only "serious" IDE for those scenarios (and it's still got inertia today). For example, you can do your programming using Visual Studio on Windows today, and the build/run/debug commands will talk to a Linux box where your code will be built (using your typical Linux development stack), launched, and hooked to GDB, and GDB in turn talks back to Visual Studio so it looks just like a local debugging session of your Windows apps. And it's basically the same scenario for Linux-based IoT, Android, and iOS as I've described for Linux on the desktop and server; the Android stuff can target a local emulator running atop Windows virtualization, and it's actually considered to be better than the stock emulators provided by other Android development environments, even if that sounds a bit unbelievable. Soon, you'll be able to run an entire Ubuntu Linux environment right inside Windows 10, so that developers will have all those familiar *nix tools right at hand.

 

Believe it or not, "old Microsoft" is basically dead and buried, especially in the server and developer tools division. They're pretty hellbent on making sure that Visual Studio is everyone's preferred IDE, regardless of what platform or scenario they're targeting -- and for those who like lighter-weight editors there's Visual Studio Code. Stuff is being open-sourced left and right, all our open-source development happens on GitHub, and a bunch of our docs and samples are already on GitHub too.

 

By all means, people should find and use whatever tools and platforms they like; they should target whatever platforms they like, and as many as they like. Odds are, Microsoft and Visual Studio are relevant to where you are and where you're going, or will be soon. It's silly to dismiss them just because they're Microsoft. I use lots of tools every day in my work here that came from the *nix world -- Vim, Git, and Clang to name a few -- and they serve me well; partisanship between open/free and proprietary software isn't a very worthwhile thing IMO, unless you're talking about the very philosophy of it all.




#5292137 Difference Between 2D Images And Textures?

Posted by Ravyne on 17 May 2016 - 02:07 PM

Of course if you expect to release on console, and most people only have 1080p televisions, then you'd ship 1080p images for them, and that cuts the storage requirements by 75% immediately -- but even 25MB is still a lot of geometry and textures. Realistically, you probably want to release on PC too, and may only be able to release on PC since getting access to the consoles is not currently wide-open; on PC, 4k is an increasingly common thing -- you don't *have* to support it, but you ought to. And even if you choose not to now, you'll at least want to render and keep the files on hand in a very high resolution, because if you ever need to recover, retouch, or remaster the files, you'll want to start from those. As a rule of thumb, you usually want to keep a copy of all game assets in at least 2x greater fidelity than the greatest fidelity you can imagine shipping -- the basic reason is that you can always downsample without really losing information, while up-sampling always requires a guess, even if it's a really well-informed one.

 

Also, there's no conflict between pre-rendered/real-time and static images -- "static" doesn't force a choice between the two. You can have static backgrounds that are pre-rendered, or you can render them in real-time. In general, a static view of any scene, especially an indoor scene (or more generally, any scene dominated by near occluders), is going to render very quickly -- even if you render it fresh every frame, you're not making any costly changes to it.




#5292125 Difference Between 2D Images And Textures?

Posted by Ravyne on 17 May 2016 - 01:17 PM

There's a more-extensive answer in my previous post, but TL;DR -- 

 

Pre-rendered backgrounds will be very large (4k resolution, if not 8k), you'll have as many as a half-dozen of them per scene, and you probably won't be able to apply lossy compression techniques to get really good compression rates. Let's say you have five 4k buffers (color, depth, specular, normal, and occlusion) and they compress to 60% of their raw size on average -- if we assume that each buffer is 32 bits per element (some will be less, some might be more), that's going to be 5 x 32MB x 0.60 -- right around 100MB per scene. You can fit a *ton* of geometry and texture data into 100MB -- and there's a good chance you can re-use most textures and some geometry elsewhere, which lowers the effective cost of the run-time rendered solution even further.
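
A quick back-of-the-envelope check of that figure, using the same assumptions (five 4k buffers, 32 bits per element, compressing to roughly 60% of raw size):

    #include <cstdio>

    int main() {
        const double width  = 3840.0;           // "4k" UHD
        const double height = 2160.0;
        const double bytes_per_element = 4.0;   // 32 bits
        const double buffer_count = 5.0;        // color, depth, specular, normal, occlusion
        const double compression_ratio = 0.60;  // compressed size / raw size

        const double raw_mb   = width * height * bytes_per_element / (1024.0 * 1024.0);
        const double scene_mb = raw_mb * buffer_count * compression_ratio;
        std::printf("one raw buffer: %.1f MB, per scene: %.1f MB\n", raw_mb, scene_mb);
        // Prints roughly 31.6 MB per buffer and ~95 MB per scene -- the
        // "right around 100MB" figure above.
    }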




#5292120 Difference Between 2D Images And Textures?

Posted by Ravyne on 17 May 2016 - 12:59 PM

It really depends -- on the one hand, you can render very realistic scenes in realtime, and while this has an associated runtime cost, it also gives you freedom to move the camera around naturally if you like. From a production standpoint, that flexibility means that someone like a designer can move around a virtual camera and get immediate feedback, rather than having to get an artist and the content pipeline tooling into the mix -- being able to iterate that rapidly is really helpful.

 

On the other hand, pre-rendered backgrounds can look really great for what's basically a fixed cost, meaning that you can run on a lower spec or pour more power into high-end rendering of characters and other movable objects. If you go back to Resident Evil -- or to Alone in the Dark, before that -- that's basically why they did it that way; they used pre-rendered backgrounds to give great scene detail combined with a relatively high number of relatively high-quality 3D models (the models in RE were as good or better than comparable character models from 3D fighting games of the day, but potentially with many more onscreen).

 

If you were going to do pre-rendered backgrounds today, such that they mixed well with modern rendering techniques for non-prerendered elements, you would probably do something like a modern deferred renderer does -- you wouldn't have just a bitmap and depth buffer (like RE probably did); you'd have your albedo, normal, depth, specular, etc. buffers, draw your 3D objects into each of them, and then combine them all for the final framebuffer. You could do the static parts offline in your build chain, or you could even do them in-engine at each scene change.
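
Just to ground the core idea, here's a toy CPU-side sketch of that composition step: the pre-rendered background supplies albedo and depth, and dynamic objects are depth-tested into the same buffers before the final shading pass. A real implementation would live on the GPU with full G-buffers; the names here are hypothetical.

    #include <cstddef>
    #include <cstdint>
    #include <vector>

    struct GBuffer {
        int width = 0, height = 0;
        std::vector<uint32_t> albedo;  // loaded from the pre-rendered background
        std::vector<float>    depth;   // pre-rendered depth for the same view
    };

    // Write one dynamic fragment, keeping whichever surface is nearer the camera.
    inline void write_fragment(GBuffer& gb, int x, int y, uint32_t color, float depth) {
        const std::size_t i = static_cast<std::size_t>(y) * gb.width + x;
        if (depth < gb.depth[i]) {     // standard "less-than" depth test
            gb.depth[i]  = depth;
            gb.albedo[i] = color;
        }
    }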

 

It's not cut-and-dried which approach (offline or runtime-at-scene-change) would occupy less disk space. Geometry isn't a big deal usually, and if you get a lot of re-use out of your textures and/or if they compress well, you could come out ahead with the runtime approach -- especially so if you can utilize procedural textures. Offline (pre-rendered) images will have a fixed and small runtime cost, but will use a lot of disk space, because the buffers will be large (you'd want to do at least 4k, and probably even 8k) and you probably don't want to apply any lossy compression to them either.




#5291962 Best gaming platform in the future with marketing perspective.

Posted by Ravyne on 16 May 2016 - 04:32 PM

Which do you think I should particularly focus on? Where is the most revenue? I am of the view that if I spend my time learning unnecessary things (those which I will understand later are of little or no use in the future) then I will simply waste my time.

...

Please answer as descriptively and elaborately as possible, and if possible provide further references for statistical information, I'm serious :mellow:.

 

Stop. How much time will you waste choosing this ideal platform? How much time have you already wasted? How much time will you have wasted when the decision you make proves wrong? Will you throw it all away to pursue your new choice from a fresh start?

 

None of us are omniscient. Some of the best minds are tasked with making their best guesses at what things will be like just 5 years from now, and still most of them are wrong most of the time -- 5 years is about the outside limit for what anyone is actually willing to bet serious money or resources on. People will think about 10+ years sometimes, but very rarely are they making any bets -- usually they're just looking for things to keep an eye on.

 

Take VR -- it was in arcades in the 90s. People were doing it even back then; we've had the basic idea and technical footing to pull it off all this time, but it wasn't clear if or how it could be brought to the mass-market. The guys at what's now Oculus bet early and bet big (in blood, sweat, and tears -- not so much money) and showed the way to bring it to the masses -- only after that was anyone with real money or resources willing to place stakes on the table; some of the best technical minds with access to the deepest pockets on the planet didn't see the way on their own. And it's all well and good to say you want to be the next Oculus or the next Mojang, but reality is littered with 1000 wrong guesses for every right one.

 

Learn, do, and adapt is usually a better strategy than betting it all on a predestined outcome.




#5291960 [GBA] Kingdom of Twilight a retro rom

Posted by Ravyne on 16 May 2016 - 04:17 PM

Retro/homebrew is always intellectually interesting, but it's something people do more for the love and experience of doing it than for hopes of commercial success. It's interesting hardware, with features and limitations that can force your hand to deliver some really inspired solutions -- the GBA and the Dreamcast are usually the platforms I recommend for those interested in retro homebrew. Either platform is limited enough to prove a challenge, but capable enough to do interesting things; old enough to communicate the essence of old-school, to-the-metal development, yet not so archaic or downright weird that the lessons you'll learn won't benefit you in contemporary times -- they will.

 

Good luck, the GBA is an excellent platform for retro-styled RPGs.




#5291958 Difference Between 2D Images And Textures?

Posted by Ravyne on 16 May 2016 - 04:07 PM

 

Are mipmaps the only difference between textures and images?

 

As he wrote, it depends on the context.

 

For some contexts they are the same thing.

 

For other contexts they refer to data formats. Highly compressed formats like PNG and JPG need a fair amount of memory and processing to be decoded before going to the video card, while some formats such as S3TC/DXTC and PVRTC are supported directly by different video cards, so some systems call one "pictures" and the other "textures".

 

Building on this, and on Josh's previous reply, the compression format or lack thereof is often driven by the image content itself. Textures representing real-life images or "material" textures used in games often compress very well, even to lossy compression formats, without a great deal of apparent visual fidelity loss. However, what you might typically call a "sprite" -- such as an animated character in a 2D fighting game, or any sort of "retro" look -- usually suffers too much quality loss from being converted to those kinds of compressed formats; instead, they might be left in an uncompressed format where their exact, pixel-perfect representation is preserved. On-disk formats like GIF or PNG can serve those images well, but GPUs don't understand them, so it's common to see those formats on disk, with a conversion to raw (A)RGB before hitting the GPU. An 8.8.8.8 ARGB image is 32 bits/pixel, while a highly-compressed texture format can get as low as 2 bits/pixel if the image content is suitable.
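
As a minimal sketch of that on-disk vs. in-GPU split -- decoding a PNG to raw 32-bit RGBA on the CPU, which is what you'd then hand to the GPU (or convert to a block-compressed format offline). This assumes the stb_image library; the file name is hypothetical.

    #include <cstddef>
    #include <cstdio>
    #define STB_IMAGE_IMPLEMENTATION   // pull in the implementation in this one file
    #include "stb_image.h"

    int main() {
        int w = 0, h = 0, channels = 0;
        // Force 4 channels so we always end up with 8.8.8.8 RGBA, 32 bits/pixel.
        unsigned char* pixels = stbi_load("sprite.png", &w, &h, &channels, 4);
        if (!pixels) return 1;

        const std::size_t raw_bytes = static_cast<std::size_t>(w) * h * 4;  // uncompressed size
        std::printf("%dx%d sprite: %zu bytes of raw RGBA\n", w, h, raw_bytes);

        // ...upload 'pixels' as an uncompressed texture here...
        stbi_image_free(pixels);
    }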




#5291462 What is Camera space ?

Posted by Ravyne on 13 May 2016 - 04:33 PM

Camera space (or view space) is the space of the entire world, with the camera or viewpoint at the origin -- every coordinate of everything in the world is measured in units relative to the camera or viewpoint, but it's still a full 3D space.

 

Screen space is basically the pixels you see on your screen; it's a 2D space, but you might also have buffers other than the color buffer (the pixels you see), like the depth buffer. You can reconstruct a kind of 3D space since you have an implicit (x, y) point that has an explicit depth (essentially z), and that's what gets used in these SSAO techniques, if I understand them correctly.
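
Here's a tiny sketch of "camera space" in practice: the same world-space point expressed relative to the camera's position and orientation. The camera basis here is made up; a real engine would pull it from its view matrix.

    #include <cstdio>

    struct Vec3 { float x, y, z; };

    inline Vec3 sub(Vec3 a, Vec3 b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
    inline float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

    // right/up/forward are the camera's (orthonormal) axes in world space.
    Vec3 world_to_camera(Vec3 p, Vec3 cam_pos, Vec3 right, Vec3 up, Vec3 forward) {
        const Vec3 rel = sub(p, cam_pos);   // translate: the camera becomes the origin
        return { dot(rel, right), dot(rel, up), dot(rel, forward) };  // rotate into camera axes
    }

    int main() {
        Vec3 p = world_to_camera({ 5, 0, 10 }, { 0, 0, 2 },
                                 { 1, 0, 0 }, { 0, 1, 0 }, { 0, 0, 1 });
        std::printf("camera-space point: (%g, %g, %g)\n", p.x, p.y, p.z);  // (5, 0, 8)
    }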




#5291030 Data-oriented scene graph?

Posted by Ravyne on 10 May 2016 - 03:34 PM

And I'll add, nothing says you must DOD'ify everything in your program. If OOP suits a problem, and the difficulty-to-benefit ratio of DOD is unclear, and the OOP solution is not holding back adoption of DOD at higher levels where it would be beneficial, then it is probably wisest not to replace a principled OOP solution with a poor DOD one, just for DOD's sake.




#5291028 Data-oriented scene graph?

Posted by Ravyne on 10 May 2016 - 03:30 PM

So, I can't go into deep specifics in the time available to me now, and I probably am not the right person to do so anyway -- but in general, it's not usually the case that design patterns (of which the scene graph is one) survive a transformation from OOP to DOD. You end up with something that serves basically the same purpose, but the organization necessary to make the transformation renders it unrecognizable as the original thing. DOD is really a call to re-imagine a solution from first principles, taking into account the machine's organization -- DOD is not a call to take your OOP patterns and architecture and just "reproject" them into this new paradigm.

 

But I'm speaking generally, and not to whether the scene graph specifically has a straightforward-ish transformation. My gut says that DOD and graphs of most kinds are at odds, and nothing immediately comes to mind when I try to imagine a system that is logically a graph but employs serious DOD underneath, while still achieving the performance benefits one would expect of DOD. You can do relatively straightforward things, maybe, like making sure your scene-graph nodes compose only "hot" data, and while that would be of some benefit, it doesn't fundamentally change the graph representation.
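
For what it's worth, here's a rough sketch of that "hot data only" idea: the data the per-frame transform pass actually touches lives in tight parallel arrays, while cold data (names, editor metadata) lives elsewhere, keyed by the same index. Field names are hypothetical, and the "transform concatenation" is a toy translation-only version.

    #include <cstddef>
    #include <cstdint>
    #include <string>
    #include <vector>

    struct Transform { float x, y, z; };

    struct SceneNodesHot {                 // structure-of-arrays, walked every frame
        std::vector<Transform> local;
        std::vector<Transform> world;
        std::vector<int32_t>   parent;     // index of parent, -1 for roots
    };

    struct SceneNodeCold {                 // touched rarely (tools, debugging)
        std::string name;
        std::string source_asset;
    };

    // Assumes parents are stored before children, so one linear pass resolves the graph.
    void update_world_transforms(SceneNodesHot& nodes) {
        for (std::size_t i = 0; i < nodes.local.size(); ++i) {
            const int32_t p = nodes.parent[i];
            const Transform& l = nodes.local[i];
            if (p < 0) { nodes.world[i] = l; continue; }
            const Transform& pw = nodes.world[p];
            nodes.world[i] = { pw.x + l.x, pw.y + l.y, pw.z + l.z };
        }
    }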

 

That said, I'm not expert enough myself to believe that no one here is going to come along and disprove me :)




#5291002 My game ends up being boring

Posted by Ravyne on 10 May 2016 - 12:34 PM

Fun is a very hard thing to pin down -- because it's composed of "fuzzy" sorts of concepts like challenge, choice, reward, punishment, variety, novelty, emergence, pace, flow, feel, and so much more.

 

Take a simple jump in a platformer, for instance. You can have a nice sort of parabolic jump that mimics gravity, or you can have a linear up-down jump that doesn't. Both of these things can otherwise have the same properties (like max height / max distance) and so they perform in basically the same way as far as level design possibilities go. But the parabolic jump just feels nicer -- it has weight and gravity -- and that makes it fun; the linear jump feels cheap -- and that makes it boring. That's not to say that realistic physics are fun -- Super Meat Boy completely throws off realism to pursue "feel" 110%, which its creators have spoken about several times. Even in the original Super Mario Brothers for the NES, Mario and Luigi's jump arc is not physically realistic (they can jump several times their height), and it doesn't even follow a single parabolic motion -- the parabolic arc they follow on the initial rising half of the jump is different (it "floats" more) than the falling half -- which gives their jump, and indeed the entire series, a very distinct feel.
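
A minimal sketch of that "different gravity on the way up vs. the way down" trick -- the constants here are invented, not Mario's actual values, and you'd tune them entirely by feel:

    struct Jumper {
        float y  = 0.0f;         // height above the ground
        float vy = 0.0f;         // vertical velocity, +up
        bool  on_ground = true;
    };

    void jump(Jumper& j, float impulse = 12.0f) {
        if (j.on_ground) { j.vy = impulse; j.on_ground = false; }
    }

    void step(Jumper& j, float dt) {
        // Floatier parabola while rising, heavier one while falling.
        const float rise_gravity = 20.0f;
        const float fall_gravity = 45.0f;
        j.vy -= (j.vy > 0.0f ? rise_gravity : fall_gravity) * dt;
        j.y  += j.vy * dt;
        if (j.y <= 0.0f) { j.y = 0.0f; j.vy = 0.0f; j.on_ground = true; }
    }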

 

That's just a single concrete example, but it's illustrative of the fact that mechanical equivalence rarely implies that there's a similar amount of fun to be had.

 

 

There are a wealth of videos on YouTube where games, design elements, and game mechanics are broken down in very critical and analytical ways. I recall being very impressed with one video which broke down how the camera tracking in Super Mario Bros evolved throughout its 2D incarnations, which I wish I could find and link to right now. I'd be remiss not to plug my coworker's excellent game design channel, Game Design Wit, but there are a lot of content creators doing these kinds of videos.




#5290997 Custom editor undo/redo system

Posted by Ravyne on 10 May 2016 - 12:12 PM

The command pattern approach is an oft-cited solution.
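
For reference, here's a bare-bones sketch of that command pattern for undo/redo -- a real editor would carry richer state, but the shape is the same; names are hypothetical.

    #include <memory>
    #include <vector>

    struct Command {
        virtual ~Command() = default;
        virtual void apply() = 0;
        virtual void revert() = 0;
    };

    class History {
    public:
        void perform(std::unique_ptr<Command> cmd) {
            cmd->apply();
            done_.push_back(std::move(cmd));
            undone_.clear();                   // a new edit invalidates the redo stack
        }
        void undo() {
            if (done_.empty()) return;
            done_.back()->revert();
            undone_.push_back(std::move(done_.back()));
            done_.pop_back();
        }
        void redo() {
            if (undone_.empty()) return;
            undone_.back()->apply();
            done_.push_back(std::move(undone_.back()));
            undone_.pop_back();
        }
    private:
        std::vector<std::unique_ptr<Command>> done_;
        std::vector<std::unique_ptr<Command>> undone_;
    };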

 

If you're interested, Sean Parent gave a talk entitled "Inheritance is the Base Class of Evil" (Channel 9 link), which is a brief 24 minutes. It's all about the benefits of preferring composition over inheritance and value semantics over reference semantics -- these things are fundamental to his overhauling of Photoshop's undo/redo, and he gets more specific about how that works by the end (I think from about the midpoint on, but it's been a while since I've watched it). Regardless, I recommend watching the whole thing -- it's short, and it's informative enough that I've watched it a handful of times over the 30 months it's been available.

 

Here's a YouTube link as well, in case that's more convenient, but I think the Channel 9 video is better quality; the YouTube video is a third-party upload.

 

Also, Sean's presentations are always great, and never a poor way to spend a lunch break.





