
Ravyne

Member Since 26 Feb 2007

#5233018 Thinking with Windows

Posted by Ravyne on 05 June 2015 - 02:31 PM

The biggest things are that A) the native controls simply feel out of place in games, so a game using native controls feels less polished, and B) using native controls in a multimedia app (e.g. one using, say, Direct3D) means you then have to allow for your renderer to cooperate with the operating system's own drawing of controls, which has a performance/latency impact and sometimes also limits what you can do.

 

You can generally alleviate A) if you put in the work to skin the controls, but at that point you're just about writing your own UI anyway. It's not that typical UIs are bad at what they do, it's that they do something different from what most games want, and so they're often more of a hindrance to work around than they are a help.




#5232609 What kind of infrastructure is found in game or software studio's

Posted by Ravyne on 03 June 2015 - 12:45 PM

The artists are likely using Quadro/FireGL cards. The main benefits of those cards are that 1) the configuration/firmware/drivers go through very extensive validation with professional graphics applications like Photoshop or Maya, 2) the drivers are optimized and tuned for those applications, and 3) all of it together favors stability over performance -- usually the silicon is the same as the gaming GPUs, just used more conservatively. You'll sometimes get a larger amount of VRAM, or fully-unlocked double-precision performance, but those are usually the only hardware differences.

 

At an AAA studio working on a high-end game, Promit's spot-on. Keep in mind that AAA games are being built for hardware 3-5 years in the future, the devs have to be able to run code reasonably well even before it's optimized, and they do a lot of local builds of the parts they're working on, plus tests and tools. And again, it comes down to whether it makes any sense to skimp on a dev box when that body is costing you $150k or more to put in a seat. Sure, having twice the CPU doesn't make your programmer twice as efficient, but if it makes them even 5% more efficient over the course of a year, a higher-end box pays for itself -- I'd wager that a powerful PC makes a typical dev at least 15% more efficient, and a skilled dev even more so. Also, because those resources will at times go idle (e.g. while writing code and doing not much else), those machines are sometimes conscripted into the build farm as slave nodes, putting the excess CPU to work helping out the studio's infrastructure.




#5232484 An Exact Algorithm for Finding Minimum Oriented Bounding Boxes

Posted by Ravyne on 02 June 2015 - 04:56 PM


Thanks for the comments! Orthopodal sounds fine as well, although I am not keen on too fancy naming, but feel free to use the terms interchangeably.

 

I think I'd side with Eric -- Ortho would seem more proper, and isn't overly fancy (ortho as in orthographic, like everyone knows), IMO.

 

Looks like great work, thanks! I've only scanned it while at work, and I'm still digesting it since I'm not an especially good mathematician, but it appears you might have found something quite significant. You say you're not an academic, but the standard of work seems up to it. Did anything in particular inspire you to present your findings so rigorously?




#5232469 VB.net limitation question

Posted by Ravyne on 02 June 2015 - 03:23 PM

VB.Net shares the same compiler/JIT/runtime technology as C#, and the two have had a decree of maintaining feature parity for a while now. You can basically think of them as two dialects that speak differently but otherwise have identical capacity for communicating powerfully. C# is neither faster nor slower than VB.Net in mainstream terms -- I suppose it's possible that the patterns one or the other encourages might be marginally slower or faster in specific cases, but you should mostly be free to defy those patterns. I don't hesitate at all to say that, between the two, you should use what's most comfortable, because neither has a clear performance or expressive advantage. I happen to think that C# is the better language, myself, and probably a significant majority would agree, but that's no good reason to dump VB.Net if you're already familiar with it.

 

That said, there are ecosystem differences -- as I said, most people seem to prefer C# by a significant margin, so it's easier to find people to help you out if you're writing your code in C#. Likewise, there tend to be more tutorials and references written in C#, and some of the VB.Net code you'll find is a straight-line port of the C# version, which may not be idiomatic VB.Net. Moreover, most .NET libraries are written in C#, so you'll eventually want to be comfortable at least reading it. All of those things combined tend to mean that any VB.Net programmer who sticks with programming long enough will eventually also pick up C# (except, perhaps, those whose job is solely to maintain large, legacy VB code bases, as sometimes exist in enterprises).

 

If you were new and asking "Should I learn C# or VB.Net?", I would hands-down recommend C#, but since you're already coming in with VB.Net experience, there's no need to drop what you're doing now to take a break and learn C# before returning to your project. You will probably want to learn C# before long, and that's a good thing, but my advice is to let that come to you as you need it, unless you're simply itching to learn a new language, or are finding that not knowing C# is holding you back right now.




#5232235 I want to write code for the PS2, because it's old.

Posted by Ravyne on 01 June 2015 - 03:39 PM

I'm curious; how come you don't want to work on the Dreamcast anymore?

 

Me too. The Dreamcast is far more approachable than the PS2, and the GBA would be more approachable still. I don't think any homebrew platform offers anything in the way of access to the mainstream gamer. XNA was a decent attempt put to pasture before its time had come; ID@XBOX is looking good, though.

 

There's also a new cartridge-based retro game system called the Retro VGS being developed by the folks behind Retro Magazine. It's never going to be a huge commercial success, but it might one day have a decent enough user base to at least make the programming worthwhile, even if it never made you rich or famous. The circulation of the magazine is pretty stellar for what one would assume is a pretty niche market (that is, an actual dead-tree print magazine about retro games and consoles, circulating in 2015), and if that same audience can be tapped for this new console it could prove interesting. They've even teased that some old-school programmers are on board to bring new sequels to classic games, exclusively on the Retro VGS. They're still finalizing things, but they're far enough along that people are already well into developing the first round of titles. Consider looking into that.




#5231539 How To Do RPG Interiors

Posted by Ravyne on 28 May 2015 - 01:51 PM

The blended design is an interesting approach, but at that point I kind of wonder what the benefit of maintaining spatial reference to the exterior is. To me, the spatial reference only matters if there's a 1:1 correspondence to the interior space. I can't dismiss the blended approach out of hand, but I'd want to identify some other benefit that carries its weight; otherwise just do a slick transition and let me have my whole screen, thanks.

 

 

Another approach to interior shop/inn/dialog spaces would be to do away with navigating the interior space altogether. Rather than walking around in overhead view, maybe just transition to a nice illustration of the shopkeeper and their wares from the character's POV -- you press into the exterior door, the external world dims, you conduct your business, and then when you're done the illustration fades away, the overworld screen brightens, and your character is outside, facing away from the door.

 

I wouldn't want that to be my only option for interior spaces, as it would eliminate a great deal of exploration/looting opportunities, but if you had such a system in place, it would be just as easy to apply it to an interior space (by walking up to the shop-keep, as per tradition) as to a shop building (by walking up to the door, as described above), or to a booth/cart in an exterior space.




#5231309 Asset Loaders/Managers: One for all or separate instances?

Posted by Ravyne on 27 May 2015 - 12:51 PM


I think that code solves the problem quite nicely.

Personally I wouldn't use va_args and would instead use c++11 initializer lists which are safer and more flexible.

Most modern compilers support them e.g. recent gcc and visual studio.

 

An initializer list might be a suitable solution for some use cases, such as when the number of initializing values isn't known statically, but I think in most cases in asset loaders/managers you do know what the requisite parameters are for each asset type. In that case, you probably don't want initializer lists and should prefer variadic templates combined with perfect forwarding instead.
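
For illustration, here's a minimal sketch of that approach (the names AssetCache/Load are hypothetical, not from the code being discussed): the arguments each asset type needs are forwarded straight through to its constructor.

#include <memory>
#include <string>
#include <unordered_map>
#include <utility>

template <typename Asset>
class AssetCache {
public:
    // Forward whatever construction parameters this asset type requires.
    template <typename... Args>
    std::shared_ptr<Asset> Load(const std::string& key, Args&&... args) {
        auto it = cache_.find(key);
        if (it != cache_.end())
            return it->second;                                  // already loaded
        auto asset = std::make_shared<Asset>(std::forward<Args>(args)...);
        cache_.emplace(key, asset);
        return asset;
    }

private:
    std::unordered_map<std::string, std::shared_ptr<Asset>> cache_;
};

// Usage (hypothetical Texture type): each asset type takes its own parameters,
// checked at compile time rather than passed through a varargs list.
// auto tex = textureCache.Load("bricks", "bricks.png", /*generateMips=*/true);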




#5231301 Entity/Component System - Understanding Event Priority

Posted by Ravyne on 27 May 2015 - 12:29 PM

Lots of things to consider:

  1. Is (or do you need) your system to be deterministic?
  2. Is your system multithreaded, and how does that impact its design with regard to point 1?
  3. Do you have events that require an immediate response from subsequent events before continuing (something like double-dispatch of events)?
  4. Does your environment provide coroutines/green threads/futures or any suitable suspend-resume feature?
  5. Do you process events entity-wise (all events for each entity, then on to the next entity) or event-wise (all events of a type, then on to the next event type)?

I think the most scalable/flexible solution would probably be some kind of "fence" system -- basically a means of suspending one execution path until some preconditions have been met. For example, you could put a fence on processing sound events until physics events are complete (because, e.g., physics might spawn a sound). You can do this kind of coarse-grained dependency ordering just by controlling the general order in which events are processed, but that falls apart when different entities have different needs. Something like a fence on a particular entity could solve that problem, but of course you risk creating cycles or deadlocks, and debugging can be more difficult.
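
To make the fence idea concrete, here's a minimal sketch (the PhaseFence class is hypothetical, not a full event system): one system signals when its phase is done, and a dependent system waits on that before draining its own queue.

#include <algorithm>
#include <condition_variable>
#include <cstdint>
#include <mutex>

class PhaseFence {
public:
    // Called by the producing system once its phase (e.g. "physics") completes.
    void Signal(std::uint64_t phase) {
        {
            std::lock_guard<std::mutex> lock(mutex_);
            completedPhase_ = std::max(completedPhase_, phase);
        }
        cv_.notify_all();
    }

    // Called by a dependent system (e.g. "sound") before it drains its queue.
    void Wait(std::uint64_t phase) {
        std::unique_lock<std::mutex> lock(mutex_);
        cv_.wait(lock, [&] { return completedPhase_ >= phase; });
    }

private:
    std::mutex mutex_;
    std::condition_variable cv_;
    std::uint64_t completedPhase_ = 0;
};

// Usage: physics calls physicsFence.Signal(frameNumber) after its events are
// processed; the sound system calls physicsFence.Wait(frameNumber) first.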




#5231298 Retro graphics and game atmosphere.

Posted by Ravyne on 27 May 2015 - 12:11 PM

Slapping 32 colors on something and calling it retro is about as effective as slapping candy-apple red on your Buick and calling it a Ferrari. In a game's art direction, everything needs to mesh together if you want to evoke a true sense of nostalgia. Taking NES-style retro games as an example, the feeling of nostalgia can be broken by rather unassuming things, even if everything else is right -- for example, if sprites are too large, have too many simultaneously-moving parts, or are just too smoothly animated, you lose the nostalgic effect. As a rule of thumb, if you want to evoke nostalgia, you need to demonstrate period- or platform-appropriate limitations, though you don't have to simulate the deleterious effects of the hardware (e.g. certain NES games would slow to half-speed when too many sprites were on screen; other games would flicker sprites or parts of sprites).

 

 

I find 3D retro to be a pretty hard aesthetic to pull off. I've seen one retro FPS recently that successfully pulled off a Quake II aesthetic, and I've seen, in passing, some modern turn-based first-person dungeon crawlers too -- but that's basically it. Part of the problem is that when we think of nice 2D retro games, we're talking about replicating the NES or SNES era, which was in some ways the (mainstream, at least) peak of 2D artistry -- that's what people replicate because it was good; almost no one is creating Atari-level retro games, because they weren't that good or compelling. Most of what we think of as "retro 3D" is the Atari of 3D graphics -- no one wants retro 3D like the PlayStation One. I'd argue that the PS2 and its contemporaries are sort of the NES-level of 3D evolution, and the PS3 and its contemporaries are SNES-level. Both generations are too recent for there to be much nostalgia built up -- maybe in another 10 years, when the grade-schoolers who grew up with those systems become 30-somethings. But another hurdle is that most of the drive behind 3D advances has been photo-realism rather than style, while the opposite was true of 2D games -- it's something of an open question whether anyone's interested in what state-of-the-art photo-realism looked like 20 years ago, and 'retro photo-realism' is honestly a bit of an oxymoron.




#5230129 tile objects in 2D RPG

Posted by Ravyne on 20 May 2015 - 04:21 PM

In general, you would just store an index, UID, handle, pointer or other 'name' in each tilemap cell, which refers to data that describes that tile. That's the easy part -- you have one unique thing (the tile) that reappears in many places, so you tag each place with a way to find the thing, then look it up; this pattern appears all over the place, not just in maps, not just in games.

 

A more interesting question is how you want to store the map cells. The easiest way is a 2D or 3D array, dynamically allocated so that you can load maps of different sizes. This can be a bit wasteful of memory (e.g. if you have 'overhead' map layers with lots of empty space, you'll store a lot of 'empty' tiles in those cells). That's not a problem on modern systems, though it can be a large annoyance if you have just one or a few very sparse layers (you spend an entire layer to represent just a few overhead tiles, multiplied by the number of layers needed). You can have a 2D array where each cell is a linked list, which saves that unused memory but can be more costly to iterate over. Or you can store the map in smaller chunks (say 8x8 tiles) and stream in only the chunks you need. Lots of options with different properties and tradeoffs.
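
Here's a minimal sketch of the basic index-into-a-tile-table idea from above (the TileDesc/TileLayer names are hypothetical): each cell holds only a small tile ID, which is used to look up the shared per-tile data.

#include <cstddef>
#include <cstdint>
#include <vector>

struct TileDesc {
    int  textureIndex;   // which image to draw for this tile
    bool walkable;       // gameplay flags, collision, etc.
};

class TileLayer {
public:
    TileLayer(int width, int height)
        : width_(width),
          cells_(static_cast<std::size_t>(width) * height, 0) {}   // dynamically sized

    void Set(int x, int y, std::uint16_t tileId) { cells_[y * width_ + x] = tileId; }

    // Each cell stores only an index; the shared descriptions live in one table.
    const TileDesc& At(const std::vector<TileDesc>& tileTable, int x, int y) const {
        return tileTable[cells_[y * width_ + x]];
    }

private:
    int width_;
    std::vector<std::uint16_t> cells_;
};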




#5229680 How To Do RPG Interiors

Posted by Ravyne on 18 May 2015 - 02:38 PM

I think it's less about picking one or the other and more about doing what's right in each instance. The main issue, IMO, is transition time, and if you go to a "separate area" (a cordoned-off portion of the same map outside the main area, or a different map entirely) it's not difficult to make the transition fast and smooth.

 

Doing transparent roofing is necessarily limiting -- you have exactly the outside footprint of the building in which to build an interior that suits your design needs. I suspect you'll very quickly find yourself making building exteriors much larger than they need to be to support larger interiors, and that will knock on to all the other buildings too, regardless of what goes on inside them, just to maintain a sense of scale. Another difficulty is what to do about buildings with multiple floors or basements -- how will your engine handle those?

 

 

IMO transitions to separate areas are best for consistency and to avoid the problems I've mentioned with more grace. By pre-caching these separate areas while the player is exploring the external areas you can make sure the transition is fast and smooth.

 

A different solution to the "avoid separate areas for quick things like vendors" problem is to give them a booth, market stall, or olde-timey wagon-carriage to sell their goods from, rather than the inside of a building.




#5228427 Can I do MIMD on SIMD architecture?

Posted by Ravyne on 11 May 2015 - 02:10 PM


I think there was never hardware that executes real MIMD. Some stream processors kind of support it: they use VLIW and execute different operations on every register lane, but the instruction streams are always interleaved into one real stream, hence there can be no divergent branching.

 

The latest PowerVR Rogue architecture might be close -- given its applicability to the OP, this is a bit of a non sequitur -- but it's real silicon, and a non-SIMD GPU that can actually do divergent branching, as I understand it.

 

For what it's worth, I suspect that when we do see real implementations, low-power applications will be where they show up first (the Rogue GPU is a mobile-focused part). From a power-savings perspective, executing both sides of a divergent branch and then reconciling the results wastes battery and generates heat needlessly.




#5228018 Back Face Culling idea

Posted by Ravyne on 08 May 2015 - 04:06 PM


A camera view vector in the world multiplied by an object's orientation matrix in the world gives you a view vector in the model/mesh space.

 

...and then you need to transform the model into that space in order to do the backface culling -- the reason you seemingly get to skip this step normally is that in camera space you're always implicitly looking down the depth axis. I'm certain you could compensate by incorporating the view vector back in, but it's probably not any less math in the end -- you have to do this for each vertex in the mesh, and then afterwards you have to do the real transform on the ones you keep (half, on average) -- whereas with the traditional order of things you do the proper transform on everything and then throw out half (on average) -- not to mention you'd have to repack (i.e. copy) your vertex and index buffers anew each frame.
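
For reference, here's a minimal sketch of the usual camera-space test being alluded to (assuming counter-clockwise winding and vertices already transformed into view space, where the camera sits at the origin):

struct Vec3 { float x, y, z; };

static Vec3  Sub(Vec3 a, Vec3 b)   { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
static Vec3  Cross(Vec3 a, Vec3 b) {
    return { a.y * b.z - a.z * b.y,
             a.z * b.x - a.x * b.z,
             a.x * b.y - a.y * b.x };
}
static float Dot(Vec3 a, Vec3 b)   { return a.x * b.x + a.y * b.y + a.z * b.z; }

// The camera is at the origin in its own space, so the vector from camera to
// triangle is just v0 itself.
bool IsBackFacing(Vec3 v0, Vec3 v1, Vec3 v2)
{
    Vec3 normal = Cross(Sub(v1, v0), Sub(v2, v0));   // CCW winding -> outward normal
    return Dot(normal, v0) >= 0.0f;                  // points away from the camera
}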

 

There's just no win here.




#5227565 Average games length databse?

Posted by Ravyne on 06 May 2015 - 01:06 PM

I would echo that, firstly, it doesn't really matter what other games do, and state secondly that you're not going to get a statistically accurate average playtime anywhere I'm aware of, as very few (if any) games are actually instrumented to measure length of play; the only ones I can think of off the top of my head are open-world games like Fallout 3, or RPGs (both Japanese- and Western-style). Any self-reporting is going to be skewed, firstly because people want to look better, and secondly because my intuition says speed-runs will be statistically over-represented (so you might get something closer to the average length of a speed-run instead of the average time a player takes to complete the game as intended).

 

Since you don't have statistical accuracy, your own educated guess about game length is probably as good a basis as any. For example, I'd say that the average length of a traditional FPS single-player campaign is around 15-20 hours, but could be as few as 8-10. Keep in mind that those figures are the length of the critical path, and discount failed attempts and false starts.

 

Perhaps a better way to approach the question is to name the genre(s) that interest you, poll the community for their own gut estimates, and then average those answers to benefit from a larger sample size than your own experience alone.




#5227549 How 3D engines manage so many textures without running out of VRAM?

Posted by Ravyne on 06 May 2015 - 11:52 AM


So you would have a pool of, say, 20 textures at a given resolution (let's pick 512x512 for the purposes of this example).  If one of those textures is no longer needed at runtime, rather than destroy it, you would just reuse the texture and replace its content.  In OpenGL this means a glTexSubImage call, in D3D a LockRect or Map.

 

Right, and the new APIs like Direct3D 12 take things even further, as I understand it. You just have reserved pools of memory which are only typed by how you describe them, so you don't have to re-use a texture of the same size -- any other resource that fits within that space will do -- and you can join adjacent vacated memory areas together. The API adds fences to notify you when a resource is no longer in active use, so you can track what you might be able to overwrite if you need to. Then there's also built-in support for large virtual textures that are paged in as needed.
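
A rough sketch of that fence pattern in D3D12 (the RetirementFence wrapper is hypothetical and assumes a device and command queue created elsewhere; the CreateFence/Signal/GetCompletedValue calls are the API's own):

#include <d3d12.h>

struct RetirementFence {
    ID3D12Fence* fence = nullptr;
    UINT64 lastSignaled = 0;

    void Init(ID3D12Device* device) {
        device->CreateFence(0, D3D12_FENCE_FLAG_NONE, IID_PPV_ARGS(&fence));
    }

    // Call after submitting the last command list that still reads the old
    // resource; remember the returned value for that region of memory.
    UINT64 MarkRetired(ID3D12CommandQueue* queue) {
        queue->Signal(fence, ++lastSignaled);
        return lastSignaled;
    }

    // Before placing a new resource over that memory, check that the GPU is done.
    bool SafeToReuse(UINT64 retiredValue) const {
        return fence->GetCompletedValue() >= retiredValue;
    }
};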

 

Another trick to optimize texture occupancy: when a particular model is far away, you don't need the full-resolution texture, and so you don't need the largest mip levels -- every top-level mip you can get rid of reduces the memory occupied by about 75%. So a compressed 2k texture occupying, say, 10.6MB (8MB for the 2k level + ~2.6MB of mips) is reduced to just that 2.6MB (now 2MB for the 1k level + ~0.6MB of mips), and if you reduce it again it occupies only around 680KB (512KB for the 512 level + ~170KB of mips).
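
A quick worked version of that arithmetic (a sketch assuming the ~2 bytes per texel the figures above imply, with square power-of-two mips):

#include <cstdio>

// Total bytes for a square texture of size `dim` with a full mip chain.
unsigned long long MipChainBytes(unsigned dim, unsigned bytesPerTexel = 2)
{
    unsigned long long total = 0;
    for (unsigned d = dim; d >= 1; d /= 2)
        total += static_cast<unsigned long long>(d) * d * bytesPerTexel;
    return total;
}

int main()
{
    // Each dropped top level removes roughly 75% of the total footprint.
    std::printf("2k chain : %.1f MB\n", MipChainBytes(2048) / (1024.0 * 1024.0)); // ~10.7 MB
    std::printf("1k chain : %.1f MB\n", MipChainBytes(1024) / (1024.0 * 1024.0)); // ~2.7 MB
    std::printf("512 chain: %.0f KB\n", MipChainBytes(512)  / 1024.0);            // ~683 KB
    return 0;
}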





