
#5236399 Resolution for different PC's, where to start?

Posted by on 23 June 2015 - 12:41 PM

Yes, Sean added an important point. Depending on how your UI works you might have elements that remain a fixed or semi-fixed size, and that are anchored to various points on a parent element (or the screen as a whole). In that case, it's not quite as straightforward as just normalizing coordinates by dividing by the resolution; there's an extra step to get the anchored coordinates first.

#5236205 Resolution for different PC's, where to start?

Posted by on 22 June 2015 - 01:54 PM

It's likely that you're doing your hit-tests using coordinates in the 1366x768 resolution space, which is why other resolutions break the logic. I would wager that you can indeed click the buttons by manually adjusting where you click to be closer to the origin.


The gist of what you need to do is to use normalized coordinates for hit-tests and the like. That means you divide your mouse coordinates by the fullscreen resolution, and likewise divide the UI widget coordinates by whatever resolution space you defined them in -- then you do your hit-test logic using those new numbers. That'll make your logic resolution-independent.
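
If it helps, here's a minimal C++ sketch of the idea (the Rect/HitTest names and the 1366x768 authoring space are just assumptions for illustration):

struct Rect { float x, y, w, h; };

bool HitTest(const Rect& widget,          // authored in 1366x768 space
             float mouseX, float mouseY,  // in current screen pixels
             float screenW, float screenH)
{
    // Normalize the mouse by the actual screen resolution...
    float mx = mouseX / screenW;
    float my = mouseY / screenH;

    // ...and the widget by the resolution space it was authored in.
    Rect n = { widget.x / 1366.0f, widget.y / 768.0f,
               widget.w / 1366.0f, widget.h / 768.0f };

    // Comparisons now happen in the same [0,1] space at any resolution.
    return mx >= n.x && mx <= n.x + n.w &&
           my >= n.y && my <= n.y + n.h;
}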


EDIT: Correction made.

#5235745 Virtual base classes vs templates for multi-platform

Posted by on 19 June 2015 - 01:19 PM

What you're talking about really boils down to runtime polymorphism (virtual functions, either through C++ virtual member functions or through function pointers à la C) vs. compile-time polymorphism (conditional code, either through C++ templates -- specifically, template specialization -- or through pre-processor directives and/or a platform-parameterized build).


To solve the problem "What's different when I'm running on Windows vs Linux" runtime polymorphism doesn't make sense, because you'll never have a single binary that could be run on both platforms -- there's never a decision to make at runtime.


To solve the problem "What's different when I'm running on Direct3D vs OpenGL" runtime polymorphism can make sense, because on Windows you could have a single binary that can speak to both APIs and allows the user to choose their preference -- but you don't know ahead of time, so you can't make that decision at compile time...


Unless, of course, you choose to provide separate binaries for both APIs, which is a legitimate approach if you only have a few such choices to make. The problem with the separate-binaries approach is really the combinatorial explosion of factors: if you support 3 APIs for graphics, 3 for sound, and 2 for input, that's 18 binaries...


Or, you can define a rendering interface and keep all your D3D and GL calls inside an API-specific DLL that implements that interface. You can do the same with other API surfaces (audio, input, etc), and then you can cherry-pick which DLLs you load to interface with each combination of APIs. Of course, you might suspect that this can create unwanted overhead, and you'd be right about that -- there's always some overhead, and a poorly-designed API (one that's too low-level or "chatty" across DLL calls, which are essentially virtual calls) can exacerbate the overhead.
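
As a rough sketch of that layout (all names here are hypothetical, and the Win32 loading calls are just one way to do it):

#include <windows.h>

// Implemented by renderer_d3d.dll and renderer_gl.dll alike; the game
// only ever talks to this interface.
class IRenderer {
public:
    virtual ~IRenderer() {}
    virtual bool Init(void* windowHandle) = 0;
    virtual void DrawFrame() = 0;
};

// Each DLL exports a factory with an identical signature:
//   extern "C" __declspec(dllexport) IRenderer* CreateRenderer();

IRenderer* LoadRenderer(const char* dllName)  // e.g. "renderer_d3d.dll"
{
    HMODULE dll = LoadLibraryA(dllName);
    if (!dll) return nullptr;

    typedef IRenderer* (*CreateFn)();
    CreateFn create = (CreateFn)GetProcAddress(dll, "CreateRenderer");
    return create ? create() : nullptr;
}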



On the topic of conditionally including files in the build vs. pre-processor code blocks vs. templates to implement compile-time polymorphism, it's pretty much down to preference and familiarity, but IMHO templates have the edge, assuming your build environment across platforms has uniform support for the template features you need.


Here's why: Templates are just as flexible and capable as the other methods, but they benefit from not having to repeat common blocks of code across files or conditional code blocks. Code repetition is bad because it means you have to keep that code in lock-step as features are added, refactors are made, and bugs are fixed. To reduce code repetition in the other approaches, you tend toward webs of conditional blocks that are more complex than should be necessary -- that is, the conditions necessarily start to encode the organization of your source files, and you sometimes end up with new conditions whose only purpose is to do that, rather than to simply encode the high-level goal of "Give me [x] on X, and [y] on Y". The same goes for entire C++ source files: to avoid repeating code, you might have to move some of it into its own files (again, to suit and thus encode the particulars of how your code is organized, rather than as a direct expression of solving the platform-selection problem). This also means the reverse is true -- changing the way your code-selection logic works can potentially break your application logic, just as moving your application logic around can break your code-selection logic. You can end up with two (often complex) systems that should be independent instead being intertwined.


You still have to think about the template approach, of course, but in doing so it's my observation that the expression of the solution remains closer to the expression of the simple intent, rather than resting on unnecessary details of how the code is organized.
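
For a trivial illustration of what I mean, here's platform selection via template specialization (the FileSystem/NativeFS names are made up for the example):

struct Windows {};  // platform tags
struct Linux {};

template <typename Platform>
struct FileSystem;  // no generic definition: an unsupported
                    // platform is a compile-time error

template <>
struct FileSystem<Windows> {
    static const char Separator = '\\';
};

template <>
struct FileSystem<Linux> {
    static const char Separator = '/';
};

// The pre-processor appears exactly once, at the one choke point where
// the platform is actually chosen; everything else stays in the type system.
#if defined(_WIN32)
typedef FileSystem<Windows> NativeFS;
#else
typedef FileSystem<Linux> NativeFS;
#endif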

#5233616 MIT: Is it even remotely possible?

Posted by on 08 June 2015 - 02:38 PM



I got 15.7/20

Having a 3.2 GPA in the US is not considered very good.



Are you certain the two even map linearly? I'm not familiar with the French system, but I'd certainly not assume it's linear, or that the grading standards are the same. Even within the US GPA system, a particular instructor's grading standards can shift you around vs. another instructor's, or one school's vs. another's. France has entirely different education standards than what we have in the US.

#5233018 Thinking with Windows

Posted by on 05 June 2015 - 02:31 PM

The biggest things are that A) the native controls simply feel out of place in games, and therefore a game using native controls would feel less polished, and B) using native controls in a multimedia app (e.g. one using Direct3D) means you've then got to allow for your renderer to cooperate with the operating system's own drawing of controls, which has a performance/latency impact and sometimes also limits what you can do.


You can generally alleviate A) if you can put in the work to skin the controls, but then you're just about writing your own UI anyway. It's not that typical UIs are bad at what they do; they just do something different from what most games want, and so they're often more of a hindrance to work around than they are a help.

#5232609 What kind of infrastructure is found in game or software studio's

Posted by on 03 June 2015 - 12:45 PM

The artists are likely using Quadro/FireGL cards. The main benefits of those cards are that 1) the configuration/firmware/drivers go through very extensive validation with professional graphics applications like Photoshop or Maya, 2) the drivers are optimized and tuned for those applications, and 3) all of it together prefers stability over performance -- usually the silicon is the same as in gaming GPUs, just used more conservatively. You'll sometimes get a larger amount of VRAM, or fully-unlocked double-precision performance, but those are usually the only hardware differences.


At an AAA studio working on a high-end game, Promit's spot-on. Keep in mind that AAA games are being built for hardware 3-5 years in the future, the devs have to be able to run code reasonably well even before it's optimized, and they do a lot of local builds of the parts they're working on, plus tests and tools. And again, it comes down to whether it makes any sense to skimp on a dev box when that body is costing you $150k or more to put in a seat. Sure, having twice the CPU doesn't make your programmer twice as efficient, but if it makes them even 5% more efficient over the course of a year, a higher-end box pays for itself -- I'd wager that a powerful PC probably makes a typical dev at least 15% more efficient, and a skilled dev even more so. Also, because those resources will at times go idle (e.g. while writing code and doing not much else), those machines are sometimes conscripted into the build farm as slave nodes, putting the excess CPU to work helping out the site infrastructure.

#5232484 An Exact Algorithm for Finding Minimum Oriented Bounding Boxes

Posted by on 02 June 2015 - 04:56 PM

Thanks for the comments! Orthopodal sounds fine as well, although I am not keen on too fancy naming, but feel free to use the terms interchangeably.


I think I'd side with Eric -- Ortho would seem more proper, and isn't overly fancy (ortho as in orthographic, like everyone knows), IMO.


Looks like great work, thanks! I've only scanned it while at work, and I'm still digesting it since I'm not an especially good mathematician, but it appears you might have found something quite significant. You say you're not an academic, but the standard of the work seems up to it. Did anything in particular inspire you to present your findings so rigorously?

#5232469 VB.net limitation question

Posted by on 02 June 2015 - 03:23 PM

VB.Net shares the same compiler/JIT/runtime technology as C#, and the two have had a policy of maintaining feature parity for a while now. You can basically think of them as two dialects that speak differently but otherwise have identical capacity for communicating powerfully. C# is neither faster nor slower than VB.Net in mainstream terms -- I suppose it's possible that the patterns one or the other encourages might be marginally slower or faster in specific cases, but you should mostly be free to defy those patterns. I don't hesitate at all to say that, between the two, you should use what's most comfortable, because neither has a clear performance or expressive advantage. I happen to think that C# is the better language myself, and probably a significant majority would agree, but that's no good reason to dump VB.Net if you're already familiar with it.


That said, there are ecosystem differences -- as I said, most people seem to prefer C# by a significant margin, so it's easier to find people to help you out if you're writing your code in C#. Likewise, there tend to be more tutorials and references written in C#, and some of the VB.Net code you'll find is a straight-line port of the C# version, which may not yield idiomatic VB.Net. Moreover, most .NET libraries are written in C#, so you'll eventually want to be comfortable at least reading it. All of those things combined tend to mean that any VB.Net programmer who sticks with programming long enough will eventually pick up C# as well (except, perhaps, those whose job is solely to maintain large, legacy VB code bases, as sometimes exist in enterprises).


If you were new and asking "Should I learn C# or VB.Net?", I would hands-down recommend C#. But since you're already coming in with VB.Net experience, there's no need to drop what you're doing now to take a break and learn C# before returning to your project. You will probably want to learn C# before long, and that's a good thing, but my advice is to let that come to you as you need it -- unless you're simply itching to learn a new language, or are finding that not knowing C# is holding you back right now.

#5232235 I want to write code for the PS2, because it's old.

Posted by on 01 June 2015 - 03:39 PM

I'm curious; how come you don't want to work on the Dreamcast anymore?


Me too. The Dreamcast is far more approachable than the PS2, and the GBA would be more approachable still. I don't think any homebrew platform offers much in the way of access to the mainstream gamer. XNA was a decent attempt, put to pasture before its time had come. ID@XBOX is looking good, though.


There's also a new cartridge-based retro game system called the Retro VGS being developed by the folks behind Retro Magazine. It's never going to be a huge commercial success, but it might one day have a decent enough user base to at least make the programming worthwhile, even if it never made you rich or famous. The circulation of the magazine is pretty stellar for what one would assume is a pretty niche market (that is, an actual dead-tree print magazine about retro games and consoles, circulating in 2015), and if that same audience can be tapped for this new console it could prove interesting. They've even teased that some old-school programmers are on board to bring new sequels to classic games, exclusively on the Retro VGS. They're still finalizing things, but they're far enough along that people are already well into developing the first round of titles. Consider looking into that.

#5231539 How To Do RPG Interiors

Posted by on 28 May 2015 - 01:51 PM

The blended design is an interesting approach, but at that point I kind of wonder what the benefit of maintaining spatial reference to the exterior is. To me, the spatial reference only matters if there's a 1:1 correspondence to the interior space. I can't dismiss the blended approach out of hand, but I'd want to identify some other benefit that carries its weight; otherwise, just do a slick transition and let me have my whole screen, thanks.



Another approach to interior shop/inn/dialog spaces would be to do away with navigating the interior space altogether. Rather than walking around in overhead view, maybe just transition to a nice illustration of the shopkeeper and their wares from the character's POV -- you press into the exterior door, the external world dims, you conduct your business, and then when you're done the illustration fades away, the overworld screen brightens, and your character is outside, facing away from the door.


I wouldn't want that to be my only option for interior spaces, as it would eliminate a great deal of exploration/looting opportunities, but if you had such a system in place, it would be easy to apply it to an interior space (by walking up to the shop-keep, as per tradition), to a shop building (by walking up to the door, as described above), or to a booth/cart in an exterior space.

#5231309 Asset Loaders/Managers: One for all or separate instances?

Posted by on 27 May 2015 - 12:51 PM

I think that code solves the problem quite nicely.

Personally I wouldn't use va_args and would instead use c++11 initializer lists which are safer and more flexible.

Most modern compilers support them e.g. recent gcc and visual studio.


An initializer list might be a suitable solution for some use cases, such as when the number of initializers isn't known statically, but I think in most asset-loader/manager cases you probably do know what the requisite parameters are based on the asset type. In that case, you probably don't want initializer lists, and should prefer variadic templates combined with perfect forwarding instead.
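
A minimal sketch of what I mean, assuming C++11 (the AssetManager/Load names and the usage types are hypothetical):

#include <memory>
#include <utility>

class AssetManager {
public:
    // Each asset type's constructor declares exactly the parameters it
    // requires; arguments are forwarded along without copies or casts.
    template <typename Asset, typename... Args>
    std::shared_ptr<Asset> Load(Args&&... args)
    {
        return std::make_shared<Asset>(std::forward<Args>(args)...);
    }
};

// Usage -- mismatched parameters fail at compile time, per asset type:
//   auto tex  = assets.Load<Texture>("grass.png", true /*mipmaps*/);
//   auto clip = assets.Load<SoundClip>("step.wav", 44100);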

#5231301 Entity/Component System - Understanding Event Priority

Posted by on 27 May 2015 - 12:29 PM

Lots of things to consider:

  1. Is your system deterministic (or does it need to be)?
  2. Is your system multithreaded, and how does that impact its design with regard to point 1?
  3. Do you have events that require an immediate response from subsequent events before continuing (something like double-dispatch of events)?
  4. Does your environment provide coroutines/green threads/futures or any suitable suspend-resume feature?
  5. Do you process events entity-wise (all events for each entity, then on to the next entity) or event-wise (all events of one type, then on to the next type)?

I think the most scalable/flexible solution would probably be some kind of "fence" system -- basically a means of suspending one execution path until some preconditions have been met. For example, you could fence the processing of sound events until physics events are complete (because, e.g., physics might spawn a sound). You can do this kind of coarse-grained dependency ordering just by controlling the general order in which events are processed, but that falls apart when different entities have different needs. Something like a fence on a particular entity could solve that problem, but of course you then risk creating cycles or deadlocks, so debugging can become more difficult.
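
To make the idea concrete, here's a deliberately simple single-threaded C++ sketch (all names hypothetical); a real system would also need to address the multithreading and cycle-detection concerns above:

#include <cstddef>
#include <functional>
#include <vector>

struct Fence { bool signaled = false; };

struct PendingEvent {
    std::function<void()> dispatch;
    const Fence* waitOn;  // null if the event has no precondition
};

// Dispatch every event whose fence (if any) has been signaled; fenced
// events stay queued for a later pump.
void Pump(std::vector<PendingEvent>& queue)
{
    for (std::size_t i = 0; i < queue.size(); )
    {
        if (!queue[i].waitOn || queue[i].waitOn->signaled) {
            queue[i].dispatch();             // precondition met: run it
            queue.erase(queue.begin() + i);  // and retire it
        } else {
            ++i;                             // still fenced: skip for now
        }
    }
}

// e.g. sound events carry waitOn = &physicsDone, and the physics system
// sets physicsDone.signaled = true when its phase completes.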

#5231298 Retro graphics and game atmosphere.

Posted by on 27 May 2015 - 12:11 PM

Slapping 32 colors on something and calling it retro is about as effective as slapping candy-apple red on your Buick and calling it a Ferrari. In a game's art direction, everything needs to mesh together if you want to evoke a true sense of nostalgia. Taking NES-style retro games as an example, the feeling of nostalgia can be broken by rather unassuming things, even if everything else is right -- if sprites are too large, have too many simultaneously-moving parts, or are just too smoothly animated, you lose the nostalgic effect. As a rule of thumb, if you want to evoke nostalgia, you need to demonstrate period-or-platform-appropriate limitations, though you don't have to simulate the deleterious effects of the hardware (e.g. certain NES games would slow to half-speed when too many sprites were on-screen; other games would flicker sprites or parts of sprites).



I find 3D retro to be a pretty hard aesthetic to pull off. I've seen one recent retro FPS that successfully pulled off a Quake II aesthetic, and I've seen, in passing, some modern turn-based first-person dungeon crawlers; that's basically it. Part of the problem is that when we think of nice 2D retro games, we're talking about replicating the NES or SNES era, which was in some ways the (mainstream, at least) peak of 2D artistry -- that's what people replicate because it was good; almost no one is creating Atari-level retro games, because they weren't that good or compelling. Most of what we think of as "retro 3D" is the Atari of 3D graphics -- no one wants retro 3D that looks like the PlayStation One. I'd argue that the PS2 and its contemporaries are sort of the NES level of 3D evolution, and the PS3 and its contemporaries are SNES-level. Both generations are too recent for there to be much nostalgia built up -- maybe in another 10 years, when the grade-schoolers who grew up with those systems become 30-somethings. But another hurdle is that most of the drive behind 3D advances has been photo-realism rather than style, while the opposite was true of 2D games -- it's something of an open question whether anyone's interested in what state-of-the-art photo-realism looked like 20 years ago, and 'retro photo-realism' is honestly a bit of an oxymoron.

#5230129 tile objects in 2D RPG

Posted by on 20 May 2015 - 04:21 PM

In general, you would just store an index, UID, handle, pointer or other 'name' in each tilemap cell, which refers to data that describes that tile. That's the easy part -- you have one unique thing (the tile) that reappears in many places, so you tag each place with a way to find the thing, then look it up; this pattern appears all over the place, not just in maps, not just in games.
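
For instance, a minimal C++ sketch of that pattern (the TileInfo/TileMap names are just for illustration):

#include <cstdint>
#include <vector>

struct TileInfo {      // one entry per unique tile
    bool walkable;
    int  spriteId;
};

struct TileMap {
    int width, height;
    std::vector<TileInfo> tileSet;     // the unique tile descriptions
    std::vector<std::uint16_t> cells;  // width*height indices into tileSet

    // Each cell stores only a 'name' (here, an index); the shared
    // description is looked up on demand.
    const TileInfo& At(int x, int y) const {
        return tileSet[cells[y * width + x]];
    }
};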


A more interesting question is how you want to store the map cells. The easiest way is a 2D or 3D array, dynamically allocated so that you can load maps of different sizes. This can be a bit wasteful of memory (e.g. if you have 'overhead' map layers with lots of empty space, you'll store a lot of 'empty' tiles in those cells). That's not a problem on modern systems, though it can be a large annoyance if you have just one or a few mostly-empty layers (you spend an entire layer to represent just a few overhead tiles, multiplied by the number of layers needed). You can instead have a 2D array where each cell is a linked list, which saves that unused memory but can be more costly to iterate over. Or you can store the map in smaller chunks (say, 8x8 tiles) and stream in only the chunks you need. Lots of options with different properties and tradeoffs.
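
And a rough sketch of the chunked variant (hypothetical names again; it assumes non-negative tile coordinates, with index 0 meaning 'empty'):

#include <array>
#include <cstdint>
#include <unordered_map>

const int kChunkSize = 8;

struct Chunk {
    // Zero-initialized: 0 is reserved to mean 'empty cell'.
    std::array<std::uint16_t, kChunkSize * kChunkSize> cells {};
};

struct ChunkedLayer {
    std::unordered_map<std::uint64_t, Chunk> chunks;  // only allocated chunks

    static std::uint64_t Key(int cx, int cy) {
        return (std::uint64_t(std::uint32_t(cx)) << 32) | std::uint32_t(cy);
    }

    std::uint16_t Get(int x, int y) const {
        auto it = chunks.find(Key(x / kChunkSize, y / kChunkSize));
        if (it == chunks.end()) return 0;  // whole chunk absent = empty
        return it->second.cells[(y % kChunkSize) * kChunkSize + (x % kChunkSize)];
    }
};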

#5229680 How To Do RPG Interiors

Posted by on 18 May 2015 - 02:38 PM

I think it's less about picking one or the other, and more about doing what's right in each instance. The main issue, IMO, is transition time, and if you go to a "separate area" (a cordoned portion of the map outside the main area, or a different map entirely) it's not difficult to make the transition fast and smooth.


Doing transparent roofing is necessarily limiting -- you have exactly the outside footprint of the building in which to build an interior that suits your design needs. I suspect you'd very quickly find yourself making building exteriors much larger than they need to be to support larger interiors, and that will knock on to all the other buildings too, regardless of what goes on inside them, just to maintain a sense of scale. Another difficulty is what to do about buildings with multiple floors or basements -- how will your engine handle those?



IMO transitions to separate areas are best for consistency and to avoid the problems I've mentioned with more grace. By pre-caching these separate areas while the player is exploring the external areas you can make sure the transition is fast and smooth.


A different solution to the "avoid separate areas for quick things like vendors" problem is to give them a booth, market-stall, or olde-timey wagon-carriage to sell their goods from, rather than the inside of a building.