
Member Since 26 Feb 2007
Online Last Active Today, 03:40 PM

Posts I've Made

In Topic: Spritesheet sizes

Today, 03:26 PM

It used to be the case, a very long time ago, that spritesheet size could become a concern because a given GPU might not support textures that large. You're not likely to have that worry today: no GPU has supported anything less than 2048x2048 in approximately forever -- it was long enough ago that I honestly can't remember; late 90s at least, I would think. There's no particular advantage to sizing the spritesheet smaller or larger; sheets don't even *have* to be square, or even powers of two anymore (though powers of two remain nice for mip-mapping purposes, if that's a concern -- it often isn't in a 2D game). You might have a theoretical concern over wasting texels with big unused areas, but GPU memory is quite large these days and it's not likely to cause any actual issues.


Most tools that do what you're talking about take a list of images, try to pack them in a way that approaches the optimum, and output whatever texture size is necessary (possibly rounded up to some hardware alignment boundary, or to the next power of two).


In review:

  • Power-of-two dimensions are still nice to have if your sprites can be viewed from oblique angles (e.g. as a texture) or zoomed in and out.
  • The least capable hardware you're even remotely likely to interact with supports textures of at least 2K x 2K.
  • Textures don't need to be square.

In Topic: XBone games on pc?

Today, 02:09 PM

It only applies to games meant to do so, and so far only those bought through the store. Gears 4 is a good example: buying it from the store gets you a digital copy on your Xbox and your PC. It's the same store transaction and the same "package", though it might be a different binary/resources underneath for each platform. No, you can't just put an Xbox disc into a PC and play it -- though it's possible you might buy a physical copy for Xbox and get a store code for a digital copy you can play on your PC. With Scorpio having specs capable of flirting with full 4K rendering (and easily able to achieve 4K with tricks like checkerboard rendering), the art assets needed for that will be the same as those on a high-end PC, so it's reasonable to think that one day the exact same distribution of bits will power both PC and Xbox machines, even if the actual game binary is different.


Going forward, they've made it clear that convergence is something they're actively working towards, and it's something they've been pursuing (and productizing) in their OS versions for years now. Already, every version of Windows, including those on your phone or Xbox One, is built on essentially the same kernel and core OS components. It's not like it was even 5-10 years ago, when Windows 7, Windows Server, Windows Phone, and Xbox / Xbox 360 all had unique kernels and core OS facilities (even if they shared lots of code, it might be tweaked or otherwise not a direct replacement). They've converged the store as well -- no more Xbox Marketplace, Windows Phone Store, and Windows Store; it's all one store now, with purchases, billing, and licenses all managed the same way.

In Topic: Game Programming Compared to Other Programming?

25 October 2016 - 03:56 PM

For the field of programming, the difference between a game developer and an app developer, or any other specialization, is exactly that -- the specialization. It's not much different than the difference between an auto mechanic and a diesel mechanic: the tools and fundamentals are more or less similar, but the workings and surfaces of the things you'll crank on day-to-day are different.


Concretely, game developers have historically focused on low latency and high throughput, which is a lot different from a mobile app developer, but not at all different from, say, a developer writing financial trading software for a Wall Street firm. About the only thing you can say is unique about games is that they're really the only mainstream, consumer-facing software that exercises the state of the art in everything a computer can do, all at once, under more-or-less soft-realtime constraints.

In Topic: Language decision crisis

25 October 2016 - 06:21 AM

C doesn't seem to be a viable alternative for the OP -- it may be 'simpler' in some views, but it doesn't change the equation for any of the issues they're pushing back against: it still uses the header/source model (it lacks modules), const correctness works the same way, and you'll still find 10 different code-bases adopting 10 different sets of styles, idioms, and data-structure implementations.


In general, C++ can't be said to be a worse choice than C anymore, as long as we're comparing modern versions of both languages with good platform support for the target. RAII -- constructors and destructors alone -- is a huge boon not just for abnormal/premature function returns, but for objects with arbitrarily long lifetimes (e.g. those without static scope). C++ gives you tools and building blocks for encapsulating and automating all of C's best practices, which C itself gives you no option but to remember, recognize, and deploy by hand. Used well, C++ really delivers on the promise of 'zero-cost abstractions' and enables its users to extend or customize the ones provided, and to build their own. C's abstractions are similarly low-overhead, but its range of expressivity peaks much earlier than C++'s, and sooner still without resorting to complexes of ugly, often-brittle macros.

In Topic: Matrix 16 byte alignment

24 October 2016 - 02:01 PM

Another potential performance impact of unaligned vectors and matrices is that your data can cross a cache-line boundary, increasing cache pressure and potentially wasting precious memory bandwidth. A 4x4 single-precision matrix fills a cache line exactly on most current architectures, so you might even consider aligning static/long-lived matrices on 64-byte boundaries. For 4-wide single-precision vectors, aligning on 16-byte addresses removes the possibility of crossing cache-line boundaries, which in the worst case can cause your program to read 128 bytes of data to use only 16 bytes of it (though you probably shouldn't be operating on single small vectors anyways); it could also cause other useful data already in the cache to be evicted. I imagine, also, that small arrays of small vectors could benefit from 64-byte alignment (of the array, not the individual vectors), but I'm not sure how quickly the prefetcher picks up on the array and kicks in -- this potential optimization would only help quite small arrays of vectors (I'd guess < 8 vectors for certain, < 16 probably) -- though it'll never hurt, AFAICT.