
Ravyne

Member Since 26 Feb 2007

#5218871 Is this a viable entry strategy?

Posted by Ravyne on 24 March 2015 - 01:11 PM

Honestly, start building skills that are directly related and demonstrable by building a portfolio of games. Go learn Unity 5, or Unreal Engine 4, or Cocos2d, or GameKit, or MonoGame, or anything really -- as long as you can use it to develop and showcase your skill at making games, you'll be moving in the right direction.

 

That said, design positions are a rarefied role in an already rarefied field in an already rarefied industry, and one which is extremely competitive, with no shortage of young brainiacs willing to work 50-60 hour weeks for relative pennies on the dollar. I don't mean to be discouraging, but that's the reality of the situation. You may not possess the health, or the youth, or the willingness to give up a significant amount of personal time to be competitive, and you might have life circumstances (e.g. people to support, debt, etc.) that wouldn't be serviceable on a typical entry-level games-industry wage, which is much less than you'd earn even as an IT monkey.

 

The upside of being alive today, though, is that there are avenues for dedicated, talented people to make their own way and break into viable markets like Steam, Xbox Live, or the PlayStation Store. That's no cake-walk either, and far from guaranteed (or even likely) success, but it's a shot you can make something of if you put in the work and have the vision.




#5218868 Pixel Art Sprites - An Overused Art Style?

Posted by Ravyne on 24 March 2015 - 12:54 PM

It's not really true that 2D art inherently ages better than 3D; it's just that the stuff we remember fondly (or at all) came from the heyday of 2D technology -- the SNES, GBA/DS, Genesis, and Neo-Geo or other arcade platforms. No one really thinks Atari or Intellivision 2D sprite art cuts the mustard anymore. In the same vein, when we think of "retro 3D" we think of the PlayStation/Saturn/N64 and early PC titles -- which are the Ataris and Intellivisions of mainstream 3D graphics. I'd say we only reached 3D as mature as the SNES's 2D with the last console generation -- maybe even the current one.

 

I haven't seen many 3D games that embrace an early-3D aesthetic -- there was a Kickstarter recently for a modern throwback to Quake-era FPSes (with art and design tropes to match), and it actually looked really good. I can count on one hand, with room to spare, the number of retro-3D games I've seen that don't just look cheap -- though I've not spent much time looking.

 

 

But 2D art is definitely lower-fidelity than 3D, and that makes it significantly easier to produce. That's not to say that great 2D sprites are easy, but they're more forgiving of imperfections, which means there are usually fewer iterations (note: this applies less to very large, high-color sprites, but I don't think that's what we mean to talk about here).




#5218592 what do you use to document your c++ code?

Posted by Ravyne on 23 March 2015 - 02:33 PM

For C++, Doxygen is still king of the hill as far as reference documentation goes, and as of the last time I looked into it, it supports the Microsoft-style triple-slash XML comments in addition to its own annotations. If you're already familiar with those, stick with them.
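
For illustration, here's the same (made-up) function documented in both styles -- Doxygen will render either into the same reference entry:

    /// <summary>Clamps a value to the inclusive range [lo, hi].</summary>
    /// <param name="value">The value to clamp.</param>
    /// <param name="lo">The lower bound.</param>
    /// <param name="hi">The upper bound.</param>
    /// <returns>value, limited to [lo, hi].</returns>
    int Clamp(int value, int lo, int hi);

    /** \brief Clamps a value to the inclusive range [lo, hi].
     *  \param value The value to clamp.
     *  \param lo    The lower bound.
     *  \param hi    The upper bound.
     *  \return value, limited to [lo, hi].
     */
    int Clamp(int value, int lo, int hi);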

 

I do wish there were a tool based on Clang's front-end, though -- Doxygen maintains its own parser, AFAIK, which seems redundant now, and I'd assume it also loses some deeper context that could be important for documentation. Clang would not.

 

Doxygen is also nice in that it generates static pages. This means you can distribute the docs in a .zip file, serve them up from a very basic (and inexpensive) web server, or even serve them right from a GitHub repo (the feature is called GitHub Pages) if you're open source and on GitHub. (EDIT: actually, I believe you can have a public GitHub Pages site attached to a private repo, if you want to keep the source hidden but distribute DLLs/headers in a release package.)

 

 

But since programmer documentation is what I do for a living, I can also say that reference is good, but it's not enough. People want good reference, but they also assume it's there and take it for granted. What they usually ask for is code samples, example projects, and help understanding the general patterns of using your API. They usually don't care about overview-type topics unless those are really high quality; an overview that's merely average is usually seen as just getting in the way.




#5217759 How 3D engines were made in the 80's and 90's without using any graph...

Posted by Ravyne on 19 March 2015 - 04:22 PM

Hey ya.

 

With this link. One hell of a good book.

 

I actually never had that one, but Tricks of the 3D Game Programming Gurus is newer and also covers software rasterization; it's really the successor to LaMothe's Black Art. Neither is especially state-of-the-art software rasterization, though -- you'd want to look at Nicolas Capens' 3D rasterizer (which went on to become SwiftShader), or the Larrabee software rasterizer that Michael Abrash was working on -- I'm not sure the code was ever released, but there were a couple of whitepapers.

 

Also required reading is Michael Abrash's Graphics Programming Black Book. I always wanted this one and finally stumbled across a copy last year. In particular, I believe it covers how they did the small-triangle optimization in Quake, and also how they optimized for the Pentium by keeping both instruction pipes working together. (The Pentium was the first commodity superscalar CPU -- it had two pipelines: the 'U' pipe could execute all integer instructions, and the second 'V' pipe could simultaneously execute simple integer instructions (add/subtract, bitwise operations, etc.) as long as there was no data dependency on what was already in the U pipe.) That sort of thinking is still relevant today, although processors are now so wide and compilers are so good that you probably don't have to think about it.
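
If you want a modern taste of that U/V-pipe mindset, breaking one long dependency chain into independent chains still pays off on today's wide cores. A minimal sketch of the idea (nothing Pentium-specific here, and note the float results can differ in the last bits since the adds get reassociated):

    #include <cstddef>

    // One long dependency chain: every add must wait for the one before it.
    float SumNaive(const float* data, std::size_t n)
    {
        float sum = 0.0f;
        for (std::size_t i = 0; i < n; ++i)
            sum += data[i];
        return sum;
    }

    // Two independent chains: adjacent adds have no data dependency,
    // so a superscalar CPU can issue them side by side.
    float SumPaired(const float* data, std::size_t n)
    {
        float a = 0.0f, b = 0.0f;
        for (std::size_t i = 0; i + 1 < n; i += 2)
        {
            a += data[i];       // independent of...
            b += data[i + 1];   // ...this one
        }
        if (n & 1)
            a += data[n - 1];   // pick up the stray element
        return a + b;
    }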




#5217689 Primary/Secondary/?

Posted by Ravyne on 19 March 2015 - 12:38 PM

As an addendum to my alternate UI proposal, I might suggest that any "unspent" research points remaining in the middle represent, and set the pace of, "ambient" research -- so if you had two points to spend, you could put them both into construction, and all other research would progress slowly or not at all -- or you could leave them both "unspent", which would increase the pace of ambient research in all categories.

 

In that system, if we assume the "active" pace costs 1 research point, then having 5 research points unspent is effectively the same as having them spent 1 per category. The advantage would be that if you got a 6th point, with 5 unspent and one allocated to construction, all categories would behave as baseline "active" with baseline "aggressive" in construction. The twist would be that active costs 1 point, but aggressive research costs 3 (which would emulate the diminishing returns of one category getting too far ahead of the others, as is often the case in the real world). So in effect, the "unspent" points become a sort of base stat that can be sacrificed to gain advantage in one or a few categories, but your pace is always increasing, so you can put your research discoveries on a curve to balance them.
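
Roughly, in code -- the category names and the 0.2 ambient rate are placeholders I just made up, not part of your design:

    #include <array>

    enum Category { Construction, Weapons, Propulsion, Shields, Sensors, NumCategories };

    struct Research
    {
        int totalPoints = 6;                        // points the player owns
        std::array<int, NumCategories> spent = {};  // 0, 1 (active), or 3 (aggressive)

        int Unspent() const
        {
            int used = 0;
            for (int s : spent)
                used += s;
            return totalPoints - used;
        }

        // Per-turn pace for one category: every unspent point raises the
        // ambient pace of ALL categories; points spent on a category buy
        // it the active (1 pt) or aggressive (3 pt) boost on top of that.
        float Pace(Category c) const
        {
            float pace = 0.2f * Unspent();          // ambient, shared by everyone
            if (spent[c] >= 1) pace += 1.0f;        // active
            if (spent[c] >= 3) pace += 1.0f;        // aggressive
            return pace;
        }
    };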

 

Obviously this is a different design than what you have, and it's balanced differently (e.g. in this system, with a discovery curve, you couldn't plow everything into one category and master it while being downright rudimentary in the others, but that seems possible in your system), so take it or leave it :) I'm imagining all kinds of good uses for what I just described, so I'm definitely keeping it for myself -- but feel free to use it too if you like.




#5217683 2D vs 3D

Posted by Ravyne on 19 March 2015 - 12:16 PM

I plan on writing a blog post about how 2D games being easier is a misconception. I do agree that in some areas 2D is easier; however, 2D has its own set of problems that are simply easier to solve in 3D. Especially design- and gameplay-wise, the challenges you face in 2D are just purely different than the challenges in 3D.

 

It's true, of course, that 2D has its own unique challenges that 3D does not have. But in general 3D has most of the challenges that exist in 2D, all complicated by an extra dimension and 3 extra degrees of freedom -- plus 3D games have their own set of unique challenges, and in my experience it pretty significantly dwarfs the set of challenges in 2D. Usually, you only run up against significant difficulties in 2D when you are emulating 3D in some way -- isometric engines with large and tall objects are famously less straightforward in 2D than achieving the same effect in full 3D, and old-school arcade racers like Pole Position are odd ducks, even if they aren't all that complicated.

 

Design is surely different, as you point out, but the OP is asking about graphics programming, not design, so I'm not sure that's relevant to the discussion. I'll be interested in reading your blog, though; it sounds like an interesting topic and maybe you'll sway me.




#5217383 Couple of questions regarding graphics API?

Posted by Ravyne on 18 March 2015 - 10:43 AM

I want to create a new API like DirectX. I have a few questions regarding this.

 

If I create a new API that works on PC and consoles, will my API be allowed to run on PC and consoles (or will they force me to use, say, PSSL for PS4 and DirectX for Xbox, etc.)?

 

If there are no such issues, how can I create a graphics API (a link to a starting point from which I can move on)?

 

As another point, it's no coincidence that Direct3D 12, Vulkan, and Mantle (presumably) all look remarkably similar -- all three looked at what modern hardware looks like and what hard-core graphics programmers are asking for, and all three came up with something very, very similar. Apple's Metal is somewhat further removed, it seems, but it still shares a lot with the others. Honestly, what do you expect to be able to do better or differently than these guys, who among them have some of the best minds in hardware and graphics, are consulted by the rest of the best minds in graphics, and are funded with literally billions of dollars?
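
For a sense of what "remarkably similar" means, the common shape they converged on is roughly this -- not any real API, just an illustrative sketch of the explicit record-then-submit model:

    #include <cstdint>

    // Commands are recorded explicitly into buffers, then submitted to a
    // queue, with expensive validation paid up front at creation time.
    struct PipelineState;   // all GPU state, baked and validated at creation
    struct DescriptorSet;   // a table of resource bindings
    struct Fence;           // CPU/GPU synchronization primitive

    struct CommandBuffer
    {
        void Begin();
        void SetPipeline(PipelineState* pso);
        void BindDescriptorSet(uint32_t slot, DescriptorSet* set);
        void Draw(uint32_t vertexCount, uint32_t instanceCount);
        void End();
    };

    struct Queue
    {
        // Buffers can be recorded on many threads; submission is explicit
        // and ordered, and completion is tracked with a fence.
        void Submit(CommandBuffer* const* buffers, uint32_t count, Fence* signalWhenDone);
    };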

 

If you want to do it as a learning exercise, what I might recommend is creating your API for something like the Raspberry Pi (whose GPU is now fully documented), or for a defunct system with good homebrew support, like the Sega Dreamcast (whose GPU is also pretty well documented, though unofficially).




#5217227 Can we use openCL on consoles?

Posted by Ravyne on 17 March 2015 - 06:28 PM

@MJP is there any way that we can use GPGPU on consoles?

 

Yes, of course. But if you don't know whether such options exist, or what they are, you're really worrying about all this prematurely, and losing sleep over irrelevant details regardless.




#5217224 Can we use openCL on consoles?

Posted by Ravyne on 17 March 2015 - 06:23 PM


There is nothing stopping the console developer from making an OpenCL implementation available for their respective platform, as it's just a standard waiting for implementation. No different than DX or a variant of it being used on Microsoft's console.

 

My point, though, is that you really only need OpenCL for the abstraction, and you don't need abstraction on consoles, even when your game is cross-platform (because each console will likely sell 100 million units in its lifetime, and numbers like that demolish even the most popular, long-lived PC GPUs) -- yes, there could be a very specific version of OpenCL that's highly tuned for one or both consoles, but even then it's unlikely to produce results as good as a hand-tuned command stream. If you're on a console, and you have something so critical that it's worth offloading at all, it's in all likelihood equally worth hand-tuning to the platform, and if you're pushing that kind of envelope it's likely you have that expertise on staff.

 

I argue that you won't get that top performance out of OpenCL even with extensions, a custom runtime, and a custom compiler -- but even if you did, that would be a huge software effort, and the result still would not be vanilla OpenCL but a custom-extended OpenCL -- so what would you gain? The extensions make it non-portable again, so you can't take your OpenCL source code and simply recompile it for another platform.

 

That doesn't mean a vendor's solution might not look like OpenCL, or DirectCompute, or C++ AMP -- in fact, one could easily imagine that some subset of those is true -- it just doesn't matter. In the end, on a console, you're sufficiently specialized that your source code, in any form, is effectively non-portable.

 

I mean, just look at the GPGPU ecosystem today -- if you need to run on CPUs as well as GPUs, or on GPUs from anyone but nVidia, then you use OpenCL; and if you know your code will always run on nVidia hardware, then you use CUDA, and you'd be a fool not to.




#5217198 Primary/Secondary/?

Posted by Ravyne on 17 March 2015 - 05:03 PM

Tertiary would be the proper word in that succession.

 

But I agree that if this third status is the default research level for any category that isn't activated at a higher level (secondary, primary), then it doesn't make much sense to make the player choose the tertiary status explicitly. In that case, what I might suggest is labeling this status something like "research schedule", and then have the secondary status be "active" and the primary status be "aggressive" -- that would communicate to me that everything else advances at a natural ambient pace, as it normally would absent additional funding or pressure.

 

I might also propose an alternative UI entirely, wherein (based on the 5 categories you have in the screenshot above) you have a 5-pointed polygon (a.k.a. a pentagon) with two inner rings (actually, kind of like the Pentagon building, in fact). The center shows how many research points you have to spend, and each point of the pentagon corresponds to a research category -- so the outermost position on the Construction point means Primary/Aggressive, the middle position means Secondary/Active, and the innermost means Tertiary/Ambient/Minimum.




#5217184 Card game Class Structure

Posted by Ravyne on 17 March 2015 - 04:35 PM


An ECS doesn't necessarily map well to cards, but a compositional approach certainly works very well.

 

This.

 

ECS is good when you have a lot of entities that each implement a different subset of some known set of components/behaviors, and when a particular entity might vary over time, or differ from other entities in broadly the same category of entities.

 

Some of that is a good fit for a game like Magic, but some of it is not. Data-driven composition gets you what you need, IMO, without quite all the overhead of a full-blown ECS.
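
For instance, a sketch of what I mean by data-driven composition -- all the names here are hypothetical, and the effect list would come from your card data files rather than being hard-coded:

    #include <memory>
    #include <string>
    #include <vector>

    struct GameState;  // whatever your rules engine tracks

    // A card is just data plus a list of effects, composed per card at
    // load time -- no class-per-card hierarchy, and no full ECS either.
    struct Effect
    {
        virtual ~Effect() = default;
        virtual void Apply(GameState& state) = 0;
    };

    struct DealDamage : Effect
    {
        int amount;
        explicit DealDamage(int n) : amount(n) {}
        void Apply(GameState& state) override { /* subtract life, etc. */ }
    };

    struct DrawCards : Effect
    {
        int count;
        explicit DrawCards(int n) : count(n) {}
        void Apply(GameState& state) override { /* move library -> hand */ }
    };

    struct Card
    {
        std::string name;
        int cost = 0;
        std::vector<std::unique_ptr<Effect>> onPlay;  // built from card data,
                                                      // e.g. "Bolt" = { DealDamage(3) }
        void Play(GameState& state)
        {
            for (auto& effect : onPlay)
                effect->Apply(state);
        }
    };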




#5217163 Can we use openCL on consoles?

Posted by Ravyne on 17 March 2015 - 03:37 PM

You can do GPGPU, certainly, but I wouldn't count on OpenCL specifically. I can't say for sure, because I alternately don't know, or can't talk about, either console's development tools.

 

But at any rate, I would guess that OpenCL wouldn't be your best bet even if it's available, because it's likely at too high a level of abstraction anyway. Anything so important and intensive that you want to shuffle it over to the GPU is likely also important enough to be tuned specifically for the GPGPU capabilities of a fixed platform (e.g. a console) where they present themselves, and OpenCL is likely insufficient for that task. Perhaps not if you're not really pushing the envelope, but that's not most console games. If you're prototyping on a PC, implement your algorithm in OpenCL or C++ AMP and tune it to run well on AMD's GCN architecture to prepare for the consoles -- that'll be your biggest leg up. But you're going to need to tune it for the consoles anyway, IMO, so it's probably premature to think about your approach in terms of source-code portability.




#5217128 2D vs 3D

Posted by Ravyne on 17 March 2015 - 01:37 PM

If you think it through, 2D is not really easier, just conceptually different from 3D.

- You use vectors with 2, not 3, dimensions, which is basically the same difficulty. Or you don't use vectors and calculate everything twice, for x and y.

- A few calculations get easier, but not by much.

- You use a slightly simpler projection transform, which won't save you much time, as you basically create it once.

- You possibly waste much time emulating features like depth testing or pseudo-3D perspectives, which you could get for free by using real 3D in the simplest possible way, without always thinking about which sprite to draw last so it's above/in the foreground, or creating an isometric tile engine or similar things.

- You only use quads as meshes, sparing you from creating any real, more complicated meshes, but you waste far more time creating much more complicated textures, which you can't reuse as easily, forcing you to create a multitude of them from different perspectives.

 

Fixed it for you :)

 

You do 2D in much the same way as 3D at a conceptual level, but when you get down to details, the extra dimension introduces quite a lot more of them, and so quite a lot more difficulty. So, like I said before, if you plan to write things yourself (that is, to get into the details), then 3D is significantly more difficult, and perhaps best avoided as a first attempt; but if you plan to use existing libraries/middleware/engines (that is, to work at a more conceptual level), then 3D is not much more difficult to wield (though still somewhat more difficult).
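
To make one of those 2D details concrete: without a depth buffer, draw order is your depth, so a common approach is to re-sort sprites by their baseline every frame -- a painter's algorithm. A minimal sketch (Sprite here is a made-up type, not from any particular engine):

    #include <algorithm>
    #include <vector>

    struct Sprite
    {
        float x, y;   // y = the sprite's baseline (feet) in world space
        int texture;
    };

    void DrawBackToFront(std::vector<Sprite>& sprites)
    {
        // Lower on screen (greater y) means nearer the camera, so draw it last.
        std::sort(sprites.begin(), sprites.end(),
                  [](const Sprite& a, const Sprite& b) { return a.y < b.y; });

        for (const Sprite& s : sprites)
        {
            // submit s to the renderer in this order; tall objects that span
            // several tiles are exactly where this gets hairy in isometric.
        }
    }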




#5216949 2D vs 3D

Posted by Ravyne on 16 March 2015 - 04:36 PM

2D really simplifies everything, but not to the point of transforming it entirely -- in short, working in 2D is a great playground for learning to work in 3D later. Take physics, for example: 2D basically means you have only 3 degrees of freedom to deal with (translation along x, translation along y, and rotation in the X-Y plane) rather than 6 (X, Y, Z, roll, pitch, yaw) -- everything is basically the same; there's just less of it, plus the simplifying assumptions you can make.
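
Stated as data, the degrees-of-freedom difference looks something like this (illustrative structs only, not a physics engine):

    // 2D rigid body: 3 degrees of freedom total.
    struct Body2D
    {
        float x, y;            // 2 translational DoF
        float angle;           // 1 rotational DoF: rotation in the X-Y plane
    };

    // 3D rigid body: 6 degrees of freedom total.
    struct Body3D
    {
        float x, y, z;         // 3 translational DoF
        float qx, qy, qz, qw;  // unit quaternion: 3 rotational DoF (roll, pitch, yaw)
    };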

 

You probably are better off starting with 2D to get your bearings if you're new. One of the most difficult parts of making your first complex game is figuring out how all the parts fit together and interact -- and that's almost entirely the same whether 2D or 3D, but you won't have the 3D details holding you up if you start with 2D.

 

But I guess that really only applies if you want to write those systems yourself. If you're going to use Unreal Engine or Unity or other engines/middleware, you'd probably be fine choosing either, provided you're very confident that your math and reasoning skills are up to the task. Just keep in mind that basically everything is twice as difficult or more in 3D than in 2D.




#5216356 Array of structs vs struct of arrays, and cache friendliness

Posted by Ravyne on 13 March 2015 - 05:41 PM


It's SIMD 4-register friendly, but not very cache friendly. A tuple of 4 vectors is 24 bytes in size, thus sometimes the x, y, z components cross cache-line borders, and in that case you could just as well use pure SoA.

 

That you cross a cache-line border doesn't really matter as long as you're accessing the data in a linear fashion. The memory that maps to the next cache line doesn't increase contention on the line before it. You may stall mid-operation waiting to read in that next cache line, but you were going to have to read it anyway, and after a few dozen or hundred sequentially-read cache lines the pre-fetcher is going to grab it for you ahead of time anyway -- all it might cost you is that the final cache line goes partially unused.

 

It all really comes down to this: cache-friendly means having all the data you need when you need it, with as little other data in the way as possible. If you have a struct containing a 3D position and 52 bytes of other data, then your position-updating code can only update one position per cache line read in. If you don't also do work on those other 52 bytes, but need them later, then you have to load that cache line again and pay that high cost twice. If you can't use the rest of the cache line's data right away, it makes sense to restructure your data so that positions are in their own contiguous array, where you can process 5.3 positions per cache line (and in the same amount of time as before, to boot, given that cache-line reads are the bottleneck).
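
A sketch of exactly that restructuring -- the 52-byte payload is from the example above; everything else is made up:

    #include <cstddef>

    // Array-of-structs: 12 bytes of position + 52 bytes of other state is
    // exactly one 64-byte cache line per particle, so a position-only pass
    // gets ONE useful position per line loaded.
    struct Particle
    {
        float x, y, z;
        unsigned char other[52];
    };

    void UpdateAoS(Particle* p, std::size_t n, float dx, float dy, float dz)
    {
        for (std::size_t i = 0; i < n; ++i)
        {
            p[i].x += dx;
            p[i].y += dy;
            p[i].z += dz;
        }
    }

    // Struct-of-arrays: positions are packed into their own arrays, so each
    // 64-byte line read feeds ~5.3 position updates (16 floats per line,
    // across three streams), and the cold 52 bytes never get loaded at all.
    struct Particles
    {
        float* x;
        float* y;
        float* z;
        unsigned char* other;  // cold data lives elsewhere
    };

    void UpdateSoA(Particles& p, std::size_t n, float dx, float dy, float dz)
    {
        for (std::size_t i = 0; i < n; ++i)
        {
            p.x[i] += dx;
            p.y[i] += dy;
            p.z[i] += dz;
        }
    }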

 

It can't be overstated how important cache utilization is. Take your system memory bandwidth and divide it by 64 bytes -- that's the maximum number of cache lines you can load per second. Not one more. A typical system bandwidth is 25.6 GB/s, which translates to just 400 million cache lines per second. If you read a pathological worst case of just one byte from each cache line, that's just 400 megabytes per second -- and even if you read 12 bytes of position from each cache line, that's just 4.8 GB/s out of 25.6, a mere shadow of the machine's potential. This resource of cache-line reads per second is *way* more critical than how many operations your CPU can do per second, because it gates everything that doesn't operate entirely within the caches -- CPU cycles, and even in-cache reads/writes, are almost free in comparison, being on the order of 100x and 10x faster, respectively.

 

And that's all best-case, right, because like everything else that runs on a clock in your computer, those unused resources are expiring by the instant. If you don't read any memory for 12 microseconds, you don't get to save up 12 microseconds' worth of cache-line reads and roll them over to when they're more useful to you; they simply vanish into the ether of time. You use it or you lose it. A program at peak performance would load one new cache line at every opportunity, never load the same line twice, and be entirely done with each cache line by the time new data starts forcing things to be evicted (the window depends on a number of factors, but assume it's the size of your L3 cache -- anywhere between 1-6 megabytes in mainstream processors, 12-20 in enthusiast processors like the 6-8 core i7s, or larger in server CPUs).





