

Member Since 26 Feb 2007
Offline Last Active Yesterday, 05:42 PM

#5221162 Need help choosing a language

Posted by Ravyne on 03 April 2015 - 12:21 PM

There's no sensible reason to "start with C# and gradually move to C++". Just pick a language and learn it. All the choices will be valuable learning experiences.


I'm not quite sure how to read this myself.


If you're saying "just pick C++ now, since that's where you want to get to," then I disagree.


If you're saying "Whatever you choose now will be the first of 10 or 20 languages you'll learn eventually" then I agree, but still think C++ is a poor first choice unless you have someone to mentor and guide your progress.

#5221152 University Degree - CS vs. CSGM

Posted by Ravyne on 03 April 2015 - 11:50 AM

That's good to know. I think my goal overall is to come out with well-rounded humanities knowledge on top of an engineering degree. As for doing a double major, that's something I'll have to weigh against my course load. Judging by your response, would you say it's favorable to pursue a non-technical degree? I've been wanting to take advantage of their cinema school (George Lucas went there).


I wasn't so much talking about an entirely different program -- I mostly meant that it would be relatively easy to take those 10 other electives in addition to the games track, and then you (should) have both their "regular" CS degree and their "games" CS degree, and you can put both on your resume. Minors in math or physics are also relatively easy to pick up by piggybacking on a CS degree, and are seen as a great bonus by anyone hiring CS grads.


But if you want to take unrelated coursework, that's great too. It's college -- you'll likely never again have the opportunity to devote yourself to your education essentially full-time, so make the most of it. Most game production jobs are fairly siloed as either art or technical, but there are a few jobs that span both, such as the 'technical artist' -- usually this is a person with a background in both traditional or digital art and computer science, though I could see cinematography as a reasonable substitute for the art side. Mostly, a technical artist is someone who can help the technical side understand how to support the artistic vision, help the art side understand the technical limitations and processes, and usually has a role in defining what that balance is; they might also contribute to the tools the artists use and work closely with graphics programmers. Of course, many games these days also need cinematographers proper, though those roles don't need the CS background.

#5221139 trouble parsing quake3 md3

Posted by Ravyne on 03 April 2015 - 10:58 AM

It would only display as IDP3 on a little endian machine if they read it one byte at a time and you converted it one byte at a time.


That seems to be what he's doing, and it's unlikely he's on a big-endian machine. If the OP wants to continue reading single bytes, they'll have to be re-ordered; the other solution is to not read single bytes at all, and instead read the format using the precise primitive types.

#5221136 Need help choosing a language

Posted by Ravyne on 03 April 2015 - 10:37 AM

The single most important thing to do is to start programming -- in any language. Making the 'right' language choice in the beginning seems like a big deal now, but it's literally one of the most inconsequential choices you will ever make as a budding programmer. Syntax is easy; it's the programming that's hard. Programming is not mindlessly typing C++, or Java, or assembler code -- programming is taking a big problem and splitting it apart to its atoms, then creating the best machine you can imagine to put it back together again while balancing performance, elegance, maintainability, and more -- and working with others to build that machinery on time.



Now, all that said, you can make choices that are better or poorer, and the poorer ones are to be avoided. COBOL or Perl are probably bad choices because they lack applicability in games. C++ is terribly complex -- it's gotten a lot better with the newest language revisions (C++11/14, and soon, 17), but it's still complicated enough that it's probably not a great first language unless you have someone to guide and mentor you. On the other hand, C++ is the lingua franca of the games industry and indeed most of computing -- there's a pretty high return on knowing C++ when you do take the time to learn it correctly, but it might be premature as a first language. Java, in my opinion, is a poor choice because it forces you into the dogmatic version of OOP it prescribes, since that's built into the language rules, and much of that dogma flies in the face of what we now understand to be better OOP practice. C# suffers some of that as well, but the dogmatism is less prevalent, and it gives you more options and conveniences that make it a more 'livable' language. Some people would disagree with me, but my take is that C# is an objectively better language than Java. But successful, popular games have been made in both languages.


If I were pinned down to making a recommendation, it would be C#. Or possibly plain old C. Both of those are relatively proven and industry-standard. If I venture away from mainstay languages, I've been looking at a language called Nim lately (formerly called Nimrod), which I suspect would make a great first language because it has a very friendly Python-like syntax, a very nice macro/template system for you to grow into, and it integrates both a garbage-collected heap (like C# or Java) and a manual heap (like C or C++) -- you can choose to use one, the other, or both in your programs.

#5221132 University Degree - CS vs. CSGM

Posted by Ravyne on 03 April 2015 - 10:14 AM

USC has a reasonable reputation as far as I know, and since they are a traditional school offering a games track, there's no reason you couldn't *also* take additional electives as you see fit -- and if you take enough of them, you could even come out with dual degrees. Yes, it'll cost some more money, but if you can take advantage of summers and/or can bear the load of one additional course per semester, it won't even cost you any more time. The goal is not to do the minimum you need to get your slip of paper at the end, but to come out the other side well-rounded, capable, and able to continue learning on the job and on your own time.


If your concern is to have a resume where the word "Games" isn't polluting your education line, then you'll need to dual-degree it; but if you just want the knowledge, you could supplement your games courses with free online/MOOC classes -- Stanford offers their entire CS curriculum online for free, IIRC.

#5220985 Direct3D 12 documentation is now public

Posted by Ravyne on 02 April 2015 - 01:22 PM

Indeed. Looking at https://msdn.microsoft.com/en-us/library/windows/desktop/dn770336(v=vs.85).aspx which points to https://msdn.microsoft.com/en-us/library/windows/desktop/ff476329(v=vs.85).aspx suggests that D3D12 supports D3D_FEATURE_LEVEL_9_1 hardware. Can someone confirm if that is the case?


I don't know for sure, but this probably won't mean D3D_FEATURE_LEVEL_9_1 hardware on the PC -- you still need a WDDM 2.0 driver and Windows 10 in the loop, and you're simply not going to see that driver support from IHVs for such old hardware.


What this probably is for is mobile GPUs in phones and set-top-box-style devices. Many of those devices are weird in that they are simultaneously quite modern in many ways, but not in others, because the silicon or power cost is too high for such a small device. In that respect, mobile GPUs are very modern but provide only the most indispensable parts of a GPU, and so mobile hardware that supports only a 9.1 feature level (with respect to bells and whistles) doesn't look much at all like the Direct3D 9.1 hardware we remember on the PC from 10+ years ago -- the parts that are there tend to look more like modern GPUs. In fact, modern-looking but "Direct3D 9.x" hardware is very prevalent in that space -- only a very few of the newest mobile GPUs do D3D 10.x or 11.x feature levels.

#5220980 Is it realistic to expect to make money in Unity Asset Store/UE4 Marketplace?

Posted by Ravyne on 02 April 2015 - 01:04 PM

I think, really, that it's like any other business -- except you have the "location, location, location!" part already answered.


Identify a need that isn't being adequately met, execute on it fully to requisite standards (it's not good enough to just be the best of bad options), and price it attractively -- get good value for your own time invested, but remove all question in the potential customer's mind of whether they could do it better, or cheaper, or both. I forget the name of it, but there's a control-mapping component that's very popular, and I think Unity actually acquired/invested in them. Here at Microsoft, we recently bought the company that created (and successfully sold) the UnityVS plugin (which integrates Unity with Visual Studio for scripting) so that we can give it away for free to help people make more and better games for Windows platforms using Unity. Those are examples of lucrative, core needs that someone decided to meet, and they made a good living at it.


Now, as a seller without a professional pedigree, part of identifying opportunities that aren't being met is also being honest about which ones you can fully and successfully execute on. Do only the things you can do well; don't pass yourself off as more capable than you really are. That's how bad reputations are born, and nothing kills person-to-person business like a bad reputation.


Another avenue, if you work on your own projects, is to monetize the components and assets you might have made for your own work, perhaps some time after you release your project so that there aren't a bunch of games that look like yours or have your systems.

#5220522 XBOX ONE still Big Endian ?

Posted by Ravyne on 31 March 2015 - 11:55 AM

The x86 architecture has been little endian since inception ( afaik ) so it stands to reason that a SoC or device with a x86 architecture will be little endian..PowerPC off which the XBox 360 and the Cell Processor in the PS3 were based is big-endian..


Yes. In more general terms, the chips that are little-endian are almost exclusively those with a legacy in 8-bit chips, like the x86 family. The reason for this is that 8-bit chips didn't have to think about endianness as it relates to the sequential addresses of data types spanning more than one byte. When the 8-bit Intel 8008/8080 led to the 16-bit 8086, a programmer porting code from one to the other would still expect the low byte to be at the low address (hence, little-endian) -- keep in mind that most software at the time was written in assembler, so you couldn't simply recompile the program.


Chips that came around in the post-8-bit era and didn't have that legacy are almost exclusively big-endian or bi-endian. I think in the past there were some silicon efficiencies that made big-endian more attractive, but those have surely vanished by now. Neither is really better; they're just different.

#5220260 From scratch vs Unity

Posted by Ravyne on 30 March 2015 - 12:48 PM

Any engine worth its salt allows you to implement new ways of drawing or organizing things... There's no reason you couldn't implement a Minecraft voxel -> mesh generator inside Unity.


Someone did a video-blog-style presentation on this on YouTube. I think the guy spent a week or so.


There are essentially two reasons to disqualify Unity or other engine middleware:

  • Your project is complex and has performance or other requirements that require specific tuning and makes middleware engines a bad fit.
  • Your project is so simple or unusual that you'll spend more time fighting/working-around middleware than you would rolling your own.



I don't see the cost factor / licensing as a big issue. It's an annoyance, I guess, but you are getting tons of value for the relative pittance you are paying. And it's not just the engine; it's the toolchain, support, pedigree, and community too. Roll your own engine and you have to roll your own tools, provide your own support (which can be good and bad), have no history (e.g. you'll rediscover all the platform bugs Unity fixed years ago), and have no community of users to help you figure things out -- what's more, you don't have a community of people you can hire who already know your in-house engine and tools. When you buy into Unity or other engine middleware, you're essentially time-sharing a very large, dedicated engine team for a bargain-basement price (the difference, of course, is that so is everyone else, so you don't get to dictate direction and don't get an engine specifically tuned to your needs and non-needs).

#5220255 The Atomic Man: Are lockless data structures REALLY worth learning about?

Posted by Ravyne on 30 March 2015 - 12:32 PM

Just wanted to quickly give a +1 to what Hodgman mentioned in the second half of his post: you can often get better performance *and* have fewer bugs by removing the need for mutable shared resources.




If you're at the point now of not wanting or needing to deal with it, the actionable item you can take now, with relatively little pain, is to reduce any reliance you have on mutable shared resources where you can, and take note of where you haven't. This is good practice even in single-threaded code. Almost any time you have mutable shared state (at least one mutator and at least one other reader), you are implicitly serializing the code that references it, and the more visible the state is, the more likely it becomes that a larger portion of your code is serialized in this way.


Also, as said above, you can also create "lock-free" code out of policy -- by designing your system such that operations that would otherwise have to be synchronized simply don't tread on the shared state at the same time. Double-buffering is a form of this, but more generally you can declare, for example, that all updates (mutations) of a certain kind are complete at a given point, so only (unsynchronized) reads will occur thereafter. The cheapest lock is the one you never have to take.


It's also always a balance between fine- and coarse-grained locking, and the requirements that might be imposed by your data structures. For example, do you lock the whole structure and do a lot of work before unlocking it, or do you lock only the part of the data structure that's relevant to each unit of work? That depends on your use patterns, on whether the structure is tolerant of fine-grained updates (e.g. std::vector is not, for any operation that could cause it to grow or otherwise re-alloc while someone else is holding an iterator), and on how expensive the necessary lock is.

#5219155 The coin-op market: Where is it thriving, and what companies support it?

Posted by Ravyne on 25 March 2015 - 02:44 PM

Probably the biggest difference today is that integrated graphics are now powerful enough that you can run even pretty graphically rich titles at 1080p on modest settings. If I weren't pushing the envelope too much, right now I'd wait for someone to release a small-form-factor PC (like a Gigabyte Brix) based on the coming AMD Carrizo APU (should be any week now), drop a 64-128GB mSATA or M.2 drive in it with 16GB of fast DDR3 (fast stuff, because the GPU benefits measurably), and call it good. You'd have something not terribly far behind the current-generation consoles, maybe half as powerful as the Xbox One. If you needed a little more grunt, Zotac makes some small computers with laptop-style discrete GPUs inside; those can easily match or surpass the current consoles.


Lots of the newer-style arcade machines are running on DVI/HDMI-style LCDs now, so you don't need to worry about the wonky refresh rates the old arcade tubes used and the specially-tuned GPUs they demanded. If you opt for an authentic modern arcade LCD, it'll probably be the most expensive component, probably between $600 and $1200 depending on the size.


With a little more horsepower, or maybe for basic 2D games, you could probably even use one of those inexpensive Korean 34" 4K displays (about $400), though they'll only do 30Hz at 4K and 60Hz at 1080p.


Also in that time, the ITX form factor has really been widely adopted, and you can build a very powerful system that way. My current gaming rig is an ITX board with an i7 4770 CPU (water-cooled, even), 16GB RAM, and a Radeon 290x. The case is basically the same size as a 24-pack of soda cans.


On the complete opposite end, if your needs are really minimal, a 1080p display and a Raspberry Pi 2 wouldn't be bad. It's got a GPU that's somewhere between the original Xbox and the Xbox 360 (it's capable of pushing games akin to the late Xbox/early Xbox 360 era, at 720p), a quad-core 900MHz ARM CPU with SIMD, and 1GB RAM, for 35 bucks.

#5219148 Why didn't somebody tell me?

Posted by Ravyne on 25 March 2015 - 02:19 PM

I once had a restaurant server ask if we'd like a "cara fe" of water for the table.

#5219145 I'm good at programming, is it enough?

Posted by Ravyne on 25 March 2015 - 02:05 PM

I'll leave the attitude issue alone, but I'll also reinforce that you really do need to know how to work with others -- and I don't mean getting along with them, I mean working in a group setting collaboratively.


Firstly, you need to be conversant in programming, which simply making some games doesn't prove on its own. It means that when your mentor or lead says "Here's what I need you to do. I'd like you to use the Visitor Pattern." or "We don't use Singletons here.", those statements actually mean something to you and aren't just keywords to go look up. Extra research is fine when implementing things, of course, but you need to be able to talk about those kinds of things around a boardroom table in front of the studio head and not look like a fool.


Secondly, programming in a team is very different from programming solo. You need to know lots of little skills -- how to file a well-structured bug against another programmer, for instance, or how to use source-control software in general (hopefully you are using some yourself already); but even with those basic skills, working in a repository with multiple people making and merging changes is much different from making and merging changes all on your own. You also need to be able to deal with schedules, make good time estimates, and keep to them as well as you can -- when you're working solo and something takes longer than you anticipated, it sucks enough, but when you do that on a team you might be holding them all up too (protip: don't be the guy burning hundreds or thousands of dollars of your employer's money by holding everyone else up).


Thirdly is your baseline skill set, knowledge base, and good coding style. These are the table stakes of getting in, and if you don't have them, no portfolio will get you in, no matter how good the results are. This basically means that you can talk intelligently about code and about design problems; that you have a good overview of contemporary and appropriate technologies, approaches, design patterns, and idioms -- that you can talk about them and recognize when to use them or not; and that you have a natural grasp of what good code looks and feels like -- not just the code you write, but also the interfaces your code provides for others to use, which should enable their code to look and feel good too. This also includes relevant areas of mathematics -- mostly linear algebra, plus a smattering of quaternions, geometry, statistics, and calculus.



If you've never collaborated on a project before, I suggest that's a good place to start. You can attempt to spin up your own project, but it's probably easier to find a project that's looking for help and offer your services. Depending on what your skill level actually is, you'll want to find a project that's at, or somewhat above, your level if you can -- if your skills aren't entirely developed yet, you might find that some projects with a high-caliber team won't want your help, even for free. Just take that as something to aspire to, and a clear sign that you're probably not ready for an industry position anyway. Something at your level will let you develop the kinds of skills I talked about above; something a bit above your current level will do that too, while also challenging you to grow your solo skills.

#5219137 8 bit sprite animation

Posted by Ravyne on 25 March 2015 - 01:41 PM

If by "8bit style" you mean low-resolution and low-color, as on the NES or Master System, then you definitely do not do that using a 3D package. Drawing each frame by hand is pretty much necessary, since a lot of the tricks artists used to make things look good are not something any 3D graphics package I know of can do -- e.g. using dithering intelligently, or applying colors artistically to achieve the intended effect. Palette swaps (by which I mean actually designing them) are another thing no package I'm aware of does. A computer probably could be programmed to do these things reasonably well, but no one has, AFAIK.


On the 16-bit consoles, with a larger color palette and more colors per sprite, you started to see games like Donkey Kong Country that used pre-rendered sprites, created in a 3D program and then exported with texture and lighting applied. This had a charm of its own, but it wasn't the same charm as hand-drawn sprites.



For creating hand-drawn sprites, my preferred program is Cosmigo ProMotion, which is a sort of spiritual successor to Deluxe Paint (which was popular with game artists all the way through the GBA and DS, despite being a DOS-based program). Many of those who moved off of Deluxe Paint earlier moved to ProMotion from what I understand.


ProMotion costs $80, but it's money well spent IMO. Graphics Gale is only $20, IIRC, but I don't like it as much.

#5218875 Pixel Art Sprites - An Overused Art Style?

Posted by Ravyne on 24 March 2015 - 01:22 PM

Some of the early 3D games that are more stylized than realistic (like FF7) actually age pretty well. The image above looks like an up-res screen from the PC version or maybe an emulator. It looks pretty good as-is, and doesn't appear all that different from WoW, say. A higher resolution and better texture filtering can go a long way on their own. But it didn't look that good on the PS1, not by a long shot. That's actually one of the key differences between upgrading a 2D game vs. a 3D one -- you can't really up-res a sprite game without changing it; scaling algorithms like Scale2x can look good, but they can introduce some noise, whereas 3D games can be rendered at any resolution and are never less sharp for it.


Early 3D games that tried to be realistic tend to be the ones that don't age as well. Look at any of the early 3D sports games, for instance, or military shooters on the PS1 or PS2.