graphics specialization


6 replies to this topic

#1 parallelpuffin   Members   -  Reputation: 132


Posted 13 June 2013 - 12:55 PM

Hi,

 

So I'm about 3 years into a career doing businessy programming, and I'm considering jumping tracks into games. I've heard all the usual advice about better pay / more job security / fewer hours / etc. in non-game fields, but so much of a good software engineer's job seems to be learning, understanding and adapting to domain-specific business requirements that I really want to work in a domain that's inherently interesting to me. What I'm doing now is rather uninspiring.

 

I was thinking about specializing in graphics programming, since low-level optimization and algorithm-heavy work gives me warm fuzzies. There seem to be a decent number of resources out there for learning, so I'm reasonably confident I can learn whatever is needed given enough effort. But before I get going I wanted to get some insight into the overall industry so that I have a clearer idea of what specifically I should be learning.

 

My #1 question is, for anyone who's working in the industry, how much demand do you foresee for custom graphics work in the future? Given the budget-busting nature of AAA games and the widespread availability of (from what I can tell) solid middleware, are studios likely to just use existing game engines instead of doing their own in-house rendering work? I guess I don't have a clear sense of what a graphics specialist would do in a studio that licenses an existing engine, or if one is needed at all. I imagine if everybody starts using 3 or 4 big game engines, there's not going to be enough work at the middleware companies for all the talented graphics guys out there, much less newcomers like me. (Please correct me if I'm wrong about that though!)

 

Assuming that there will still be need for graphics guys going forward, what do you guys think is the quickest path to becoming able to make useful contributions to a game? Should I try and see what I can contribute to an open source game engine? Make a software rasterizer from scratch to learn all the fundamentals? This field seems to move really quickly, so I don't want to spend a lot of time learning all kinds of overly specific stuff that's actually been obsolete in industry practice for several years. How do I avoid that?

 

Also, if there's a better place to ask this question, please let me know. Thanks in advance for your help. :)




#2 frob   Moderators   -  Reputation: 18419


Posted 13 June 2013 - 02:40 PM

If you just want to do it, you can do it now.  Download some open source project and start dabbling.
If you want to do it as a career, at least two career moves are in order.

 

In order to specialize in graphics, you need to already be inside an industry that uses it. There is little need for high-performance graphics in most business software. That is your first move. These industries obviously include entertainment (games, movies), but they can also mean fields like broadcast television, advertising, or medical and scientific rendering.

 

After you have broken into the industry of your choice, begin taking on tasks that involve rendering and graphics. Generally it is a chicken-and-egg problem: only experienced people can touch the code, and you only get experience by touching the code. Make it known that it is something you are interested in, and focus on it as a side task to your main job. That is your second transition.


Check out my personal indie blog at bryanwagstaff.com.

#3 S1CA   Members   -  Reputation: 1398


Posted 13 June 2013 - 05:37 PM

#1, what frob said. In particular, do as much and learn as much as you can about graphics programming before you try to make the jump. For junior roles, and for people with no proven graphics or engine programming experience in the games industry, a demo that shows you have a solid understanding* of the core algorithms and techniques is the only way you can differentiate yourself from all the other people who want to transfer from other industries into games.

[* When I say understanding, I mean it - if I'm interviewing you for my team I'll want to discuss the details of the techniques you chose and what the alternatives might be - from the interviewer's side it's easy to spot the difference between "copied from a book but doesn't understand how it works" and "understands"].

 

 

#2, AAA teams and projects are big enough these days that graphics programming and renderer programming are increasingly two separate (but of course closely related) specialist areas. Many big games use graphics engine middleware or already have their own proprietary engines, so there will be more demand for graphics programmers in the future than there will be for graphics engine programmers. Entry level low-level engine programming jobs are also very very rare. I've worked on a few games now that have had people who spent the majority of their time writing shaders...

 

 

#3, what to learn? I think writing a game or graphics engine, you'd spend as much time bogged down with software design issues and platform APIs as you would learning actual transferable techniques. Use an off-the-shelf engine and skip the low-level stuff unless that's really, really what you want to be doing.

 

Writing a software rasterizer is a good one for understanding a lot of underlying principles. Be careful not to get carried away with 1990s optimisation techniques and methods, though; I'd advise Fabian Giesen's series of articles for an up-to-date look at the pipeline and rasterizers: http://fgiesen.wordpress.com/category/graphics-pipeline/
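To make that concrete, here's a minimal sketch (a toy for illustration, not production code) of the core coverage test a modern-style software rasterizer is built around, the half-space/edge-function approach Giesen's series covers in depth, written in Python for readability:

```python
def edge(ax, ay, bx, by, px, py):
    # Signed area of the parallelogram spanned by A->B and A->P;
    # positive when P lies to the left of edge A->B (CCW winding).
    return (bx - ax) * (py - ay) - (by - ay) * (px - ax)

def rasterize_triangle(v0, v1, v2, width, height):
    """Return the pixels whose centres are covered by triangle v0-v1-v2
    (vertices given counter-clockwise as (x, y) tuples)."""
    covered = []
    for y in range(height):
        for x in range(width):
            px, py = x + 0.5, y + 0.5  # sample at the pixel centre
            w0 = edge(*v1, *v2, px, py)
            w1 = edge(*v2, *v0, px, py)
            w2 = edge(*v0, *v1, px, py)
            # Inside iff the sample is on the same side of all 3 edges.
            if w0 >= 0 and w1 >= 0 and w2 >= 0:
                covered.append((x, y))
    return covered
```

A real rasterizer would restrict the loop to the triangle's bounding box, step the edge functions incrementally, and apply a fill rule for shared edges, but the inside test above is the heart of it.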

 

I'd learn the common basics such as lighting (both the illumination/reflectance part and the implementation part), shadows (knowing what a shadow map is is a start, but do you know how to fix the aliasing issues?), skinning/animation, particle and other effects (a common entry-level task), and HDR (fake vs. real).
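On the shadow-map aliasing point, one standard fix is percentage-closer filtering (PCF). A toy CPU-side sketch follows; in practice this lives in a pixel shader and samples a depth texture, so the grid-of-floats "shadow map" here is purely illustrative:

```python
def pcf_shadow(shadow_map, x, y, fragment_depth, radius=1):
    """Percentage-closer filtering: rather than one binary depth
    comparison (which gives hard, aliased shadow edges), average the
    *results* of comparisons against neighbouring shadow-map texels,
    producing a soft shadow factor in [0, 1]."""
    h, w = len(shadow_map), len(shadow_map[0])
    hits = taps = 0
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            sx = min(max(x + dx, 0), w - 1)  # clamp to map edges
            sy = min(max(y + dy, 0), h - 1)
            taps += 1
            if fragment_depth <= shadow_map[sy][sx]:
                hits += 1  # this tap considers the fragment lit
    return hits / taps  # 1.0 fully lit, 0.0 fully in shadow
```

The key idea is that filtering happens on the comparison results, not on the depths themselves; averaging raw depths would produce meaningless intermediate values at occluder silhouettes.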

 

Having a rough understanding of common visibility algorithms (PVS vs. portals, etc.) can also be useful.

 

A good book to read for ideas for topics to learn? Real-Time Rendering.

 

Look at some of your favourite games - can you explain how everything is rendered? If not, start learning, start guessing, start experimenting.

 

Given the higher-than-average volume of data in graphics, a good choice of data structures and algorithms can matter. Making a good choice means understanding some underlying CPU concepts (Big-O isn't everything!).

 

Knowledge of how GPUs work and where the performance bottlenecks are in the pipeline is quite an engine-y thing but frame rate is the whole team's concern.

 

Maths, maths and more maths. It's useful for a graphics programmer. SIGGRAPH papers tend to be much easier to read when you understand at least some of the Greek bits ;)


Simon O'Connor | Personal website | Work


#4 parallelpuffin   Members   -  Reputation: 132


Posted 14 June 2013 - 12:16 AM

Thanks, this is very helpful.

 

S1CA, it sounds like you're saying that even as companies use middleware more or have existing engines from previous projects, they're not so simple to use that artists (or "technical artists") can do all the required graphics work without a lot of help from programmers. Sorry if this is a silly question, but why do shaders in particular need to be made fresh for each game? I had thought (admittedly without any particular basis) that shaders were fairly general-purpose and you wouldn't have to write new ones for every new model or environment you make? Are there any other tasks for graphics programmers besides writing shaders that are likely to be customized for each game rather than being included as part of the engine?

 

It sounds like it's not a bad idea to start working on a software rasterizer and see where that gets me. As far as making demos goes, do you think there's educational value in making them "from scratch" using OpenGL or DirectX? I can see how it might not be the best use of my time to try to make an entire engine, but would I be at a disadvantage trying to get an entry level job if I'd only ever used an off the shelf engine instead of the raw APIs?



#5 Hodgman   Moderators   -  Reputation: 27068


Posted 14 June 2013 - 12:45 AM

My #1 question is, for anyone who's working in the industry, how much demand do you foresee for custom graphics work in the future?

I describe myself as a "graphics programmer" on LinkedIn, and I get approached by recruiters on there for graphics programming jobs about once a month with decent salaries on offer (relocation required though -- Europe, North America, Asia).
I quit my last job as a graphics programmer about a year ago, and that company hasn't been able to find a candidate to replace me yet, so I still do occasional contract work for them.
So, in my experience, we are in good demand right now.
 

Given the budget busting nature of AAA games and the widespread availability of (from what I can tell) solid middleware, are studios likely to just use existing game engines instead of doing their own in-house rendering work? I guess I don't have a clear sense of what a graphics specialist would do in a studio that licenses an existing engine, or if one is needed at all.

There's a few kinds of roles that a graphics programmer can be doing.
1) Interacting with the underlying graphics API for each platform, and building cross-platform abstractions on top of them -- this is done within the engine.
2) Building the general rendering framework, either with the raw APIs, or with a cross-platform abstraction above. Lighting systems, deferred rendering, generic post-processing chains, etc...
3) Game specific rendering requirements. e.g. motion blur on a particular character, flame jets for some specific attack, "distortion" and smoke over a spawning animation, etc...
 
Generally #1 is done by the engine team and #3 is done by the game team. #2 could be done by either, depending on the project.
Any work done by the game-side graphics programmers will likely be done using the cross-platform API provided by the engine, rather than the underlying raw APIs (GL/D3D/etc).
 
I don't think that any two games that I've worked on have ever used the exact same lighting and post-processing setup. Generally things are tweaked specifically for the needs of each game, with the engine acting as a starting point and a flexible framework for making these changes.
 

Assuming that there will still be need for graphics guys going forward, what do you guys think is the quickest path to becoming able to make useful contributions to a game? Should I try and see what I can contribute to an open source game engine? Make a software rasterizer from scratch to learn all the fundamentals?
 
This field seems to move really quickly, so I don't want to spend a lot of time learning all kinds of overly specific stuff that's actually been obsolete in industry practice for several years. How do I avoid that?

It's hard to say... When Quake blew everyone out of the water in 1996 with their efficient 6DOF software rasterizer, which used BSP for polygon sorting to avoid the need for a z-buffer, they were implementing an idea that was published in 1969. You still see the same themes now, e.g. Splinter Cell Conviction came out in 2010, using a "brand new" occlusion culling technique, which was first published in 1993.
Software rasterization was always a kind of rite of passage for graphics programmers, but just a toy to learn the basics, seeing as we all use hardware rasterizers now. These days, though, it's regained a bit of popularity, with people using software rasterizers to do occlusion rasterization on the CPU in order to reduce the work that's sent to the GPU. The BRDF papers I've been reading lately span the last three decades...
The above examples just go to show that, like fashion, ideas in this field seem to routinely fall out of favour and later be rediscovered ;)
 
Personally, I quite enjoyed using the Horde3D engine as a graphics programming playground. It does all the OpenGL work for you, but requires you to write your own shaders. It also lets you modify the rendering pipeline, so that you can configure different post-processing effects, or change the lighting pipeline from deferred rendering, to light-pre-pass, to forward shading, to inferred shading, etc... This is good practice for tasks #2 and #3 above.
To gain experience in task #1, you've got to make a small framework (like Horde3D) using some API. Bonus points if you port it to more than one API (e.g. a D3D and a GL version). Generally, if you've learned one graphics API, then learning a 2nd (3rd, 4th) will be easy, as they all embody the same ideas, but in different ways.
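For a structural feel of what task #1 looks like, here's a deliberately toy Python sketch (the "API calls" are just strings naming real GL/D3D entry points; an actual abstraction layer is C++ over D3D/GL/etc. and vastly more involved): game-side code is written once against an abstract renderer, and each platform backend translates it.

```python
class Renderer:
    """Abstract cross-platform rendering interface (task #1)."""
    def clear(self, color): raise NotImplementedError
    def draw_mesh(self, mesh): raise NotImplementedError

class GLRenderer(Renderer):
    # Backend for OpenGL; returns strings standing in for real calls.
    def clear(self, color):
        return f"glClearColor{color}; glClear(GL_COLOR_BUFFER_BIT)"
    def draw_mesh(self, mesh):
        return f"glDrawElements({mesh})"

class D3DRenderer(Renderer):
    # Backend for Direct3D; same interface, different underlying API.
    def clear(self, color):
        return f"ClearRenderTargetView({color})"
    def draw_mesh(self, mesh):
        return f"DrawIndexed({mesh})"

def render_frame(renderer, meshes):
    # Game-side code targets the abstract interface only, so it runs
    # unchanged on any backend.
    calls = [renderer.clear((0, 0, 0, 1))]
    calls += [renderer.draw_mesh(m) for m in meshes]
    return calls
```

The design pay-off is exactly what's described above: once you've built (or learned) one such layer, each additional graphics API is just another backend expressing the same ideas in a different way.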
 

Sorry if this is a silly question, but why do shaders in particular need to be made fresh for each game?

On current-gen consoles or mobile platforms, you don't really have enough performance to spare to use generic techniques and achieve "next gen" visuals. Everything is a constant trade-off between features, quality, execution time (performance) and effort (time to implement). The approximations or assumptions that are valid for one game might not hold for another game.
Pixel shaders in particular are the most important "inner loop" out of all your code -- they can be executed millions of times per frame (1280*720 is almost 1M, and each pixel on the screen will be affected by many different passes in a modern renderer).
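To put rough numbers on that claim (the overdraw and frame-rate figures below are illustrative assumptions, not numbers from the post):

```python
# Back-of-the-envelope maths for why the pixel shader is the hottest
# inner loop in a renderer.
width, height = 1280, 720
passes_per_pixel = 4   # assumed: e.g. g-buffer + lighting + 2 post passes
frames_per_second = 60

shader_runs_per_frame = width * height * passes_per_pixel
shader_runs_per_second = shader_runs_per_frame * frames_per_second

print(width * height)          # 921600 -- "almost 1M" pixels per pass
print(shader_runs_per_second)  # over 200 million invocations per second
```

Even at these modest 2013-era numbers, saving a handful of instructions in a pixel shader is multiplied hundreds of millions of times per second, which is why shaders get hand-tuned per game.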

Edited by Hodgman, 14 June 2013 - 12:52 AM.


#6 parallelpuffin   Members   -  Reputation: 132


Posted 14 June 2013 - 01:31 AM

Thanks guys, this really helps a lot. I think I have a much better sense now of what's involved in graphics programming in practice... what actual roles are currently involved was pretty unclear to me and googling around turned up only vague explanations. It sounds like there's no harm in learning using an existing engine since I'd be unlikely to get a junior position that actually involved touching raw graphics API code anyway. I'll take a crack at a software rasterizer in any case because it just sounds like a really cool project. :D



#7 Hodgman   Moderators   -  Reputation: 27068


Posted 18 June 2013 - 08:28 PM

Sorry if this is a silly question, but why do shaders in particular need to be made fresh for each game?

On current-gen consoles or mobile platforms, you don't really have enough performance to spare to use generic techniques and achieve "next gen" visuals. Everything is a constant trade-off between features, quality, execution time (performance) and effort (time to implement). The approximations or assumptions that are valid for one game might not hold for another game.

An excellent example of this has just been demonstrated in this article!
http://www.gamedev.net/page/resources/_/technical/game-programming/rendering-and-simulation-in-an-off-road-driving-game-r3216
By designing something that only works within their assumptions (driving a vehicle on a heightmap with forests), they end up with a rendering pipeline and shaders that are extremely efficient, but look great for their game.
If you tried to implement the next CoD or Battlefield using their shaders, many of their assumptions wouldn't be valid, and it wouldn't look great any more ;)



