Hodgman

Posted 14 June 2013 - 12:52 AM

My #1 question is, for anyone who's working in the industry, how much demand do you foresee for custom graphics work in the future?

I describe myself as a "graphics programmer" on LinkedIn, and I get approached by recruiters on there for graphics programming jobs about once a month with decent salaries on offer (relocation required though -- Europe, North America, Asia).
I quit my last job as a graphics programmer about a year ago, and that company hasn't been able to find a candidate to replace me yet, so I still do occasional contract work for them.
So, in my experience, we are in good demand right now.
 

Given the budget busting nature of AAA games and the widespread availability of (from what I can tell) solid middleware, are studios likely to just use existing game engines instead of doing their own in-house rendering work? I guess I don't have a clear sense of what a graphics specialist would do in a studio that licenses an existing engine, or if one is needed at all.

There are a few kinds of roles that a graphics programmer can fill.
1) Interacting with the underlying graphics API for each platform, and building cross-platform abstractions on top of them -- this is done within the engine.
2) Building the general rendering framework, either with the raw APIs, or with a cross-platform abstraction above. Lighting systems, deferred rendering, generic post-processing chains, etc...
3) Game specific rendering requirements. e.g. motion blur on a particular character, flame jets for some specific attack, "distortion" and smoke over a spawning animation, etc...
 
Generally #1 is done by the engine team and #3 is done by the game team. #2 could be done by either, depending on the project.
Any work done by the game-side graphics programmers will likely be done using the cross-platform API provided by the engine, rather than the underlying raw APIs (GL/D3D/etc).
 
I don't think that any two games that I've worked on have ever used the exact same lighting and post-processing setup. Generally things are tweaked specifically for the needs of each game, with the engine acting as a starting point and a flexible framework for making these changes.
 

Assuming that there will still be need for graphics guys going forward, what do you guys think is the quickest path to becoming able to make useful contributions to a game? Should I try and see what I can contribute to an open source game engine? Make a software rasterizer from scratch to learn all the fundamentals?
 
This field seems to move really quickly, so I don't want to spend a lot of time learning all kinds of overly specific stuff that's actually been obsolete in industry practice for several years. How do I avoid that?

It's hard to say... When Quake blew everyone out of the water in 1996 with its efficient 6DOF software rasterizer, which used a BSP tree to sort polygons and avoid the need for a z-buffer, it was implementing an idea that had been published in 1969. You still see the same themes now: e.g. Splinter Cell Conviction came out in 2010 using a "brand new" occlusion culling technique that was first published in 1993.
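That BSP ordering trick can be sketched in a few lines: at each node, draw the subtree on the far side of the splitting plane, then the node's own polygon, then the near side, and polygons come out back-to-front with no z-buffer. This is a hypothetical toy (splitting planes reduced to 1-D positions), not Quake's actual code:

```cpp
#include <cassert>
#include <vector>

// Toy sketch of BSP back-to-front traversal (painter's-algorithm ordering
// without a z-buffer). Splitting planes are 1-D positions for brevity.
struct Node {
    float split;            // position of the splitting plane
    int   polygon;          // id of the polygon lying on that plane
    Node* front = nullptr;  // subspace with x > split
    Node* back  = nullptr;  // subspace with x < split
};

// Visit the far side of each plane first, then the node's polygon, then
// the near side: the output list is ordered back-to-front for this eye.
void backToFront(const Node* n, float eye, std::vector<int>& out) {
    if (!n) return;
    bool eyeInFront = eye > n->split;
    backToFront(eyeInFront ? n->back : n->front, eye, out);  // far side
    out.push_back(n->polygon);
    backToFront(eyeInFront ? n->front : n->back, eye, out);  // near side
}
```

Moving the eye to the other side of the root plane simply reverses the emitted order, which is exactly why the structure works for any viewpoint.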
Software rasterization was always a kind of rite of passage for graphics programmers, but just a toy for learning the basics, seeing as we all use hardware rasterizers now. These days, though, it has regained some popularity, with people using it to rasterize occluders on the CPU in order to reduce the work that's sent to the GPU. The BRDF papers I've been reading lately span the last three decades...
The above examples just go to show that, like fashion, ideas in this field routinely fall out of favour and are later rediscovered.
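The CPU occlusion rasterization mentioned above can be sketched concretely: draw big occluders into a small depth-only buffer, then skip submitting any object whose screen bounds are entirely behind what's stored. All names here are illustrative, not from any real engine, and depth is held constant per triangle to keep it short:

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// Illustrative CPU depth-only rasterizer for occlusion culling.
struct DepthBuffer {
    int w, h;
    std::vector<float> depth;  // larger values = farther away
    DepthBuffer(int w_, int h_) : w(w_), h(h_), depth(w_ * h_, 1e30f) {}
};

static float edgeFn(float ax, float ay, float bx, float by, float px, float py) {
    return (bx - ax) * (py - ay) - (by - ay) * (px - ax);
}

// Rasterize a screen-space triangle at constant depth z (a real rasterizer
// would interpolate per-vertex depth across the triangle).
void drawOccluder(DepthBuffer& db, float x0, float y0, float x1, float y1,
                  float x2, float y2, float z) {
    int minX = std::max(0,        (int)std::min({x0, x1, x2}));
    int maxX = std::min(db.w - 1, (int)std::max({x0, x1, x2}));
    int minY = std::max(0,        (int)std::min({y0, y1, y2}));
    int maxY = std::min(db.h - 1, (int)std::max({y0, y1, y2}));
    for (int y = minY; y <= maxY; ++y)
        for (int x = minX; x <= maxX; ++x) {
            float px = x + 0.5f, py = y + 0.5f;  // sample at pixel centre
            float e0 = edgeFn(x0, y0, x1, y1, px, py);
            float e1 = edgeFn(x1, y1, x2, y2, px, py);
            float e2 = edgeFn(x2, y2, x0, y0, px, py);
            bool inside = (e0 >= 0 && e1 >= 0 && e2 >= 0) ||
                          (e0 <= 0 && e1 <= 0 && e2 <= 0);  // either winding
            float& d = db.depth[y * db.w + x];
            if (inside && z < d) d = z;  // keep the nearest depth
        }
}

// A screen rectangle at depth z is occluded only if every covered pixel
// already stores a strictly nearer depth.
bool isOccluded(const DepthBuffer& db, int x0, int y0, int x1, int y1, float z) {
    for (int y = y0; y <= y1; ++y)
        for (int x = x0; x <= x1; ++x)
            if (db.depth[y * db.w + x] >= z) return false;
    return true;
}
```

In practice the buffer is deliberately low resolution (e.g. a few hundred pixels wide), since it only has to answer yes/no culling queries, not produce an image.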
 
Personally, I quite enjoyed using the Horde3D engine as a graphics programming playground. It does all the OpenGL work for you, but requires you to write your own shaders. It also lets you modify the rendering pipeline, so that you can configure different post-processing effects, or change the lighting pipeline from deferred rendering, to light-pre-pass, to forward shading, to inferred shading, etc... This is good practice for tasks #2 and #3 above.
To gain experience in task #1, you've got to make a small framework (like Horde3D) using some API. Bonus points if you port it to more than one API (e.g. a D3D and a GL version). Generally, if you've learned one graphics API, then learning a 2nd (3rd, 4th) will be easy, as they all embody the same ideas, but in different ways.
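The seam of such a framework usually looks like one interface that game-side code targets, with one backend class per raw API. A minimal sketch, with entirely hypothetical names (this is not Horde3D's actual API):

```cpp
#include <cassert>
#include <string>
#include <vector>

// Hypothetical sketch of a cross-platform rendering abstraction (task #1):
// game code programs against IRenderDevice; each raw API gets a backend.
struct TextureDesc { int width = 0, height = 0; };

class IRenderDevice {
public:
    virtual ~IRenderDevice() = default;
    virtual std::string apiName() const = 0;
    // Returns an opaque handle; a backend maps it to a GL name, a D3D
    // resource pointer, etc.
    virtual int createTexture(const TextureDesc& desc) = 0;
};

class GLDevice final : public IRenderDevice {
public:
    std::string apiName() const override { return "OpenGL"; }
    int createTexture(const TextureDesc& desc) override {
        // A real backend would call glGenTextures/glTexImage2D here;
        // this stub just records the request and hands back a handle.
        descs.push_back(desc);
        return (int)descs.size();  // 1-based opaque handle
    }
private:
    std::vector<TextureDesc> descs;
};

// A D3DDevice would implement the same interface with D3D calls; game-side
// graphics code never needs to know which backend it is talking to.
```

Porting the framework to a second API then becomes "write another subclass", which is exactly the exercise that teaches you where the APIs' ideas overlap and where they diverge.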
 

Sorry if this is a silly question, but why do shaders in particular need to be made fresh for each game?

On current-gen consoles or mobile platforms, you don't really have enough performance to spare to use generic techniques and achieve "next gen" visuals. Everything is a constant trade-off between features, quality, execution time (performance) and effort (time to implement). The approximations or assumptions that are valid for one game might not hold for another game.
Pixel shaders in particular are the most important "inner loop" out of all your code -- they can be executed millions of times per frame (1280*720 is almost 1M, and each pixel on the screen will be affected by many different passes in a modern renderer).
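The arithmetic behind that claim is easy to check: every full-screen pass runs the pixel shader once per pixel (overdraw only adds more). Pass counts below are illustrative:

```cpp
#include <cassert>

// Back-of-envelope pixel-shader invocation count per frame:
// resolution times the number of full-screen passes.
long long invocationsPerFrame(int width, int height, int fullScreenPasses) {
    return 1LL * width * height * fullScreenPasses;
}
```

At 1280x720 a single pass is 921,600 invocations (the "almost 1M" above); with, say, five passes that's about 4.6M per frame, and at 60fps roughly 276M shader executions per second, which is why every instruction in that inner loop matters.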
