ZachBethel

Things every graphics programmer should know?


Greetings,

I am an aspiring professional graphics programmer. Interestingly enough, despite being extremely close to finishing my undergraduate degree in Computer Science (with a concentration in graphics, mind you), most of the material I know has been self-taught. Recently I bought Real-Time Rendering and Physically Based Rendering, and my mind was blown at how much I still don't know. Even after nearly 10 years of hobby experience, I feel completely ignorant. I can definitely see why people need to get PhDs in the subject to actually get anywhere in the research realm.

As fascinated as I am in the subject, I feel like I have some serious gaps in my knowledge; things that I don't even realize I need to know. I guess I'd like to hear your opinion on what every graphics programmer should know. Here are some things that I can think of:

Linear Algebra - Unfortunately, I was never able to take linear algebra (long story--I transferred schools and planning classes has been a nightmare), but it's the most obvious math subject in graphics. You won't get far without a great understanding of vectors and matrices! I'm finding that the headier parts of linear algebra, such as understanding eigenvalues (I still don't...), are a basic necessity as well. I just recently bought Gilbert Strang's book on linear algebra, so hopefully I will soon be a master of the subject. :)

Calculus - The deeper I delve into topics like radiosity and spherical harmonics, the more I realize I should have been a math major. I've had up through Calc II, and what I really need is Calc III/Differential Equations! I'm looking into fitting these classes into my schedule.

Computational Geometry - Things like Voronoi diagrams, Delaunay triangulations, convex hulls, line arrangements, etc. I feel like I don't have an excuse not to know these.

Graph Theory - This is a new realization for me. I'm finding that I need to know my way around graphs...really well.

Rasterization/Ray Tracing Experience - I feel like every graphics programmer needs to have written at least one ray tracer and one software rasterizer. I've done the latter, but not the former (yes, I'm lame). That's one reason I'm super excited to read Physically Based Rendering.

The reason I ask is I want to graduate from the "I've messed around with DirectX and little math and stuff" stage to the "I'm a real graphics programmer" stage. It seems like I basically should have been a math double major. Thankfully, it's never too late, but I'm having to scramble to learn all the math now.

What do you guys think, are there other things?

You've got to strive to know everything about graphics.

That's what I do, and I've been at it for about six years now; I learned everything myself.

Unfortunately, I could get a PhD in the subject and still wouldn't know everything. I guess I'm looking for more specific examples of what I should focus on, as well as what foundational things might be really helpful but not immediately obvious.

At a minimum, everyone should write one each of these:

- an efficient 2D software rasterizer: Bresenham lines, circles, polygon filling, drawing sprites with masks or alpha
- ray tracer
- software texture mapping

When playing with software 2D on any new device, platform, or whatever:

1. Can I draw a dot?
2. Can I draw a line of dots?
3. Can I draw a polygon of scanlines?
4. Can I copy colors from a texture map as I draw those lines?
5. Can I blend the new colors on top of what's already there?
6. Can I transform 3D coordinates to 2D screen coordinates?

At step 2, you can make simple games like old-school Asteroids. When you get to step 6, you can draw 'anything'. You'll also have an appreciation for what the hardware does.
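Since a couple of these steps trip people up in practice, here's a minimal, hedged C++ sketch of steps 1, 2, and 6. All of the names (Framebuffer, putPixel, drawLine, project) are hypothetical, and the projection assumes a camera at the origin looking down +z:

[code]
#include <cstdint>
#include <cstdlib>
#include <vector>

struct Framebuffer {
    int width, height;
    std::vector<uint32_t> pixels;            // packed 0xAARRGGBB
    Framebuffer(int w, int h) : width(w), height(h), pixels(w * h, 0) {}

    // Step 1: can I draw a dot?
    void putPixel(int x, int y, uint32_t color) {
        if (x >= 0 && x < width && y >= 0 && y < height)
            pixels[y * width + x] = color;
    }

    // Step 2: can I draw a line of dots? (integer-only Bresenham)
    void drawLine(int x0, int y0, int x1, int y1, uint32_t color) {
        int dx = std::abs(x1 - x0), sx = x0 < x1 ? 1 : -1;
        int dy = -std::abs(y1 - y0), sy = y0 < y1 ? 1 : -1;
        int err = dx + dy;
        for (;;) {
            putPixel(x0, y0, color);
            if (x0 == x1 && y0 == y1) break;
            int e2 = 2 * err;
            if (e2 >= dy) { err += dy; x0 += sx; }
            if (e2 <= dx) { err += dx; y0 += sy; }
        }
    }
};

// Step 6: can I transform 3D coordinates to 2D screen coordinates?
// Simple pinhole projection; y is flipped for top-left screen origin.
void project(float x, float y, float z, float focal,
             const Framebuffer& fb, int& sx, int& sy) {
    sx = int(fb.width  * 0.5f + focal * x / z);
    sy = int(fb.height * 0.5f - focal * y / z);
}
[/code]

Steps 3-5 mostly amount to running the same inner loop across the spans of a polygon, fetching or blending colors per pixel instead of writing a constant.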

[quote]Linear Algebra, Calculus, Computational Geometry, Graph Theory.[/quote]
Linear Algebra is probably the most important thing -- even if your theory isn't completely solid, you need a solid practical understanding: the ability to intuit transforming values between different spaces/bases, dot and cross products, etc.
Being familiar with the others is of course good and opens more opportunities, but they aren't quite as key as linear algebra.
I also missed out on doing much math in my tertiary education and it slows me down sometimes, but doesn't hold me back completely.
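To make that intuition concrete, here's a tiny hedged sketch (the Vec3/dot/cross helpers are hypothetical): re-expressing a vector in another orthonormal basis is nothing more than three dot products.

[code]
#include <cmath>

struct Vec3 { float x, y, z; };

float dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

Vec3 cross(const Vec3& a, const Vec3& b) {
    return { a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x };
}

// Re-express a world-space vector v in the space spanned by the
// orthonormal basis (bx, by, bz): each coordinate is a projection.
Vec3 toBasis(const Vec3& v, const Vec3& bx, const Vec3& by, const Vec3& bz) {
    return { dot(v, bx), dot(v, by), dot(v, bz) };
}
[/code]

That one operation, read forwards or backwards, is most of what "transforming between spaces" means day to day.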
[quote]Rasterization/Ray Tracing Experience - I feel like every graphics programmer needs to have written at least one ray tracer and one software rasterizer. I've done the latter, but not the former (yes, I'm lame). That's one reason I'm super excited to read Physically Based Rendering.[/quote]
FWIW, I've not done my pilgrimage to the home-made software rendering gods either, but I am also currently reading PBRT ;)
[quote]What do you guys think, are there other things?[/quote]
If I was interviewing a graphics programmer, I'd probably be most interested in their practical knowledge of rendering techniques. Off the top of my head, I'd ask things like:
* What are some shadow-volume algorithms? When would you use shadow-volumes over shadow-maps?
* What are some ways that you can filter shadow maps? What other techniques can you use to get better results from shadow-mapping?
* What are some techniques for rendering reflective materials?
* How would you implement DOF on SM3? What about SM5?
* Give an overview of how skinned meshes are drawn.
* What's the difference between forward and deferred shading schemes?
* What are some different lighting models, and what's special about them?
* What are the pros/cons of performing lighting in tangent space vs view or world space?
etc...

Just for funsies:


* What are some shadow-volume algorithms? When would you use shadow-volumes over shadow-maps?

'Shadow volume algorithms' is a bit ambiguous. In terms of shading calculation, I've looked into the bog-standard Crow 1977 paper and Forest's penumbra wedges idea, but it's not something I've really investigated much (see later). In terms of masking, you have the standard Z-pass and Z-fail/Carmack's Reverse.

Second part: in practice, never. Shadow maps are superior in pretty much every conceivable way these days. Filtering's better, you get free support for alpha-tested surfaces, you can amortize a lot of the cost over several frames, etc. Translucent surfaces can receive shadow maps too, which is something shadow volumes are flat-out incapable of. Point lights are a little more difficult to do, but there exist methods like dual-paraboloid and cube projection that can work pretty effectively.

Lastly, I figured out how F.E.A.R./F.E.A.R. 2 got such nice shadow edge transitions not that long ago. They actually apply a depth bias to the shadow boundary geometry so you don't get that ugly faceting artifact near silhouette edges. It's so unbelievably stupid, it's brilliant. If you must use shadow volumes, give that a whirl.
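For reference, a hedged sketch of how the Z-fail masking pass mentioned above is usually configured. This is only the stencil setup, not the volume extrusion, and it assumes an OpenGL 2.0+ context with a function loader supplying the entry points:

[code]
// Z-fail ("Carmack's Reverse") stencil setup sketch; assumes a
// GL 2.0+ loader (e.g. GLAD) provides glStencilOpSeparate et al.
#include <glad/glad.h>

void setupZFailStencil() {
    glEnable(GL_STENCIL_TEST);
    glEnable(GL_DEPTH_TEST);
    glDepthMask(GL_FALSE);                                // no depth writes
    glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);  // no color writes
    glStencilFunc(GL_ALWAYS, 0, ~0u);
    // Back faces of the volume increment where they fail the depth test...
    glStencilOpSeparate(GL_BACK,  GL_KEEP, GL_INCR_WRAP, GL_KEEP);
    // ...front faces decrement. Pixels left with a nonzero stencil
    // count are inside at least one shadow volume.
    glStencilOpSeparate(GL_FRONT, GL_KEEP, GL_DECR_WRAP, GL_KEEP);
}
[/code]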


* What are some ways that you can filter shadow maps? What other techniques can you use to get better results from shadow-mapping?

This could probably be rephrased as 'how *can't* you filter shadow maps?', and the answer is 'there aren't any ways' :)
Percentage-closer filtering is the oldest (I think) and works pretty well for the general case. Variance shadow maps and friends are rather popular these days, though I think exponential variance shadow maps in particular are the flavor of choice. Shadow maps for area lights seem to be pretty in vogue in CG research, and most approaches involve microfacet back-projection, though there are some more interesting VSM-like approaches I know of (the rather underutilized interpolation-friendly soft shadow maps seem promising, but I haven't yet gotten around to implementing them, so I can't speak with total authority). Crytek has also experimented with using some sampling noise to get softer shadows for marginal performance cost, and this looks really good if you render the contribution into a God of War-style white buffer and blur it a bit. I won't tell if you won't ;)
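As a concrete anchor for the oldest of these, here's a hedged CPU-side sketch of 3x3 percentage-closer filtering. Sampling and bias handling are deliberately simplified and all names are hypothetical; in a real renderer this lives in a pixel shader, usually leaning on hardware comparison samplers:

[code]
#include <algorithm>
#include <vector>

// Average a 3x3 neighborhood of binary depth comparisons; the
// average gives a soft transition instead of a hard shadow edge.
float samplePCF(const std::vector<float>& shadowMap, int mapSize,
                float u, float v, float receiverDepth, float bias) {
    int cx = int(u * mapSize), cy = int(v * mapSize);
    float lit = 0.0f;
    for (int dy = -1; dy <= 1; ++dy)
        for (int dx = -1; dx <= 1; ++dx) {
            int x = std::clamp(cx + dx, 0, mapSize - 1);
            int y = std::clamp(cy + dy, 0, mapSize - 1);
            lit += (receiverDepth - bias <= shadowMap[y * mapSize + x])
                       ? 1.0f : 0.0f;
        }
    return lit / 9.0f;   // fraction of taps that passed
}
[/code]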

'Better' is also a vague term. If you want to avoid acne, generating the shadow map from backfaces works fairly well, as does using midpoint shadow maps (though the latter is a lot more expensive and is NOT compatible with all filtering methods, at least not to the extent that backface maps usually are).
Improving effective resolution also has scads of research. You've got perspective shadow maps, trapezoidal shadow maps, the irregular Z-buffer idea, adaptive shadow maps, cascaded shadow maps, and the new, very promising adaptive shadow maps by way of rectilinear texture warping. That doesn't even begin to scratch the surface.
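Of those, cascaded shadow maps are probably the one you'll implement first. Here's a hedged sketch of the "practical split scheme" most implementations use, blending logarithmic and uniform split distances (after Zhang et al.); parameter names are illustrative:

[code]
#include <cmath>
#include <vector>

// Compute numCascades+1 split distances along the view frustum.
// lambda = 1 gives pure logarithmic splits, 0 gives uniform splits.
std::vector<float> cascadeSplits(float zNear, float zFar,
                                 int numCascades, float lambda) {
    std::vector<float> splits(numCascades + 1);
    for (int i = 0; i <= numCascades; ++i) {
        float t        = float(i) / float(numCascades);
        float logSplit = zNear * std::pow(zFar / zNear, t);
        float uniSplit = zNear + (zFar - zNear) * t;
        splits[i] = lambda * logSplit + (1.0f - lambda) * uniSplit;
    }
    return splits;
}
[/code]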


* What are some techniques for rendering reflective materials?

Reflective in the sense of 'analytical lights' would mean your standard rogue's gallery of BRDF models. Image-based reflection is a little less broad in scope and is really constrained by the capabilities of your target hardware more than anything. You have your cubemaps, spheremaps, esoteric-projection maps and even some raytracing done in screen space if you want to get really fancy.
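All of the image-based options start from the same reflection direction; here's a minimal sketch, reusing the hypothetical Vec3/dot helpers from the earlier basis example:

[code]
// Mirror the (unit-length) incident direction about the surface
// normal: r = i - 2(n.i)n. The result can index a cubemap directly;
// the hardware picks the face from the largest-magnitude component.
Vec3 reflect(const Vec3& incident, const Vec3& normal) {
    float d = 2.0f * dot(incident, normal);
    return { incident.x - d * normal.x,
             incident.y - d * normal.y,
             incident.z - d * normal.z };
}
[/code]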


* How would you implement DOF on SM3? What about SM5?

Rendering budget would play a larger role in determining the algorithm(s) used than shader model, IMHO. Bokeh DOF by way of scads of billboards has been receiving some attention from Epic and friends, inelegant as it may be. While they make use of D3D10+ hardware features, I think you could probably cook up something pretty similar using vertex texture fetch/render-to-vertex-buffer and point sprites on D3D9-equivalent feature levels. It probably wouldn't be as fast due to the rather inexpressive API, though.

If you want cheap and are doing bloom, you can also do things the Unreal Way™ and repurpose your bloom buffer. You don't get out-of-focus bleed, but if your game involves a lot of fast motion the artifacts probably won't be too noticeable. Infinity Ward also details some tricks to work around that problem that mostly involve blurring your circle of confusion, but that's not quite perfect either.

Also, if you want bokeh DOF but don't want to commit performance seppuku, DICE details how to do a separable blur with a hexagonal shape. Whoever thought that up needs to get a pay hike; it's brilliant, and involves decomposing the shape into a series of skewed box blurs.
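Whichever blur you pick, everything above starts from the same thin-lens circle-of-confusion term. A hedged sketch with illustrative parameter names:

[code]
#include <cmath>

// Thin-lens circle of confusion: how big a point at sceneDepth
// appears when the lens is focused at focusDistance. All units
// must match; pixelsPerUnit maps sensor size to screen pixels.
float cocRadiusPixels(float sceneDepth, float focusDistance,
                      float focalLength, float aperture,
                      float pixelsPerUnit) {
    float coc = aperture * focalLength
              * std::fabs(sceneDepth - focusDistance)
              / (sceneDepth * (focusDistance - focalLength));
    return 0.5f * coc * pixelsPerUnit;
}
[/code]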


* Give an overview of how skinned meshes are drawn.

Animate bones, collect/weight influences, project to screen(?). This seems a little too easy.

Note that I don't include things like tangent transformation since there are some more modern bump mapping approaches (thinking primarily of Mikkelsen's stuff here) that don't actually need it. You could also use quaternions or even dual quaternions if you want to save some memory/interpolators and preserve volume across deformations, respectively.
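A minimal sketch of the "collect/weight influences" step: classic linear blend skinning with the usual four-influence limit. The types and bonePalette layout here are hypothetical, and real engines do this in a vertex or compute shader:

[code]
#include <cstdint>

struct Vec4 { float x, y, z, w; };
struct Mat4 { float m[16]; };   // column-major

Vec4 mul(const Mat4& M, const Vec4& v) {
    return { M.m[0]*v.x + M.m[4]*v.y + M.m[8]*v.z  + M.m[12]*v.w,
             M.m[1]*v.x + M.m[5]*v.y + M.m[9]*v.z  + M.m[13]*v.w,
             M.m[2]*v.x + M.m[6]*v.y + M.m[10]*v.z + M.m[14]*v.w,
             M.m[3]*v.x + M.m[7]*v.y + M.m[11]*v.z + M.m[15]*v.w };
}

struct SkinnedVertex {
    Vec4    position;       // bind-pose position, w = 1
    uint8_t boneIndex[4];   // up to four influences
    float   boneWeight[4];  // assumed to sum to 1
};

// Linear blend skinning: blend each influencing bone's
// bind-pose-to-current transform, weighted per vertex.
Vec4 skin(const SkinnedVertex& v, const Mat4* bonePalette) {
    Vec4 out{0, 0, 0, 0};
    for (int i = 0; i < 4; ++i) {
        Vec4 p = mul(bonePalette[v.boneIndex[i]], v.position);
        out.x += v.boneWeight[i] * p.x;
        out.y += v.boneWeight[i] * p.y;
        out.z += v.boneWeight[i] * p.z;
        out.w += v.boneWeight[i] * p.w;
    }
    return out;   // then transform/project to screen as usual
}
[/code]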


* What's the difference between forward and deferred shading schemes?

Forward shading considers lighting an object-space query; deferred shading opts for a more screen-space approach. There's so much futzing around with G-buffer design that I don't think going into detail about the various layouts is useful -- they're too fluid. I'm actually working on expanding some of the ideas behind light-indexed deferred rendering to try and get some of the best of both worlds.
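A hedged sketch of the structural difference, with a deliberately simplified (and hypothetical) G-buffer layout. The point is the inverted loop, not the data design:

[code]
#include <cmath>
#include <vector>

// One texel of a toy G-buffer. Real layouts pack far more cleverly
// (and reconstruct position from depth instead of storing it).
struct GBufferTexel {
    float position[3];
    float normal[3];    // assumed unit length
    float albedo[3];
};

struct Light { float position[3]; float color[3]; };

// Forward: per object, loop over lights while rasterizing it.
// Deferred: rasterize everything once into the G-buffer, then run
// the light loop as a screen-space pass, as here (Lambert only):
void shadeDeferred(const std::vector<GBufferTexel>& gbuffer,
                   const std::vector<Light>& lights,
                   std::vector<float>& out) {   // 3 floats per texel
    out.assign(gbuffer.size() * 3, 0.0f);
    for (size_t i = 0; i < gbuffer.size(); ++i) {
        const GBufferTexel& g = gbuffer[i];
        for (const Light& l : lights) {
            float d[3] = { l.position[0] - g.position[0],
                           l.position[1] - g.position[1],
                           l.position[2] - g.position[2] };
            float len = std::sqrt(d[0]*d[0] + d[1]*d[1] + d[2]*d[2]);
            if (len <= 0.0f) continue;
            float ndotl = (d[0]*g.normal[0] + d[1]*g.normal[1] +
                           d[2]*g.normal[2]) / len;
            if (ndotl <= 0.0f) continue;
            for (int c = 0; c < 3; ++c)
                out[i*3 + c] += g.albedo[c] * l.color[c] * ndotl;
        }
    }
}
[/code]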


* What are some different lighting models, and what's special about them?

Isotropic microfacet BRDFs like Phong/Blinn-Phong consider the surface to have lots of bits pointing every-which-way, and are a good fit for most surfaces.

Anisotropic BRDFs, like the Ashikhmin-Shirley and Kajiya-Kay models, are designed to simulate surfaces where the little bits instead usually face along a single direction, and are practical for things like hair or brushed metal.

BSSRDFs are probably the most expressive/expensive, so you don't see them much in games, though there's some new research by d'Eon, Jimenez and Mikkelsen about efficiently implementing them on a GPU. (Blur kernels say hello) Eric Penner's got some really cool stuff that calculates the scattering ahead of time and examines the curvature of the surface to try and fit what you've got to what you precomputed. Hacky, but extremely fast and surprisingly effective.

I consider things like energy conservation to be subtopics of the above three items, and if you aren't paying attention to this, then your graphics programmer license is revoked until you do :) In short, you'll need to adjust approximate models like Blinn-Phong so that they reflect the amount of light they receive based on the properties of the surface (tl;dr: BRDF normalization), and for extra credit, adjust the diffuse reflection so that it only accounts for light that was NOT reflected by the specular component of your shading model.
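For the tl;dr part, a hedged sketch of the usual cheap Blinn-Phong normalization: the (n + 8) / 8π factor is the common approximation of the normalization term, not the exact integral.

[code]
#include <algorithm>
#include <cmath>

// Normalized Blinn-Phong specular: scale the lobe so total reflected
// energy stays roughly constant as the specular power changes.
float normalizedBlinnPhong(float nDotH, float nDotL, float specPower) {
    const float kPi = 3.14159265f;
    float norm = (specPower + 8.0f) / (8.0f * kPi);
    return norm * std::pow(std::max(nDotH, 0.0f), specPower)
                * std::max(nDotL, 0.0f);
}
[/code]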


* What are the pros/cons of performing lighting in tangent space vs view or world space?

Tangent space buys you precision and the ability to reuse tangent-space normal maps across different objects, though the latter is sort of a non-feature these days. World-space shading can also make handling scads of lights easier, as there's less transforming to be done and fewer interpolators that need to be used. Object-space shading (conspicuously absent, I note :)) can end up being the best of both worlds, actually. Contrary to popular belief, it can also play nice with skinned meshes.
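A sketch of the trade-off in code, again reusing the hypothetical Vec3/dot helpers from earlier: with a per-vertex TBN basis you either move light vectors into tangent space (cheap with one light, an extra interpolator per light after that), or move the sampled normal out to world space and do all lighting there.

[code]
// World -> tangent: three dot products (rows of the TBN matrix).
Vec3 worldToTangent(const Vec3& v, const Vec3& t, const Vec3& b, const Vec3& n) {
    return { dot(v, t), dot(v, b), dot(v, n) };
}

// Tangent -> world: the sampled tangent-space normal times the TBN
// columns; this is the path that scales better with many lights.
Vec3 tangentToWorld(const Vec3& tn, const Vec3& t, const Vec3& b, const Vec3& n) {
    return { tn.x*t.x + tn.y*b.x + tn.z*n.x,
             tn.x*t.y + tn.y*b.y + tn.z*n.y,
             tn.x*t.z + tn.y*b.z + tn.z*n.z };
}
[/code]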


[quote]Isotropic microfacet BRDFs like Phong/Blinn-Phong consider the surface to have lots of bits pointing every-which-way, and are a good fit for most surfaces.[/quote]


This might sound pedantic, but I don't think I'd call Blinn-Phong a "microfacet BRDF". Nevermind Phong, which isn't even close.


[quote name='InvalidPointer' timestamp='1330622551' post='4918253']
Isotropic microfacet BRDFs like Phong/Blinn-Phong consider the surface to have lots of bits pointing every-which-way, and are a good fit for most surfaces.


This might sound pedantic, but I don't think I'd call Blinn-Phong a "microfacet BRDF". Nevermind Phong, which isn't even close.
[/quote]

Sure it is! Blinn-Phong in particular is a (very good, at least in the normalized incarnation) approximation of a Gaussian distribution centered on the normal. You're evaluating what fraction of the facets face the halfway vector and thus reflect light into the camera.



[quote]Sure it is! Blinn-Phong in particular is a (very good, at least in the normalized incarnation) approximation of a Gaussian distribution centered on the normal. You're evaluating what fraction of the facets face the halfway vector and thus reflect light into the camera.[/quote]


A distribution is not a microfacet BRDF; it's one part of a microfacet BRDF.

Hmm, tricky. Very tricky.
* Math. Lots of math. Immense amounts of math. Linear algebra and basic computational geometry used to be pretty satisfactory. Not anymore! Need extensive calculus, signal processing, statistics, all kinds of stuff.
* Ground level knowledge. Basic rasterization and raytracing algorithms, the basic structure of a 3D render pipeline, shading concepts independent of hardware, etc.
* Currently standard rendering approaches. Forward and deferred renderers, lighting, shadows, post processing effects, spatial subdivision/acceleration, animation systems, etc.
* Debugging and performance analysis. Be able to make it work, fix it when it breaks, and hit that golden 30/60 Hz number. This implies a lot of familiarity with hardware and the tools for dissecting what is going on in that hardware.
* Generalized massively parallelized compute outside of the standard pipeline. GPGPU. We've reached the point where non-graphics GPU code is critical for graphics.
* High end/future/idealized rendering models. Ray tracing, radiosity/radiance/global illumination methods, physically motivated shading (BRDFs), basically a long term view of where we're headed.
* Art packages. You need to be proficient with Photoshop, Max/Maya/ZBrush, HDR tools, all that stuff. Plan on writing plugins for at least some of those tools.
* Photography and cinematography. Maybe how to draw and paint. This is one I only understood fairly recently. The vast majority of rendering is motivated by cinema approaches and some photography, which in turn draws a lot from old-school hand-created art. There's no point developing visual effects if you don't understand why and how they're used.

Hodgman's list might make good interview questions, but I feel that they're too narrow and specific to be useful goals. They're things you should have picked up along the way. I dislike Draco's list of low level functions, because knowing that stuff doesn't make you even a slightly competent graphics engineer. A competent engineer should probably know those things, but he should also know linear algebra and neither of those makes you good.
