

CDProp

Member Since 07 Mar 2007
Offline Last Active Apr 25 2015 04:22 PM

#5191161 How do triangle strips improve cache coherency?

Posted by CDProp on 04 November 2014 - 12:43 PM


With an index buffer you also get cache coherency; the hardware can just store the most recently accessed indices, and if one of them comes up again, it doesn't need to retransform the vertex.  Of course you need to order your vertices to more optimally enable this, but in the best case adding indices can get you significantly better vertex reuse than strips or fans (memory usage isn't everything).

 

I think this is a really important point, and here is a good link for more information about it:

 

 https://home.comcast.net/~tom_forsyth/papers/fast_vert_cache_opt.html
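To make that vertex-reuse point concrete, here is a toy, API-agnostic sketch that models the post-transform vertex cache as a small FIFO and reports the ACMR (average cache miss ratio, i.e. vertices transformed per triangle) for a given index buffer. The 16-entry size and FIFO policy are simplifying assumptions; real hardware caches differ, which is exactly what Forsyth's ordering algorithm is designed to be robust against.

```cpp
#include <algorithm>
#include <cstdio>
#include <deque>
#include <vector>

// Simulate a post-transform vertex cache as a FIFO of recently used indices.
// ACMR = (vertices that missed the cache and had to be re-transformed) / triangles.
double acmr(const std::vector<unsigned>& indices, std::size_t cacheSize = 16)
{
    std::deque<unsigned> cache;
    std::size_t misses = 0;
    for (unsigned idx : indices)
    {
        if (std::find(cache.begin(), cache.end(), idx) == cache.end())
        {
            ++misses;                    // not in the cache: the vertex gets re-transformed
            cache.push_back(idx);
            if (cache.size() > cacheSize)
                cache.pop_front();
        }
    }
    return double(misses) / (indices.size() / 3.0);
}

int main()
{
    // Two triangles sharing an edge: 4 unique vertices, 6 indices -> ACMR of 2.0.
    std::vector<unsigned> quad = {0, 1, 2, 2, 1, 3};
    std::printf("ACMR: %.2f\n", acmr(quad));
}
```

Reordering the index buffer so that triangles which share vertices are drawn close together lowers the miss count, which is what the linked cache-aware ordering algorithm optimizes for.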




#5190163 General Programmer Salary

Posted by CDProp on 30 October 2014 - 09:58 AM

Well, I don't want to derail the thread any further, but I do want to thank Tom, Quat, and stupid_programmer for all of your advice. Very helpful, and I appreciate it.




#5189962 A radiometry question for those of you who own Real Time Rendering, 3rd Edition

Posted by CDProp on 29 October 2014 - 11:01 AM

Radiance is sort of an abstract quantity, but I think it's not so bad if you think about it in terms of its dimensions. When rendering, we like to think of light in terms of geometrical optics. So, instead of being a wave with a continuous spread of energy, light is treated as discrete rays that shoot straight from the surface you're rendering to the pixel element on the screen. This takes some doing, however, because in reality light is a continuous wave (classically speaking -- no intention of modeling things at the quantum level here).

So how do you turn a continuous quantity like an EM wave into a discrete ray? By analogy, consider mass. As a human, you have a certain mass. However, that mass is not distributed evenly in your body. Some tissue is denser than others; for instance, bone is denser than muscle. Let's say you knew the mass density function for your body. That is, if someone gives you a coordinate (x,y,z) that is inside your body, you can plug it into the function and the result will be the mass density at that coordinate. How would you calculate the total mass of your body with this function? Well, you would split the volume up into a bunch of tiny cubes, sample the density function at the center (say) of each cube, multiply that density by the volume of the cube to get the mass of that cube, and then add up the masses of all the tiny cubes. The tinier the cubes, the more of them you'll have to use, but the more accurate your mass calculation will be. Where integral calculus comes into play is that it tells you the mass you get in the limiting case where the cubes are infinitely tiny and there are infinitely many of them. In my opinion, it's easier to reason about it as "a zillion tiny cubes" and just remember that the only difference with integral calculus is that you get an exact answer rather than an approximation.
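If it helps, here is the "zillion tiny cubes" idea as a few lines of C++, using a made-up linear density function purely for illustration; the midpoint sums converge on the exact integral (1250 kg for this particular density) as the cubes shrink.

```cpp
#include <cstdio>

// A made-up density for a 1 m x 1 m x 1 m block: rho(z) = 1000 + 500*z kg/m^3.
// The exact mass is the integral of rho over the block = 1000 + 250 = 1250 kg.
double density(double /*x*/, double /*y*/, double z)
{
    return 1000.0 + 500.0 * z;
}

int main()
{
    const int n = 50;              // tiny cubes per axis; more cubes -> better approximation
    const double h = 1.0 / n;      // side length of each tiny cube
    double mass = 0.0;
    for (int i = 0; i < n; ++i)
        for (int j = 0; j < n; ++j)
            for (int k = 0; k < n; ++k)
            {
                // sample the density at the centre of the cube, multiply by its volume
                const double x = (i + 0.5) * h, y = (j + 0.5) * h, z = (k + 0.5) * h;
                mass += density(x, y, z) * h * h * h;
            }
    std::printf("approximate mass: %.1f kg (exact: 1250)\n", mass);
}
```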

So consider a surface that you're rendering. It is reflecting a certain amount of light that it has received from a light source. We want to think of the light in terms of energy, unlike the mass example. The surface as a whole is reflecting a certain amount of energy every second, which we call the energy flux (measured in Watts, also known as Joules/sec). However, we don't really care what the entire surface is doing. We just want the energy density along a specific ray. So, let's break the surface down into tiny little area elements (squares) and figure out how much flux is coming from each tiny area element. We only care about the area element that is under our pixel. That gives us a flux density per unit area, which is called Irradiance (or Exitance, depending on the situation). So now we know the energy flux density being emitted from the area under our pixel. But wait! Not all of that energy is moving toward our pixel. That little surface element is emitting energy in all directions. We only want to know how much energy is moving in the specific direction of our pixel. So, we need to further break down that Irradiance according to direction, to find out how much of that Irradiance is being emitted along each direction (a.k.a. infinitesimal solid angle). This gives us an energy density with respect to time, area, and solid angle, known as Radiance.
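In symbols (using the usual radiometric notation, which I'm assuming matches the book's), that chain from flux to irradiance to radiance is:

```latex
\Phi \;[\mathrm{W}], \qquad
E = \frac{d\Phi}{dA} \;[\mathrm{W\,m^{-2}}], \qquad
L = \frac{d^2\Phi}{dA\,\cos\theta\,d\omega} \;[\mathrm{W\,m^{-2}\,sr^{-1}}]
```

The cos θ accounts for the projected area of the surface element as seen from the ray's direction.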


#5189698 General Programmer Salary

Posted by CDProp on 28 October 2014 - 08:16 AM

Ok, I've got a salary negotiation question, and hopefully no one will mind me tacking it onto this thread.

 

When I first accepted my current position, I accepted a low-ball offer, in part because a) the job was (is) awesome, b) I had no higher education, and c) I only had a few years of professional programming experience. I just felt the job was much cooler than any job I could hope to find at that stage of my life, and for the most part, I was right. That was about five years ago. Since then, I have gone back to school and have almost completed my bachelor's in Physics. I have received incremental raises (cost-of-living, or slightly above), but I do not believe these raises have been commensurate with my increase in experience and education. I am now well below the salary indicated as "average" by every salary survey I can find (Salary.com, Salary Fairy, Gamasutra, etc.) -- about 25% below.

 

This is not a game programming job, by the way. I'm a graphics programmer for a company that makes training simulators (which are very similar to your video games in most respects!). I want to keep the company identity confidential, so I'll leave it at that.

 

So here's the rub: my company is not doing very well, and they are in no position to be handing out raises. Additionally, I am in no position to be searching for another job because I still have 6 months left at school. I talked to our CEO to see if we could maybe come up with a plan to bring me up to speed (say) by the time I graduate next year, and I was rebuffed. I was told that we could revisit the question in a year or so, and if the company is doing better, then maybe.

 

So after having become "that guy" who brought up salary negotiations in the company's time of need (yeesh), and was turned away, I don't know what to do. My main concern isn't the short-term earnings, it's what it will mean for my salary track in the long-term. What if (heaven forbid) the company folds, and I find myself looking for a new job? I will get low-balled by every company out there on the basis of my previous salary. In addition to that risk, I feel that they're essentially asking me to take a pay cut for the company, which wouldn't even be out of the question if I felt like it would be appreciated, but I don't think they see it that way. Lastly, we are a small company, but our overall costs run in the millions of dollars per year, and so even if the company is not doing well, I hardly think that a $15k salary bump for one employee is going to affect things very much.

 

What do decency and decorum demand that I do here? Should I just drop the issue until the company is doing better, or at least until I graduate? Should I be looking for other work, or should I not even bother until I'm done with school?




#5189614 General Programmer Salary

Posted by CDProp on 27 October 2014 - 11:38 PM

I'll take $20 per page, if that includes pages generated by the script / template that I write. :P




#5160846 Just a couple of Data-Oriented Design questions.

Posted by CDProp on 16 June 2014 - 08:13 AM

I'm not trying to use it for every piece of code. I'm trying to use it to speed up entity/component updates. I must say, what I'm asking seems like a totally reasonable request. That is, help me understand how data-oriented design is used in games. So far, I've gotten some good replies along with at least two lectures about how I shouldn't be using data-oriented design like a golden hammer. Is that what I'm doing? Because most of the reading I've done on the subject uses this sort of entity-component update as an example of precisely where DOD comes in handy. Is there some way in which I'm misapplying the concept? If so, it would be most helpful if you would be explicit about it.


#5134174 Arithmetic vs. geometric mean avg luminance during nighttime scenes

Posted by CDProp on 24 February 2014 - 01:20 PM

Greetings, all.

 

I was wondering if we could discuss this issue a bit. For the purposes of simple exposure control, it seems common to store the log of the luminance of each pixel in your luminance texture. That way, when you sample the 1x1 mip map level and exponentiate it, you end up with the geometric mean luminance of the scene. This is done to prevent small, bright lights from dimming the scene.

 

I find that this works really well, but perhaps a little too well. I am using a 16F texture, so the brightest pixel I can have is 65504. If I have a really dark nighttime scene, such that things look barely visible without any lights, and then I point a bright spotlight at the player (just a disc with a 65504-unit emission), it hardly affects the exposure at all. I would expect a bright light to blind the player a bit and ruin her dark adaptation, so that the rest of the scene looks really dark. I have found that the light needs to cover nearly 20% of the pixels on the screen before it begins to have this effect.

 

So I switched over to using an arithmetic mean (just got rid of the log/exp on each end) and now it works more like what I would expect.
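For what it's worth, the difference between the two means is easy to see in isolation; the values below are made up purely to mimic a dark scene with one very bright pixel, and the geometric mean is exactly what the log-luminance mip chain computes.

```cpp
#include <cmath>
#include <cstdio>
#include <vector>

int main()
{
    // A very dark scene with one extremely bright pixel (values are illustrative).
    std::vector<double> lum(1000, 0.001);
    lum[0] = 65504.0;
    const double delta = 1e-4;     // guards against log(0)

    double sum = 0.0, logSum = 0.0;
    for (double l : lum) { sum += l; logSum += std::log(delta + l); }

    const double arithmetic = sum / lum.size();
    const double geometric  = std::exp(logSum / lum.size());
    std::printf("arithmetic mean: %g\n", arithmetic);  // dominated by the bright pixel
    std::printf("geometric mean:  %g\n", geometric);   // barely moves
}
```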

 

If you were in my shoes, would you switch to an arithmetic mean, or would you try to find exposure settings that will work better with a geometric mean? 

FWIW, my exposure-control/calibration function is just hdrColor*key/avgLum, where key is an artist-chosen float value, and avgLum is the mean luminance (float). After that, I'm tone mapping with Hable's filmic curve. If you have any suggestions on how to improve it, that would be most helpful. I suppose I could also experiment with histograms and so forth, but I'm not sure if they're meant to solve this particular problem.
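For concreteness, here is a CPU-side sketch of that calibration/tonemap chain. The curve constants and the 11.2 white point are the commonly published values for Hable's operator, and `key` and `avgLum` are the same quantities described above; treat all of them as starting points to tune rather than a definitive implementation.

```cpp
#include <cmath>

// Hable's filmic curve with the commonly published constants.
static float hable(float x)
{
    const float A = 0.15f, B = 0.50f, C = 0.10f, D = 0.20f, E = 0.02f, F = 0.30f;
    return ((x * (A * x + C * B) + D * E) / (x * (A * x + B) + D * F)) - E / F;
}

// Exposure as described above (hdr * key / avgLum), then tonemap and
// normalise so that the chosen white point maps to 1.0.
float toneMap(float hdr, float key, float avgLum)
{
    const float whitePoint = 11.2f;
    const float calibrated = hdr * key / avgLum;
    return hable(calibrated) / hable(whitePoint);
}
```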




#5130724 How much planning do game programmer before writing a single line of code and...

Posted by CDProp on 11 February 2014 - 11:58 PM

I'll tell you what my experience over the years has been. When I was very green, I didn't do much planning at all. I just sat down and started writing code. If the problem was geometrical in nature (as is often the case in 3D games), then I maybe had some diagrams that I drew, but those were really only for the geometry of the problem, and nothing to do with the final code design. My code designs were awful, by the way. I remember one class that took 12,000 lines of code because it did everything. Obviously, this concept of coding by the seat of my pants wasn't working.

 

So, I switched philosophies. I decided that I was going to plan everything beforehand, with class diagrams and whatnot. My new motto was, "A receptionist who doesn't know C++ ought to be able to implement this from my documentation alone." I felt that I shouldn't write a line of code unless I could prove on paper that it would work. Yeah, that didn't work out too well for me, either. It's like trying to see 100 moves ahead in chess.

 

Here's the thing. A new programmer doesn't have the ability to take a complex problem, break it down in their head, and then immediately sit down and start coding -- nor do they have the ability to sit down and plan the whole thing out with UML-like diagrams. I can't speak for everybody, but I had to go through this rigmarole of trying solutions, realizing they were crappy, and doing it differently next time. For me, there was no way to short-cut that process.

 

These days, I can do a much better job reasoning about complex problems in my head, and having a good intuition about what form the solution should take. I'm also better at planning complex problems on paper. So, I used to suck at both. As I gained experience, I got better at both. Go figure.

 

I always write a little something beforehand. It doesn't have to be much. At the very least, it helps to write down a set of goals and requirements so that I don't go off-track. And, of course, I'm working on a team, so it's often the case that I need to communicate my ideas to others, and that usually entails writing some documentation. Other than that, I tend to use notes and diagrams as a sort of secondary storage -- it's difficult for me to keep zillions of details in my head, so if I think of something that I don't want to lose, I write it down. That about sums up the balance I've found for myself.




#5086144 Deferred Shading lighting stage

Posted by CDProp on 15 August 2013 - 09:31 AM

That material class probably covers all that you need at this stage in terms of material options. However, you might find it useful to have separate shaders to cover the cases where a) you have untextured geometry, b) you have geometry with a diffuse texture, but no specular or normal map, c) you have a diffuse texture and a normal map, but no specular, d) etc. If you handle all of these cases with the same shader, you end up sampling textures that aren't there (or else introducing conditional branching into your shader code). Of course, if everything in your game has all of these textures, then it isn't a problem. Unfortunately, I am not that lucky because I have to render some legacy models, some of which have untextured geometry.

 

As you start working with more advanced materials, you may find that your shader inputs grow in number and become more specialized, and so the number of material shaders you use will grow as well.
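One common way to handle that growing set of cases, sketched below, is to compile one shader variant per feature combination and pick it with a bitmask. ShaderHandle and the variant-compilation step are hypothetical stand-ins for whatever your engine actually uses (e.g. prepending #define lines to a shared source file).

```cpp
#include <cstdint>
#include <unordered_map>

// Material features that decide which shader variant to use.
enum MaterialFeatures : std::uint32_t
{
    HasDiffuseMap  = 1u << 0,
    HasNormalMap   = 1u << 1,
    HasSpecularMap = 1u << 2,
};

using ShaderHandle = int;  // placeholder for the engine's real shader type

ShaderHandle shaderForMaterial(std::uint32_t features)
{
    static std::unordered_map<std::uint32_t, ShaderHandle> cache;
    auto it = cache.find(features);
    if (it != cache.end())
        return it->second;

    // Stand-in for compiling a variant with the matching #defines
    // (a compileVariant(features) call in a real engine).
    const ShaderHandle handle = static_cast<ShaderHandle>(cache.size());
    cache[features] = handle;
    return handle;
}
```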




#5086126 Deferred Shading lighting stage

Posted by CDProp on 15 August 2013 - 08:35 AM

What I would recommend doing is outputting your positions and normals in view space. If you have any tangent-space normals, transform them in your material shader before outputting them to your g-buffer. That way, you don't need to store tangents or bitangents. I do think it would be a good idea to output specular and ambient properties. If you find that memory bandwidth becomes a problem, then there are some optimizations you could try. For instance, you could reconstruct the position from the depth buffer, thus getting rid of the position texture in your g-buffer. You could also store just two components of each normal (say, x and y) and then use math to reconstruct the third in your light shaders. Even though these reconstructions take time, it's often worth it because of the memory bandwidth savings. Also, if you haven't already done so, you can try using a 16-bit half-float format instead of a full 32-bit floating-point format.
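As a sketch of the two-component normal trick: store x and y of a unit-length view-space normal and rebuild z when lighting. Note the assumption that z is non-negative in view space, which breaks down at grazing angles; sturdier encodings (e.g. octahedral mapping) exist if that becomes a problem.

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

// Rebuild the third component of a unit normal from the two stored ones,
// assuming the normal lies in the +z hemisphere of view space.
Vec3 decodeNormal(float nx, float ny)
{
    const float zSq = 1.0f - nx * nx - ny * ny;
    const float nz  = std::sqrt(zSq > 0.0f ? zSq : 0.0f);
    return {nx, ny, nz};
}
```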




#5085850 Deferred Shading lighting stage

Posted by CDProp on 14 August 2013 - 09:58 AM

That seems more or less correct to me. At some point you'll want to implement some sort of light culling so that you're not shading every pixel with every light. But yeah, typically you'll have one shader for every material type, and one shader for every light type (directional, point, ambient, etc.). In your first pass, you bind the G-buffer as your render target. You render the scene using the material shaders, which output normals, diffuse, etc., to their respective textures in the G-buffer. In the second pass, you go one light at a time and render a full-screen quad using the correct light shader for each light. The light shader samples from each of the G-buffer textures and applies the correct lighting equations to get the final shading value for that light. Make sure to additively blend the light fragments at this stage.

 

What a lot of people do, in order to cut down on the number of pixels being shaded with a given light, is to have some geometric representation for the light (a sphere for a point light, for example). Before rendering the full-screen quad with the point light shader, they'll stencil the sphere in so that they can be sure to only shade pixels within that sphere. Some even go as far as to treat the sphere almost like a shadow volume, stenciling in only the portions where scene geometry intersects the sphere. This gives you near pixel-perfect accuracy, but it might be overkill. I've been reading lately that some people just approximate the range of the point light using a billboarded quad, because the overhead in rendering a sphere into the stencil buffer (let alone doing the shadow volume thing) is greater than the time spent unnecessarily shading pixels inside the quad that the light can't reach.

 

Of course, a real point light can reach anywhere. If you were to use a sphere to approximate the extent of the point light, you'd have to use a somewhat phony falloff function so that the light attenuates to zero at the edge of the sphere.
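For example, a falloff of roughly this shape is common: physically the light falls off as 1/d², which never reaches zero, so it is multiplied by a window that smoothly hits zero exactly at the chosen radius. The exact window below is one reasonable choice, not a standard.

```cpp
#include <algorithm>

// Inverse-square falloff multiplied by a window that reaches zero (in both
// value and slope) at 'radius', so the light fits inside its bounding sphere.
float attenuation(float distance, float radius)
{
    const float x = std::min(distance / radius, 1.0f);
    float window = 1.0f - x * x;
    window *= window;
    return window / (distance * distance + 1.0f);  // +1 keeps it finite near d = 0
}
```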




#5085730 Radiometry and BRDFs

Posted by CDProp on 13 August 2013 - 08:55 PM

Thanks, Hodgman. I have some thoughts on that idea, although I may have it wrong. I agree that the pixel will subtend a solid angle from the point of view of the surface point that I'm shading. However, I am not certain that it matters in this case. Because we are rendering a perfectly focused image, I believe that each point on our pixel "retina" can only receive light from a certain incoming direction. Here's what I mean (I apologize if a lot of this is rudimentary and/or wrong, but it helps me to go through it all).

 

If you have a "retina" of pixels, but no focusing device, then a light will hit every point on the retina and so each pixel will be brightened:

 

[Figure: unfocused retina; every pixel receives light from all directions]

If you add a focusing device, like a pinhole lens, then you block all rays except those that can make it through the aperture:

 

[Figure: pinhole aperture blocks all rays except those passing through it]

So now, only one pixel sees the light, and so the light shows up as it should: as a point. We now have a focused image, albeit an inverted one. If you widen the aperture and put a lens in there, you'll catch more rays, but they'll all be focused back on that same pixel:

 

[Figure: wider aperture with a lens focuses the extra rays onto the same pixel]

And so I might as well return to the pinhole case, since it is simpler to diagram. I believe that having a wider aperture/lens setup adds some depth of field complications to the focus, but for all intents and purposes here, it can be said (I think) that a focusing device has the effect of making it so that each pixel (and indeed, each sub-pixel point) on the retina can only receive light from one direction:

 

[Figure: the single direction a given pixel can "see" through the pinhole]

The orange ray shows what direction the pixel in question is able to "see", and any surface that intersects this ray will be in the pixel's "line of sight." Each pixel has its own such line of sight:

 

[Figure: each pixel's own line of sight]

With rasterization, we have things sort of flipped around. The aperture is behind the retina, but the effect is more or less the same. If I put the retina on the other side of the aperture, at an equal distance, I get this:

 

[Figure: retina moved in front of the aperture, as in rasterization]

Now we can see the aperture as the "eye position", the retina as the near plane, etc. The orange rays are now just view vectors, and they are the only directions we care about for each pixel. The resulting image is the same as before, except it has the added bonus of not being inverted (as it would be with a real lens).

 

So with that said, here is what happens if I redraw your diagram, with 5 sub-pixel view vectors going through a single pixel:

 

[Figure: five sub-pixel view vectors through a single pixel, forming a small frustum]

So, the single pixel ends up covering the entire light blue surface. You can see that view vectors form a sort of frustum, when confined to that pixel.

 

I've also added a green point light, with 5 rays indicating the range of rays that will hit that light blue patch. All 5 of those green rays will end up hitting the retina somewhere, but only one of them is collinear with one of the orange "valid directions".
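If it's useful, here is the "one valid direction per pixel" idea as code: a pinhole camera at the origin looking down -z, mapping each pixel centre to its single view ray. The resolution and field of view are arbitrary example values.

```cpp
#include <cmath>
#include <cstdio>

struct Vec3 { float x, y, z; };

// Direction of the single ray that pixel (px, py) "sees" through a pinhole at
// the origin, with the image plane one unit in front of the eye along -z.
Vec3 viewRay(int px, int py, int width, int height, float vfovRadians)
{
    const float t      = std::tan(vfovRadians * 0.5f);
    const float aspect = float(width) / float(height);
    const float ndcX   = ((px + 0.5f) / width) * 2.0f - 1.0f;
    const float ndcY   = 1.0f - ((py + 0.5f) / height) * 2.0f;
    Vec3 d = { ndcX * aspect * t, ndcY * t, -1.0f };
    const float len = std::sqrt(d.x * d.x + d.y * d.y + d.z * d.z);
    return { d.x / len, d.y / len, d.z / len };
}

int main()
{
    const Vec3 r = viewRay(400, 300, 800, 600, 1.0472f);  // centre pixel, 60 degree vfov
    std::printf("centre ray: (%f, %f, %f)\n", r.x, r.y, r.z);
}
```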




#5085582 Radiometry and BRDFs

Posted by CDProp on 13 August 2013 - 12:31 PM

Ugh, so I forgot to ask my BRDF-related questions.

 

What I'm really trying to figure out here is how I would create a BRDF that represents a perfectly reflective surface, i.e. a surface where there is zero microgeometry, zero absorption, zero Fresnel, and 100% reflectance, such that each ray of light is reflected perfectly at the same angle as the incident angle. 

 

[Figure: a perfect mirror (blue) reflecting a point light (orange) into one pixel (green)]

Here is a situation where the perfect mirror (blue) is reflecting light from a point light (orange) into a pixel (green segment, with the eye point being the green dot). Because we're dealing with perspective projection, which attempts to simulate a sort of lens, we only care about light coming in along the view vector. The orange ray is therefore the only ray we care about. I'm beginning to think, as I type this, that my difficulty in grasping this problem has something to do with the unrealistic nature of point lights that I mentioned earlier, and perhaps also the unrealistic nature of a perfect reflector. But I digress.

 

The problem I'm trying to solve is that I have this E_L cos θ term, which is the irradiance at the point on the surface where the ray bounces, and this makes perfect sense to me. However, now I need to create a BRDF that will reflect all of that light in one direction, and return zero in all other directions. The catch is that this function can't return 1 in the required direction, because the units would be wrong. The E_L cos θ term is in W·m⁻², and the BRDF needs to return something in units of sr⁻¹. If I return 1, then the end result is that the calculated radiance is the same magnitude as the irradiance, even though the units are different, which seems too easy. Can anyone help me figure out the right way to think about this problem so that I can figure out what this BRDF should look like?
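For reference, the standard way to write a perfect-mirror BRDF (in the usual radiometric convention, not necessarily the exact notation of the book) is with a Dirac delta that has the cosine divided out, so the units come out to sr⁻¹ and the reflection integral collapses to the incoming radiance:

```latex
f_r(\omega_i, \omega_o) = \frac{\delta(\omega_i - R(\omega_o))}{\cos\theta_i}
\qquad\Rightarrow\qquad
L_o(\omega_o) = \int_\Omega f_r(\omega_i, \omega_o)\, L_i(\omega_i)\cos\theta_i \, d\omega_i = L_i(R(\omega_o))
```

where R(ω_o) is the mirror reflection of the outgoing direction about the surface normal.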

 

Edit: I also misspelled BDRF multiple times. =P I can't fix it in the title, I don't think.

Edit2: No I didn't.




#5085573 Radiometry and BRDFs

Posted by CDProp on 13 August 2013 - 12:06 PM

Greetings. I've been reading a lot about radiometry lately, and I was wondering if any of you would be willing to look this over and see if I have this right. A lot of the materials I've been reading have explained things in a way that is a little difficult for me to understand, and so I've tried to reformulate the explanations in terms that are a little easier for me to comprehend. I'd like to know if this is a valid way of thinking about it.

 

So, radiant energy is simply a matter of the number of photons involved, times their respective frequencies (times Planck's constant). SI Units: Joules.

 

Radiant flux is a rate. It's the amount of radiant energy per unit of time. SI Units: Joules per Second, a.k.a. Watts. If you have a function that represents the radiant flux coming out of a light source (which may vary, like a variable star), and you integrate it with respect to time, you'll get the total radiant energy emitted over that time. 

 

The next three quantities are densities. A brief aside about densities. Let's think about mass density, which is commonly talked about in multivariable calculus courses, as well as in many freshman calculus-based physics courses. You have a block of some solid substance. Let's say that the substance is heterogeneous in the sense that its density varies, spatially. One might be tempted to ask, "What is the mass of the block at the point (x,y,z)?" However, this question would be nonsensical, because a point has no volume, and therefore can have no mass. One can answer the question, "What is the mass density at this point?" and get a meaningful answer. Then, if you wanted to know the mass of some volume around that point, you could multiply the density by the volume (if the volume is some dV, small enough that the density doesn't change throughout it), or else integrate the density function over the volume that you care about.

 

So, in terms of radiometry, the three density quantities commonly spoken-of are irradiance, radiant intensity, and radiance.

 

Irradiance is the power density with respect to area. The SI units are W·m⁻². So, if you have some 2-dimensional surface that is receiving light from a light source, the Irradiance would be a 2-dimensional function that maps the two degrees of freedom on that surface (x,y) to the density of radiant flux received at that point on the surface. Exitance is similar to Irradiance, with the same exact units, but describes how much light is leaving a surface (either because the surface emits light, or because it reflects it). As with all densities, it doesn't make sense to ask, "How much power is being emitted from point (x,y) on this surface?" However, you can ask, "What is the power density at this point?", and if you want to know how much power is emitted from some area around that point, you have to multiply by some dA (or integrate, if necessary).

 

Radiant Intensity is power density with respect to solid angle. The SI units are W·sr⁻¹. Unlike irradiance, which gives you a density of power received at a certain point, radiant intensity tells you the power density being emitted in a certain direction. So, a point light (for example) typically emits light evenly in all directions. If the point light emits a radiant flux of 100 W, then its radiant intensity in all directions is about 8 W·sr⁻¹. If it's not an ideal point light, then its radiant intensity might vary with direction. However, if you integrate the radiant intensity over the entire sphere, then you will get back the original radiant flux of 100 W. Again, it doesn't make sense to ask, "How much power is being emitted in this direction?", but you can ask, "What is the power density in this direction?", and if you want to know how much power is being emitted in a small range of directions (solid angle) around that direction, then you can integrate the radiant intensity function over that solid angle.

 

Radiance is the power density with respect to both area and solid angle. The SI units are W·m⁻²·sr⁻¹. The reason you need radiance is for the following situation. Suppose you have an area light source. The exitance of this light source may vary spatially. Also, the light source may scatter the light in all directions, but it might not do so evenly, so it varies with direction as well. So, if you want to know the power density of the light being emitted from point (x,y) on the surface of the area light, specifically in the direction (θ,ϕ), then you need a density function that takes all four variables into account. The end result is a density function that varies along (x,y,θ,ϕ). These four coordinates define a ray starting at (x,y) and pointing in the direction (θ,ϕ). Along this ray, the radiance does not change. So, it's the power (flux) density of not just a point, and not just a direction, but a ray (both a point and a direction). Just as with the other densities, it makes no sense to ask, "What is the power (flux) being emitted along this ray?" It only makes sense to ask, "What is the power density of this ray?" And since a ray has both a location and a direction, the density we care about is radiance.
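Collected in one place, with the point-light example above worked out:

```latex
\Phi \;[\mathrm{W}], \qquad
E = \frac{d\Phi}{dA} \;[\mathrm{W\,m^{-2}}], \qquad
I = \frac{d\Phi}{d\omega} \;[\mathrm{W\,sr^{-1}}], \qquad
L = \frac{d^2\Phi}{dA\,\cos\theta\,d\omega} \;[\mathrm{W\,m^{-2}\,sr^{-1}}]
```

For the isotropic 100 W point light, I = Φ / (4π sr) ≈ 7.96 W·sr⁻¹, which is the "about 8" figure above.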

 

The directional and point lights that we are used to using are physically-impossible lights for which it is sometimes difficult to discuss some of these quantities.

 

For a point light, it's meaningless to speak of exitance, because a point light has no area. Or perhaps it's more correct to think of the exitance function of a point light as a sort of Dirac delta function, with a value of infinity at the position of the light, and zero everywhere else, but which integrates to a finite non-zero value (whatever the radiant flux is) over R³. In this sense, you could calculate the radiance of some ray emanating from the point light, but I'm thinking it's more useful to just calculate the radiant intensity of the light in the direction that you care about, and be done with it.

 

For a directional light, it almost seems like an inverse situation. It's awkward to talk about radiant intensity because it would essentially be like a delta function, which is infinite in one direction, and zero everywhere else, but which integrates to some finite non-zero value, the radiant flux. Even the concept of radiant flux seems iffy, though, because how much power does a directional light emit? It's essentially constant over infinite space. It's easier to talk about the exitance of a directional light, though.

 

In any case, even with these non-realistic lights, it's easy to talk about the irradiance, intensity, and radiance of surfaces that receive light from these sources, which is what we typically care about.

 

How did I do?




#5082661 I feel like my graphics programming career is stagnating. Is it my fault? Wha...

Posted by CDProp on 02 August 2013 - 10:14 PM

I wanted to try to keep this as succinct as possible, because I'm definitely sensitive to the fact that no one wants to read paragraphs upon paragraphs of some schmoe's biography. However, I can't decide what else to cut. So, I apologize for the length, but if you are a professional game programmer (especially a graphics or engine programmer), I would be much obliged if you could read this monstrosity and give me whatever advice you can give. I desperately need direction!

 

I am one of the countless programmers who got their foot in the door without a degree. In 2006, I got a job at a local game developer as a sort of build engineer. By the end of my first project, I had automated most of my job, and so I had the programmers on my team give me some basic programming tasks to work on. I eventually became a full-time programmer. In 2008, the company went bust, and so I lost my job.

 

It turns out that two and a half years of game programming experience at a defunct (and honestly, crappy) game development studio does not make for a great resume, especially if you don't have a degree. So, I took a low-paying job at a very tiny web development company (think "working-in-some-dude's-basement" tiny). I had to learn a completely new skill set. There, I did both back-end (Perl, MySQL) and front-end (HTML, jQuery w/ AJAX, CSS) work. Unfortunately, that company also went bust 5 months later.

 

Shortly after that, in late 2009, I found another job as a graphics programmer for a simulation company. This is my current job, and I've been here for almost 4 years.

 

So that about brings you current on my job history.

 

As you can probably tell, at the time that I was hired for my current position, they weren't looking for a graphics programming guru (if they were, they would not have hired someone with so little experience!). They were just looking for a good, smart programmer who could take ownership of the visual side of things. During the interview process, I programmed a very simple DX9 demo involving a pool of water (environment-mapped reflections, refractions, Fresnel) and an archway that cast a planar shadow. I modeled everything myself in Blender and exported it to a custom (albeit simplistic) file format using a script I wrote in Python. The company uses OpenGL, which I had no experience with, but I guess they decided that I knew enough about graphics to take ownership of their visual side of things.

 

The problem I am having with this job is that they have decided that their graphics fidelity needs are very modest, and so they've made realistic graphics programming an extremely low priority. Most of the time, they fill my plate with tasks that are only peripherally related to graphics programming. For instance, they began work on a new level editor, and they needed me to code all of the 3D GUI stuff for it (dropping objects into the scene, picking, moving, rotating them, etc.). It turns out that this is 80% of the programming needed on it, and I quickly became THE owner of the level editor, responsible for the other 20% as well. For the first two years of this job, about 90% of what I did was related to this level editor. I essentially felt like a tools programmer, not a graphics programmer.

 

Edit: There are many other examples of non-graphics stuff that they've had me work on. but for the sake of brevity, I won't list them!

 

 

Meanwhile, I'm trying to get them excited about updating their graphics fidelity. When I first got there, their graphics already looked old (think original Half-Life, but with higher-res textures). I've done a few things over the years to update the graphics (added some normal mapping, gloss mapping, and splatting to reduce the tiled look in grass and dirt textures -- all stuff that has been commonplace for more than a decade). When you combine this with the third-party libraries that I've integrated for ocean and sky rendering, the graphics look quite a bit nicer than they once did. But they still look horrendously outdated.

 

I thought I had them convinced, at one point, that we really needed to change things. They gave me the go-ahead to re-architect the visual software from the ground-up so that it would be more flexible. The old visual software was so static; it was full of hard-coded behavior and ad-hoc hacks, and adding new stuff to it was very cumbersome. So, I took several months to reformulate things, and we just shipped our first project on the new architecture. Yay! I'm ready to add some new awesome effects.

 

But now they are loading my plate with more menial tasks that, at this point, I feel would be better handled by a junior programmer under my direction. We still have a long way to go on the graphics front. I haven't even been able to sell them on the importance of HDR yet, for example. We're still using simple single-buffer shadow mapping, and so we don't have full-scene shadows. I try to stay abreast of new developments, and I keep reading about things that I really want to do -- tiled light culling for nighttime simulations with lots of headlights and other work lights, image-based reflections to make the rain effects look more realistic, etc. -- that I think have practical value for the company, but I'm having a tough time selling them on it.

 

I really do understand, to some extent. The items they are having me work on are things that the customers are asking for, and there really isn't anyone else in the company who works in this area. However, I don't know if this is working for me anymore. I need a steep learning curve that I can climb. I feel like I'm on a plateau.

 

So I know what you're probably thinking. "Why not work on these projects in your own time, and then present them to the company when you have a working demo?"

 

And the truth is, I do work on side-projects such as this. However, it is extremely slow going because I am also in school. You may remember from the beginning of this tome that I did not have a degree when I started. Well, I started school about two and a half years ago, and I'm now about halfway toward a bachelor's degree in physics.* I am going full time, and the courses aren't easy. In order to maintain a good GPA (currently 3.93), I easily spend 30-40 hours per week on school. This is in addition to work, which often demands 50- or 60-hour weeks during crunch times. Yes, I have some weeks where I spend 90+ hours on work and school (although 70 is far more frequent).

 

I also have a family, and so I find myself with precious little time for side projects. I feel like I'm in one of those "Pick Two" situations: work, school, side-projects.

 

I'm really loath to quit school. I am just over halfway through, so why quit now? A degree might not be as important now as it was in the beginning of my career, but I'm not a quitter. Plus, I'm really excited about what I'm learning in my physics and math classes. I'm also currently working on some undergrad research involving crystals and electric field gradients, which I find really interesting and fun.

 

I am also loath to quit my job. I can't afford it, first of all. I also don't want the gap in my work history. Plus, I really like it where I work. These complaints aside, it's a great place to work, with nice people. They've been kind enough to take a chance on me, back when my graphics programming skills were unproven, and they have been very flexible with scheduling while I go to school. I feel that I owe them some loyalty.

 

But I also feel that, if I spend a year or two more in this "Graphics Programming Kiddie Pool", it's going to severely stunt my career. People are going to start wondering why I spent 6 years as a graphics programmer and haven't even implemented a good tone mapping operator before. 

 

What should I do?

 

 

* Why physics? I would probably learn a lot from a CS degree, but I thought something cross-disciplinary would be more fun. It's not as though graphics is completely without a basis in physics, and most gaming and simulation companies seem to value knowledge of physics. So, why not?





