CDProp

Member Since 07 Mar 2007

#5160846 Just a couple of Data-Oriented Design questions.

Posted by CDProp on 16 June 2014 - 08:13 AM

I'm not trying to use it for every piece of code. I'm trying to use it to speed up entity/component updates. I must say, what I'm asking seems like a totally reasonable request. That is, help me understand how data-oriented design is used in games. So far, I've gotten some good replies along with at least two lectures about how I shouldn't be using data-oriented design like a golden hammer. Is that what I'm doing? Because most of the reading I've done on the subject uses this sort of entity-component update as an example of precisely where DOD comes in handy. Is there some way in which I'm misapplying the concept? If so, it would most helpful if you would be explicit about it.


#5134174 Arithmetic vs. geometric mean avg luminance during nighttime scenes

Posted by CDProp on 24 February 2014 - 01:20 PM

Greetings, all.

 

I was wondering if we could discuss this issue a bit. For the purposes of simple exposure control, it seems common to store the log of the luminance of each pixel in your luminance texture. That way, when you sample the 1x1 mip map level and exponentiate it, you end up with the geometric mean luminance of the scene. This is done to prevent small, bright lights from dimming the scene.
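
To make the comparison concrete, here's a minimal CPU-side sketch of the two averages (a plain float array stands in for the luminance texture, and the function names are just for illustration):

[source lang="cpp"]
#include <cmath>
#include <vector>

// Geometric mean: average log(luminance), then exponentiate -- this is what the
// log/exp + 1x1 mip chain effectively computes on the GPU. A small epsilon guards log(0).
float geometricMeanLuminance(const std::vector<float>& lum)
{
    const float eps = 1e-4f;
    double sumLog = 0.0;
    for (float l : lum)
        sumLog += std::log(l + eps);
    return static_cast<float>(std::exp(sumLog / lum.size()));
}

// Arithmetic mean: a plain average. A few very bright pixels pull this up much
// harder than they pull up the geometric mean.
float arithmeticMeanLuminance(const std::vector<float>& lum)
{
    double sum = 0.0;
    for (float l : lum)
        sum += l;
    return static_cast<float>(sum / lum.size());
}
[/source]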

 

I find that this works really well, but perhaps a little too well. I am using a 16F texture, and so the brightest pixel I can have is 65504. If I have a really dark nighttime scene, such that things look barely visible without any lights, and then I point a bright spotlight at the player (just a disc with a 65504-unit emission), it hardly affects the exposure at all. I would expect a bright light to sort of blind the player a bit and ruin her dark adaptation, so that the rest of the scene looks really dark. I have found that the light needs to cover nearly 20% of the pixels on the screen before it begins to have this effect.

 

So I switched over to using an arithmetic mean (just got rid of the log/exp on each end) and now it works more like what I would expect.

 

If you were in my shoes, would you switch to an arithmetic mean, or would you try to find exposure settings that will work better with a geometric mean? 

FWIW, my exposure-control/calibration function is just hdrColor*key/avgLum, where key is an artist-chosen float value, and avgLum is the mean luminance (float). After that, I'm tone mapping with Hable's filmic curve. If you have any suggestions on how to improve it, that would be most helpful. I suppose I could also experiment with histograms and so forth, but I'm not sure if they're meant to solve this particular problem.
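
For concreteness, here's roughly what that calibration plus tone-mapping step looks like in isolation (a CPU-side, per-channel sketch; the Hable constants below are the commonly published Uncharted 2 values and may not exactly match what's in my shader):

[source lang="cpp"]
#include <cmath>

// Hable's filmic curve, applied per channel. A..F are the commonly published
// shoulder/linear/toe controls.
static float hable(float x)
{
    const float A = 0.15f, B = 0.50f, C = 0.10f, D = 0.20f, E = 0.02f, F = 0.30f;
    return ((x * (A * x + C * B) + D * E) / (x * (A * x + B) + D * F)) - E / F;
}

// Exposure control as described above: scale by key/avgLum, then tone map.
float exposeAndToneMap(float hdrChannel, float key, float avgLum)
{
    const float W = 11.2f;                     // linear white point
    float exposed = hdrChannel * key / avgLum; // calibration: hdrColor * key / avgLum
    return hable(exposed) / hable(W);          // normalize so 'white' maps to 1.0
}
[/source]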




#5130724 How much planning do game programmers do before writing a single line of code and...

Posted by CDProp on 11 February 2014 - 11:58 PM

I'll tell you what my experience over the years has been. When I was very green, I didn't do much planning at all. I just sat down and started writing code. If the problem was geometrical in nature (as is often the case in 3D games), then maybe I had some diagrams that I drew, but those were really only for the geometry of the problem and had nothing to do with the final code design. My code designs were awful, by the way. I remember one class that grew to 12,000 lines of code because it did everything. Obviously, this approach of coding by the seat of my pants wasn't working.

 

So, I switched philosophies. I decided that I was going to plan everything beforehand, with class diagrams and whatnot. My new motto was, "A receptionist who doesn't know C++ ought to be able to implement this from my documentation alone." I felt that I shouldn't write a line of code unless I could prove that it worked on paper. Yeah, that didn't work out too well for me, either. It's like trying to see 100 moves ahead in chess.

 

Here's the thing. A new programmer doesn't have the ability to take a complex problem, break it down in their head, and then immediately sit down and start coding -- nor do they have the ability to sit down and plan the whole thing out with UML-like diagrams. I can't speak for everybody, but I had to go through this rigmarole of trying solutions, realizing they were crappy, and doing it differently next time. For me, there was no way to short-cut that process.

 

These days, I can do a much better job of reasoning about complex problems in my head and of forming a good intuition about what shape the solution should take. I'm also better at planning complex problems on paper. So, I used to suck at both. As I gained experience, I got better at both. Go figure.

 

I always write a little something beforehand. Doesn't have to be much. At the very least, it helps to write down a set of goals and requirements so that I don't go off-track. And, of course, I'm working on a team, so it's often the case that I need to communicate my ideas to others, and that usually entails writing some documentation. Other than that, I tend to use notes and diagrams as a sort of secondary storage -- it's difficult for me to keep zillions of details in my head, so if I think of something that I don't want to lose, I write it down. That about sums up the balance I've found for myself.




#5086144 Deferred Shading lighting stage

Posted by CDProp on 15 August 2013 - 09:31 AM

That material class probably covers all that you need at this stage in terms of material options. However, you might find it useful to have separate shaders to cover the cases where a) you have untextured geometry, b) you have geometry with a diffuse texture, but no specular or normal map, c) you have a diffuse texture and a normal map, but no specular, d) etc. If you handle all of these cases with the same shader, you end up sampling textures that aren't there (or else introducing conditional branching into your shader code). Of course, if everything in your game has all of these textures, then it isn't a problem. Unfortunately, I am not that lucky because I have to render some legacy models, some of which have untextured geometry.
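
One way to handle those combinations without sampling missing textures or branching is to compile a few variants of the same shader source with different #defines. A rough sketch (the struct and function names here are made up for illustration):

[source lang="cpp"]
#include <string>

// Hypothetical material description -- just the flags that matter for picking a variant.
struct MaterialDesc
{
    bool hasDiffuseMap;
    bool hasNormalMap;
    bool hasSpecularMap;
};

// Build a preamble of #defines that gets prepended to the shared fragment shader
// source before compilation. Each unique preamble yields one compiled variant,
// so untextured geometry never pays for maps it doesn't have.
std::string buildShaderDefines(const MaterialDesc& m)
{
    std::string defines = "#version 120\n";
    if (m.hasDiffuseMap)  defines += "#define HAS_DIFFUSE_MAP\n";
    if (m.hasNormalMap)   defines += "#define HAS_NORMAL_MAP\n";
    if (m.hasSpecularMap) defines += "#define HAS_SPECULAR_MAP\n";
    return defines; // compile (defines + common fragment source) and cache per combination
}
[/source]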

 

As you start working with more advanced materials, you may find that your shader inputs grow in number and become more specialized, and so the number of material shaders you use will grow as well.




#5086126 Deferred Shading lighting stage

Posted by CDProp on 15 August 2013 - 08:35 AM

What I would recommend doing is outputting your positions and normals in view space. If you have any tangent-space normals, transform them in your material shader before outputting them to your g-buffer. That way, you don't need to store tangents or bitangents. I do think it would be a good idea to output specular and ambient properties. If you find that memory bandwidth becomes a problem, then there are some optimizations you could try. For instance, you could reconstruct the position from the depth buffer, thus getting rid of the position texture in your g-buffer. You could also store just two components of each normal (say, x and y) and then use math to reconstruct the third in your light shaders. Even though these reconstructions take time, it's often worth it because of the memory bandwidth savings. Also, if you haven't already done so, you can try using a 16-bit half-float format, instead of a full 32-bit floating point format.
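
To illustrate those two reconstructions, here's the math written out in C++ (the GLSL version is the same arithmetic). tanHalfFovX/Y are assumed camera parameters, and the normal trick assumes view-space normals face the camera (z >= 0), which isn't strictly true at grazing angles:

[source lang="cpp"]
#include <algorithm>
#include <cmath>

struct Vec3 { float x, y, z; };

// Reconstruct a view-space position from the pixel's NDC coordinates and its
// linear view depth. The camera looks down -Z (standard OpenGL view space).
Vec3 reconstructViewPos(float ndcX, float ndcY, float linearViewDepth,
                        float tanHalfFovX, float tanHalfFovY)
{
    Vec3 p;
    p.x = ndcX * tanHalfFovX * linearViewDepth;
    p.y = ndcY * tanHalfFovY * linearViewDepth;
    p.z = -linearViewDepth;
    return p;
}

// Rebuild the z component of a unit-length view-space normal from its stored
// x and y. Assumes z >= 0, which can break down at grazing angles -- hence the
// fancier normal encodings people sometimes use instead.
Vec3 reconstructNormal(float nx, float ny)
{
    Vec3 n;
    n.x = nx;
    n.y = ny;
    n.z = std::sqrt(std::max(0.0f, 1.0f - nx * nx - ny * ny));
    return n;
}
[/source]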




#5085850 Deferred Shading lighting stage

Posted by CDProp on 14 August 2013 - 09:58 AM

That seems more or less correct to me. At some point you'll want to implement some sort of light culling so that you're not shading every pixel with every light. But yeah, typically you'll have one shader for every material type, and one shader for every light type (directional, point, ambient, etc.). In your first pass, you bind the G-buffer as your render target. You render the scene using the material shaders, which output normals, diffuse, etc., to their respective textures in the G-buffer. In the second stage, you go one light at a time, and you render a full-screen quad using the correct light shader for each light. The light shader samples from each of the G-buffer textures, and applies the correct lighting equations to get the final shading value for that light. Make sure to additively blend the light fragments at this stage.
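
A bare-bones sketch of that second pass in OpenGL terms (bindGBufferTextures, bindLightShaderFor, and drawFullScreenQuad are placeholders for whatever your engine provides):

[source lang="cpp"]
#include <GL/gl.h>
#include <vector>

// Placeholder types/functions standing in for engine code.
struct Light { /* type, position, color, etc. */ };
void bindGBufferTextures();            // bind normal/diffuse/etc. G-buffer textures
void bindLightShaderFor(const Light&); // pick directional/point/ambient shader, upload uniforms
void drawFullScreenQuad();

void lightingPass(const std::vector<Light>& lights)
{
    bindGBufferTextures();

    glDisable(GL_DEPTH_TEST);    // full-screen quads don't need depth testing
    glEnable(GL_BLEND);
    glBlendFunc(GL_ONE, GL_ONE); // additive: each light adds its contribution

    for (const Light& light : lights)
    {
        bindLightShaderFor(light);
        drawFullScreenQuad();
    }

    glDisable(GL_BLEND);
    glEnable(GL_DEPTH_TEST);
}
[/source]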

 

What a lot of people do, in order to cut down on the number of pixels being shaded with a given light, is to have some geometric representation for the light (a sphere for a point light, for example). Before rendering the full-screen quad with the point light shader, they'll stencil the sphere in so that they can be sure to only shade pixels within that sphere. Some even go as far as to treat the sphere almost like a shadow volume, stenciling in only the portions where scene geometry intersects the sphere. This gives you near pixel-perfect accuracy, but it might be overkill. I've been reading lately that some people just approximate the range of the point light using a billboarded quad, because the overhead in rendering a sphere into the stencil buffer (let alone doing the shadow volume thing) is greater than the time spent unnecessarily shading pixels inside the quad that the light can't reach.

 

Of course, a real point light can reach anywhere. If you were to use a sphere to approximate the extent of the point light, you'd have to use a somewhat phony falloff function so that the light attenuates to zero at the edge of the sphere.
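
For example, one common "phony" falloff is to scale an inverse-square-ish term by a window function that reaches exactly zero at the chosen radius. This is just one possible sketch, not a standard formula:

[source lang="cpp"]
#include <algorithm>

// Attenuation that behaves roughly like 1/d^2 up close but is forced to hit
// exactly zero at 'radius', so the light volume can be a finite sphere.
float windowedFalloff(float distance, float radius)
{
    float d = std::min(distance, radius);
    float x = d / radius;                           // 0 at the light, 1 at the edge
    float window = (1.0f - x * x) * (1.0f - x * x); // smooth fade to zero at the edge
    return window / (1.0f + d * d);                 // the +1 avoids blowing up at d = 0
}
[/source]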




#5085730 Radiometry and BRDFs

Posted by CDProp on 13 August 2013 - 08:55 PM

Thanks, Hodgman. I have some thoughts on that idea, although I may have it wrong. I agree that the pixel will subtend a solid angle from the point of view of the surface point that I'm shading. However, I am not certain that it matters in this case. Because we are rendering a perfectly focused image, I believe that each point on our pixel "retina" can only receive light from a certain incoming direction. Here's what I mean (I apologize if a lot of this is rudimentary and/or wrong, but it helps me to go through it all).

 

If you have a "retina" of pixels, but no focusing device, then a light will hit every point on the retina and so each pixel will be brightened:

 

[Image: light from the source reaching every point on an unfocused pixel "retina"]

If you add a focusing device, like a pinhole lens, then you block all rays except those that can make it through the aperture:

 

[Image: pinhole aperture blocking all rays except those that pass through it]

So now, only one pixel sees the light, and so the light shows up as it should: as a point. We now have a focused image, albeit an inverted one. If you widen the aperture and put a lens in there, you'll catch more rays, but they'll all be focused back on that same pixel:

 

[Image: wider aperture with a lens focusing the extra rays back onto the same pixel]

And so I might as well return to the pinhole case, since it is simpler to diagram. I believe that having a wider aperture/lens setup adds some depth of field complications to the focus, but for all intents and purposes here, it can be said (I think) that a focusing device has the effect of making it so that each pixel (and indeed, each sub-pixel point) on the retina can only receive light from one direction:

 

[Image: the single direction (orange ray) from which one pixel can receive light]

The orange ray shows what direction the pixel in question is able to "see", and any surface that intersects this ray will be in the pixel's "line of sight." Each pixel has its own such line of sight:

 

[Image: each pixel's individual line of sight through the aperture]

With rasterization, we have things sort of flipped around. The aperture is behind the retina, but the effect is more or less the same. If I put the retina on the other side of the aperture, at an equal distance, I get this:

 

[Image: the retina mirrored to the other side of the aperture, i.e. the near plane in front of the eye position]

Now we can see the aperture as the "eye position", the retina as the near plane, etc. The orange rays are now just view vectors, and they are the only directions we care about for each pixel. The resulting image is the same as before, except it has the added bonus of not being inverted (like what a real lens would do).
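
To make that concrete, here's a small sketch of how I'd compute the single view direction associated with a pixel in this flipped-around setup (fovY and aspect are assumed camera parameters):

[source lang="cpp"]
#include <cmath>

struct Vec3 { float x, y, z; };

// View-space direction of the ray through pixel (px, py) of a width x height image,
// for a pinhole camera looking down -Z with vertical field of view fovY (radians).
Vec3 pixelViewDir(int px, int py, int width, int height, float fovY, float aspect)
{
    // Pixel center -> NDC in [-1, 1]
    float ndcX = ((px + 0.5f) / width) * 2.0f - 1.0f;
    float ndcY = 1.0f - ((py + 0.5f) / height) * 2.0f;

    float tanHalfFovY = std::tan(fovY * 0.5f);
    Vec3 dir;
    dir.x = ndcX * tanHalfFovY * aspect;
    dir.y = ndcY * tanHalfFovY;
    dir.z = -1.0f;                      // toward the scene

    float len = std::sqrt(dir.x * dir.x + dir.y * dir.y + dir.z * dir.z);
    dir.x /= len; dir.y /= len; dir.z /= len;
    return dir;
}
[/source]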

 

So with that said, here is what happens if I redraw your diagram, with 5 sub-pixel view vectors going through a single pixel:

 

[Image: five sub-pixel view vectors passing through a single pixel and covering the light blue surface]

So, the single pixel ends up covering the entire light blue surface. You can see that the view vectors form a sort of frustum when confined to that pixel.

 

I've also added a green point light, with 5 rays indicating the range of rays that will hit that light blue patch. All 5 of those green rays will end up hitting the retina somewhere, but only one of those rays comes in collinearly with one of the orange "valid directions".




#5085582 Radiometry and BRDFs

Posted by CDProp on 13 August 2013 - 12:31 PM

Ugh, so I forgot to ask my BRDF-related questions.

 

What I'm really trying to figure out here is how I would create a BRDF that represents a perfectly reflective surface, i.e. a surface where there is zero microgeometry, zero absorption, zero Fresnel, and 100% reflectance, such that each ray of light is reflected perfectly at the same angle as the incident angle. 

 

[Image: a perfect mirror (blue) reflecting a ray from a point light (orange) into a pixel (green segment), with the eye point as a green dot]

Here is a situation where the perfect mirror (blue) is reflecting light from a point light (orange) into a pixel (green segment, with the eye point being the green dot). Because we're dealing with perspective projection, which attempts to simulate a sort of lens, we only care about light coming in along the view vector. The orange ray is therefore the only ray we care about. I'm beginning to think, as I type this, that my difficulty in grasping this problem has something to do with the unrealistic nature of point lights that I mentioned earlier, and perhaps also the unrealistic nature of a perfect reflector. But I digress.

 

The problem I'm trying to solve is that I have this E_L·cosθ term, which is the irradiance at the point on the surface where the ray bounces, and this makes perfect sense to me. However, now I need to create a BRDF that will reflect all of that light in one direction, and return zero in all other directions. I know that this function can't return 1 in the required direction, because the units would be wrong. The E_L·cosθ term is in W·m⁻², and the BRDF needs to return something in units of sr⁻¹. If I return 1, then the end result is that the calculated radiance has the same magnitude as the irradiance, even though the units are different, which seems too easy. Can anyone help me figure out the right way to think about this problem so that I can figure out what this BRDF should look like?

 

Edit: I also mispelled BDRF multiple times. =P I can't fix it in the title, I don't think.

Edit2: No I didn't.




#5085573 Radiometry and BRDFs

Posted by CDProp on 13 August 2013 - 12:06 PM

Greetings. I've been reading a lot about radiometry lately, and I was wondering if any of you would be willing to look this over and see if I have this right. A lot of the materials I've been reading have explained things in a way that is a little difficult for me to understand, and so I've tried to reformulate the explanations in terms that are a little easier for me to comprehend. I'd like to know if this is a valid way of thinking about it.

 

So, radiant energy is simply a matter of the number of photons involved, times their respective frequencies (times Planck's constant). SI Units: Joules.

 

Radiant flux is a rate. It's the amount of radiant energy per unit of time. SI Units: Joules per Second, a.k.a. Watts. If you have a function that represents the radiant flux coming out of a light source (which may vary, like a variable star), and you integrate it with respect to time, you'll get the total radiant energy emitted over that time. 

 

The next three quantities are densities. A brief aside about densities. Let's think about mass density, which is commonly talked about in multivariable calculus courses, as well as in many freshman calculus-based physics courses. You have a block of some solid substance. Let's say that the substance is heterogeneous in the sense that its density varies spatially. One might be tempted to ask, "What is the mass of the block at the point (x,y,z)?" However, this question would be nonsensical, because a point has no volume, and therefore can have no mass. One can answer the question, "What is the mass density at this point?" and get a meaningful answer. Then, if you wanted to know the mass of some volume around that point, you could multiply the density times the volume (if the volume is some dV, small enough that the density doesn't change throughout it), or else integrate the density function over the volume that you care about.

 

So, in terms of radiometry, the three density quantities commonly spoken-of are irradiance, radiant intensity, and radiance.

 

Irradiance is the power density with respect to area. The SI units are W·m⁻². So, if you have some 2-dimensional surface that is receiving light from a light source, the irradiance would be a 2-dimensional function that maps the two degrees of freedom on that surface (x,y) to the density of radiant flux received at that point on the surface. Exitance is similar to irradiance, with the exact same units, but describes how much light is leaving a surface (either because the surface emits light, or because it reflects it). As with all densities, it doesn't make sense to ask, "How much power is being emitted from point (x,y) on this surface?" However, you can ask, "What is the power density at this point?", and if you want to know how much power is emitted from some area around that point, you have to multiply by some dA (or integrate, if necessary).

 

Radiant intensity is power density with respect to solid angle. The SI units are W·sr⁻¹. Unlike irradiance, which gives you a density of power received at a certain point, radiant intensity tells you the power density being emitted in a certain direction. So, a point light (for example) typically emits light evenly in all directions. If the point light emits a radiant flux of 100 W, then its radiant intensity in all directions is about 8 W·sr⁻¹. If it's not an ideal point light, then its radiant intensity might vary in each direction. However, if you integrate the radiant intensity over the entire sphere, then you will get back the original radiant flux of 100 W. Again, it doesn't make sense to ask, "How much power is being emitted in this direction?", but you can ask, "What is the power density in this direction?" and if you want to know how much power is being emitted in a small range of directions (solid angle) around that direction, then you can integrate the radiant intensity function over that solid angle.
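
To spell out that arithmetic: a full sphere subtends 4π steradians, so I = Φ / (4π sr) = 100 W / (4π sr) ≈ 7.96 W·sr⁻¹, which is the "about 8" above.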

 

Radiance is the power density with respect to both area and solid angle. The SI units are W·m⁻²·sr⁻¹. The reason you need radiance is for the following situation. Suppose you have an area light source. The exitance of this light source may vary spatially. Also, the light source may scatter the light in all directions, but it might not do so evenly, so it varies radially as well (is that the right word here?). So, if you want to know the power density of the light being emitted from point (x,y) on the surface of the area light, specifically in the direction (θ,ϕ), then you need a density function that takes all four variables into account. The end result is a density function that varies along (x,y,θ,ϕ). These four coordinates define a ray starting at (x,y) and pointing in the direction (θ,ϕ). Along this ray, the radiance does not change. So, it's the power (flux) density of not just a point, and not just a direction, but a ray (both a point and a direction). Just as with the other densities, it makes no sense to ask, "What is the power (flux) being emitted along this ray?" It only makes sense to ask, "What is the power density of this ray?" And since a ray has both a location and direction, the density we care about is radiance.
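
Written out as a density (with the caveat that the standard definition uses projected area, a detail I glossed over above): L = d²Φ / (dA·cosθ·dω), and integrating L·cosθ over both the area and the hemisphere of directions recovers the total flux Φ.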

 

The directional and point lights that we are used to using are physically-impossible lights for which it is sometimes difficult to discuss some of these quantities.

 

For a point light, it's meaningless to speak of exitance, because a point light has no area. Or perhaps it's more correct to think of the exitance function of a point light as a sort of Dirac delta function, with a value of infinity at the position of the light, and zero everywhere else, but which integrates to a finite non-zero value (whatever the radiant flux is) over R³. In this sense, you could calculate the radiance of some ray emanating from the point light, but I'm thinking it's more useful to just calculate the radiant intensity of the light in the direction that you care about, and be done with it.

 

For a directional light, it almost seems like an inverse situation. It's awkward to talk about radiant intensity because it would essentially be like a delta function, which is infinite in one direction, and zero everywhere else, but which integrates to some finite non-zero value, the radiant flux. Even the concept of radiant flux seems iffy, though, because how much power does a directional light emit? It's essentially constant over infinite space. It's easier to talk about the exitance of a directional light, though.

 

In any case, even with these non-realistic lights, it's easy to talk about the irradiance, intensity, and radiance of surfaces that receive light from these sources, which is what we typically care about.

 

How did I do?




#5082661 I feel like my graphics programming career is stagnating. Is it my fault? Wha...

Posted by CDProp on 02 August 2013 - 10:14 PM

I wanted to try to keep this as succinct as possible, because I'm definitely sensitive to the fact that no one wants to read paragraphs upon paragraphs of some schmoe's biography. However, I can't decide what else to cut. So, I apologize for the length, but if you are a professional game programmer (especially a graphics or engine programmer), I would be much obliged if you could read this monstrosity and give me whatever advice you can give. I desperately need direction!

 

I am one of the countless programmers who got their foot in the door without a degree. In 2006, I got a job at a local game developer as a sort of build engineer. By the end of my first project, I had automated most of my job, and so I had the programmers on my team give me some basic programming tasks to work on. I eventually became a full-time programmer. In 2008, the company went bust, and so I lost my job.

 

It turns out that two and a half years of game programming experience at a defunct (and honestly, crappy) game development studio does not make for a great resume, especially if you don't have a degree. So, I took a low-paying job at a very tiny web development company (think "working-in-some-dude's-basement" tiny). I had to learn a completely new skill set. There, I did both back end (Perl, MySQL) and front-end (HTML, JQuery w/ AJAX, CSS) stuff. Unfortunately, that company also went bust 5 months later.

 

Shortly after that, in late 2009, I found another job as a graphics programmer for a simulation company. This is my current job, and I've been here for almost 4 years.

 

So that about brings you current on my job history.

 

As you can probably tell, at the time that I was hired for my current position, they weren't looking for a graphics programming guru (if they had been, they would not have hired someone with so little experience!). They were just looking for a good, smart programmer who could take ownership of the visual side of things. During the interview process, I programmed a very simple DX9 demo involving a pool of water (environment mapped reflections, refractions, Fresnel), and an archway that cast a planar shadow. I modeled everything myself in Blender and exported it to a custom (albeit simplistic) file format using a script I wrote in Python. The company uses OpenGL, which I had no experience with, but I guess they decided that I knew enough about graphics to take ownership of their visual side of things.

 

The problem I am having with this job is that they have decided that their graphics fidelity needs are very modest, and so they've made realistic graphics programming an extremely low priority. Most of the time, they fill my plate with tasks that are only peripherally-related to graphics programming. For instance, they began work on a new level editor, and they needed me to code all of the 3D GUI stuff for it (dropping objects into the scene, picking, moving, rotating them, etc.). It turns out that this is 80% of the programming needed on it, and I quickly became THE owner of the level editor, responsible for the other 20% as well. For the first two years of this job, about 90% of what I did was related to this level editor. I essentially felt like a tools programmer, not a graphics programmer.

 

Edit: There are many other examples of non-graphics stuff that they've had me work on. but for the sake of brevity, I won't list them!

 

 

Meanwhile, I'm trying to get them excited about updating their graphics fidelity. When I first got there, their graphics already looked old (think original Half-Life, but with higher-res textures). I've done a few things over the years to update the graphics (added some normal mapping, gloss mapping, splatting to reduce the tiled look in grass and dirt textures -- all stuff that has been commonplace for more than a decade). When you combine this with the third-party libraries that I've integrated for ocean and sky rendering, the graphics look quite a bit nicer than they once did. But they still look horrendously out-dated. 

 

I thought I had them convinced, at one point, that we really needed to change things. They gave me the go-ahead to re-architect the visual software from the ground-up so that it would be more flexible. The old visual software was so static; it was full of hard-coded behavior and ad-hoc hacks, and adding new stuff to it was very cumbersome. So, I took several months to reformulate things, and we just shipped our first project on the new architecture. Yay! I'm ready to add some new awesome effects.

 

But now they are loading my plate with more menial tasks that, at this point, I feel would be better handled by a junior programmer under my direction. We still have a long way to go on the graphics front. I haven't even been able to sell them on the importance of HDR yet, for example. We're still using simple single-buffer shadow mapping, and so we don't have full-scene shadows. I try to stay abreast of new developments, and I keep reading about things that I really want to do -- tiled light culling for nighttime simulations with lots of headlights and other work lights, image-based reflections to make the rain effects look more realistic, etc. -- that I think have practical value for the company, but I'm having a tough time selling them on it.

 

I really do understand, to some extent. The items they are having me work on are things that the customers are asking for, and there really isn't anyone else in the company who works in this area. However, I don't know if this is working for me anymore. I need a steep learning curve that I can climb. I feel like I'm on a plateau.

 

So I know what you're probably thinking. "Why not work on these projects in your own time, and then present them to the company when you have a working demo?"

 

And the truth is, I do work on side-projects such as this. However, it is extremely slow going because I am also in school. You may remember from the beginning of this tome that I did not have a degree when I started. Well, I started school about two and a half years ago, and I'm now about halfway toward a bachelor's degree in physics.* I am going full time, and the courses aren't easy. In order to maintain a good GPA (currently 3.93), I easily spend 30-40 hours per week on school. This is in addition to work, which often demands 50- or 60-hour weeks during crunch times. Yes, I have some weeks where I spend 90+ hours on work and school (although 70 is far more frequent).

 

I also have a family, and so I find myself with precious little time for side projects. I feel like I'm in one of those "Pick Two" situations: work, school, side-projects.

 

I'm really loath to quit school. I am just over halfway through, so why quit now? A degree might not be as important now as it was in the beginning of my career, but I'm not a quitter. Plus, I'm really excited about what I'm learning in my physics and math classes. I'm also currently working on some undergrad research involving crystals and electric field gradients, which I find really interesting and fun.

 

I am also loath to quit my job. I can't afford it, first of all. I also don't want the gap in my work history. Plus, I really like it where I work. These complaints aside, it's a great place to work, with nice people. They've been kind enough to take a chance on me, back when my graphics programming skills were unproven, and they have been very flexible with scheduling while I go to school. I feel that I owe them some loyalty.

 

But I also feel that, if I spend a year or two more in this "Graphics Programming Kiddie Pool", it's going to severely stunt my career. People are going to start wondering why I spent 6 years as a graphics programmer and haven't even implemented a good tone mapping operator before. 

 

What should I do?

 

 

* Why physics? I would probably learn a lot from a CS degree, but I thought something cross-disciplinary would be more fun. It's not as though graphics is completely without a basis in physics, and most gaming and simulation companies seem to value knowledge in physics. So, why not?




#5075083 Yet another Deferred Shading / Anti-aliasing discussion...

Posted by CDProp on 03 July 2013 - 12:59 PM

Hi. My apologies if this discussion has been played out already. This topic seems to come up a lot, but I did a quick search and did not quite find the information I was looking for. I'm interested in knowing what is considered the best practice these days, with respect to deferred rendering and anti-aliasing. These are the options, as I understand them:

 

Use some post-processed blur like FXAA.

I've tried enabling NVidia's built-in FXAA support, but the results were not nearly acceptable. Maybe there is another technique that can do a better job?

 

Use a multi-sampled MRT, and then handle your own MSAA resolve.

I've never done this before, and I'm anxious to try it for the sake of learning how, but it is difficult for me to understand how this is much better than super-sampling. If I understand MSAA correctly, the memory requirements are the same as for super-sampling. The only difference is that your shader is called fewer times. However, with deferred shading, this really only seems to help save a few material shader fragments, which don't seem very expensive in the first place. Unless I'm missing something, you still have to do your lighting calculations once per sample, even if all of the samples have the same exact data in them. Are the material shader savings (meager, I'm guessing) really worth all of the hassle?

 

Use Deferred Lighting instead of Deferred Shading.

You'll still have aliased lighting, though, and it comes at the expense of an extra pass (albeit depth-only, if I understand the technique correctly). Is anybody taking this option these days?

 

Use TXAA

NVidia is touting some TXAA technique on their website, although details seem slim. It seems to combine 2X MSAA with some sort of post-process technique. Judging from their videos, the results look quite acceptable, unlike FXAA. I'm guessing that the 2X MSAA would be handled using your own custom MSAA resolve, as described above, but I don't know what processing happens after that.

 

These all seem like valid options to try, although none of them seem to be from the proverbial Book. It seems to me, though, that forward rendering is a thing of the past, and I would love to be able to fill my scene with lights. I could try implementing all of these techniques as an experiment, but since they each come with a major time investment and learning curve, I was hoping that someone could help point a lost soul in the right direction.

 

Bonus questions: Is there a generally agreed-upon way to lay out your G-Buffer? I'd like to use this in conjunction with HDR, and so memory/bandwidth could start to become a problem, I would imagine. Is it still best practice to try to reconstruct position from depth? Are half-float textures typically used? Are any of the material properties packed or compressed in a particular way? Are normals stored as x/y coordinates, with the z-coord calculated in the shader?

 

I'm using OpenGL, if it matters.




#4980665 Graphics programming book recommendations

Posted by CDProp on 16 September 2012 - 11:46 AM

The thing about Real-Time Rendering is that it's a great overview of what's out there, and includes a lot of background information on topics like Math and Radiometry and Colorimetry, but for actual implementation details of the various techniques, you usually have to look elsewhere. Luckily, the book is well-cited and almost everything in its bibliography can be found for free online somewhere. As such, I recommend the book highly, but don't expect it to hold your hand through every implementation step.


#4978985 OO Design and avoiding dynamic casts

Posted by CDProp on 11 September 2012 - 12:19 PM

Greetings, all.

I've often found it difficult to avoid dynamic casts in my designs, and I was wondering if we could brainstorm what strategies one could use to minimize their use. I would find that tremendously productive.

One instance that comes up a lot is with the Abstract Factory pattern and its variants. An often-used illustration of this pattern is found on Wikipedia:

http://en.wikipedia....act_factory.svg

Now, I would make this a little more complex by saying that the GUIFactory interface would also have methods for creating other controls, like createComboBox and createList. It may also have something like createWindow. And the Window interface might have an addControl method that can be used to add buttons/comboboxes/lists to the window.

So, the trickery comes when you're writing your concrete implementations. Let's imagine an imaginary GUI framework called Foo32 that you want to target. So you write FooFactory and concrete product classes like FooWindow, FooButton, FooComboBox, FooList, etc.

So, the class hierarchy would look something like this:

[Image: class diagram showing FooWindow, FooButton, FooComboBox, and FooList deriving from the abstract Window, Button, ComboBox, and List interfaces]

And the factory would look something like this:

[source lang="cpp"]class FooFactory : public GUIFactory{public: FooWindow* createWindow(); FooButton* createButton(); FooList* createList(); FooComboBox* createComboBox();};[/source]

And here are some possible class definitions for FooButton and FooWindow:

[source lang="cpp"]class FooButton : public Button{ Foo::Button *m_button; // The actual button object.public: FooButton(); // Button interface void paint();};class FooWindow: public Window{ Foo::Window m_window; // The actual button object.public: FooButton(); // Window interface void paint(); void addControl(Control *ctrl) { m_window.addControl(ctrl); // Error! Control is not a Foo::Control }};[/source]

So the question is, how to solve that error. The naive approach would be to just test and see what concrete class we're dealing with.

[source lang="cpp"] void addControl(Control *ctrl) { FooButton *fooButton = dynamic_cast<FooButton*>(ctrl); if(fooButton != NULL) { m_window.addControl(fooButton->getFooButton()); // getFooButton returns the Foo::Button* } FooComboBox *fooCB = dynamic_cast<FooComboBox*>(ctrl); if(fooCB != NULL) { m_window.addControl(fooCB->getFooComboBox()); // getFooButton returns the Foo::ComboBox* } /* and so on... */ }[/source]

Ouch, though. I suppose I could use multiple inheritance, i.e. FooButton would inherit from both Button and Foo::Button. Since Foo::Button is a Foo::Control, then the method implementation is simplified:

[source lang="cpp"] void addControl(Control *ctrl) { Foo::Control *fooCtrl = dynamic_cast<Foo::Control*>(ctrl); if(fooCtrl!= NULL) { m_window.addControl(fooCtrl); } }[/source]

Problem is, this uses multiple inheritance and we still have a dynamic_cast. I've spent enough time debugging dynamic_cast to know that it is not a quick or trivial thing (with MSVC, it does multiple strcmps). Perhaps that's not a problem for setting up a GUI (most of this is done during program initialization and GUIs typically spend most of their time waiting for user input anyway). But what if the above pattern is needed for something other than a GUI -- something for a real-time video game that requires dozens or hundreds of Control-like objects to be added and removed from a Window-like object each frame?

I am currently wrestling with a situation just like that -- I have a "View" (analogous to Window) that needs to have "Scene Objects" (analogous to Control) added to it. Examples of Scene Object types are Model, Particle System, Hud Element, etc. I feel that it is necessary to use something like an Abstract Factory because of a high possibility that I may want to change what graphics framework I'm using someday (I'm currently using OpenSceneGraph).

It's often said that, if you are using dynamic_cast a lot, you're probably doing something wrong. This makes me feel uneasy about my design.

It's also often said that your classes should be open for extension and closed for modification. I find it really difficult to close classes like GUIFactory, because one is always thinking of new types of controls that need to be created. Maybe a createTextBox method needs to be added at some point -- is it really desirable to subclass GUIFactory just because you want to call it 'closed'?

I'm getting better and better at OO design, but I'm still fledgling in a lot of ways, and as you can see, I could use some advice. From where I sit, there are two basic possibilities:

1) Don't try to abstract the graphics framework in something like a game, where performance is critical.
2) Minimize how often the code crosses your abstraction layer, i.e...

[source lang="cpp"]// This code can't know we're using Foo32. It just knows about the abstract interfaces.// The typical way to do it...Window w = guiFactory->createWindow(); // returns a FooWindow, but this code doesn't know or careButton b = guiFactory->createButton(); // returns a FooButton, but this code doesn't know or carew->addControl(b); // this will do a dynamic_cast (slow)// Here is an alternate possibility and gets around the above issues...int windowId = guiFactory->createWindow(); // returns the id of the created window, but the factory keeps the window for itselfint buttonId = guiFactory->createButton(); // as above, only the id of the button is returnedguiFactory->addControlToWindow(windowId, buttonId);[/source]

Since FooFactory is owning the objects it creates in that second solution, it has concrete references to them, and thus no casting is needed. This has a very messy, non-OOP feel to it, though. Almost as if I'm working with OpenGL.


#4976243 Mind if we discuss VBO streaming?

Posted by CDProp on 03 September 2012 - 06:22 PM

Greetings.

I've decided to do some experiments with different methods of VBO streaming in order to see what kind of performance I can get. Most of the ideas I used came from this discussion, but I'm a little bit reluctant to necro that thread since it is so old.

Basically, I don't have any goal in mind except to maximize the amount of verts that I can render for a particle system (or something else that requires geometry that is updated every frame). So, I've set up a test project that generates a bunch of particles and renders them as falling snow. The verts contain positions only (no normals or texture coordinates or anything like that). The primitive type is GL_POINTS and texturing and lighting are disabled, and there is absolutely nothing else in the scene. I'm trying to minimize all the variables so I can concentrate specifically on the performance issues inherent in moving a bunch of vertex data from the CPU to the GPU. Here is a screenshot:

[Image: screenshot of the test scene rendering falling snow as GL_POINTS]

I've decided to try three different approaches and compare them:

1. Arrays

With this method, I use straight vertex arrays without any sort of VBO. This is my baseline. It should be the slowest because it is not asynchronous.

2. VBO w/ Orphan

With this method, I use a VBO. However, as an added twist, I call glBufferData each frame, passing NULL as the data. That way, it will allocate a new buffer for me, thus orphaning the old buffer. I then use glMapBuffer to set the data.

3. VBO w/ glMapBufferRange

This is similar to method 2 above, except that when I call glBufferData, I allocate a chunk that is much larger than I need (say, 10X larger). So, that basically gives me 10 segments that I can treat like 10 separate buffers. I use glMapBufferRange to load data into the first segment, and then I use glVertexPointer and glDrawArrays to draw that segment. On the next frame, I use glMapBufferRange to load data into the second segment, and then I draw that segment. Each time I call glMapBufferRange, I pass in GL_MAP_UNSYNCHRONIZED_BIT so that it won't block. Yet, I can be sure there won't be any read/write collisions because each frame uses a different part of the buffer than the last. Once all 10 segments are used up, I call glBufferData again (with NULL as the data ptr) to orphan the old buffer and start a new one.
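
Roughly, the Method 3 path looks like this (a condensed sketch, not the exact code I'm timing; error checking omitted):

[source lang="cpp"]
#include <GL/glew.h>
#include <cstring>
#include <vector>

struct Vertex { float x, y, z; };

// A buffer 10x larger than one frame's worth of data, written one segment per
// frame with an unsynchronized map, and orphaned once all segments are used.
class StreamingVBO
{
    GLuint m_vbo = 0;
    size_t m_segmentSize = 0;        // bytes per frame
    int    m_segment = 0;            // which of the segments we're on
    static const int kSegments = 10;

public:
    void init(size_t maxVertsPerFrame)
    {
        m_segmentSize = maxVertsPerFrame * sizeof(Vertex);
        glGenBuffers(1, &m_vbo);
        glBindBuffer(GL_ARRAY_BUFFER, m_vbo);
        glBufferData(GL_ARRAY_BUFFER, m_segmentSize * kSegments, NULL, GL_STREAM_DRAW);
    }

    void drawPoints(const std::vector<Vertex>& verts)
    {
        glBindBuffer(GL_ARRAY_BUFFER, m_vbo);

        if (m_segment == kSegments)
        {
            // All segments used: orphan the buffer and start over.
            glBufferData(GL_ARRAY_BUFFER, m_segmentSize * kSegments, NULL, GL_STREAM_DRAW);
            m_segment = 0;
        }

        size_t offset = m_segment * m_segmentSize;
        size_t bytes  = verts.size() * sizeof(Vertex);
        void* dst = glMapBufferRange(GL_ARRAY_BUFFER, offset, bytes,
                                     GL_MAP_WRITE_BIT | GL_MAP_UNSYNCHRONIZED_BIT);
        std::memcpy(dst, verts.data(), bytes);
        glUnmapBuffer(GL_ARRAY_BUFFER);

        glEnableClientState(GL_VERTEX_ARRAY);
        glVertexPointer(3, GL_FLOAT, sizeof(Vertex), (const void*)offset);
        glDrawArrays(GL_POINTS, 0, (GLsizei)verts.size());
        glDisableClientState(GL_VERTEX_ARRAY);

        ++m_segment;
    }
};
[/source]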

Method 3 seems to be what everyone is recommending, and Method 1 was sure to be the slowest. What I found, in fact, was that they both had the same exact performance! In fact, Method 1 was a little faster (but it was close enough to call it a tie). For about 22,000 verts, each took about 2.2 ms per frame. For 400,000 verts, each took about 50 ms per frame. There was no difference in either case. I even tried changing the buffer usage between GL_STREAM_DRAW and GL_DYNAMIC_DRAW to no avail. VSync is disabled. So, I'm pretty stumped.

I have a few questions about this:

1. Do those frame times sound reasonable? 2.2 ms for 22k verts and 50 ms for 400k verts? My computer isn't super-powerful; it's a laptop with Intel Core i5 2.30GHz, 4.00 GB RAM, GeForce GT 550M. I know it's impossible to tell me how it should perform from these stats, but maybe if I'm an order of magnitude off, it will jump out at someone.

2. It seems like whatever my bottleneck is, it isn't affected by which method I choose. Where should I look next to find the bottleneck? I've looked at it through gDEBugger, and unfortunately it won't show me the orphaned VBOs so I'm not 100% sure if I'm filling up memory. I will say this: I've timed the updating of the particles on the CPU side, and I've also timed the glMapBufferRange/memcpy/glUnmapBuffer code block, and neither takes more than a fraction of a millisecond. I was kind of surprised about that last one, actually, because I expected the memcpy to be the bottleneck.

3. Has anyone ever implemented a particle system using transform feedback? I haven't looked into this OpenGL feature at all, but it seems like it would allow one to upload the verts to the GPU once, transform them while performing the physics in the vertex shader, and then store the transformed verts in a different VBO. Then, one could use that "Result VBO" as the starting point for the next frame and just ping pong VBOs that way. I'm not entirely sure I understand the transform feedback feature yet, though.
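
For what it's worth, the ping-pong setup I'm imagining would look roughly like this. This is an untested sketch: it assumes a compatibility-profile context, a vertex shader that does the particle physics, and a feedback varying I've arbitrarily named "outPosition".

[source lang="cpp"]
#include <GL/glew.h>

// Rough sketch of the transform-feedback ping-pong idea: the vertex shader updates
// each particle and writes "outPosition"; rasterization is discarded, and the
// results land in the other VBO, which becomes next frame's source.
void updateParticles(GLuint program, GLuint srcVbo, GLuint dstVbo, GLsizei count)
{
    // One-time setup (done before linking the program):
    // const char* varyings[] = { "outPosition" };
    // glTransformFeedbackVaryings(program, 1, varyings, GL_INTERLEAVED_ATTRIBS);

    glUseProgram(program);
    glEnable(GL_RASTERIZER_DISCARD);                 // we only want the feedback data

    glBindBuffer(GL_ARRAY_BUFFER, srcVbo);           // read last frame's positions
    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(3, GL_FLOAT, 0, 0);

    glBindBufferBase(GL_TRANSFORM_FEEDBACK_BUFFER, 0, dstVbo); // write this frame's
    glBeginTransformFeedback(GL_POINTS);
    glDrawArrays(GL_POINTS, 0, count);
    glEndTransformFeedback();

    glDisableClientState(GL_VERTEX_ARRAY);
    glDisable(GL_RASTERIZER_DISCARD);
    // Next frame: swap srcVbo and dstVbo.
}
[/source]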


#4969714 Anyone want to help me choose a framework for some tools I want to write [Win...

Posted by CDProp on 14 August 2012 - 09:44 PM

It would be much appreciated.=)

Basically, the apps are just some tools to assist in game programming (editors and the like). So, there needs to be a panel that displays a 3D scene, along with a toolbar, property grid, etc.

I've written one such app before using WinForms in C#. I've found it to be a bit of a pain, because the 3D scene drawing was all done by a native C++ DLL, and I had to use P/Invoke in order to make calls into it. This means that my C++ DLL needed to have a flat API of C-linked functions (I could have omitted that step, but who wants to deal with mangling?). I could only marshal across PODs, basically, and I certainly couldn't instantiate C++ classes and manipulate the objects directly. The worst problem, though, was that any time I had an unhandled exception in the C++ DLL, I didn't get any information about it except a generic message that said "Something went kaboom" (or something equally uninformative).
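
For context, the "flat API" looks something like this on the C++ side (the names here are invented for the example):

[source lang="cpp"]
// Exported, C-linked wrapper around an internal C++ class, so that the C# side
// can P/Invoke it. Only POD-ish types (ints, floats, and raw pointers treated as
// opaque handles) ever cross the boundary.
#define ENGINE_API extern "C" __declspec(dllexport)

class Renderer; // real C++ class, never visible to the managed side

ENGINE_API Renderer* Engine_CreateRenderer(void* hwnd);
ENGINE_API void      Engine_DestroyRenderer(Renderer* r);
ENGINE_API void      Engine_Resize(Renderer* r, int width, int height);
ENGINE_API void      Engine_RenderFrame(Renderer* r, float dt);
[/source]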

I've also written one such app using WPF, but it had the same problem as above, with the additional problem that WPF doesn't give you a separate window handle for each control; instead, the entire window shares a device context, and so drawing just to one panel control (say) is difficult. The easiest solution ended up being to render to an offscreen buffer and read the pixels back, which was kinda gross.

I suppose I could use p/Invoke to make a thin wrapper around OpenGL32.dll (which I haven't tried before, but seems like it'd work), but I'd much rather just link to the same engine DLL that my game uses, so that code gets reused, instead of duplicating all of the functionality I need in C#.

So, I'm really itching to do something in native C++. What are my options here? As far as I can tell: Win32, MFC, and Qt. Out of these three, only Qt seems modern and friendly enough to use. However, it's the only one that I have no experience with.

What is the deal with the SDK? I installed it, and it's gargantuan (more than a gig), has its own IDE, and installed some drivers on my machine (one of which was unsigned). I feel like I've swallowed a cow. I don't want to use their IDE. I see there is a Visual Studio Qt addon, so I suppose that would work. What do you think? Is it worth all of this? I was hoping for just a set of libraries and headers that I can link/include in my project.

My GUI needs are pretty simple. Window, menu, toolbar, property grid, panel with 3D scene. I can roll all of this programmatically; I don't need resource files, wizards, WYSIWYGs, etc. I admit I'll miss Xaml, though. =D

