

CDProp

Member Since 07 Mar 2007
Offline Last Active Today, 11:57 AM

Topics I've Started

Gift for someone who is just starting out with programming...

27 October 2014 - 08:37 PM

Hey folks.

 

A good friend of mine is a middle school math teacher, and he is taking a fairly bold step in quitting his job so that he can go back to school full-time to get a degree in computer science. He is very interested in programming. He doesn't have a lot of programming experience yet, but I think he has the right intelligence and mentality for it and he'll do well. He's been feeling an intense amount of trepidation about this decision (which, incidentally, is too late to reverse), so I have been trying to think of a gift that might make him feel encouraged. I was thinking of maybe a nice book, like The Pragmatic Programmer, although I don't know how useful that would be to a newbie. When I first started out, I was happy just to get a crappy shell game working, and it was a while before I found the advice in the big, lofty books very helpful. Another idea I had was an Arduino, with maybe a project book to go along with it, but I am not sure how interested he is in the hardware side of things.

 

It doesn't necessarily have to be something for him to read or do, it could just be something that is cool and inspiring. I could make something for him that has a lot of programmer appeal, such as a 4-bit adder out of NAND gates and LEDs or something like that.

 

These are the sorts of ideas I've been having, but nothing yet seems like THE idea. Any ideas you could give me would be hugely appreciated. As far as price range, I was thinking something in the neighborhood of $50.


Debugging a system hang (entire OS freezes)

07 August 2014 - 01:54 PM

Hi. I have an application I'm working on that, perhaps 1 time in 500, hangs the entire system on startup. And by that, I mean that the entire OS freezes including the mouse pointer, and no input is accepted from the keyboard. I can't even imagine what I could be doing from within the confines of my program in order to cause these system-wide effects, but I assume that it's some kernel mode thing. Does anyone have any tips on debugging this sort of issue? This is a C++ OpenGL application running on Windows 7.


Just a couple of Data-Oriented Design questions.

15 June 2014 - 05:31 PM

So I'm just barely getting into this, and I think it's a really neat way to think about things. I'm going to program a simple Asteroids-like game (more of a Lunatic Fringe clone) just to get my feet wet, and I'm pretty excited about it. I just have a couple of things that I need to have clarified about doing a component-system type arrangement with Data-Oriented Design. 

 

My first question concerns how to organize the arrays of components. My first instinct is that there should just be one array of each type of component. For instance, there would only be one master array of transforms, and any game entity that owns a transform would just add their transform to this array. That way, any systems that need to affect the transforms can just iterate linearly through the whole array. Several drawbacks (none of them insurmountable) became immediately apparent:

 

  1. Entity deletion becomes more difficult. Not every entity needs every component, so the component arrays will all be of different sizes, and there is going to need to be some piece of code (in the Entity itself, perhaps) that keeps track of which components in which arrays belong to which entity. If it's keeping track via array indices, then these indices will have to be updated every time components are deleted from the arrays (assuming that components are always added to the end, and that deletion involves an actual erase-remove operation rather than just marking certain array elements as 'dead' and recycling them later).
  2. Not every system that works on transforms (say) needs to work on every transform. There may be certain systems that only need to perform operations on Asteroid transforms. This means that each component will have to identify the type of the entity to which it belongs, so that the system in question can check the type before operating on the component. Maybe this isn't such a bad thing. Conditional branches aren't incredibly expensive, and although this does mean you've potentially wasted a prefetch on an object that you don't need, doing nothing to an object is still faster than doing something.
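To make point 1 concrete, the usual way around constant index-patching is swap-and-pop deletion plus a small entity-to-index map: moving the last element into the freed slot keeps the array densely packed, and only the one moved entity's index needs fixing. A minimal sketch (all names hypothetical, not from any particular library):

```cpp
#include <cassert>
#include <cstddef>
#include <unordered_map>
#include <vector>

struct Transform { float x, y; };

// A densely packed component pool. Deleting swaps the last element into the
// freed slot, so iteration stays a linear sweep over contiguous memory.
class ComponentPool {
public:
    void add(int entity, Transform t) {
        indexOf[entity] = data.size();
        entityAt.push_back(entity);
        data.push_back(t);
    }
    void remove(int entity) {
        std::size_t i = indexOf[entity];
        std::size_t last = data.size() - 1;
        data[i] = data[last];           // move last element into the hole
        entityAt[i] = entityAt[last];
        indexOf[entityAt[i]] = i;       // patch only the moved entity's index
        data.pop_back();
        entityAt.pop_back();
        indexOf.erase(entity);
    }
    Transform& get(int entity) { return data[indexOf[entity]]; }
    std::size_t size() const { return data.size(); }
private:
    std::vector<Transform> data;
    std::vector<int> entityAt;                    // index -> entity
    std::unordered_map<int, std::size_t> indexOf; // entity -> index
};
```

The trade-off is that deletion reorders the array, so nothing outside the pool may hold raw indices across a frame; everything goes through the entity id.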

So my next thought is that there should be a different component array for each Entity type that uses the component. In my surfing, I've come across several examples of code that looks like this:

struct Asteroids
{
    std::vector<Transform> transforms;
    std::vector<CollisionSphere> collisionSpheres;
    // etc...
};

So, the entity type here is Asteroid, and one instance of the Asteroids struct holds a list of all asteroids in the scene. If you have a system that performs operations on asteroid transforms, but perhaps not the transforms of some other entity types, then you can just feed the system asteroids.transforms, and the system can do its usual linear thing. There are a couple of drawbacks, it seems, to this approach as well:

 

  1. This breaks up a single big array (per component) into N smaller arrays per component (where N is the number of entity types that use that component), and some of those arrays (like players.transforms) might be very small. Running a system on a series of tiny arrays is probably not much better than jumping around in memory. However, there would still be significant savings, I would imagine, if your world contains several entity types that exist in large numbers (and thus have large component arrays).
  2. It seems that, suddenly, there are going to be a lot of areas of code that need to know about the concrete entity types, which they would not need to had they been programmed in a more traditional OOP style. In traditional OOP, you often just have an array of EntityBase* and you call a polymorphic Update method on each entity. The code that iterates through this array doesn't know or care what the concrete types are. But suppose we're doing a data-oriented entity/system/component architecture, and suppose we have a system that performs some operation on transforms, but we only want Asteroids and BlackHoles to use it. There is going to need to be some code that calls system.update(asteroids.transforms) and then calls system.update(blackholes.transforms). My intuition (and maybe this is a good intuition, or maybe it's brought on by an overdose of OOP-thinking) is that main, central pieces of glue code should not know about concrete types; the only code that should care about the behavior of concrete types is the concrete types themselves, and the central "glue" code that brings all of these types together should know only about the abstract classes, and be completely decoupled from the concrete types themselves.

Despite these potential drawbacks, I still feel that option #2 (having entity classes like Asteroids that have their own arrays of components, rather than using a master array for each component that all entity types must share) is far preferable. However, I am interested in some other opinions, and to see if perhaps someone has thought of a third way that is even better.
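For what it's worth, the glue layer under option #2 can stay quite thin: a system is just a free function over one contiguous array, and a single updateWorld function is the only place that names concrete entity types. A sketch (type names illustrative):

```cpp
#include <cassert>
#include <vector>

struct Transform { float x, vx; };

// Per-entity-type component arrays, as in option #2.
struct Asteroids  { std::vector<Transform> transforms; };
struct BlackHoles { std::vector<Transform> transforms; };

// A "system" is a function over one contiguous component array.
void integrate(std::vector<Transform>& ts, float dt) {
    for (Transform& t : ts)
        t.x += t.vx * dt;   // pure linear sweep, no per-element branching
}

// The glue layer is the ONLY place that names concrete entity types.
void updateWorld(Asteroids& a, BlackHoles& b, float dt) {
    integrate(a.transforms, dt);
    integrate(b.transforms, dt);
    // Players deliberately excluded: this system never touches them.
}
```

Whether concentrating all the concrete-type knowledge in one function like this is acceptable coupling is, of course, exactly the judgment call raised in point 2.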

 

My second question (or perhaps more of an observation) is that it seems like you can't entirely avoid random-access of objects. Take collision detection and response. Suppose you have some CollisionDetectionSystem whose job it is to iterate through all of the entities and find out which ones are colliding. For each collision, it adds a Collision object to an array of collisions. Then comes the collision resolution step. Even if you have just one CollisionResolutionSystem, that system is going to have to read each Collision object, find out which entities were involved in the collision, and then find the appropriate components for these entities in some other list. This find operation is going to necessarily entail jumping hither and yon through the component arrays, modifying them as called-for by the collision response.
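One partial mitigation I can think of: since collisions are gathered into a batch before resolution, the batch can be sorted by entity index first, so the resolver's lookups sweep the component arrays mostly in ascending order rather than hopping at random. A sketch (names illustrative):

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <vector>

// A detected collision, recorded as a pair of component-array indices.
struct Collision { std::size_t a, b; };

// Sort the batch so the resolution pass touches the component arrays in
// roughly ascending index order, improving cache locality of the lookups.
void sortForLocality(std::vector<Collision>& cs) {
    std::sort(cs.begin(), cs.end(),
              [](const Collision& l, const Collision& r) {
                  return std::min(l.a, l.b) < std::min(r.a, r.b);
              });
}
```

This doesn't eliminate the random access, only makes it more cache-friendly on average.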

 

Is there a clever solution to that sort of situation, or is it just a performance weakness that is accepted on the basis that it's still faster than the traditional OOP method of always jumping all over memory for everything?


Good rim lighting with Blinn-Phong

03 April 2014 - 09:36 PM

So, let's say I'm using a normalized Blinn-Phong BRDF. Here is the one from Real-Time Rendering, Third Edition, p. 257, Eq. 7.49:

 

f(l, v) = c_diff / π + ((m + 8) / (8π)) · R_F(α_h) · cos^m(θ_h)

 

Where:

 

cdiff is the diffuse reflectance

RF is the Fresnel term (Schlick's approximation)

αh is the angle between the view vector and the half vector

m is the specular exponent

θh is the angle between the normal and the half vector.

 

Now, this entire thing gets multiplied by the irradiance to get the final radiance:

 

L_o(v) = f(l, v) ⊗ E_L · cos(θ_i)

 

It seems from this that it's quite difficult to get a nice Fresnel sheen, particularly on rough materials like asphalt. This is because, in order to maximize the specular term of the BRDF, both the view vector and the light vector have to be at grazing angles with the surface being rendered. Unfortunately, when the light vector is at a grazing angle with the surface, the irradiance factor is near zero, and so you don't get much light at all.

 

The situation can be somewhat improved by using a large m. Then, the normalization factor is large and this results in some light. However, it also results in a very sharp highlight, which is not helpful if the material is supposed to be rough.
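The tension can be seen numerically. Here is a small sketch of the specular term of the BRDF above times the cos(θ_i) irradiance factor, with R_F taken as 1 for illustration (no real shading code, just the arithmetic):

```cpp
#include <cassert>
#include <cmath>

// Normalized Blinn-Phong specular term, times the n·l irradiance factor.
// R_F is assumed to be 1 (full grazing Fresnel) for illustration.
double specularRadiance(double m, double cosThetaH, double cosThetaI) {
    const double pi = 3.14159265358979323846;
    double norm = (m + 8.0) / (8.0 * pi);   // normalization factor
    return norm * std::pow(cosThetaH, m) * cosThetaI;
}
```

Raising m does inflate the normalization factor, but the cos^m(θ_h) lobe collapses just as fast away from the exact mirror direction, so the extra energy all lands in a sharp highlight rather than a broad sheen.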

 

In fact, it's difficult for me to envision a scenario where you can get strong Fresnel, strong n_dot_h, and strong irradiance.

 

I could divide the specular term by the clamped n_dot_l, but I'll have to add a small epsilon to avoid a singularity when n_dot_l is zero. Also, that seems to be sort of a hack without any physical basis and it doesn't conserve energy.

 

Is my thinking wrong?

 

 


Challenges Abstracting OpenGL

12 March 2014 - 07:23 PM

Greetings,

 

I am trying to write a C++ wrapper for OpenGL. I'm sure I'm reinventing the wheel a bit, but it's a learning exercise for me. My hope is to abstract things a bit to make it easier to spin up OpenGL programs, and to make the code easier to reason about and less error-prone. Following strict OO principles is somewhat of a secondary concern, but my #1 guiding principle here, I would say, is Item 18 from Effective C++: Make interfaces easy to use correctly and hard to use incorrectly.

 

Problem A: Binding Issues

 

With that said, one of the biggest challenges I'm running into is the fact that OpenGL is a state machine with a lot of global data. In particular, the bind points are causing me trouble. Take my Texture2D class, for example. If I were to mimic OpenGL's interface exactly, I would have methods like Bind, SubImage, etc. And that's pretty much what I've done. The problem is that the user has to call Bind before they call SubImage, or else they'll get unexpected results (including, perhaps, subimaging a different texture entirely!). So it'd be nice to have a way to force the client to bind before calling SubImage.

 

Solution #1

 

My initial, insufficient solution was somewhat RAII-inspired. Here's how it worked:

 

  • First, I made the methods that required binding (SubImage, etc.) private.
  • Then, I wrote a Tex2DInterface class.
  • This class was a friend of Texture2D, and thus could access its private methods.
  • The constructor for Tex2DInterface accepted a Texture2D object and immediately called Bind() on it.
  • The destructor for Tex2DInterface called Unbind() on the same object.
  • The Tex2DInterface class contained public pass-throughs for SubImage, etc.

So, if you needed to call SubImage on a Texture2D object, you were forced to create a Tex2DInterface object for it, and this object would automatically handle the binding and unbinding for you:

Texture2D myTex(...);

{
    Tex2DInterface ti(myTex);
    ti.SubImage(...);
} // Interface is destroyed, texture is unbound

The problem that almost immediately came to mind, though, is that one could create two interfaces for two different Texture2D objects in the same scope, and call methods on them in any order, and obviously this would be terrible. So, I quickly scratched this idea.

 

Solution #2 

 

I could just add a runtime check to see if the method in question (SubImage, or whatever) is being called from an object that is currently bound. This would leave it up to the client to make sure that the right object is bound. This solution won't prevent the client from making mistakes, but it will catch them and emit a warning so they can be fixed. The runtime checks can be compiled out of the release build.
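A minimal sketch of that check, with the currently-bound id tracked in a static (mirroring what a cached glBindTexture wrapper would know) and the GL calls themselves left as comments so the sketch stays self-contained:

```cpp
#include <cassert>

using GLuint = unsigned int;   // stand-in for the GL typedef

class Texture2D {
public:
    explicit Texture2D(GLuint id) : id_(id) {}
    void Bind()   { currentlyBound_ = id_; /* glBindTexture(GL_TEXTURE_2D, id_); */ }
    void Unbind() { currentlyBound_ = 0;   /* glBindTexture(GL_TEXTURE_2D, 0);   */ }
    bool SubImage(/* ... */) {
        // In a release build this check would compile out (e.g. under NDEBUG).
        if (currentlyBound_ != id_)
            return false;               // report misuse instead of corrupting state
        /* glTexSubImage2D(...); */
        return true;
    }
private:
    GLuint id_;
    static GLuint currentlyBound_;      // mirrors GL_TEXTURE_BINDING_2D
};
GLuint Texture2D::currentlyBound_ = 0;
```

Returning a value here is just for illustration; in practice the failing branch would assert or log.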

 

Solution #3

 

For any methods that require binding, I could do the binding myself, inside the method. So, the SubImage method would call Bind() before doing its thing. The upside to this is that the user of my Texture2D class can call any method on any Texture2D object without having to worry about calling Bind() first. In fact, the whole concept of binding could be hidden. The downside is that it would result in a lot of redundant calls to Bind(). Now, I'm already checking to reduce redundant calls to glBindTexture, so that part won't be a problem, but the redundancy checks aren't free, either. The other downside is that if the client is totally oblivious to the binding going on under the hood, then they might be tempted to group their method calls in an inefficient way. My redundancy checks won't help them if they're calling a method on textureA, then textureB, and then textureA again, etc.
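Here's a sketch of that internal-bind approach with the redundancy cache, using a counter in place of the real glBindTexture call so the A-B-A cost is visible:

```cpp
#include <cassert>

using GLuint = unsigned int;   // stand-in for the GL typedef

static GLuint g_lastBound     = 0;  // cached GL_TEXTURE_BINDING_2D
static int    g_realBindCalls = 0;  // stands in for actual glBindTexture calls

class Texture2D {
public:
    explicit Texture2D(GLuint id) : id_(id) {}
    void SubImage(/* ... */) {
        bindIfNeeded();
        /* glTexSubImage2D(...); */
    }
private:
    void bindIfNeeded() {
        if (g_lastBound != id_) {       // skip redundant binds
            g_lastBound = id_;
            ++g_realBindCalls;          // glBindTexture(GL_TEXTURE_2D, id_);
        }
    }
    GLuint id_;
};
```

Calling SubImage on textureA, textureA, textureB, textureA issues three real binds: the cache absorbs only the back-to-back repeat, not the A-B-A pattern, which is exactly the grouping problem described above.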

 

Problem B: Leaky Abstractions

 

Basically, my problem here is that I can create a nice Texture2D class that prevents the client from having to make direct OpenGL calls, but it doesn't prevent them from linking to OpenGL32.lib and making direct OpenGL calls. So, they could create a Texture2D object, and then mix it with direct OpenGL calls and mess everything up. I mean, this is a problem that even big OpenGL libraries like OpenSceneGraph have. I guess at some point, you just have to tell the client, "Don't do anything stupid." Is that the correct philosophy?

