Gumgo

Members
  • Content count
    477
  • Joined
  • Last visited

Community Reputation

968 Good

About Gumgo

  • Rank
    Member

Personal Information

  • Location
    Seattle, WA
  1. I am creating procedurally generated levels by stitching together "tilesets" created in a 3D modeling program such as 3ds Max. Each tile consists of a 2x2x2 "section" of geometry, such as a wall, floor, or corner. These tiles are placed next to each other to form rooms and hallways.

     After all the tiles are placed, I run a mesh simplification algorithm over the resulting geometry to reduce polygon counts for rendering and physics (and eventually NavMesh generation). The algorithm goes something like this:
     1) Form groups of adjacent coplanar triangles that all have the same UV parameterization (i.e. removing vertices wouldn't cause "warping").
     2) Combine each group into a single polygon, possibly with holes.
     3) Remove excess collinear vertices from the boundaries.
     4) Triangulate the polygons using constrained Delaunay triangulation.

     The issue is that step 4) is prone to producing long, skinny triangles, which causes problems everywhere (e.g. breaking the thresholds I use to detect collinearity). Can anyone provide some advice on how to approach this problem, or point me to some resources or algorithms that deal with avoiding it? (A rough check for what I'd call "skinny" is below.)
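     For reference, what I mean by a "long skinny triangle" is one whose shortest altitude is tiny compared to its longest edge. A rough check - the Vec3 helpers are just stand-ins for my math library, and the 0.05 threshold is arbitrary:

[source lang="cpp"]
// Sliver check: ratio of the shortest altitude to the longest edge.
// Vec3, cross() and length() are stand-ins for whatever math library is in
// use; the 0.05 threshold is arbitrary.
#include <algorithm>
#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3 sub(const Vec3 &a, const Vec3 &b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
static Vec3 cross(const Vec3 &a, const Vec3 &b) {
    return { a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x };
}
static float length(const Vec3 &v) { return std::sqrt(v.x*v.x + v.y*v.y + v.z*v.z); }

// Returns true if triangle (a, b, c) is a sliver.
bool isSliver(const Vec3 &a, const Vec3 &b, const Vec3 &c, float threshold = 0.05f) {
    float e0 = length(sub(b, a));
    float e1 = length(sub(c, b));
    float e2 = length(sub(a, c));
    float longestEdge = std::max({ e0, e1, e2 });
    if (longestEdge <= 0.0f)
        return true; // fully degenerate
    float doubleArea = length(cross(sub(b, a), sub(c, a))); // 2 * triangle area
    float shortestAltitude = doubleArea / longestEdge;
    return shortestAltitude < threshold * longestEdge;
}
[/source]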
  2. Ah thanks, that sounds like a way better idea! It makes more sense for the server to simply tell each client how far off they are, rather than the clients trying to guess. I'll go ahead and implement that!
  3. I'm performing client side prediction, so the clients need to run ahead of the server. The clients need to run far enough ahead that their input packets are received by the server slightly before they actually need to be executed. If they arrive too late, the server would either need to back up the world (which I'm not doing) or make some estimations (e.g. the player was moving forward, so they probably still are) and try to fix things up as much as possible once the input packet does arrive (e.g. the player might jump 1 frame late).

     I'm having some trouble determining the amount to predict by in a robust way that deals well with jitter and changing network conditions. Sometimes I "get it right" and the game runs well, but other times the prediction is a bit too short and a lot of input packets arrive too late, resulting in client/server discrepancies (where the server is actually "wrong" in this case) and lots of client correction.

     Call the prediction time P. A summary of the ideal behavior is:
     - P should be the smallest value large enough so that under "stable" conditions (even stable bad conditions), all input packets arrive at the server in time.
     - If there is a brief network glitch, it should be ignored and P should not be readjusted.
     - If network conditions change (e.g. RTT or jitter goes up), P should be readjusted.

     Currently, my algorithm to determine the prediction time P is as follows:
     - Packets are sent at a fixed rate R. Let S = 1/R be the spacing between two packets (e.g. R = 30 packets/sec means S = 33 ms).
     - The ping time and jitter of each outgoing packet is measured. The ping time is the difference between the time the packet was sent and the time the ACK for the packet was received. Note that since packets (which contain ACKs) are sent with a spacing of S, the ping times will be somewhat "quantized" by S. To get an average ping time (RTT), I use an exponential moving average with a ratio of 0.1. To compute the average jitter J, I take the differences between each packet's ping time and the current average RTT and compute the exponential moving variance (section 9 of this article).
     - I now have RTT and J. Even though they are averages, they still fluctuate a bit, especially J.
     - Each packet contains a server time update message. Call this T_p, for packet time. I could just let P = RTT + T_p, but then P would be jittery, because RTT is changing and T_p has jitter in it. Instead, to compute P I do the following:
     - I compute P_f, the prediction value associated with this single frame, which will be jittery. My formula is P_f = T_p + RTT + 2*J + S. My reasoning for each term: RTT because we want to predict ahead of the server's time by RTT/2, 2*J because we want some padding for jitter, and S because we want some more padding for the quantization error described above.
     - Each time I compute a new P_f for the frame, I update P_a, an exponential moving average of P_f with ratio 0.1.
     - On the very first frame, I set P to P_f, since I only have one data point. On subsequent frames, I increment P by 1, so that it is constantly and smoothly increasing, and I check whether P needs to be "re-synced":
     - I take D = abs( P - P_a ), the difference between the prediction time P and the average of the per-frame prediction times, and test whether D > 2*J + S. If that expression is true, I "re-sync" P by assigning P = P_a.

     I picked the value 2*J + S with the reasoning that if our predicted time differs from the average per-frame predicted time by more than twice the jitter (with padding for quantization), then it's likely wrong. A condensed code sketch of this is below.

     Can anyone suggest improvements to this algorithm, or suggest a better one? My main problem right now seems to be determining when to re-sync P. Performing too many re-syncs results in glitches and correction, but if P is too low and I don't re-sync (because it is "close enough" to P_a), that also results in glitches and correction.
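     In code, roughly (names are made up; time is in seconds here, and the "increment P by 1" step becomes adding the frame duration):

[source lang="cpp"]
// Condensed sketch of the estimator described above. "emaRatio" is the 0.1
// EMA ratio, S is the packet spacing, and the re-sync test is D > 2*J + S.
#include <cmath>

struct PredictionEstimator {
    float emaRatio = 0.1f;
    float S = 1.0f / 30.0f; // packet spacing (1 / send rate)
    float rtt = 0.0f;       // exponential moving average of ping time
    float jitterVar = 0.0f; // exponential moving variance of ping time
    float P = 0.0f;         // current prediction time
    float P_a = 0.0f;       // EMA of the per-frame prediction values P_f
    bool haveRtt = false;
    bool haveP = false;

    // Called whenever an ACK is received for an outgoing packet.
    void onAck(float pingTime) {
        if (!haveRtt) { rtt = pingTime; haveRtt = true; }
        else          { rtt += emaRatio * (pingTime - rtt); }
        float d = pingTime - rtt;
        jitterVar += emaRatio * (d * d - jitterVar);
    }

    // Called once per frame with the server time T_p from the latest packet
    // and the frame duration (this is the "increment P by 1" step).
    float update(float T_p, float frameDt) {
        float J = std::sqrt(jitterVar);
        float P_f = T_p + rtt + 2.0f * J + S;
        if (!haveP) {
            P = P_a = P_f;
            haveP = true;
        } else {
            P_a += emaRatio * (P_f - P_a);
            P += frameDt;
            if (std::fabs(P - P_a) > 2.0f * J + S)
                P = P_a; // re-sync
        }
        return P;
    }
};
[/source]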
  4. Thanks for the replies. My target platforms are PC/Mac and iPad. I'm certainly glad to hear that these allocation issues are less of an issue on platforms with virtual memory. From what I've read, it seems like the iPad virtual memory setup is fairly similar to that of a PC, with the exception of swapping memory to disk (which shouldn't happen in a game anyway). Can anyone comment on this?
  5. So from everything I've read, game engines are one type of software where having fine control over how memory is allocated is often very important. I often read that during gameplay you ideally don't want to allocate any memory at all - it should all be preallocated when the level loads. This seems like a good idea, but in practice there are many cases where it seems very difficult to judge what the allocation limits should be. I've listed some specific examples below. Could someone give examples of how these decisions might be made in practice?

     (One thing to keep in mind is that in the particular project I'm working on, the levels are procedurally generated.)

     - Enemy limits. Normally the player might be fighting maybe 5-6 enemies at once. But what if the player runs around the level and gathers a huge crowd? Would you allocate for the worst possible case?
     - Similarly with items. Five players fire bows as rapidly as possible, and there are a ton of arrow items stuck in walls. Should I calculate a "max fire rate" and determine the maximum possible number of arrows that could be fired and stuck in the wall before they despawn? It seems like it might be really hard to determine these limits for certain gameplay elements. And networked objects complicate things further, since their ordering isn't guaranteed.
     - Network buffers. Messages that are guaranteed are queued up until the ACK has been received. But if there's a network blip, the queue might briefly have to get big.
     - Objects that have variable sizes. For example, suppose one enemy has a skeletal mesh with 3 bones, and another has 20 bones. Each requires a "state" buffer. But if I allocate space for at most, say, 100 "enemies", I'd have to assume the worst case (they all have 20 bones) if I didn't do some sort of on-demand allocation.
     - Strings. Perhaps they should be avoided altogether. Right now, for example, I have a file which describes the "sword" item, and an enemy might spawn a "sword" when killed. Should these all be preprocessed into indices?
     - Components in an entity component system. If it's possible to dynamically add components (e.g. an enemy is OnFire, Frozen, and Poisoned), should you assume the worst and allocate in case all enemies obtain the largest possible combination of dynamic components?

     It seems like a lot of these "worst cases" could potentially be pretty hard to spot, especially if there's any sort of user-provided scripting system. What is usually done to deal with these types of issues? (To be clear about the kind of preallocation I mean, there's a sketch of a fixed-capacity pool after this post.)
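     For concreteness, this is the kind of fixed-capacity pool I mean by "preallocated" (the type and capacity are just placeholders):

[source lang="cpp"]
// Minimal fixed-capacity pool - the "preallocate at level load" scheme the
// questions above are about. T and Capacity are placeholders; T is
// default-constructed up front, no allocations happen after construction,
// and acquire() simply fails when the pool is exhausted.
#include <array>
#include <cstddef>

template <typename T, std::size_t Capacity>
class FixedPool {
public:
    FixedPool() {
        // Build the free list over the preallocated storage.
        for (std::size_t i = 0; i < Capacity; ++i)
            freeList_[i] = Capacity - 1 - i;
        freeCount_ = Capacity;
    }

    // Returns nullptr when the pool is exhausted - this is exactly the
    // "what should the limit be?" question.
    T *acquire() {
        if (freeCount_ == 0)
            return nullptr;
        std::size_t index = freeList_[--freeCount_];
        return &storage_[index];
    }

    void release(T *object) {
        std::size_t index = static_cast<std::size_t>(object - storage_.data());
        freeList_[freeCount_++] = index;
    }

private:
    std::array<T, Capacity> storage_{};
    std::array<std::size_t, Capacity> freeList_{};
    std::size_t freeCount_ = 0;
};

// Hypothetical usage: FixedPool<Enemy, 128> enemyPool; Enemy *e = enemyPool.acquire();
[/source]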
  6. Thanks, this information has been very helpful! I ended up going with hplus0603's method and just dropping packets which come out of order, and resending the moment I detect the missing ACK - greatly simplifies things.

     On the topic of reliability/ordering of messages, I had a thought today. This would probably have way too much overhead/be impractical, but have any games implemented a dependency graph for message ordering, rather than simply an ordered stream? An approach might work somewhat as follows:
     - Each message would have an ID (probably 2 bytes, maybe with some upper bits being used as flags)
     - Each message would be sent with a list of IDs it depends on (this would be the overhead-y part!)
     - When messages are received, their dependency graph is evaluated, and they are only executed after all dependencies have been executed
     - When the sender determines that all messages that a message is dependent on have been processed, the dependencies are removed to reduce dependency list length. To do this, examine incoming ACKs and figure out which messages must have been executed.

     Example of why the last point is important: suppose you spawn an enemy and then much later it gets killed. The "enemy killed" message should be dependent on the "enemy spawned" message, but in order for that to work the receiver needs to keep track of the "enemy spawned" message for a long time! But if you can confirm using ACKs that the "enemy spawned" message was already executed, you can just forget about it. And if you hang onto dependencies for a long time, you'll have issues when the 2-byte message IDs wrap around.

     This is just the more general extension of having multiple message channels each with independent ordering. Of course, the cost is probably extremely high. Anyone done something like this before?
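     Purely to illustrate the shape of what I'm describing (IDs, names, and sizes are all made up), the receiving side might look something like:

[source lang="cpp"]
// Each reliable message carries the IDs of the messages it depends on; the
// receiver parks a message until all of its dependencies have executed.
#include <cstdint>
#include <unordered_map>
#include <unordered_set>
#include <vector>

using MessageId = uint16_t;

struct Message {
    MessageId id;
    std::vector<MessageId> dependencies; // the overhead-y part
    std::vector<uint8_t> payload;
};

class DependencyDispatcher {
public:
    // Returns true if the message executed immediately; otherwise it is
    // parked until all of its dependencies have executed.
    bool deliver(const Message &msg) {
        if (!ready(msg)) {
            pending_[msg.id] = msg;
            return false;
        }
        execute(msg);
        drainPending();
        return true;
    }

private:
    bool ready(const Message &msg) const {
        for (MessageId dep : msg.dependencies)
            if (executed_.count(dep) == 0)
                return false;
        return true;
    }

    void execute(const Message &msg) {
        // ... hand msg.payload to game code here ...
        executed_.insert(msg.id); // in the real thing this set is pruned via ACKs
    }

    // A newly executed message may unblock parked ones; keep scanning until
    // no more progress is made.
    void drainPending() {
        bool progress = true;
        while (progress) {
            progress = false;
            for (auto it = pending_.begin(); it != pending_.end(); ++it) {
                if (ready(it->second)) {
                    Message msg = it->second;
                    pending_.erase(it);
                    execute(msg);
                    progress = true;
                    break; // erase invalidated the iterator; restart the scan
                }
            }
        }
    }

    std::unordered_set<MessageId> executed_;
    std::unordered_map<MessageId, Message> pending_;
};
[/source]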
  7. Thanks for the replies. From your posts, it sounds like UDP "isn't as bad" as I had expected. I was under the impression that packets are out of order and delayed fairly regularly. But hplus0603, if you're simply dropping packets any time they come late and it hasn't been an issue, then that probably doesn't happen too often. I know the answer is probably dependent on a lot of things, but about how often do issues such as out of order/dropped packets occur with UDP under a regular network load?
  8. I've implemented a system on top of UDP similar to the one described here: http://gafferongames.com/networking-for-game-programmers/. I have the system working well, and now I'm trying to add some "higher level functionality" on top of it to make it easier to do things like send guaranteed messages. I have a few questions about how dropped packets are handled. (Sorry they're a bit lengthy, but network code has lots of subtle issues that can be hard to explain!)

     1) First of all, what is the best approach to detect likely dropped packets? The article's suggestion is basically the following: if a packet can't be ACKed anymore (i.e. the ACK bitfield window has moved past that packet), it's probably been dropped. So for example, suppose packets are being sent at a rate of 30 per second and the ACK bitfield contains 30 ACKs. If you send packet A at time t = 1, it will take about 1 second to receive ACKs for the last 30 packets. If packet A is never ACKed in this 1 second, it can NEVER be ACKed (the ACK bitfield window no longer contains packet A), so A should be considered dropped.

     But this seems weird to me. If you were sending packets at a rate of 10 per second and the ACK bitfield was still 30 long, it would take 3 seconds for packet A to fall off the ACK bitfield. But 3 seconds seems too long to wait to detect a dropped packet. Or if you were sending packets at a rate of 60 per second and the ACK bitfield was 30 long, packet A would only be on the bitfield for 0.5 seconds. The article doesn't mention using RTT to detect dropped packets, but that seems like a better indicator to me. Obviously if a packet is no longer in the ACK bitfield window it should be considered dropped (since it CANNOT be ACKed), but maybe an additional criterion of, say, a maximum ACK time of RTT*2 should be included as well? How is this usually done? (A rough sketch of what I mean is at the end of this post.)

     2) When you detect a packet that's likely dropped, how is that usually treated? I can see two options here:
     a) Decide that the packet IS dropped and ignore it, even if its ACK comes in later
     b) If the ACK comes in later, still process it

     Obviously, option a) could be more wasteful, since once you've decided that a packet is dropped, you can't ever "change your mind" upon receiving an ACK. However, the reason I'm considering it is that it simplifies things a lot. For example, suppose you send guaranteed message M in packet A, and after RTT*2 (or whatever your criterion is) you haven't received an ACK, so you decide it's probably dropped. You then resend M in packet B. If you go with option a), you can forget all about packet A and only need to wait for an ACK from packet B. The downside is that if you end up receiving an ACK for packet A after you've determined it's been dropped, you can't actually use this ACK. A potential issue I see with this is that if the RTT suddenly jumps up a lot, you might go through a brief period where you decide a bunch of packets are dropped when in reality they're just a bit more delayed. This would occur until the dropped packet cutoff time is adjusted to take the new RTT into account. I'm also not sure how this would work with network jitter - if there was high jitter, then I guess more packets would be considered dropped.

     But if you go with option b), it's still possible that you could receive an ACK for packet A! So the issue described above wouldn't occur, and you'd have the ACK information available sooner than if you waited for packet B's ACK. But in order to respond in this case, you'd need to keep some information about packet A around; i.e. you'd need to store "message M is pending in both packet A and packet B". So for each guaranteed message M, you need to store a list of packets it's been sent in, rather than just a single packet (the most recent one). You don't want this list to grow forever, though, so you need to eventually "forget" about old packets, and deciding when to do this isn't exactly clear.

     Right now what I'm doing is keeping track of each packet containing guaranteed messages until all guaranteed messages in that packet have been ACKed. So for example, message M has references to packets A and B, and packets A and B have references to message M. If B is successfully ACKed, I notice that B contains M, so I remove M from both A and B; now I notice that A is empty, so I free A. But this just seems... "messy" and somewhat unbounded. If there are unexpected network issues and I end up resending message M several times, I'd suddenly be keeping track of a lot of old packets which probably didn't get through. And the worst case is if a (virtual) connection is dropped - if I have a 10 second timeout and a bunch of packets containing M never get through, I'd have to keep track of 10 seconds' worth of resent packets until the connection timeout is detected!

     Does anyone have a suggestion on this issue? To summarize: when a packet is likely dropped, should I a) decide that it IS dropped, or b) still allow it to change its status to "delivered" if it's ACKed at a later time?
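     For 1), the detection rule I have in mind looks roughly like this (field names are made up; the RTT*2 cutoff is the part I'm unsure about):

[source lang="cpp"]
// A packet is considered dropped either when the ACK bitfield window has
// moved past it (it can never be ACKed) or when it has gone unACKed for
// longer than RTT*2.
#include <cstdint>

struct SentPacket {
    uint16_t sequence;
    double   sendTime;
    bool     acked = false;
};

// "window" is how many sequence numbers the ACK bitfield covers.
bool isProbablyDropped(const SentPacket &p, uint16_t latestRemoteAck,
                       double now, double rtt, int window = 32) {
    if (p.acked)
        return false;
    // Fell off the ACK bitfield: it can never be ACKed now.
    int16_t age = static_cast<int16_t>(latestRemoteAck - p.sequence); // wrap-safe diff
    if (age >= window)
        return true;
    // Additional RTT-based cutoff so detection doesn't depend on the send rate.
    return (now - p.sendTime) > rtt * 2.0;
}
[/source]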
  9. Not looking for anything sophisticated - just keeping the player from falling through the world. After this phase of the update, though, I will have to do some discrete solving to push overlapping entities apart (everything is capsules or spheres).
  10. Thanks for the reply. I'm currently doing something similar to what you describe in #1 - continuous collision detection, where I start with t = 0, solve for the first contact, step to that point, add to t, adjust velocity, and keep doing this until t = 1. The issue is the "adjust velocity" step.

     Usually, when coming in contact with a new surface, the velocity should be adjusted by subtracting n*(n.v) from the current velocity - that is, you remove the component of the velocity which is parallel to the normal of the surface you just came in contact with. However, if you're actually in contact with several surfaces, it's more complicated. Consider the following case, as illustrated in the attached image: your character is walking along surface A and then comes in contact with a sloped ceiling, surface B. The desired reaction is that the initial velocity vector, v1, is projected onto the line formed by where the two surfaces meet - i.e. the cross product of their normal vectors - and the velocity after the intersection is v2. However, if you only take a single contact into account when processing contacts, the following happens:
     1) The object is moving along surface A.
     2) The object hits surface B, and the component of the velocity which is parallel to surface B's normal is removed. This causes the object's new velocity to be slightly downward.
     3) IMMEDIATELY after this ("0 time later"), an intersection with surface A is detected. The component of the velocity which is parallel to surface A's normal is removed, and the object's new velocity is, again, pointing into surface B.
     4) IMMEDIATELY after this ("0 time later"), an intersection with surface B is detected...
     ...and the process keeps repeating until you reach an iteration cap. Essentially, there are two contacts, and each alters the object's velocity so that it's going into the other surface, so no progress is ever made.

     In my initial post, what I was trying to get at was a general solution to this. You first build a list of all surfaces that the object is in contact with. Then you compute the amount of force that would be applied to each surface when attempting to move the object by its current velocity. Finally, you have each surface push back with a force equal to the one exerted on it. This would give the true final altered velocity of the object. However, now I realize that there are really 3 cases to deal with:
     1) The object is only in contact with one surface, e.g. walking up a hill. In this case, you simply remove the velocity projected onto the normal.
     2) The object is in contact with two surfaces, e.g. the example image. In this case, if the velocity direction is toward the "crease" created by the two surfaces, you project the velocity onto the cross product of the two normals and take that as your new velocity; otherwise, you revert to case 1, using the plane which the velocity vector points most toward.
     3) The object is "pinned" by 3 or more contact points (e.g. attempting to walk into a sharp corner). In this case the object won't be able to move at all, so just set the new velocity to 0 and end the timestep.

     If you have just one contact point, it's always case 1. If you have 2 contact points, it could be case 1 or case 2, depending on where the velocity vector is pointed, but it's not too hard to tell. However, case 3 seems trickier. If you have N contact points, any of these cases could be produced. Consider the case where you're standing in a corner, contacting two walls and the floor. If you walk directly OUT of the corner, it's case 1: you're only "pushing" against one contact, the floor. If you walk OUT of the corner along one of the walls, it's case 2: you're moving away from one of the walls, but pushing against the floor and the other wall. If you walk INTO the corner, it's case 3: you're being "pinned" by all 3 surfaces. Any ideas on how to robustly distinguish between these cases? (The projection math I'm describing for cases 1 and 2 is sketched below.)
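     For reference, the projection math for cases 1 and 2, written out with glm (normals are assumed to be unit length and pointing away from their surfaces):

[source lang="cpp"]
#include <glm/glm.hpp>

// Case 1: slide along a single surface - remove the normal component.
glm::vec3 slideAlongPlane(const glm::vec3 &v, const glm::vec3 &n) {
    return v - n * glm::dot(v, n);
}

// Case 2: moving into the crease between two surfaces - keep only the
// component along the crease direction (the cross product of the normals).
glm::vec3 slideAlongCrease(const glm::vec3 &v, const glm::vec3 &nA, const glm::vec3 &nB) {
    glm::vec3 crease = glm::normalize(glm::cross(nA, nB));
    return crease * glm::dot(v, crease);
}

// Picking between them for two contacts: if v pushes into both planes it's
// the crease case; otherwise slide against the plane v points into most.
glm::vec3 adjustVelocity(const glm::vec3 &v, const glm::vec3 &nA, const glm::vec3 &nB) {
    float intoA = -glm::dot(v, nA);
    float intoB = -glm::dot(v, nB);
    if (intoA <= 0.0f && intoB <= 0.0f)
        return v; // moving away from both surfaces
    if (intoA > 0.0f && intoB > 0.0f)
        return slideAlongCrease(v, nA, nB);
    return slideAlongPlane(v, intoA > intoB ? nA : nB);
}
[/source]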
  11. Here is my situation: I have a rigid body with finite mass in contact with N surfaces with infinite mass. I want to apply a force f to the rigid body. This force is divided among the N surfaces: for each surface s_i, a certain proportion of the force will be applied, w_i*f, where the sum of all the w_i is at most 1. The resulting force pushing back from each surface is -w_i*f·n_i, i.e. the normal component of the force applied to that surface. How do I compute the weights w_i for each surface? I attached an example image.
  12. I recently stumbled across several articles about using "partial derivative maps" rather than normal maps - i.e. altering the surface normals using the partial derivatives of the heightmap rather than using the normals directly. This seems really powerful to me, as partial derivatives are a lot easier to work with for things like combining multiple normal maps, correcting stretching/scaling issues, etc. And according to this article, if you use partial derivatives you don't have to store tangent space vectors. The article talks about "perturbing" the surface normal by the surface gradient and links to a paper by Morten Mikkelsen for details of the math behind it. Unfortunately, the link is broken, and I can't find any other links to the paper online. Could someone explain exactly what is going on with this process, or link to a paper that explains it? Is it "correct" in the sense that on a surface with orthogonal s and t surface vectors (not sure what to call these) it will produce the same result as tangent space normal mapping?

     I made an attempt at implementing... something (apparently not this, since it didn't end up working and I ended up calculating a form of the tangent space), but I had problems with seams across triangles. My technique goes something like this (there's a code sketch of it after this post):
     1) Find the s and t "vectors" - the directions in space (screen space in my case) in which s and t are increasing. These will NOT necessarily be orthogonal (if the texture is skewed), but that's okay. The magnitudes of these vectors represent the size the texture is scaled to. So basically, this is reconstructing a non-orthogonal tangent space with the "z" vector being the triangle's face normal. Call these T and B.
     2) Orthogonalize T and B with respect to N, but do this separately for each of them. That is:
        T' = T - N*(T . N)
        B' = B - N*(B . N)
        and then scale them back up to their original magnitudes. So now you have a non-orthogonal tangent space, TBN.
     3) Apply the derivative map. If D is the vector of partial derivatives from the derivative map, then:
        Dx' = T' + N*Dx
        Dy' = B' + N*Dy
        N = normalize( cross( Dx', Dy' ) )
        So you treat Dx and Dy as the partial derivatives in the non-orthogonal tangent space (i.e. the change along T' and B') and take the cross product of Dx' and Dy' to get the normal.

     Unfortunately, this technique causes seams across triangles when the T and B vectors change sharply (unless it's because I messed up my math). I was hoping that since I was using the "true" tangent space with derivatives (not an orthogonalized one, which technically doesn't produce the true normals if the texture is skewed) this wouldn't be the case, but I guess this technique would still require tangent space "averaging". What I don't understand is how the normal perturbing method, or ANY method, could possibly work without some form of smoothing/averaging going on across triangles. The article I linked to talks about perturbing by the surface gradient. But if the s and t vectors changed sharply across an edge, wouldn't the gradient also sharply change and cause a noticeable edge?
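     Here is roughly what my steps 2) and 3) look like written out with glm (just the math; the screen-space reconstruction of T and B from step 1) is omitted):

[source lang="cpp"]
// Dx/Dy are the values sampled from the derivative map; N is the interpolated
// surface normal; T and B are the (possibly non-orthogonal) tangent directions
// from step 1).
#include <glm/glm.hpp>

glm::vec3 perturbNormal(glm::vec3 T, glm::vec3 B, glm::vec3 N, float Dx, float Dy) {
    // 2) Orthogonalize T and B against N separately, preserving their lengths.
    float lenT = glm::length(T);
    float lenB = glm::length(B);
    glm::vec3 Tp = glm::normalize(T - N * glm::dot(T, N)) * lenT;
    glm::vec3 Bp = glm::normalize(B - N * glm::dot(B, N)) * lenB;

    // 3) Treat Dx/Dy as partial derivatives in this non-orthogonal tangent
    //    space, offset along N, and take the cross product to get the normal.
    glm::vec3 dpx = Tp + N * Dx;
    glm::vec3 dpy = Bp + N * Dy;
    return glm::normalize(glm::cross(dpx, dpy));
}
[/source]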
  13. Well, I finally managed to get 2D depth texture writes to work! The issue there actually turned out to be rather annoying: I had previously been failing to write to cube map depth textures, so I tried binding a dummy 1x1 cubemap color texture to the framebuffer to see if that made a difference, which it did not. When I switched to trying it with 2D depth textures, for some reason I simply changed that dummy texture to 2D rather than remove it entirely. So when I rendered to the depth texture, I got a black image with a single gray pixel in the corner. I assumed this meant the whole texture was just "garbage", since I'd gotten similar garbage results before, but it actually turned out that the depth texture was being cleared - but only the small 1x1 section that overlapped the dummy color texture! I wouldn't have thought this would be the case, since I did in fact set the viewport to the size of the depth texture. So it turns out (I guess) that the region written to the depth buffer is determined not only by the viewport size, but also by the color attachment sizes. Anyway, now that 2D depth textures are working, I'll see what cube map depth textures do...

     EDIT: Switching from the 2D depth texture to a cube depth texture now appears to screw up future FBO depth buffer operations: the main depth renderbuffer becomes completely black (depth = 0), causing nothing to draw. Again, all I am doing is binding a depth cube map to the depth attachment of the FBO and then calling glClear( GL_DEPTH_BUFFER_BIT ) with a clear depth of 0.5f, and this seems to mess up future depth buffer operations. However, the cubemap is in fact cleared to 0.5f! (At least it is on the very first frame - I think after that point depth operations are all messed up, so it ends up as pure black too.) I tried a few other things as well:
     - Binding a single face of the cube map to the depth attachment using glFramebufferTexture2D(). This DOES work - it clears the single face and doesn't mess up future framebuffer operations.
     - Binding a cube map texture (of format RGBA) to a color attachment and clearing it. This DOES work - it clears the color cube map and doesn't mess up future framebuffer operations.
     - Binding a 2D texture array to the depth attachment using glFramebufferTexture(). This DOES NOT work - the same behavior as with cube maps is exhibited: on the first frame the whole texture is properly cleared, but after that point all depth operations on the framebuffer get messed up.
     - Binding a single layer of a 2D texture array to the depth attachment using glFramebufferTextureLayer(). This DOES work - it clears the single layer and doesn't mess up future framebuffer operations.

     So from these results, the problem seems to be only with binding layered depth textures to the depth attachment of a framebuffer. Specifically, when I do this and then perform a write operation, that single operation works, but all future write operations on any depth attachment have no effect. Has anyone experienced behavior like this? It seems like a driver bug to me. I'd really like to get single-pass shadow mapping working with the geometry shader, but it doesn't seem like that is going to work anymore. (The layered binding I'm describing is sketched below.)
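     For reference, this is roughly the setup that triggers it for me (sizes simplified, FBO creation omitted):

[source lang="cpp"]
// A depth cube map bound as a layered attachment with glFramebufferTexture(),
// then cleared. The clear works the first time, but afterwards depth writes
// to any attachment on this FBO stop having an effect (on my setup).
GLuint depthCube;
glGenTextures(1, &depthCube);
glBindTexture(GL_TEXTURE_CUBE_MAP, depthCube);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
for (int face = 0; face < 6; ++face) {
    glTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_X + face, 0, GL_DEPTH_COMPONENT,
                 32, 32, 0, GL_DEPTH_COMPONENT, GL_FLOAT, NULL);
}

// Attach all six faces at once (layered attachment).
glFramebufferTexture(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, depthCube, 0);

glClearDepth(0.5f);
glClear(GL_DEPTH_BUFFER_BIT);
[/source]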
  14. EDIT: See the 4th post (http://www.gamedev.net/topic/633083-depth-renderbuffer-works-but-depth-texture-does-not/page__p__4992697#entry4992697) for an update.

     I'm attempting to set up a depth cube map for omnidirectional shadow mapping, but I've run into issues - specifically, I can't write to depth textures. However, using a renderbuffer in the same place works (I know this because I am currently using a depth renderbuffer in my FBO, and when I change it to a depth texture, nothing is ever written). This includes both writing via the fragment shader AND a simple glClear() call - so the problem is also not that I'm just drawing my geometry wrong. Additionally, I have made sure to call glDepthMask( GL_TRUE ). Here is a short section of code which exhibits the issue for me (FBO creation code left out, but I've used that extensively and it works):

[source lang="cpp"]
GLuint texID;

// generate depth texture
glGenTextures(1, &texID);
glBindTexture(GL_TEXTURE_2D, texID);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT, 32, 32, 0, GL_DEPTH_COMPONENT, GL_FLOAT, NULL);

// attach it to the framebuffer
glFramebufferTexture(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, texID, 0);

// check status - returns GL_FRAMEBUFFER_COMPLETE
Log::info() << glCheckFramebufferStatus(GL_FRAMEBUFFER);

// set the clear depth to 0.5 - should produce gray
glClearDepth(0.5f);

// clear the depth buffer - this does not work!!
glClear(GL_DEPTH_BUFFER_BIT);

// bind the depth texture so that glIntercept will write it to disk
glBindTexture(GL_TEXTURE_2D, texID);
[/source]

     However, instead of producing a texture filled with 0.5 depth (which would appear gray), it's filled with mostly black and a bit of garbage scattered around the edges. I've looked at tutorials and I can't seem to find anything I'm doing differently. The example above is pretty much as simple as I could get it. I've also tried different versions of glFramebufferTexture, such as glFramebufferTexture2D, but that didn't make a difference. My card is an ATI Radeon HD 5870 and I have Catalyst 12.8 (ver. 2012.0806.1213.19931).