About bzroom

  • Rank

Recent Profile Visitors

The recent visitors block is disabled and is not being shown to other users.

  1. Maybe there are duplicate entries in the container, and you want to correlate them. Programming is a mysterious world, with many mysterious tasks.
  2. If you need very short-lived uniqueness, then the memory address will work fine. Say you're sorting a handful of pointers: you can simply use the address. One problem is that if these are allocated non-contiguously (each pointer is heap allocated), the sort result will not be deterministic. Your tools may sort your AI nodes in one order and the build process in another, so when you go to debug node 25 in your tool, it's not the same node 25 that the build process created. A problem with incrementing IDs is that they wrap. For small tests it may appear that everything works fine, but after soaking your game for a couple of days you will very likely wrap your IDs, and then you need to deal with "well, where do I get the next ID from?" You'd need to return freed IDs to a list. So if you need short-lived, automatically recycling IDs that simply guarantee uniqueness (locally) and are not deterministic, the memory address is perfectly applicable. In most other cases (with multiple machines, or multiple runs, where the IDs must be the same) the memory address will fail miserably.
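     The "return freed IDs to a list" approach above can be sketched in a few lines. This is a minimal, illustrative allocator (the class and member names are my own, not from any particular engine):

     ```cpp
     #include <cstdint>
     #include <vector>

     // Minimal recycling ID allocator: hands out incrementing IDs and
     // reuses freed ones before growing the counter, so a long-soaking
     // session does not wrap the ID space.
     class IdAllocator {
     public:
         uint32_t Acquire() {
             if (!m_free.empty()) {
                 uint32_t id = m_free.back();  // reuse a freed ID first
                 m_free.pop_back();
                 return id;
             }
             return m_next++;                  // otherwise mint a fresh one
         }
         void Release(uint32_t id) { m_free.push_back(id); }
     private:
         uint32_t m_next = 0;
         std::vector<uint32_t> m_free;         // freed IDs awaiting reuse
     };
     ```

     Note these IDs are only locally unique, like the memory-address trick; they are still not deterministic across tools and build processes.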
  3. You can start by toying with some simple kinematic chains. For example, you might start with a simple barrel attached to a tank. You'll need a point between the two shapes to use as the reference point for articulation. Have the tank oriented to the terrain, and reorient the barrel so that it points at the terrain. However, the learning curve is steep. Once you reach this point, you are practically ready for full-on skinning; the leap is not far from there. You can probably find many references on the .3ds format for hierarchical bone transformations. Personally, I learned with the Milkshape 3D tutorial from a while back. I have since implemented my own exporter and dump the scene information to my own format.
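     The tank/barrel parent-child relationship boils down to composing the child's local transform with the parent's world transform. A minimal 2D sketch (names illustrative, rigid transforms only, no scale):

     ```cpp
     #include <cmath>

     // A 2D rigid transform (rotation + translation): enough to show how
     // a child bone (the barrel) inherits its parent's (the hull's) frame.
     struct Transform2D {
         float angle;   // radians
         float x, y;    // translation
     };

     // world = parent * local: rotate the local offset into the parent's
     // frame, then add the parent's translation; angles simply accumulate.
     Transform2D Combine(const Transform2D& parent, const Transform2D& local) {
         float c = std::cos(parent.angle), s = std::sin(parent.angle);
         Transform2D w;
         w.angle = parent.angle + local.angle;
         w.x = parent.x + c * local.x - s * local.y;
         w.y = parent.y + s * local.x + c * local.y;
         return w;
     }
     ```

     A full skinning pipeline repeats exactly this composition down the bone hierarchy, just with 3D matrices or quaternion+translation pairs.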
  4. You may want to go so far as to mask the skybox: you can use one of the channels of your G-buffer, a stencil buffer, or something else to indicate which pixels of the screen should be eligible for bloom. Blooming the sky is an intuitive way to give the effect a homogeneous look, but your artists will likely want more control over what the player sees, even in the extremely bright sky. We haven't added masking yet, but we want to; we want to mask other post effects too.
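     The eligibility idea can be sketched as a per-pixel predicate. In a real renderer this lives in the bloom-extract shader; this CPU-side C++ version is purely illustrative, and the function and parameter names are my own:

     ```cpp
     #include <algorithm>

     // Masked bloom threshold: a pixel contributes to the bloom buffer
     // only if its mask flag (e.g. a spare G-buffer channel or stencil
     // bit) marks it as eligible. Skybox pixels would carry mask = false.
     float BloomContribution(float luminance, float threshold, bool maskEligible) {
         if (!maskEligible)
             return 0.0f;                        // masked out entirely
         return std::max(0.0f, luminance - threshold);
     }
     ```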
  5. Silly author, Y u no explain things? The reason the dot product is used instead of the vector magnitude is that we are looking for the magnitude _in the forward direction_, which means the result is signed. You could have a velocity _not_ in the forward direction, for which the dot product result would be negative. In another case, you may have some gigantic relative velocity, magnitude 1000, but if the vector is perpendicular to the forward direction, the dot product (and the forward velocity magnitude) will be zero. Sorry, this article is extremely dated. Good to see it's still got some legs though! The correct projection is:

     Vec Project( Vec a, Vec onto )
     {
         Vec norm = onto.Normalized( );
         return norm * Dot( a, norm ); // dot product against normalized "onto" vector
     }

     You likely _don't_ want to use the magnitude of this vector; you likely just want the dot product result, which is signed (positive if the vectors originally pointed in the same direction, negative if not, zero if perpendicular).
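     The signed-speed point above can be made concrete. A self-contained sketch (the `Vec` struct and function names here are illustrative, not the article's own types):

     ```cpp
     #include <cmath>

     struct Vec { float x, y, z; };

     float Dot(const Vec& a, const Vec& b) {
         return a.x * b.x + a.y * b.y + a.z * b.z;
     }

     // Signed speed along a (not necessarily unit-length) forward axis:
     // positive moving forward, negative moving backward, zero when the
     // velocity is perpendicular to "forward", however large it is.
     float ForwardSpeed(const Vec& velocity, const Vec& forward) {
         float len = std::sqrt(Dot(forward, forward));
         return Dot(velocity, forward) / len;  // dot against the normalized axis
     }
     ```

     The magnitude-1000 perpendicular case from the post yields exactly zero here, which a plain vector magnitude would not.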
  6. For the main rendering, not considering double-fast-Z: the reason would be to write out a modified depth, such as linear depth. If you just wrote the new depth to the HLSL depth output, it would disable the hierarchical-Z optimization, which is critical. But writing a whole render target just for depth seems out of the question since we already have the depth buffer. I'm most definitely reconstructing position from the depth buffer; however, I would like more linear precision, because it's pretty bad right now. The double-fast part is for both shadow maps (which in some cases are orthographic) and for the early-Z pre-pass used to fill the hierarchical Z buffer.
  7. On the 360, I'm trying to accomplish two things at once. Double-fast Z render: disable pixel shader and color writes; vertex shader -> depth buffer only. Output linear depth: used for reconstructing world position and depth in later stages of the pipeline. I've found all sorts of material on linear depth, but lots of it is contradictory. You've got this, which seems too good to be true and has been denounced in other threads: http://www.mvps.org/directx/articles/linear_z/linearz.htm This, which requires a pixel shader and a render target, or a fragment depth change: http://mynameismjp.wordpress.com/2009/03/10/reconstructing-position-from-depth/ What's the best way to put it all together? Is it possible to output "double fast" linear Z?
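     For reference, the non-linear hardware depth can always be linearized after the fact, which is the basis of the reconstruction approach linked above. Assuming a standard D3D-style perspective projection (d in [0,1]), inverting d = far/(far-near) - near*far/((far-near)*zEye) gives:

     ```cpp
     // Converts a [0,1] post-projection depth value back to linear
     // eye-space depth, assuming a D3D-style perspective projection.
     // d = 0 maps to the near plane, d = 1 to the far plane.
     float LinearEyeDepth(float d, float nearZ, float farZ) {
         return (nearZ * farZ) / (farZ - d * (farZ - nearZ));
     }
     ```

     This keeps the hierarchical-Z-friendly hardware depth as-is and pays only a few ALU ops at reconstruction time, rather than writing linear depth from the shader.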
  8. Warning, loud noises. Plus tire physics: #t=183s
  9. We use GJK for collision. We typically search for the closest, or a sufficiently close, solution. In your case we would be ray casting against the convex shape (the ellipsoid), the ray being the center of the laser. We would solve for the closest solution: a point on the laser ray that is as close to the ellipsoid as possible. If that point is not more than the laser's radius away from the ellipsoid, it is considered a hit. This is basically the method: http://www.continuousphysics.com/ftp/pub/test/physics/papers/jgt04raycast.pdf
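     The "closest point on the ray vs. shape radius" test can be illustrated with a sphere standing in for the ellipsoid (the sphere case has a closed form; GJK generalizes the same distance query to any convex shape). Names here are illustrative:

     ```cpp
     #include <algorithm>
     #include <cmath>

     struct Vec3 { float x, y, z; };

     static float Dot3(const Vec3& a, const Vec3& b) {
         return a.x * b.x + a.y * b.y + a.z * b.z;
     }

     // Distance from the closest point on a ray (origin + t*dir, t >= 0,
     // dir unit length) to a sphere's surface. A laser of radius r hits
     // the sphere exactly when this distance is <= r.
     float RayToSphereDistance(Vec3 origin, Vec3 dir, Vec3 center, float radius) {
         Vec3 toCenter = { center.x - origin.x, center.y - origin.y, center.z - origin.z };
         float t = std::max(0.0f, Dot3(toCenter, dir));  // clamp: no hits behind the ray
         Vec3 closest = { origin.x + t * dir.x, origin.y + t * dir.y, origin.z + t * dir.z };
         Vec3 d = { center.x - closest.x, center.y - closest.y, center.z - closest.z };
         return std::sqrt(Dot3(d, d)) - radius;          // negative means the ray pierces
     }
     ```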
  10. bzroom

    Light Index Buffer

    I've known species for a while. I told him I was working on a deferred shading renderer and he said: "even I can write a deferred shader." Not to say that you are writing a conventional deferred shader, as you explain. My only two cents on the matter is to "try it." Grab a copy of RenderMonkey and plumb out your design. I'm not sure how the restriction of 4 got introduced; I think it was probably during the ambiguous moment where you make light contributions indexable per pixel, up to 256 indexes, when really you are limited by your RGBA color-packing capabilities in your render targets, which is probably 4. All of this complication definitely seems like more of a burden when you start to run asset conversion tools on your production content and realize you need hundreds of shader variations to make PIX happy, and so on. Simplicity is definitely very favorable; I suggest a very traditional deferred shader. A specific annoyance is that we have a terrain shader which lets you index up to 4 materials from a palette and blend them into a result. It has terrible artifacts along the edges where pixels use slightly (or wildly) different material blend orders on the seam.
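    The "limited to 4 by RGBA packing" point above is just bit packing: an RGBA8 render target holds four 8-bit light indices per pixel, hence up to 256 distinct lights but only 4 per pixel. A sketch (function names are my own):

    ```cpp
    #include <cstdint>

    // Packs four 8-bit light indices into one 32-bit value, mirroring how
    // a light-index buffer stores up to four indices per pixel in an
    // RGBA8 render target.
    uint32_t PackLightIndices(uint8_t a, uint8_t b, uint8_t c, uint8_t d) {
        return uint32_t(a) | (uint32_t(b) << 8) | (uint32_t(c) << 16) | (uint32_t(d) << 24);
    }

    // slot is 0..3, one per channel.
    uint8_t UnpackLightIndex(uint32_t packed, int slot) {
        return uint8_t((packed >> (slot * 8)) & 0xFF);
    }
    ```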
  11. I just wanted to answer those questions related to my post:
     * Stored linearly in that the data container's traversal time is linear.
     * They render polygons to the depth buffer; then for the test (here's where I'm speculating) they compute the AABB in depth-buffer space, in the sense that they compute a depth-texture x, y, w, h, zmin, zmax and just run over all the pixels as fast as possible searching for a pass. This likely uses a hierarchical optimization.
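     The speculated "run over all the pixels searching for a pass" test looks roughly like this brute-force scan (no hierarchy; names, and the convention that larger stored depth means farther away, are assumptions):

     ```cpp
     #include <vector>

     // Software occlusion query: scan the screen-space rectangle of an
     // object's AABB and report it visible if any depth-buffer texel lies
     // farther away than the AABB's nearest depth (zmin). If every texel
     // is nearer, the box is fully occluded.
     bool IsVisible(const std::vector<float>& depth, int width,
                    int x, int y, int w, int h, float zmin) {
         for (int j = y; j < y + h; ++j)
             for (int i = x; i < x + w; ++i)
                 if (depth[j * width + i] > zmin)
                     return true;   // an opening behind the box's front face
         return false;              // fully occluded
     }
     ```

     A hierarchical version would test coarse min/max depth tiles first and only descend into tiles that straddle zmin.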
  12. I've programmed robots for many years, and there is inevitably a case where gimbal lock will be an issue, no matter how much you prepare for it. How are quaternions better than matrices? Both store a single absolute orientation relative to their parent. Also, we have a custom ragdoll and it hardly uses TRS. The physics doesn't spit out TRS, so that seems like it would require extra work. My bad on the 1/T thing. I was trying to choose a symbol that would be uniform across the board, so that the matrix proof line would look good. They must be applied in reverse order, correct? Because a rotate 1 plus translate (2,0), inverted, is not rotate -1 plus translate (-2,0).
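     That reverse-order point can be checked numerically. A minimal 2D sketch (names illustrative) showing that negating the pieces of a rotate-then-translate transform is not its inverse unless they are also applied in the reverse order:

     ```cpp
     #include <cmath>

     // A "rotate then translate" transform: q = R(angle)*p + t.
     struct Xform { float angle, tx, ty; };

     void Apply(const Xform& m, float x, float y, float& ox, float& oy) {
         float c = std::cos(m.angle), s = std::sin(m.angle);
         ox = c * x - s * y + m.tx;
         oy = s * x + c * y + m.ty;
     }

     // The true inverse applies the negated pieces in the REVERSE order:
     // p = R(-angle) * (q - t). Feeding {-angle, -tx, -ty} back through
     // Apply (negated pieces, same order) does NOT return to the start.
     void ApplyInverse(const Xform& m, float x, float y, float& ox, float& oy) {
         float c = std::cos(-m.angle), s = std::sin(-m.angle);
         float dx = x - m.tx, dy = y - m.ty;   // undo the translation first
         ox = c * dx - s * dy;                 // then undo the rotation
         oy = s * dx + c * dy;
     }
     ```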
  13. bzroom

    Black screen

    Check your Z-buffer settings. I don't see any depth clear or Z-test direction specified here; it appears that all your fragments are being Z-rejected. The vertex shader and draw-call code obviously work correctly.
  14. I wasn't sure what SQT meant at first.. but now I get it: Scale, Quaternion, Translation? We normally call it PRS, for position, rotation, scale (which makes no suggestion of the storage types). Anyway, I would not store the bone data I mentioned in PRS format. I would only store the actual animation keyframe data in PRS, since it is much easier to interpolate than a fully composed matrix. It is completely impossible to avoid gimbal lock. Matrices are likely your best defense against gimbal lock for storing absolute transforms; incremental transformations and affine transformations are where it will cause problems. Object space = model space, yes. Also, correct on the parentRel thing. The parent access needs to be checked so that the first bone, whose parent is -1, does not access bad memory. But your pseudocode is correct. The inverse of PRS, or TQS, would be 1/S * 1/R * 1/P, or 1/S * 1/Q * 1/T (i.e. inverted affine transformations in the opposite order). Storing a 1/S * 1/R * 1/P back into a PRS likely requires composing a full matrix and decomposing it again. Proof: P * R * S * 1/S * 1/R * 1/P = identity. To answer your initial question about which part of PRS would store the parent-relative offset: in the parent-relative transform, that'd be the P part.

     struct PRS
     {
         Vec3 P;
         Quat R;
         Vec3 S;

         PRS( const Matrix& m )
         {
             P = m.Translation( );
             R = m.Rotation( );
             S = m.Scale( );
         }

         Matrix ToMatrix( ) const
         {
             return Matrix( P ) * Matrix( R ) * Matrix( S );
         }

         PRS ToInverse( ) const
         {
             return PRS( Matrix( 1/S ) * Matrix( 1/R ) * Matrix( 1/P ) );
         }
     };

     You can easily lerp this structure to determine the animated bone delta, then use the ToMatrix method in your final bone palette procedure.
  15. - Frustum culling: I believe Frostbite still uses a hierarchy for their bounding spheres. It is stored linearly though, by depth. It lends itself very well to testing higher-resolution primitives at a certain depth, though I'm sure they're limiting it to 2 or 3 levels, so it's pretty much all or nothing.
     - One thing that requires special attention is figuring out the best solution for merging rendering and collision: on Toy Soldiers 2, all collision operated on the renderable geometry. For that setup it was perfect. I've since implemented a generic physics solution which can operate on the same data, or off some other lower-resolution data.
     - "Renders bounding boxes to a software buffer and does low-res depth look-ups to test whether an object is visible": I would say they are not rendering the bounding volumes, but rather using an AABB (in depth-buffer space) to check the depth buffer for occlusion. There will be tons of false positives; however, the gain from all the useful rejections is supposed to outweigh that.
     - Collision: We used a quadtree, with KD-trees at each draw-call level. All of our debris simply raycasted into the world. From what I could tell, Battlefield and all of the popular FPS shooters have very negligible physics; it's kind of a shame. Quadtrees are terribly slow. Our new system uses sweep-and-prune and frame-to-frame GJK to hopefully give us much more accurate collision for the same cost.
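     The all-or-nothing sphere test that such a hierarchy runs per node is a plain sphere-vs-frustum-planes check. A minimal sketch (plane normals assumed to point into the frustum; names illustrative):

     ```cpp
     // Plane in the form n.p + d = 0, with the normal pointing into
     // the frustum interior.
     struct Plane { float nx, ny, nz, d; };

     // A sphere is culled when it lies entirely outside any single plane;
     // otherwise it is conservatively treated as visible (spheres that
     // straddle a corner can slip through as false positives).
     bool SphereInFrustum(const Plane* planes, int count,
                          float cx, float cy, float cz, float radius) {
         for (int i = 0; i < count; ++i) {
             float dist = planes[i].nx * cx + planes[i].ny * cy
                        + planes[i].nz * cz + planes[i].d;
             if (dist < -radius)
                 return false;   // fully outside this plane: cull
         }
         return true;
     }
     ```

     Storing the nodes linearly by depth, as described above, just means this loop runs over a flat array instead of chasing child pointers.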