
Trurl

Member

  • Content Count: 42
  • Joined
  • Last visited

Community Reputation: 208 Neutral

About Trurl

  • Rank: Member
  1. I think you're on the right track. I'm guessing the problem is simply in the ordering of your transforms. Try applying the translation before the rotation:

    transform.identity();
    transform.setTranslation(Vector3(-(position.x), 0.0f, -(position.z)));
    transform.rotateY(rotation.y);

    Keep in mind that the order in which the transforms OCCUR is the opposite of the order in which they are applied. However, there's a catch: cameras transform differently than normal scene objects. When transforming your camera, the opposite of the opposite happens, so the end behavior is, non-intuitively, exactly what you expect. :D (See the first sketch after this post list.)
  2. Trurl

    multiple viewport?

    There's another, more straightforward approach. After you've rendered your usual scene to the entire screen, use glViewport and glScissor to select the subset of the screen for the inset view. Then render the inset scene normally. This works with an arbitrary number of subwindows. (A minimal sketch appears after this post list.)
  3. Quote: Original post by polypterus
    Wouldn't this have a problem with flat surfaces? For instance, if you had a salt flat somewhere, wouldn't it always get the same texture coordinate?

    Right, a good point. Use the vertex position rather than the normal. I was thinking of the simplified case of a smooth sphere, where the vertex normal is essentially the same as the vertex position. The basic idea still applies (see the cube map sketch after this post list).
  4. Uniform sphere texturing is impossible. What kind of texture do you have that can tolerate being remapped every frame? Have you considered using a cube map texture, with the surface normal as the texture coordinate? (A sketch appears after this post list.)
  5. I haven't examined that particular implementation, but it might have something to do with the fact that the screen-space position is linear in Z while the depth-buffer value is not. (A small depth-linearization helper is sketched after this post list.)
  6. Trurl

    Another terrain question

    Quote: Original post by RobMaddison
    I adjusted my terrain to use the minimum of a) the distance from the viewer to the nearest corner of the patch and b) the distance from the viewer to the center of the patch. This seems to work okay for distant terrain (i.e. areas with more 'dynamic' terrain appear in higher detail than those without), but when up very close to a patch in the terrain which should be split, it only gets split when I'm either very close to one of the corners or the center of the patch.

    Yeah, I can see that, and after posting I realized that I omitted a significant consideration: the closest point on the patch bounding volume might not fall upon a corner. It might instead lie along an edge or in the middle of a face. (A closest-point-on-box sketch appears after this post list.)

    Quote:
    To check I'm storing my error deltas correctly, once the error delta for a particular node has been calculated, I then get the error metrics for each of its child nodes, take the highest of those four, and add that to the current node's error delta. This means that each node of the terrain patch tree should contain the highest delta between its LOD level and its farthest descendants (i.e. all its leaf nodes). Is this correct?

    The maximum error of a given patch can be computed in total isolation, given only the base height map as reference. I can see that computing parent error as some function of child errors might give a performance optimization (and there might even be something in the literature on this, IDK), but it kinda scares me. I don't think it's necessary to enforce the invariant that error increases monotonically up the tree, since it should happen naturally.

    Quote:
    So do I need to adjust my screen-space error calculation to cater for when I'm getting closer to the leaf nodes? I did try the approach suggested by Trurl, which is to check if the viewer position is within the bounds of the node, but that tends to make LOD detail appear 'drastically' when you enter a particular node. By this, I mean you could be looking at a fairly large patch (with a low LOD) and, when you cross into its boundary, it splits along with the child quadrant at the point you entered, and so on. You end up seeing hardly any detail, then almost all detail, at the corner where you entered the big patch. This won't work with my intention to include geomorphing. The correct solution (whatever it may be) will work at all LODs regardless of depth.

    Are you enforcing the constraint that adjacent patches be within one LOD of one another?
  7. The glGet is returning whatever is currently at the top of the modelview matrix stack, so the matrix that it returns depends upon when you call it. Let's assume that you begin rendering by calling glLoadIdentity, then you apply your view transformation (possibly using gluLookAt), and finally you apply your model transformation (using glRotate and glTranslate). If you call glGet at the beginning, you'll get the identity matrix. If you call glGet after the view transformation, you'll get just the view matrix (V). If you call glGet at the end, you'll get the model matrix composed with the view matrix (V * M).

    If you really, really don't want to track matrices, then you can call glGet at both the middle and the end, thus separately acquiring V and V * M. Then invert V, giving its inverse (Vi), and compose this inverse with V * M. So Vi * V * M = M. Now if you transform your feelers with M, you'll have them in world space. (A sketch of this appears after this post list.) Let me know if you need a simple 4x4 matrix inverter.
  8. The modelview matrix skips right past world space and takes the local coordinate system all the way to the eye-space coordinate system. Eye space is the coordinate system centered upon and oriented with the viewer, with X toward the right of the screen, Y up toward the top of the screen, and Z pointing out of the screen. To get to world space you need to apply the model matrix, but not the view matrix. Unfortunately, GL does not draw a distinction between these two matrices. So, you'll need to do some matrix handling yourself. You either need to track the car's model matrix and use THAT in your conversion, or track the camera's view matrix and apply the inverse of that after applying the modelview.
  9. I can't answer any of these questions fully, but I can at least comment, having implemented both Perlin and Simplex noise.

    Quote: Original post by petalochilus
    Yesterday I implemented my own Perlin noise generator and it seems to work fine, and now I'm trying to gain some performance by switching to Simplex noise. However, there are some points in both concepts I have not yet been able to grasp. 1) There seems to be a lot of confusion about what Perlin noise actually is; many articles confuse it with fractional Brownian motion. I've seen tutorials where you take just some random numbers, interpolate between them, and add them up at different octaves and call it Perlin noise. So, the question is how this simpler way to generate noise differs from actual Perlin noise (with gradients)? Is it faster or better, or why does everybody seem to use it?

    I agree that there seems to be misinformation on this topic. People seem to associate the tag "Perlin noise" with the concept of summing noise harmonics, rather than with Perlin's gradient-based coherent randomness approach. I wouldn't say such approaches are faster, but they are certainly easier to understand. I'm certain that Perlin would argue that such approaches are inferior, probably on the grounds that they lack isotropy. That is: you can see the shape and scale of the interpolated grid through the noise.

    Quote:
    2) With standard 2D Perlin noise, there are four corners which contribute to the final result, and to get a smooth transition, interpolation is mandatory. However, Simplex noise, being faster and all, does not need this step; there are three gradients from each simplex corner to sum up. I didn't find any good explanation of why this works, so I'm asking here: what do these gradients (and their dot products) actually represent (in the standard case and in Simplex)?

    I don't know! But there ARE gradient interpolations occurring... fifth-order interpolations, in fact. They are similar to the gradient interpolations of standard Perlin noise, but they are obscured in two ways: 1) they happen on a skewed simplex grid, and 2) the gradient vectors themselves are generated by permuting unit vector elements (dictated by the bits of the integer grid coordinates).

    I think it's important to recognize the context in which Simplex noise was created. Yes, it improves upon the original noise algorithm by providing higher-order continuity and better isotropy. However, the intent of its design is ease of hardware implementation. Perlin intended the Simplex algorithm to run in silicon rather than in software. So the example implementation is a dense wall of Java that seems to assemble coherent output from scattered bits as if by magic. It's completely unreadable. It is given as a set of operations ready to be implemented by a minimal set of simple ALU operations. This also applies to some extent to Perlin's implementation of his original noise function. It is heavily optimized, and as such the core of the algorithm is obscured. Reading it, one gets bogged down in permutations of pseudo-random gradient vectors and misses the point of gradient interpolation entirely.

    Quote:
    3) Is there any way to affect the probability distribution of the noise? I seem to get values between -0.7..0.7. Is there any advantage to choosing the gradient vector set differently than (-1, 0, 1) permutations?

    Annoying, eh? It's difficult to analytically predict what the min and max of the function will be. My personal practice is to generate an entire buffer of noise and then normalize it (see the sketch after this post list).
  10. Trurl

    Another terrain question

    Yes. Use the closest corner of the 3D bounding volume of the patch. The goal of LOD selection is to limit the maximum screen-space error. You've precomputed the maximum world-space error of each patch, and it stands to reason that world-space error will have the greatest impact on screen-space error at the point of the patch closest to the viewer. So, if you determine the shortest distance to the patch and project the patch's maximum world-space error at that distance, then you have determined an absolute upper bound on the screen-space error. (A sketch of this projection appears after this post list.)

    Of course, this is complicated if the viewpoint is within the 3D volume of a patch. The "right" thing to do is to compute the shortest distance to the actual triangulated surface defined by that patch, but this is probably too expensive to be worth it. As an alternative, you can assume that you'll always subdivide any patch whose bounding volume contains the viewpoint. The recursion terminates when either 1) the patches get so small that their bounds no longer contain the viewpoint or 2) you hit a leaf.
  11. Oh yeah I totally screwed up the antialiasing on that. Not sure what I was thinking. It's still aliased on one side. We'd need two smoothsteps to make it perfect. You get the idea, at least.
  12. You can accomplish this with screen-space derivatives. Instead of drawing a band of color over a range of contour values, you draw a band of color over a range of pixels near the contour value. Here's an example in GLSL, but it should port easily.

    This vertex shader simply passes along the position in a varying variable:

    varying vec3 k;

    void main()
    {
        k = gl_Vertex.xyz;
        gl_Position = ftransform();
    }

    For reference, here's a GLSL fragment shader that does something similar to your approach. It uses smoothstep to get anti-aliased contour lines:

    varying vec3 k;

    void main()
    {
        vec3 f  = fract (k * 100.0);
        vec3 df = fwidth(k * 100.0);
        vec3 g  = smoothstep(0.05, 0.10, f);

        float c = g.x * g.y * g.z;
        gl_FragColor = vec4(c, c, c, 1.0);
    }

    Here's what this looks like applied to the head of the famous dragon. It has the same problem that you're seeing: lines that are too narrow in some places and too wide in others.

    Now here's a fragment shader that uses screen-space derivatives to accomplish the same task. It uses smoothstep to blend between distances of one and two pixels from the desired contour value:

    varying vec3 k;

    void main()
    {
        vec3 f  = fract (k * 100.0);
        vec3 df = fwidth(k * 100.0);
        vec3 g  = smoothstep(df * 1.0, df * 2.0, f);

        float c = g.x * g.y * g.z;
        gl_FragColor = vec4(c, c, c, 1.0);
    }

    The result is a set of contour lines of consistent width everywhere.
  13. Trurl

    HDR probe lighting

    Quote: Original post by ooooooooOoo
    I won't lie, I have no idea what harmonics are. I'm going to read your links and learn up on it, though.

    Start here. You don't NEED spherical harmonics if all you want is a diffuse irradiance cubemap. If you're just preprocessing, then you can do the convolution as you understand it.
  14. Trurl

    HDR probe lighting

    There are many resources on this topic. The only difficulty is in producing the blurred environment map. The blur kernel is a cosine-weighted hemisphere, which is huge, and a direct convolution would be quadratic in the number of pixels. The standard technique, described by Ramamoorthi and Hanrahan, uses the spherical harmonic transform to perform the convolution in frequency space in linear time. The resulting spherical harmonic coefficients can be used directly (there are only 9 of them) or rendered to a cube map, which generally needs to be only around 32x32 to represent diffuse illumination. (A sketch of the convolution step appears after this post list.)

    You note using the technique with pre-baked ambient occlusion. This is the essence of precomputed radiance transfer, wherein a per-vertex or per-fragment spherical harmonic basis represents an object's ambient occlusion and inter-reflection. It is dotted with the irradiance basis in real time to produce the effect you suggest. A course at SIGGRAPH 2008 describes how this technique was used in Halo 3. (edit: broken html)
  15. I've got parallel-split shadow map code that renders the scene multiple times, so I've also approached the issue of optimizing for multiple shadow maps. For me, the biggest performance win was eliminating material state changes while in depth-only rendering mode. Most surfaces are opaque, which means that their material doesn't matter during shadow map rendering. This means you can merge the geometry of all opaque surfaces into a single large element buffer and render it in one pass using a trivial shader. After that, any alpha-masked materials are rendered one by one. I actually keep two sets of element buffers: one for color mode and another for depth-only mode. The result is a drastic reduction in batch count, program switching, and texture binding. (A sketch of the depth-only pass appears after this post list.)
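
For post 1, a minimal sketch of why the camera ordering feels reversed: a view matrix is the inverse of the camera's world matrix, and inverting a product reverses the order of its factors. The Matrix4 helpers here are assumed, not part of the original poster's transform class.

    // An object placed in the world: rotate about its local origin, then translate.
    Matrix4 objectWorld = Matrix4::translation(position) * Matrix4::rotationY(rotation.y);

    // A camera with the same position and orientation: the view matrix is the
    // inverse of its world matrix, so each step is negated AND the order flips.
    Matrix4 view = Matrix4::rotationY(-rotation.y) * Matrix4::translation(-position);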
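
For post 2, a minimal sketch of the glViewport/glScissor approach, assuming a known window size and placeholder draw functions:

    // Render the main scene full-screen, then an inset view in one corner.
    // winW, winH, drawMainScene(), and drawInsetScene() are assumed placeholders.
    void renderFrame(int winW, int winH)
    {
        glEnable(GL_SCISSOR_TEST);

        // Full-screen pass.
        glViewport(0, 0, winW, winH);
        glScissor (0, 0, winW, winH);
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        drawMainScene();

        // Inset pass: restrict both the viewport and the scissor rectangle so
        // the clear and the draw touch only the sub-window.
        int w = winW / 4, h = winH / 4;
        glViewport(winW - w - 10, winH - h - 10, w, h);
        glScissor (winW - w - 10, winH - h - 10, w, h);
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        drawInsetScene();

        glDisable(GL_SCISSOR_TEST);
    }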
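
For posts 3 and 4, a minimal sketch of feeding the object-space vertex position to a cube map as its lookup direction. Immediate mode and the Vec3, vertices, and cubeTex names are assumed here purely to keep the idea visible:

    glEnable(GL_TEXTURE_CUBE_MAP);
    glBindTexture(GL_TEXTURE_CUBE_MAP, cubeTex);

    glBegin(GL_TRIANGLES);
    for (int i = 0; i < vertexCount; ++i)
    {
        const Vec3 &p = vertices[i];   // position on the sphere or terrain
        // The cube map face and texel are selected from this direction alone,
        // so the mapping is fixed and never needs to be recomputed per frame.
        glTexCoord3f(p.x, p.y, p.z);
        glVertex3f(p.x, p.y, p.z);
    }
    glEnd();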
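
For post 5, the usual way to recover eye-space distance from a stored depth value, assuming a standard perspective projection and the default glDepthRange(0, 1):

    // d is the depth-buffer value in [0, 1]; n and f are the near and far plane distances.
    float linearizeDepth(float d, float n, float f)
    {
        float zNdc = 2.0f * d - 1.0f;                       // window depth -> NDC [-1, 1]
        return (2.0f * n * f) / (f + n - zNdc * (f - n));   // eye-space distance in [n, f]
    }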
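
For post 6, the closest point on a patch's axis-aligned bounding box covers all three cases (corner, edge, or face interior) with a per-axis clamp. Vec3 is an assumed helper type:

    #include <algorithm>

    Vec3 closestPointOnAABB(const Vec3 &p, const Vec3 &boxMin, const Vec3 &boxMax)
    {
        Vec3 q;
        q.x = std::max(boxMin.x, std::min(p.x, boxMax.x));
        q.y = std::max(boxMin.y, std::min(p.y, boxMax.y));
        q.z = std::max(boxMin.z, std::min(p.z, boxMax.z));
        return q;   // if p is inside the box, q == p and the distance is zero
    }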
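
For post 7, a minimal sketch of the two-glGet approach. The Matrix4 type with inverted() and operator*, and the transform functions, are assumed placeholders; OpenGL returns matrices in column-major order:

    Matrix4 V, VM;

    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    applyViewTransform();                      // e.g. gluLookAt(...)
    glGetFloatv(GL_MODELVIEW_MATRIX, V.m);     // V: just the view matrix

    applyModelTransform();                     // glTranslatef / glRotatef ...
    glGetFloatv(GL_MODELVIEW_MATRIX, VM.m);    // V * M

    Matrix4 M = V.inverted() * VM;             // Vi * (V * M) = M
    Vec3 feelerWorld = M.transformPoint(feelerLocal);   // feeler into world space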
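
For post 9, the buffer-normalization practice mentioned at the end, sketched with an assumed std::vector of raw noise samples:

    #include <algorithm>
    #include <cmath>
    #include <vector>

    // Rescale so that the sample with the largest magnitude maps to +/-1.
    void normalizeNoise(std::vector<float> &samples)
    {
        float peak = 0.0f;
        for (float s : samples)
            peak = std::max(peak, std::fabs(s));
        if (peak > 0.0f)
            for (float &s : samples)
                s /= peak;
    }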
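
For post 10, one common way to project a patch's precomputed maximum world-space error through the distance to the closest point on its bounding volume, giving an upper bound in pixels. The parameter names are illustrative:

    #include <cmath>

    float screenSpaceError(float worldError,      // patch's maximum world-space error
                           float distance,        // viewer to closest point on the bounds
                           float viewportHeight,  // in pixels
                           float verticalFov)     // in radians
    {
        // A perspective projection scales a length at this distance by
        // viewportHeight / (2 * distance * tan(fov / 2)).
        float k = viewportHeight / (2.0f * std::tan(verticalFov * 0.5f));
        return worldError * k / distance;   // subdivide while this exceeds the pixel threshold
    }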
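
For post 14, the frequency-space convolution from Ramamoorthi and Hanrahan reduces to a per-band scaling of the nine spherical harmonic coefficients (pi, 2*pi/3, and pi/4 for bands 0, 1, and 2). The coefficient layout below is an assumption:

    // shCoeffs holds the 9 SH coefficients of the environment map per RGB channel,
    // in (l, m) order: (0,0), (1,-1), (1,0), (1,1), (2,-2), (2,-1), (2,0), (2,1), (2,2).
    void convolveWithCosineLobe(float shCoeffs[9][3])
    {
        const float A0 = 3.1415927f;   // pi
        const float A1 = 2.0943951f;   // 2 * pi / 3
        const float A2 = 0.7853982f;   // pi / 4
        const float band[9] = { A0, A1, A1, A1, A2, A2, A2, A2, A2 };

        for (int i = 0; i < 9; ++i)
            for (int c = 0; c < 3; ++c)
                shCoeffs[i][c] *= band[i];   // now irradiance coefficients, evaluated per normal
    }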
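
For post 15, a minimal sketch of the depth-only pass described above: one merged draw for every opaque surface, then per-material draws for the alpha-masked ones. The buffer, program, and Batch names are placeholders:

    void renderDepthOnlyPass()
    {
        glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);   // depth only
        useTrivialDepthProgram();                              // assumed position-only shader

        // Every opaque surface shares one large element buffer, so a single
        // draw call covers them all regardless of material.
        glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, opaqueDepthElements);
        glDrawElements(GL_TRIANGLES, opaqueIndexCount, GL_UNSIGNED_INT, 0);

        // Alpha-masked materials still need their mask textures bound so that
        // cut-out texels can be discarded.
        for (const Batch &b : maskedBatches)
        {
            glBindTexture(GL_TEXTURE_2D, b.maskTexture);
            glDrawElements(GL_TRIANGLES, b.indexCount, GL_UNSIGNED_INT,
                           (const void *) b.byteOffset);
        }

        glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
    }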