
Computing normals of tessellated heightmap on the fly.
Zorinthrox replied to Syerjchep's topic in Graphics and GPU Programming
Fair. It's probably silly to worry about edge cases like spikes, which no height map terrain should have anyway. 
Computing normals of tessellated heightmap on the fly.
Zorinthrox replied to Syerjchep's topic in Graphics and GPU Programming
Is the second picture a "spike" in the terrain height map? Like the surrounding terrain is all at ~0.2 and there is one pixel at 1.0? If that is the case, then I'm going to agree with dpsdam450 that determining the local behavior of the surface without directly using the height sample at uvCoords might be the culprit.

Note: I'm assuming a right-handed coordinate system where Z points up, X and Y describe a flat plane, and the center of a pixel in the height map corresponds to a vertex in the terrain mesh. If the tangent of the surface lies in the same general direction as X and the bitangent in the same general direction as Y, and d is the distance away from uvCoord you sample the height map, then the tangent should be something like:

normalize( vec3( d, 0, heightAtUV - texture(uvCoord - vec2(d,0)) ) + vec3( d, 0, texture(uvCoord + vec2(d,0)) - heightAtUV ) )

And the bitangent:

normalize( vec3( 0, d, heightAtUV - texture(uvCoord - vec2(0,d)) ) + vec3( 0, d, texture(uvCoord + vec2(0,d)) - heightAtUV ) )

Note that I omitted the reference to the texture being sampled. Then your normal is normalize(cross(tangent, bitangent)). That should smooth out any big spikes and correctly generate a normal that points straight up at the pinnacle of a spike in the texture. You can adjust the tightness of the shading by manipulating d, which you've wisely incorporated into the parameters of getNormal(). If you just sample the points up and to the right of the current point, you'll get other inconsistencies, like the tip of a spike pointing out toward X/Y instead of straight up. That won't be too much of a problem if you've got mostly rolling terrain or a very high triangle density, but you never know when you'll need a cliff or a sheer face.

ETA: The method I've described will probably result in spikes not having much of a "dark side". Artifacts around such discontinuities in discrete data are unavoidable because of the Nyquist-Shannon sampling theorem: you need a sampling density slightly greater than twice the height map's highest frequency to avoid them. For that reason, spikes in the terrain should be limited to having a pinnacle no smaller than 2x2 vertices, which is probably what a level designer would do anyway. You might get some shadow "bleeding" over sharp edges, too, but it will at least be smooth. You might avoid this by turning the above equations inside out and normalizing the vectors before adding them, then normalizing again x_x.
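To make the scheme concrete, here's a minimal CPU-side sketch in plain Python (get_normal and the 5x5 spike grid are made up for illustration): it sums the backward and forward difference vectors on each axis, exactly as in the formulas above, and yields a straight-up normal at the pinnacle of a lone spike.

```python
import math

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def get_normal(height, x, y, d=1):
    """Z-up normal at interior grid point (x, y): sum the backward and
    forward difference vectors on each axis, then cross the results."""
    h = height[y][x]
    tangent = normalize((2 * d, 0.0,
                         (h - height[y][x - d]) + (height[y][x + d] - h)))
    bitangent = normalize((0.0, 2 * d,
                           (h - height[y - d][x]) + (height[y + d][x] - h)))
    return normalize(cross(tangent, bitangent))

# A lone one-pixel spike: the backward and forward slopes cancel, so the
# normal at the pinnacle points straight up instead of leaning sideways.
spike = [[0.2] * 5 for _ in range(5)]
spike[2][2] = 1.0
print(get_normal(spike, 2, 2))  # (0.0, 0.0, 1.0)
```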
Question about calculating tangent space for per fragment lighting (normal mapping)
Zorinthrox replied to Hashbrown's topic in Graphics and GPU Programming
It's been a while since I implemented this, but that should work so long as you are calculating the tangent basis correctly. However, since the tangent basis is being interpolated over the face for each fragment, you should really normalize it. I think what I did in my thesis project was to pass the U and V unit vectors of the tangent basis to the fragment shader, normalize them, cross them to get the third axis, then construct the basis matrix from those three unit vectors. That saves a normalization and ensures the basis is very close to orthogonal without having to re-orthogonalize the whole thing. You get better results, especially with normal mapping. Also normalize the normal from the normal map; it's easy to think it won't make a difference, but depending on resolution it really can. 24-bit normal maps are pretty good at preserving the direction but not the unit length of the normal (they can be better, but I don't think it is standard practice to do the crazy Crysis 3 optimal normal mapping thing).
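A sketch of the idea, with plain Python standing in for shader code (the interpolated tangent/bitangent values are hypothetical): normalize the two interpolated vectors, then derive the third axis from their cross product.

```python
import math

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def tbn_from_interpolated(t, b):
    """Rebuild a TBN basis from interpolated tangent and bitangent:
    normalize both, then derive N = T x B. N comes out perpendicular to
    T and B, and close to unit length as long as T and B stay nearly
    orthogonal, so no third normalization is needed."""
    t, b = normalize(t), normalize(b)
    return t, b, cross(t, b)

# Hypothetical slightly denormalized values after interpolation:
t, b, n = tbn_from_interpolated((0.98, 0.02, 0.0), (0.0, 0.97, 0.03))
```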
Whether or not ifs are costly depends on how the architecture handles branches. As Fulcrum.013 has pointed out, the highly parallelized nature of a GPU precludes branch prediction. The GPU is way too tuned and way too busy to really care. The top answer in that link says that with newer hardware it's not something to worry about: the GPU is running so many threads that if a branch stalls one of them, the core just goes to some other thread. Not to mention that shader code is rarely 'branchy'. Long branches might be problematic, though. It would be better to organize your rendering ahead of time to separate it into discrete operations if you are worried or the profiler says so.

what does glLinkProgram and glBindAttribLocation do ?
Zorinthrox replied to nOoNEE's topic in Graphics and GPU Programming
Off the top of my head, glLinkProgram 'stitches' together the compiled shader stages currently attached to the specified program object into a shading program that can be used. Shader compilation under OpenGL works more or less the same as C compilation: each file is preprocessed, then compiled into machine code, then linked into an executable, albeit one that runs on the GPU. The same requirements apply. Attaching a new shader stage means you have to compile that new stage and link the program again; you would not have to recompile any stages that haven't changed if they are already attached and compiled for the given program object. Names that are used across shader stages have to match. Linking can fail for any number of reasons, so check the error state religiously. As for glBindAttribLocation, it associates a named vertex attribute in your vertex shader with a specific attribute index, and it has to be called before linking; if you never call it, the linker assigns the locations itself (you can query them afterwards with glGetAttribLocation).


Convert coordinate to zero Z ( 2D )
Zorinthrox replied to the incredible smoker's topic in Graphics and GPU Programming
Essence of Linear Algebra 
Convert coordinate to zero Z ( 2D )
Zorinthrox replied to the incredible smoker's topic in Graphics and GPU Programming
So long as you can translate the math to code, Wikipedia is a good place to start when it comes to said math. Most basic graphic topics are well trodden and covered there. 
Convert coordinate to zero Z ( 2D )
Zorinthrox replied to the incredible smoker's topic in Graphics and GPU Programming
Someone else will have to offer API-specific details; I've only ever really used OpenGL and it has been a long time since I implemented an actual rendering pipeline. I'm more of a general software developer. Basically, you will need to get your world, object, and, let's say, "look" matrices in order. The look matrix just gets the camera looking where you want it; it doesn't do any projection. Then formulate your orthographic projection, taking into account any handedness issues, the size of the screen, and how big you want things to be. Then multiply things as usual, using the orthographic matrix as your projection matrix (instead of a perspective one). I don't know the details of your code or how you've set things up, and I don't know anything about DirectX, so really it is up to you to google your way through this unless someone else offers any DirectX details.
Convert coordinate to zero Z ( 2D )
Zorinthrox replied to the incredible smoker's topic in Graphics and GPU Programming
"Orthographic" comes from the Greek for "proper drawing" (kinda). Think "faithful reproduction" or "scale drawing"; it works like a blueprint or schematic. An orthographic projection won't change the shape of objects; it might scale them, it might translate them, but it does not distort them. Squares will still be squares, circles will still be circles, straight lines are still straight, and parallel lines do not intersect at infinity.* A camera using an orthographic projection has no perspective; objects closer to the camera will not appear larger than those farther away. Basically, you scale everything such that the image you get is undistorted.

In your case, a practical example would be something like:

[ 1/10    0      0     0 ]
[   0   1/10     0     0 ]
[   0     0   1/1000   0 ]
[   0     0      0     1 ]

which would map the box with corners at (-10,-10,-1000) and (10,10,1000) to the cube with corners at (-1,-1,-1) and (1,1,1). Graphics (usually) requires depth information to be saved, so here the Z axis is not zeroed out and instead crushes a lot more Z into the final volume than it does X and Y. You still need to deal with your world, object, and camera matrices, though. Note that this is an example based on a number of assumptions, like the handedness of your coordinates, the aspect ratio of the screen, or the bounds of the canonical volume. I've chosen a simple transform to make it clear what we are talking about; otherwise it'd be weird fractions. Also, I haven't implemented this in quite a while.

* Unlike in perspective, where all lines parallel to the direction of the camera intersect at infinity. Rotations also have these properties, but rotations are not usually considered "projections" since they aren't intended to transform between numbers of dimensions.
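To sanity-check that scale-only matrix, a few lines of Python (the row-major apply helper is just for illustration):

```python
def apply(m, p):
    """Multiply a 4x4 row-major matrix by the point (x, y, z, 1) and
    return the transformed (x, y, z)."""
    v = (p[0], p[1], p[2], 1.0)
    return tuple(sum(m[r][c] * v[c] for c in range(4)) for r in range(3))

# The scale-only orthographic matrix from the post:
# 1/10 on X and Y, 1/1000 on Z.
ortho = [[1 / 10, 0, 0, 0],
         [0, 1 / 10, 0, 0],
         [0, 0, 1 / 1000, 0],
         [0, 0, 0, 1]]

print(apply(ortho, (10, 10, 1000)))     # approximately (1.0, 1.0, 1.0)
print(apply(ortho, (-10, -10, -1000)))  # approximately (-1.0, -1.0, -1.0)
```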
Size of C++ polymorphic objects.
Zorinthrox replied to Gnollrunner's topic in General and Gameplay Programming
If I'm not mistaken, the default allocators in C++ implementations usually store the size of the memory that has been allocated on the heap in some sort of enclosing structure that maps onto the chunk of memory allocated, within which the allocated object is constructed. Such a setup stores the chunk size 'before' the object itself, so when operator delete is called all it needs to do is look right before the passed-in address to find how much memory to free. So even the standard system needs to spend memory on knowing how much memory it allocated. That sort of thing is necessary when you're writing a generic allocator, where you can't depend on anything about the object itself (for instance, knowing its size at runtime). Since your system only deals in subclasses of a particular type, I'd say the safer and less blow-your-whole-leg-off thing to do is exactly what you have now. ETA: Looks like a couple of people beat me to the punch while I was off digging up that link
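Here's a toy Python sketch of the idea (everything here, including ToyHeap and the 8-byte header, is made up for illustration): a bump allocator that writes each block's size just before the address it hands out, so a 'free' can recover the size from the pointer alone.

```python
import struct

class ToyHeap:
    """Toy bump allocator that stores each block's size in an 8-byte
    header right before the address it returns, the way many generic
    allocators do. Illustration only; it never actually frees."""
    HEADER = 8

    def __init__(self, size=1024):
        self.mem = bytearray(size)
        self.top = 0

    def alloc(self, n):
        # Write the size header, then hand out the address just past it.
        struct.pack_into('<Q', self.mem, self.top, n)
        addr = self.top + self.HEADER
        self.top = addr + n
        return addr

    def size_of(self, addr):
        # 'free' would look right before the pointer to find the size.
        return struct.unpack_from('<Q', self.mem, addr - self.HEADER)[0]

h = ToyHeap()
a = h.alloc(24)
b = h.alloc(100)
print(h.size_of(a), h.size_of(b))  # 24 100
```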
Convert coordinate to zero Z ( 2D )
Zorinthrox replied to the incredible smoker's topic in Graphics and GPU Programming
I am assuming you are using a perspective projection or some other projection that is not orthographic. If that is the case, and what you want is for the screen X,Y coordinates of the projected point to be unchanged when you transform them so that Z=0, then the easiest thing is to project the points and set Z=0 directly. This is what previous answers are assuming you mean, because the other option is... unusual. The other option is that you want to find out where the points are going to be after they are projected, then alter their positions so that when they are projected they end up with the same X,Y coordinates but with Z=0. That would be a waste of computation, and against the general approach of separating logical coordinates from graphical coordinates. I'm not an expert game designer, just a regular programmer, but I'm betting that if your game logic depends on the screen-space coordinates of an asset, and that logic is not related directly to user interaction, then it is likely your design is off. If perspective distortion is an aesthetic requirement, then the game logic shouldn't care or even know about it.* Given that your camera is up at Z=1000 and you want your objects down at Z=0, I bet that if you just use an orthographic camera like Scouting Ninja said, it would all be much easier. * Unless you're doing something very, very clever.
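A tiny Python sketch of the first (easy) option, under toy assumptions (camera at the origin looking down +Z, focal length f, points already in camera space; the helper names are made up): project, then take the screen X,Y with Z = 0.

```python
def project_to_screen(p, f=1.0):
    """Toy perspective projection: screen coordinates are (f*x/z, f*y/z).
    Assumes the camera sits at the origin looking down +Z."""
    x, y, z = p
    return (f * x / z, f * y / z)

def flatten(p, f=1.0):
    """The easy option: project the point, then use its screen X,Y with
    Z set to 0 directly."""
    sx, sy = project_to_screen(p, f)
    return (sx, sy, 0.0)

print(flatten((4.0, 2.0, 2.0)))  # (2.0, 1.0, 0.0)
```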
It might also help to have the rest of the team read the combined Parashift/Stroustrup Super C++ FAQ to get a better handle on C++.

What To Do if a Matrix is not Invertible?
Zorinthrox replied to Zorinthrox's topic in Graphics and GPU Programming
Makes sense. 
What To Do if a Matrix is not Invertible?
Zorinthrox replied to Zorinthrox's topic in Graphics and GPU Programming
Thanks gdunbar, I was thinking it'd be good to back up an assertion with an exception and you sold me pretty solidly. I hadn't considered a workaround. By using an alternate representation, are you exploiting:
- the possibility that the transforms individually are invertible but their composition isn't*
- possible differences in the accuracy of the arithmetic that lead to non-invertibility in one form but not the other
- knowledge of the transformation and how to invert it ahead of time (which could include nudging things just a tad so the math works out)

* That doesn't sound like a mathematically sound assertion, but I figured I'd ask just in case.
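For what it's worth, the "knowing how to invert it ahead of time" case is common in graphics: a rigid transform (rotation plus translation) can be inverted analytically, with no general matrix inversion and no invertibility check. A small Python sketch (the helper names are made up):

```python
def rigid_inverse(r, t):
    """Invert a rotation-plus-translation analytically: the inverse
    rotation is the transpose, and the inverse translation is -R^T t.
    A rigid transform is always invertible, so no determinant check."""
    rt = [[r[c][row] for c in range(3)] for row in range(3)]
    ti = [-sum(rt[row][c] * t[c] for c in range(3)) for row in range(3)]
    return rt, ti

def apply_rt(r, t, p):
    """Apply rotation r then translation t to point p."""
    return tuple(sum(r[row][c] * p[c] for c in range(3)) + t[row]
                 for row in range(3))

# 90-degree rotation about Z plus a translation; the analytic inverse
# takes the transformed point straight back.
r = [[0, -1, 0], [1, 0, 0], [0, 0, 1]]
t = [1, 2, 3]
ri, ti = rigid_inverse(r, t)
q = apply_rt(r, t, (5, 6, 7))
print(apply_rt(ri, ti, q))  # (5, 6, 7)
```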
What To Do if a Matrix is not Invertible?
Zorinthrox replied to Zorinthrox's topic in Graphics and GPU Programming
My goal is games (or just graphics); I'm not doing finite element structural simulation or anything. Not saying I'm good at it, either. Thanks!
