About TeaTreeTim

  1. TeaTreeTim

    shadow mapping using directional light

    Directional light doesn't mean you are doing calculations for the 'entire scene', although any pixel processed by the pixel shader does get a lighting calculation. Perhaps you are visualising some kind of ray tracing, or mixing this up with real-time shadow mapping. A directional light just means the lighting is based on a constant direction vector, as opposed to, say, a spot light. The traditional method is to have a normal for the mesh, dot it with the light direction vector, and use that as your light value. This is not actually physically correct and doesn't consider the light being occluded (blocked) by something, but it was the standard technique for real-time rendering until 15 years ago or so. There are better people than me to explain all of the modern techniques, from ambient occlusion to the latest ray tracing methods. If this question was actually about shadow mapping, then yes, your question is valid: whatever the shadow map's frustum sees is processed; imagine rendering the scene twice, once for the camera and once for the light. Google cascaded shadow maps for a start.
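The traditional dot-product method described above can be sketched in a few lines. This is a minimal illustration, not anyone's engine code; `directionalDiffuse` is a made-up name, and both vectors are assumed normalized.

```cpp
#include <algorithm>
#include <cmath>

struct Vec3 { float x, y, z; };

float dot(const Vec3& a, const Vec3& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// Classic Lambertian term for a directional light: the light is a constant
// direction, so every surface needs only its normal and one dot product.
// `lightDir` points from the surface toward the light. Occlusion is
// ignored, exactly as noted in the post.
float directionalDiffuse(const Vec3& normal, const Vec3& lightDir) {
    return std::max(0.0f, dot(normal, lightDir));
}
```

Surfaces facing the light get a value near 1, surfaces facing away are clamped to 0 — which is exactly why this technique casts no shadows from occluders.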
  2. TeaTreeTim

    Why is Eclipse the most popular Java IDE?

    You answered your own question in saying you used it in school.
  3. TeaTreeTim

    Help with 3d collision question

    How big is the lake, and do you keep the whole thing loaded at once? Is the edge represented as a mesh or a texture? Do you need to accurately follow triangles in the mesh, or could you use bilinear control points to define the edge? Do you want to reflect, bounce, or collide-and-slide, or something more complicated like getting some air or crashing? How much do you like maths?
  4. TeaTreeTim

    Missing pixels in mesh seams

    If the triangles have the same vertex positions, are 32-bit floating point, and you aren't doing anything extra other than just rendering, I'd look more at aliasing and multisampling than at the accuracy of the vertex positions.
  5. TeaTreeTim

    Simulate The "No Man's Sky Effect"

    I don't know what they do, but I wrote this over 10 years ago now, so it's a bit dated:
  6. TeaTreeTim

    Interpolation over Mesh

    You're sweating the interpolation algorithm but skimping on how the vertices in question know the colours of their neighbours. Is colour a component of the vertices, or determined by something like texture coordinates? Is this a pre-computed operation or something you want to do at runtime, and if so, how? How do you associate vertices: by triangle neighbour relationships, or by distance apart?
  7. If the light source is constant and directional, and the mesh doesn't deform, why not just render the pre-computed shadow as a texture decal? The target geometry would need uv coordinates aligned to the plane of the light direction. You could generate the uv on the fly as a distance to plane in the shader, or precompute it for the vertices. Technically the below is incorrect because the wall has blocked the shadow below the yellow line, but if you only have one light source it's in shadow anyway. If you have a different light, like a light bulb in the level below, this system won't work. I guess it comes down to how accurate/fussy you will be: PS: If the mesh rotates, say it's a car, I'd have 4 or 8 shadow decals and lerp between them.
  8. TeaTreeTim

    Smooth normals and Tessellation

    You are changing some heights after generating the normals. Trying to calculate a normal from the original vertex normals plus the changes made in tessellation will be complicated. I would either:

    - Calculate in the pixel shader from heightmap sampling (slower performance, better quality)
    - Calculate in the domain shader based on heightmap sampling
    - Calculate in the pixel shader from a normal map (a texture that stores pre-generated normals as colour values)
    - Calculate in the domain shader or geometry shader (if you use one) from a normal map

    Also, you seem to have some flipped normals, which is a hassle with tessellation. Since it's a height map with no overhangs, you can just test like this:

    if (normal.y < 0) normal = -normal;
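The "calculate from heightmap sampling" options boil down to central differences around the sample point. The sketch below is a CPU-side illustration of what the pixel or domain shader would do; `sample` stands in for a texture fetch and `texel` for the world-space spacing between samples — both names are assumptions for this example.

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

// Central-difference normal from heightmap sampling. Four neighbouring
// height samples give the slope in x and z; the y component is fixed by
// the sample spacing. Because y is always positive, the flipped-normal
// problem mentioned in the post cannot occur with this method.
template <typename SampleFn>
Vec3 heightmapNormal(SampleFn sample, float x, float z, float texel) {
    float hL = sample(x - texel, z);
    float hR = sample(x + texel, z);
    float hD = sample(x, z - texel);
    float hU = sample(x, z + texel);
    Vec3 n{hL - hR, 2.0f * texel, hD - hU};
    float len = std::sqrt(n.x * n.x + n.y * n.y + n.z * n.z);
    return {n.x / len, n.y / len, n.z / len};
}
```

A flat heightmap yields (0, 1, 0); a 45-degree ramp yields a normal tilted 45 degrees against the slope, as expected.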
  9. If you had a pregenerated hex mesh, it's just dividing triangles into 4 smaller ones. Stitching means being aware of neighbours and their level. That mesh class you linked is pretty stupid — not in an insulting way, but it is not aware of its neighbours, so how will you deal with stitching different levels together? Also, how do you know if you already made a new vertex for a subdivision, and what its index is? A triangle in that link is just 3 indices to vertices, so I would add more vertices for the smaller triangles and keep a way to find them. I would probably add additional fields to the triangle class to index its children, midpoint vertices and neighbours:

     struct Triangle // or class, whatever
     {
         int corners[3];   // indices into the vertex array for the corners
         int midpoints[3]; // -1 if not made yet, otherwise vertex indices
         int childTris[4]; // indices into the triangle array, -1 if the child hasn't been made yet
     }

     Also maybe an int pointing to the parent to make walking the whole thing easier, and if you are displaying levels based on LOD (not sure, that seemed like a requirement) some kind of value to help the GPU know what level to render (like the midpoint position of the parent, so you could pick the level based on distance from the camera). I'm confident that would also be adequate for stitching on a height map.
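A minimal sketch of how the cached midpoint indices would be used: the midpoint vertex between two corners is created only on first request, and subsequent requests (e.g. from a neighbouring subdivision) reuse the stored index. The function name and layout are illustrative; neighbour propagation for stitching is deliberately omitted.

```cpp
#include <array>
#include <vector>

struct Vertex { float x, y, z; };

// Triangle record with cached midpoint and child indices, so a
// subdivision is never duplicated.
struct Triangle {
    std::array<int, 3> corners{};
    std::array<int, 3> midpoints{{-1, -1, -1}};      // -1 until created
    std::array<int, 4> childTris{{-1, -1, -1, -1}};  // -1 until subdivided
};

// Returns the vertex index of the midpoint of edge e (between corners e
// and (e+1)%3), creating the vertex only on the first call. Stitching
// would additionally copy this index into the adjacent triangle's record.
int midpointIndex(Triangle& t, int e, std::vector<Vertex>& verts) {
    if (t.midpoints[e] != -1) return t.midpoints[e];  // already made
    const Vertex& a = verts[t.corners[e]];
    const Vertex& b = verts[t.corners[(e + 1) % 3]];
    verts.push_back({(a.x + b.x) * 0.5f,
                     (a.y + b.y) * 0.5f,
                     (a.z + b.z) * 0.5f});
    t.midpoints[e] = static_cast<int>(verts.size()) - 1;
    return t.midpoints[e];
}
```

Calling this for all three edges, then filling `childTris` with the four new triangles, is one subdivision step; the -1 sentinels answer the "did I already make this vertex?" question directly.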
  10. TeaTreeTim

    A Novel Approach To (Non-Manifold) Dual Contouring

    It's all about time: how much time can you spend generating the mesh? By definition dual contouring is cellular; if you don't want to work with cells, make different-sized faces. It's only time that limits your technique. An example:
  11. You don't want local space, you want bone-head space. All rotations are around the pivot point for that bone, not the pelvic region or wherever local space (0,0,0) is. PS: if you can get animation up and running in a week, you are doing well.
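Rotating "around the pivot point for that bone" means translating the bone head to the origin, rotating, then translating back. A minimal sketch, shown as a rotation about the Y axis; a real skeleton would use the bone's full rotation matrix or quaternion, and the function name is made up for this example.

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

// Rotate point p about a pivot (the bone head) around the Y axis.
// Step 1: move into bone-head space (pivot at origin).
// Step 2: rotate. Step 3: move back out.
Vec3 rotateAboutPivot(const Vec3& p, const Vec3& pivot, float angleY) {
    float s = std::sin(angleY), c = std::cos(angleY);
    float x = p.x - pivot.x, z = p.z - pivot.z;  // into bone-head space
    return { pivot.x + c * x + s * z,            // rotate, then back out
             p.y,
             pivot.z - s * x + c * z };
}
```

Note that the pivot itself never moves under this transform — which is exactly the behaviour a joint should have, and exactly what breaks if you rotate around local-space (0,0,0) instead.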
  12. Why? Having multiple cell sizes isn't black magic. I don't know Unity, so I don't know: is there a reason the CPU even HAS to know what the heights are? What stops you from doing this in the shader:

      float distance = lengthSquared(worldPos - camera.xyz);
      float height = noise(worldPos * continentScale) * continentHeight;
      if (distance < closeEnoughToSeeMountains) height += noise(worldPos * mountainScale) * mountainHeight;
      if (distance < closeEnoughToSeeHills) height += noise(worldPos * hillScale) * hillHeight;

      This can just be done in the vertex shader (and the pixel shader for texturing), or used in tessellation too if you want. If you wanted to do trees etc. that also require height, it would be better to have a compute worker create height map texture(s) so the renderer, trees, collision etc. know the height as well. In reality height will need to be a bit more complicated of course, but you get the point.

      I've done many different variations of chunked terrain: vertex texture fetch, and pregenerated and assembled in compute. For rendering entire planets that you won't be landing on I'd just do a grid, something like this: but chunked grids with height determined in compute are OK if there's a reason. In a later engine to that link, I assembled the below as a heightmap in compute; each of those areas is a chunk, and they are different geographical sizes but the same texture size (it's just assembled into a single 2D texture like this so you can visualise it). These height textures are used for: rendering the grids, determining tree height, AI (all GPU based), collision detection (all GPU based), and placing bushes and water. The CPU never even knows the height. In fairness I've moved away from this in newer engines, because I do server-client engines and the CPU was less utilised than the GPU.
  13. Firstly, for a lunarscape you can do what you want, but how do you path rivers? The second question is whether you can pregenerate anything, or does it have to be purely runtime generation — and does runtime include 30 seconds for a compute process to assemble the terrain? Thirdly, why does your process preclude the use of GPU-side generation (not sure why you mention dozens of calls back and forth)? Fourth question: what's harder is everything else — collision detection (is that GPU or CPU?), trees and bushes (let's face it, in most vistas you are looking at trees and bushes, not barren earth), content generation, and did I mention water? How do you know a river will always flow down until it reaches the sea? I do a bit of everything
  14. If you're passing a position and rotation per joint per frame (as opposed to using an inverse bind matrix), you'd normally centre the bone head, then rotate, then translate. I don't see you do that, and I also don't understand: 2.0 * cross(rot.xyz, cross(rot.xyz, v) + rot.w * v) // huh, is this a rotation based on identity position? If everything is set to identity first, and then you just rotate one joint, or only translate one joint etc., how does it look?
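For reference, the expression being questioned looks like a fragment of the standard shader-style quaternion rotation, v' = v + 2 * cross(q.xyz, cross(q.xyz, v) + q.w * v). A sketch of the full form, assuming a unit quaternion (the struct and function names are illustrative):

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };
struct Quat { float x, y, z, w; };  // assumed unit length

Vec3 cross(const Vec3& a, const Vec3& b) {
    return { a.y * b.z - a.z * b.y,
             a.z * b.x - a.x * b.z,
             a.x * b.y - a.y * b.x };
}

// Rotate v by unit quaternion q:
//   v' = v + 2 * cross(q.xyz, cross(q.xyz, v) + q.w * v)
// Note this rotates about the origin with no reference to a bind pose,
// which may be why the quoted snippet reads like a "rotation based on
// identity position".
Vec3 rotate(const Quat& q, const Vec3& v) {
    Vec3 u{q.x, q.y, q.z};
    Vec3 inner = cross(u, v);
    inner = { inner.x + q.w * v.x,
              inner.y + q.w * v.y,
              inner.z + q.w * v.z };
    Vec3 t = cross(u, inner);
    return { v.x + 2.0f * t.x, v.y + 2.0f * t.y, v.z + 2.0f * t.z };
}
```

If the bone head is not translated to the origin before this rotation is applied, joints will orbit local (0,0,0) rather than pivot correctly — consistent with the advice in the post.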
  15. Most VR is obsessed with first person because it's assumed this is the best way to use it, but third person can help reduce motion sickness. If the camera is pulled behind a moving avatar that the player can fixate on, some of the effects of motion sickness are mitigated. Vertical motion can also be reduced if the camera follows the player on its own trajectory, etc.