About trapazza

  1. Hello everyone, I tried to Google for some info on this but had no luck (or didn't know where to look).

     I'm writing some shaders that would be applied to items on top of their "regular" shader once in a while. Since some of them can be computationally expensive, I'd rather use these materials only when they're needed.

     So my question is: how is this normally done in Unity (or any other engine, for that matter)? Is it OK to dynamically add and remove materials to/from objects when they are needed? Or can you write every shader as a "pass" and somehow activate/deactivate passes? And what if you have a scene with hundreds or thousands of items and want to apply the same material to all of them at once for a couple of seconds?

     Thanks. Jm
  2. There's no magic trick. Even if it looks flat, it isn't (can you tell that the Earth is round from the ground?). They use a spherified cube (like everyone else these days) on which detail is added depending on view distance. They're probably using the "entering atmosphere" transition to add a lot of detail at once (space view -> ground view), but it's still spherical down there.
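The spherified cube mentioned above boils down to a very small mapping. Here is a minimal sketch in Python (used for brevity instead of engine-side C#/HLSL): a point on the unit cube's surface is projected onto the unit sphere simply by normalizing it. This is the simplest cube-to-sphere map, not any particular engine's implementation.

```python
import math

def spherify(x, y, z):
    """Project a point on the unit cube's surface onto the unit sphere
    by normalizing its position vector (the simplest cube-to-sphere map)."""
    length = math.sqrt(x * x + y * y + z * z)
    return (x / length, y / length, z / length)

# A corner of the cube maps onto the sphere; the result has radius 1:
p = spherify(1.0, 1.0, 1.0)
r = math.sqrt(sum(c * c for c in p))
```

Real planet renderers usually refine this with an equal-area or equal-angle variant, since plain normalization compresses cells near the face corners.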
  3. It all depends. Is it a spherical world? Is it a continuous plane? Do you use different LODs for the terrain and for the rest of the objects?
  4. trapazza

    make everything an object??

    Use common sense. Just that. The fact that you asked indicates that you already almost knew the answer.
  5. trapazza

    OOP is dead, long live OOP

    Just out of curiosity, I'd love to see how having "class hierarchies 15 levels deep" is actually useful.
  6. I'm trying to figure out how to design the vegetation/detail system for my procedural chunked-LOD-based planet renderer. While I've found a lot of papers on how to scatter things homogeneously over a planar or spherical surface, I couldn't find much info about how the locations of objects are actually encoded. There seems to be a common approach that uses some sort of texture mapping, where different layers of vegetation/rocks/trees etc. define what is placed and where, but I can't figure out how this is actually rendered.

     I guess that for billboards these textures could be sampled from the vertex shader, with a geometry shader then drawing a texture onto a generated quad? What about nearby trees or rocks that need a 3D model instead? Is that handled on the CPU? Is there a specific solution that works better with a chunked-LOD approach?

     Thanks in advance!
  7. This is exactly what I'm going to do. I've added a new function to the tree node: "public string ToGLSL()" that will do the trick.
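For anyone curious what a per-node ToGLSL() might look like, here is a minimal sketch of the idea in Python (the thread's actual code is C#; the node names and the snoise() shader helper are assumptions for illustration): each node emits a GLSL expression string, and nesting nodes nests the expressions, so walking the tree once yields shader source with no run-time CPU/GPU round trips.

```python
class Node:
    def to_glsl(self) -> str:
        raise NotImplementedError

class GradientNoise(Node):
    # Assumes a snoise(vec3 p) function is defined elsewhere in the shader.
    def to_glsl(self):
        return "snoise(p)"

class Sum(Node):
    def __init__(self, a, b):
        self.a, self.b = a, b
    def to_glsl(self):
        return f"({self.a.to_glsl()} + {self.b.to_glsl()})"

class Billow(Node):
    # Billow: abs() of a signed noise source, remapped back to [-1, 1].
    def __init__(self, source):
        self.source = source
    def to_glsl(self):
        return f"(2.0 * abs({self.source.to_glsl()}) - 1.0)"

# Build a small tree and emit a GLSL expression for its height function:
expr = Billow(Sum(GradientNoise(), GradientNoise())).to_glsl()
# expr -> "(2.0 * abs((snoise(p) + snoise(p))) - 1.0)"
```

The emitted string can then be spliced into a compute-shader template and compiled once per noise tree.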
  8. I'm trying to add details like grass, rocks, trees, etc. to my little procedurally-generated planet. The terrain meshes are created from a spherified cube which is split into chunks (chunked LOD). To do this I've written a geometry shader that takes a mesh as input and uses its vertex positions as the locations where the patches of grass will be placed (as textured quads).

     For an infinite flat world (not spherical) I'd use the terrain mesh itself as input to the geometry shader, but I've found that this won't work well on a sphere, since the vertex density is not homogeneous across the surface. So the main question is: how do I create a point cloud for each terrain chunk whose points are evenly distributed across the chunk?

     Note: I've seen some examples where these points are calculated by intersecting the terrain with a massive rain of totally random vertical rays from above... but I found that solution overkill, to say the least.

     Another related question: is there something better/faster than the geometry shader approach, maybe using compute shaders and instancing?
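One common way to get an even distribution without any ray casting is to scatter in the cube face's parameter space and then project: generate a jittered grid inside the chunk's (u, v) rectangle, warp it with an equal-angle mapping so cells subtend roughly equal angles from the planet's center, and spherify. A rough sketch in Python (the function name, the tan()-based warp, and the +Z-face assumption are illustrative choices, not the thread's code):

```python
import math
import random

def chunk_scatter(u0, u1, v0, v1, n, seed=0):
    """Scatter n*n points over one chunk of a cube face.

    (u, v) parametrize the face in [-1, 1]^2.  The tan() warp spreads
    the grid so cells subtend roughly equal angles from the sphere's
    center, countering the density pile-up a plain normalized grid
    shows near face corners."""
    rng = random.Random(seed)  # deterministic per chunk: same seed, same scatter
    points = []
    for i in range(n):
        for j in range(n):
            # jittered grid position inside the chunk's UV rectangle
            u = u0 + (i + rng.random()) / n * (u1 - u0)
            v = v0 + (j + rng.random()) / n * (v1 - v0)
            # equal-angle warp, then project the +Z face point to the sphere
            x, y = math.tan(u * math.pi / 4), math.tan(v * math.pi / 4)
            inv = 1.0 / math.sqrt(x * x + y * y + 1.0)
            points.append((x * inv, y * inv, inv))
    return points

# One quadrant-sized chunk of the +Z face, 8x8 grass patch anchors:
pts = chunk_scatter(-1.0, 0.0, -1.0, 0.0, 8)
```

Seeding the RNG with the chunk's ID makes the scatter reproducible, so a chunk regrows the same grass every time it is paged in; the same loop maps naturally onto a compute-shader thread per point.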
  9. This is how I do it in my Mesh class:

        /// <summary>
        /// Welds close vertices into one.
        /// </summary>
        private void weldVertices( int[] tvis_ )
        {
            const float sqrDistThreshold = DupedVertexMinDistance * DupedVertexMinDistance;
            var duplicatedVertices = new Dictionary<int, int>();

            // identify duplicates: map each later vertex j onto the first
            // earlier vertex i that lies within the distance threshold
            for( var i = 0; i < vertexCount - 1; ++i )
                for( var j = i + 1; j < vertexCount; ++j )
                    if( !duplicatedVertices.ContainsKey( j ) )
                        if( (pointByIdx( i ) - pointByIdx( j )).sqrMagnitude <= sqrDistThreshold )
                            duplicatedVertices.Add( j, i );

            // adjust triangles to reference the kept vertices
            for( var n = 0; n < indexCount; ++n )
                if( duplicatedVertices.ContainsKey( tvis_[ n ] ) )
                    tvis_[ n ] = duplicatedVertices[ tvis_[ n ] ];
        }

     For performance reasons, this method just identifies duplicated vertices and corrects the triangles. I later use a different method to remove orphan vertices and invalid triangles (triangles with 2 or 3 duplicated indices).
  10. Why? Having multiple cell sizes isn't black magic. I don't know Unity, so is there a reason the CPU even HAS to know what the heights are? What stops you from doing this in the shader?

      The problem is not the cell sizes but the noise-generation pipeline. The noise for each main planet or asteroid is described in an artist/designer-provided XML file, which is translated into a noise tree at run-time. Since there could potentially be thousands of combinations, I just can't put them all in a shader. So imagine this simple scenario:

          private Noise build_simple_noise_generator()
          {
              return new Billow( new GradientNoise() );
          }

          private float[] calc_mesh_heights( Mesh mesh )
          {
              Noise noise_generator = build_simple_noise_generator();
              float[] heights = new float[ mesh.vertexCount ];
              noise_generator.process( mesh.vertices, heights );
              return heights;
          }

      When calling noise_generator.process(...), the Billow node first calls process on GradientNoise, which returns some Perlin-based noise that the Billow node then uses as its input. So the idea was to walk the tree on the CPU and move the calculations made inside the different nodes (which can be costly) to the GPU. Now, I don't really know if this is a good approach or whether there is a totally different way to achieve it.
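A minimal stand-alone sketch of this node pattern (in Python for brevity; SineNoise here is a cheap stand-in generator, not the engine's gradient/Perlin noise, and all names are illustrative): every node exposes a process() that maps an array of positions to an array of heights, and filter nodes pull from their child before transforming the result.

```python
import math

class NoiseNode:
    def process(self, positions):
        """Map a list of (x, y, z) positions to a list of heights."""
        raise NotImplementedError

class SineNoise(NoiseNode):
    # Stand-in generator; a real engine would use gradient/simplex noise.
    def process(self, positions):
        return [math.sin(x * 12.9898 + y * 78.233 + z * 37.719)
                for (x, y, z) in positions]

class Billow(NoiseNode):
    # Filter node: evaluates its child, then remaps |h| back to [-1, 1].
    def __init__(self, source):
        self.source = source
    def process(self, positions):
        return [2.0 * abs(h) - 1.0 for h in self.source.process(positions)]

# Mirrors build_simple_noise_generator() above: a Billow wrapping a generator.
generator = Billow(SineNoise())
heights = generator.process([(0.1, 0.2, 0.3), (0.4, 0.5, 0.6)])
```

Because each node only sees "positions in, heights out", each process() body is exactly the kind of per-element kernel that ports cleanly to a compute shader.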
  11. Hello TeaTree, thanks for your reply. For now I'm just interested in calculating the terrain heights on the GPU; that would be the first step in the whole process. The terrain is subdivided into chunks, which contain about 1600 vertices each. Chunks can't be modified at run-time, so whenever a new one is needed its heights are calculated just once.

      Noise trees can get as complex as needed, but imagine I have something simple like this: every input position (xyz) is processed by all the nodes found in the tree: (perlin + sine) -> freqMod -> billow -> ridged -> angle_selector. Each node takes a Vector3[] as input and returns a float[] of heights, which is further processed by the next node until the root node is reached. If I moved the calculations in each node to the GPU, the returned heights would have to be passed back to the CPU so I could direct the results to the next node, which in turn would call the GPU again. Hope it makes more sense now.
  12. Do heavily-loaded compute shaders affect the performance of the other "normal"/render shaders, or do they use a dedicated core?
  13. A few years ago I started creating a procedural planet engine/renderer for a game in Unity, which after a couple of years I had to stop developing due to lack of time. At the time I didn't know much about shaders, so I did everything on the CPU. Now that I have plenty of time and am more aware of what shaders can do, I'd like to resume development with the GPU in mind.

      For the terrain mesh I'm using a cubed sphere and chunked LODs. The way I calculate heights is rather complex, since it's based on a noise tree, where leaf nodes are noise generators (Simplex, Value, Sine, Voronoi, etc.) and branch nodes are filters and modifiers (FBM, abs, neg, sum, ridged, billow, blender, selectors, etc.). To calculate the heights for a mesh, you call void CalcHeights( Vector3[] meshVertices, out float[] heights ) on the noise tree's root node, somewhere in a Unity script.

      This approach offers a lot of flexibility but also puts a lot of load on the CPU. The first obvious thing to do would be (I guess) to move all the generators to the GPU via compute shaders, then do the same for the rest of the filters. But, depending on the complexity of the noise tree, a single call to CalcHeights could potentially cause dozens of calls back and forth between the CPU and the GPU, which I'm not sure is a good thing. How should I go about this? Thanks.
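One way to avoid those round trips is to fold the entire tree into a single per-vertex function, which is what compiling the tree to one compute-shader source achieves: one dispatch evaluates generator and filters together, with nothing coming back to the CPU between nodes. A toy sketch in Python, with an illustrative generator -> billow -> ridged chain (the function names, constants, and node choices are made up for the example):

```python
import math

def fbm(x, y, z, octaves=4):
    """Fractal sum of a cheap sine-based stand-in generator (a real
    engine would sum gradient-noise octaves instead)."""
    total, amplitude, frequency = 0.0, 1.0, 1.0
    for _ in range(octaves):
        total += amplitude * math.sin(frequency * (x + 1.7 * y + 2.3 * z))
        amplitude *= 0.5
        frequency *= 2.0
    return total

def tree_height(x, y, z):
    """The whole noise tree evaluated in one pass:
    generator -> billow -> ridged.  Ported to a compute shader, this is
    a single dispatch with no CPU round trips between nodes."""
    h = fbm(x, y, z)
    h = 2.0 * abs(h) - 1.0   # billow filter
    return 1.0 - abs(h)      # ridged filter

# One "dispatch" over a chunk's vertices:
heights = [tree_height(v, v, v) for v in (0.0, 0.25, 0.5)]
```

The same structure works whether the per-vertex function is generated at run-time from the XML-described tree (the ToGLSL() approach above) or hand-written for a fixed set of presets.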