
[OpenGL] Using normals with voxels


Plerion    381

Hello everyone,

 

I set the prefix to OpenGL because that is what my engine is ultimately based on, though the question is not really specific to OpenGL.

 

Currently I am rendering terrain using cubes. Calculating the normals for the cube faces is fairly obvious, but I am running into problems with the fact that a vertex effectively has three normals. I am using a vertex buffer with the corner points of the cube and an index buffer holding the indices for the faces, so a cube consists of 8 vertices and up to 36 indices (depending on its neighbors). Each vertex has three distinct normals, and there should be no interpolation across the faces of the cube. Is there an elegant way to get my normals working without creating three vertices for each point and lots of redundant data?

 

Thanks in advance for your help, and a merry Xmas

Plerion

unbird    8338
You can't avoid the "redundancy". For proper normals you need 24 distinct vertices (see here for something similar).
 
Some suggestions though:
- if bandwidth is a problem, you could use low-precision formats (normals and maybe even positions can be integers)
- how about creating the cube procedurally in the vertex shader? (hmmm, probably only applicable with instancing...)

PS: Ah yes, merry Christmas! :)

Plerion    381

Hey, thank you for the hints. I have considered several options. At first I was still trying to somehow get it done with 8 vertices and something like a cube map for the normals, but in the end I decided that's not even worth trying, as it hits performance way too hard, and I'm not really in trouble with my current bandwidth. Instead, I've compressed my data (previously 3 floats position, 3 floats normal) down to 2 bytes of position and 1 byte of normal. One position byte contains x and y: chunks have 16x16 rows, so they go into the upper 4 and lower 4 bits of the value. The second byte contains the z position, as a row can be up to 256 blocks high. Finally, the normal is encoded as 1 -> (1,0,0), -1 -> (-1,0,0), 10 -> (0,1,0), -10 -> (0,-1,0), 100 -> (0,0,1), -100 -> (0,0,-1), and reconstructed in the shader using the step function and division. So now I use less bandwidth, have correct normals, and still have a byte left for occlusion values (as I'm using an Int4 vector).
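For illustration, here is a minimal GLSL sketch of the decode (all names are illustrative; it assumes the packed bytes arrive as a signed ivec4 via glVertexAttribIPointer, and it selects the normal component with comparisons, which amounts to the same thing as the step/division trick):

#version 330 core

// .x = x/y packed as 4+4 bits, .y = z, .z = normal code, .w = occlusion
in ivec4 packedVertex;

uniform mat4 worldViewProjection; // illustrative uniform names
uniform vec3 chunkOrigin;

out vec3 normal;
out float occlusion;

void main()
{
    int xy = packedVertex.x & 0xFF;   // mask in case the byte arrived signed
    vec3 pos = vec3(xy >> 4, xy & 0xF, packedVertex.y & 0xFF);

    int code = packedVertex.z;        // one of +-1, +-10, +-100
    int mag  = abs(code);
    // exactly one component becomes 1, then the sign is applied
    normal = sign(float(code)) * vec3(mag == 1, mag == 10, mag == 100);

    occlusion = float(packedVertex.w);
    gl_Position = worldViewProjection * vec4(chunkOrigin + pos, 1.0);
}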

 

PS: Thanks, and the same to you :D

Brother Bob    10344

If you only need flat shading, then you may actually get away with only eight vertices, even though a cube with normals technically has 24 unique vertices. With flat shading, the attribute of a single vertex is used for the whole primitive, and it is in fact possible to specify a cube with vertex/normal arrays of only eight entries. Look up the command glProvokingVertex, which specifies which of a triangle's vertices carries the flat-shaded attribute.

 

If you have, for example, a flat-shaded normal attribute in your vertex shader and specify the last vertex as the provoking vertex, then the last vertex of each triangle provides the normal for all three vertices. The first two vertices effectively contain unused normal data, and you should be able to take advantage of that to reduce the size of the vertex arrays.

//     7-----6
//    /|    /|
//   3-----2 |
//   | 4---|-5
//   |/    |/
//   0-----1

    struct vector3 { float x, y, z; };

    // the eight corner positions of the cube sketched above
    vector3 p[] = {
       { -1, -1, -1},
       {  1, -1, -1},
       {  1,  1, -1},
       { -1,  1, -1},
       { -1, -1,  1},
       {  1, -1,  1},
       {  1,  1,  1},
       { -1,  1,  1},
    };

    // one normal per vertex; each face stores its normal at the vertex
    // that is the last (provoking) vertex of both of its triangles, so
    // entries 4 and 5 are never read and stay zero
    vector3 n[] = {
       {  0, -1,  0},
       {  1,  0,  0},
       {  0,  0, -1},
       { -1,  0,  0},
       {  0,  0,  0},
       {  0,  0,  0},
       {  0,  1,  0},
       {  0,  0,  1},
    };

    // twelve triangles, two per face; the last index of each triangle
    // selects the vertex holding that face's normal
    int index[] = {
        0,1,2, 3,0,2,   // -z face (normal at vertex 2)
        5,6,1, 6,2,1,   // +x face (normal at vertex 1)
        5,4,7, 6,5,7,   // +z face (normal at vertex 7)
        4,0,3, 7,4,3,   // -x face (normal at vertex 3)
        3,2,6, 7,3,6,   // +y face (normal at vertex 6)
        5,4,0, 1,5,0,   // -y face (normal at vertex 0)
    };

Try it and see if it works; I don't have the possibility to try it myself at the moment. The index list builds the cube sketched in the comment at the top of the code out of twelve triangles (do check the winding order against your front-face setting, since I haven't been able to verify it).

 

If you only need to draw parts of the cube, you can truncate the index list accordingly.
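For the shader side, here is a minimal GLSL sketch (GL_LAST_VERTEX_CONVENTION is already the default convention in OpenGL, but it can be set explicitly with glProvokingVertex; the uniform name is illustrative):

// vertex shader; the 'flat' qualifier disables interpolation, so each
// triangle takes this value from its provoking (last) vertex
#version 330 core
in vec3 position;
in vec3 normal;
uniform mat4 worldViewProjection;
flat out vec3 faceNormal;

void main()
{
    faceNormal = normal;
    gl_Position = worldViewProjection * vec4(position, 1.0);
}

The matching fragment shader then simply declares flat in vec3 faceNormal;.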


Samith    2460
Brother Bob's solution is pretty interesting, and probably better than what I'm about to propose, but I thought I'd add another option to the pot anyway.

You could construct the normal in the fragment shader using the derivative intrinsics. Basically: n = normalize(cross(dFdx(pos), dFdy(pos)));
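In GLSL that could look like this (a sketch; worldPos is assumed to be the interpolated world-space position, and the light direction is made up for illustration):

#version 330 core
// fragment shader: the face normal falls out of the screen-space
// derivatives of the interpolated position; no normal attribute needed
in vec3 worldPos;
out vec4 fragColor;

void main()
{
    vec3 n = normalize(cross(dFdx(worldPos), dFdy(worldPos)));
    float diffuse = max(dot(n, normalize(vec3(0.5, 1.0, 0.3))), 0.0);
    fragColor = vec4(vec3(diffuse), 1.0);
}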

The obvious drawback is that you must perform that calculation per fragment. But you don't even have to send any normal data to the vertex shader!

Also, in a similar vein, you could compute the primitive normal in the geometry shader and spare yourself the per-fragment computation, but it's not a given that that would be any faster.

unbird    8338
And yet another idea: vertex pulling. One of the latest GPU Pro books has a chapter about this. In essence, you don't bind your vertex (and index) buffers as such, but as readable buffers or textures; then in the vertex shader you load/sample from them using the vertex ID. A manual input assembler, so to speak.
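A minimal GLSL sketch of the idea, using buffer textures (names are illustrative; you then draw with a plain glDrawArrays call and no vertex attributes bound at all):

#version 330 core
// "vertex pulling": positions and indices are fetched manually by
// vertex ID instead of coming in as attributes
uniform samplerBuffer positions; // buffer texture of positions (RGB32F)
uniform isamplerBuffer indices;  // buffer texture of the index list
uniform mat4 worldViewProjection;

void main()
{
    int idx  = texelFetch(indices, gl_VertexID).r;
    vec3 pos = texelFetch(positions, idx).xyz;
    gl_Position = worldViewProjection * vec4(pos, 1.0);
}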

PS: This is just for fun (D3D11 vertex shader, use with Draw(36,0)):

// WorldViewProjection and World are assumed to come from a constant buffer
void ProceduralCubeVS(uint iid: SV_VertexID,
  out float3 position   : POSITION,
  out float2 tex        : TEXCOORD,
  out float3 normal     : NORMAL,
  out float4 positionCS : SV_Position)
{
    uint face = iid / 6;                       // six faces, six vertices each
    uint index = iid % 6;
    float sign = face >= 3 ? 1 : -1;           // faces 0-2 negative, 3-5 positive
    uint dir = face % 3;
    normal = float3(dir.xxx == uint3(0,1,2));  // axis-aligned unit normal
    float3 t = normal.yzx;                     // tangent
    float3 b = normal.zxy;                     // bitangent

    const uint FSQIDS[6] = {0,1,2,1,3,2};      // the two triangles of a quad
    uint id = FSQIDS[index];
    tex = float2(id & 1, (id >> 1) & 1);
    float2 fsq = float2(tex * float2(1,-1) + float2(-0.5,0.5));
    normal *= sign;
    position = float3(fsq.x * t + sign * fsq.y * b + 0.5 * normal);

    positionCS = mul(float4(position, 1), WorldViewProjection);
    position = mul(float4(position, 1), World).xyz;
    normal = mul(normal, (float3x3)World).xyz;
}


Plerion    381

@BrotherBob: Your suggestion is very interesting! I will surely try that as soon as I find time!

@Samith: I will try this as well, see what impact it has on performance, and then see which balances out better. Thanks for the tip!

@unbird: Hehe, that looks interesting, but I need to have a closer look at it first :D

larspensjo    1561

An important question: Is this really a problem?

 

You should hold off on this optimization until you find out that it really is an issue. Then, depending on exactly what the issue is, you optimize.

 

It is not unusual for a change of algorithm to give a much better pay-off, for example improved culling or the use of LOD.

lc_overlord    436

I basically have two solutions to this issue:

 

1. Send points to the geometry shader and use it to draw cubes; this is good when you have a lot of free-moving cubes, or basically cube particles (see the sketch after this list).

 

2. If the terrain is more or less static, then assembling the vertices, normals, and texture coordinates manually into raw triangles and pushing them into a VBO is the way to go; even with the simplest possible culling it's fast enough that you don't have to mess around with indices at all.
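For option 1, here is a hedged sketch of such a geometry shader (GLSL 3.30; unit cubes, illustrative names, and the vertex shader is assumed to pass each cube's center through gl_Position):

#version 330 core
// expands each input point into a unit cube, one 4-vertex triangle
// strip per face
layout(points) in;
layout(triangle_strip, max_vertices = 24) out;

uniform mat4 worldViewProjection;
flat out vec3 faceNormal;

// output variables are undefined after EmitVertex, so the normal is
// rewritten before every vertex
void emitCorner(vec3 p, vec3 n)
{
    faceNormal = n;
    gl_Position = worldViewProjection * vec4(p, 1.0);
    EmitVertex();
}

// corners ordered so the strip's triangles are CCW seen from outside
void emitFace(vec3 c, vec3 n, vec3 t, vec3 b)
{
    emitCorner(c + 0.5 * (n - t - b), n);
    emitCorner(c + 0.5 * (n - t + b), n);
    emitCorner(c + 0.5 * (n + t - b), n);
    emitCorner(c + 0.5 * (n + t + b), n);
    EndPrimitive();
}

void main()
{
    vec3 c = gl_in[0].gl_Position.xyz; // cube center from the vertex shader
    for (int i = 0; i < 3; ++i)
    {
        vec3 n = vec3(i == 0, i == 1, i == 2);
        emitFace(c,  n, n.yzx, n.zxy);
        emitFace(c, -n, n.zxy, n.yzx); // swapped tangents keep the winding
    }
}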

slicer4ever    6769

[quoted: Brother Bob's flat-shading / glProvokingVertex post above]


I'd like to point out that this solution is very specific to the current case; once you start adding other attributes, such as texture coordinates, it begins to fall apart and you are back to square one.

Although I will admit it's pretty creative, and not something I would have thought of trying.


