
a normals problem with voxels


Old topic!
Guest, the last post of this topic is over 60 days old and at this point you may not reply in this topic. If you wish to continue this conversation start a new topic.

17 replies to this topic

#1 winsrp   Members   -  Reputation: 273


Posted 15 March 2012 - 10:34 AM

So I was making my procedural landscape out of cubes (yeah, like Minecraft... again, I know), and I figured the landscape wasn't big enough, so I pushed the view distance further out. Doing so dropped me from 60 fps with v-sync on to around 5 fps. I originally had a view distance of 256*128*256 cubes and wanted something in the 1024*1024*1024 range; of course this killed my video card.

I was previously creating 4 vertices for each face with 6 indices, so to draw a single cube floating in mid air I had to submit the same vertex 3 times! With 4 blocks in a 2x2 layout the center vertex was submitted 4 times, and other layouts ended up with a single vertex submitted up to 6 times. That was not acceptable, so I went down the obscure optimization path. I had a rendering goal, a performance goal, and a generation-time goal to meet, since this can't take 2 minutes to load either.

A couple of days later the goals were met, but then I tried to apply normals to my vertices... and there was a holy-crap moment when the final render came out. The first holy crap was the landscape itself, which looked rather nice: 60 fps, generated in under 45 seconds, which is not bad for so many cubes (~34 million).

gen test1.png

The optimization I did was: never repeat a single vertex. (Not entirely true; at least 90% of the vertices are unique. The other 10% are on the edges of the sections, and since each section has its own vertex buffer, those must be repeated.)

But since this change I can no longer set a normal to plain up, left, right, and so on, because a vertex now belongs to several faces. So I sum up the normals of all the faces the vertex belongs to and then normalize the result, which gives me this odd look.

gen test2.png
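For what it's worth, the summed-and-normalized scheme described above can be sketched in a few lines (plain Python rather than shader code, just to show the math). For a corner vertex shared by three perpendicular cube faces the result is the diagonal (1,1,1)/sqrt(3) instead of any of the face normals, which is exactly what produces the smoothed look:

```python
import math

def normalize(v):
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

def vertex_normal(face_normals):
    # Sum the normals of every face the vertex belongs to, then normalize.
    summed = tuple(sum(components) for components in zip(*face_normals))
    return normalize(summed)

# A cube corner vertex shared by the top (+y), right (+x) and front (+z) faces:
corner = vertex_normal([(0, 1, 0), (1, 0, 0), (0, 0, 1)])
# Each component comes out as 1/sqrt(3) ~= 0.577 -- a diagonal, not a face normal.
```

A vertex on a flat region, where all the contributing faces agree, still gets the right normal; only edges and corners get averaged away.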

Is there any way to do normals per face instead of per vertex? Maybe something that can be calculated in the pixel shader rather than the vertex shader?


#2 japro   Members   -  Reputation: 887


Posted 15 March 2012 - 11:01 AM

I just treated vertices at the same position but with different normals as different vertices, which of course increases the amount of data quite significantly. The only other way I can think of off the top of my head is to not store normals in the vertex format at all. Instead, set up the indices such that all faces in the same index buffer have the same normal, then pass the normal as a uniform. That obviously increases the number of draw calls by a factor of 6.
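To make the idea concrete, here is a rough sketch (in Python, with made-up face records rather than a real graphics API) of sorting quads into one index batch per face direction, so that each batch can be drawn in its own call with its normal supplied as a uniform:

```python
# The six axis-aligned face directions and their normals.
FACE_NORMALS = {
    "+x": (1, 0, 0), "-x": (-1, 0, 0),
    "+y": (0, 1, 0), "-y": (0, -1, 0),
    "+z": (0, 0, 1), "-z": (0, 0, -1),
}

def batch_by_normal(faces):
    # faces: list of (direction, (v0, v1, v2, v3)) quads over a shared
    # vertex buffer. Returns one index list per direction; each list
    # would be drawn with FACE_NORMALS[direction] passed as a uniform.
    batches = {direction: [] for direction in FACE_NORMALS}
    for direction, (a, b, c, d) in faces:
        batches[direction] += [a, b, c, a, c, d]  # two triangles per quad
    return batches

quads = [("+y", (0, 1, 2, 3)), ("+x", (4, 5, 6, 7)), ("+y", (8, 9, 10, 11))]
batches = batch_by_normal(quads)
```

The vertex buffer stays shared and normal-free; only the index buffers are split, at the cost of up to six draw calls per section.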

#3 winsrp   Members   -  Reputation: 273


Posted 15 March 2012 - 03:51 PM

Treating vertices at the same position as different vertices would take me back to the beginning, where I had duplicated vertices, so no good.

The other idea isn't so bad, since you'd be making face batches by normal and the index list would still be big. The only problem I see (in my particular case at least) is that the vertices have already been passed to the vertex buffer, and the vertices are the ones carrying the normals.

Any other ideas?

#4 slicer4ever   Crossbones+   -  Reputation: 3705


Posted 15 March 2012 - 04:38 PM

When you create your vertex normals, do you check for co-planar normals on the vertex? That's an easily overlooked reason why your normals can end up looking a bit... funky.

just a minor suggestion.
Check out https://www.facebook.com/LiquidGames for some great games made by me on the Playstation Mobile market.

#5 winsrp   Members   -  Reputation: 273


Posted 15 March 2012 - 08:31 PM

slicer, can you explain that a little further?

What I do is extract the vertices that belong to a face and add the corresponding normal to those vertices; then, when all the normals are added up, I normalize the results.

#6 winsrp   Members   -  Reputation: 273


Posted 15 March 2012 - 10:29 PM

What you have to do when you just don't get what's going on by looking at the code: make it graphical...

Now I get that I'm doing it wrong... not entirely, just at the corners. The normals should all be at 45 or 90 degrees, and it's very clear I'm not getting there.

gen test normals.png

#7 PolyVox   Members   -  Reputation: 708


Posted 16 March 2012 - 03:22 AM

In my voxel terrain engine I avoid storing normals at all, and instead compute them in the pixel shader. This lets you do the kind of sharing you describe and also reduces the size of each vertex. Have a read here: http://www.volumesoffun.com/polyvox/documentation/dokuwiki/computing_normals_in_a_pixel_shader

#8 winsrp   Members   -  Reputation: 273


Posted 16 March 2012 - 08:43 AM

OHHHH!!! You can do that... Now I have to write my own pixel shader... dang, I was using BasicEffect.

#9 winsrp   Members   -  Reputation: 273


Posted 16 March 2012 - 08:57 AM

Uhh, I also found the source code of BasicEffect... I'm a total noob at this shader stuff...

#10 PolyVox   Members   -  Reputation: 708


Posted 16 March 2012 - 09:38 AM

Yep, you can do it :-) However, be aware that I've heard there can be problems on the edges of polygons; maybe ddx/ddy is not well defined there... I forget. I've never seen an issue in practice, though.

#11 winsrp   Members   -  Reputation: 273


Posted 16 March 2012 - 03:30 PM

Ok, that code is for OpenGL and I work in DirectX 9, so a little translation is needed, but I'd like to understand the logic behind the code before doing a 1-to-1 conversion. Can you give me a short explanation of how this works? I really don't understand how a pixel shader knows the direction of the normal by just looking at a single point.

#12 Postie   Members   -  Reputation: 932


Posted 16 March 2012 - 09:45 PM

From what I've read, in Minecraft Notch draws the faces of the cubes in batches, i.e. all north faces in one pass, then all east faces, then all up faces, etc. If you figure out the orientation of the player's view, they can only ever be seeing three faces of a cube, so with some trickery you can really reduce the amount of drawing you need to do.

He also combines adjacent blocks into larger polys to cut down on the amount of vertex data he has to work with in a given frame.

Another thing to consider is that since you're working with axis-aligned blocks, there are only six unique normals in the entire world. You could represent the normal as a single index and pretty much hardcode the six vectors in the shader.
Currently working on an open world survival RPG - For info check out my Development blog: ByteWrangler
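The visibility point above can be sketched quickly (Python, a hypothetical helper rather than anyone's actual engine code): a face whose normal has a non-negative dot product with the view direction faces away from the camera, so for axis-aligned cubes the test reduces to the sign of each view component, and at most three of the six directions survive for any view vector:

```python
def visible_face_directions(view_dir):
    # view_dir: the direction the camera is looking, as (x, y, z).
    # A face can only be seen if its normal points back toward the
    # viewer, i.e. dot(normal, view_dir) < 0. For the six axis-aligned
    # directions that is just a sign check per component.
    x, y, z = view_dir
    faces = []
    if x:
        faces.append("-x" if x > 0 else "+x")
    if y:
        faces.append("-y" if y > 0 else "+y")
    if z:
        faces.append("-z" if z > 0 else "+z")
    return faces
```

Combined with per-direction batches, that lets you skip up to half of the face geometry outright before any per-cube culling.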

#13 PolyVox   Members   -  Reputation: 708


Posted 17 March 2012 - 05:19 AM

Ok, that code is for OpenGL and I work in DirectX 9, so a little translation is needed, but I'd like to understand the logic behind the code before doing a 1-to-1 conversion. Can you give me a short explanation of how this works? I really don't understand how a pixel shader knows the direction of the normal by just looking at a single point.


As I understand it, the ddx and ddy instructions compute the rate at which a variable is changing. To do this, they compare the value at the current pixel with the value at the adjacent pixel in x or y.

In this case, the variable we are using is the position. ddx() computes how much the position changes as we move to the next pixel in x. This change is a vector that lies along the surface of the polygon. Doing ddy() as well gives you two vectors lying along the surface of the polygon. The cross product of these is perpendicular to both, and is therefore a normal.
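The same construction can be mimicked on the CPU. This is a Python stand-in for the shader math; the positions are made up, and the argument order of the cross product may need flipping depending on the API's screen-space conventions:

```python
import math

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def normalize(v):
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

# Interpolated world positions at the current fragment and at its
# neighbours one pixel over in x and in y, all lying on the flat
# top (+y) face of a cube.
p    = (2.0, 5.0, 3.0)
p_dx = (2.1, 5.0, 3.0)   # what ddx(position) compares against
p_dy = (2.0, 5.0, 3.2)   # what ddy(position) compares against

ddx = tuple(b - a for a, b in zip(p, p_dx))  # a vector along the surface
ddy = tuple(b - a for a, b in zip(p, p_dy))  # a second vector along it
normal = normalize(cross(ddy, ddx))          # perpendicular to both: (0, 1, 0)
```

In HLSL the whole thing collapses to something like `normalize(cross(ddy(worldPos), ddx(worldPos)))`, with the sign checked empirically against your winding order.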

#14 winsrp   Members   -  Reputation: 273


Posted 17 March 2012 - 04:45 PM

Minecraft uses textures to represent the different terrains, but I won't have that; I'll use just colors in different gradients, like the first image in the post, and the same for all my objects, characters, and so on. It also seems I have enough blocks already: I could double what I have on screen to 2048*2048 and still get 60 fps (even if memory gets really tight), but I'll probably leave it at 512*512, since that only uses around 250 MB and leaves me quite some room to play with.

My approach is also a little different: this map is infinite in every direction, not just X and Z. I'm going for a Terraria kind of map on the Y scale. Maybe below Y = -15,000 I'll put an indestructible block with hell just above it, but I'll have to think that through. I'm also not doing destruction/construction in a Minecraft way; it's a different kind of game. The looks are just similar because destruction is much simpler with 1x1x1 blocks.

Poly,

So ddx and ddy check the adjacent pixels, and that kind of makes a fake triangle of pixels, and you find the direction perpendicular to all 3. Right?

#15 PolyVox   Members   -  Reputation: 708


Posted 18 March 2012 - 06:21 AM

So ddx and ddy check the adjacent pixels, and that kind of makes a fake triangle of pixels, and you find the direction perpendicular to all 3. Right?


Yeah, I guess you could think of it like that.

#16 winsrp   Members   -  Reputation: 273


Posted 18 March 2012 - 02:30 PM

Well, the test came out rather well: no visible performance decrease, and 136 million cubes in memory. Not bad.

gen test3.png

Thanks a lot for all your input, guys.

#17 winsrp   Members   -  Reputation: 273


Posted 18 March 2012 - 08:41 PM

Well, it ain't over until the fat lady sings. I started to put some procedural trees in, and there seems to be some color bleeding: tree bark is supposed to be brown and leaves green (for test purposes, of course). Any ideas on how to fix this?

gen test4png.png

#18 winsrp   Members   -  Reputation: 273


Posted 18 March 2012 - 09:27 PM

Duh... it's obvious what was going on here: a shared vertex can belong to faces of two different colors, and since I'm using only one color per vertex, whichever color is written first wins, so only one brown bark vertex was kept. Since I don't want to lose the sharing effect (I probably can't, unless I write several more vertices), I just let the generator know that I want my tree bark color to be prioritized, and it looks much better. I'll probably flip the priority where the bark hits the grass, to make the tree look more like it's growing out of the grass itself.

gen test5.png



