Fast rendering of cubes at points

8 comments, last by ChristOwnsMe 11 years, 10 months ago
I have a list of homogeneous 3D points, and I want to render a cube at each point. What would be the fastest way to do this? There will be lots of cubes, since there are lots of points. I'm new to graphics programming, so thanks for any help.
You'd most likely want to use instanced rendering.
Would that be faster than tessellation? The cube sizes will vary based on the LOD of the patch they are part of.
I'm no expert on tessellation; as far as I know it just splits a triangle into smaller triangles, which I don't think would be of much use here.

My thought would be to either:
a) Send the positions of all the points to the GPU, draw an instanced cube, and position each instance in the vertex shader based on one of the points (a sketch of this follows below), or
b) Send the positions of all the points to the GPU and let the geometry shader turn each point into a cube, although I'm not sure if the geometry shader can do this.
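
For what it's worth, here is a minimal sketch of option a), assuming OpenGL 3.3+ via GLEW (the thread never names an API, and all names here are illustrative). The cube mesh is uploaded once, the point positions go into a second buffer, and glVertexAttribDivisor marks that attribute as per-instance:

```cpp
// Minimal sketch of the instanced setup, assuming OpenGL 3.3+ via GLEW.
// cubeVerts holds one cube's triangles; points holds the cube centers.
#include <GL/glew.h>
#include <vector>

struct Vec3 { float x, y, z; };

GLuint setupInstancedCubes(const std::vector<Vec3>& points,
                           const float* cubeVerts, int cubeVertCount)
{
    GLuint vao, vbo[2];
    glGenVertexArrays(1, &vao);
    glBindVertexArray(vao);
    glGenBuffers(2, vbo);

    // Buffer 0: the cube geometry, uploaded once and shared by all instances.
    glBindBuffer(GL_ARRAY_BUFFER, vbo[0]);
    glBufferData(GL_ARRAY_BUFFER, cubeVertCount * 3 * sizeof(float),
                 cubeVerts, GL_STATIC_DRAW);
    glEnableVertexAttribArray(0);
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, nullptr);

    // Buffer 1: one position per point. The divisor of 1 makes this
    // attribute advance once per instance instead of once per vertex.
    glBindBuffer(GL_ARRAY_BUFFER, vbo[1]);
    glBufferData(GL_ARRAY_BUFFER, points.size() * sizeof(Vec3),
                 points.data(), GL_STATIC_DRAW);
    glEnableVertexAttribArray(1);
    glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, 0, nullptr);
    glVertexAttribDivisor(1, 1);

    glBindVertexArray(0);
    return vao;
}
```

A single glDrawArraysInstanced(GL_TRIANGLES, 0, cubeVertCount, points.size()) then draws every cube in one call, with the vertex shader adding the instance position to each cube corner.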
Sorry, I forgot to mention that I also need to be able to modify the colors of the cubes, and probably blend their colors with neighboring cubes.
zacaj: b) is what I wanted to do, and I wasn't sure if the GS could do that. With a), I don't know if I could modify the colors of the instances I'm drawing or blend them together.
It sounds like you'll have to use instancing then.

With instancing you can also specify per-instance positions/orientations/scales/colors (the shader side of that is sketched below). I'm not sure what you mean by blending colors with neighboring cubes, though. That could get hairy with instancing, depending on what you're trying to do.
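
To illustrate, a sketch of the vertex shader for that per-instance data, in GLSL (an assumption, since no API was named; locations and names are made up for the sketch). Attributes 1-3 would each be set up with glVertexAttribDivisor(loc, 1):

```cpp
// GLSL vertex shader for the instanced path (stored as a C++ string).
const char* kInstancedCubeVS = R"(
#version 330 core
layout(location = 0) in vec3 cubePos;        // per-vertex: unit-cube corner
layout(location = 1) in vec3 instancePos;    // per-instance: world position
layout(location = 2) in float instanceScale; // per-instance: size (LOD-driven)
layout(location = 3) in vec3 instanceColor;  // per-instance: color

uniform mat4 viewProj;
out vec3 color;

void main()
{
    color = instanceColor;
    gl_Position = viewProj * vec4(cubePos * instanceScale + instancePos, 1.0);
}
)";
```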

If you decide to go with the geometry shader, you'll need to pass in a position/color vertex. The geometry shader would then build the cube from the position and assign that color to all of the vertices (see the sketch below). Compared to instancing it still seems like a lot of redundant work; I hate the thought of a cube being built hundreds of times a frame for no reason, heh.
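
For completeness, a hedged sketch of what that geometry shader could look like in GLSL (again an assumption; the halfSize uniform and all names are illustrative). It assumes the vertex shader passes the world-space point through in gl_Position unchanged and forwards a per-point color:

```cpp
// GLSL geometry shader sketch (stored as a C++ string): expand each
// input point into a cube.
const char* kPointToCubeGS = R"(
#version 330 core
layout(points) in;
layout(triangle_strip, max_vertices = 24) out;  // 6 faces * 4 verts

in vec3 vColor[];   // color forwarded by the vertex shader
out vec3 gColor;

uniform mat4 viewProj;
uniform float halfSize;

// Emit one face as a 4-vertex strip; the face normal is cross(u, v).
void emitFace(vec3 c, vec3 u, vec3 v)
{
    gColor = vColor[0]; gl_Position = viewProj * vec4(c - u - v, 1.0); EmitVertex();
    gColor = vColor[0]; gl_Position = viewProj * vec4(c + u - v, 1.0); EmitVertex();
    gColor = vColor[0]; gl_Position = viewProj * vec4(c - u + v, 1.0); EmitVertex();
    gColor = vColor[0]; gl_Position = viewProj * vec4(c + u + v, 1.0); EmitVertex();
    EndPrimitive();
}

void main()
{
    vec3 p = gl_in[0].gl_Position.xyz;     // world-space cube center
    float h = halfSize;
    vec3 x = vec3(h, 0, 0), y = vec3(0, h, 0), z = vec3(0, 0, h);
    emitFace(p + z, x, y);   // +z face (normal = x cross y = z)
    emitFace(p - z, y, x);   // -z
    emitFace(p + x, y, z);   // +x
    emitFace(p - x, z, y);   // -x
    emitFace(p + y, z, x);   // +y
    emitFace(p - y, x, z);   // -y
}
)";
```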

Could you explain what you mean by blending the colors together?
So I have a terrain patch made of cubes. The color of each individual cube is determined by noise, so its color isn't influenced by neighboring cubes. This isn't a requirement, but I thought it might look cool to blend the colors of two cubes' vertices together (one way to do this on the CPU is sketched below). For example, if cube A is all red and cube B is all blue, then the edge where they meet would be a blend of red and blue.
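
One way to get that effect when building the mesh on the CPU, assuming the cubes sit on an integer grid (an assumption, as is every name here): accumulate the color of every cube touching each grid corner and average, so a corner shared by a red cube and a blue cube comes out purple.

```cpp
#include <array>
#include <map>
#include <utility>
#include <vector>

struct Color { float r, g, b; };
using GridCorner = std::array<int, 3>;  // integer lattice coordinate

// Average the colors of all cubes that share each grid corner.
// Each cube is given by its minimum-corner lattice coordinate and its color.
std::map<GridCorner, Color> blendCornerColors(
    const std::vector<std::pair<GridCorner, Color>>& cubes)
{
    std::map<GridCorner, Color> sum;
    std::map<GridCorner, int> count;

    for (const auto& [origin, col] : cubes) {
        // Each cube contributes its color to its 8 corners.
        for (int dx = 0; dx <= 1; ++dx)
            for (int dy = 0; dy <= 1; ++dy)
                for (int dz = 0; dz <= 1; ++dz) {
                    GridCorner c{origin[0] + dx, origin[1] + dy, origin[2] + dz};
                    Color& s = sum[c];          // value-initialized to zeros
                    s.r += col.r; s.g += col.g; s.b += col.b;
                    ++count[c];
                }
    }
    for (auto& [c, s] : sum) {
        float n = float(count[c]);
        s.r /= n; s.g /= n; s.b /= n;
    }
    return sum;  // look up each cube vertex's blended color when meshing
}
```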
Wouldn't a geometry shader be pretty slow compared to just telling the graphics card to render x boxes? Plus, instancing would be a lot easier/simpler.

If you are doing a Minecraft-like terrain made of cubes (that's what it sounds like), you'd probably be better off preprocessing the entire height field instead (sketched below). You can cull non-visible faces, interpolate colors across cube boundaries, and even pre-transform all of the data into world space while you generate it, removing the need to send a matrix to the shader. Done this way there is no redundant work: you generate the geometry once and never touch it again until you need to update it for whatever reason (e.g. removing a cube).

One caveat is that you cannot modify rotations on a per-cube basis like you can with instancing (but it doesn't sound like you want to do that). I believe the technique is called static vertex buffers.
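
To make that concrete, here is a rough sketch of the preprocessing pass (the toy height field and all names are assumptions): walk the grid once, emit a face only where the neighbor is empty, and write world-space positions and colors straight into one big vertex array destined for a static buffer.

```cpp
#include <vector>

struct Vertex { float px, py, pz; float r, g, b; };

// Toy stand-ins for the real data -- assumptions for the sketch.
const int SX = 32, SY = 16, SZ = 32;

// Solid below a sloped surface; out-of-range cells count as empty so
// chunk-border faces still get emitted.
bool solid(int x, int y, int z)
{
    if (x < 0 || y < 0 || z < 0 || x >= SX || y >= SY || z >= SZ) return false;
    return y <= (x + z) / 4;
}

// Per-cube color; the real version would sample the noise function.
void colorAt(int x, int y, int z, float& r, float& g, float& b)
{
    r = x / float(SX); g = y / float(SY); b = z / float(SZ);
}

// Append one quad (two triangles) for the face of cube (x,y,z) whose
// outward normal is (nx,ny,nz). Winding for backface culling is glossed
// over here; flip u and v per direction if you enable culling.
void emitFace(std::vector<Vertex>& out, int x, int y, int z,
              int nx, int ny, int nz)
{
    float r, g, b;
    colorAt(x, y, z, r, g, b);
    // Face center: cube center plus half a unit along the normal.
    float c[3] = { x + 0.5f + 0.5f * nx,
                   y + 0.5f + 0.5f * ny,
                   z + 0.5f + 0.5f * nz };
    // Two unit axes spanning the face plane: u is the normal with its
    // components rotated, v = u cross n.
    float u[3] = { float(ny), float(nz), float(nx) };
    float v[3] = { u[1]*nz - u[2]*ny, u[2]*nx - u[0]*nz, u[0]*ny - u[1]*nx };
    auto corner = [&](float su, float sv) {
        out.push_back({ c[0] + 0.5f*(su*u[0] + sv*v[0]),
                        c[1] + 0.5f*(su*u[1] + sv*v[1]),
                        c[2] + 0.5f*(su*u[2] + sv*v[2]), r, g, b });
    };
    corner(-1,-1); corner(1,-1); corner(1,1);   // triangle 1
    corner(-1,-1); corner(1,1);  corner(-1,1);  // triangle 2
}

// Build the whole chunk once; upload with GL_STATIC_DRAW (or your API's
// static-buffer equivalent) and leave it alone until a cube changes.
std::vector<Vertex> buildChunkMesh()
{
    static const int dirs[6][3] = {
        {1,0,0}, {-1,0,0}, {0,1,0}, {0,-1,0}, {0,0,1}, {0,0,-1}
    };
    std::vector<Vertex> mesh;
    for (int x = 0; x < SX; ++x)
        for (int y = 0; y < SY; ++y)
            for (int z = 0; z < SZ; ++z) {
                if (!solid(x, y, z)) continue;
                for (const int* d : dirs)
                    // Cull hidden faces: emit only when the neighbor is empty.
                    if (!solid(x + d[0], y + d[1], z + d[2]))
                        emitFace(mesh, x, y, z, d[0], d[1], d[2]);
            }
    return mesh;
}
```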
Thanks a lot for the info everyone.

This topic is closed to new replies.
