Rendering Point Cloud Data Is Slow

Started by
6 comments, last by Krypt0n 10 years, 2 months ago

Hello,

I have written a C#/Windows 7 OpenGL application which converts arbitrary objects into point cloud data. Well, it's not really a cloud since only the surface of objects gets converted into points.

I tested my program with a cube of size 2 and with a density of 0.001. It basically works: the cube consists of 23,976,006 points.
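As a sanity check on that number (and this is only my guess at how the points are counted): 23,976,006 is exactly 6 × 1999², i.e. each of the six 2×2 faces sampled at 0.001 spacing with its boundary rows/columns left out:

```c
/* Surface samples on a cube of edge `size` at grid spacing `spacing`,
 * counting each face without its boundary rows/columns (an assumption:
 * this is just one counting scheme that reproduces the number above). */
long long cube_surface_points(double size, double spacing)
{
    long long samples_per_edge = (long long)(size / spacing + 0.5) + 1; /* 2001 */
    long long interior = samples_per_edge - 2;                          /* 1999 */
    return 6 * interior * interior;                                     /* 6 faces */
}
/* cube_surface_points(2.0, 0.001) == 23976006 */
```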

I can now render my cube in two different ways: with GL_POINTS to show the point cloud, and with GL_TRIANGLES to show the original cube. When I render it with GL_POINTS, my application gets insanely slow. That makes sense, since rendering that many points is expensive. But when I switch from GL_POINTS rendering to GL_TRIANGLES rendering, my application stays very slow, although OpenGL only has to render 12 triangles now (6 sides of a cube × 2). Does the huge amount of consumed memory slow my application down?

Help would be very appreciated.


how do you define slow?

what's the usual speed on your hardware with your opengl programs?

what hardware do you use?

with some proper data, we can give you proper replies; otherwise:

Does the huge amount of consumed memory slow my application down?

yeah, I guess so.

Hi,

Maybe you could post some code snippets of your draw method?


Rendering points isn't slow, but I'm used to DirectX, so I don't know much about OpenGL.

24 million is a fair amount though; you'd need a pretty good video card for that many projections. Triangles or points, it doesn't make a difference, it's still projecting them, no?

I don't know how old your hardware is; I suggest buying a new video card (something like a 7-series GTX would be good), it could fix your problem.

Without seeing what you are doing, the best advice is just to point you to some "best practice" material. Here are the slides to a great talk from NVIDIA at Steam Dev Days on speeding up your OpenGL code (the video is on YouTube if you want the audio guide). In particular, pay attention to the buffer management portion and probably the draw-indirect stuff.

When you draw using GL_TRIANGLES, are you perhaps still sending all the point data to the GPU? While this won't render the points, there will still be some processing of the points somewhere along the line, which will cause a similar slow-down as with GL_POINTS.

i.e. are you doing something like this? (pseudocode, assuming you're using VAOs of some kind)


bool drawPoints
...

GLenum type = if drawPoints then GL_POINTS else GL_TRIANGLES

// draw the points
glBindVertexArray(pointCloud)
glDraw...(type, ... )

// draw the cube
glBindVertexArray(cube)
glDraw...(type, ... )

Whereas you should actually be doing something like this:


bool drawPoints
...

if (drawPoints) {
  // draw the points with GL_POINTS
  glBindVertexArray(pointCloud)
  glDraw...(GL_POINTS, ... )
} else
{
  // draw the cube with GL_TRIANGLES
  glBindVertexArray(cube)
  glDraw...(GL_TRIANGLES, ... )
}

While I don't know the exact details, what I've seen of other point-cloud rendering engines is that they do some serious optimisation to render only the points they need. Using some kind of search structure (I'm thinking of something akin to an octree or other data structures relevant to 3D graphics), they try to process only the points that will actually be visible, i.e. points that are more-or-less directly in line with a pixel.

Basically if you try to render every single point in your point cloud data things will get insanely slow, and thus optimisation is necessary.
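As a minimal illustration of that idea (the names and structure here are mine, not from any particular engine), a uniform-grid downsampler keeps at most one point per cell; real point-cloud renderers do the same thing hierarchically with an octree and pick the cell size from the projected pixel footprint:

```c
#include <stdlib.h>

/* A point with plain xyz coordinates (illustrative type). */
typedef struct { float x, y, z; } Point;

/* Downsample `in` (n points with coordinates in [0, extent)^3) by hashing
 * each point into a cells x cells x cells grid and keeping only the first
 * point that lands in each cell. Returns the number of points kept. */
size_t downsample(const Point *in, size_t n, float extent, int cells, Point *out)
{
    unsigned char *occupied = calloc((size_t)cells * cells * cells, 1);
    size_t kept = 0;
    for (size_t i = 0; i < n; i++) {
        int cx = (int)(in[i].x / extent * cells);
        int cy = (int)(in[i].y / extent * cells);
        int cz = (int)(in[i].z / extent * cells);
        size_t cell = ((size_t)cx * cells + cy) * cells + cz;
        if (!occupied[cell]) {          /* first point in this cell wins */
            occupied[cell] = 1;
            out[kept++] = in[i];
        }
    }
    free(occupied);
    return kept;
}
```

This is only the flat, non-hierarchical version of the idea; an octree lets you refine cells near the camera and keep them coarse far away.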

Sorry for replying so late. I was on vacation. Thank you for your answers, all are helpful.

I think it is some C#/.NET/SharpGL issue.

When you draw using GL_TRIANGLES, are you perhaps still sending all the point data to the GPU? While this won't render the points, there will still be some processing of the points somewhere along the line, which will cause a similar slow-down as with GL_POINTS.


Yes, I do something like this:


if (drawPoints)
{
    gl.Begin(OpenGL.GL_POINTS);
    Points.ForEach(x => gl.Vertex(x.x, x.y, x.z));
    gl.End();
}
else
{
    gl.Begin(OpenGL.GL_TRIANGLES);
    foreach (Testing_Environment.src.Geometry_DataStructures.Face face in m_Mesh.Faces)
    {
        gl.Normal(face.v0.Normal.x, face.v0.Normal.y, face.v0.Normal.z);
        gl.Vertex(face.v0.x, face.v0.y, face.v0.z);
        gl.Normal(face.v1.Normal.x, face.v1.Normal.y, face.v1.Normal.z);
        gl.Vertex(face.v1.x, face.v1.y, face.v1.z);
        gl.Normal(face.v2.Normal.x, face.v2.Normal.y, face.v2.Normal.z);
        gl.Vertex(face.v2.x, face.v2.y, face.v2.z);
    }
    gl.End();
}

and I create the points like this:


public void CreatePoints()
{
    for (int i = 0; i < 100; i++)
        for (int j = 0; j < 100; j++)
            for (int k = 0; k < 100; k++)
            {
                Points.Add(new Vertex3D(i, j, k, null));
            }
}

I will look in more detail at this topic.

Again thank you for your help!

your gl.Normal and gl.Vertex calls (immediate mode) are actually the slowest possible way to draw. it might not matter much if you were drawing a few very big triangles (e.g. a fullscreen rect), but with point cloud data you're most likely vertex bound, so this slow per-vertex submission is the worst case.

like jellymann assumed you already do, you should use VAOs (vertex array objects) or VBOs (vertex buffer objects); then your data is resident on the GPU and you just call "draw" once to draw all points instead of submitting them one by one. after that, all that will limit you is the GPU speed.
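The CPU side of the VBO approach can be sketched in plain C (the thread's code is C#/SharpGL, so the calls would differ; `Point` and `pack_points` are illustrative names of mine): flatten the points into one contiguous array, upload it once, and replace the per-vertex loop with a single draw call.

```c
#include <stddef.h>

/* A point with plain xyz coordinates (illustrative type). */
typedef struct { float x, y, z; } Point;

/* Flatten n points into a tightly packed x,y,z float array, ready to be
 * uploaded once with glBufferData. Returns the number of floats written. */
size_t pack_points(const Point *pts, size_t n, float *out)
{
    for (size_t i = 0; i < n; i++) {
        out[3 * i + 0] = pts[i].x;
        out[3 * i + 1] = pts[i].y;
        out[3 * i + 2] = pts[i].z;
    }
    return 3 * n;
}

/* Upload once at startup, then draw all n points with a single call
 * (legacy fixed-function path, matching the thread's gl.Begin-era code):
 *
 *   GLuint vbo;
 *   glGenBuffers(1, &vbo);
 *   glBindBuffer(GL_ARRAY_BUFFER, vbo);
 *   glBufferData(GL_ARRAY_BUFFER, 3 * n * sizeof(float), data, GL_STATIC_DRAW);
 *   glEnableClientState(GL_VERTEX_ARRAY);
 *   glVertexPointer(3, GL_FLOAT, 0, 0);
 *   glDrawArrays(GL_POINTS, 0, (GLsizei)n);
 */
```

SharpGL wraps the same entry points (gl.GenBuffers, gl.BindBuffer, gl.BufferData, gl.DrawArrays); I'm not certain of the exact overloads, so check the SharpGL docs for the signatures.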

This topic is closed to new replies.
