Archived

This topic is now archived and is closed to further replies.

Who Draws The Polygon? The Camera, or the Polygon Itself?


Recommended Posts

Like everybody else, I am starting from scratch again with a new 3D engine. I am modelling everything in an OOP way in C++. My problem is this: conceptually, who draws the polygons?

In my mind it is the camera that sees the polygons and draws them the way it sees them. It is also the one that performs backface culling, etc. This, however, creates overhead, since the camera has to use get-methods to retrieve information from the polygons. Because of polymorphism and inheritance, making the camera a friend class of the polygon and accessing the polygon's data directly is not an option.

Making the polygons render themselves seems fastest, but it feels conceptually wrong, and because of the different APIs and inheritance, the polygons would need to know in what context they are rendered. For example, most polygons would be hardware rendered by an API like OpenGL, and their render method would probably consist of just three glVertex3f statements. That requires the original caller of the render methods to keep control of the glBegin(GL_TRIANGLES) statement and to know that it only renders triangles. There are probably more cases in which the polygons need to know their rendering context, but this was the most obvious one.

Since modelling is currently more important than speed, I'm going with option number one for now. Is a mix of the two better, but less clean? What do people say? This could be a nice discussion.

Regards
Gorm

I would personally recommend an abstraction using a base class for all of your drawing functions, and deriving a class from this abstract class for each API that you want to use. Keep all actual drawing code within these classes.

Making the polygon draw itself is actually a good concept as far as OOP goes, but it is not really practical for any speed-critical architecture, because you would need a separate function call for each polygon you draw, which is very expensive, and you would lose reusability for other APIs if you decide to support D3D or something similar later.

Instead, give your drawing class a list of pointers to the polygons and optimize the inner workings of each derived class to actually draw them. If you do this, you can normally build fast linear arrays of vertices and indices and process them all with minimal function calls to things like glDrawElements, and still retain the ability to have dynamic worlds: you keep the pointers to the polygons in the display object, and when any polygon changes you notify the display object of what changed so it can update the arrays.
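A minimal sketch of this batching idea (all names here are hypothetical, and the actual glVertexPointer/glDrawElements call is left as a comment, since this is just the bookkeeping side):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

struct Vertex { float x, y, z; };

// A polygon is just three vertices here, for simplicity.
struct Polygon { Vertex v[3]; };

// Batching display object: holds pointers to polygons and keeps a
// flat vertex array that is rebuilt lazily when a polygon changes.
class DisplayList {
public:
    void Add(const Polygon* p) { polys_.push_back(p); dirty_ = true; }
    void NotifyChanged()       { dirty_ = true; }

    // Rebuilds the linear array if needed; a real renderer would then
    // hand vertices_.data() to glVertexPointer/glDrawElements.
    const std::vector<Vertex>& Flush() {
        if (dirty_) {
            vertices_.clear();
            for (const Polygon* p : polys_)
                for (int i = 0; i < 3; ++i)
                    vertices_.push_back(p->v[i]);
            dirty_ = false;
        }
        return vertices_;
    }

private:
    std::vector<const Polygon*> polys_;
    std::vector<Vertex> vertices_;
    bool dirty_ = false;
};
```

The key point is that the per-polygon function call happens only when something changes, not every frame.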

Seeya
Krippy

Heya,

OOP is good, but neither the polygon nor the camera should draw itself!

Polygons are drawn by the rasterizer! Make a base class Rasterizer, then inherit a flat, Gouraud, texture... rasterizer from it.

You should have an Engine class.
The Engine has a set of rasterizers.
The camera is just a matrix.
A Mesh or RenderObject has some polys, which are thrown into a render list after being clipped by the Clipper (a class which also does the backface culling!).
After that, the specific rasterizer is called to rasterize the render list.
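The rasterizer hierarchy described above might look something like this (a rough sketch with illustrative names; the actual scanline work is reduced to a counter so the shape of the design is testable):

```cpp
#include <cassert>
#include <cstddef>
#include <string>
#include <vector>

struct Poly { /* vertices, color, uvs... omitted */ };
using RenderList = std::vector<Poly>;

// Base class: one Rasterize() call per shading mode.
class Rasterizer {
public:
    virtual ~Rasterizer() = default;
    virtual std::string Name() const = 0;
    virtual void Rasterize(const RenderList& list) = 0;
};

class FlatRasterizer : public Rasterizer {
public:
    std::string Name() const override { return "flat"; }
    void Rasterize(const RenderList& list) override {
        // scanline-fill each poly with a single color...
        drawn += list.size();
    }
    std::size_t drawn = 0;
};

class GouraudRasterizer : public Rasterizer {
public:
    std::string Name() const override { return "gouraud"; }
    void Rasterize(const RenderList& list) override {
        // interpolate vertex colors across each poly...
        drawn += list.size();
    }
    std::size_t drawn = 0;
};
```

The Engine would hold a set of `Rasterizer*` pointers and hand the clipped render list to whichever one the current material needs.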

Hope this makes sense to you.

Think of a scene with more than 10,000 polys, each carrying its own (duplicated) render mechanism...
Also think of a scene with 10 cameras that all duplicate the same clipping and backface-culling code.

I once 'invented' a pixel class, which could set its own color all by itself...

lol
I remember that when I first started learning C++ it was around the time of my first adventure into graphics programming, and I made a pixel class too. lol. I also had a rasterline class and a color class.

And I wondered why my flat-shaded mode 13h polygons were so dang slow to draw. lol. I was cursing C++ as a language for thwarting me...

I cannot believe I didn't think of the rasterizer-manager class myself. A very good idea, though it cannot be done as cleanly as I had hoped, at least not with some APIs.
For a software engine you would go through a pipeline that collects all the visible polygons, and at the end you would render this collection. (At least this is the way I have done it, and the one that seems most logical to me.)
What bothers me with, for example, OpenGL is that it seems impossible to separate the transformations from the rendering.
Most code would go like this:

push matrix
make transformation
...
make transformation
glDrawElements
pop matrix
...
(you get the idea)

In this case the rasterizer manager would actually only set up state: turning on texture mapping, etc.

BoRReL, it is not a problem to have many polygons and cameras with render code, as only member variables, not methods, are copied when new instances are created. Nearly all compilers make objects of the same class share the same code. I do, however, see an overhead in calling the render method of, say, a polygon 10,000 times per frame.
Any comments?


Regards

Gorm

Try not making anything private. I don't know why that keyword even exists! You should be able to access any data in your program directly. Then you won't have to use those stupid, pointless Get methods.

// God.c
void main() {
    WORLD Earth;
    LIFE People = Earth.CreateLife(HUMAN);
    GiveHope(&People);
    delete Earth;
    EvilCackle();
}

IronFroggy,

The protected and private access specifiers (keywords) exist for a very good reason. Any proper OO-language should have them. It is a little something called encapsulation. By encapsulating data you ensure that it is only accessed via the methods you provide. Say you had a public pointer to the internal representation of a class. Any client of that class could just call delete on it and the actual object holding the pointer wouldn't know about it! That is very bad and one of the reasons C++ and Java have the public/protected/private keywords.
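A small hypothetical example of the point about encapsulation (the class name and members are made up): because the pointer is private, no client can delete or reseat it behind the object's back.

```cpp
#include <cassert>
#include <string>

// The internal pointer is private, so clients can only reach the
// data through the methods the class provides; they can never call
// delete on it and leave the object holding a dangling pointer.
class Name {
public:
    Name() : data_(new std::string("gorm")) {}
    ~Name() { delete data_; }

    // Read and write access go through methods the class controls.
    const std::string& Get() const { return *data_; }
    void Set(const std::string& s) { *data_ = s; }

private:
    Name(const Name&);            // non-copyable (pre-C++11 style,
    Name& operator=(const Name&); // matching the era of this thread)
    std::string* data_;           // invisible to clients
};
```

If `data_` were public, `delete n.data_;` would compile fine in client code and corrupt the object.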

gorm,

If you make your rendering functions inline, there is a good chance the compiler will inline the calls all the way down to the graphics API.


Another way to create the graphics engine hierarchy is to use the Bridge pattern. You create a class named Renderer which holds a reference/pointer to an abstract class RendererImpl. RendererImpl's interface defines methods for rendering polygons/triangles, setting colors, lights, etc. Then you declare your actual API-specific rendering classes by deriving them from RendererImpl. These classes could be called D3D_Renderer and OpenGL_Renderer.

    

----------         --------------
|Renderer|<>-------|RendererImpl|
----------         --------------
                         ^
                         |
             -------------------------
             |                       |
      --------------       -----------------
      |D3D_Renderer|       |OpenGL_Renderer|
      --------------       -----------------



Many people complain about C++ being too slow when you use classes to represent vertices or triangles. They often forget that in C you use structures to hold triangle information (3 vertices, normal, color, texture coordinates), etc. Well, a structure in C++ is basically a class with all public members. The cost of using a class only appears when you force all access through accessor methods, and even then the access time can be negligible, and is often reduced by making the accessors inline (though the inline keyword doesn't guarantee inlining).
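To illustrate (a hypothetical sketch): a C-style struct and a C++ class with inline accessors have the same data layout, and accessors defined inside the class body are implicitly inline, so a decent compiler collapses them to direct member access.

```cpp
#include <cassert>

// C-style: plain data, accessed directly.
struct VertexC { float x, y, z; };

// C++-style: same data, but private, with inline accessors.
class VertexCpp {
public:
    VertexCpp(float x, float y, float z) : x_(x), y_(y), z_(z) {}
    float x() const { return x_; }
    float y() const { return y_; }
    float z() const { return z_; }
    void setX(float x) { x_ = x; }
private:
    float x_, y_, z_;   // no virtuals, so no hidden vtable pointer
};
```

Since there are no virtual functions, both types are just three floats in memory.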

One way to optimize vertex operations is to think of the vertices as a stream. If you use vertex arrays you could, for instance, have a triangle class that serializes its internal data into the vertex array passed to it by the Renderer.

Example:

          

class RendererImpl
{
public:
    virtual ~RendererImpl() {}
    virtual void Release() = 0;
    virtual void BeginScene() = 0;
    virtual void EndScene() = 0;
    virtual void DrawTriangle(const Triangle& t) = 0;
    virtual void DrawVertexArray(const VertexArray& va) = 0;
};

class Renderer
{
public:
    Renderer(const std::string& api)
    {
        impl_ = RendererFactory::CreateInstance(api);
    }

    virtual ~Renderer()
    {
        impl_->Release();
        impl_ = 0;
    }

    void BeginScene()
    {
        impl_->BeginScene();
    }

    void EndScene()
    {
        impl_->EndScene();
    }

    void DrawTriangle(const Triangle& t)
    {
        impl_->DrawTriangle(t);
    }

    void DrawVertexArray(const VertexArray& va)
    {
        impl_->DrawVertexArray(va);
    }

protected:
    RendererImpl* impl_;
};

// Note: per the diagram, the API-specific class derives from
// RendererImpl, not from Renderer.
class OpenGL_Renderer : public RendererImpl
{
public:
    void Release() { delete this; }

    void BeginScene()
    {
        glBegin(GL_TRIANGLES);
    }

    void EndScene()
    {
        glEnd();
    }

    void DrawTriangle(const Triangle& t)
    {
        glVertex3f(t.x1(), t.y1(), t.z1());
        glVertex3f(t.x2(), t.y2(), t.z2());
        glVertex3f(t.x3(), t.y3(), t.z3());
    }

    void DrawVertexArray(const VertexArray& va)
    {
        glVertexPointer(3, GL_FLOAT, 0, va.arrayptr());
        // call glNormalPointer() etc.

        glDrawArrays(GL_TRIANGLES, 0, va.count());
    }
};



Hopefully this gives you a few ideas. The implementation above is off the top of my head (other than the Bridge pattern itself), so it may not be completely thought out.

One last word:

If you want the polygons/triangles to draw themselves, you should do something similar to the following:

             

class Triangle
{
public:
    // The renderer parameter is named rend so it doesn't clash with
    // the red color component r.
    void Render(Renderer& rend)
    {
        rend.Color(r, g, b);
        rend.Vertex3(x1, y1, z1);
        rend.Vertex3(x2, y2, z2);
        rend.Vertex3(x3, y3, z3);
    }

protected:
    // data: r, g, b, x1, y1, z1, x2, y2, z2, x3, y3, z3
};



Basically, instead of the Renderer class providing top-level methods like DrawTriangle, it provides the primitive methods like Vertex3, Color, etc. It makes defining your Renderer interface a bit more complex, because you need to find a common interface between DirectX Graphics (D3D) and OpenGL.
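An API-free sketch of that primitive-method interface (all names are hypothetical; a recording renderer stands in for OpenGL/D3D so the idea can be exercised without either API):

```cpp
#include <cassert>

// The renderer exposes primitives, not whole-triangle calls.
class Renderer {
public:
    virtual ~Renderer() {}
    virtual void Color(float r, float g, float b) = 0;
    virtual void Vertex3(float x, float y, float z) = 0;
};

// Stand-in implementation that just counts the calls; an
// OpenGL_Renderer would forward to glColor3f/glVertex3f instead.
class RecordingRenderer : public Renderer {
public:
    void Color(float, float, float) override { ++colors; }
    void Vertex3(float, float, float) override { ++vertices; }
    int colors = 0;
    int vertices = 0;
};

class Triangle {
public:
    void Render(Renderer& rend) const {
        rend.Color(0.5f, 0.5f, 0.5f);   // flat grey, for the example
        rend.Vertex3(0, 0, 0);
        rend.Vertex3(1, 0, 0);
        rend.Vertex3(0, 1, 0);
    }
};
```

The triangle draws itself, but only through the abstract interface, so it never knows which API is behind it.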

Best regards,


Dire Wolf
www.digitalfiends.com


Edited by - Dire.Wolf on July 4, 2001 1:59:57 PM

Edited by - Dire.Wolf on July 4, 2001 2:01:23 PM
