
Xarragon

OGL Hardware & Performance - How are they related?


Hi, I'm quite new to 3D programming. I started out with NeHe's tutorials some months ago, have since read several books on the subject, and have written some simple and some more advanced applications using OpenGL. I have a fairly good understanding of basic 3D concepts such as homogeneous coordinates, affine and vector spaces, linear algebra and complex numbers, and the way many of these work together. Most of this knowledge comes from web resources, but since I have just finished what should be the Swedish equivalent of US high school, on a science track, quite a bit of what we learned in school about those subjects helped a lot... OK, enough ego-stroking.

My interest right now is learning more about performance optimization and implementation. I have a small game project underway, basically a simple space shooter which I started for learning purposes. I grabbed the NVIDIA SDK hoping for some optimization tricks, but found them to be either too basic ("use display lists") or very high level.

I'm now looking into two distinct subjects. The first is how OpenGL interacts with the hardware: for example, how do I load vertex arrays into video memory? When I create a display list, is it stored in video memory?

The second is how 3D rendering is implemented in "professional" projects. How do you separate the triangle generation from the game code that receives user input? Are there any guidelines or examples for this? Do you implement a "message queue" for the renderer as well, along the lines of "draw these polygons _here_, please"? I've grabbed some source code here and there, but so far I've seen none with any real separation of game and rendering code.

The questions posed here are only examples of what I'm looking for, since I have some trouble pinning it down myself. :-) I would greatly appreciate any information or pointers to other resources. In addition, it would be interesting if anyone with experience from professional graphics programming could elaborate on how they do optimization and code separation (as described in some of the Gamasutra articles). Any reply is received with great appreciation.

Martin Persson, 18, Sweden

Might be a bit of a lame tip, but try the Quake 2 sources. They have the game engine and the renderer split into separate libs.

Also try the OpenGL Red Book (it should be out there on the web somewhere; I forget where I found it).

For "how do I load vertex arrays into video memory", check out the NVIDIA docs; somewhere there should be an explanation of the differences between display lists, vertex arrays and the like (I think it was in one of the performance docs, but I'm not sure, since it's been a while since I looked through them).
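For reference, here is a minimal sketch (standard OpenGL 1.x only, nothing vendor-specific assumed) of the three submission paths those docs compare: immediate mode, a display list, and a vertex array. Whether any of them actually ends up in video memory is entirely up to the driver; display lists just give it the best chance to keep the data resident.

#include <GL/gl.h>

static const GLfloat verts[] = {
    -1.0f, -1.0f, 0.0f,
     1.0f, -1.0f, 0.0f,
     0.0f,  1.0f, 0.0f
};

// 1. Immediate mode: every vertex travels through the driver each frame.
void drawImmediate()
{
    glBegin(GL_TRIANGLES);
    glVertex3f(-1.0f, -1.0f, 0.0f);
    glVertex3f( 1.0f, -1.0f, 0.0f);
    glVertex3f( 0.0f,  1.0f, 0.0f);
    glEnd();
}

// 2. Display list: the commands are compiled once; the driver may keep the
//    result in video memory and replay it cheaply with glCallList.
GLuint buildList()
{
    GLuint list = glGenLists(1);
    glNewList(list, GL_COMPILE);
    drawImmediate();
    glEndList();
    return list;               // later: glCallList(list);
}

// 3. Vertex array: the data sits in one block in system memory and is handed
//    to the driver in a single glDrawArrays call.
void drawArray()
{
    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(3, GL_FLOAT, 0, verts);
    glDrawArrays(GL_TRIANGLES, 0, 3);
    glDisableClientState(GL_VERTEX_ARRAY);
}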

(The Quake 2 sources are somewhat messy, so they might not be very helpful.)

Sorry for not being more specific, but it's been a while since I last looked through the stuff I mentioned.



mfg Phreak
--
"Input... need input!" - Johnny Five, Short Circuit.

Separating the renderer from the game code is not so difficult in theory. All you need to do is write a class (or a collection of routines, if you're not using C++) that provides functions which call into the underlying API.

For example, you might have a function called Renderer::drawArray, which takes vertex/texture/normal data and uses OpenGL calls to render it as a vertex array. Another routine might be Renderer::drawVerts, which renders an array of vertices using immediate-mode glVertex* calls.
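A minimal sketch of that wrapper, assuming OpenGL 1.x; the class and method names come from the description above, but the exact signatures are my own guesses:

#include <GL/gl.h>

class Renderer {
public:
    // Hand the whole batch to the API in one glDrawArrays call.
    void drawArray(const float* verts, const float* normals,
                   const float* texCoords, int vertexCount)
    {
        glEnableClientState(GL_VERTEX_ARRAY);
        glEnableClientState(GL_NORMAL_ARRAY);
        glEnableClientState(GL_TEXTURE_COORD_ARRAY);

        glVertexPointer(3, GL_FLOAT, 0, verts);
        glNormalPointer(GL_FLOAT, 0, normals);
        glTexCoordPointer(2, GL_FLOAT, 0, texCoords);

        glDrawArrays(GL_TRIANGLES, 0, vertexCount);

        glDisableClientState(GL_TEXTURE_COORD_ARRAY);
        glDisableClientState(GL_NORMAL_ARRAY);
        glDisableClientState(GL_VERTEX_ARRAY);
    }

    // Same geometry pushed one vertex at a time through immediate mode.
    void drawVerts(const float* verts, int vertexCount)
    {
        glBegin(GL_TRIANGLES);
        for (int i = 0; i < vertexCount; ++i)
            glVertex3fv(&verts[i * 3]);
        glEnd();
    }
};

The game code only ever talks to Renderer; nothing outside this class needs to include GL/gl.h.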

You could also define a class called RenderableObject, and have each Renderer::draw method accept a reference or pointer to a RenderableObject as an argument. That lets the draw method obtain all the data it needs via get methods.

By storing each object's data as intrinsic types (floats) rather than GLfloat or D3DVERTEX, you can make a renderer class compatible with both D3D and OpenGL, your own software engine, and/or any other 3D API that comes along.
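A hypothetical RenderableObject along those lines; the member and accessor names are illustrative, the point is only that the geometry lives in plain floats with no API types in sight:

#include <vector>

class RenderableObject {
public:
    // The renderer pulls everything it needs through get methods.
    // (Assumes the vectors have been filled before drawing.)
    const float* getVertices()    const { return &vertices_[0]; }
    const float* getNormals()     const { return &normals_[0]; }
    const float* getTexCoords()   const { return &texCoords_[0]; }
    int          getVertexCount() const { return (int)vertices_.size() / 3; }

private:
    std::vector<float> vertices_;   // x, y, z per vertex
    std::vector<float> normals_;    // nx, ny, nz per vertex
    std::vector<float> texCoords_;  // u, v per vertex
};

// A Renderer::draw method then only needs this abstract interface:
//     void Renderer::draw(const RenderableObject& obj) {
//         drawArray(obj.getVertices(), obj.getNormals(),
//                   obj.getTexCoords(), obj.getVertexCount());
//     }

Swapping OpenGL for D3D or a software rasterizer then means writing a new Renderer, not touching the objects.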

As for optimizations, there are several you can make. State-change reduction is a big one: state changes are expensive, so you want to make as few as possible. One method is to build a "shader tree". Basically you make a tree of state changes and attach RenderableObjects to each node. Child nodes use the states already set by their parents. You run through each node, set the states for that node, render all attached objects, then move on to the next child node. It sounds good on paper, but could take a bit of work to implement. I read an article a while back on delphi3d.net which you might want to look at (if it's still there).
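A rough sketch of that tree, with all names hypothetical: each node carries a state-setting hook plus the objects drawn under that state, and children inherit whatever their parents have already set.

#include <cstddef>
#include <vector>

class RenderableObject;                        // as sketched earlier

struct Renderer {                              // minimal stand-in for the wrapper
    void draw(const RenderableObject&) { /* drawArray(...) etc. */ }
};

struct StateNode {
    void (*applyState)(Renderer&);             // e.g. bind a texture, set blending
    std::vector<RenderableObject*> objects;    // drawn with this node's state
    std::vector<StateNode*> children;          // each adds further state on top
};

// Depth-first traversal: apply the node's state once, draw everything attached
// to it, then recurse so children only pay for the state they actually change.
void renderTree(Renderer& renderer, StateNode& node)
{
    if (node.applyState)
        node.applyState(renderer);

    for (std::size_t i = 0; i < node.objects.size(); ++i)
        renderer.draw(*node.objects[i]);

    for (std::size_t i = 0; i < node.children.size(); ++i)
        renderTree(renderer, *node.children[i]);
}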

Another obvious optimization is to include different code paths for different cards. For example, you might have several versions of your OpenGL renderer class. One version would use standard GL calls to draw arrays, while another might use vendor-specific calls to do the same (such as NVIDIA's VAR, vertex array range, extension). Check at game load time what sort of card the player has and load the appropriate renderer.
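One hedged way to do that selection, assuming the variants exist as separate classes (the class names here are made up): query the extension string once a GL context exists and fall back to the portable path if the vendor extension is missing.

#include <GL/gl.h>
#include <cstring>

class Renderer {
public:
    virtual ~Renderer() {}
    // drawArray(...) etc. as sketched earlier
};

class StandardRenderer : public Renderer { /* plain GL 1.x vertex arrays */ };
class VarRenderer      : public Renderer { /* GL_NV_vertex_array_range path */ };

// Naive substring check against the driver's extension list.
bool hasExtension(const char* name)
{
    const char* ext = (const char*)glGetString(GL_EXTENSIONS);
    return ext != 0 && std::strstr(ext, name) != 0;
}

// Called once at game load, after the GL context has been created.
Renderer* createRenderer()
{
    if (hasExtension("GL_NV_vertex_array_range"))
        return new VarRenderer();       // NVIDIA-specific fast path
    return new StandardRenderer();      // portable fallback
}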

Use glVertex2i for any 2D elements, such as GUI and text. It's not a big deal for a small number of elements, but you could see a slight speedup over glVertex2f if you have a lot of 2D rendering to do (such as a large amount of text, where each character is its own polygon).
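For example, one character quad might look like this, assuming an orthographic projection already set up in pixel units:

#include <GL/gl.h>

// Draws one textured character quad at pixel position (x, y), size w by h.
void drawCharQuad(int x, int y, int w, int h)
{
    glBegin(GL_QUADS);
    glTexCoord2f(0.0f, 0.0f); glVertex2i(x,     y);
    glTexCoord2f(1.0f, 0.0f); glVertex2i(x + w, y);
    glTexCoord2f(1.0f, 1.0f); glVertex2i(x + w, y + h);
    glTexCoord2f(0.0f, 1.0f); glVertex2i(x,     y + h);
    glEnd();
}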

There are other optimizations, of course, and a lot of info on the net. I hope this helps.




Thanks for your replies... As for the Quake 2 tip, it was quite fun to see the code that made up the game, especially the core rendering routines, and realize that it really isn't that far from what you code yourself. The download of the source was actually already underway when I wrote the original post, but nevertheless, a big thanks!

As for optimizations, I've already got quite a collection of articles, and I'm working my way through them... :-)
What I was really looking for was hardware-related documentation, I guess. When I read something like "store the vertex arrays in video memory" in the NVIDIA SDK, I wanted to know more precisely how the different GL commands store their data, and how that data is moved around hardware-wise.

BTW: does anyone know who took my MS Office CD? I need it! A lot of the docs in the NVIDIA SDK are in .ppt (PowerPoint) format, and I haven't installed it. And now the CD is long gone, it seems... :-)

Another conclusion I've come to is that... well, honestly, I'm sometimes too lazy to actually wade through source code, relying instead on the topic being covered in some article, tutorial or book. No good. I think the evening I spent reading the Quake 2 source, taking notes, and looking things up in the Blue Book and so on gave me more knowledge than three days of article reading. And besides, you always pick up one or two programming tricks along the way...

Another odd habit of mine is that when I look at other people's code, I usually find it a bit harder to read than my own (strange, isn't it...?). Quite often in the past (sure...) I found myself adapting to other people's way of coding because, I thought, since it was harder to read it must be better; the guy who wrote it is probably a better programmer. I did this a lot in programming class as well, admiring a classmate's work, until one day I discovered that my way of doing it was easier, simpler... and actually more efficient. That last point made me rethink what I was doing, and ever since, I usually try to do things my own way, and it usually works just as well. And if it doesn't, well, I still have lots of things to learn, and you can only learn from mistakes...

Sincerely, Martin "Xarragon" Persson
