engine/graphics cards/the larger picture.

Hey, I'm new to game dev (haven't started yet!) and I'm already drowning in literature, all of which seems to contradict an article I read ten minutes beforehand.

What seems most confusing to me is the concept of the engine, mainly because it changes readily depending on who's describing it. In general it seems to me the engine is actually the game: the physics, the AI, everything. Perhaps the only changeable things are the models and some scripted events; everything else happens in this "engine".

What I was wondering is where ray tracing etc. comes into the equation, so I'm not sure what the link to OpenGL/DirectX is; if an engine does all the above things, then surely the only thing left would be the input. And where do graphics cards fit into this whole equation? For example, they all boast about new shader technologies, anti-aliasing and so on. To me this sounds like things that would be done by a ray tracer, so do 3D models get "sent" raw to the graphics card, which then does all this ray tracing for you? Add to this the arrival of physics processors: is this simply another CPU, tuned for typical physics equations, i.e. acting as another CPU core? Where does one technology end and another pick up?

The reason I ask is that I'd like to combine three separate things I've been doing in my computer science degree (modelling waves, collision detection and ray tracing) and make them into an extremely simple example of a game, to the extent of simply pushing a button to drop a box into a sea wave. Any advice, or simply pointers to references that cover this kind of thing, would be greatly appreciated.
Quote:Original post by luzzeh
Hey, I'm new to game dev (haven't started yet!) and I'm already drowning in literature, all of which seems to contradict an article I read ten minutes beforehand.

This isn't uncommon. Everyone has opinions and contradictions are everywhere. I take it you aren't new to programming, just new to creating games? This is relatively important for the kind of help you will receive.
Quote:
What seems most confusing to me is the concept of the engine, mainly because it changes readily depending on who's describing it. In general it seems to me the engine is actually the game: the physics, the AI, everything. Perhaps the only changeable things are the models and some scripted events; everything else happens in this "engine".

The definition of "engine" is very fluid. The engine is the underlying framework for a game. Typically it will handle a variety of things like rendering, input, and sound, and might get as specific as providing base classes to inherit from, though this is more likely in an engine for a specific type of game that will always have similar resources. The game logic itself is not typically included in the definition of an engine; rather, it sits on top of the engine and utilizes it. The definition is loose, but in general an engine does the "behind the scenes" work, making certain tasks that always have to be done easier. For example, an engine might have a function called CreateWindow(). All the user of the engine has to do is call CreateWindow(), while the engine executes the platform-specific code. This helps to provide a cushion from all the various libraries a game might use. Also, the point of an engine is to keep it general enough to be re-usable, which is why actual game code would not be included in it.
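For illustration only, here is a minimal sketch of what such a wrapper might look like (the Engine class, the method body and the platform split are hypothetical, not taken from any real engine):

#include <string>

class Engine {
public:
    // One call for the engine user; the platform-specific code stays hidden.
    bool CreateWindow(int width, int height, const std::string& title)
    {
#if defined(_WIN32)
        // ... Win32 calls (RegisterClass, CreateWindowEx, ...) would go here ...
#else
        // ... X11 calls (XOpenDisplay, XCreateWindow, ...) would go here ...
#endif
        return true; // the sketch just assumes success
    }
};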

Quote:...I'm not sure what the link to OpenGL/DirectX is; if an engine does all the above things, then surely the only thing left would be the input. And where do graphics cards fit into this whole equation? For example, they all boast about new shader technologies, anti-aliasing and so on. To me this sounds like things that would be done by a ray tracer, so do 3D models get "sent" raw to the graphics card, which then does all this ray tracing for you? Add to this the arrival of physics processors: is this simply another CPU, tuned for typical physics equations, i.e. acting as another CPU core? Where does one technology end and another pick up?

As I hinted at before, the person writing the engine uses OpenGL or Direct3D, among many other libraries, to write the engine. The engine hides the specific functions of these libraries and might simplify the rendering code to Engine.DrawTriangle() instead of having you code all the vertices etc. yourself. Some engines might even use both OpenGL and Direct3D to render, depending on the capabilities of the computer or the preference of the programmer.
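As a sketch of that idea, assuming legacy fixed-function OpenGL underneath (the DrawTriangle function and the Vec3 struct are made up for illustration):

#include <GL/gl.h>

struct Vec3 { float x, y, z; };

// The engine user calls DrawTriangle(); the OpenGL details stay hidden.
void DrawTriangle(const Vec3& a, const Vec3& b, const Vec3& c)
{
    glBegin(GL_TRIANGLES);
    glVertex3f(a.x, a.y, a.z);
    glVertex3f(b.x, b.y, b.z);
    glVertex3f(c.x, c.y, c.z);
    glEnd();
}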

Things like Direct3D and OpenGL access the video memory present on today's video cards to perform certain operations. Since the GPU specializes in things like vertex processing and shaders, it is able to do these things much quicker than the CPU, which can spend its time processing game logic instead. Certain cards are capable of doing certain things, and that is why they advertise those features.

Quote:
The reason I ask is that I'd like to combine three separate things I've been doing in my computer science degree (modelling waves, collision detection and ray tracing) and make them into an extremely simple example of a game, to the extent of simply pushing a button to drop a box into a sea wave.

Any advice, or simply pointers to references that cover this kind of thing, would be greatly appreciated.

I don't have any references for things specific to your project, but I'll conclude with this: take some time to understand the sort of hierarchy involved in programming. Libraries like OpenGL or Direct3D provide functions to work with the video card, which actually processes the information. Without these libraries you cannot access the hardware. An engine makes use of various libraries like this to access various pieces of hardware, and provides the basic functionality required to create a game. While the definition of an engine is blurred, there is a distinction between engine and game code, with the line being drawn generally at what is re-usable. This is because the point of an engine is to give you a re-usable base to start from and to speed up the production of a game.

I just realized that's a really long post; hopefully it's all fairly clear and I didn't manage to confuse you more. ;)
I just want to add that ray tracing is not viable for games on our current hardware. The graphics card is merely rasterizing the triangles (converting them to pixels).

(Sure, there are some ray-tracing demos out there, but they inarguably look like crap and perform poorly.)
In simple terms, an 'engine' is some piece of software that does a complete job of something. For example, a physics engine does physics stuff, a graphics engine or rendering engine does graphics stuff, etc. For the sake of argument, let's limit our discussion to the "rendering engine".

Each person that codes their own rendering engine has their own take on how it should work. There are design patterns and techniques for making an engine seem streamlined and easy to use. There are also optimization techniques that may limit the structure of how an engine is built. It is all built upon personal preference, the experience of the programmer, and what is available in the language they are using.

A ray-tracing engine is from the family of rendering engines, but it has a specific purpose. It employs lots of math and advanced techniques to make a pretty picture. The link between a rendering (or ray-tracing) engine and DirectX/OpenGL is that the engine itself utilizes DirectX or OpenGL for pushing pixels onto the screen. DirectX and OpenGL are just a bunch of functions that have ties to the graphics card. They are the tools the programmer uses to make things visible and, by themselves, they are not an engine.

The graphics card is the finish line of what an engine produces. Essentially, your rendering engine does a bunch of calculations, it calls a bunch of functions in either DirectX or OpenGL, and those functions eventually push a bunch of data onto the graphics card, which then gets painted on the screen. It's a bit more complex than this, but you get the point. Ray tracing is not done on the graphics card because the card itself only works on very simple data. For example, your model of a teapot is made up of tiny points called vertices (singular: vertex). These points, when pooled together, make a mesh. This is the type of data your video card understands. Basically it takes three vertices and makes a triangle out of them, and then takes a bunch of triangles and makes a model out of them. The ray-tracing portion has to be done by you. Neither the graphics card nor the APIs you use have support for ray tracing (although some are available).
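In code terms, the kind of raw data the card understands might look like this (a sketch; real vertex formats add normals, texture coordinates and so on):

#include <vector>

struct Vertex { float x, y, z; }; // one tiny point of the model

// Three consecutive vertices make a triangle; many triangles make the mesh.
std::vector<Vertex> teapot = {
    {  0.0f, 1.0f, 0.0f },
    { -1.0f, 0.0f, 0.0f },
    {  1.0f, 0.0f, 0.0f },
    // ... hundreds more triangles for a real teapot ...
};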

A physics card, a graphics card and a CPU are all types of processors. Each one has a specific purpose and has specialized handling for data built into the hardware itself. For example, a physics processor has special built-in functions to do physics calculations. A graphics card has special processing built in for visual effects (these are called shaders). And so on. This doesn't mean you can't do physics on a regular CPU or even a GPU; you can. The downside is that it will be slower, because the hardware is not designed for that specific purpose. Likewise, ray tracing has to be done on the CPU, because there is no GPU out there that does pure ray tracing.
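For instance, here is a hedged sketch of physics done on a plain CPU: one explicit Euler step pulling a box down under gravity (the Body struct is made up for illustration; a real physics engine handles collisions, rotation and much more):

struct Body {
    float y;  // height, metres
    float vy; // vertical velocity, metres/second
};

// Advance the simulation by dt seconds.
void stepPhysics(Body& box, float dt)
{
    const float g = -9.81f; // gravitational acceleration
    box.vy += g * dt;       // velocity changes under gravity
    box.y  += box.vy * dt;  // position changes with velocity
}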

So, what does all this mean for your game? Basically you will need to build a rendering engine which can take raw vertices, do some calculations on them, plug them into the API of your choice and then display them on the screen. I believe this is your first step. Alternatively, you can grab an existing engine, such as Irrlicht, OGRE, etc. and just use that as your rendering platform. As for collision detection, you will have to write a physics engine that works off of your rendering engine. That means it will not be a separate DLL or entity; rather, it will be compiled with your rendering engine, and any update code you have will trigger an update of your physics simulation. Lastly, your ray tracing will be a subset of your rendering engine as well. It will be the portion of your engine that does ray-tracing calculations before the data is sent out to the graphics card. Personally, I cannot comment more on this, since I have never written a ray-tracing engine.

What I can suggest:
- If you want to build your own engine from scratch, pick an API (DirectX or OpenGL) and use that. Both have support libraries (such as GLUT for OpenGL and DXUT for DirectX) that can help you set up a window and get rendering.
- If you want to build your own physics simulation engine, there are many links on this site that can help you with the math.
- The same goes for ray tracing.

If you haven't used a graphics API before, take a look at some sites first and see what the code is doing:

OpenGL tutorial: http://nehe.gamedev.net/
DirectX tutorial: http://www.two-kings.de/

I'm sure there are more links scattered through this site.
------------Anything prior to 9am should be illegal.
Quote:Original post by luzzeh
Hey, I'm new to game dev (haven't started yet!) and I'm already drowning in literature, all of which seems to contradict an article I read ten minutes beforehand.


Yes, because individual experiences are very specific, and engines try to make something generic. Jack of many trades, master of none.

Too many engines also lack any concrete goals, so there's much hand-waving going on.

A game engine is nothing more than a fancy game loop:
initializeScene();
while (running) {
    processInput();
    processLogic();
    render();
}
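Fleshed out slightly with a frame timer (still a sketch; the function names are placeholders, and the dt parameter is an addition for illustration):

#include <chrono>

void run()
{
    initializeScene();
    auto last = std::chrono::steady_clock::now();
    while (running) {
        auto now = std::chrono::steady_clock::now();
        float dt = std::chrono::duration<float>(now - last).count();
        last = now;

        processInput();   // keyboard, mouse, joystick ...
        processLogic(dt); // physics, AI, scripted events
        render();         // hand the scene to the graphics API
    }
}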
The benefits of an engine come from the additional features. Some provide simple AI, others fast rendering, still others support plenty of scene definitions, and so on...

A car is a car. But some come with A/C, others come with leather seats, still others come with 4x4 drive. And a few come with all of the above. And then there's that 60-year-old shell of a car that still drives and still serves the purpose without any of those things.

But arguing which is best at a general level is counter-productive. Leather seats may be fancy, but you wouldn't want them for a family with 3 small children. And A/C may be great, but completely redundant if you live in the Arctic. Same for a sunroof.

Quote:What I was wondering is where ray tracing etc. comes into the equation


It doesn't, unless you write a ray tracer, and those are mostly CPU-based, since current graphics cards don't do ray tracing.

Quote:so I'm not sure what the link to OpenGL/DirectX is


They are how you talk to the graphics card. Two different "dialects", same task. Without them, you'd need to program graphics cards directly, in assembly, and individually for each version of each chipset and configuration.

Quote:if an engine does all the above things, then surely the only thing left would be the input.


Engines also handle the input.

Quote:And where do graphics cards fit into this whole equation?


They are the ones that make the billions of calculations you tell them to do. The boring stuff, like calculating trillions of dot products 500 times faster than the CPU could.

Quote:for example, they all boast about new shader technologies, anti-aliasing and so on; to me this sounds like things that would be done by a ray tracer
These are generally richer instruction sets, so you basically get more done with simpler commands.

Quote:so do 3d models get "sent" raw to the graphics card
Simply put, yes. Then you say: "Light is on top, there's some fog, and the camera is here", and the rendered scene appears on the screen.
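In legacy fixed-function OpenGL that sentence translates almost literally; a sketch with arbitrary values, to be placed in your render setup:

#include <GL/gl.h>
#include <GL/glu.h>

// "Light is on top": a directional light shining down.
GLfloat lightDir[] = { 0.0f, 1.0f, 0.0f, 0.0f };
glEnable(GL_LIGHTING);
glEnable(GL_LIGHT0);
glLightfv(GL_LIGHT0, GL_POSITION, lightDir);

// "There's some fog."
glEnable(GL_FOG);
glFogi(GL_FOG_MODE, GL_EXP);
glFogf(GL_FOG_DENSITY, 0.05f);

// "Camera is here": eye position, look-at point, up vector.
gluLookAt(0.0, 2.0, 5.0,  0.0, 0.0, 0.0,  0.0, 1.0, 0.0);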

Quote:which then does all this ray-tracing for you?
Current GPUs don't do ray tracing; they use scanline rendering. There's currently no practical ray-tracing GPU on the market, at least not one fit for consumers.

Quote:Add to this the arrival of physics processors: is this simply another CPU, tuned for typical physics equations?
Yes. They are also far from mainstream, and the market doesn't seem to be too excited about them, as shown by the lack of adoption among consumers.

Quote:The reason I ask is that I'd like to combine three separate things I've been doing in my computer science degree (modelling waves, collision detection and ray tracing) and make them into an extremely simple example of a game, to the extent of simply pushing a button to drop a box into a sea wave.
What you want is far from trivial, involves some nasty math, and will likely require lots of memory. This type of problem is currently under active research.

You also seem to be focused on ray tracing. This is something current graphics cards, such as those supported by DX or OGL, cannot really help you much with. You'll be doing most of it on the CPU alone. As such, you don't really need either of those.

Quote:Any advice, or simply pointers to references that cover this kind of thing, would be greatly appreciated.


SIGGRAPH is one concentration of this type of research. That gives you the current bleeding edge of such technologies.
Out of all the coding/programming forums I've used (and I have stumbled across many a problem!), I don't think I've ever got such a quick and thorough response.

Thanks a lot; it's definitely cleared up a lot of the questions I had about what goes where, etc.

I'm aware that the specific "project" I stated is by no means an easy task, and perhaps I sound more centered around ray tracing because I just built one and want to see it used for something more than making pretty snooker balls lol, but I'm definitely not drawn to that as the main project idea; it was just the best way of trying to illustrate the problems I was having conceiving "the larger picture". I just thought that by starting quite high up, and removing aspects like AI and scripting, I could use knowledge of things I'm interested in and have studied to teach myself aspects of game dev before moving on to the actual game. I'm sure, like most people wanting to get into game dev, I've had my fill of "hello world"s, although I'm sure I'll end up creating a "Tetris" or "Space Invaders".

Just to ask more specifically: say I had my mathematical equations dealing with the movement of waves, and this created a net of points which would form a polygonal/NURBS mesh, and this was updated and sent 10 times a second. I would be using the OpenGL API and passing it this model, which would then know how to send it to the graphics card, and the graphics card would "draw" this on the screen. Rather than the engine calculating what the image should look like pixel by pixel and painting that on the screen?

I definitely know I'm miles ahead of myself at the moment, but getting the larger picture helps me know what I'm working towards.
Quote:Original post by luzzeh
Rather than the engine calculating what the image should look like pixel by pixel and painting that on the screen?


Essentially, that's where OpenGL comes in. It does all your "picture making" calculations with the help of the video card's powerful processor. The point of using it is so that you don't have to do those calculations yourself on a general-purpose processor.
------------Anything prior to 9am should be illegal.
Quote:Original post by luzzeh

Just to ask more specifically: say I had my mathematical equations dealing with the movement of waves, and this created a net of points which would form a polygonal/NURBS mesh, and this was updated and sent 10 times a second. I would be using the OpenGL API and passing it this model, which would then know how to send it to the graphics card, and the graphics card would "draw" this on the screen. Rather than the engine calculating what the image should look like pixel by pixel and painting that on the screen?



Something made out of dynamic geometry that's being constantly updated is a bit of an exceptional case when it comes to real-time 3D graphics, but yes, that's basically how you would handle it. Your application code would lay out a set of data describing where the different vertices of the wave are located, along with some textures and perhaps other parameters. Then, using OpenGL function calls, you would send this data off to the GPU to be rasterized and rendered.

Note, however, that in modern graphics some or all of this might be offloaded to the GPU to be run as a shader program. For example, the application might simply specify some parameters for those mathematical equations you speak of, and those equations might be worked out to determine vertex locations in a vertex shader program running on the GPU.
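For the CPU-side version, a hedged sketch: regenerate the grid each update and hand it to OpenGL in immediate mode (the height() function here is a stand-in for your actual wave equations):

#include <cmath>
#include <GL/gl.h>

const int N = 32; // grid resolution

// Stand-in for the real wave maths.
float height(float x, float z, float t)
{
    return 0.2f * std::sin(3.0f * x + t) * std::cos(2.0f * z + t);
}

// Emit the water surface as one triangle strip per grid row.
void drawWave(float t)
{
    for (int i = 0; i < N - 1; ++i) {
        float x0 = i / float(N), x1 = (i + 1) / float(N);
        glBegin(GL_TRIANGLE_STRIP);
        for (int j = 0; j < N; ++j) {
            float z = j / float(N);
            glVertex3f(x0, height(x0, z, t), z);
            glVertex3f(x1, height(x1, z, t), z);
        }
        glEnd();
    }
}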

Quote:Original post by luzzeh

...which would form a polygonal/NURBS mesh, and this was updated and sent 10 times a second. I would be using the OpenGL API and passing it this model, which would then know how to send it to the graphics card, and the graphics card would "draw" this on the screen. Rather than the engine calculating what the image should look like pixel by pixel and painting that on the screen?


You define vertices and polygons (a mesh). You then let the graphics card know about this. Typically you can use tens, even hundreds of thousands of polygons per scene. Modern graphics cards will have no problem rendering those at 60+ fps; the main limiting factor will be scene organization.
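One common way to "let the graphics card know" is a vertex array, so the data is handed over in bulk instead of one call per vertex (a sketch using legacy OpenGL client-side arrays):

#include <vector>
#include <GL/gl.h>

std::vector<float> verts;       // x,y,z triples filled in by your mesh code
std::vector<unsigned int> tris; // three vertex indices per triangle

void submitMesh()
{
    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(3, GL_FLOAT, 0, verts.data());
    glDrawElements(GL_TRIANGLES, (GLsizei)tris.size(), GL_UNSIGNED_INT, tris.data());
    glDisableClientState(GL_VERTEX_ARRAY);
}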

The graphics card contains its own frame buffer (a bitmap the size of width by height) to which it renders. Unless you're after post-processing, you do not need to deal with that.

The simplest way to do this is simply to say:
gx_draw_mesh(....);
gx_render_scene();
whenever you want to update the scene.

Whether you call this once every second or hundred times doesn't matter, it will merely affect the "smoothness" of your animation.

The whole point of graphics APIs is that they understand the graphics primitives and common idioms. They know what a vertex, polygon, face, matrix, camera, light, orientation, dot product and all that are, so you just use them.

Graphics cards, however, are specialized, dedicated hardware. This means they do what they do, in the way they do it. You cannot start from scratch and do it differently; you are prescribed exactly how to do things.

And the black art of high-performance graphics programming comes from understanding the symbiosis between the API, system buses, various bandwidth constraints, the CPU, scheduling, parallel execution, the asynchronous nature of the code, and more...

And this is what engines try to accomplish: hide as much of these very complex and specialized topics as possible from people who just need some value out of them (such as a game).
Quote:Original post by LackOfGrace
i just want to add that raytracing is not viable for games with our current hardware.
We could even discuss whether it'll be viable EVER.
After all, as GPU Gems 2 shows, a few passes of bounced ambient occlusion are visually indistinguishable from a light transfer function computed by ray tracing.
Similarly, ray-traced reflections don't look much better than a properly done standard reflection. You cannot compare today's game reflections to research or offline stuff. I'm 100% sure it is possible to beat ray tracing in 100% of the cases on a properly designed system.
A 0-bounce raytrace looks like crap, just like standard rendering.

Don't believe it? Have a trip in Blender or whatever.

The next cards to support ray tracing? I'll wait for them, but given today's functionality and market I would rather expect vendors to work it out using already existing features.

Where's the excitement with ray tracing, really? I cannot see it.

Previously "Krohm"

