L. Spiro

Posted 17 July 2013 - 02:55 AM

They aren’t responsible for just the buffer parts.

 

The CVertexBuffer class also has an interface for filling in the vertices, normals, etc.  It has to be very dynamic and the implementation is non-trivial, since it must be as flexible as possible for every possible use-case, so in this topic, for his first engine, I simplified my explanation.
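
To make that a little more concrete, here is a minimal sketch of what such a filling interface could look like.  The class layout and names are purely illustrative, not my actual implementation:

```cpp
// Minimal sketch (hypothetical names) of a vertex buffer that owns both the
// CPU-side filling interface and, later, the GPU-side handle.
#include <cstdint>
#include <cstring>
#include <vector>

class CVertexBuffer {
public :
    // Reserve room for _ui32Verts vertices of _ui32Stride bytes each.
    void Create( uint32_t _ui32Verts, uint32_t _ui32Stride ) {
        m_ui32Stride = _ui32Stride;
        m_vData.resize( size_t( _ui32Verts ) * _ui32Stride );
    }
    // Generic write used to fill positions, normals, UV's, etc. at any offset.
    void SetAttribute( uint32_t _ui32Vert, uint32_t _ui32Offset,
                       const void * _pvSrc, uint32_t _ui32Size ) {
        std::memcpy( &m_vData[size_t( _ui32Vert ) * m_ui32Stride + _ui32Offset],
                     _pvSrc, _ui32Size );
    }
    // Only here would the data be handed to OpenGL/Direct3D and the API ID stored.
    bool Finalize() { /* glGenBuffers()/glBufferData() would go here. */ return true; }

protected :
    std::vector<uint8_t>    m_vData;          // CPU-side staging copy.
    uint32_t                m_ui32Stride = 0; // Bytes per vertex.
    uint32_t                m_ui32ApiId = 0;  // e.g. the OpenGL buffer object ID.
};
```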

 

For textures, though, a small distinction should be made.

A .PNG image could be loaded from disk, scaled up to the nearest power-of-2, and saved as a .BMP or your own custom format, all while never having been a texture seen by the GPU.

So in that case don’t confuse a CImage class with a CTexture2D (these are just example class names).

 

There are in fact so many things you can do to images, without their ever being used as textures or seen by the GPU, that in my engine there is an entire library just for loading and manipulating images (LSImageLib).

My DXT compression tool links to LSImageLib.lib and not to LSGraphicsLib.lib, even though it is clearly meant only for making textures specifically for the GPU.

 

Just because images often end up being used by the GPU, don’t stumble conceptually over the different responsibilities of images and textures.  A CImage has the task of loading and manipulating image data, possibly converting it to DXT, etc., but has no idea what a GPU or a texture is.  A CTexture2D’s job is specifically to take that CImage’s texel information and send it to the GPU.  It is a bridge between image data and GPU texture data.
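
As a rough illustration of that split (hypothetical class outlines, not the real LSImageLib/LSGraphicsLib interfaces):

```cpp
// Hypothetical outlines showing the split in responsibilities.
#include <cstdint>
#include <vector>

// Pure image data: loading, resampling, compression.  No GPU knowledge at all.
class CImage {
public :
    bool LoadPng( const char * _pcPath );   // Decode from disk (body omitted in this sketch).
    void ResampleToPow2();                  // e.g. 300x200 -> 512x256.
    bool SaveBmp( const char * _pcPath );   // Write back out; never touches the GPU.
    const uint8_t * Texels() const { return m_vTexels.data(); }
    uint32_t Width() const  { return m_ui32Width; }
    uint32_t Height() const { return m_ui32Height; }
protected :
    std::vector<uint8_t>    m_vTexels;
    uint32_t                m_ui32Width = 0, m_ui32Height = 0;
};

// The bridge: takes a CImage's texels and uploads them to the GPU.
class CTexture2D {
public :
    bool CreateFromImage( const CImage & _iImage ) {
        // glGenTextures()/glTexImage2D() (or the D3D equivalent) would go here,
        // fed by _iImage.Texels(), _iImage.Width(), and _iImage.Height().
        (void)_iImage;
        return true;
    }
protected :
    uint32_t                m_ui32ApiId = 0;   // GPU-side texture handle.
};
```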

 

 

That is how textures fit into the picture, and why vertex buffers do a little more than just hold the OpenGL ID given to them.

As far as meshes go, a “mesh” doesn’t even necessarily relate to everything that can be rendered and as such has no business being a concept of which the renderer is aware.

GeoClipmap terrain for example is not a mesh, nor is volumetric fog.

In fact, meshes relate to only a handful of the different types of renderables.

 

Each type of renderable knows how it itself is supposed to be rendered; thus renderables sit above the graphics library/module.  If, in your hierarchy of engine modules/libraries, the graphics module/library sits higher than models, terrain, etc., the hierarchy is exactly backwards.
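
One way to picture that ordering, with purely hypothetical names; the only point is who depends on whom:

```cpp
// Hypothetical sketch: renderables sit above the graphics library and use its
// primitives; the graphics library never knows what a terrain or a model is.
#include <cstdint>

class CGraphicsLib {                    // Low-level module: primitives only.
public :
    static void SetTexture( uint32_t /*_ui32Id*/ ) { /* Bind via OpenGL/D3D here. */ }
    static void DrawVertexBuffer( uint32_t /*_ui32VboId*/ ) { /* Issue the draw call here. */ }
};

class CTerrain {                        // Higher-level module: owns its resources.
public :
    void Render() const {
        // The terrain decides how it is drawn (clipmap rings, LOD selection, etc.)
        // and calls down into the graphics library, never the other way around.
        CGraphicsLib::SetTexture( m_ui32HeightTexId );
        CGraphicsLib::DrawVertexBuffer( m_ui32RingVboId );
    }
protected :
    uint32_t    m_ui32HeightTexId = 0;
    uint32_t    m_ui32RingVboId   = 0;
};
```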

 

I am not sure to what abstraction you are referring.

The graphics library exposes primitive types, such as vertex buffers and textures, that can be created at will by anything.

Above that, models, terrain, and volumetric fog create for themselves the resources they need from the primitives the graphics library exposes.

Above that, a system for orchestrating an entire scene’s rendering, including frustum culling, render-target swapping, sorting of objects, post-processing, etc., needs to exist.

 

This higher-level system requires cooperation between other sub-systems.

The only library in your engine that actually knows about every other library and class is the main engine library itself.  Due to its all-encompassing scope (indeed, its very job is to bring all of the smaller sub-systems together and make them work together), it is the only place where a scene manager can exist, since that necessarily requires a high-level and broad-scoped set of knowledge and responsibilities.

 

All objects in the scene are held within the scene manager.  It’s the only thing that understands that you have a camera and that from it you want to render all the objects, perform post-processing, and end up with a final result that could either be sent to the display or further modified by another sub-system (what happens after it has done its own job of rendering to the requested back buffer is none of its concern; it only knows that it needs to orchestrate the rendering process).

 

People are usually not confused about how logical updates happen, only about how rendering happens, but the two are exactly the same.  In logical updates, the scene manager uses its all-encompassing knowledge to gather generic physics data on each object in the scene and hands that data off to the physics engine, which does its math, updates positions, and performs a few callbacks on objects to handle events, updated positions, etc.

The physics engine doesn’t know what a model or terrain is.  It only knows about primitive types such as AABB’s, vectors, etc.
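
A rough sketch of that handoff, with hypothetical types (a real physics library such as Bullet differs in detail):

```cpp
// Hypothetical sketch of the logical-update handoff.  The physics library sees
// only primitive data (AABB's, vectors), never a model or a terrain.
#include <vector>

struct CVector3 { float x, y, z; };
struct CAabb    { CVector3 vMin, vMax; };

struct CPhysicsBody {                   // All the physics library understands.
    CAabb       aBounds;
    CVector3    vVelocity;
    void *      pvUserData;             // Opaque handle back to the scene object.
};

class CPhysicsLib {
public :
    // Integrates every body; it has no idea what the bodies actually represent.
    static void Step( std::vector<CPhysicsBody> & _vBodies, float _fDt ) {
        for ( CPhysicsBody & bBody : _vBodies ) {
            bBody.aBounds.vMin.x += bBody.vVelocity.x * _fDt;
            bBody.aBounds.vMax.x += bBody.vVelocity.x * _fDt;
            // y and z are handled the same way; collision response omitted.
        }
    }
};

// The scene manager gathers a CPhysicsBody from each scene object, calls
// CPhysicsLib::Step(), then calls back into the objects with the new bounds.
```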

 

Nothing changes when we move over to rendering.

The scene manager does exactly the same thing.  It culls out-of-view objects with the help of another class (again, a process that only requires 6 planes for a camera frustum, an octree, and AABB’s—nothing specific to any type of scene object) and gathers relevant information on the remaining objects for rendering.
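
Culling needs nothing more than those generic primitives.  A minimal sketch of an AABB-versus-frustum test (plane normals assumed to point into the frustum):

```cpp
// Minimal AABB-vs-frustum test: 6 planes, one AABB, nothing object-specific.
// Plane equation is n·p + d = 0 with normals pointing into the frustum.
struct CVector3 { float x, y, z; };
struct CPlane   { CVector3 vNormal; float fD; };
struct CAabb    { CVector3 vMin, vMax; };

bool AabbInFrustum( const CAabb & _aBox, const CPlane _pPlanes[6] ) {
    for ( int i = 0; i < 6; ++i ) {
        const CPlane & pP = _pPlanes[i];
        // Pick the corner of the box farthest along the plane normal.
        CVector3 vCorner = {
            pP.vNormal.x >= 0.0f ? _aBox.vMax.x : _aBox.vMin.x,
            pP.vNormal.y >= 0.0f ? _aBox.vMax.y : _aBox.vMin.y,
            pP.vNormal.z >= 0.0f ? _aBox.vMax.z : _aBox.vMin.z,
        };
        float fDist = pP.vNormal.x * vCorner.x + pP.vNormal.y * vCorner.y +
                      pP.vNormal.z * vCorner.z + pP.fD;
        if ( fDist < 0.0f ) {
            return false;               // Entirely behind this plane: cull it.
        }
    }
    return true;                        // Inside or intersecting: keep it.
}
```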

 

The graphics library can assist with rendering by sorting the objects, but that does not mean it knows what a model or terrain is.

It’s exactly like a physics library.  You gather from the models and terrain the small amount of information necessary for the graphics library to sort objects in a render queue: an array of texture ID’s, a shader ID, and a depth.  It doesn’t pull any strings.  It is very low-level and works with only abstract data.
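
A sketch of what such an abstract per-object record and sort might look like (hypothetical layout):

```cpp
// Hypothetical render-queue entry: only abstract data the graphics library can
// sort on, nothing that identifies a model or a terrain.
#include <algorithm>
#include <cstdint>
#include <vector>

struct CRenderQueueItem {
    uint32_t    ui32ShaderId;       // Sort first by shader to cut state changes.
    uint32_t    ui32TextureIds[4];  // Then by bound textures.
    float       fDepth;             // Then by depth (e.g. front-to-back for opaques).
    void *      pvUserData;         // Opaque pointer back to whoever submitted it.
};

// The graphics library sorts the queue; the submitters draw in the sorted order.
inline void SortQueue( std::vector<CRenderQueueItem> & _vQueue ) {
    std::sort( _vQueue.begin(), _vQueue.end(),
        []( const CRenderQueueItem & _iL, const CRenderQueueItem & _iR ) {
            if ( _iL.ui32ShaderId != _iR.ui32ShaderId ) { return _iL.ui32ShaderId < _iR.ui32ShaderId; }
            for ( int i = 0; i < 4; ++i ) {
                if ( _iL.ui32TextureIds[i] != _iR.ui32TextureIds[i] ) {
                    return _iL.ui32TextureIds[i] < _iR.ui32TextureIds[i];
                }
            }
            return _iL.fDepth < _iR.fDepth;
        } );
}
```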

 

Why?

How could Bullet Physics be integrated into so many projects and games if it knew what terrain and models were?

The physics library and graphics library are at exactly the same level in the overall engine hierarchy: Below models, terrain, and almost everything else.

 

 

L. Spiro

