OpenGL Confused: Very large environments

Posted by Anessen

I've got an idea for a 3D application, but I'm trying to understand how I'm going to get it to work. The 3D scene created with OpenGL uses 32-bit integers. What if the environment I want to render is bigger than that? I would need 64-bit integers to render a 3D model a certain distance away from the "camera". I've played games like Frontier: First Encounters that render huge environments in 3D, and I'm trying to understand how this is done without jumping through too many hoops... has anyone tried anything like this before?

I've never actually done it myself, but the principle is pretty simple: if something is close, you render a high-poly version of the model, and if something is far away, you only render a low-poly version. This is called Level of Detail (LOD).

As far as I know, OpenGL uses floats for coordinates, so I really don't see how that's a problem. If the issue is your own map format, you should consider splitting your map into regions.

Hope this helps :)

If you want to render a truly huge scene, such as a realistic-scale solar system, or worse yet, an entire universe, then you have to use some sort of coordinate hierarchy.

The idea is that you might model the Earth in metres and attach it to the solar system, which is modeled in kilometres. The solar system in turn is attached to the Milky Way, which is modeled in light-years, which is connected to the local cluster, measured in millions of light-years.

When you go to render the scene, you traverse the hierarchy downwards, and at each level a 32-bit float has plenty of precision.
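To make the hierarchy concrete, here is a minimal C++ sketch; the Frame layout and the metersPerUnit field are illustrative choices of mine, not something from this thread. Each frame stores its position in its parent's units, so no single number ever needs a huge dynamic range:

#include <array>

// One level of the coordinate hierarchy: a frame's position is expressed
// in its parent's units, and metersPerUnit records how big one unit is at
// this level (1 for metres, 1000 for kilometres, ~9.46e15 for light-years).
struct Frame {
    const Frame* parent = nullptr;              // nullptr for the root frame
    std::array<double, 3> posInParent{0, 0, 0}; // offset within the parent, in parent units
    double metersPerUnit = 1.0;                 // physical size of one local unit
};

// Absolute position of a frame's origin in metres, found by walking up the
// hierarchy and converting each offset with the parent's unit size.
std::array<double, 3> originInMeters(const Frame& f)
{
    std::array<double, 3> out{0, 0, 0};
    for (const Frame* n = &f; n->parent != nullptr; n = n->parent)
        for (int i = 0; i < 3; ++i)
            out[i] += n->posInParent[i] * n->parent->metersPerUnit;
    return out;
}

In a real renderer you would compute offsets relative to the camera's frame rather than accumulating absolute metres, so that every value handed to OpenGL stays small.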

Quote:
Original post by Anessen
The 3D scene created with OpenGL uses 32-bit integers. What if the environment I want to render is bigger than that? I would need 64-bit integers to render a 3D model a certain distance away from the "camera".


Where does it use integers?

I've had a similar problem with a game I'm working on right now. You probably won't need to go as far as I have to solve the problem. Work out how large your game world is (that is, one area of it at a time) and decide your units from there. In case you have run into an actual problem with float precision (you will want to be using floats, not integers, in OpenGL, unless maybe you're on an embedded platform using OpenGL ES), this is my solution:

In my game I've divided the world up into zones. Each zone is its own coordinate system, but it has a 3D integer vector for its location rather than a 32-bit float or matrix transform; that way I get perfect precision for a zone's location. Each zone is 1000x1000x1000 units (-500 to 500), and those local coordinates are float. I've done it like this so I have perfect precision at all times, whilst not having too many zones (a solar system is a big place). It's all stored in a hash map for quick lookup, and so I don't need zones to exist that don't actually have anything in them; they are created and destroyed dynamically as objects pass in and out of them.

Rendering far-away zones is done with some manual placement and scaling on the modelview matrix per zone. Precision doesn't really matter for rendering far-off objects, not in my case anyway: very far away zones can be rendered as a star, or not at all. For the record, it's an outer-space game. I worked out that just travelling the Earth-Moon distance would put me well outside of decent precision, and I'm working at 1 unit = 1 km! Sounds like overkill, but having 1Mm x 1Mm x 1Mm zones works nicely, as I only need good precision down to around 1m.
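A rough sketch of that zone scheme in C++ (the type names and the hash function are mine, purely for illustration): exact 64-bit integer zone coordinates, float positions within a zone, and a hash map so that only occupied zones exist.

#include <cstdint>
#include <unordered_map>
#include <vector>

// Exact integer zone coordinates: no float drift at solar-system scale.
struct ZoneCoord {
    int64_t x, y, z;
    bool operator==(const ZoneCoord& o) const { return x == o.x && y == o.y && z == o.z; }
};

struct ZoneCoordHash {
    std::size_t operator()(const ZoneCoord& c) const {
        // Simple hash combine; any reasonable mixing works here.
        std::size_t h = std::hash<int64_t>()(c.x);
        h ^= std::hash<int64_t>()(c.y) + 0x9e3779b9 + (h << 6) + (h >> 2);
        h ^= std::hash<int64_t>()(c.z) + 0x9e3779b9 + (h << 6) + (h >> 2);
        return h;
    }
};

struct Object { float localPos[3]; };        // position inside the zone, -500..500
struct Zone   { std::vector<Object> objects; };

// Only zones that actually contain something are present in the map.
std::unordered_map<ZoneCoord, Zone, ZoneCoordHash> zones;

// When an object's local position leaves the -500..500 range, shift it
// into the neighbouring zone and adjust the integer zone coordinate.
void normalize(ZoneCoord& zc, float localPos[3])
{
    const float half = 500.0f;
    int64_t* axis[3] = { &zc.x, &zc.y, &zc.z };
    for (int i = 0; i < 3; ++i) {
        while (localPos[i] >  half) { localPos[i] -= 2 * half; ++*axis[i]; }
        while (localPos[i] < -half) { localPos[i] += 2 * half; --*axis[i]; }
    }
}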

Thanks for your responses!

And yeah, I meant float for actually rendering the scene; the world coordinates of the objects are stored as integers. I will need to use LODs for objects, otherwise the poly count is going to get insane for a start.

What I am making is a game that has to model a whole solar system at a time at real scale. What I am having trouble understanding is how you can get enough precision to render this scene using float values, because the player's movement must be smooth relative to close objects, while at the same time I can see the 3D models of objects that are many thousands of kilometres away (very large planets, stars, etc.).

I understand that I can model the locations of objects in a very large environment using a coordinate hierarchy, basically subdividing out grid spaces. What I'm having trouble with is drawing that. I am quite new to 3D graphics (moving from 2D), so maybe I'm just missing something obvious here.

You need to keep track of your objects with 64-bit floating-point vectors and matrices to model an entire solar system, yet still be able to move the camera to any position and see small details on each planet. I did this in my engine for exactly this reason.

You need to perform one other "trick" to make this work with current GPUs (because they only support 32-bit floating-point numbers). When your camera is near a particular planet, you need to make that object (or the camera) the "zero point", and subtract that position from the position of every object to compute "current pseudo-world coordinate" positions. In most cases it is easier to make the camera position the origin of this coordinate system, though that is a bit problematic if your engine supports multiple cameras (like mine does). Once you convert the positions of other objects into this new coordinate system, you can convert those positions to 32-bit floating point and let the GPU shaders render as usual.

BTW, you have a choice: you can simply translate the origin of the world coordinate system to the camera position but leave the axes alone, or you can transform the coordinate system so that the axes of the camera become the axes of your new coordinate system. I prefer the former, largely because I support multiple cameras.
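A minimal sketch of that "zero point" trick; the names here are mine, and a real engine would fold this into its scene graph. The key is that the subtraction happens in double precision first, and only the small camera-relative result is truncated to float for the GPU:

struct Vec3d { double x, y, z; };
struct Vec3f { float x, y, z; };

// Subtract in double precision FIRST, then convert to float. Near the
// camera the offsets are small, so the float conversion loses almost
// nothing; far away, the float error is hidden by perspective anyway.
Vec3f cameraRelative(const Vec3d& worldPos, const Vec3d& cameraPos)
{
    return { static_cast<float>(worldPos.x - cameraPos.x),
             static_cast<float>(worldPos.y - cameraPos.y),
             static_cast<float>(worldPos.z - cameraPos.z) };
}

The scene is then rendered with a view matrix whose translation part is zero. Casting to float before subtracting would reintroduce exactly the precision problem being avoided.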

OK, I understand that. But I just tried to make a very large environment at a 1 unit to 1 metre scale... I drew a very large box (imagine drawing a box around Jupiter: 142,984,000 metres in each direction) and had a camera that I could move around, moving forwards and backwards in very large steps too (millions of metres). What I found is that there were a lot of graphical glitches, with bits of the box disappearing as I moved the camera around.

Quote:
Original post by Anessen
What I found is that there were a lot of graphical glitches, with bits of the box disappearing as I moved the camera around.
That is caused by a lack of depth-buffer precision, which is the next issue you have to deal with. Sean O'Neil has a post on the subject, and his method is the one I am using currently. Ysaneya and a few others had a neater solution using logarithmic depth buffers, which you should be able to find around GameDev.

Quote:
Original post by swiftcoder
That is caused by a lack of depth-buffer precision, which is the next issue you have to deal with. Sean O'Neil has a post on the subject, and his method is the one I am using currently. Ysaneya and a few others had a neater solution using logarithmic depth buffers, which you should be able to find around GameDev.
Note that recent GPUs and shader languages support floating-point depth buffers. I believe this is more or less equivalent to logarithmic depth buffers, except that floating-point depth buffers are now a built-in capability, and therefore require NO special code in your program or shaders.
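For reference, a 32-bit floating-point depth attachment can be created through a framebuffer object along these lines (a minimal sketch; GL_DEPTH_COMPONENT32F is core since OpenGL 3.0 via ARB_depth_buffer_float, and error checking is omitted):

#include <GL/glew.h> // or any loader exposing GL 3.0+ entry points

GLuint makeFloatDepthFramebuffer(int width, int height)
{
    GLuint depthTex, fbo;
    glGenTextures(1, &depthTex);
    glBindTexture(GL_TEXTURE_2D, depthTex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT32F, width, height,
                 0, GL_DEPTH_COMPONENT, GL_FLOAT, nullptr);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                           GL_TEXTURE_2D, depthTex, 0);
    // A colour attachment would be added here as well before rendering.
    return fbo;
}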

Quote:
Original post by maxgpgpu
Note that recent GPUs and shader languages support floating-point depth buffers. I believe this is more or less equivalent to logarithmic depth buffers, except that floating-point depth buffers are now a built-in capability, and therefore require NO special code in your program or shaders.


It doesn't matter if you're planning ahead or not - not everyone can upgrade, or wants to. It's still a very good idea to provide something for those who won't be upgrading to the most recent hardware.

Quote:
Original post by maxgpgpu
Note that recent GPUs and shader languages support floating-point depth buffers. I believe this is more or less equivalent to logarithmic depth buffers, except that floating-point depth buffers are now a built-in capability, and therefore require NO special code in your program or shaders.
I can't comment on that (although cameni agrees with you). However, I personally found that a floating-point depth buffer was insufficient even for a planetary renderer (let alone an entire solar system), which is why I am using a variation on Sean O'Neil's method. I may revisit this decision at some point in the future, as floating-point depth buffers become more common.

Quote:
Original post by swiftcoder
I personally found that a floating-point depth buffer was insufficient even for a planetary renderer (let alone an entire solar system), which is why I am using a variation on Sean O'Neil's method. I may revisit this decision at some point in the future, as floating-point depth buffers become more common.
Actually I haven't tried the floating-point depth buffer yet, but as I think about it now, the same problem can still exist to some extent for large-scale rendering. With the logarithmic Z-buffer all 32 bits are used, whereas the exponent of a floating-point number is only 8 bits. The 1/Z curve is really unfriendly in this regard.

We are all assuming f32 (single-precision) depth buffers, not f16... correct?

Do remember, most objects that are extremely far away are single pixels, unless you intend to [figuratively speaking] look through 1000-power telescopes. When objects are so far away that they are only 1 or 2 pixels in size, you really won't visually notice any z-buffer errors.

Now, some exceptions do exist, but the realities of astronomy tend to make them non-problematic. For example, the Sun and Jupiter are so large that they will still be several pixels in size even at large distances. So you could imagine watching Jupiter pass in front of or behind the Sun from Neptune, for example, with both objects larger than 1 pixel. However, their distances are so extremely different from each other that, even with problematic resolution in the z-buffer, the "depth" of the Sun and Jupiter surely will not be the same... will they? Can you give a real-universe example of a problem that a simple f32 depth buffer cannot handle correctly?

Have you tested f32 depth buffers and actually visually SEEN a problem in the rendered graphics? If so, I would be inclined to sit down, work out the math for several approaches, and pay very close attention to their consequences.

I do not understand the point of "designing for older systems", however. Unless I am missing something, only moderately new GPU hardware supports fragment shaders that let you perform the depth decision-making explicitly AND let you store, retrieve and test 32-bit integer depth values. Thus I don't see that limiting ourselves to "fairly up-to-date GPUs" can be avoided at all for our purposes.

One other option: the current generation of GPUs supports f64 variables and math, and f64 is supported in OpenCL and CUDA. So another approach is to perform the depth computation in a supporting OpenCL function. Unfortunately, I haven't studied how annoying or difficult it is to mix shader code with OpenCL code; I only know that it can be done and works.

I suppose another solution might be to keep two depth buffers, one holding the "upper 32 bits" of distance and the other the "lower 32 bits". One or both of these depth buffers would need to be held in a framebuffer object, because the built-in machinery does not seem to support two depth buffers. If we didn't need a stencil buffer (we wish), we could write fragment shader code to put 32 bits of depth information in the stencil buffer and another 32 bits in the depth buffer. On the other hand, I don't see how that's more efficient than reading and writing the extra depth information through attached framebuffers, a la depth = gl_FragData[n] and gl_FragData[n] = depth.

The simple solution, which works on all hardware and requires no math, is to draw things in 2 (or more) phases, e.g.:

clear depth
set z range from 100km -> ~100,000km
draw the far stuff here

clear depth buffer
set z range from 10m -> ~110km
draw the near stuff here
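A sketch of those two phases in C++; setProjection and the draw calls are placeholder stubs standing in for whatever the engine actually provides:

#include <GL/glew.h>

// Placeholder hooks standing in for the engine's real routines.
static void setProjection(float nearP, float farP) { (void)nearP; (void)farP; /* upload perspective matrix */ }
static void drawFarObjects()  { /* planets, distant terrain, ... */ }
static void drawNearObjects() { /* the local scene */ }

// Far slice first, then clear depth and draw the near slice on top.
// Each slice gets the full depth-buffer precision for its own near/far ratio.
void renderScene()
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    setProjection(100000.0f, 100000000.0f);   // 100km .. ~100,000km
    drawFarObjects();

    glClear(GL_DEPTH_BUFFER_BIT);
    setProjection(10.0f, 110000.0f);          // 10m .. ~110km
    drawNearObjects();
}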

Quote:
Original post by maxgpgpu
Now, some exceptions do exist, but the realities of astronomy tend to make them non-problematic. ... Can you give a real-universe example of a problem that a simple f32 depth buffer cannot handle correctly?
What you are saying would be perfectly true if the Z-buffer stored the depth value directly. The scales of things, and the distances at which they are still visible, would play nicely with a floating-point depth. But instead it stores the value of Z/W, which has abnormally high resolution close to the near plane but falls off rapidly with distance.

Hmm, let's do a quick evaluation for a scene like this: the near plane at 0.1m and the far plane at 300km (though the far plane hardly matters compared to the near-plane value). At 100km the Z/W value changes by roughly 1e-10 per metre; at 10km the derivative is roughly 1e-8. The problem is that Z/W approaches 1.0 with rising Z (and not 0.0, where precision would be plentiful), and the resolution of a 32-bit float around 1.0 is somewhere around 1e-7, if I'm computing it right. That would make f32 depth-buffer precision somewhere around 1km at that distance. Which is not what I'd expect from a floating-point depth buffer at first thought [oh]
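That resolution claim is easy to verify with a couple of lines (my own check, not from the thread):

#include <cmath>
#include <cstdio>

int main()
{
    // Spacing between adjacent 32-bit floats just below 1.0: one ulp, ~6e-8.
    float belowOne = std::nextafterf(1.0f, 0.0f);
    std::printf("ulp near 1.0: %g\n", 1.0f - belowOne);

    // With Z/W changing by ~1e-10 per metre at 100km, two surfaces must be
    // hundreds of metres apart before they map to distinct float values.
    std::printf("min separable distance: ~%g m\n", (1.0f - belowOne) / 1e-10);
}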

There might be something that can be done about it, though: as far as I know, floating-point depth values aren't clamped to 0..1, so it might be possible to set znear to a much larger value to reclaim the precision, if the clipping could somehow be handled separately.

@zedz: yes, it looks simple, but having used it previously I must say that it is slower and/or there are problems at the boundaries, so one has to manage the terrain chunks and objects there. It can be done, but I wish there were a better and simpler solution on the depth-buffer side.

>> that it is slower

Use:

clear depth
#A
glDepthRange(0.0, 0.5);
#B
glDepthRange(0.5, 1.0);

Voila, no speed loss; what you are doing with Z/W will in fact result in a speed loss.

>> there are problems at the boundaries
Perhaps, but take your example of 0.1m -> 300km. On Earth you typically can't see anything 300km away due to curvature and haze; in that picture of yours the furthest mountain is ~10km.

Thus, in such a scenario:
A/ 0.1m -> 20km // stuff on the ground
B/ 10m -> 1000km // stuff in the air

Quote:
Original post by zedz
>> there are problems at the boundaries
Perhaps, but take your example of 0.1m -> 300km. On Earth you typically can't see anything 300km away due to curvature and haze; in that picture of yours the furthest mountain is ~10km.
I have the Sun and Moon visible from the surface of the Earth, which makes at least 2 depth layers, plus at least 2 more for the Earth itself; the complexity starts to add up fast.

The bigger issue with depth layering is that it doesn't interact well with deferred rendering, because you can't reconstruct position from depth across different layers. Planetary effects such as oceans and atmospheric scattering are considerably cheaper to render with deferred shading, so I can't really afford to switch back to a forward renderer just to work around depth layers.

Quote:
Original post by cameni
What you are saying would be perfectly true if the Z-buffer stored the depth value directly. But instead it stores the value of Z/W, which has abnormally high resolution close to the near plane but falls off rapidly with distance. ... That would make f32 depth-buffer precision somewhere around 1km at that distance. ... It might be possible to set znear to a much larger value to reclaim the precision, if the clipping could somehow be handled separately.

Am I correct to infer that you are willing to require GPU cards that support vertex and fragment shaders? I assume so. Then why not perform Z-depth tests on straightforward distance values in floating point? In other words, forget Z/W... just perform depth tests based upon Z == distance.

If you have a relatively new GPU, you can store Z distances in gl_FragDepth. On older GPUs, you can attach a simple f32 monochrome framebuffer to gl_FragData[1] (or [2], or [3]) and write your Z distances there. In fact, unless I'm forgetting something, you should be able to store an f32 Z-distance value into a 32-bit integer depth buffer, as long as you configure OpenGL not to read/write/test that buffer itself (disable depth-buffering in OpenGL, and read/write/test the buffer yourself explicitly in fragment shader code).

To speed up the process, you can eliminate the divide-by-W (if it actually requires a divide operation... not certain offhand) by performing the Z/W in the vertex shader and passing the Z-distance in an interpolated out variable. This way the fragment shader gets exact, interpolated Z-depth values without any need to perform a [relatively slowish] divide operation. I use this trick in my vertex shaders for a different purpose (to pass normalized light->object vectors to the fragment shader, with true distances in the .w components).

In short, I suspect the best approach is to find a way to make the GPU perform Z-depth tests (not Z/W pseudo-depth tests) with f32 values.

Wait a second. Why not convert the transformed position.xyzw in the vertex shader to position.xyz1 (w component == 1.0000)? Then, if your application has OpenGL create an f32 depth buffer, the hardware Z/W depth test is the same as a Z depth test. No?

Quote:
Original post by maxgpgpu
In short, I suspect the best approach is to find a way to make the GPU perform Z-depth tests (not Z/W pseudo-depth tests) with f32 values.

Wait a second. Why not convert the transformed position.xyzw in the vertex shader to position.xyz1 (w component == 1.0000)? Then, if your application has OpenGL create an f32 depth buffer, the hardware Z/W depth test is the same as a Z depth test. No?
Well, the problem is that the correct value of W is required because the rasterizer has to interpolate 1/W (and texcoords/W, etc.) to perform perspective-correct texturing.

To get around that hardwired /W operation, the value of Z can be premultiplied by W in the vertex shader, or, as you say, it can be written to gl_FragDepth in the pixel shader. Writing it in the vertex shader leads to artifacts for polygons crossing the camera plane, where 1/W changes rapidly. Writing it in the pixel shader via gl_FragDepth effectively disables fast-Z rejects, although that hasn't seemed to be a problem so far. Nevertheless, I normally use the vertex-shader trick, and the pixel-shader trick only on objects close to the near camera plane.

However, unless there's something that can be done with the floating-point depth buffer setup that would render these shader tricks unnecessary, I'll bet the logarithmic depth buffer can give you better precision, due to better utilization of the 32 bits.
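For reference, the vertex-shader variant described above looks roughly like this; a sketch based on the publicly known logarithmic-depth formulation, with the constant C and the far-plane uniform as tunable assumptions. The GLSL is kept in a C++ raw string for embedding:

// After the hardware divide by w, depth becomes log(C*w+1)/log(C*far+1)
// mapped to the -1..1 clip range, instead of the usual z/w curve.
const char* logDepthVS = R"(
    uniform mat4 u_modelViewProj;
    uniform float u_far;      // far plane distance, e.g. 1.0e8
    attribute vec4 a_position;

    const float C = 1.0;      // trades precision near the camera vs. far away

    void main()
    {
        vec4 clip = u_modelViewProj * a_position;
        // Premultiply by w so the rasterizer's divide restores the log value.
        clip.z = (2.0 * log(C * clip.w + 1.0) / log(C * u_far + 1.0) - 1.0)
                 * clip.w;
        gl_Position = clip;
    }
)";

As noted above, this interpolates incorrectly for triangles crossing the camera plane, which is why the gl_FragDepth fallback is still needed for very close geometry.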

Quote:
Original post by zedz
clear depth
#A
glDepthRange(0.0, 0.5);
#B
glDepthRange(0.5, 1.0);

Voila, no speed loss ...

>> there are problems at the boundaries
Perhaps, but take your example of 0.1m -> 300km. ...
A/ 0.1m -> 20km // stuff on the ground
B/ 10m -> 1000km // stuff in the air

Of course I have been using depth-range partitioning. But I was trying to say that it is slower when I have to do all the management. On the terrain I can see mountains as far as 150km away (even more so now, because the haze is unrealistically thin), so I have terrain tiles covering that whole range. I had to split the range 3 times for that, and I could not use just the quadtree level to determine which tiles go where, because the error metric would occasionally decide that a more distant tile with larger features required a refine, resulting in z-buffer artifacts because of the overlapping depth ranges. Then there's splitting the in-air objects, etc., etc.

All in all, the logarithmic depth buffer turned out to be much easier and more elegant for me and for others doing planetary rendering, even though it's not without problems.

Wow... these threads on "super huge rendering ranges!" always make me think.
Sure, there is the distinct case of planet rendering, where you want to go from the surface of a planet, out into space, and over to another planet.
But on the surface of a world?
How far away IS the horizon? Not 150km, for sure. Given that I've been bored more than once while driving between states, I'll say for sure that lots of valleys and tall mountains give you places where you can see mountains 10-20 miles before you get to them.

There is a big difference between having a world that is 150km in size and needing to render ALL of it as visible.

Of course, that depends on what you are trying to do.
On the ground the visibility of mountains can be 20-30 miles at best, but from a plane at 14,000 feet you can see mountains 200 miles distant thanks to the thinner air.
If you want an engine capable of all this, you have to handle it somehow.

But that doesn't matter: even at 10 miles you will have problems with the depth buffer. I thought a floating-point depth buffer would handle that, but it looks like it adds precision where there was already plenty, and doesn't help much with the problematic distant part.

So it looks like the solution to the floating-point depth-buffer precision problem is easy: swapping the values of the far and near planes and changing the depth function to "greater" inverts the z/w curve so that it tends towards zero with rising distance, where the floating-point format has plenty of resolution.

I've also found an earlier post by Humus where he says the same thing, and also gives more insight into the old W-buffers and various Z-buffer properties and optimizations.
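A minimal sketch of that setup (the technique now commonly known as reversed-Z; the perspective helper is a placeholder of mine): swap near and far in the projection, clear depth to 0, and test with GL_GREATER, all with a floating-point depth buffer.

#include <GL/glew.h>

// Placeholder: builds and uploads a standard perspective matrix.
static void setPerspective(float fovY, float aspect, float nearP, float farP)
{ (void)fovY; (void)aspect; (void)nearP; (void)farP; /* build + upload matrix */ }

void setupReversedZ(float fovY, float aspect)
{
    // Swap near and far: distant geometry now maps towards 0.0, where a
    // floating-point depth buffer has enormous resolution.
    setPerspective(fovY, aspect, /*near=*/1.0e7f, /*far=*/0.1f);

    glClearDepth(0.0);        // "farthest" is now 0, not 1
    glDepthFunc(GL_GREATER);  // nearer fragments now have greater depth
    glClear(GL_DEPTH_BUFFER_BIT);
}

On later OpenGL versions one would also call glClipControl(GL_LOWER_LEFT, GL_ZERO_TO_ONE), so that clip-space Z covers 0..1 instead of -1..1 and the float exponent can do its work across the whole range; that call postdates this thread.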
