
Geometry clipmaps and array of textures

Recommended Posts

Hi all,

this is my first post on this forum.

First of all, I want to say that I've searched many posts on this forum about this specific topic, without success, so I'm writing a new one...

I'm a beginner.

I want to use the GPU geometry clipmaps algorithm to visualize virtually infinite terrains.

I have already tried vertex texture fetch with a single sampler2D, with success.


I've read many papers on the subject, and they all state that EVERY level of a geometry clipmap has its own texture. What does this mean exactly? Do I have to upload a sampler2DArray to the graphics card?

With a single sampler2D it is conceptually simple: create a VBO and IBO on the CPU (the VBO contains only the positions on the X-Z plane, not the heights) and upload the texture containing the elevations to the GPU. In the vertex shader I sample, for every vertex, the height at the corresponding UV coordinate.
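For reference, a minimal sketch of what I have working now (the uniform and attribute names are just mine, not from any paper):

#version 330 core
// The VBO carries only the X-Z grid position; the height comes from the texture.
layout(location = 0) in vec2 in_pos_xz;

uniform sampler2D u_heightmap;   // elevations
uniform vec2 u_terrain_size;     // world extent covered by the heightmap
uniform mat4 u_mvp;

void main()
{
    vec2 uv = in_pos_xz / u_terrain_size;
    // Vertex texture fetch: the LOD must be explicit in a vertex shader.
    float height = textureLod(u_heightmap, uv, 0.0).r;
    gl_Position = u_mvp * vec4(in_pos_xz.x, height, in_pos_xz.y, 1.0);
}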

But I can't imagine how to reproduce the various 2D footprints for every level of the clipmap. The only way I can imagine is the following: upload the finest texture (the entire heightmap) to the GPU, and create on the CPU, for each level of the clipmap, the 2D footprints of the entire clipmap. So on the CPU I would create all clipmap levels in terms of the X-Z plane. Sampling these values in the vertex shader is then simple using vertex texture fetch.

So, how can I sample a sampler2DArray in the vertex shader, instead of uploading a single sampler2D for the entire clipmap?



Sorry for my VERY bad English, I hope I have been clear.



OK. But say I have the complete 2D footprint of a clipmap (the 12 block meshes, the two L-shaped meshes and the cross mesh): how can I, in the vertex shader, reproduce the various scaled versions (one for each level)? The only solution I can think of is to produce on the CPU all scaled versions of the 2D vertex positions, which means that (if L is the number of clipmap levels) I produce L versions of the 2D footprints and send them to the GPU.



Posted (edited)

So there is a difference between "Geometry Clipmaps" and "Texture Clipmaps".


Geometry Clipmaps are described here: http://hhoppe.com/geomclipmap.pdf

The algorithm is a continuous LOD scheme for the mesh.


Texture Clipmaps are described here: http://developer.download.nvidia.com/SDK/10/direct3d/Source/Clipmaps/doc/Clipmaps.pdf

The algorithm handles the visualization of very large textures by using a stack of LOD textures as opposed to the usual pyramid.


I used Geometry Clipmaps in the early days (before switching over to a quad-tree-based algorithm), and I used a toroidal addressing method for the texture (see the Texture Clipmaps paper for an explanation).
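On the shader side, toroidal addressing boils down to letting the texture coordinates wrap while the CPU rewrites only the rows and columns of texels that scrolled into view. A rough sketch of what the lookup amounts to (the function and parameter names are mine, and the update logic is in the paper):

// With the sampler set to GL_REPEAT the wrap is automatic and the
// fract() is redundant, but it makes the toroidal lookup explicit.
float sampleClipmapHeight(sampler2D heights, vec2 pos_world, float levelWorldSize)
{
    vec2 uv = fract(pos_world / levelWorldSize);
    return textureLod(heights, uv, 0.0).r;
}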


To answer your question: you can scale and offset your parts in the vertex shader using data from a constant buffer, e.g.:

pos_base  = pos_input_vs + meshOffset;        // place the footprint within the clipmap ring
tex_coord = pos_base.xy;                      // texture coordinate for the height lookup
pos_world = pos_base * LODscale + LODoffset;  // scale and offset into the level's world position

Here, meshOffset places the part (block, L-shape, cross) at the right location within the level, and LODscale and LODoffset get it to the right size and position in the world.
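Put together with a texture array holding one layer per level (the sampler2DArray you asked about), a complete vertex shader could look roughly like this; all names are illustrative, and the uniforms and toroidal texture updates still have to be driven from the CPU:

#version 330 core
layout(location = 0) in vec2 in_pos_xz;   // one shared 2D footprint vertex

// Per-draw constants, set before drawing each footprint of each level.
uniform vec2  u_meshOffset;    // places this block / L-shape within the ring
uniform float u_LODscale;      // grid spacing of this level, e.g. 2^level
uniform vec2  u_LODoffset;     // world-space offset of this level
uniform int   u_level;         // array layer holding this level's heights
uniform sampler2DArray u_heights;
uniform mat4  u_viewProj;

void main()
{
    vec2 pos_base   = in_pos_xz + u_meshOffset;
    vec2 uv         = pos_base / vec2(textureSize(u_heights, 0).xy);
    float height    = textureLod(u_heights, vec3(uv, float(u_level)), 0.0).r;
    vec2 pos_world  = pos_base * u_LODscale + u_LODoffset;
    gl_Position     = u_viewProj * vec4(pos_world.x, height, pos_world.y, 1.0);
}

The point is that the same VBO and IBO are reused for every level and every footprint; only a handful of uniforms changes per draw, so nothing has to be rebuilt on the CPU.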



Edited by semler

