gboxentertainment

OpenGL Voxel Cone Tracing Experiment - Progress


Hi all,

 

I thought I might share with you all the latest progress of my voxel cone tracing engine, implemented in OpenGL 4.3 and based on Alex Nankervis' voxel cone tracing method (http://www.geeks3d.com/20121214/voxel-cone-tracing-global-illumination-in-opengl-4-3/) - except that I have built my own engine from scratch and implemented a number of things differently.

 

Here are a number of screenshots showing my results so far. Of course, there are many things that still need to be improved, and you will notice a number of artifacts in my test engine:

 

Basic scene with cone-traced soft shadows, specular reflections, and specular highlights from the light:

[attachment=13661:gibox0.jpg][attachment=13662:gibox1.jpg]

 

Single-bounce GI vs "unlimited" multi-bounce GI ("unlimited" in quotes because the intensity of each subsequent bounce converges towards zero):

[attachment=13663:gibox2-1.jpg][attachment=13664:gibox2.jpg]

 

First voxelization of the scene (done only on the first frame), and then the revoxelization loop after cone tracing for "unlimited" bounces:

[attachment=13665:gibox2-2.jpg][attachment=13666:gibox2-3.jpg]

 

Just showing off the quality of the specular reflections:

[attachment=13667:gibox3.jpg][attachment=13668:gibox4.jpg]

 

and finally...

Single-bounce emissive vs multi-bounce emissive ([edit] I made a mistake here: I accidentally had the green wall outside of the scene volume, so it was not correctly lit, which is why there is no color bleeding). The last image shows that there is a major problem with the revoxelization of the emissive scene, which leads to flickering artifacts - although I guess this may not be a problem in a real game, because it can pass for a flickering-lights effect:

[attachment=13669:giboxemissive1.jpg][attachment=13670:giboxemissive0.jpg][attachment=13671:giboxemissive2.jpg]

 

To give you guys an idea of the scale of this scene, here are the specs:

  • 64x64x64 voxels for the entire scene
  • Runs at around 30-35fps at 1024x768 on my GTX 485M / i7-2720QM with 8GB RAM, Windows 7 64-bit. Drops to about 23fps if I get close to the Buddha model.
  • 64-bit OpenGL 4.3
  • 1 3D texture using dominant-axis voxelization (plus a second 3D texture for image atomic average operations, which reduce flickering artifacts significantly but don't eliminate them - a sketch of this atomic average follows the list).
  • (4 diffuse cones traced + 1 specular cone traced + 1 shadow cone traced in the direction of the light) x 2 (the second time during revoxelization, to achieve "unlimited" multiple bounces) - a rough sketch of this cone setup also follows the list.
  • Buddha model is the most costly object, with over 500,000 vertices.
  • I also apply the lighting and shadows prior to voxelization (and it is applied in each pass). Until someone comes up with a convincing explanation, I don't see any advantage to splatting light into the 3D texture (which is not as accurate) after voxelization from a lightmap texture that has world position information in it.
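
To make the cone budget in the list above more concrete, here is a rough sketch of what the per-pixel setup looks like. It is simplified for illustration - coneTrace() stands in for the actual tracing routine, and the cone ratios, distances and the way the terms are combined are placeholders rather than my exact values:

uniform vec3 lightPos;
uniform float voxelSize;	// world-space size of one voxel

// accumulates color in .rgb and opacity in .a; defined elsewhere
vec4 coneTrace(vec3 origin, vec3 dir, float coneRatio, float maxDist);

vec3 shadePoint(vec3 P, vec3 N, vec3 V, vec3 directLight)
{
	// build a tangent basis so the 4 diffuse cones can be spread around the normal
	vec3 T = normalize(abs(N.y) < 0.99 ? cross(N, vec3(0,1,0)) : cross(N, vec3(1,0,0)));
	vec3 B = cross(N, T);

	// 4 wide diffuse cones
	vec3 indirect = vec3(0.0);
	indirect += coneTrace(P, normalize(N + T), 1.0, 2.0).rgb;
	indirect += coneTrace(P, normalize(N - T), 1.0, 2.0).rgb;
	indirect += coneTrace(P, normalize(N + B), 1.0, 2.0).rgb;
	indirect += coneTrace(P, normalize(N - B), 1.0, 2.0).rgb;
	indirect *= 0.25;

	// 1 narrow specular cone along the reflection vector
	vec3 specular = coneTrace(P, reflect(-V, N), 0.1, 2.0).rgb;

	// 1 shadow cone towards the light; the accumulated opacity approximates occlusion
	float occlusion = coneTrace(P + N*voxelSize, normalize(lightPos - P), 0.05, 2.0).a;

	return directLight*(1.0 - occlusion) + indirect + specular;
}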

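And for anyone wondering what the image atomic average mentioned above looks like, it is essentially the moving-average write from Crassin & Green's OpenGL Insights chapter - reproduced here from memory, so treat it as a sketch rather than my exact code:

// RGBA8 moving-average write into a second, r32ui-viewed 3D texture.
layout(r32ui) coherent volatile uniform uimage3D voxelAvgImage;

vec4 convRGBA8ToVec4(uint val)
{
	return vec4(float(val & 0x000000FFu), float((val & 0x0000FF00u) >> 8u),
	            float((val & 0x00FF0000u) >> 16u), float((val & 0xFF000000u) >> 24u));
}

uint convVec4ToRGBA8(vec4 val)
{
	return (uint(val.w) & 0x000000FFu) << 24u | (uint(val.z) & 0x000000FFu) << 16u |
	       (uint(val.y) & 0x000000FFu) << 8u  | (uint(val.x) & 0x000000FFu);
}

void imageAtomicRGBA8Avg(ivec3 coords, vec4 val)
{
	val.rgb *= 255.0;	// store color in the 0-255 range
	val.a = 1.0;		// the alpha channel doubles as a sample counter
	uint newVal = convVec4ToRGBA8(val);
	uint prevStoredVal = 0u;
	uint curStoredVal;

	// keep retrying until no other fragment modified the voxel between our read and write
	while ((curStoredVal = imageAtomicCompSwap(voxelAvgImage, coords, prevStoredVal, newVal)) != prevStoredVal)
	{
		prevStoredVal = curStoredVal;
		vec4 rval = convRGBA8ToVec4(curStoredVal);
		rval.rgb = rval.rgb * rval.a;	// undo the previous average
		vec4 curValF = rval + val;	// accumulate the new sample
		curValF.rgb /= curValF.a;	// re-average
		newVal = convVec4ToRGBA8(curValF);
	}
}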
Things that I need to improve (hopefully I can get some advice from this community):

 

  • If you look at the specular reflection under the red box, part of it is missing, which I think is caused by some opacity problem during the cone trace of the base mip level:

[attachment=13672:gibox0-1.jpg][attachment=13673:gibox0-2.jpg]

 

  • You will have noticed that the soft shadows are very voxelized and have holes in them - again, some opacity issue that might be related to the red-box specular reflection problem. What I had to do was increase the step size of just the initial cone-trace step, because with the original smaller step size the shadow cone tracing produced the self-shadowing artifacts shown in the second image, even though it gave more accurate shadows without holes in them (a sketch of this tweak follows the images below):

[attachment=13675:giboxshadows3.jpg][attachment=13676:giboxshadows4.jpg]
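
For reference, the tweak is basically just starting the shadow cone a few voxels away from the surface - something along these lines (sketch only; startScale, the cone ratio and the stepping are placeholders):

uniform sampler3D voxelTex;
uniform float voxelSize;	// world-space size of one base-level voxel

vec3 worldToVoxelUVW(vec3 p);	// world position -> [0,1]^3 texture coords, defined elsewhere

float shadowConeTrace(vec3 origin, vec3 dirToLight, float coneRatio, float maxDist)
{
	float startScale = 3.0;			// bigger start offset -> no self-shadowing,
	float dist = voxelSize * startScale;	// but the shadow loses detail near the occluder
	float occlusion = 0.0;

	while (dist < maxDist && occlusion < 1.0)
	{
		float diameter = max(voxelSize, coneRatio * dist);
		float mip = log2(diameter / voxelSize);
		float a = textureLod(voxelTex, worldToVoxelUVW(origin + dirToLight * dist), mip).a;

		occlusion += (1.0 - occlusion) * a;	// front-to-back opacity accumulation
		dist += diameter * 0.5;			// step grows with the cone diameter
	}
	return occlusion;
}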

 

  • In the images above of the emissive test, emissive objects really bring out the incorrect filtering, because I believe I am using 2x2x2 bricks during the mip-mapping process. Another explanation could be that I am not distributing my four diffuse cones evenly enough.
  • In some cases, activating multiple bounces actually makes the lighting look worse than a single bounce, because the scene progressively reduces in intensity with each bounce. However, I think I can address this by turning up the intensity of the direct lighting in each bounce pass.
  • I need to implement an octree structure using 3x3x3 bricks.
  • I will probably implement some sort of performance debugging which shows the cost in ms of each action.

I'd like to know how you do the multi-bounce with that method? Great work :)

Good question - I actually couldn't believe it myself when I heard about unlimited bounces. The way you do a second bounce with voxel cone tracing is by revoxelizing the scene after the first bounce - this captures all of the lighting, including the indirect lighting gathered by the first trace. A second mip-map filtering pass then filters this captured indirect+direct lighting through the 3D texture's mip levels. The second cone trace then picks up this indirect+direct lighting, and thus a second bounce automatically results - quite clever really.

 

If you re-use the same 3D texture for every bounce, you can just loop the process so that with every frame, you are voxelizing a scene that has the lighting information from the previous frame already captured.
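
In shader terms, the only real change for the extra bounces is in the revoxelization fragment shader: instead of writing just the directly lit color into the 3D texture, you write direct plus the indirect that you cone trace out of last frame's already-filtered texture. Roughly like this (a sketch, not my exact shader - the real pass uses the image atomic averaging mentioned earlier rather than a plain imageStore, and the helper functions are placeholders):

layout(rgba8) writeonly uniform image3D voxelImage;	// grid being filled this frame
uniform sampler3D prevVoxelTex;				// last frame's filled + mip-mapped grid

in vec4 worldPos;
in vec4 fNorm;

vec3 computeDirectLighting(vec3 p, vec3 n);		// spotlight + shadow, defined elsewhere
vec3 diffuseConeTrace(sampler3D vox, vec3 p, vec3 n);	// same diffuse cones as the final gather
vec3 worldToVoxelUVW(vec3 p);				// world -> [0,1]^3 texture coords

void main()
{
	vec3 N = normalize(fNorm.xyz);
	vec3 direct   = computeDirectLighting(worldPos.xyz, N);
	vec3 indirect = diffuseConeTrace(prevVoxelTex, worldPos.xyz, N);	// this is the previous bounce

	ivec3 coord = ivec3(worldToVoxelUVW(worldPos.xyz) * vec3(imageSize(voxelImage)));
	imageStore(voxelImage, coord, vec4(direct + indirect, 1.0));
}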

 

Of course, in my implementation it is somewhat sensitive, and there are still a lot of issues that require a lot of tweaking.

Lightness1024

Oh I see. thanks :)

Because in the original paper, the explanation of the multi-bounce technique is lost in the middle of a very long discussion about how tricky it is, rather than covering the fundamentals - and frankly I never got it. Now it makes sense, though it sounds painfully slow.

Is there a way to optimize the reconstruction?

Maybe on the basis that the voxels will have the same geometric source, and therefore the same positions, and the octree will have the same structure - so perhaps just re-feed the lighting into the existing voxels?

..

Ah wait, you're using a grid and not an octree - that may accelerate things quite a lot.

We have yet to see somebody other than Crassin do an octree implementation of that method. I think we can wait... because frankly, from the paper it sounds like something that would take months of blind debugging. The note about the iterative pass system for constructing the octree's nodes, with waiting queues of threads that were causing collisions on the previous pass, is crazy. Not to mention the goddamn "brick" stuff (the 3x3x3 clusters) - that one is just ultimate crazy. I can't believe anyone can get this implementation right. Even Crassin - there must be bugs in his stuff.


Ah wait you're using a grid, and not an octree, that may accelerate things quite a lot.

 

Yes, you are correct. Unlimited bounces require dynamic voxelization, and with an octree structure that would just be too expensive to do every frame for an entire scene. Because I'm not using an octree structure, my implementation is only as expensive as doubling the number of cone traces (the scene only needs to be revoxelized and mip-mapped once per frame).

 

I was originally going to try to implement an octree structure; however, after hearing about sparse textures in the new AMD GPUs, I'm not sure whether it would be worth the time and effort. Unless someone can prove me wrong, I believe that sparse textures may make octrees irrelevant - or alternatively, for extremely large scenes, you could use some sort of high-level (low-detail) octree structure and use sparse textures for the finer details.

 

Then again, if the next generation of consumer NVIDIA cards gets dynamic parallelism (which is currently only available on the overvalued Teslas), then I believe octrees could become very fast and it would be reasonable to build them per frame (e.g. the galaxy clusters tech demo).


I've recently discovered that the previous images I posted of a purely emissive scene were set up incorrectly - I had mistakenly left some aspect of the spotlight in my calculations, so it was not truly an emissive scene.

 

It took a bit of tweaking to get the purely emissive lighting working properly:

[attachment=13744:giboxemissive3.jpg]

 

The infinite-bounce lighting seems to be extremely sensitive to adjustments in parameters. If I got it wrong, this occurs:

[attachment=13748:giboxemissivewrong.jpg]

 

Here's another problem that I just can't seem to fix:

[attachment=13745:giboxemissive4.jpg][attachment=13746:giboxemissive5.jpg]

You will notice that when the Buddha is placed to the left of the light-sphere, there is a volume of space where, if he falls within it, he blocks out all light to the red wall. This does not occur at all when he is to the right of the light-sphere.

It occurs in the single-bounce scene as well, so it must be some sort of mip-mapping error. I've tested the scene with only one cone traced along each surface normal and the problem still occurs, so it can't have anything to do with not distributing the cones evenly enough.

 

Another issue you will notice is that the red wall seems to be much better lit than the blue wall, even though the sphere is slightly closer to the blue wall. If I move the red wall a little further to the right, the lighting becomes a bit more even:

[attachment=13747:giboxemissive6.jpg]

This has nothing to do with proximity to the light - it seems to be something to do with the mip-map filtering or voxelization, where objects in the wrong locations are not voxelized or mip-map filtered correctly.

 

The key difficulty in implementing physically correct emissive lighting is the amount of tweaking required across all of the indirect lighting parameters. The mip-map filtering that I have done really loses its effect at higher intensities, and when I move the light-sphere there is no smooth transition between higher-level voxels, so you get jumps in the lighting. I guess emissive lighting can only be used subtly, as it would be unfeasible for lighting an entire scene. I would probably also put a limit on the number of bounces.

Lightness1024

I think like you, but it is common, as you found, to discover bugs by random chance and to improve quality by fixing them.

Though sometimes it happens that fixing a bug degrades quality, or worse, kills all of your parameter fine-tuning and you have to redo everything.

It happened to me I don't know how many times when I implemented light propagation volumes.

I made it support the sun, then fine-tuned it nicely. Then I added support for spot lights, and fine-tuning the spots broke the sun, and then there was no way to make them work together because of the huge intensity differences - the spot would be ridiculously weak and contribute nothing to the indirect solution.

It caused accumulation problems, because LPV works by accumulating Reflective Shadow Map (VPL) pixels into the light volume, and if you add 1000 VPLs to one voxel you must choose your range very carefully so that it respects the sensitivity of the FP16 format. That caused me terrible color tints, with curious jumps from green to pink when I varied a light's intensity, just because some floating-point increment would jump at different times for different color channels.

So the "fix" is to rescale everything, but doing that is hell. Nothing can stick to the theoretical values that would ensure energy conservation, for instance, because none of those values work in practice. (For example, the flux should be solid-angle * intensity, so the flux through one face of a cube is 4*PI/6 * intensity; using this formula creates energy gain, so an epsilon has to be used to empirically modulate it so that the propagation attenuates nicely...)

All of this shi** is hell on earth, and really makes one hope that we could simply do stupid Monte Carlo path tracing each frame and be done with it.

one day....


really makes one hope that we could simply do stupid monte carlo path tracing at each frame and be done with it.

 

I've thought about implementing real-time path tracing with random parameter filtering to minimize the noise, but I got lost in the process of creating an acceleration structure. Now I've gone back to VCT, and I probably still need to implement an acceleration structure (octrees) anyway. Maybe one day I'll get back into path tracing. In the meantime, I've gotten too far into this VCT engine, and it's at least getting some half-decent results despite the many artifacts. It doesn't have to be perfect, but I guess it would be good to iron out as many of the artifacts as possible.

 

Anyhow, here's my latest results on standard lighting + colored emissive lighting in the same scene:

[attachment=13815:giboxemissive7.jpg][attachment=13816:giboxemissive8.jpg]

 

The main question is: would emissive objects still cast shadows from the spotlight in the scene? Anyhow, I've worked out a way of making objects not cast a shadow when they are emissive. As for having non-emissive objects cast shadows from the emissive object, this implementation probably doesn't have the accuracy for that to be possible; in any case, I cannot make the emissive object bright enough with the current mip-map level fade ratio, so I will need to add some sort of emissive intensity factor to the mip-map process.

 

[EDIT]

So it turns out that emissive objects do cast shadows provided that the object is not brighter than the scene light.

[attachment=13817:giboxemissive9.jpg]

 

Also, I forgot to mention that when I attempt to make the emissive object physically based, those artifacts appear with multiple bounces - something to do with opacity issues again.

 

[EDIT 2]

I've also just realised that in order to increase the emissive object's intensity, I would need to find a way of altering the mip-map of that particular object. Because the mip-map filtering process takes the entire scene (inside the 3D texture), objects are not treated individually. I think the only way around this is to have an emissivity falloff factor included in each voxel during voxelization (see the sketch below).
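
Something along these lines at voxelization time is what I have in mind (just a sketch of the idea - the uniform names are placeholders, and a higher-precision voxel format would be needed so that intensities above 1 survive):

// Scale emissive surfaces by their own intensity factor when they are written into
// the voxel grid, so the boost survives the shared mip-map filtering pass.
layout(rgba16f) writeonly uniform image3D voxelImage;	// needs more than 8 bits so intensity > 1 isn't clamped
uniform bool  isEmissive;		// per-object flag (placeholder)
uniform float emissiveIntensity;	// per-object factor (placeholder)

void storeVoxel(ivec3 coord, vec3 litColor, vec3 emissiveColor)
{
	vec3 radiance = isEmissive ? emissiveColor * emissiveIntensity : litColor;
	imageStore(voxelImage, coord, vec4(radiance, 1.0));
}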


Here's transparency:

[attachment=13828:giboxtransparency0.jpg][attachment=13829:giboxtransparency1.jpg]

 

Not entirely sure how correct this is - the back face seems to be on top in the depth order. I've tried placing the code:

	glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
	glEnable(GL_BLEND);

 

both before and after enabling the depth test, but it doesn't make any difference. You will notice that when I go behind the glass Buddha only his back shows, whereas when I am in front of him both his back and front faces are shown. There is no back-face culling that I am aware of.

 

Other things I have to add are refractions and caustics.


Cone-traced refraction without transparency (Translucency):

 

[attachment=13830:giboxrefraction.jpg]

 

So it seems that I've got three choices:

  1. Render using alpha blended transparency which cannot provide refractions (but still looks cool).
  2. Cone trace the refraction by copying the specular cone trace and replacing the reflection vector with a refraction vector (see the snippet below) - very slow (13fps with the Buddha) and still very blurry even with a 0.01 cone ratio. From some view directions the object does not look translucent at all.
  3. Render refraction using cube maps - may provide most accurate image but I'd imagine this would also be quite slow. I might give this a go though.
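
For option 2, the change really is just the direction that gets fed to the cone trace - something like this (sketch; eta, the offset and the cone ratio are placeholders):

uniform float eta;		// ratio of refractive indices, e.g. 1.0/1.5 for air -> glass
uniform float voxelSize;

vec4 coneTrace(vec3 origin, vec3 dir, float coneRatio, float maxDist);	// defined elsewhere

vec3 refractionCone(vec3 P, vec3 N, vec3 V)
{
	vec3 dir = refract(-V, N, eta);		// built-in GLSL refraction vector
	if (dot(dir, dir) == 0.0)		// refract() returns 0 on total internal reflection
		dir = reflect(-V, N);
	return coneTrace(P + dir * voxelSize, dir, 0.01, 2.0).rgb;
}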

[attachment=13831:giboxrefraction1.jpg]

 

The sphere is a lot faster, as expected [27 fps]

I still would like to get clearer refractions.

 

Here's a translucent cube:

[attachment=13832:giboxrefraction2.jpg]

 

One thing that is missing from cone-traced refraction is that the back faces are not captured - I'm not sure if there is a way to do this, or to fake it.

FreneticPonE

I'm going to go ahead and say "accurate refraction, who cares?" I don't particularly see the appeal of it - not at the performance hit being talked about, and not over a simple screen-space refraction anyhow.

 

It's nice in theory, but if you actually look at the results it's not really that spectacular unless you are looking at some extremely specific object designed to show it off.

 

A thing that might be worth exploring more is voxel anti-aliasing. If your mesh partially covers a voxel's space, might it be better to have that voxel "partially" filled? Assuming you can get it working, I wonder what the cone-traced shadows would look like. At best it might also help with emissive stuff "popping" into a neighboring voxel, and its associated light popping as well. You could get as many emissive objects as you'd want in a scene; unlimited shadow-casting lights definitely sounds like a worthy goal, even if the shadowing is going to be a bit blocky or something.

 

Great work so far :)


Really I've just been trying to get the most accurate refractions I can, for the sake of seeing if I can. But you are correct, I think I will just push this one aside - being a natural engineer, I tend to become a bit too obsessed with the little details, although that's what got me into graphics programming in the first place.

 

More importantly though, I do need to work out a way of either getting more accurate cone-traced soft shadows, or somehow masking the very voxelized core of the shadow that is closest to the occluder. I think the latter will be more achievable:

 

My idea is to render a shadow map and blend it with the cone-traced soft shadow. What I would start off doing is fading out the shadow map at a certain distance away from each occluder. I remember being able to do this with Unreal Engine 3 shadows, but I never understood the technical specifics of it. Does anyone know of a way of doing this?
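
What I have in mind is roughly the following: use the gap between the receiver's light-space depth and the depth stored in the shadow map as a proxy for distance from the occluder, and blend from the crisp shadow-map result towards the cone-traced soft shadow as that gap grows. A rough sketch (the bias and fade range are placeholders, and the depths would need to be in consistent units):

uniform sampler2D shadowMap;
uniform float fadeRange;	// distance over which the hard shadow fades out

float blendedShadow(vec4 shadowCoord, float coneTracedShadow)
{
	vec3 proj = shadowCoord.xyz / shadowCoord.w;
	float occluderDepth = texture(shadowMap, proj.xy).r;
	float receiverDepth = proj.z;

	float hardShadow = (receiverDepth - 0.002 > occluderDepth) ? 0.0 : 1.0;

	// 0 right next to the occluder (keep the crisp shadow-map result),
	// 1 far from it (keep only the cone-traced penumbra)
	float fade = clamp((receiverDepth - occluderDepth) / fadeRange, 0.0, 1.0);
	return mix(hardShadow, coneTracedShadow, fade);
}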

 

[attachment=13839:givox7.jpg]

 

Another thing I have to resolve: if you look at the translucent cube, you will notice its shadow is incorrect. This is related to the issue in the following image, where the reflection of the red box shows only the top and side voxels as opaque, while anything in shadow is transparent:

 

[attachment=13840:gibox0-1.jpg]

 

Technically, the base mip should be completely opaque, with an alpha value of 1 - yet even when I try forcing the base-mip alpha values to 1, the problem persists.

 

A thing that might be worth exploring more is voxel anti aliasing. If your mesh partially covers a voxel space, might it be better to have that voxel "partially" filled?

 

So currently my voxels are just blended together, and when I sample during the cone trace I take a voxel offset in six directions:

 

[attachment=13841:givox8.jpg]

 

which pretty much achieves what you are saying - unless you are referring to something else? Anyhow, I have heard that an octree structure might provide more accurate voxel anti-aliasing, which is something I will implement later down the track.
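
To be concrete, that offset sampling looks roughly like this (simplified - the offsets and weights shown are just for illustration, not my exact code):

uniform sampler3D voxelTex;
uniform float voxelTexelSize;	// 1.0 / 64.0 for a 64^3 grid

vec4 sampleVoxelsOffset(vec3 uvw, vec3 dir, float mip)
{
	float o = voxelTexelSize;
	vec4 px = textureLod(voxelTex, uvw + vec3( o, 0, 0), mip);
	vec4 nx = textureLod(voxelTex, uvw + vec3(-o, 0, 0), mip);
	vec4 py = textureLod(voxelTex, uvw + vec3( 0, o, 0), mip);
	vec4 ny = textureLod(voxelTex, uvw + vec3( 0,-o, 0), mip);
	vec4 pz = textureLod(voxelTex, uvw + vec3( 0, 0, o), mip);
	vec4 nz = textureLod(voxelTex, uvw + vec3( 0, 0,-o), mip);

	// weight each directional sample by how strongly the cone direction faces it
	vec3 w = dir * dir;	// squared components of a normalized direction sum to 1
	return w.x * (dir.x > 0.0 ? px : nx)
	     + w.y * (dir.y > 0.0 ? py : ny)
	     + w.z * (dir.z > 0.0 ? pz : nz);
}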


Here's some more cone-traced transparency tests, where I've made everything semi-transparent:

 

[attachment=13899:giboxtransparency2.jpg][attachment=13900:giboxtransparency3.jpg][attachment=13901:giboxtransparency4.jpg]

 

Just for a test, when I make everything else completely opaque - this is what I see when I look through a transparent sphere:

[attachment=13903:giboxtransparency5.jpg]

 

Just like the specular issue, everything else becomes transparent (notice the red wall especially), when it should not.

I know that this has to do with the way values are accumulated during the cone trace after the samples are filtered.

 

Filtering is required for smooth speculars and smooth refraction; however, because every object is only surface voxelized, the opaque surface voxels get filtered together with the transparent empty voxels inside each mesh. When the cone traces through an object, it does not saturate immediately after hitting it, due to this semi-opacity, and so it continues on, adding values from behind the object - which leads to the transparent results.
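
To be explicit about what "does not saturate" means: the accumulation along the cone is the usual front-to-back compositing, and because only the surface is voxelized, the filtered alpha of a "solid" wall stays well below 1, so the march never terminates early and keeps picking up color from behind the object. A sketch:

uniform float voxelSize;

vec4 sampleVoxels(vec3 worldPos, float mip);	// filtered fetch from the 3D texture, defined elsewhere

vec4 coneTraceFrontToBack(vec3 origin, vec3 dir, float coneRatio, float maxDist)
{
	vec3 color = vec3(0.0);
	float alpha = 0.0;
	float dist = voxelSize;

	while (dist < maxDist && alpha < 0.99)		// alpha near 1 would stop the march
	{
		float diameter = max(voxelSize, coneRatio * dist);
		float mip = log2(diameter / voxelSize);
		vec4 s = sampleVoxels(origin + dir * dist, mip);

		color += (1.0 - alpha) * s.a * s.rgb;	// contributions from behind are attenuated,
		alpha += (1.0 - alpha) * s.a;		// but only fully blocked once alpha reaches 1

		dist += diameter * 0.5;
	}
	return vec4(color, alpha);
}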

 

I guess the best way to resolve this is to use solid voxelization; however, I'm pretty sure that Crassin uses surface voxelization, according to his OpenGL Insights chapter, and in one of his videos he shows a very well-defined specular image with the moving hand, with no opacity issues.

 

I'm not sure whether this is because I am not filtering correctly, given that I am not using 3x3x3 bricks. The issue is still present when I use a 512x512x512 voxel resolution for this small scene.


I might as well show off the effects that my current engine can achieve, before I proceed to improving it by implementing solid voxelization and octrees:

 

[attachment=13926:giboxbumps0.jpg][attachment=13927:gibox6.jpg]

[attachment=13928:gibox7.jpg][attachment=13929:gibox8.jpg]

[attachment=13930:giboxemissive10.jpg][attachment=13931:gibox-cyber.jpg]

 

I'm trying to put together a video soon as well.

bwhiting

Looks really good! Video footage is a must to get a better idea of how it is in action/motion :D

How much time have you spent optimizing i.e. is there room for improvement or have you exhausted everything?


Looks really good! Video footage is a must to get a better idea of how it is in action/motion
How much time have you spent optimizing i.e. is there room for improvement or have you exhausted everything?

 

Actually I have not optimized very much at all. No octrees...yet. The scene is entirely within one 64x64x64 3D texture. Entire scene is revoxelized per frame.

I never even purposely implemented any other screen-space techniques - apart from the cone tracing itself, which is naturally a screen-space algorithm.

The only other optimization that I had done was substituting a low-poly version of the Stanford Buddha model for the voxelization pass.

 

I am in the process of creating a tech demo video that shows off each effect. Recently I've been sidetracked by a sudden obsession with trying to implement tessellation. I'd also like to fix the shadow issue - I believe that switching to solid voxelization will resolve it... hopefully. But I guess I'll end up finishing the video before that.

FreneticPonE

The specular reflections look great! Any idea what's causing the slowdown between a hundred spheres and a thousand spheres? As in, any idea what's costing the most between the two - the increase in tight specular cones being traced, the increased voxelization, etc.?

 

As for what I meant by anti-aliasing, I realized I'm not sure what I meant. Or rather, I'm not sure how you'd go about implementing it. With voxelization the way everyone does it, you're always creating solid voxel blocks. Somewhere in my head there's this idea of blocks with an alpha value, to cover cases where a mesh is partly in a voxel's space but not enough to register as solid. As in, when the Buddha model is moving back and forth, instead of its voxelization popping from 0 to 1 in a binary fashion from voxel space to voxel space, there'd be a smooth transition, filling in the alpha value of that voxel space as more of the mesh takes it up.

 

I'm just not sure how to do it - there are already holes here, and thin geometry and so on is already a problem. In my head it makes cone-traced shadows a lot smoother in terms of movement, though.


Any idea what's causing that slowdown between a hundred spheres and a thousand spheres?

 

I'm pretty sure this is mostly to do with actually rendering the spheres themselves and not the cone tracing. I've yet to test it though. What I would like to do next is implement some sort of compute-shader particle system that is globally lit by the VCT and see if there is better performance.

 

 

Somewhere in my head there's this idea of implementing blocks with an alpha value to cover cases where a mesh is partly in a voxel's space but not enough to register in solid. As in, when the Buddha model is moving back and forth instead of it's voxelization popping from 0 to 1 in a binary fashion from voxel space to voxel space, there'd be a smooth transition, filling in an alpha value of that voxel space as more of the mesh takes it up.

 

Very interesting idea! I think it may be possible, now that you mention it in that way. You are correct - there is no gradual transition as each triangle moves between voxel spaces. Now that I think about it, the semi-transparency of the alpha values is calculated during the mip-mapping stage and filtered in the VCT pass. If I can bring this filtering back to the voxelization pass, I may just be able to get what you describe - and by pre-filtering I may be able to keep the same cost.

 

[EDIT]

Now that I think about it a bit more, the only way to get a gradual transition as each triangle moves between voxels is to calculate a voxel occupancy percentage. As this technique uses the hardware rasterizer to split the triangles into voxels, I can't imagine there is a way to do this efficiently - unless someone can suggest one. Then again, if this could be done easily, general anti-aliasing wouldn't be such a hassle as it is today. Supersampling seems to be the most effective method quality-wise, which in voxel cone tracing would just mean increasing the voxel resolution. It's possible I could take the concepts of an anti-aliasing technique such as MSAA, SMAA or FXAA and apply them to the voxelization stage.


Looking at Crassin's video of the animated hand in the Sponza scene, I see that he was able to move the hand without any of the artifacts present in my engine. Even if I turn my voxelization up to 256x256x256 (which, for my small scene, is a much higher effective resolution than the 512x512x512 that Crassin uses in the large Sponza scene), I still get flickering artifacts in parts of my models when different voxels are interchanged due to the change in location of each poly. This is most apparent with the Buddha model, where flickering between light and dark occurs mostly at the hands.

 

I found a clue on this site of how to possibly solve this problem: http://realtimevoxels.blogspot.com.au/

The author mentions a similar problem when he tries to implement a cascaded view-centred voxel approach, as an alternative to octrees. His solution is:

"Any slight movement of the camera causes different fragments to be rendered, which causes different voxels to be created. We can fix this by clamping the axis-aligned orthographic projections to world-space voxel-sized increments."

 

Here is how I apply my orthographic projections in the geometry shader:

#version 430

layout(triangles) in;
layout(triangle_strip, max_vertices=3) out;

in VSOutput
{
	vec4 ndcPos;
	vec4 fNorm;
	vec2 fTexCoord;
	vec4 worldPos;
	vec4 shadowCoord;
} vsout[];

out GSOutput
{
	vec4 ndcPos;
	vec4 fNorm;
	vec2 fTexCoord;
	vec4 worldPos;
	vec4 shadowCoord;
	mat4 oProj;
} gsout;

uniform mat4 voxSpace;

void main()
{
	gsout.oProj = mat4(0);
	// Use the first vertex's normal as the face normal for selecting the dominant axis.
	vec4 n = normalize(vsout[0].fNorm);

	for(int i = 0; i<gl_in.length(); i++)
	{
		gsout.fNorm = n;

		// Pick the dominant axis of the face normal (the winning component becomes 1, the rest 0).
		float maxC = max(abs(n.x), max(abs(n.y), abs(n.z)));
		float x,y,z;
		x = abs(n.x) < maxC ? 0 : 1;
		y = abs(n.y) < maxC ? 0 : 1;
		z = abs(n.z) < maxC ? 0 : 1;

		vec4 axis = vec4(x,y,z,1);

		// Axis-aligned orthographic projection looking down the dominant axis,
		// so each triangle is rasterized where it has the largest projected area.
		if(axis == vec4(1,0,0,1))
		{
			gsout.oProj[0] = vec4(0,0,-1,0);
			gsout.oProj[1] = vec4(0,-1,0,0);
			gsout.oProj[2] = vec4(-1,0,0,0);
			gsout.oProj[3] = vec4(0,0,0,1);
		}	
		else if(axis == vec4(0,1,0,1))
		{
			gsout.oProj[0] = vec4(1,0,0,0);
			gsout.oProj[1] = vec4(0,0,1,0);
			gsout.oProj[2] = vec4(0,-1,0,0);
			gsout.oProj[3] = vec4(0,0,0,1);
		}
		else if(axis == vec4(0,0,1,1))
		{
			gsout.oProj[0] = vec4(1,0,0,0);
			gsout.oProj[1] = vec4(0,-1,0,0);
			gsout.oProj[2] = vec4(0,0,-1,0);
			gsout.oProj[3] = vec4(0,0,0,1);
		}

		// Transform into voxel space first, then apply the dominant-axis projection.
		gl_Position = gsout.oProj*voxSpace*gl_in[i].gl_Position;
		gsout.ndcPos = gsout.oProj*voxSpace*vsout[i].ndcPos;
		gsout.fTexCoord = vsout[i].fTexCoord;
		gsout.worldPos = vsout[i].worldPos;
		gsout.shadowCoord = vsout[i].shadowCoord;

		EmitVertex();
	}
}

 

 

where vsout[i].ndcPos is worldSpace*vPos from the vertex shader, and the "voxSpace" world-to-voxel-space transformation is obtained from:

sceneBounds[0] = glm::vec3(-2.5f, -0.005f, -3);
sceneBounds[1] = glm::vec3(2.5f, 2, 2);
glm::vec3 sceneScale = 1.0f/(sceneBounds[1] - sceneBounds[0]);
voxScale = min(sceneScale.x, min(sceneScale.y, sceneScale.z));   // uniform scale so the whole scene fits the voxel volume
voxOff = -sceneBounds[0];                                        // translate the scene minimum to the origin

// world space -> voxel clip space: translate, scale to [0,1], then remap to [-1,1]
models->voxSpace = glm::translate(glm::mat4(1.0f), glm::vec3(-1.0f, -1.0f, -1.0f))*glm::scale(glm::mat4(1.0f), glm::vec3(2.0f))*glm::scale(glm::mat4(1.0f), glm::vec3(voxScale))*glm::translate(glm::mat4(1.0f), voxOff);

 

 

I'm already doing the transformation gsout.oProj*voxSpace*vsout[i].ndcPos, so can anyone elaborate on what "clamping to world-space voxel-sized increments" actually means?

jcabeleira

So I've done the transformation of gsout.oProj*voxSpace*vsout[i].ndcPos, so can anyone elaborate on what "clamping to world-space voxel-sized increments" actually means?

 

I don't think that trick applies to you. The realtimevoxels approach uses a voxel volume that follows the camera, to provide GI around the viewer for arbitrarily large scenes.

The clamping they suggest is applied to the volume position, not to the voxels/fragments/triangles or anything like that. That is, instead of the volume following the camera position exactly, it snaps to the nearest voxel position.

And although in theory it's a good idea, in practice it doesn't fix the flickering, because as the volume moves, a slightly different portion of the scene is included in it, which is enough to change the entire lighting in a noticeable way.


That is, instead of the volume following the camera position exactly, it will instead snap to the nearest voxel position.

 

Ah, that makes sense now, thanks.
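
So, in other words, something like this when positioning the volume each frame (shown as GLSL-style vector math - the glm version on the CPU is the same apart from the glm:: prefixes):

uniform float voxelWorldSize;	// world-space size of one voxel

// snap the camera-following volume origin to whole voxels so that small camera
// movements produce the same voxelization
vec3 snapVolumeOrigin(vec3 cameraPos)
{
	return floor(cameraPos / voxelWorldSize) * voxelWorldSize;
}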

I have also been trying to implement some sort of cascaded approach as a faster alternative to octrees. I'm sure that if I make the base volume large enough, and make the second volume the next mip level up, it may be feasible - but I guess I will just have to get it working first to find out.


