RPTD

Member Since 16 Jun 2001
Offline Last Active Sep 15 2013 05:07 PM
-----

#5081680 low sample count screen space reflections

Posted by RPTD on 30 July 2013 - 06:31 AM

The basic idea behind SSR is clear to me: although there is little useful information around, it boils down to marching along a ray in either view or screen space. Personally I do it in screen space as I think this is better, but I can't say for sure given the lack of information.

 

Whatever the case, the common approach seems to be to do a linear stepping along the ray and then a bisecting search to refine the result. The bisecting search is clear and, depending on the step size, takes around 5-6 steps for a large screen and a ray running across a large part of it. The problematic part is the step size.
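
For reference, this is a minimal sketch of what I mean by the linear march plus bisecting refinement, written as plain C++ for clarity rather than as my actual shader code. sampleDepth is a stand-in for the depth buffer lookup, and depth is interpolated linearly here, while a real implementation would rather interpolate 1/z:

```cpp
#include <functional>

struct Vec2 { float x, y; };

static Vec2 lerp(const Vec2 &a, const Vec2 &b, float t) {
    return Vec2{ a.x + (b.x - a.x) * t, a.y + (b.y - a.y) * t };
}

// returns true and writes the refined hit position if the ray crosses the depth buffer
static bool traceSSR(Vec2 start, Vec2 end, float depthStart, float depthEnd,
                     const std::function<float(const Vec2 &)> &sampleDepth,
                     Vec2 &hit, int linearSteps = 20, int bisectSteps = 6) {
    float prevT = 0.0f;
    for (int i = 1; i <= linearSteps; ++i) {
        const float t = float(i) / float(linearSteps);
        const float rayDepth = depthStart + (depthEnd - depthStart) * t;
        if (rayDepth >= sampleDepth(lerp(start, end, t))) {
            // the ray went behind the depth buffer between prevT and t:
            // bisect that interval to refine the intersection
            float lo = prevT, hi = t;
            for (int j = 0; j < bisectSteps; ++j) {
                const float mid = 0.5f * (lo + hi);
                const float midDepth = depthStart + (depthEnd - depthStart) * mid;
                if (midDepth >= sampleDepth(lerp(start, end, mid))) hi = mid;
                else lo = mid;
            }
            hit = lerp(start, end, hi);
            return true;
        }
        prevT = t;
    }
    return false;   // no intersection along the ray: fall back to the env-map
}
```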

 

I made tests with a step count of 20 (not counting the refinement). On a large screen (1680x1050 as an example) this gives, for a moderately long ray bouncing from one side of the screen to the other with, let's say, 1000 pixels length, a step size of 1000/20 = 50 pixels. This is quite large and steps right across thinner geometry, for example the edges of the boxes in the test bed I put together, attached below (smaller than 1680x1050 as it's from the editor). Furthermore it leads to incorrect sampling as seen on the right side.

[attachment: test1b.jpg]

 

Now I've seen other people claiming they use only 16 samples (on the same large screen or larger), even for long rays running across the screen. 16 samples is even less than the 20 I used in the test, which already misses geometry a great deal. Nobody ever stated, though, how these under-sampling issues work out with such a low sample count. In my tests I required 80-100 samples to keep the undersampling issues somewhat at bay (the speed is gruesome).

 

So the question is:

 

1) How can 16 samples for the linear search possibly work without these undersampling issues?

 

 

 

Another issue is stuff like a chair or table resting on the ground. All rays passing underneath it would work with an exhaustive search across the entire ray. With the linear test, though, the search enters the bisecting phase at the first step where the ray crosses geometry like the table or chair. The bisecting test then finds no solution and thus leaks the env-map through. Some others seem not to be affected by this problem, but what happens there? Do they continue stepping along the ray if the bisection fails (see the sketch after question 3)? That, though, would increase the sample count beyond 20+6 and ruin the worst case. So another question is:

 

2) When a ray passes underneath geometry at the first linear search hit and the bisection fails to return a result, what do you do? Continue along the ray with a worse worst-case sample count, or fail out?

 

3) How do you detect these cases properly in order to fade out? Blurring, or something more intelligent?
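
Just to make question 2 concrete: the variant I mean, continuing the march when the bisection does not land on a real surface and charging everything against one fixed budget, would look roughly like this on top of the sketch above. The thickness threshold is my own assumption, not something I have seen stated anywhere:

```cpp
// Variant of traceSSR() above: if the refined point is still far away from the
// stored depth (the ray only slipped underneath thin geometry like a table edge),
// keep marching instead of bailing out, but spend linear and bisection steps
// from one shared budget so the worst case stays bounded.
static bool traceSSRContinue(Vec2 start, Vec2 end, float depthStart, float depthEnd,
                             const std::function<float(const Vec2 &)> &sampleDepth,
                             Vec2 &hit, int budget = 26, int linearSteps = 20,
                             int bisectSteps = 6, float thickness = 0.01f) {
    float prevT = 0.0f;
    for (int i = 1; i <= linearSteps && budget > 0; ++i, --budget) {
        const float t = float(i) / float(linearSteps);
        const float rayDepth = depthStart + (depthEnd - depthStart) * t;
        if (rayDepth >= sampleDepth(lerp(start, end, t))) {
            float lo = prevT, hi = t;
            for (int j = 0; j < bisectSteps && budget > 0; ++j, --budget) {
                const float mid = 0.5f * (lo + hi);
                const float midDepth = depthStart + (depthEnd - depthStart) * mid;
                if (midDepth >= sampleDepth(lerp(start, end, mid))) hi = mid;
                else lo = mid;
            }
            const Vec2 candidate = lerp(start, end, hi);
            const float gap = (depthStart + (depthEnd - depthStart) * hi)
                            - sampleDepth(candidate);
            if (gap >= 0.0f && gap < thickness) {
                hit = candidate;   // refined point lies on the surface: accept it
                return true;
            }
            // otherwise the ray merely passed underneath something:
            // don't leak the env-map here, continue the linear march instead
        }
        prevT = t;
    }
    return false;   // budget exhausted or no surface found: fade to the env-map
}
```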




#4925732 ATI initial rendering delay up to multiple seconds

Posted by RPTD on 27 March 2012 - 10:46 AM

I would recommend always performing the creation of mip-maps and the compression of your texture data at data-build time, instead of at run-time.
There are many libraries to help with this task, such as:
http://code.google.c...-texture-tools/
http://code.google.com/p/libsquish/
http://developer.amd...es/default.aspx

Figured that much out myself too. Looks like I have to do the compression on the software side. I had this plan anyway, so that's not much of a bummer. I already downloaded libsquish but couldn't put it to use yet. Coming soon.

@vNeeki:
Create all mip map levels in software, either with your own code or an existing tool. Since it's down-sampling by 2 (hence 4 pixels into 1) you can use a simple box filter for that. Once done you can upload each level using glTexImage2D, specifying 0, 1, ..., N as the level parameter to write the appropriate mip map level.
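
Roughly like this, as an untested sketch in plain C++/OpenGL, assuming an RGBA8 image with power-of-two dimensions and leaving error handling out:

```cpp
#include <GL/gl.h>
#include <algorithm>
#include <cstdint>
#include <vector>

// average each 2x2 block of the source into one pixel of the next smaller level
static std::vector<uint8_t> buildMipLevel(const std::vector<uint8_t> &src, int w, int h) {
    const int dw = std::max(w / 2, 1), dh = std::max(h / 2, 1);
    std::vector<uint8_t> dst(dw * dh * 4);
    for (int y = 0; y < dh; ++y) {
        for (int x = 0; x < dw; ++x) {
            const int sx = std::min(x * 2 + 1, w - 1), sy = std::min(y * 2 + 1, h - 1);
            for (int c = 0; c < 4; ++c) {
                const int sum = src[(y * 2 * w + x * 2) * 4 + c]
                              + src[(y * 2 * w + sx) * 4 + c]
                              + src[(sy * w + x * 2) * 4 + c]
                              + src[(sy * w + sx) * 4 + c];
                dst[(y * dw + x) * 4 + c] = uint8_t(sum / 4);
            }
        }
    }
    return dst;
}

// upload the base image plus all generated levels; the second parameter of
// glTexImage2D is the mip level (0 = base, then 1, 2, ... down to 1x1)
static void uploadWithMips(GLuint texture, std::vector<uint8_t> pixels, int w, int h) {
    glBindTexture(GL_TEXTURE_2D, texture);
    int level = 0;
    while (true) {
        glTexImage2D(GL_TEXTURE_2D, level, GL_RGBA8, w, h, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, pixels.data());
        if (w == 1 && h == 1) break;
        pixels = buildMipLevel(pixels, w, h);
        w = std::max(w / 2, 1);
        h = std::max(h / 2, 1);
        ++level;
    }
}
```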


#4925103 ATI initial rendering delay up to multiple seconds

Posted by RPTD on 25 March 2012 - 07:11 AM

That's like magic... whenever I post a question here after weeks of looking for an answer, I stumble across the solution soon after.

Figured out what causes the problem. I disabled mip mapping while using compression (hence GL_COMPRESSED_RGB_ARB respectively GL_COMPRESSED_RGBA_ARB) and then the initial delay vanished. So the conclusion is that if a compressed format is used together with mip mapping, ATI doesn't compress the data during glTexImage2D but delays the compression until the texture is rendered for the first time. With either compression disabled or mip mapping disabled the delay vanishes. So you can't do both (compression and mip mapping) on ATI without running into the delay problem.
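
The way around it, then, seems to be to hand the driver data that is already compressed per mip level, so it never has to run its own compressor. A rough sketch of that upload path, assuming DXT1 blocks prepared at build time with a tool like libsquish or the AMD/NVIDIA texture tools:

```cpp
#include <GL/gl.h>
#include <GL/glext.h>   // GL_COMPRESSED_RGB_S3TC_DXT1_EXT
#include <cstdint>
#include <vector>

// one pre-built, pre-compressed mip level (DXT1: 8 bytes per 4x4 block)
struct CompressedLevel {
    int width, height;
    std::vector<uint8_t> blocks;
};

// note: on some platforms glCompressedTexImage2D has to be fetched through the
// extension loading mechanism; it is core since OpenGL 1.3
static void uploadCompressed(GLuint texture, const std::vector<CompressedLevel> &levels) {
    glBindTexture(GL_TEXTURE_2D, texture);
    for (size_t i = 0; i < levels.size(); ++i) {
        const CompressedLevel &lvl = levels[i];
        glCompressedTexImage2D(GL_TEXTURE_2D, GLint(i),
                               GL_COMPRESSED_RGB_S3TC_DXT1_EXT,
                               lvl.width, lvl.height, 0,
                               GLsizei(lvl.blocks.size()), lvl.blocks.data());
    }
}
```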


#2789532 things you should and shouldn't do when writing stories

Posted by RPTD on 26 November 2004 - 09:14 AM

I would perhaps refine rule 2 a bit. It is OK, and often very helpful, if you know the basic gameplay you aim at before you go into the story. It does not help much to think of characters and their abilities if the game later on doesn't honor them. I would think about both at the same time instead of one after the other.


#418056 Memory-Array vs Dynamic-VBO: which is better?

Posted by RPTD on 05 October 2006 - 11:43 AM

I'm trying to optimize my render code. During this I noticed that copying values to the VBO takes quite some time. The VBO is a dynamic one as the mesh bends around (a creature). Now I wonder whether I would be quicker using a memory array instead of a VBO. I also would like to keep memory consumption in mind: with higher resolution meshes the VBO data can quickly explode, eating precious texture memory. Is it worth dropping a VBO in favor of CPU memory, especially if the data changes every frame?
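
For reference, the kind of per-frame update I'm talking about is the usual dynamic-VBO pattern, roughly like this (a generic sketch with a made-up vertex layout, not my actual code):

```cpp
#include <GL/gl.h>
#include <vector>

// example vertex layout for the bent mesh; the real layout doesn't matter here
struct Vertex { float position[3]; float normal[3]; };

// the skinned vertex data is rebuilt on the CPU every frame and copied into a
// GL_DYNAMIC_DRAW buffer; re-specifying the storage first ("orphaning") lets
// the driver avoid stalling on a buffer the previous frame may still be using.
// The buffer object entry points are core since OpenGL 1.5; on some platforms
// they come through the extension loader.
static void updateDynamicVBO(GLuint vbo, const std::vector<Vertex> &vertices) {
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, vertices.size() * sizeof(Vertex),
                 nullptr, GL_DYNAMIC_DRAW);
    glBufferSubData(GL_ARRAY_BUFFER, 0,
                    vertices.size() * sizeof(Vertex), vertices.data());
}
```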

