I could imagine the drawing is done in linear space, while your blending and GIMP's work in sRGB space.
another point might be the internal precision: especially when you blend a lot of layers on top of each other, the quality loss due to rounding can be quite noticeable, so having 16 bit internal precision can be quite beneficial.
(newer CPUs have instructions to convert float16 to float32, aka half to float, which could speed up your rendering).
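to illustrate both points, a minimal C++ sketch (function names are mine, not any particular API): the exact sRGB transfer curve, a 50/50 blend done in linear space, and the F16C intrinsic that expands packed halfs to floats in one instruction:

```cpp
#include <cmath>
#include <immintrin.h> // F16C intrinsics, compile with -mf16c

// exact sRGB <-> linear transfer functions
static float srgb_to_linear(float c)
{
    return (c <= 0.04045f) ? c / 12.92f
                           : std::pow((c + 0.055f) / 1.055f, 2.4f);
}

static float linear_to_srgb(float c)
{
    return (c <= 0.0031308f) ? c * 12.92f
                             : 1.055f * std::pow(c, 1.0f / 2.4f) - 0.055f;
}

// a 50/50 blend done in linear space; compare with 0.5f * (a + b) done
// directly on the sRGB values to see the difference
float blend_linear(float a, float b) // a, b are sRGB-encoded channel values
{
    return linear_to_srgb(0.5f * (srgb_to_linear(a) + srgb_to_linear(b)));
}

// expand four packed float16 values to float32 in one instruction (VCVTPH2PS)
__m128 expand_half4(const unsigned short* h)
{
    return _mm_cvtph_ps(_mm_loadl_epi64((const __m128i*)h));
}
```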
output the depth into the alpha channel, it's not very accurate, but enough for your filter.
as an optimization for your current 3 phases, swap 1 and 2. it's beneficial, especially on mobile devices, to render one target in one go instead of switching from it to a temporary and back, as your GPU needs to store and restore the framebuffer in internal caches. doing it in one go means a 67% traffic saving (just one write instead of write, read, write).
you can see the characters intersect properly with the water plane (check the leftmost char), yet there is a black outline. in motion it also looks fluid, like MD2-animated meshes: very smooth, viewed from several angles without the popping prerendering would cause, and without the false intersections sprites would cause. I'm quite convinced it's polys + edge detection.
which involves using the vertex normals to find the dominant axis of each triangle
is that a typo, or are you really doing that? (it's not fully clear to me from your source, but it seems like it).
you need to calculate the face normal in the GS, not use the vertex normals; all vertices of a face need to be transformed in the same way. that could explain those random voxelizations you got for the sphere and buddha, and the broken edges of your cube.
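for illustration, a small C++ sketch of what I mean; in GLSL the GS version is just a cross() of the transformed edge vectors, the names here are mine:

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3 sub(Vec3 a, Vec3 b)   { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
static Vec3 cross(Vec3 a, Vec3 b) { return { a.y * b.z - a.z * b.y,
                                             a.z * b.x - a.x * b.z,
                                             a.x * b.y - a.y * b.x }; }

// face normal from the three (already transformed) vertex positions of a
// triangle; no per-vertex normals involved, so all three vertices agree
Vec3 face_normal(Vec3 v0, Vec3 v1, Vec3 v2)
{
    Vec3 n = cross(sub(v1, v0), sub(v2, v0));
    float len = std::sqrt(n.x * n.x + n.y * n.y + n.z * n.z);
    return { n.x / len, n.y / len, n.z / len };
}

// dominant axis = component of the face normal with the largest magnitude
// (0 = x, 1 = y, 2 = z), decided once per triangle, not per vertex
int dominant_axis(Vec3 n)
{
    float ax = std::fabs(n.x), ay = std::fabs(n.y), az = std::fabs(n.z);
    return (ax > ay && ax > az) ? 0 : (ay > az ? 1 : 2);
}
```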
I've tried this at different resolutions, FPS is still much higher
is it still 160 vs 260 fps? half resolution is way faster, so I would expect it's not related to context switching.
how does your timer work? is it some highly accurate timer, or something like GetTickCount?
I would suggest not taking the render time around 'renderframe', but starting the timer at frame 0, stopping it at frame 100, and dividing 100 by the seconds you got (see the sketch below).
I don't know what else you're doing besides 'renderframe', but there is a chance your drawcalls haven't even been sent out to the GPU when this function returns, and in that case, while the actual rendering is going on, you are doing 'world update' etc.
it's quite common that drivers queue up to 5 frames, so to really get the average fps, you need to measure across several frames.
if some results are fishy, it's common that the measurement is buggy, so no offense, but that's something to verify.
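a minimal sketch of the measurement I mean, with world_update/renderframe as stubs standing in for whatever your loop actually calls:

```cpp
#include <chrono>
#include <cstdio>

void world_update() {} // stub, stands in for your game update
void renderframe()  {} // stub, stands in for your render call

int main()
{
    const int kFrames = 100;

    auto t0 = std::chrono::steady_clock::now(); // start at frame 0
    for (int i = 0; i < kFrames; ++i)
    {
        world_update();
        renderframe();
    }
    // note: you'd want to force the GPU to finish queued frames here
    // (e.g. glFinish in GL) before stopping the clock
    auto t1 = std::chrono::steady_clock::now(); // stop at frame 100

    double seconds = std::chrono::duration<double>(t1 - t0).count();
    std::printf("average fps: %f\n", kFrames / seconds);
    return 0;
}
```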
- usually, if the fog is dense enough to hide objects, it will also hide the sky, so you'll end up with a gray sky above.
- the skybox shows curvature, while your 'earth' is flat/infinite.
technically it's better to think of fog as a blend between the scene and the skybox, and I'd suggest doing exactly that: use the depth to blend the rendered 3d scene with the skybox in the distance. it will look like objects appear out of the distance fog, yet you'll still have the proper skybox rendered.
it's of course a bit fishy if objects appear in front of clouds, but it might still look ok. otherwise the cloud layer would need to be rendered separately, so the 'fog' would blend only to the atmosphere, not the clouds; but obviously clouds are usually embedded in skyboxes.
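as a sketch, the per-pixel math would be something like this; in the shader it's just a lerp using the sampled depth, and the names and the exponential falloff here are my choice:

```cpp
#include <cmath>

struct Color { float r, g, b; };

// exponential fog factor: 0 at the camera, approaching 1 in the distance
static float fog_amount(float view_depth, float density)
{
    return 1.0f - std::exp(-density * view_depth);
}

// blend the shaded scene color toward the skybox color sampled along the
// same view ray, instead of toward a constant gray fog color
Color apply_fog(Color scene, Color sky, float view_depth, float density)
{
    float f = fog_amount(view_depth, density);
    return { scene.r + (sky.r - scene.r) * f,
             scene.g + (sky.g - scene.g) * f,
             scene.b + (sky.b - scene.b) * f };
}
```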
calculate the difference between your positions and the real positions; then you know the 3 floats that should be somewhere in the file, and you can try to find them. the floats will obviously not be found by perfect binary comparison, but rather by seeking values within +-0.001. once you find those in the raw file, you can try to deduce in which chunks they are hidden.
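a quick C++ sketch of such a scan; it tests every byte offset, since the floats inside unknown chunks may not be 4-byte aligned:

```cpp
#include <cstring>
#include <vector>

// return all byte offsets where the raw file contains a float within
// +-tolerance of the value you computed
std::vector<std::size_t> find_float(const std::vector<unsigned char>& raw,
                                    float target, float tolerance = 0.001f)
{
    std::vector<std::size_t> hits;
    for (std::size_t i = 0; i + sizeof(float) <= raw.size(); ++i)
    {
        float v;
        std::memcpy(&v, &raw[i], sizeof(float)); // memcpy dodges alignment/aliasing traps
        if (v > target - tolerance && v < target + tolerance)
            hits.push_back(i);
    }
    return hits;
}
```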
while you are using min and max, the paper actually states:
Vector (mx,my,mz) represents the center of the AABB. Absolute values of the normal vector of the plane (a,b,c) transform all possible values to the first octant so its dot product with the vector representing a half of the AABB diagonal (dx,dy,dz) will be always positive.
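in code, the test the paper describes looks roughly like this (my naming: n = (a,b,c), d the plane offset, m the AABB center, h the half-diagonal):

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

// plane a*x + b*y + c*z + d = 0 vs. AABB given by center m and half-diagonal h;
// returns +1 fully on the positive side, -1 fully negative, 0 intersecting
int classify_aabb_plane(Vec3 n, float d, Vec3 m, Vec3 h)
{
    float dist = n.x * m.x + n.y * m.y + n.z * m.z + d; // signed distance of the center
    float r = std::fabs(n.x) * h.x   // abs() folds all octants into the first,
            + std::fabs(n.y) * h.y   // so this dot product, the projected
            + std::fabs(n.z) * h.z;  // "radius" of the box, is always positive
    if (dist >  r) return  1;
    if (dist < -r) return -1;
    return 0;
}
```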
in that case, I'd suggest tracing in screenspace for occlusion; the rim lighting effect you get is usually caused by occlusion from surfaces further away into the screen. you could trace rays to check for occlusion (depthbuffer-z closer than the ray depth), similar to screenspace reflections.
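a rough CPU-side sketch of such a trace; in practice this lives in a shader next to your SSR code, and all names and parameters here are mine:

```cpp
#include <cstddef>

// march a ray across the depth buffer; report occlusion once the stored
// depth is closer than the ray's depth at that pixel, like SSR hit detection
bool trace_occlusion(const float* depth, int width, int height,
                     float px, float py, float pz,  // start: pixel coords + view depth
                     float dx, float dy, float dz,  // step per iteration
                     int steps, float bias)
{
    for (int i = 0; i < steps; ++i)
    {
        px += dx; py += dy; pz += dz;
        int x = (int)px, y = (int)py;
        if (x < 0 || x >= width || y < 0 || y >= height)
            return false; // ray left the screen, no occluder found
        if (depth[(std::size_t)y * width + x] < pz - bias)
            return true;  // depthbuffer-z closer than the ray depth
    }
    return false;
}
```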
so I think it's the other way around: there really is object motion blur on moving objects, but the first person weapon has no motion blur, as it's bound to your head. if it started to blur while moving, it would rather look like you're on drugs than like a directional blur due to motion.