

The main point of gamma correction is to increase the precision around the blacks. Basically, the actual color intensity that a given value from 0.0 to 1.0 corresponds to is color^2.2, meaning that half the precision of your 8-bit colors is dedicated to the lowest ~21% of the color intensities your monitor can display, while the other ~79% gets much lower precision. This is why gamma encoding was kept around even after CRTs. Indeed, all normal image files are already stored in sRGB space, which means an image can be displayed correctly on screen without any further processing. However, if you want to do any kind of processing of the image (scaling color intensities, blending, lighting, etc.), you need to convert the colors to linear space, process the image, then convert it back to sRGB for correct display. One consequence of this is that linear 255/2 encodes to ~187 in sRGB.

For your game, load any texture you read from a file as a GL_SRGB texture. This keeps the full precision of the image data but automatically converts it to linear space on demand. You also need to be careful about which render target texture format you use. If you were to render linear color values to a GL_RGB8 render target, you would completely ruin the precision of the blacks, giving you a huge amount of banding in dark areas. A simple solution is to use GL_RGB16F or even GL_R11F_G11F_B10F, which has similar precision to sRGB. Another option is to use a GL_SRGB8 texture as your render target together with glEnable(GL_FRAMEBUFFER_SRGB), which causes OpenGL to automatically convert any values you write to an sRGB render target to sRGB space. With this enabled, your GL_SRGB8 render target works exactly like a GL_RGB8 render target, just with much more precision around the blacks (and less towards the whites), meaning you won't lose precision in between.
You also need to convert the entire image back to sRGB space before you display it, in a final post-processing step, which you seem to be doing. Note that correct sRGB is not the same as pow(x, 1.0 / 2.2), but it is quite close. Regarding the washed-out look: this is a natural consequence of the gamma correction at the end. Many people argue that not having gamma correction gives a "deeper" look, but it also makes it impossible to get accurate results. For example, if you add two lights together the result should be twice as bright, but if you try to do lighting without gamma correction the result will look closer to 4x as bright, making tweaking scene colors and special effects very difficult. You should rework your colors and assets to look good with gamma correction on; the earlier you make the switch, the less painful it will be. If you are using HDR, the tone mapping function has a much bigger impact on how washed out the image looks than the gamma correction does. Also, you can test your monitor's gamma using this website: http://www.lagom.nl/lcdtest/gamma_calibration.php

3D Calculated projected sphere's ellipse shape for SSAO sampling
theagentd posted a topic in Graphics and GPU Programming
Hello! I have a very high-quality and fast SSAO implementation right now which works a bit backwards, and I have a problem with it. In most (really bad, IMO) tutorials online, SSAO is done by sampling random points in a hemisphere around the view-space position of each pixel, then computing "occlusion" from depth differences. In contrast, I do random samples in a circle around each pixel, compute the view-space position of each sample, then compute occlusion in 3D in view space, taking the distance and direction towards each sample into account. This produces very good quality results with only a small number of samples (together with a bilateral blur). However, this technique has an issue with the perspective projection at the edges of the camera: the farther out towards the edges you get, the more "stretched" the view gets. What I really want to do is sample the 2D area of a projected 3D sphere with a certain radius at each pixel, which is what the usual SSAO examples are doing. In addition, I've recently experimented with eye motion tracking, which can produce extremely skewed/stretched projection matrices, which massively worsen this issue. To get this right, I'd essentially need to go from view space to screen space, sample depth at that position, then unproject that pixel back to view space again. This would need ~40% more ALU instructions than my current version, so it'd make it really slow. This led me to thinking that it'd be great if I could keep the original 2D sampling code and simply compensate for the stretching in 2D. As I already have a 2D matrix which handles random rotation and scaling of the sample offsets, if I could bake the circle elongation into that matrix I could compensate for the stretching without having to add any per-sample cost to the shader! It'd just need some extra setup code. 
All this leads to my question: given a sphere in view space (position, radius) and a projection matrix, how do I calculate the shape of the ellipse that results from projecting the sphere? Ideally, I'd want to calculate the center of the ellipse in NDC, then a 2D matrix which, when multiplied by a random 2D sample offset on a unit circle (x^2+y^2<1.0), results in a position on the ellipse in NDC instead. So far I've found this, which shows exactly the axes/vectors I want to find: https://www.shadertoy.com/view/XdBGzd. However, it doesn't use a projection matrix, and from what I can tell it does all its calculations in view space, which I would like to avoid if possible. I've been playing around with the math to try to get it to work out, and I think I've almost nailed calculating the center of the ellipse in NDC, but I don't even really know where to start on calculating the long and short axes of the ellipse... This kind of stretch compensation could have a lot of cool uses, as it ensures that (with a properly configured FOV) things that are supposed to be round always look round from the viewer's viewpoint. Theoretically, it could also be used for bloom (to make sure the blur is uniform from the viewer's viewpoint), depth of field, motion blur, etc. EDIT: Indeed, I believe the center of the ellipse in NDC is vec2 ellipseCenter = ndc.xy * (1.0 + r^2 / (ndc.w^2 - r^2)), where ndc.xy is the projected center of the sphere, r is the radius of the sphere and ndc.w is the (positive) view-space depth of the sphere. 
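A minimal C sketch of that center formula from the EDIT (the function name and signature are mine, and it assumes the sphere does not cross the near plane, i.e. w > r):

```c
/* Sketch: center of the ellipse a view-space sphere projects to,
 * per the formula in the post: center = ndc.xy * (1 + r^2 / (w^2 - r^2)).
 * ndc_x/ndc_y: projected sphere center; w: positive view-space depth;
 * r: sphere radius. Only valid while w > r. */
void ellipse_center(float ndc_x, float ndc_y, float w, float r,
                    float *cx, float *cy) {
    float scale = 1.0f + (r * r) / (w * w - r * r);
    *cx = ndc_x * scale;
    *cy = ndc_y * scale;
}
```

Two sanity checks on the formula: a sphere on the view axis projects to an ellipse centered at the origin, and off-axis the center is pushed slightly outward from the projected sphere center, which matches the perspective stretching described above.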
GL/VK(/DX) abstraction layer: coordinate origin differences
theagentd replied to theagentd's topic in Graphics and GPU Programming
Thanks, good to know that textureSize() doesn't have much overhead. AMD's ancient shader analyzer seemed to equate it with the cost of a texture sample, which seemed a bit weird to me, but I assumed that it was simply more expensive than a uniform read. Well, if Unity can't come up with a good solution, I guess there's no way to hide the origin difference completely. It's probably best to just expose it to the user and let them deal with it using some simple tools/macros-ish stuff.


I tried your demo, MJP. It has a huge amount of ghosting when rotating. Do you not do any reprojecting?

OpenGL GL/VK(/DX) abstraction layer: coordinate origin differences
theagentd posted a topic in Graphics and GPU Programming
Hello. I'm working on an abstraction layer for OpenGL and Vulkan, with the plan to add other APIs in the future too, like DirectX 12 and possibly even Metal. I'm coming from OpenGL, but I've spent a huge amount of time reading up on how Vulkan works. Now I want to write a powerful abstraction layer so that we can write our game once and have it run on multiple APIs, letting us support more hardware and OS combinations. I want to write this myself and not use any libraries for it. The target is a minimal function set of OpenGL 3.3 with some widely supported extensions to match some Vulkan features, with OpenGL extensions allowing for more advanced features that are supported in both APIs. My ultimate goal is to minimize or even eliminate the amount of API-specific code the user has to write, and in my investigations I found out that Vulkan uses a different NDC system (z from 0 to 1) and origin (upper left) than OpenGL. The NDC z range is a good change, as it allows for the 32-bit float depth buffer reversed-depth trick and in general has better precision, so I want to embrace that whenever possible. This is pretty easy to do using either NV_depth_buffer_float or ARB_clip_control, whichever is supported, both of which are supported by both AMD and Nvidia. For certain Intel GPUs and very old AMD GPUs that support neither of those two, a simple manual fallback for the user of the abstraction is to modify the projection matrices they use, which is easy with the math library I use, so I consider this a "solved" problem. The coordinate system origin difference is a much tougher nut to crack. It makes the most sense to go with Vulkan's origin, as it's the standard in the other APIs as well (DirectX and Metal). I see two possible solutions to the problem, but they either require manual interaction from the user or force me to inject mandatory operations into the shaders/abstractions, making them slower and more limited. 
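The projection-matrix fallback mentioned above could look something like this C sketch (the function name is mine, and it assumes row-major 4x4 storage with column vectors): a matrix built for Vulkan/D3D-style [0,1] clip-space depth gets its z row rewritten to z' = 2z - w, so after OpenGL's fixed [-1,1] clipping and [0,1] depth-range mapping the depth buffer ends up with the intended values.

```c
/* Sketch: fallback when ARB_clip_control / NV_depth_buffer_float are
 * unavailable. Rewrites the z row of a [0,1]-depth projection matrix so
 * that OpenGL's [-1,1] NDC convention produces the same window-space
 * depth: z_gl = 2 * z_01 - w. Row-major: row r occupies m[r*4 .. r*4+3]. */
void gl_depth_range_fixup(float m[16]) {
    for (int i = 0; i < 4; ++i)
        m[8 + i] = 2.0f * m[8 + i] - m[12 + i]; /* z row = 2*z_row - w_row */
}
```

For example, a point that the original matrix maps to z/w = 0.5 (the middle of [0,1]) comes out at GL NDC z = 0, the middle of [-1,1], so the window-space depth is unchanged.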
I'm not sure, but it seems like ARB_clip_control can be used to solve this problem by changing the origin, though I'm not sure if it covers everything (texture coordinate origin, glViewport() origin, glTex*() function origin, etc.). Regardless, it's not something I can rely on being supported.

Solution 1: Just roll with it. Let OpenGL render everything upside down with its lower-left-corner origin, then just flip the image at the end to correct it. This is a very attractive solution because it adds zero overhead to a lot of functions:
+ glViewport() and glScissor() just work.
+ The matching texture coordinate origin means that render-to-texture sampling cancels out, so sampling render targets in GLSL using texture() and texelFetch() both works without any modifications.
+ gl_FragCoord.xy works without any modifications.
+ Possible culling differences due to face order differences can easily be compensated for in the API.
+ No dependence on the window/framebuffer size.
The only disadvantage, and it's a major one, is that textures loaded from disk (that weren't rendered to) will be flipped due to the mismatch in origin. There is no simple solution to this:
- Flipping all textures loaded from disk. That's a huge amount of CPU (or GPU, if compute shaders are supported) overhead, and it adds a huge amount of complexity for precompressed textures. In addition to flipping, I'd need to go in and manually flip the indices in the blocks of all texture compression formats I want to support, and this would have to happen while streaming in textures from disk. We cannot afford to duplicate all our texture assets just to support OpenGL, and the CPU overhead during streaming is way too much for low-end CPUs, so this is not a feasible solution.
- Have the user mark which textures are read from disk and which are render targets, and manually flip the y-coordinate before sampling the texture. 
This could be done with a simple macro injected into the OpenGL GLSL that the user calls on texture coordinates to flip them for texture(), but solving it for texelFetch() requires querying the texture's size using textureSize(), which I think would add a noticeable amount of overhead in the shader. In addition, in some cases the user may want to use either a preloaded texture or a rendered texture for the same sampler in GLSL, at which point more overhead would need to be introduced.
- Leave it entirely to the user to solve by flipping texture coordinates in the vertex data, etc. I would like to avoid this as it requires a lot of effort from the user of the abstraction layer, even though it most likely provides the best performance.

Solution 2: Flip everything to perfectly emulate VK/DX/Metal's origin. Pros:
+ Identical results with zero user interaction.
+ No need for manual texture coordinate flipping; the GLSL preprocessor just injects flipping into all calls to texture() and texelFetch() (including variations).
The cons are essentially everything from solution 1, PLUS overhead on a lot of CPU-side functions too: glViewport(), glScissor(), etc., which require the window/framebuffer size, and ALL texture fetches would need their coordinates inverted (not just fetches from disk-loaded textures). Is there a cleaner solution to all this? =< There must be a lot of OpenGL/DirectX abstractions out there that have to deal with the same issue. How do they do it?
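For illustration, the CPU-side flip that Solution 2 needs for glViewport()/glScissor() boils down to something like this sketch; note that it requires knowing the current framebuffer height, which is exactly the dependency discussed above:

```c
/* Sketch: convert a top-left-origin rectangle's y (VK/DX convention)
 * to OpenGL's bottom-left origin, given the framebuffer height. */
int flip_viewport_y(int y, int height, int framebuffer_height) {
    return framebuffer_height - (y + height);
}
```

The flip is its own inverse, so applying it twice returns the original y; a viewport at the top of a 1080-pixel framebuffer (y = 0, height = 100) maps to y = 980 in OpenGL.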


When is sRGB conversion being done in Vulkan?
theagentd replied to Anden's topic in Graphics and GPU Programming
Data sampled from an sRGB image view is always converted to linear when read by a shader. Similarly, writing to an sRGB image view will convert the linear values written by the shader to sRGB automatically. Blending is also performed in linear space for sRGB render targets. Compared to OpenGL, Vulkan's sRGB features are much simpler. sRGB is simply an encoding for color data, with more precision around 0.0 and less around 1.0. That's all you really need to know to use it. It's similar to how 16-bit floats, 10-bit floats, RGB9E5, etc. work: you don't have to care about the actual encoding, just write and read linear colors. The intermediate encoding only affects the precision. 
Phong tessellation not smooth?
theagentd replied to theagentd's topic in Graphics and GPU Programming
Thank you so much for all your help! Here's the view of the SSAO before and after tessellation. Note that I still haven't implemented the quadratic normal interpolation, so this isn't the final look of the SSAO with tessellation. Tessellation off: Tessellation on: It's not perfect, but a lot of the remaining problems are due to the original mesh. I'm quite happy with the result; hopefully the normal interpolation will make it even better. I'm afraid I can't really discuss the mesh. =/ It's a "new" way of making terrain meshes (as in, probably 10 000 other people have coded the same thing and I just don't know about it), but we'll see what comes of it. 
Phong tessellation not smooth?
theagentd replied to theagentd's topic in Graphics and GPU Programming
I found the bug. The calculation for tPN2 and tPN1 wasn't multiplying the dot-product result by 1/3, which is why I needed that alpha of 1/3 to compensate. I no longer need a 1/3 alpha, but there are still discontinuities in the normals. The continuity is correct at the 3 corners, but between the corners the normals get more and more different between triangles. For a somewhat tessellated sphere it's pretty much 100% smooth, but it doesn't turn a cube with smoothed normals into a shape with smooth normals. (I tried outputting the normal as a color as you suggested, but it was difficult for me to see the color difference. In the pictures below, I use a simple dot(normal, viewPosition) fake diffuse light to visualize the normal.) However, compared to this screenshot I dug up from the Unreal Engine docs, it looks pretty much the same. I have checked and rechecked the math a million times, and it's all correct. The reason my code is shorter is: 1. I don't parallelize over the XYZ coordinates, instead parallelizing over control points. The article I linked computes one of X, Y or Z for all 10 control points in each invocation of the shader, while mine computes one corner and the two control points between the i-th and the (i+1)%3-th control point per invocation, with the last (10th) control point computed in the evaluation shader (for now). This turns the article's 6 dot products per invocation into just 2 per invocation and is much less code. 2. I don't do the quadratic normal interpolation for the vertices yet, as I'm testing the smoothness of the generated vertex positions. From what I can tell, I seem to have hit the limit of current tessellation algorithms. =/ Well, it's better than Phong tessellation at least, but still not perfect. Thank you so much for your help, unbird! 
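For completeness, the corrected edge control point (with the dot-product term also divided by 3, as in the PN-triangles formulation by Vlachos et al. that the 1/3 fix above restores) can be sketched in plain C; the vec3 type and function names are just for illustration:

```c
typedef struct { float x, y, z; } vec3;

static float dot3(vec3 a, vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

/* Corrected PN-triangle edge control point near p0:
 * b210 = (2*p0 + p1 - dot(p1 - p0, n0) * n0) / 3
 * The whole expression, including the projection term, is divided by 3. */
vec3 pn_edge_point(vec3 p0, vec3 p1, vec3 n0) {
    vec3 dp = { p1.x - p0.x, p1.y - p0.y, p1.z - p0.z };
    float w = dot3(dp, n0);
    vec3 b;
    b.x = (2.0f * p0.x + p1.x - w * n0.x) / 3.0f;
    b.y = (2.0f * p0.y + p1.y - w * n0.y) / 3.0f;
    b.z = (2.0f * p0.z + p1.z - w * n0.z) / 3.0f;
    return b;
}
```

A useful sanity check: for a triangle lying in a plane whose vertex normals equal the face normal, the projection term vanishes and every control point stays in the plane, so the patch remains flat, which is exactly the property the over-scaled dot-product term was breaking.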
Phong tessellation not smooth?
theagentd replied to theagentd's topic in Graphics and GPU Programming
I implemented PN triangle tessellation; I made a better implementation than the article I linked, using the math it listed. Here's the surface textured but with triangle face normals. Ignore the texture stretching. Here is the same thing drawn with PN tessellation, still with face normals on (still the normals of the actual geometry). I'm a bit worried though; I needed to use an alpha value of 1/3 to get this result, or all the triangles just looked super puffy. It also doesn't handle all edges continuously, as you can see on the bulge to the right. Is this a limitation of PN triangles or do I have a bug somewhere? Here are my shaders. Sorry, they're not very readable...

Tessellation control:

out vec3 tPN3[];
out vec3 tPN2[];
out vec3 tPN1[];

//in main()
int i0 = gl_InvocationID;
int i1 = (gl_InvocationID + 1) % 3;
vec3 p0 = vViewPosition[i0];
vec3 p1 = vViewPosition[i1];
vec3 n0 = vNormal[i0];
vec3 n1 = vNormal[i1];
vec3 dp = p1 - p0;
tPN3[gl_InvocationID] = p0;
tPN2[gl_InvocationID] = p0*(2.0/3.0) + p1*(1.0/3.0) - dot(dp, n0) * n0;
tPN1[gl_InvocationID] = p0*(1.0/3.0) + p1*(2.0/3.0) - dot(dp, n1) * n1;

Tessellation evaluation:

in vec3 tPN3[];
in vec3 tPN2[];
in vec3 tPN1[];

//in main()
vec3 b300 = tPN3[0];
vec3 b030 = tPN3[1];
vec3 b003 = tPN3[2];
vec3 b210 = tPN2[0];
vec3 b021 = tPN2[1];
vec3 b102 = tPN2[2];
vec3 b120 = tPN1[0];
vec3 b012 = tPN1[1];
vec3 b201 = tPN1[2];
vec3 e = (b210 + b021 + b102 + b120 + b012 + b201) * (1.0/6.0);
vec3 v = (b300 + b030 + b003) * (1.0/3.0);
vec3 b111 = e + 0.5*(e - v);
vec3 rawPosition = interpolate(tPN3);
tc2 *= 3.0;
vec3 pnPosition = tc3[0] * b300 + tc3[1] * b030 + tc3[2] * b003
                + tc2[0] * tc1[1] * b210 + tc2[1] * tc1[2] * b021 + tc2[2] * tc1[0] * b102
                + tc1[0] * tc2[1] * b120 + tc1[1] * tc2[2] * b012 + tc1[2] * tc2[0] * b201
                + 6.0 * tc1[0] * tc1[1] * tc1[2] * b111;
vViewPosition = mix(rawPosition, pnPosition, alpha); //alpha=1.0/3.0

Thanks a lot for everything, unbird! You've been tremendously helpful! 
Edit: What do you mean by "debugging approach"? 
Phong tessellation not smooth?
theagentd replied to theagentd's topic in Graphics and GPU Programming
Thanks a lot for the info! I tried using an alpha of 3/4 and it did improve things a tiny bit, but I'm just not sure if it's good enough yet. Is there some other technique which has higher quality and produces a continuous result? What about those PN triangles in the article about Phong tessellation? Are there any other good techniques for smoothing the triangles? Due to the way the texturing works, I can't smooth it with a height map. 
Hello. I'm implementing Phong tessellation to smooth out my terrain a bit. I've tried two completely different implementations, and I'm pretty sure I'm doing everything correctly. I've implemented the Phong tessellation technique from http://onrendering.blogspot.se/2011/12/tessellation-on-gpu-curved-pn-triangles.html, essentially a copy-paste job, and it's identical to the "homemade" solution I was using before. I had a problem where I wasn't normalizing the normal in the vertex shader (it got scaled by the normal matrix), which messed up the Phong calculations, but I've now ensured that that doesn't happen. I found the problem when I noticed that my SSAO was still giving triangle-shaped artifacts on smooth, curved surfaces. As a test, I tried rendering the triangle normal calculated per pixel using normalize(cross(dFdx(vViewPosition), dFdy(vViewPosition))), and this led me to the following results. NO TESSELLATION, RAW TRIANGLE NORMALS: PHONG TESSELLATION: As you can see, Phong tessellation seems to add a certain level of smoothness inside each triangle, but the triangle edges have completely different normals. I don't get how this could happen. Like I said, I'm 99.99% sure my tessellation implementation is correct, since it's pretty much copy-pasted from the article I linked above AND my other implementation looks identical. 
Tessellation control:

out float termIJ[];
out float termJK[];
out float termIK[];

float phong(int i, vec3 q) {
    vec3 q_minus_p = q - vViewPosition[i];
    return q[gl_InvocationID] - dot(q_minus_p, vNormal[i]) * vNormal[i][gl_InvocationID];
}

//In main()
termIJ[gl_InvocationID] = phong(0, vViewPosition[1]) + phong(1, vViewPosition[0]);
termJK[gl_InvocationID] = phong(1, vViewPosition[2]) + phong(2, vViewPosition[1]);
termIK[gl_InvocationID] = phong(2, vViewPosition[0]) + phong(0, vViewPosition[2]);

Tessellation evaluation:

in float termIJ[];
in float termJK[];
in float termIK[];

#define tc1 gl_TessCoord

//in main()
vec3 tc2 = tc1*tc1;
vec3 tIJ = vec3(termIJ[0], termIJ[1], termIJ[2]);
vec3 tJK = vec3(termJK[0], termJK[1], termJK[2]);
vec3 tIK = vec3(termIK[0], termIK[1], termIK[2]);
vViewPosition = tc2[0]*tViewPosition[0] + tc2[1]*tViewPosition[1] + tc2[2]*tViewPosition[2]
              + tc1[0]*tc1[1]*tIJ + tc1[1]*tc1[2]*tJK + tc1[2]*tc1[0]*tIK;

Hence, my question is: is this a limitation of Phong tessellation or is there something wrong with my vertex inputs (or even my implementation)?

What is the exact correct normal map interpretation for Blender?
theagentd replied to theagentd's topic in Graphics and GPU Programming
I'm pretty sure the blurring is just a different style of filling out the unused space of the normal map. The internals look identical after all. That normal map was baked with Substance Painter. 
What is the exact correct normal map interpretation for Blender?
theagentd replied to theagentd's topic in Graphics and GPU Programming
Sifting through the entire source code of Blender would be a huge amount of work, especially since it's written in a programming language I'm not particularly experienced with... Otherwise, that would indeed be the "easiest" solution. Here are the results of a baked normal map: http://imgur.com/a/nWoOA This normal map for a simple smooth cube was generated using Substance Painter. We tried to make sure that the mesh was triangulated, but the end result still sucks. The mesh gets closer to the right result (Y is inverted here, as you can see in the normal map, which looked the most correct), but it's... "wobbly" and uneven, even though it should be perfectly flat like the original high-poly model. It's clear that I'm using a different tangent basis from, well, everything else in the entire world, it seems. EDIT: Ahaa! This: http://gamedev.stackexchange.com/questions/128023/how-does-mikktspace-work-for-calculating-the-tangent-space-during-normal-mapping seems to be exactly what I'm looking for!!! Of course it's unanswered... 
What is the exact correct normal map interpretation for Blender?
theagentd replied to theagentd's topic in Graphics and GPU Programming
Thanks a lot for your response; we'll be sure to set up our Blender settings correctly. But the problem I'm talking about comes from very subtle errors in direction, not obvious things like inverted normals or inverted Y coordinates. I need to figure out the exact algorithm that Blender uses for normal mapping so I can use the same normals, tangents, bitangents and normalization steps as it does, or subtle errors will be introduced... Still, thank you for all that information; we'll be sure to take it all into consideration. We're still trying to learn our way around Blender, but I will try this as soon as I can. 
What is the exact correct normal map interpretation for Blender?
theagentd posted a topic in Graphics and GPU Programming
Hello. We're trying to bake normal maps for a low-poly model from a high-poly model in Blender. The output is weird as hell, with lots of color gradients that seem to turn smoothed-out normals back into sharp-edged objects again. (There are even seemingly inverted normals and other nonsensical, crazy normals, but that's not really the problem.) The problem here is that these smooth-to-sharp normal maps don't show up correctly in our own engine. It seems like even a tiny difference in algorithms is enough to completely destroy the look, as a surface that is supposed to be flat suddenly becomes a tiny bit bent. What I want to know is what exactly I should do to get the exact same result in our game engine as Blender gets.
- What normals, tangents and bitangents are used for the baking?
- Is it possible to use tangentless shaders that calculate the tangents and bitangents using derivatives (this is what we use right now) and get correct results?
- If not:
  - Is it possible to generate the tangents (in an identical way to the way Blender does it), or do they have to be exported by Blender to be correct?
  - When are normals, tangents and bitangents supposed to be normalized? Per vertex? Per pixel? Not at all? Should the normal mapping result be normalized?
In short: what is the exact set of operations that Blender does to apply a normal map (generated by itself) to an object, and how do I replicate those using shaders? Our current normal mapping code for calculating the tangent space, for reference: http://pastebin.com/cP92PQVr Any help would be very appreciated. I've been battling this normal map interpretation mismatch for so long now...
