theagentd

Members

  • Content count: 196
  • Joined
  • Last visited

Community Reputation: 990 Good

About theagentd

  • Rank: Member
  1. Hello! I have a very fast, high-quality SSAO implementation right now that works a bit backwards, and I've run into a problem with it. In most (really bad, IMO) tutorials online, SSAO is done by sampling random points in a half sphere around the view space position of each pixel, then computing "occlusion" from depth differences. In contrast, I take random samples in a circle around each pixel, compute the view space position of each sample, then compute occlusion in 3D in view space, taking the distance and direction towards each sample into account. This produces very good quality results with only a small number of samples (together with a bilateral blur).

     However, this technique has an issue with the perspective projection at the edges of the camera. The farther out towards the edges you get from the camera, the more "stretched" the view gets. What I really want to do is sample the 2D area of a projected 3D sphere with a certain radius at each pixel, which is what the usual SSAO examples are doing. In addition, I've recently experimented with eye motion tracking, which can produce extremely skewed/stretched projection matrices, which massively worsen this issue. To get this right, I'd essentially need to go from view space to screen space, sample depth at that position, then unproject that pixel back to view space again. This would need ~40% more ALU instructions than my current version, so it'd make it really slow.

     This led me to thinking that it'd be great if I could keep the original 2D-sampling code and simply compensate for the stretching in 2D. As I already have a 2D matrix which handles random rotation and scaling of the sample offsets, if I could bake the circle elongation into that matrix I could compensate for the stretching without adding any per-sample cost to the shader! It'd just need some extra setup code.

     All this leads to my question: given a sphere in view space (position, radius) and a projection matrix, how do I calculate the shape of the ellipse that results from projecting the sphere? Optimally, I'd want to calculate the center of the ellipse in NDC, then a 2D matrix which, when multiplied by a random 2D sample offset on a unit circle (x^2+y^2<1.0), results in a position on the ellipse in NDC instead. So far I've found this, which shows exactly the axes/vectors I want to find: https://www.shadertoy.com/view/XdBGzd. It doesn't use a projection matrix, however, and from what I can tell it does all its calculations in view space, which I would like to avoid if possible. I've been playing around with the math to try to get it to work out, and I think I've almost nailed calculating the center of the ellipse in NDC, but I don't really know where to start on calculating the long and short axes of the ellipse...

     This kind of stretch compensation could have a lot of cool uses, as it ensures that (with a properly configured FOV) things that are supposed to be round always look round from the viewer's viewpoint. Theoretically, it could also be used for bloom (to make sure the blur is uniform from the viewer's viewpoint), depth of field, motion blur, etc.

     EDIT: Indeed, I believe the center of the ellipse in NDC is vec2 ellipseCenter = ndc.xy * (1.0 + r^2 / (ndc.w^2 - r^2)), where ndc.xy is the projected center of the sphere, r is the radius of the sphere and ndc.w is the (positive) view space depth of the sphere.
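     To make the edit above concrete, here is a minimal GLSL sketch of that ellipse-center formula. The function name and parameters (viewCenter, radius, proj) are made up for illustration; it assumes a standard perspective projection where clip.w equals the positive view-space depth, and it simply restates the formula from the edit rather than a fully verified derivation.

        // Sketch: center of the screen-space ellipse of a projected view-space sphere,
        // per the formula in the edit above. All names are illustrative.
        vec2 projectedEllipseCenter(vec3 viewCenter, float radius, mat4 proj)
        {
            vec4 clip  = proj * vec4(viewCenter, 1.0);
            vec2 ndcXY = clip.xy / clip.w;            // projected sphere center in NDC
            float w    = clip.w;                      // positive view-space depth
            float r2   = radius * radius;
            return ndcXY * (1.0 + r2 / (w * w - r2)); // ellipse center in NDC
        }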
  2. Thanks, good to know that textureSize doesn't have much overhead. AMD's ancient shader analyzer seemed to equate it with the cost of a texture sample, which seemed a bit weird to me, but I assumed that it was simply more expensive than a uniform read.

     Well, if Unity can't come up with a good solution, I guess there's no way to hide the origin difference completely. It's probably best to just expose it to the user and let them deal with it using some simple tools/macros-ish stuff.
  3. I tried your demo, MJP. It has a huge amount of ghosting when rotating. Do you not do any reprojecting?
  4. Hello.

     I'm working on an abstraction layer for OpenGL and Vulkan, with the plan to add other APIs in the future, like DirectX 12 and possibly even Metal. I'm coming from OpenGL, but I've spent a huge amount of time reading up on how Vulkan works. Now I want to write a powerful abstraction layer so that we can write our game once and have it run on multiple APIs, letting us support more hardware and OS combinations. I want to write this myself and not use any libraries for it. The target is a minimal feature set of OpenGL 3.3 plus some widely supported extensions to match certain Vulkan features, with OpenGL extensions allowing for more advanced features that are supported in both APIs.

     My ultimate goal is to minimize or even eliminate the amount of API-specific code the user has to write, and in my investigations I found out that Vulkan uses a different NDC depth range (z from 0 to 1) and origin (upper left) than OpenGL. The NDC z range is a good change, as it allows for the 32-bit float depth buffer reverse depth trick and in general has better precision, so I want to embrace that whenever possible. This is pretty easy to do using either NV_depth_buffer_float or ARB_clip_control, whichever is supported, both of which are supported by both AMD and Nvidia. For certain Intel GPUs and very old AMD GPUs that support neither of those two, a simple manual fallback for the user of the abstraction is to modify the projection matrices they use, which is easy with the math library I use, so I consider this a "solved" problem.

     The coordinate system origin difference is a much tougher nut to crack. It makes the most sense to go with Vulkan's origin, as it's the standard in the other APIs as well (DirectX and Metal). I see two possible solutions to the problem, but they either require manual interaction from the user or force me to inject mandatory operations into shaders/the abstraction, making it slower and more limited. I'm not sure, but it seems like ARB_clip_control could also be used to solve this problem by changing the origin, but I don't know if it covers everything (texture coordinate origin, glViewport() origin, glTex*() function origins, etc.). Regardless, it's not something I can rely on being supported.

     Solution 1: Just roll with it. Let OpenGL render everything upside down with its lower-left-corner origin, then flip the image at the end to correct it. This is a very attractive solution because it adds zero overhead to a lot of functions:
      + glViewport() and glScissor() just work.
      + The matching texture coordinate origin means that render-to-texture sampling cancels out, so sampling render targets in GLSL using texture() and texelFetch() both work without any modifications.
      + gl_FragCoord.xy works without any modifications.
      + Possible culling differences due to face order differences can easily be compensated for in the API.
      + No dependence on the window/framebuffer size.
     The only disadvantage, and it's a major one, is that textures loaded from disk (that weren't rendered to) will be flipped due to the mismatch in origin. There is no simple solution to this:
      - Flip all textures loaded from disk. That's a huge amount of CPU (or GPU, if compute shaders are supported) overhead and a huge amount of complexity for precompressed textures. In addition to flipping the rows, I'd need to go in and manually flip the indices inside the blocks of every texture compression format I want to support, and this would have to happen while streaming in textures from disk. We cannot afford to duplicate all our texture assets just to support OpenGL, and the CPU overhead during streaming is way too much for low-end CPUs, so this is not feasible.
      - Have the user mark which textures are read from disk and which are render targets, and manually flip the y-coordinate before sampling disk-loaded textures. This could be done with a simple macro injected into the OpenGL GLSL that the user calls on texture coordinates for texture(), but solving it for texelFetch() requires querying the texture's size using textureSize(), which I think would add a noticeable amount of overhead in the shader. In addition, in some cases the user may want to bind either a preloaded texture or a rendered texture to the same sampler in GLSL, at which point even more overhead would be needed.
      - Leave it entirely to the user to solve by flipping texture coordinates in the vertex data, etc. I would like to avoid this as it requires a lot of effort from the user of the abstraction layer, even though it most likely provides the best performance.

     Solution 2: Flip everything to perfectly emulate VK/DX/Metal's origin. Pros:
      + Identical results with zero user interaction.
      + No need for manual texture coordinate flipping; the GLSL preprocessor just injects flipping into all calls to texture() and texelFetch() (including their variations).
     The cons are essentially everything from Solution 1, PLUS overhead on a lot of CPU-side functions too: glViewport(), glScissor(), etc. require the window/framebuffer size, and ALL texture fetches would need their coordinates inverted (not just fetches from disk-loaded textures).

     Is there a cleaner solution to all this? =< There must be a lot of OpenGL/DirectX abstractions out there that have to deal with the same issue. How do they do it?
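     As a rough illustration of the "flip before sampling" option from Solution 1, here is a hedged GLSL sketch of the kind of macros that could be injected into the OpenGL build of a shader. The macro names, the TARGET_OPENGL define and the usage are all hypothetical, and the texelFetch variant shows exactly where the textureSize() cost mentioned above comes from:

        // Hypothetical helpers injected only into the OpenGL build of a shader;
        // on Vulkan they compile to no-ops. All names here are illustrative.
        #ifdef TARGET_OPENGL
            // Normalized coordinates (assumes [0,1] UVs): flip Y for disk-loaded textures.
            #define FLIP_UV(uv) vec2((uv).x, 1.0 - (uv).y)
            // Integer texel coordinates: needs the texture height, hence textureSize().
            #define FLIP_TEXEL(tex, p, lod) ivec2((p).x, textureSize(tex, lod).y - 1 - (p).y)
        #else
            #define FLIP_UV(uv) (uv)
            #define FLIP_TEXEL(tex, p, lod) (p)
        #endif

        // Usage, assuming 'albedo' is a texture loaded from disk:
        //   vec4 a = texture(albedo, FLIP_UV(vTexCoord));
        //   vec4 b = texelFetch(albedo, FLIP_TEXEL(albedo, texelPos, 0), 0);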
  5. Data sampled from an sRGB image view is always converted to linear when read by a shader. Similarly, writing to an sRGB image view will convert the linear values written by the shader to sRGB automatically. Blending is also performed in linear space for sRGB render targets.

     Compared to OpenGL, Vulkan's sRGB handling is much simpler. sRGB is simply an encoding for color data, with more precision around 0.0 and less around 1.0. That's all you really need to know to use it. It's similar to how 16-bit floats, 10-bit floats, RGB9E5, etc. work: you don't have to care about the actual encoding, just write and read linear colors. The intermediate encoding only affects the precision.
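     For reference, the encoding the hardware applies behind the scenes is the standard sRGB transfer function. Here is a GLSL sketch of the per-channel conversions, which you normally never write yourself when using sRGB image views; it is only meant to show where the extra precision near 0.0 comes from:

        // Standard sRGB <-> linear conversions (per channel).
        // With sRGB image views/render targets the hardware does this automatically.
        float srgbToLinear(float s)
        {
            return (s <= 0.04045) ? s / 12.92
                                  : pow((s + 0.055) / 1.055, 2.4);
        }

        float linearToSrgb(float l)
        {
            return (l <= 0.0031308) ? l * 12.92
                                    : 1.055 * pow(l, 1.0 / 2.4) - 0.055;
        }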
  6. Thank you so much for all your help!

     Here's a view of the SSAO before and after tessellation. Note that I still haven't implemented the quadratic normal interpolation, so this isn't the final look of the SSAO with tessellation.

     Tessellation off:

     Tessellation on:

     It's not perfect, but a lot of the remaining problems are due to the original mesh. I'm quite happy with the result; hopefully the normal interpolation will make it even better.

     I'm afraid I can't really discuss the mesh. =/ It's a "new" way of making terrain meshes (as in, probably 10 000 other people have coded the same thing and I just don't know about it), but we'll see what comes of it.
  7. I found the bug. The calculation for tPN2 and tPN1 wasn't multiplying the dot-product result by 1/3, which is why I needed that alpha of 1/3 to compensate. I no longer need a 1/3 alpha, but there are still discontinuities in the normal.

     The continuity is correct at the 3 corners, but between the corners the normals get more and more different between triangles. For a somewhat tessellated sphere it's pretty much 100% smooth, but it doesn't turn a cube with smoothed normals into a shape with smooth normals.

     (I tried outputting the normal as a color as you suggested, but it was difficult for me to see the color difference. In the pictures below, I use a simple dot(normal, viewPosition) fake diffuse light to visualize the normal.)

     However, compared to this screenshot I dug up from the Unreal Engine docs, it looks pretty much the same.

     I have checked and rechecked the math a million times, and it's all correct. The reason my code is shorter is:

     1. I don't parallelize over the XYZ coordinates; instead I parallelize over control points. The article I linked computes one of X, Y or Z for all 10 control points in each invocation of the shader, while mine computes one corner and the two edge control points between the i-th and the (i+1)%3-th corner per invocation, with the last (10th) control point computed in the evaluation shader (for now). This turns the article's 6 dot products per invocation into just 2 per invocation and is much less code.

     2. I don't do the quadratic normal interpolation for the vertices yet, as I'm testing the smoothness of the generated vertex positions.

     From what I can tell, I seem to have hit the limit of current tessellation algorithms. =/ Well, it's better than Phong tessellation at least, but still not perfect. Thank you so much for your help, unbird!
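     For clarity, this is a sketch of what the fix described above amounts to in the control shader from the post below, reusing its variable names; the only change is the 1/3 factor on the dot-product projection term:

        // Corrected edge control points: the projection term now carries the 1/3
        // factor from the PN-triangles construction (the missing factor described above).
        vec3 dp = p1 - p0;
        tPN2[gl_InvocationID] = p0*(2.0/3.0) + p1*(1.0/3.0) - dot( dp, n0) * n0 * (1.0/3.0);
        tPN1[gl_InvocationID] = p0*(1.0/3.0) + p1*(2.0/3.0) - dot(-dp, n1) * n1 * (1.0/3.0);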
  8. I implemented PN triangle tessellation; I made a better implementation than the article I linked, using the math it listed.

     Here's the surface textured but with triangle face normals. Ignore the texture stretching.

     Here is the same thing drawn with PN tessellation, still with face normals on (still the normals of the actual geometry):

     I'm a bit worried, though; I needed to use an alpha value of 1/3 to get this result, or all the triangles just looked super puffy. It also doesn't handle all edges continuously, as you can see on the bulge to the right. Is this a limitation of PN triangles or do I have a bug somewhere?

     Here are my shaders. Sorry, they're not very readable...

     Tessellation control:

        out vec3 tPN3[];
        out vec3 tPN2[];
        out vec3 tPN1[];

        //in main()
        int i0 = gl_InvocationID;
        int i1 = (gl_InvocationID + 1) % 3;

        vec3 p0 = vViewPosition[i0];
        vec3 p1 = vViewPosition[i1];
        vec3 n0 = vNormal[i0];
        vec3 n1 = vNormal[i1];
        vec3 dp = p1 - p0;

        tPN3[gl_InvocationID] = p0;
        tPN2[gl_InvocationID] = p0*(2.0/3.0) + p1*(1.0/3.0) - dot( dp, n0) * n0;
        tPN1[gl_InvocationID] = p0*(1.0/3.0) + p1*(2.0/3.0) - dot(-dp, n1) * n1;

     Tessellation evaluation:

        in vec3 tPN3[];
        in vec3 tPN2[];
        in vec3 tPN1[];

        //in main()
        vec3 b300 = tPN3[0];
        vec3 b030 = tPN3[1];
        vec3 b003 = tPN3[2];
        vec3 b210 = tPN2[0];
        vec3 b021 = tPN2[1];
        vec3 b102 = tPN2[2];
        vec3 b120 = tPN1[0];
        vec3 b012 = tPN1[1];
        vec3 b201 = tPN1[2];

        vec3 e = (b210 + b021 + b102 + b120 + b012 + b201) * (1.0/6.0);
        vec3 v = (b300 + b030 + b003) * (1.0/3.0);
        vec3 b111 = e + 0.5*(e-v);

        vec3 rawPosition = interpolate(tPN3);

        tc2 *= 3.0;
        vec3 pnPosition =
              tc3[0] * b300 + tc3[1] * b030 + tc3[2] * b003
            + tc2[0] * tc1[1] * b210 + tc2[1] * tc1[2] * b021 + tc2[2] * tc1[0] * b102
            + tc1[0] * tc2[1] * b120 + tc1[1] * tc2[2] * b012 + tc1[2] * tc2[0] * b201
            + 6.0 * tc1[0] * tc1[1] * tc1[2] * b111;

        vViewPosition = mix(rawPosition, pnPosition, alpha); //alpha=1.0/3.0

     Thanks a lot for everything, unbird! You've been tremendously helpful!

     Edit: What do you mean by "debugging approach"?
  9. Thanks a lot for the info!

     I tried using an alpha of 3/4 and it did improve it a tiny bit, but I'm just not sure if it's good enough yet. Is there some other technique which has higher quality and produces a continuous result? What about those PN triangles in the article about Phong tessellation? Are there any other good techniques for smoothing the triangles? Due to the way the texturing works, I can't smooth it with a height map.
  10. Hello.

      I'm implementing Phong tessellation to smooth out my terrain a bit. I've tried two completely different implementations, and I'm pretty sure I'm doing everything correctly. I've implemented the Phong tessellation technique from http://onrendering.blogspot.se/2011/12/tessellation-on-gpu-curved-pn-triangles.html, essentially a copy-paste job, and it's identical to the "homemade" solution I was using before. I had a problem where I wasn't normalizing the normal in the vertex shader (it got scaled by the normal matrix), which messed up the Phong calculations, but I've now made sure that doesn't happen.

      I found the problem when I noticed that my SSAO was still giving triangle-shaped artifacts on smooth curved surfaces. As a test, I tried rendering the triangle normal calculated per pixel using normalize(cross(dFdx(vViewPosition), dFdy(vViewPosition))), and this led me to the following results.

      NO TESSELLATION, RAW TRIANGLE NORMALS:

      PHONG TESSELLATION:

      As you can see, Phong tessellation seems to add a certain level of smoothness inside each triangle, but the triangle edges have completely different normals. I don't get how this could happen. Like I said, I'm 99.99% sure my tessellation implementation is correct since it's pretty much copy-pasted from the article I linked above AND my other implementation looks identical.

      Tessellation control:

         out float termIJ[];
         out float termJK[];
         out float termIK[];

         float phong(int i, vec3 q){
             vec3 q_minus_p = q - vViewPosition[i];
             return q[gl_InvocationID] - dot(q_minus_p, vNormal[i]) * vNormal[i][gl_InvocationID];
         }

         //In main()
         termIJ[gl_InvocationID] = phong(0, vViewPosition[1]) + phong(1, vViewPosition[0]);
         termJK[gl_InvocationID] = phong(1, vViewPosition[2]) + phong(2, vViewPosition[1]);
         termIK[gl_InvocationID] = phong(2, vViewPosition[0]) + phong(0, vViewPosition[2]);

      Tessellation evaluation:

         in float termIJ[];
         in float termJK[];
         in float termIK[];

         #define tc1 gl_TessCoord

         //in main()
         vec3 tc2 = tc1*tc1;
         vec3 tIJ = vec3(termIJ[0], termIJ[1], termIJ[2]);
         vec3 tJK = vec3(termJK[0], termJK[1], termJK[2]);
         vec3 tIK = vec3(termIK[0], termIK[1], termIK[2]);

         vViewPosition = tc2[0]*tViewPosition[0] + tc2[1]*tViewPosition[1] + tc2[2]*tViewPosition[2]
                       + tc1[0]*tc1[1]*tIJ + tc1[1]*tc1[2]*tJK + tc1[2]*tc1[0]*tIK;

      Hence, my question is: is this a limitation of Phong tessellation, or is there something wrong with my vertex inputs (or even my implementation)?
  11. I'm pretty sure the blurring is just a different style of filling out the unused space of the normal map. The internals look identical after all. That normal map was baked with Substance Painter.
  12. Sifting through the entire source code of Blender would be a huge amount of work, especially since it's written in a programming language I'm not particularly experienced with... Otherwise, that would indeed be the "easiest" solution.

      Here are the results of a baked normal map: http://imgur.com/a/nWoOA This normal map for a simple smooth cube was generated using Substance Painter. We tried to make sure that the mesh was triangulated, but the end result still sucks. The mesh gets closer to the right result (Y is inverted here, as you can see in the normal map, which looked the most correct), but it's... "wobbly" and uneven even though it should be perfectly flat like the original high-poly model. It's clear that I'm using a different tangent basis from, well, everything else in the entire world, it seems.

      EDIT: Ahaa! This: http://gamedev.stackexchange.com/questions/128023/how-does-mikktspace-work-for-calculating-the-tangent-space-during-normal-mapping seems to be exactly what I'm looking for!!! Of course it's unanswered...
  13. Thanks a lot for your response; we'll be sure to set up our Blender settings correctly, but the problem I'm talking about comes from very subtle errors in direction, not obvious things like inverted normals or inverted Y coordinates. I need to figure out the exact algorithm that Blender uses for normal mapping to ensure that I use the same normals, tangents, bitangents and normalization steps as it does, or subtle errors will be introduced... Still, thank you for all that information; we'll be sure to take it all into consideration.

      We're still trying to learn our way around Blender, but I will try to do this as soon as I can.
  14. Hello.

      We're trying to bake normal maps for a low-poly model from a high-poly model in Blender. The output is weird as hell, with lots of color gradients that seemingly turn smoothed-out normals back into sharp-edged objects again. (There are even seemingly inverted normals and other nonsensical normals, but that's not really the problem.) The problem is that these smooth--->sharp normal maps don't show up correctly in our own engine. It seems like even a tiny difference in algorithms is enough to completely destroy the look, as a surface that is supposed to be flat suddenly becomes slightly bent. What I want to know is exactly what I should do to get the same result in our game engine as Blender gets.

      > What normals, tangents and bitangents are used for the baking?
      > Is it possible to use tangent-less shaders that calculate the tangents and bitangents using derivatives (this is what we use right now) and get correct results?
      > If not:
        - Is it possible to generate the tangents (in an identical way to the way Blender does it), or do they have to be exported by Blender to be correct?
        - When are normals, tangents and bitangents supposed to be normalized? Per vertex? Per pixel? Not at all? Should the normal mapping result be normalized? What is the exact set of operations that Blender does to apply a normal map (generated by itself) to an object, and how do I replicate those using shaders?

      Our current normal mapping code for calculating the tangent space, for reference: http://pastebin.com/cP92PQVr

      Any help would be very appreciated. I've been battling this normal map interpretation mismatch for so long now...
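      For context, the "tangent-less" derivative approach mentioned above usually looks something like the sketch below (in the spirit of Schüler's cotangent-frame technique). This is not the pastebin code linked above, just a generic version, and in general a frame computed this way will not exactly match the MikkTSpace-style basis a baker uses, which is precisely the mismatch being described:

         // Generic derivative-based tangent frame (fragment shader), illustrative only.
         // N = normalized interpolated vertex normal, p = view-space position, uv = texcoord.
         mat3 cotangentFrame(vec3 N, vec3 p, vec2 uv)
         {
             // Screen-space derivatives of position and texture coordinates
             vec3 dp1  = dFdx(p);
             vec3 dp2  = dFdy(p);
             vec2 duv1 = dFdx(uv);
             vec2 duv2 = dFdy(uv);

             // Solve the linear system for the tangent and bitangent directions
             vec3 dp2perp = cross(dp2, N);
             vec3 dp1perp = cross(N, dp1);
             vec3 T = dp2perp * duv1.x + dp1perp * duv2.x;
             vec3 B = dp2perp * duv1.y + dp1perp * duv2.y;

             // Scale-invariant frame
             float invmax = inversesqrt(max(dot(T, T), dot(B, B)));
             return mat3(T * invmax, B * invmax, N);
         }

         // Usage: vec3 n = normalize(cotangentFrame(N, p, uv) * (texture(normalMap, uv).xyz * 2.0 - 1.0));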
  15. Thanks for all the awesome responses! I'm not entirely sure I can implement this though... I will try to check out some Java libraries for accomplishing all this for me, hopefully. And here I thought I had a new idea... >___>

      The only thing I had really heard about before was meta-balls.