sebjf

Members

  • Content count: 22
  • Joined
  • Last visited

Community Reputation: 187 Neutral

About sebjf

  • Rank: Member
  1. Hi richardurich, I think that's it. I first put a DeviceMemoryBarrier() call between the InterlockedMin() and a re-read of the value; that didn't work, though it may be due to clip() (I recall reading about the effect this has on UAVs and will see if I can find the docs). Then I removed the test entirely and wrote a second shader to draw the contents of the depth buffer, and that appears to be very stable. I will see if I can get it to work as a test in a single shader, though I could probably refactor my project to just use a single UAV, which would be more efficient. Thank you very much for your insights. I have been working at this for two days! Sj
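A minimal sketch of the kind of second-pass debug shader described above (editor's illustration; it reuses the depth buffer and screenparams names from the original question further down, while v2f and the grey-scale mapping are assumptions):

// Editor's sketch: a separate pass that visualises the contents of the depth UAV
// written by the first pass. Cleared texels (0xFFFFFFFF) render white, written
// values render dark. v2f and the scale factor are assumptions.
RWStructuredBuffer<uint> depth : register(u1);
float2 screenparams;

float4 fragDebug (v2f i) : SV_Target
{
    uint2 upos = i.screenpos * screenparams;
    uint offset = (upos.y * screenparams.x) + upos.x;
    uint d = depth[offset];
    float g = (d == 0xFFFFFFFF) ? 1.0 : saturate(d / 100.0);
    return float4(g, g, g, 1);
}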
  2. The colour values are always <= 0 or >= 1; I make sure of that when generating the test data (I also check it in RenderDoc). Currently the shader is written as

float c_norm = clamp(col.x, 0, 1) * 100;
uint d_uint = (uint)c_norm;
uint d_uint_original = 0;

just to be sure. I am using the colour values to make this minimal example because they are easy to control. In my real project the masked value is more complex, but as can be seen the bug occurs even with something as simple as vertex colours. Yes, that's right - it has three possible values: 0, 1 (from the fragments) or 0xFFFFFFFF, which is the initialisation value. I have confirmed this is the case using the conditional as well. That's why I suspect it's a timing issue rather than, say, reading the wrong part of memory or not binding anything, even though I can't fully trust the debugger. This is meant to be the absolute simplest case I can come up with that still shows the issue.
  3. Hi samoth, Yes it does, in this narrow case anyway - usually I use asuint() or, as you say, multiply by a large number and then cast. Above I did a direct cast because it was easy to see which triangle wrote each value when checking the memory in RenderDoc. (I've tried all sorts of casts and scales to see if that was causing this issue, however, and none of them has any effect.)
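A minimal sketch of the two conversions mentioned above (editor's illustration; the helper names are hypothetical): asuint() reinterprets the bits and preserves ordering for non-negative floats, while scaling and casting quantises the value (here by 100, matching the test shader).

// Editor's sketch (hypothetical helpers) of the float-to-uint depth conversions
// discussed above.
uint DepthToUintBits(float d)
{
    // bit reinterpretation; for non-negative floats the uints sort in the same order
    return asuint(d);
}

uint DepthToUintScaled(float d)
{
    // scale and truncate, as in the *100 test shader
    return (uint)(saturate(d) * 100.0);
}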
  4. @richardurich I originally tried to upload it to the forum but kept receiving errors. I've added the relevant code though, since, as you say, it's not too long. I can't see anything I could change in it - e.g. calling InterlockedMin like a method, as one would with a RWByteAddressBuffer, just results in a compiler error.
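A minimal sketch of the distinction mentioned above (editor's illustration): RWByteAddressBuffer exposes the atomic as a member method, whereas a RWStructuredBuffer element uses the global intrinsic, which is why the method-style call fails to compile.

// Editor's sketch, illustrative only: the two ways the atomic is spelled in HLSL.
RWByteAddressBuffer      depthRaw        : register(u1);
RWStructuredBuffer<uint> depthStructured : register(u2);

void MinRaw(uint byteOffset, uint value, out uint original)
{
    // member method on a byte address buffer
    depthRaw.InterlockedMin(byteOffset, value, original);
}

void MinStructured(uint index, uint value, out uint original)
{
    // global intrinsic on a structured buffer element
    InterlockedMin(depthStructured[index], value, original);
}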
  5. Hi,

I am working in Unity, trying to create depth-buffer-like functionality using atomics on a UAV in a pixel shader. I find, though, that it does not behave as expected: it appears as if the InterlockedMin call is not behaving atomically.

I say appears, because all I can see is that the conditional based on the original memory value returned by InterlockedMin does not behave correctly. Whatever causes incorrect values to be returned from InterlockedMin also occurs in the frame debugger - Unity's and RenderDoc - so when debugging a pixel it changes from under me! By changing this conditional I can see that InterlockedMin is not returning random data; it returns values that the memory feasibly would contain, just not what should be the minimum.

Here is a video showing what I mean: https://vid.me/tUP8
Here is a video showing the same behaviour for a single capture in RenderDoc: https://vid.me/4Fir
(In that video the pixel shader is trying to use InterlockedMin to draw only the fragments with the lowest vertex colours encountered so far, and discard all others.)

Things I have tried:
  • RWByteAddressBuffer instead of RWStructuredBuffer
  • Different creation flags for ComputeBuffer (though since it's Unity the options are limited and opaque)
  • Using a RenderTexture instead of a ComputeBuffer
  • Using the globallycoherent prefix
  • Clearing the buffer in the pixel shader then syncing with a DeviceMemoryBarrier() call
  • Clearing the buffer in the pixel shader every other frame with a CPU-set flag
  • Using a different atomic (InterlockedMax())
  • Using a different slot and/or binding calls

Here is the minimum working example that created those videos: https://www.dropbox.com/s/3z2g85vcqw75d1a/Atomics%20Bug%20Minimum%20Working%20Example.zip?dl=0

I can't think of what else to try. I don't see how the issue could be anything other than the InterlockedMin call, and I don't see what else in my code could affect it...

Below is the relevant fragment shader:

float4 frag (v2f i) : SV_Target
{
    // sample the texture
    float4 col = i.colour;
    float c_norm = clamp(col.x, 0, 1);    // one triangle is <= 0 and the other is >= 1
    uint d_uint = (uint)c_norm;
    uint d_uint_original = 0;
    uint2 upos = i.screenpos * screenparams;
    uint offset = (upos.y * screenparams.x) + upos.x;
    InterlockedMin(depth[offset], d_uint, d_uint_original);
    if (d_uint > d_uint_original)
    {
        clip(-1);    // we haven't updated the depth buffer (or at least shouldn't have) so don't write the pixel
    }
    return col;
}

With the declaration of the buffer being:

RWStructuredBuffer<uint> depth : register (u1);

And here is how the buffer is being bound and used:

// Use this for initialization
void Start ()
{
    int length = Camera.main.pixelWidth * Camera.main.pixelHeight;
    depthbufferdata = new uint[length];
    for (int i = 0; i < length; i++)
    {
        depthbufferdata[i] = 0xFFFFFFFF;
    }
    depthbuffer = new ComputeBuffer(length, sizeof(uint));
}

// Update is called once per frame
void OnRenderObject ()
{
    depthbuffer.SetData(depthbufferdata); // clears the mask; in my actual project this is done with a compute shader
    Graphics.SetRandomWriteTarget(1, depthbuffer);
    material.SetVector("screenparams", Camera.main.pixelRect.size);
    material.SetPass(0);
    Graphics.DrawMeshNow(mesh, transform.localToWorldMatrix);
}

Sj
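Since the comment above notes that the real project clears the mask with a compute shader rather than SetData, here is a minimal sketch of such a clear pass (editor's illustration; the kernel and variable names are assumptions), dispatched from C# with roughly length/64 thread groups via ComputeShader.Dispatch:

// Editor's sketch (hypothetical names): compute-shader clear for the depth mask,
// writing the same initialisation value the CPU-side loop uses.
#pragma kernel ClearDepth

RWStructuredBuffer<uint> depth;
uint bufferLength;

[numthreads(64, 1, 1)]
void ClearDepth (uint3 id : SV_DispatchThreadID)
{
    if (id.x < bufferLength)
        depth[id.x] = 0xFFFFFFFF;
}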
  6. Hi,

I am working in Unity on pieces of shader code to convert between a memory address and a coordinate in a uniform grid. To do this I use the modulo operator, but find odd behaviour I cannot explain.

Below is a visualisation of the grid. It simply draws a point at each gridpoint. The locations for each vertex are computed from the offset into the fixed-size uniform grid, i.e. the cell vector is computed from the vertex shader instance ID, and this is in turn converted into NDCs and rendered.

I start with the naive implementation:

uint3 GetFieldCell(uint id, float3 numcells)
{
    uint3 cell;
    uint layersize = numcells.x * numcells.y;
    cell.z = floor(id / layersize);
    uint layeroffset = id % layersize;
    cell.y = floor(layeroffset / numcells.x);
    cell.x = layeroffset % numcells.x;
    return cell;
}

And see the following visual artefacts:

[attachment=35344:modulo_1.PNG]

I discover that this is due to the modulo operator. If I replace it with my own modulo operation:

uint3 GetFieldCell(uint id, float3 numcells)
{
    uint3 cell;
    uint layersize = numcells.x * numcells.y;
    cell.z = floor(id / layersize);
    uint layeroffset = id - (cell.z * layersize);
    cell.y = floor(layeroffset / numcells.x);
    cell.x = layeroffset - (cell.y * numcells.x);
    return cell;
}

The artefact disappears:

[attachment=35345:modulo_3.PNG]

I debug one of the errant vertices in the previous shader with RenderDoc, and find that the modulo is implemented using frc, rather than a true integer modulo op, leaving small components that work their way into the coordinate calculations:

[attachment=35346:modulo_2.PNG]

So I try again:

uint3 GetFieldCell(uint id, float3 numcells)
{
    uint3 cell;
    uint layersize = numcells.x * numcells.y;
    cell.z = floor(id / layersize);
    uint layeroffset = floor(id % layersize);
    cell.y = floor(layeroffset / numcells.x);
    cell.x = floor(layeroffset % numcells.x);
    return cell;
}

And it wor...! Oh... ...That's unexpected:

[attachment=35347:modulo_4.PNG]

Can anyone explain this behaviour? Is it small remainders of the multiplication of the frc result with the 'integer', as I suspect? If not, what else? If so, why does surrounding the result with floor() not work? (It's not optimised away, I've checked it in the debugger...)

Sj
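Since the artefacts above stem from the modulo being compiled as a floating-point frc (numcells is a float3), a minimal sketch of an all-integer variant follows (editor's illustration, not from the original post): converting the cell counts to uint up front keeps every division and modulo in integer arithmetic.

// Editor's sketch: perform the address-to-cell conversion entirely in integer
// arithmetic, so % maps to a true integer modulo rather than an frc-based one.
uint3 GetFieldCellInt(uint id, float3 numcells)
{
    uint3 n = (uint3)numcells;        // assumes numcells holds whole numbers
    uint layersize = n.x * n.y;

    uint3 cell;
    cell.z = id / layersize;          // integer division truncates, no floor() needed
    uint layeroffset = id % layersize;
    cell.y = layeroffset / n.x;
    cell.x = layeroffset % n.x;
    return cell;
}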
  7. Thanks MJP! Do you know what MSDN meant by that line in my original post? It says 'resource' specifically, rather than 'view' - but then the whole thing is pretty ambiguous. Sj
  8. Hi,

I have a compute shader which populates an append buffer, and another shader that reads from it as a consume buffer. Between these invocations, I would like to read every element in the resource in order to populate a second buffer.

I can think of a couple of ways to do it, such as using Consume() in my intermediate shader and re-setting the count of the buffer afterwards, or binding the resource as a regular buffer and reading the whole thing. There doesn't seem to be a way to set the count entirely on the GPU, and it's not clear if the second method is supported (e.g. "Use these resources through their methods, these resources do not use resource variables.").

Is there any supported way to read an AppendStructuredBuffer without decreasing its count?

Thanks!

(PS. Cross-post at SO: http://stackoverflow.com/questions/41416272/set-counter-of-append-consume-buffer-on-gpu)
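A minimal sketch of the second option described above (editor's illustration; the buffer, kernel and element names are assumptions): the same underlying buffer is bound to the intermediate pass as a plain StructuredBuffer, so its elements can be indexed without touching the hidden counter. The element count itself would still need to be supplied separately (e.g. copied out of the counter into a small buffer).

// Editor's sketch: intermediate pass that reads the append buffer's contents
// as a plain StructuredBuffer, leaving its hidden counter untouched.
#pragma kernel GatherElements

struct Element { float3 position; float value; };

StructuredBuffer<Element> sourceElements;   // same memory the AppendStructuredBuffer wrote
RWStructuredBuffer<float> derivedValues;    // second buffer to populate
uint elementCount;                          // number of appended elements, supplied separately

[numthreads(64, 1, 1)]
void GatherElements (uint3 id : SV_DispatchThreadID)
{
    if (id.x >= elementCount)
        return;
    derivedValues[id.x] = sourceElements[id.x].value;
}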
  9. Thanks for the replies! They are much appreciated and are all very helpful! I have much to learn about the animator's workflow, but it's a lot better now that I can see the use cases of the tools I am looking at.
  10. Hello,

I am looking at animating with Maya as I would like to understand the animator's workflow, but am confused by what I am finding online.

I always thought a rig was a basic skeleton plus additional information such as joint constraints, IK solver parameters, etc. When I search for rigs for Maya, though, I find what look like complete characters - they even come with hair and multiple outfits. I am similarly confused by half the goals for this Kickstarter: https://www.kickstarter.com/projects/cgmonks/morpheus-rig-v20 (i.e. why would such a tool need to come with its own props?)

What is the purpose of these 'complete' rigs that are more like characters than rigs? Are artists meant to use them as an asset in their game or render? Or are they just to be used by the animator, after which the modeller will take the skeleton and skin the actual character mesh?

What is the term for what I thought was a rig?

Sj
  11. Hi Graham,

First, sorry for the late reply, I am starting to wonder if I am completely misunderstanding the "Follow This Topic" button!

To clarify, the first image is the 'detailed mesh', the second is the 'physical mesh'. The 'physical mesh' is literally the detailed mesh with overlapping polygons removed (and in this example, it was done manually). This may require some explanation:

In my project, I am working on automatic mesh deformation whereby my algorithm fits one mesh over another. To do this, I reduce the target mesh to a simplified 'physical mesh' and check for collisions with a 'face cloud'. The 'face cloud' consists of the baked faces of every mesh making up the model(s) that the target mesh should deform to fit. (The target mesh, when done, will completely encompass the face cloud.)

For each point in the 'physical mesh', I project a ray and test for intersections with the face cloud, find the furthest one away, then transform that control point to this position.

Before this is done, I 'skin' my detailed mesh to the 'physical mesh': for each point in the detailed mesh (regardless of position, normal, etc.) I find the closest four points in the 'physical mesh', then weight the point to each of them (where the weight is the proportion of each point's distance to the sum of the distances); the result is that when the 'physical mesh' is deformed, each point in the 'detailed mesh' is deformed linearly with it. (A sketch of this weighting follows this post.)

The purpose of this is to preserve features such as overlapping edges, buttons, etc., because with these the normals of each point cannot be relied upon to determine which side of the surface the point exists on, hence the need for a control mesh. What I am attempting to create in the 'physical mesh' is simply a single surface where all the points' normals accurately describe that surface.

So far, I do this by using the skinning data to calculate a 'roaming' centre of mass for each point, which is the average position of the point plus all others that share the same bones. Any point whose normal is contrary to (point position - centre of mass for that point) is culled (but is still deformed correctly because it is skinned to the surrounding points, which are not deformed).

This whole setup is designed for user-generated content, which is why I can't do what normal sensible people do and just have artists build a collision mesh in Max; it is also why I cannot make any assumptions about the target mesh.*

*Well, I can make some assumptions: for one, I can assume it is skinned, and that the mesh it is deforming to fit is also skinned. Since I started using the skinning data the performance (quality of results) has increased dramatically.

For more complex meshes, though, I still need a better solution, as it won't cull two points that sit very close together, one outside the collision mesh and one inside (and hence when deformed the features are crushed, as only one pulls its skinned verts out).

Your idea of ray tracing to find overlapping polys sounds very promising, I will look into this. Thanks!

Seb
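A minimal sketch of the closest-four weighting described above (editor's illustration; the helper name is hypothetical, and the weights are written exactly as the post describes them, proportional to each distance over the sum - many skinning schemes instead use inverse distances so that nearer control points dominate):

// Editor's sketch (hypothetical helper): weights binding one detailed-mesh vertex
// to its four nearest 'physical mesh' control points, given the distances d[4].
// Written as described above; a real implementation would also perform the
// nearest-four search and then deform the vertex linearly with those points.
void ComputeSkinWeights(float d[4], out float w[4])
{
    float total = d[0] + d[1] + d[2] + d[3];
    for (int k = 0; k < 4; k++)
        w[k] = d[k] / total;   // proportion of this distance to the sum of the distances
}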
  12. In my project I am working on a 'subset' of cloth simulation in which I attempt to fit one mesh over another. My solution involves deforming a 'physical mesh' based on depth fields and using it to control the deformation of the complex, detailed mesh.

I have seen impressive mesh optimization methods, but I don't want to optimize the mesh so much as extract part of it. What I want is a way to approximate the 'inside surface' of a mesh, since in the 'real world' this is what the mesh being deformed would physically interact with. Take the images below; the second mesh contains no overlapping polygons - the lapels, shoulder straps and buttons are gone - it is a single surface consisting of the points closest to the character.

[attachment=8440:jck.jpg]

(Checking for and removing overlapping polygons would be one way, I suppose, but how to decide which are the 'outer' and which are the 'inner', bearing in mind the normals of the semantically inside polys won't necessarily emanate from the geometric centre of the mesh?)

Does anyone know of an existing implementation that does something like this?
  13. Hi TheUnbeliever, Thank you! I don't know how I read that as d1, d2 and d3 the first time round. (I still think they are very obscurely named variables!) It is somewhat clearer now what is happening. As I see it, when the sum of the distances is calculated each distance is actually weighted by the angle of that point to the 'main point'. This would be so that when a vertex lies close to the vector between two control points, the third point's influence is reduced, as the technical distance may be close but the practical deformation is controlled by the control points at either side, right?
  14. In my project, I want to deform a complex mesh based on a much simpler proxy mesh. For this, I need to skin my complex mesh so that each vertex is affected by one or more control points on the proxy mesh and will transform linearly with them.

This paper - http://ivizlab.sfu.ca/arya/Papers/Others/Feature-based%20Mesh%20Deformation%20for%20MPEG-4%20Faces.pdf - Feature Point Based Deformation for MPEG-4 Facial Animation, describes on pages 4 and 5 how to do what I want, I believe. If I am understanding it right, that algorithm finds the closest control point for a vertex, then the two that flank that vertex. The weight for each control point (Feature Point in the paper) is proportional to the distance to each of these points, relative to the others. Therefore, the weights should sum to 1 and the vertex will move with the plane defined by the control points.

There are a couple of things I do not understand though:

1. In equation (2), what are d12 and d13? These are not defined in figure (1). Are they equivalent to d2 and d3? Or d1 - d2 and d1 - d3?
2. When you have the inverted proportional distance, what is the purpose of taking the sine of it? (Equation (4))

Finally, in equation (5) on page 6, why is the deformation of the vertex calculated in that way? Why is the displacement not simply:

SUM( controlpoint_0_displacement * controlpoint_0_weight, ..., controlpoint_n_displacement * controlpoint_n_weight )

Could anyone who knows what's going on explain? Thanks!

SJ
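A minimal sketch of the simple blend the question proposes, for comparison with the paper's equation (5) (editor's illustration; the names are hypothetical and this is not the paper's formulation):

// Editor's sketch: the naive displacement blend proposed above. Each vertex moves
// by the weighted sum of its control points' displacements, assuming the weights
// sum to 1. Hypothetical names; not the method from the paper.
float3 NaiveDeform(float3 vertexPos, float3 controlDisplacement[3], float weight[3])
{
    float3 displacement = float3(0, 0, 0);
    for (int k = 0; k < 3; k++)
        displacement += controlDisplacement[k] * weight[k];
    return vertexPos + displacement;
}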
  15. Hi Spek, thinking out loud is what I am after :)

For this part of the project, real-time cloth animation is in the back of my mind, but at the moment I am focusing on what I originally saw as a subset of it. This system has user-generated content in mind, and so it is designed to fit meshes together but does not allow for any significant assumptions about the target mesh. The purpose is not to run a continuous simulation but merely to deform a mesh once (to start with) so that it fits a character it was not designed for.

I have read about image-space collision detection for actual real-time cloth simulation (e.g. http://ecet.ecs.ru.acad.bg/cst04/docs/siiia/315.pdf) and generally the implementations use multiple cameras, as you say, rendering the same object from multiple sides to account for the limited information in a single depth map.

My ideal implementation would be comparing the surface to a 3D volume, without a doubt; this is the best real-world approximation, and I like that it would be a 'one stop' for the complete mesh and would avoid needing to cull false positives. I haven't yet seen anything which suggests the performance could be comparable to the image-space implementation.

The biggest problems I have with my implementation so far really come down to two things:

1. Loss of fine detail on deformations
2. Culling heuristics (that is, deciding which points in a 3D space are affected using only a 2D expression of this space)

(1) I am experimenting with now. My plan is to take the 'reference' depth (depth map of the item) and the 'deviations' (the difference between this map and the map with the character drawn). Each point in the target mesh is projected into screen space, then its position is changed to the one retrieved in world space from the combination of those depth maps. I can then alter how each point is transformed by filtering the 'deviation map' (e.g. by applying a Gaussian blur, to filter out the small high-frequency changes in depth and thus preserve the details of the mesh while still transforming the section as a whole outside the character mesh). This preserves the performance characteristics (O(number of vertices in target mesh)) and should preserve a satisfactory level of detail for cloth. (A sketch of this displacement step follows this post.)

What I really want, though, is a way to build a series of control points from the deviation map, almost like the low-detail variant you referred to, but this set of control points would be arbitrary, pulling specific parts of the mesh based on the 'clusters' of pixels in the deviation map. This would give poorer performance (O(no. verts * no. deformers)), but it would preserve the mesh detail, would require no clever culling heuristics, and would be easily extended to support more involved cloth simulation.

I have attached a couple of before and after (1 iteration) images showing how it's working now. This is without the blur, with vertices transformed using the picture-plane normal. I reckon I could extend the culling to make some clever use of the stencil buffer, but I still think a deformer set would be worth the performance hit, especially when I can do it on the GPU with OpenCL.

(This is buoyed by my latest test, which deformed all vertices (14k) in the mesh based on deformers created for each deviating pixel (10k), which OpenCL processed for all intents and purposes instantaneously - the results were Completely Wrong in every way possible, but it got them wrong quickly! ;))

[attachment=8169:1.PNG][attachment=8170:2.PNG]
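A minimal sketch of the displacement step described under (1) above (editor's illustration; the texture names, projection setup and the blur being applied to the deviation map beforehand are all assumptions):

// Editor's sketch: project a target-mesh vertex to screen space, read the
// (pre-blurred) deviation between the item's reference depth and the depth with
// the character drawn, and push the vertex along the picture-plane normal by
// that amount. All names are hypothetical.
Texture2D<float> deviationMap;     // blurred difference between the two depth maps
SamplerState     linearClampSampler;

float4x4 viewProjection;
float3   viewDirection;            // picture-plane normal in world space

float3 DisplaceVertex(float3 worldPos)
{
    // find the deviation-map texel covering this vertex
    float4 clipPos = mul(viewProjection, float4(worldPos, 1.0));
    float2 uv = clipPos.xy / clipPos.w * 0.5 + 0.5;

    // move the vertex out only where the character protrudes past the item
    float deviation = deviationMap.SampleLevel(linearClampSampler, uv, 0);
    return worldPos + viewDirection * deviation;
}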