
About quiSHADgho
  1. I am actually using the Device1, Surface1, Factory1, etc. interfaces, but I cannot enable FeatureLevel 11.1 because the Windows 7 driver only implements WDDM 1.1, and SharpDX tells me it needs WDDM 1.2 for FeatureLevel 11.1. Nevertheless, it is working as expected inside the application. Currently I am rendering directly onto the backbuffer surface with D2D, but almost every program I know and use that provides an OSD or uses hooks in one way or another is not working. Even Intel GPA fails to provide the OSD, and it crashes most of the time. I am also pretty sure RivaTuner Statistics Server is hooking into the application, because there is a significant difference in how long the application takes to close when it is hooked. I also tested it on my notebook, which has an Nvidia card... no OSD.
  2. Hi guys, I have a small issue with the interoperability between D3D11 and D2D. Or rather, not exactly between those two, but between them and other programs like Afterburner, Fraps, OBS, etc. I will start at the very beginning. When I ported my DX9 engine to DX11 (SharpDX) I got rid of all the SpriteBatch stuff and implemented a D2D UI. I used shared surfaces for that, and it is working; I have no issues with it. The only thing that was really strange about the engine was that multi-GPU (Crossfire in my case) did not seem to work, although that is more of a driver thing and I can't really do anything about it. So yesterday I had some time to dig into it. In one of my test projects Crossfire was working fine, so I added code from my engine to see where it would change. It was pretty easy to find: I was creating the texture with the SharedKeyedmutex flag, like this:

     this.d3d11Texture = new Texture2D(device, new Texture2DDescription()
     {
         Height = form.Height,
         Width = form.Width,
         MipLevels = 1,
         ArraySize = 1,
         Format = SharpDX.DXGI.Format.R8G8B8A8_UNorm,
         SampleDescription = new SampleDescription(1, 0),
         Usage = ResourceUsage.Default,
         BindFlags = BindFlags.RenderTarget | BindFlags.ShaderResource,
         CpuAccessFlags = CpuAccessFlags.None,
         OptionFlags = ResourceOptionFlags.SharedKeyedmutex
     });

     SharpDX.Direct3D10.Device1 d3d10_device = new SharpDX.Direct3D10.Device1(
         factory.GetAdapter(0),
         SharpDX.Direct3D10.DeviceCreationFlags.BgraSupport,
         SharpDX.Direct3D10.FeatureLevel.Level_10_0);
     SharpDX.DXGI.Resource sharedResource = d3d11Texture.QueryInterface<SharpDX.DXGI.Resource>();
     SharpDX.Direct3D10.Texture2D d3d10Texture =
         d3d10_device.OpenSharedResource<SharpDX.Direct3D10.Texture2D>(sharedResource.SharedHandle);
     this.mutexd3d10 = d3d10Texture.QueryInterface<KeyedMutex>();
     this.mutexd3d11 = d3d11Texture.QueryInterface<KeyedMutex>();
     Surface surface = d3d10Texture.AsSurface();
     d2dFactory = new SharpDX.Direct2D1.Factory1();
     d2dRenderTarget = new RenderTarget(d2dFactory, surface,
         new RenderTargetProperties(RenderTargetType.Hardware,
             new PixelFormat(Format.Unknown, SharpDX.Direct2D1.AlphaMode.Premultiplied),
             0, 0, RenderTargetUsage.None, SharpDX.Direct2D1.FeatureLevel.Level_10));

With this code and Crossfire enabled, I have 100% load on the first GPU and 4% on the second while rendering the UI. When I disable the UI, the second GPU goes to 0% load. So I had to get rid of the flag. This is what I came up with:

     this.d3d11Texture = new Texture2D(device, new Texture2DDescription()
     {
         Height = form.Height,
         Width = form.Width,
         MipLevels = 1,
         ArraySize = 1,
         Format = SharpDX.DXGI.Format.R8G8B8A8_UNorm,
         SampleDescription = new SampleDescription(1, 0),
         Usage = ResourceUsage.Default,
         BindFlags = BindFlags.RenderTarget | BindFlags.ShaderResource,
         CpuAccessFlags = CpuAccessFlags.None,
         OptionFlags = ResourceOptionFlags.None
     });

     d2dFactory = new SharpDX.Direct2D1.Factory1();
     Surface1 surface1 = d3d11Texture.QueryInterface<Surface1>();
     d2dRenderTarget = new RenderTarget(d2dFactory, surface1,
         new RenderTargetProperties(RenderTargetType.Hardware,
             new PixelFormat(Format.Unknown, SharpDX.Direct2D1.AlphaMode.Premultiplied),
             0, 0, RenderTargetUsage.None, SharpDX.Direct2D1.FeatureLevel.Level_10));

It works just fine, and even Crossfire works as expected... BUT now no overlays or video recordings are possible. Afterburner and Fraps just show nothing, and recording results in a black video. The problem does not only happen when I actually use the RenderTarget; it is sufficient that it is created. If I comment out the line where it is created, I have the overlays back, Crossfire works, and I can record videos, but of course I cannot use the UI. So what am I missing here? And here is some info you might want to know: SharpDX 2.5 and SharpDX 2.6.3 (tried both, same results), Windows 7 x64 Prof, 2x Radeon R290X / 2x Radeon 7870.
  3. I agree with Tiago. Since you are referring to a "huge" level, I suppose there is a lot of space between the objects. When you render each object independently of the others, you can cull them before the render call to save GPU power, and also sort them so you can avoid overdraw. Additionally, if you have many instances of one model, try to use hardware instancing to save draw calls and state changes. And yes, mesh parts share their vertex and index buffers in XNA unless they get too big.
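The cull-then-sort step described above can be sketched like this (an illustrative Python sketch, not the poster's code; objects are hypothetical (position, radius) tuples, and a plain draw-distance test stands in for a full view-frustum check):

```python
import math

def cull_and_sort(objects, cam_pos, max_dist):
    """Drop objects beyond the draw distance, then sort the survivors
    front-to-back so nearer objects are drawn first (less overdraw)."""
    def dist(obj):
        position, radius = obj
        # distance to the object's bounding-sphere surface
        return math.dist(position, cam_pos) - radius
    visible = [obj for obj in objects if dist(obj) < max_dist]
    return sorted(visible, key=dist)
```

A real engine would replace the distance test with sphere-vs-frustum-plane checks, but the shape of the pass (filter, then sort by depth) stays the same.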
  4. [quote name='riuthamus' timestamp='1358563079' post='5023078'] We do not need anything epic, in fact i dont mind going with a 5/6 polygon model of planes that are slightly bent. Like i said my goal is to make something like firefall. The major question is how to put a shadow under them ( doesnt need to reflect the texture just an ambient blur ) and how to render light on them. And if there are methods to render it better that would help as well. Your engine looks impressive. [/quote] Then you just need to make a decision based on which features you are going to implement besides grass. If there is enough headroom that mid-level systems can handle more geometry, add some; if not, go with textured planes and apply some sine and cosine functions in the shader to get them waving. When it comes to shadows, I have not done enough research to say whether there is a simpler or better method than rendering them with your shadow maps. And for lighting, just use double-sided lighting.
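The sin/cos waving mentioned above is just a small displacement of the top vertices of each quad. Here is a hedged Python sketch of the math (the amplitude, frequencies, and phase handling are made-up parameters, not anything from the thread; in HLSL this would run in the vertex shader):

```python
import math

def sway_offset(blade_phase, time, amplitude=0.1, frequency=2.0):
    """Horizontal displacement for the top vertices of a grass quad.
    blade_phase (e.g. derived from the blade's x position) keeps
    neighbouring blades from moving in lockstep; the second, slower
    cosine term breaks up the regularity of a single sine wave."""
    return (amplitude * math.sin(time * frequency + blade_phase)
            + 0.5 * amplitude * math.cos(time * 0.7 * frequency + blade_phase))
```

The bottom vertices stay fixed, so the blade appears rooted while the tip waves.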
  5. Unity

    Ok, so here are my thoughts: Point lights shining through walls is just normal, and I don't think that most, or even any, of today's graphics engines care about this. Normally it's the light artist's job to place spot lights and point lights in an appropriate way. The easiest way to get rid of it is to use shadow maps, but in complex scenes, and in games where the player can place a nearly infinite number of light entities, this will kill every system. What you could try is to change the light volume of each light individually: instead of drawing a sphere, use a box and cut out everything that is behind a wall.

    If you want to implement point light shadows, you could look into Dual Paraboloid Shadow Mapping: http://gamedevelop.eu/en/tutorials/dual-paraboloid-shadow-mapping.htm Maybe there are better and newer techniques, but I cannot think of any at the moment. Of course you can go with any shadow mapping technique you like; then you just need to draw a bunch of shadow maps per light. I didn't read the whole soft shadow tutorial, but it seems they are just using a simple blur, which should not use 400 MB, though I don't know your engine setup. I think you just need to read through some different techniques and choose the one you like the most; there are tons of tutorials and examples out there. My personal favorite is Variance Shadow Mapping: http://www.punkuser.net/vsm/ It is easy to implement, easy to combine with CSM or parallel splits or whatever you choose, and it gives you the possibility to use linear filtering methods, Gaussian blur, and even MSAA.

    Now some words about the issues you wrote about in the old thread: I think the bright center is basically a different material for light entities boosting the light power so that the bloom can give it a halo. And The Witcher is using a strong bloom there, a bit too strong for my taste; the background where the light is shining through the wood and getting a blue halo is slight overkill. If your lights are too dull overall, you could also try adjusting your tonemapper. Lowering the white value will make the colors burn out earlier and produce brighter images, but you will also lose more detail.
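To make the white-value remark concrete, here is an extended Reinhard operator as an illustrative Python sketch (an assumption for demonstration, not necessarily the tonemapper used in the thread). `white` is the luminance that maps exactly to 1.0, so lowering it burns highlights out earlier and brightens the whole image:

```python
def tonemap(luminance, white):
    """Extended Reinhard: an input luminance equal to `white`
    maps exactly to 1.0; everything above it clips."""
    return luminance * (1.0 + luminance / (white * white)) / (1.0 + luminance)
```

For the same input luminance of 1.0, white = 2.0 yields 0.625 while white = 4.0 yields about 0.531, so the lower white value produces the brighter image, at the cost of detail in the highlights.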
  6. Hmm, strange; for me it's not lighting anything without that line. But anyway, I'm glad it's finally working.
  7. Ok, here is my next try. I knew I missed something, and I hope I finally got it. First of all, I'm using the output.Position.w value for the depth now, so it looks like this:

     output.Position.z = log(output.Position.w * 0.001f + 1) / log(0.001f * FarPlane + 1) * output.Position.w;

And here is the reconstruction code:

     float4 mapValue = tex2D(DepthMapSampler, PSIn.texCoord + dimensionOffset);
     float depth = mapValue.x;
     depth = ((pow((0.001 * FarPlane) + 1, depth) - 1) / 0.001);
     depth /= (NearPlane * FarPlane / (FarPlane - NearPlane));
     PSIn.ProjPos.xy *= depth;
     float4 invProjPos = mul(PSIn.ProjPos, xInvProjection);
     invProjPos.z = -depth;
     float4 worldPos = mul(invProjPos, xInvView);
     worldPos /= worldPos.w;

The thing I missed was the division. Cameni gave me the hint when he said the w-component contains the view-space depth, and he is right: it contains it, but it is not equal to it due to the projection matrix multiplication, so you need to extract it before you can use it. Be careful with the signs: I'm using a right-handed projection matrix, and I think you need to change them when using a left-handed one. It should be enough to add a negative sign to the division.
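The logarithmic part of the encode/decode pair above is an exact round trip, which a quick Python sketch can confirm (the far-plane value here is picked arbitrarily; the 0.001 constant is the one from the shader code, and this checks only the log mapping, before the extra division by the projection term):

```python
import math

C = 0.001            # log-depth constant from the shader code
FAR_PLANE = 10000.0  # arbitrary far plane for this sketch

def encode(view_depth):
    """Value left in the depth buffer after the GPU divides by w."""
    return math.log(C * view_depth + 1.0) / math.log(C * FAR_PLANE + 1.0)

def decode(stored):
    """Reconstruct linear view-space depth from the stored log value,
    matching the shader's pow(...) - 1 over C expression."""
    return (math.pow(C * FAR_PLANE + 1.0, stored) - 1.0) / C
```

A depth at the far plane encodes to exactly 1.0, and any intermediate depth decodes back to itself up to floating-point error.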
  8. Not yet, but I'm still on it.
  9. I don't get it. It's always the same: using output.Position.z or output.Position.w makes no difference. The light is only visible from inside the volume. I'm pretty sure I'm just missing an additional math operation, but I don't see it and I need a fresh view. So I will look into it later this evening (it's about 1 pm here); hopefully I'll have more for you tonight or tomorrow.
  10. I'm not using the values in the depth buffer, just outputting depth in a pre-pass, so I cannot say how it behaves when used as a depth buffer value. Did you try changing the ZFunc state in the geometry pass? That's pretty unlikely, but I will also take a look into my depth texture with PIX; maybe the values are reversed.
  11. Weird, I cannot think of any difference between our methods. I can give you more of my code, but I don't think it's relevant; there seems to be another problem somewhere else. Edit: Damn it... I'm afraid I was too tired last night when I wrote my post, and today I could not remember the most important change I made when I started experimenting with it yesterday. The thing is, I'm not using the screen-space z value in the depth calculation anymore. Instead I go with viewSpacePosition.z, so you get something like this:

     output.Position.z = log(0.001 * viewSpacePosition.z + 1) / log(0.001 * FarPlane + 1) * output.Position.w;

That made more sense to me because it was said that z is in view space already after reconstruction. And I forgot to mention it... Stupid me...
  12. Yes, PSIn.ProjPos is the screen position. I suppose you are using the standard deferred approach, where you render a light volume for each light, while I'm going with a fullscreen pass; therefore I don't need the division by the w-component, but you will still need it. And yes, that is the exact same function. Has anything changed? At first I had the multiplication without the "-", so the light moved over the terrain when the camera rotated, but the problem with the light being visible from inside the light radius was gone. And thanks for clearing it up.
  13. It is too bad that there are no recent papers, or I just cannot find them. I think today's hardware has enough power to introduce a geometry-based grass method in addition to a lower-end technique for older systems. For me the question is: do you want good-looking grass, or do you want some expensive effects like bokeh etc. that are, in my opinion, not worth it? I dug out an early version of my engine where everything was unoptimized and reimplemented the geometry approach I mentioned in my last post. While it's not perfect yet, it improves the quality of the scenes a lot. Although I'm satisfied with the look of my textured-quads method, I will optimize the geometry approach a bit more and leave it as a feature for mid- to high-end systems. And since you are working with DirectX 11, you can use things like the geometry shader and tessellation, which should give you a decent performance boost. The pic was made with mid density. [attachment=13268:screencap1.png]
  14. Sorry for the late reply; I got screwed by my Nvidia mobile chip when I started testing the new revision of my engine on low-end hardware and needed to fix that first. So I finally got time to look at your problem again, and I think I got it... hopefully. I cannot say why it was not working in the first place, though I'm pretty sure I had it that way before, but anyway, here is my code:

     float4 mapValue = tex2D(DepthMapSampler, PSIn.texCoord + dimensionOffset);
     float depth = mapValue.x;
     depth = ((pow((0.001 * xFarClip) + 1, depth) - 1) / 0.001);
     float2 invProjPos = mul(PSIn.ProjPos * -depth, xInvProjection);
     float4 worldPos = mul(float4(invProjPos, depth, 1), xInvView);

That should do it. Just get the depth from your depth buffer (here it's my separate RenderTarget), reconstruct the view-space depth value, and multiply the screen-space position by the negative depth and the inverse projection. Then there is just the usual multiplication with the inverse view left. I have one question though: I tried using log depth in my depth buffer, because my Nvidia chip seems to have terrible depth precision even close to the camera while all my Radeons work fine, but I got the interpolation issue, so at some angles and distances close to the near plane the geometry disappears. I know it's because of how the GPU interpolates the values between vertices, but I'm curious whether you experienced this in your game, or is the vertex density in voxel-based worlds high enough that it never occurs? Maybe the main factor is the depth buffer itself; using a 32-bit buffer instead of a 24-bit one could help, I don't know...
  15. When I started developing the terrain system of my engine, I thought long about that and read some articles. In most of today's games, or let me say yesterday's games because I don't own many released in the past 1-2 years, I hate the grass. Often it looks very bad from above, or it is always rotating to face the camera. I ended up implementing several techniques. Only one had sufficient quality, but it would not win a resource-saver prize. The basic idea was to render each blade of grass, divide it into slices to get the possibility to animate it, and fill it with a gradient color in the shader. It was possible to scale the quality with the number of slices, but since it required a lot of them to look good, I decided to put the geometry into the object and not into the grass. There was a name for it, but I cannot remember it... The next technique was only an experiment I did out of interest. It still rendered each blade of grass, but this time without the slices. So I got 4 verts for each blade and did the animation in the pixel shader; basically the gradient color shifted from left to right and back, but it looked kind of disturbing, and I wanted a technique capable of rendering different types of grass. So I'm going with the basic method of rendering 4 verts with a texture. I can put many textures into one and select the relevant one randomly when creating the vertex buffer. The alpha channel of the texture contains noise to control fading in the distance... That's it, and the best part is that it doesn't look like sheets of paper from above. There is still some optimization potential, but I'm happy with it for now. I also read about using Parallax Occlusion Mapping to get a grass effect, but that would not be very performant either.
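The 4-verts-plus-atlas approach from the post above can be sketched roughly like this (illustrative Python; the quad width, vertex layout, and a horizontal strip atlas are assumptions for the sketch, not the poster's actual code):

```python
import random

def make_grass_quad(x, z, height, atlas_tiles, rng):
    """One grass blade as the 4 corners of an upright quad.
    A random tile is picked from a horizontal texture-atlas strip,
    so different blades get different grass textures; the u range
    [u0, u1) selects that tile's slice of the atlas."""
    tile = rng.randrange(atlas_tiles)
    u0 = tile / atlas_tiles
    u1 = (tile + 1) / atlas_tiles
    return [  # (pos_x, pos_y, pos_z, u, v)
        (x - 0.5, 0.0,    z, u0, 1.0),  # bottom-left
        (x + 0.5, 0.0,    z, u1, 1.0),  # bottom-right
        (x - 0.5, height, z, u0, 0.0),  # top-left
        (x + 0.5, height, z, u1, 0.0),  # top-right
    ]
```

Appending these quads for every grass position gives the static vertex buffer the post describes; the distance fade then happens per-pixel via the texture's alpha noise.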