WFP

GDNet+ Basic
  • Content count

    114

Community Reputation

2779 Excellent

About WFP

  • Rank
    Member

Personal Information

Social

  • Twitter
    @willp_tweets

Recent Profile Visitors

6484 profile views
  1. You could use them, or keep your sky dome and in the vertex shader do something like:
        //... some code
        posH = mul(inputPos, viewProjMtx); // this is your SV_Position output
        posH.z = posH.w; // after the divide by w, the depth will be 1.0 (the far plane)
        //... whatever else
     You could also just use a full-screen triangle or quad and apply the same "push it to the far plane" logic. Then in the pixel shader, color it based on the y component of a view ray (a ray starting at the camera and extending out into the scene) - it all depends on how fancy you want to get. If you stick with the dome, make sure you only translate it by the camera's position, never its rotation - otherwise the entire sky will spin as the camera spins.
  2. This warning usually means that the resource you're passing to *SetShaderResources (PSSetShaderResources, VSSetShaderResources, etc.) is still bound as a render target. The D3D11 runtime will unbind it for you and emit the warning, but for correctness (warning-free output) you should either overwrite what's bound to the output merger before setting the shader resources:
        OMSetRenderTargets(1, &newRTV, nullptr);
     or clear what's currently bound by setting it to null:
        OMSetRenderTargets(0, nullptr, nullptr);
     (A rough ordered sketch of this unbind-then-bind sequence follows the post list below.)
  3. To expand just a bit on Kylotan's accurate response - in your specific application, what does a "unit" mean? In other words, is "1 unit" representative of 1 meter, 1 centimeter, or some other value altogether? If 1 unit = 1 cm, then a sky 1000 units away would be 10 meters away. Alternatively, if your app takes place entirely on the ground and the sky is supposed to look infinitely far away, you could always force the geometry to the far plane in the vertex shader and have it translate with the camera's position, to give the illusion that it's so far away it doesn't move when the camera moves a bit.
  4. I won't have time to do a full install and all that, but if you can grab a capture with RenderDoc (or a few captures with different deficiencies showing), I might be able to step through the shader that way and see what could be going on. Can't promise when I'll have a lot of focused time to sit down with it, but I'll try to as soon as I can.
  5. Admittedly, I've done very little with stereo rendering, but perhaps this offset needs to be accounted for in your vertex shader?
        float4 stereo = StereoParams.Load(0);
        float separation = stereo.x * (leftEye ? -1 : 1);
        float convergence = stereo.y;
        viewPosition.x += separation * convergence * inverseProj._m00;
     The view ray you create in the vertex shader may need to be adjusted similarly to align correctly. Just kinda guessing at the moment.
  6. For thoroughness's sake, would you mind switching them back to non-negated versions and seeing if that helps? Could you also test with the original rayLength code? Does the game use a right- or left-handed coordinate system?
  7. Is there a particular reason you're negating these values?
        traceScreenSpaceRay(-rayOriginVS, -rayDirectionVS
  8. Crap! Sorry again for the delay - let's see if we can get you sorted out. If this is a full-screen pass, I would suggest trying the position reconstruction from my other post and seeing how that works out for you. Also - are you sure the water surface is written to the depth buffer? If the game draws it as a transparent surface, it may disable depth writes when drawing it. That would mean you're actually using the land surface underneath the water as the reflection start point. I would first try using the projected view ray I mentioned and see if that gets you anywhere:
        o10 = mul(modelView, v0.xyzw); // viewPosition
        o10 = float4(o10.xy / o10.z, 1.0f, o10.w);
     If you're doing the SSR as part of the water shader itself, i.e. at the same time as drawing the transformed water geometry, you should be able to calculate the view-space position and pass it through to the pixel shader, then use that value directly instead of combining it with a linearized depth value. Let me know if any of that helps.
  9. At first glance, this seems incorrect to me:
        float3 rayOriginVS = viewPosition * linearizeDepth(depth);
     Can you tell me what value you're storing in viewPosition? In my implementation, that line describes a ray from the camera projected all the way to the far clip plane. In the vertex shader:
        // project the view-space position to the far plane
        vertexOut.viewRay = float3(posV.xy / posV.z, 1.0f);
     Multiplying that by the linearized depth lets you reconstruct the view-space position in the pixel shader, which is what you need for rayOriginVS. MJP's entire series is a great resource on reconstructing position from depth, but the last post, https://mynameismjp.wordpress.com/2010/09/05/position-from-depth-3/, is closest to what I use in most cases. I don't own/haven't played the game, so this is strictly a guess, but to me it looks like the rocks (as well as the boat and the fisherman in it) are billboarded sprites. If that's the case, there's a good chance they aren't even written to the depth buffer, which means there would be nothing for the ray-tracing step to detect. Are you able to pull a linearized version of the depth buffer that shows what is and isn't actually stored in it? That could be really helpful for debugging. P.S. I think you might be the same person who was asking about this (Dirt 3 mod + 3DMigoto) on my blog post on glossy reflections. Sorry I never got back to you - to be completely honest, I got really busy with a few things around then and it slipped my mind over time.
  10. On mobile right now, but briefly - the HLSL compiler will spit out the same intermediate bytecode regardless of what your system specs are. When your application calls a D3D function to load the shader from the pre-compiled bytecode, it is compiled again in a vendor-specific way before actually being used. So to answer your question - you can compile a shader on one machine with a dedicated GPU and run it on another machine with an integrated GPU just fine. (A rough compile-and-load sketch follows the post list below.)
  11. Passes are typically run sequentially. If you ran 100 passes, the instructions would be executed serially just like any other instructions. Looking at it at a slightly finer grain - the CPU-side calls execute serially and return almost immediately, and at some point in the future the commands the CPU generated and sent to the GPU are executed in the order they were submitted. Your program should be no more or less prone to freezing while processing a loop of 100 render passes than it would be in a loop of anything else. (A rough multi-pass loop sketch follows the post list below.)
  12. Yep, that's exactly it. When a technique (for example, a blur) has multiple passes, a pass is typically just a separate draw call with either a new shader, updated data, or both.
  13. The Gaussian blur is separable, meaning it can be done in one direction and then the other, with the same result as a single pass that samples the full 2D kernel. In other words, for a 3x3 blur you can do an X pass with 3 taps (the middle sample and one on either side) and a Y pass with 3 taps (the middle sample and one above and below), for a total of 6 taps. If you did it in one pass, you would need to sample all nine points, i.e. the center point, the left and right, the one above and the one below, and all four corners - those are values you get for "free" when you break it up into two passes. For a small blur width like the example above, it might be faster to use one pass, but for larger kernels you can see that the number of samples in the single pass starts to add up. For a 7x7 kernel, you would need 49 taps total in a single pass, but only 14 if you break it into separate passes. (A rough two-pass setup sketch follows the post list below.)
  14. According to the documentation (link below), yes it is a necessity.  There is some inheritance for bundles, but between direct command lists the pipeline state is reset. https://msdn.microsoft.com/en-us/library/windows/desktop/dn899196(v=vs.85).aspx#Graphics_pipeline_state_inheritance
  15. Each separate command list should be seen as having its own completely separate state and setup. You should call it on each one individually. You'll also need to set your descriptor heaps, etc. (even if they're the same) on each. (A rough per-command-list setup sketch follows this list.)
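
For the render-target warning discussed in reply 2 above, here is a minimal ordered sketch in C++/D3D11. It assumes a texture that has both a render target view (myRTV) and a shader resource view (mySRV); those names and the context pointer are placeholders for illustration, not code from the original posts.

    // Pass 1: render into the texture.
    context->OMSetRenderTargets(1, &myRTV, nullptr);
    // ... draw calls that write to the texture ...

    // Unbind the render target before sampling the same texture,
    // either by binding the next pass's target or by clearing the slot:
    ID3D11RenderTargetView* nullRTV = nullptr;
    context->OMSetRenderTargets(1, &nullRTV, nullptr); // or OMSetRenderTargets(0, nullptr, nullptr)

    // Pass 2: now the texture can be bound as an input without triggering the warning.
    context->PSSetShaderResources(0, 1, &mySRV);
    // ... draw calls that read from the texture ...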
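
For the shader-compilation question in reply 10, a minimal sketch of the offline-compile / runtime-load split in C++/D3D11. The file name, entry point, and the device pointer are assumptions for illustration.

    #include <d3d11.h>
    #include <d3dcompiler.h>

    // Build-time (or first-run) step: produces hardware-independent bytecode.
    ID3DBlob* bytecode = nullptr;
    ID3DBlob* errors = nullptr;
    HRESULT hr = D3DCompileFromFile(L"shader.hlsl", nullptr, nullptr,
                                    "VSMain", "vs_5_0", 0, 0, &bytecode, &errors);
    // The blob can be written to disk and shipped; it is not tied to the GPU it was built on.

    // Runtime step on the target machine: the driver translates the bytecode
    // into its own vendor-specific instructions here (or at first use).
    ID3D11VertexShader* vs = nullptr;
    hr = device->CreateVertexShader(bytecode->GetBufferPointer(),
                                    bytecode->GetBufferSize(), nullptr, &vs);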
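
For reply 11, a minimal sketch of what "passes run sequentially" looks like on the CPU side in C++/D3D11. The arrays of per-pass targets and shaders are hypothetical; the point is only that each pass is just more commands recorded in order.

    for (int i = 0; i < 100; ++i)
    {
        context->OMSetRenderTargets(1, &passTargets[i], nullptr); // output for this pass
        context->PSSetShader(passShaders[i], nullptr, 0);         // shader for this pass
        context->Draw(3, 0);                                      // e.g. a full-screen triangle
    }
    // These calls return almost immediately; the queued work is executed by the GPU
    // in the order it was submitted.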
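
For the separable blur in reply 13, a minimal two-pass setup sketch in C++/D3D11. The blur constant buffer, render targets, the UpdateConstantBuffer helper, and the width/height of the target are all assumed names, not code from the posts; the sketch only shows the horizontal-then-vertical pass structure driven by a direction constant.

    struct BlurParams { float dirX, dirY, texelW, texelH; };

    // Horizontal pass: source -> temp (3 taps per pixel for a 3x3 blur)
    BlurParams h = { 1.0f, 0.0f, 1.0f / width, 1.0f / height };
    UpdateConstantBuffer(context, blurCB, &h);          // hypothetical helper (Map/Unmap or UpdateSubresource)
    context->OMSetRenderTargets(1, &tempRTV, nullptr);
    context->PSSetShaderResources(0, 1, &sourceSRV);
    context->Draw(3, 0);                                // full-screen triangle

    // Vertical pass: temp -> destination. Binding destRTV first also unbinds temp
    // from the output merger, which avoids the hazard warning from reply 2.
    BlurParams v = { 0.0f, 1.0f, 1.0f / width, 1.0f / height };
    UpdateConstantBuffer(context, blurCB, &v);
    context->OMSetRenderTargets(1, &destRTV, nullptr);
    context->PSSetShaderResources(0, 1, &tempSRV);
    context->Draw(3, 0);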
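
For replies 14 and 15, a minimal C++/D3D12 sketch of giving every direct command list its full state after Reset, since pipeline state is not inherited between lists. All of the object names are placeholders.

    for (UINT i = 0; i < numLists; ++i)
    {
        ID3D12GraphicsCommandList* cl = commandLists[i];
        cl->Reset(allocators[i], pipelineState);         // initial PSO can be supplied here
        cl->SetGraphicsRootSignature(rootSignature);     // must be set on every list
        ID3D12DescriptorHeap* heaps[] = { srvHeap, samplerHeap };
        cl->SetDescriptorHeaps(_countof(heaps), heaps);  // even if the heaps are identical across lists
        cl->RSSetViewports(1, &viewport);
        cl->RSSetScissorRects(1, &scissorRect);
        cl->OMSetRenderTargets(1, &rtvHandle, FALSE, &dsvHandle);
        // ... record this list's draw calls ...
        cl->Close();
    }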