About dietrich

  1. Not an expert on the topic myself, but here is a series of blog posts you may find useful, if you haven't seen it yet: link.
  2. Matrices m1 and m3 are more or less equal, as expected, and look something like this if you print them out:

     |-0.00  0.00 -1.00  0.00|
     | 0.71  0.71 -0.00  0.00|
     | 0.71 -0.71 -0.00  0.00|
     | 0.00  0.00  0.00  1.00|

     DirectXMath uses the row-vector convention under the hood, so you would transform a vector like so: v*M1*M2*M3. The matrix concatenation order is then M1*M2*M3. Since XMMatrixRotationRollPitchYaw rotates first around X, then around Y, your m3 matrix holds the expected transform. Here is a great article about the row- vs column-vector convention: link.

     Now to that "more or less" part. Transforming a vector by either m1 or m3 should produce visually indistinguishable results; however, these matrices aren't equal down to the binary representation. That's easy to see if we inspect the memory where the matrices are located, or simply add a few decimal places to the printout:

     m1
     | 0.000000089 -0.000000030 -0.999999821  0.000000000|
     | 0.707106709  0.707106829 -0.000000089  0.000000000|
     | 0.707106650 -0.707106709  0.000000119  0.000000000|
     | 0.000000000  0.000000000  0.000000000  1.000000000|

     m3
     |-0.000000119  0.000000000 -0.999999881  0.000000000|
     | 0.707106709  0.707106709 -0.000000084  0.000000000|
     | 0.707106650 -0.707106769 -0.000000084  0.000000000|
     | 0.000000000  0.000000000  0.000000000  1.000000000|

     I believe that's because m1 and m3 were constructed differently and hence went through a different number of different floating-point operations. XMMatrixRotationRollPitchYaw doesn't actually use matrix multiplication at all; it goes through an intermediate quaternion representation to compute the resulting rotation matrix. And floating-point operations tend to accumulate error, which eventually leads to slightly different results.
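The last point can be demonstrated without DirectXMath at all. A minimal sketch (plain C++; Vec2 and the helper names are my own, not from the post) showing that two mathematically equivalent rotation orders agree only up to floating-point error:

```cpp
#include <cmath>

// Rotate a 2D point two mathematically equivalent ways: one combined
// rotation by 3*step vs. three successive rotations by step. The
// results agree only approximately, because each path goes through a
// different sequence of floating-point operations.
struct Vec2 { double x, y; };

Vec2 rotate(Vec2 v, double angle) {
    double c = std::cos(angle), s = std::sin(angle);
    return { c * v.x - s * v.y, s * v.x + c * v.y };
}

Vec2 rotateOnce(Vec2 v, double step)   { return rotate(v, 3.0 * step); }

Vec2 rotateThrice(Vec2 v, double step) {
    return rotate(rotate(rotate(v, step), step), step);
}
```

Printed with enough decimal places, the two results differ in the last few digits, just like m1 and m3 above.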
  3. If you're looking for a way to render your scene at a fixed resolution no matter the window size, I think it's best to always resize your swap chain's buffers, but only render to a portion of the back buffer by keeping the correct width and height in the D3D11_VIEWPORT. Keep in mind, though, that this may not be what the user expects, especially if the window becomes smaller than the specified resolution and part of the rendered scene gets cut off.
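A minimal sketch of that idea (plain C++; the Viewport struct stands in for filling out a D3D11_VIEWPORT, and centering the scene is an assumption of mine, not something the post specifies):

```cpp
// Keep the scene at a fixed resolution regardless of the backbuffer
// size by computing a viewport of constant width/height. Here it is
// centered; if the window is smaller than the scene, the top-left
// corner goes negative and the overflow is clipped.
struct Viewport { float x, y, width, height; };

Viewport fixedResolutionViewport(int backbufferW, int backbufferH,
                                 int sceneW, int sceneH) {
    Viewport vp;
    vp.x = (backbufferW - sceneW) / 2.0f;
    vp.y = (backbufferH - sceneH) / 2.0f;
    vp.width  = static_cast<float>(sceneW);
    vp.height = static_cast<float>(sceneH);
    return vp;
}
```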
  4. I've just been experimenting with window resizing, and I'm fairly sure that what you're seeing isn't the correct behavior. If you don't do any handling of the WM_SIZE message, the swap chain should just stretch the back buffer to match the front buffer, i.e. your window's client area, when presenting it to the screen. If you're seeing your window background instead, it probably means that the IDXGISwapChain::Present method isn't working correctly. Could you take a look at the HRESULT it's returning, or post your rendering code here? Also, are you using the debug DirectX runtime (creating the D3D device with the D3D11_CREATE_DEVICE_DEBUG flag)? It may report some relevant issues too.
  5.   Shouldn't the minOrtho.x and minOrtho.y (maxOrtho.x and maxOrtho.y) actually be equal? Since you position your light at the frustumCenter, minOrtho and maxOrtho should just be offsets from the origin in light view space after you transform them by the lightViewMatrix (see the image below). And since the shadow-map bound is square, their x and y components should be equal; there may actually be an error somewhere if they are not.

     That needn't always be true, though. Another valid approach is to always position a directional light source at the origin and encode the offset into the minOrtho and maxOrtho values. If that were the case, their x and y components could have arbitrary values:
  6.   If I understand you correctly, that isn't actually true. When you add the radius vector to the frustum center here

     //Create the AABB from the radius
     glm::vec3 maxOrtho = frustumCenter + glm::vec3(radius, radius, radius);
     glm::vec3 minOrtho = frustumCenter - glm::vec3(radius, radius, radius);

     you ensure that it is enclosed in a sphere that doesn't change, no matter how the frustum is oriented. The outer square in the image below is the shadow map, C is the frustum center and R is the radius. In this respect your videos look correct: the shadow map doesn't change size or orientation when the camera rotates, so that's good.

     A couple of things look suspicious to me. First, this line:

     lightOrthoMatrix = glm::ortho(maxOrtho.x, minOrtho.x, maxOrtho.y, minOrtho.y, near, far);

     I believe it should be min first, max second. I don't think this is the issue, but the resulting matrix probably flips the scene, so I'd rather avoid that.

     Second, why do the shadow maps you draw onscreen look rectangular? Is this just a projection issue? The movement in the second video looks fine to me; I'd rather say that the shadow map doesn't cover the view frustum entirely. I don't see any issues with the code, though, so...

     ...finally, it could be useful to hack in some debug features to better understand what is happening. Drawing the scene from the point of view of the light is one option; rendering the view frustum and shadow bound edges to the frame buffer as lines is another. The latter helped me a lot with similar issues in my own code, since it becomes obvious whether your frustum and shadow bound are properly aligned and have the expected dimensions.
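A small sketch of why this bound is rotation-invariant (plain C++ rather than GLM; Vec3 and the helper names are assumptions): the extents depend only on the radius, so rotating the camera only moves the center, never resizes the box.

```cpp
struct Vec3 { double x, y, z; };

// Bound the view frustum with a sphere (center c, radius r). The
// resulting AABB always has side 2*r on every axis; camera rotation
// changes only the center, never the size.
Vec3 maxOrthoBound(Vec3 c, double r) { return { c.x + r, c.y + r, c.z + r }; }
Vec3 minOrthoBound(Vec3 c, double r) { return { c.x - r, c.y - r, c.z - r }; }
```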
  7. Hi, Niruz91, I may be missing something, but I don't really see how this gives you the amount of world units per shadow-map texel:

     GLfloat worldUnitsPerTexel = lengthOfDiagonal / 1024.0f;

     Shouldn't you be dividing the dimensions of the shadow-map bound instead of the camera frustum diagonal? Also, since the light frustum will probably be rectangular, the shadow map will have a different resolution in terms of world units along the X and Y axes, yielding different values for the components of your vWorldUnitsPerTexel vector. Assuming minO and maxO are the corners of the shadow bound, it will be something like

     glm::vec3((maxO.x - minO.x) / 1024.0f, (maxO.y - minO.y) / 1024.0f, 0.0f);

     EDIT: Sorry, I just read through the Microsoft code and it looks like it does the same thing, which means it's probably not the issue.
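For context, the world-units-per-texel value is typically used to snap the shadow bound to whole texel increments so shadow edges don't shimmer as the camera moves. A minimal sketch (plain C++; the function name and the 1024-texel map size are assumptions):

```cpp
#include <cmath>

// Snap an ortho-bound coordinate to a whole number of shadow-map
// texels. worldUnitsPerTexel = (maxO - minO) / shadowMapSize per axis.
double snapToTexel(double value, double worldUnitsPerTexel) {
    return std::floor(value / worldUnitsPerTexel) * worldUnitsPerTexel;
}
```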
  8.   "thank for your answer but now directx is in win sdk where can i find the examples?"

     The tutorial samples are located here, and here you can find the rest of the samples. I haven't tried these myself, but it seems they are just updated versions of the examples provided with the former DirectX SDK.

     Personally, I still keep a copy of the June 2010 SDK installed for the sake of having the tutorials offline. Searching MSDN can help figure out whether the stuff in the SDK is outdated and how to replace it.
  9. I believe that the perspective matrix in DirectX is already defined so that any point that falls into the view frustum is projected into a point in the range ( -1, -1, 0 ) to ( 1, 1, 1 ), so no additional transform is necessary to map the canonical view volume into clip space, as they are pretty much the same.

     This need not be true for any perspective matrix: for instance, the simplest example of such a matrix can look something like this:

     | 1 0 0 0 |
     | 0 1 0 0 |
     | 0 0 1 0 |
     | 0 0 1 0 |

     Multiplying a point P( x, y, z, 1 ) by this matrix yields P'( x, y, z, z ), and after homogenization (division by the w-component) is applied, this becomes P''( x/z, y/z, 1, 1 ). P'' is the projection of point P onto the plane z=1, with perspective foreshortening applied (note that x/z and y/z get smaller as the distance from the viewer increases).

     Although the basic idea behind the perspective matrix in DirectX is the same, its layout is a bit more complicated. It scales the point's x and y coordinates to the [-1, 1] range. The z-coordinate is also preserved (unlike in the example above) and mapped into the [0, 1] range. You can read about DirectX (and OpenGL) projection matrices in greater detail, for example, in this article (section on perspective projection).

     Actually, it depends. How do you define your world space, i.e. what units do you use and which directions are the x and y axes pointing? The decisions are more or less arbitrary and will depend mostly on your personal taste, but sticking to them consistently matters a lot. The matrix you've constructed should work if your world units are pixels (sprite positions and scr_width, scr_height are measured in pixels), and you have the X axis pointing to the right and the Y axis pointing up. If you prefer working in screen space, with the Y axis pointing down, then your matrix will actually translate your entire screen out of the canonical view space, so nothing will be visible. To fix that you would need to translate by +1, not -1, on the y axis, and also mirror the y-coordinates:

     | 2/scr_width  0             -1 |
     | 0            -2/scr_height  1 |
     | 0            0              1 |

     Not sure if I understand you correctly. You don't have to limit your model coordinates to screen_width, screen_height, unless the model is supposed to always stay visible on the screen, I guess. Also, if you position your models in, say, meters as opposed to pixels, that's perfectly fine too; you'll just need a slightly different matrix to transform them to normalized coordinates. As far as the camera is concerned, again, it's totally up to you whether to use one or not. I'd say if your screen doesn't move at all, you don't need a camera. Otherwise having a camera can be convenient, as you will always have a point of reference "attached" to the visible part of your world and will be able to transform your scene into normalized view space based on that point of reference. If I'm not much mistaken, for a 2D camera you'll only need to specify the position and the view width and height. Then to transform a point from world to clip space you just translate it by -cameraX, -cameraY and scale by 2/cameraViewWidth, 2/cameraViewHeight. You can think of it as the more general case of the matrix you've constructed, where the camera is positioned at (scr_width/2, scr_height/2) and its view dimensions are (scr_width, scr_height).

     Well, hopefully this helps, or at least doesn't confuse you even more. Please correct me if something I've said is utter nonsense, as I can't say I'm 100% confident with the topic either.
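The simple projection described above can be checked in a few lines (plain C++; Vec4 and the function names are assumptions, applying the matrix in the column-vector sense M·P as in the post):

```cpp
struct Vec4 { double x, y, z, w; };

// Apply the minimal perspective matrix from the post: the last row
// copies z into w, so after the homogeneous divide the point lands
// on the plane z = 1, with perspective foreshortening.
Vec4 simplePerspective(Vec4 p) { return { p.x, p.y, p.z, p.z }; }

Vec4 homogenize(Vec4 p) { return { p.x / p.w, p.y / p.w, p.z / p.w, 1.0 }; }
```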
  10. In this case there's no notion of camera space at all: you supply clip-space coordinates to the vertex shader, and the pipeline after the vertex shader expects such coordinates, so no transformation is needed. The viewport transform is then applied, mapping these coordinates into render-target space; that is correct. Clip-space coords range from -1 to 1 on the X and Y axes, and your quad spans the exact same range. As the clip space is mapped to the viewport, so is your quad, effectively becoming a rectangle covering the entire viewport. That's exactly what the view and projection transforms do: the view matrix transforms the scene so that the camera is positioned at the origin, then the projection transform maps the camera view volume to the canonical view volume (-1, -1, 0), (1, 1, 1), producing clip-space coordinates. You can also multiply these two matrices together, which gives you a single transform that maps vertices from world space directly into clip space.
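A sketch of the viewport transform mentioned above (plain C++; the names are assumptions): clip-space [-1, 1] maps onto a width x height render target, with Y flipped since D3D render targets have Y pointing down.

```cpp
struct Pixel { double x, y; };

// Map clip-space XY in [-1, 1] to pixel coordinates on a
// width x height render target. Clip y = +1 is the top row.
Pixel viewportTransform(double clipX, double clipY,
                        double width, double height) {
    return { (clipX + 1.0) * 0.5 * width,
             (1.0 - clipY) * 0.5 * height };
}
```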
  11. Pseudomarvin, can you show the code you use to create the ortho projection matrix? One possible reason why you don't get any shadows is that no objects are actually inside the view volume, so no depth information is rendered. An easy way to check would be to render the scene from your directional light source, i.e. use its view and projection matrices instead of the camera's. If you don't see anything drawn to the screen, some of the values must be incorrect. Also, I believe it is preferable to use point sampling to sample the shadow map (and to do the filtering manually in the shader, if you need it). You don't want to interpolate the depth values from adjacent texels, but rather the shadowed/lit samples which you get after comparing the fragment depth with the depth values from the texture.
  12.   Could you be more specific, please? I build the application from the command line, so there shouldn't be anything the environment does behind my back. If I build with full optimizations and without generating debug info, which is more or less what Release mode does, but create the ID3D11Device with D3D11_CREATE_DEVICE_DEBUG flag set, the break still occurs. If I don't use the DirectX debug layer, the program terminates normally. I don't think it solves the problem, though - it probably just doesn't get reported.
  13. Found someone else mentioning the same behavior on these forums here. So it's not just me then, good.

     Mona2000, looks like something like this happens indeed. The question is, is that a bad thing? With main RAM the memory manager makes sure to free any leaked resources after the program has exited (true for Windows, at least, and probably for Mac and Unix; not sure about consoles, though). Does the same happen with video memory? If so, no harm done, I guess.
  14. Haha, detecting an NVidia user by just a few lines of debug output is pretty cool. Not that I need Fraps all that much; I'll eventually set up some kind of debug overlay for that sort of info anyway. I'm just curious why the bug happens and whether it's me who's causing it. Thanks for the ShadowPlay idea, though; I'll look into it and will probably start using it instead of Fraps in general.

     As for the potential bug, I was able to repro it using Microsoft's SDK sample (link) by changing sd.SampleDesc.Count and sd.SampleDesc.Quality to something higher than 1 and 0, say, 2 and 2. All the same, three live parentless objects are reported by the debugger.

     Guess I'll have to find a different PC to verify, and call it confirmed :)
  15. Hello, everyone, let me share a weird bug with you and hope that someone can help me deal with it.

     I've just started a new project using DirectX 11, and this bug won't let me move any further than a blank screen. When the program is terminating, it either crashes (if I run the executable) or says it has triggered a breakpoint (if run in a debugger). After the breakpoint I'm able to continue, and the program terminates with the following output:

     blocks.exe has triggered a breakpoint.
     'blocks.exe' (Win32): Unloaded 'C:\Windows\System32\nvwgf2umx.dll'
     'blocks.exe' (Win32): Unloaded 'C:\Windows\System32\psapi.dll'
     The thread 0x1e8c has exited with code 0 (0x0).
     D3D11 WARNING: Live child object (0x0000000004E89D50, RefCount=1) with no live parent (0x000000000031BAD8). [ STATE_CREATION WARNING #0: UNKNOWN]
     D3D11 WARNING: Live child object (0x0000000004E8A1B0, RefCount=1) with no live parent (0x000000000031BAD8). [ STATE_CREATION WARNING #0: UNKNOWN]
     D3D11 WARNING: Live child object (0x0000000004EA79C0, RefCount=1) with no live parent (0x000000000031BAD8). [ STATE_CREATION WARNING #0: UNKNOWN]
     The program '[7880] blocks.exe' has exited with code 0 (0x0).

     The breakpoint occurs in my Renderer class's destructor (the class just encapsulates some minimal DirectX functionality):

     Renderer::~Renderer()
     {
         ID3D11Debug* DebugDevice = nullptr;
         device_->QueryInterface(__uuidof(ID3D11Debug), (void**)(&DebugDevice));
         RELEASE( defaultRasterizerState_ );
         RELEASE( backBufferView_ );
         RELEASE( swapChain_ );
         RELEASE( context_ );
         RELEASE( device_ );
         DebugDevice->ReportLiveDeviceObjects(D3D11_RLDO_DETAIL);
         RELEASE(DebugDevice);
     }

     Through a bit of experimenting I found out that all the following conditions must be met for the error to occur:
     1) the DirectX debug layer is enabled (I guess the problem still persists when it's not; there's just no one to break),
     2) the Fraps overlay is enabled,
     3) the swap chain is created with multisampling turned on.
     Using ReportLiveDeviceObjects() I learned that there are indeed live objects at the specified addresses, and they are an ID3D11InputLayout, an ID3D11VertexShader and an ID3D11PixelShader. I'm not creating those (the program only clears the back buffer and presents it to the screen), and I made sure to release my own objects.

     My guess is that these are the shaders and the input layout Fraps uses to draw its frame counter, and for some reason it fails to clean up after itself. I wonder what it has to do with multisampling, though; when it is disabled, no live objects are reported.

     Well, this one's getting a bit long, sorry for that :) I guess I'll try to reproduce the bug with a clean/minimal program, and meanwhile, if someone has any ideas on why it happens or how to fix it, I'd greatly appreciate any help!
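The RELEASE macro used in the destructor isn't shown in the post; a common shape for it is a null-checking release that also clears the pointer so later uses fail fast. A minimal sketch (plain C++; SafeRelease and MockCom are assumptions, with MockCom standing in for a real COM interface):

```cpp
// Release a COM-style object if non-null, then null the pointer so a
// second release is a harmless no-op.
template <typename T>
void SafeRelease(T*& ptr) {
    if (ptr) {
        ptr->Release();
        ptr = nullptr;
    }
}

// Minimal stand-in for a COM object, just enough to exercise the helper.
struct MockCom {
    int refCount = 1;
    int Release() { return --refCount; }
};
```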