dietrich

Member
  • Content count

    15

Community Reputation

765 Good

About dietrich

  • Rank
    Member

Personal Information

  • Interests
    Art
    Design
    Programming
  1. 3D Horrible texture/object popping

    While not exactly blending, several modern games do try to smooth out LOD popping by using a dithering-like effect. Here's a blog post on how Assassin's Creed 3 does it; Far Cry 4 uses a similar effect, and maybe GTA V too?
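    For a rough idea of what such a dithered transition looks like, here's a minimal sketch. This is entirely my own illustration, not how any of those games actually implement it; in practice the test lives in a pixel shader, but plain C++ shows the logic:

    // Hypothetical sketch of screen-door LOD cross-fading, written as plain
    // C++ for illustration; in practice this test would run per pixel in a
    // pixel shader.
    #include <cstdint>

    // 4x4 Bayer matrix, normalized to [0, 1). Each screen pixel maps to one cell.
    static const float kBayer4x4[4][4] = {
        {  0/16.0f,  8/16.0f,  2/16.0f, 10/16.0f },
        { 12/16.0f,  4/16.0f, 14/16.0f,  6/16.0f },
        {  3/16.0f, 11/16.0f,  1/16.0f,  9/16.0f },
        { 15/16.0f,  7/16.0f, 13/16.0f,  5/16.0f },
    };

    // Returns true if a pixel of the fading-in LOD should be kept.
    // fade goes 0 -> 1 over the transition; the fading-out LOD uses (1 - fade),
    // so between the two meshes every pixel is covered exactly once.
    bool KeepPixel(uint32_t x, uint32_t y, float fade)
    {
        return fade > kBayer4x4[y % 4][x % 4];
    }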
  2. I believe font height isn't actually the height of the capital letters; it's the entire vertical extent of the font (consider, e.g., the glyphs "Ć" and "g"). So the height of the glyph "C" will be less than 32; in your bitmap it's roughly 18 pixels. If you set the height to 18, it should fix the vertical stretching. Then there's the problem of the rendering being too small. Will it help if I point out that it's exactly half the size of what you expect? :)
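    To make the metrics concrete: stb_truetype (which I mention in another reply below) exposes them directly. A minimal sketch, assuming the .ttf file is already loaded into memory:

    // Minimal stb_truetype sketch: the "pixel height" you ask for covers
    // ascent + descent, not just the cap height.
    #define STB_TRUETYPE_IMPLEMENTATION
    #include "stb_truetype.h"
    #include <cstdio>

    void PrintFontMetrics(const unsigned char* ttfData)
    {
        stbtt_fontinfo font;
        stbtt_InitFont(&font, ttfData, stbtt_GetFontOffsetForIndex(ttfData, 0));

        int ascent, descent, lineGap;
        stbtt_GetFontVMetrics(&font, &ascent, &descent, &lineGap);

        // Scale factor chosen so that (ascent - descent) == 32 pixels;
        // a capital letter alone will come out noticeably shorter than 32.
        float scale = stbtt_ScaleForPixelHeight(&font, 32.0f);
        printf("ascent: %.1fpx, descent: %.1fpx\n",
               ascent * scale, descent * scale);
    }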
  3. Is Sublime Text a valid option for C++ development?

    I use Sublime for hobby projects (C-style C++ and shaders, mostly) and I find it much more comfortable, fluid and frustration-free than Visual Studio's built-in text editor, which I use at work. For me that was enough to justify having to set up command-line compilation.

    Agree 100%, but using an external text editor doesn't really prevent one from debugging in VS, so why not enjoy the benefits of both? One can always edit code in Sublime, compile, and Alt-Tab into Visual Studio to do some debugging.
  4. First you would load your image into an array of bytes representing the pixels of the image, essentially a bitmap. Then you can manipulate this array any way you like, including extracting portions of it to form a new texture. A pixel at location {x, y} would be addressed as data[y*imageWidth + x].

    To load a bitmap you could write your own parser by looking at the specs of a specific file format (BMP is fairly straightforward to load, others are more challenging), or you could save yourself some time and use a library that does it for you. I prefer stb_image; it's lightweight and easy to use. After that it's simply a matter of using the DirectX API to initialize a Texture2D with your data. IIRC, you can pass a pointer to your bitmap as the pSysMem member of D3D11_SUBRESOURCE_DATA when calling ID3D11Device::CreateTexture2D (see the sketch below).

    Another option would be to preprocess your font into a set of textures, one per character. Again, stb_truetype by the very same Sean Barrett could do that for you. Yet another option is to use a single font texture and use appropriate UV coordinates to draw a portion of your texture containing the desired character. Personally, I would go with this option (I have tried both recently, and having a single texture just meant that much less bookkeeping, although it may well be different with your project), and since you already know the texcoords of each character in your texture, it shouldn't be too hard to implement.
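    A minimal sketch of that path, assuming a hypothetical font.png and an already-created ID3D11Device:

    // Minimal sketch: load an image with stb_image and upload it as a
    // D3D11 texture. "font.png" and the device are assumptions.
    #define STB_IMAGE_IMPLEMENTATION
    #include "stb_image.h"
    #include <d3d11.h>

    ID3D11Texture2D* LoadTexture(ID3D11Device* device)
    {
        int w, h, channels;
        // Force 4 channels so the data matches DXGI_FORMAT_R8G8B8A8_UNORM.
        unsigned char* pixels = stbi_load("font.png", &w, &h, &channels, 4);
        if (!pixels) return nullptr;

        D3D11_TEXTURE2D_DESC desc = {};
        desc.Width = w;
        desc.Height = h;
        desc.MipLevels = 1;
        desc.ArraySize = 1;
        desc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
        desc.SampleDesc.Count = 1;
        desc.Usage = D3D11_USAGE_IMMUTABLE;
        desc.BindFlags = D3D11_BIND_SHADER_RESOURCE;

        D3D11_SUBRESOURCE_DATA init = {};
        init.pSysMem = pixels;      // pointer to the bitmap
        init.SysMemPitch = w * 4;   // bytes per row

        ID3D11Texture2D* texture = nullptr;
        device->CreateTexture2D(&desc, &init, &texture);
        stbi_image_free(pixels);    // the texture owns its own copy now
        return texture;
    }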
  5. Not an expert on the topic myself, but here is a series of blog posts you may find useful if you haven't seen it yet: link.
  6. XMMatrixRotationRollPitchYaw

    Matrices m1 and m3 are more or less equal, as expected, and look something like this if you print them out:

    |-0.00  0.00 -1.00  0.00|
    | 0.71  0.71 -0.00  0.00|
    | 0.71 -0.71 -0.00  0.00|
    | 0.00  0.00  0.00  1.00|

    DirectXMath uses the row-vector convention under the hood, so you would transform a vector like so: v*M1*M2*M3. Matrix concatenation order is then M1*M2*M3. Since XMMatrixRotationRollPitchYaw rotates first around X, then around Y, your m3 matrix holds the expected transform. Here is a great article about row- vs column-vector convention: link.

    Now to that "more or less" part. Transforming a vector by either m1 or m3 should produce visually indistinguishable results; however, these matrices aren't equal up to binary representation. That's easy to see if we inspect the memory where the matrices are located, or even simply add a few decimal places to the printout:

    m1
    | 0.000000089 -0.000000030 -0.999999821  0.000000000|
    | 0.707106709  0.707106829 -0.000000089  0.000000000|
    | 0.707106650 -0.707106709  0.000000119  0.000000000|
    | 0.000000000  0.000000000  0.000000000  1.000000000|

    m3
    |-0.000000119  0.000000000 -0.999999881  0.000000000|
    | 0.707106709  0.707106709 -0.000000084  0.000000000|
    | 0.707106650 -0.707106769 -0.000000084  0.000000000|
    | 0.000000000  0.000000000  0.000000000  1.000000000|

    I believe that is because m1 and m3 were constructed differently and hence went through a different number of different floating-point operations. XMMatrixRotationRollPitchYaw doesn't actually use matrix multiplication at all, but rather goes through an intermediate quaternion representation to compute the resulting rotation matrix. And floating-point operations tend to accumulate error, which eventually leads to slightly different results.
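    For reference, a sketch of the comparison with DirectXMath. The angles are my guess (pitch = 45°, yaw = 90°); the point is printing both matrices side by side with enough decimal places:

    #include <DirectXMath.h>
    #include <cstdio>
    using namespace DirectX;

    int main()
    {
        const float pitch = XM_PIDIV4;  // rotation around X (my assumption)
        const float yaw   = XM_PIDIV2;  // rotation around Y (my assumption)

        // m1: explicit concatenation, X first, then Y (row-vector convention).
        XMMATRIX m1 = XMMatrixMultiply(XMMatrixRotationX(pitch),
                                       XMMatrixRotationY(yaw));
        // m3: goes through an intermediate quaternion internally.
        XMMATRIX m3 = XMMatrixRotationRollPitchYaw(pitch, yaw, 0.0f);

        XMFLOAT4X4 a, b;
        XMStoreFloat4x4(&a, m1);
        XMStoreFloat4x4(&b, m3);
        for (int r = 0; r < 4; ++r)
            printf("|% .9f % .9f % .9f % .9f|   |% .9f % .9f % .9f % .9f|\n",
                   a.m[r][0], a.m[r][1], a.m[r][2], a.m[r][3],
                   b.m[r][0], b.m[r][1], b.m[r][2], b.m[r][3]);
    }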
  7. If you're looking for a way to render your scene at a fixed resolution no matter the window size, I think it's best to always resize your swap chain's buffers, but only render to a portion of the back buffer by keeping the fixed width and height in the D3D11_VIEWPORT (see the sketch below). Keep in mind, though, that this may not be what the user expects, especially if the window becomes smaller than the specified resolution and part of the rendered scene gets cut off.
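    A sketch of what I mean, with a hypothetical fixed 1280x720 render resolution:

    #include <d3d11.h>

    // Sketch: resize the swap chain with the window, but keep the viewport
    // fixed, so the scene is always rendered at the same resolution.
    void OnResize(IDXGISwapChain* swapChain, ID3D11DeviceContext* context,
                  UINT windowWidth, UINT windowHeight)
    {
        // Release any outstanding back-buffer views before this call.
        swapChain->ResizeBuffers(0, windowWidth, windowHeight,
                                 DXGI_FORMAT_UNKNOWN, 0);

        D3D11_VIEWPORT vp = {};
        vp.Width    = 1280.0f;  // fixed render resolution (assumption)
        vp.Height   = 720.0f;
        vp.MinDepth = 0.0f;
        vp.MaxDepth = 1.0f;
        context->RSSetViewports(1, &vp);
    }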
  8. I've just been experimenting with window resizing, and I'm fairly sure that what you're seeing isn't the correct behavior. If you don't do any handling of the WM_SIZE message, the swap chain should just stretch the back buffer to match the front buffer, i.e. your window's client area, when presenting it to the screen. If you're seeing your window background instead, it probably means that the IDXGISwapChain::Present method isn't working correctly. Could you take a look at the HRESULT it's returning, or post your rendering code here? Also, are you using the debug DirectX runtime (creating the D3D device with the D3D11_CREATE_DEVICE_DEBUG flag)? It may report some relevant issues too.
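    A sketch of both checks, assuming the swapChainDesc, device, context and swapChain variables from your usual setup code:

    // Create the device with the debug layer enabled; it prints warnings
    // and errors to the Visual Studio Output window.
    UINT flags = D3D11_CREATE_DEVICE_DEBUG;
    D3D11CreateDeviceAndSwapChain(nullptr, D3D_DRIVER_TYPE_HARDWARE, nullptr,
                                  flags, nullptr, 0, D3D11_SDK_VERSION,
                                  &swapChainDesc, &swapChain, &device,
                                  nullptr, &context);

    // Then inspect what Present() returns every frame.
    HRESULT hr = swapChain->Present(1, 0);
    if (FAILED(hr))
        OutputDebugStringA("Present failed - check the HRESULT value\n");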
  9. Shouldn't minOrtho.x and minOrtho.y (and maxOrtho.x and maxOrtho.y) actually be equal? Since you position your light at the frustumCenter, minOrtho and maxOrtho should just be offsets from the origin in light view space after you transform them by the lightViewMatrix (see image below). And since the shadow map bound is square, their x and y components should be equal. There may actually be an error somewhere if they are not.

    That needn't always be true, though. Another valid approach is to always position a directional light source at the origin and encode the offset into the minOrtho and maxOrtho values. If that were the case, their x and y components could have arbitrary values.
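    To illustrate the first convention, a sketch with glm; lightDir is my placeholder for your light direction:

    #include <glm/glm.hpp>
    #include <glm/gtc/matrix_transform.hpp>

    // Sketch, first convention: the light sits at the frustum center, so the
    // ortho bound is a symmetric box around the light-space origin.
    glm::mat4 MakeLightView(const glm::vec3& frustumCenter,
                            const glm::vec3& lightDir)
    {
        return glm::lookAt(frustumCenter,
                           frustumCenter + lightDir,
                           glm::vec3(0.0f, 1.0f, 0.0f));
    }
    // With a square bound of half-extent "radius":
    //   minOrtho = (-radius, -radius, -radius)
    //   maxOrtho = ( radius,  radius,  radius)
    // so minOrtho.x == -maxOrtho.x and minOrtho.y == -maxOrtho.y by construction.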
  10. If I understand you correctly, that isn't actually true. When you add the radius vector to the frustum center here:

    //Create the AABB from the radius
    glm::vec3 maxOrtho = frustumCenter + glm::vec3(radius, radius, radius);
    glm::vec3 minOrtho = frustumCenter - glm::vec3(radius, radius, radius);

    you ensure that it is enclosed in a sphere that doesn't change, no matter how the frustum is oriented. The outer square in the image below is the shadow map, C is the frustum center and R is the radius. In this respect your videos look correct: the shadow map doesn't change size or orientation when the camera rotates, so that's good.

    A couple of things look suspicious to me. First, this line:

    lightOrthoMatrix = glm::ortho(maxOrtho.x, minOrtho.x, maxOrtho.y, minOrtho.y, near, far);

    I believe it should be min first, max second (see the corrected call below). I don't think this is the issue, but the resulting matrix probably flips the scene, so I'd rather avoid that.

    Second, why do the shadow maps you draw onscreen look rectangular? Is this just a projection issue? The movement in the second video looks fine to me; I'd rather say that the shadow map doesn't cover the view frustum entirely. I don't see any issues with the code though, so...

    ...finally, it could be useful to hack in some debug features to better understand what is happening. Drawing the scene from the point of view of the light is one option; rendering the view frustum and shadow bound edges to the frame buffer as lines is another. The latter helped me a lot when dealing with similar issues in my own code, since it becomes obvious whether your frustum and shadow bound are properly aligned and have the expected dimensions.
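    The corrected call would be something like:

    // Corrected argument order (left, right, bottom, top, near, far):
    // min goes first, max second, so the projection doesn't mirror the scene.
    glm::mat4 lightOrthoMatrix = glm::ortho(minOrtho.x, maxOrtho.x,
                                            minOrtho.y, maxOrtho.y,
                                            near, far);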
  11. Hi, Niruz91. I may be missing something, but I don't really see how this gives you the amount of world units per shadow-map texel:

    GLfloat worldUnitsPerTexel = lengthOfDiagonal / 1024.0f;

    Shouldn't you be dividing the dimensions of the shadow map bound instead of the camera frustum diagonal? Also, since the light frustum will probably be rectangular, the shadow map will have a different resolution in terms of world units along the X and Y axes, yielding different values for the components of your vWorldUnitsPerTexel vector. Assuming minO and maxO are the corners of the shadow bound, it will be something like:

    glm::vec3((maxO.x - minO.x) / 1024.0f, (maxO.y - minO.y) / 1024.0f, 0.0f);

    EDIT: Sorry, I just read through the Microsoft code and it looks like it does the same thing, which means it's probably not the issue.
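    For context, the usual reason for computing vWorldUnitsPerTexel at all (it's what the Microsoft sample uses it for, if I remember correctly) is snapping the shadow bound to whole-texel steps, so the shadows don't shimmer when the camera moves. A sketch using the same minO/maxO names:

    // Sketch: quantize the light-space bound to texel-sized increments.
    const float shadowMapSize = 1024.0f;
    glm::vec3 worldUnitsPerTexel((maxO.x - minO.x) / shadowMapSize,
                                 (maxO.y - minO.y) / shadowMapSize,
                                 0.0f);
    // Snap the bound so it only ever moves in whole-texel steps (std::floor
    // from <cmath>).
    minO.x = std::floor(minO.x / worldUnitsPerTexel.x) * worldUnitsPerTexel.x;
    minO.y = std::floor(minO.y / worldUnitsPerTexel.y) * worldUnitsPerTexel.y;
    maxO.x = std::floor(maxO.x / worldUnitsPerTexel.x) * worldUnitsPerTexel.x;
    maxO.y = std::floor(maxO.y / worldUnitsPerTexel.y) * worldUnitsPerTexel.y;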
  12. thank for your answer but now directx is in win sdk where can i find the examples?

    The tutorial samples are located here, and here you can find the rest of the samples. I haven't tried these myself, but it seems they are just updated versions of the examples provided with the former DirectX SDK.

    Personally, I still keep a copy of the June 2010 SDK installed for the sake of having the tutorials offline. Searching MSDN can help figure out whether the stuff in the SDK is outdated and how to replace it.
  13. I believe that the perspective matrix in DirectX is already defined so that any point that falls into the view frustum is projected into a point in the range (-1, -1, 0) to (1, 1, 1), so no additional transform is necessary to map the canonical view volume into clip space, as they are pretty much the same.

    This need not be true for any perspective matrix: for instance, the simplest example of such a matrix can look something like this:

    1 0 0 0
    0 1 0 0
    0 0 1 0
    0 0 1 0

    Multiplying a point P(x, y, z, 1) by this matrix yields P'(x, y, z, z), and after homogenization (division by the w-component) is applied, this becomes P''(x/z, y/z, 1, 1). P'' is the projection of point P onto the plane z=1, with perspective foreshortening applied (note that x/z and y/z get smaller as the distance from the viewer increases).

    Although the basic idea behind the perspective matrix in DirectX is the same, its layout is a bit more complicated. It scales the point's x and y coordinates to the [-1, 1] range. The z-coordinate is also preserved (unlike the example above) and mapped into the [0, 1] range. You can read about DirectX (and OpenGL) projection matrices in greater detail, for example, in this article (section on perspective projection).

    Actually, it depends. How do you define your world space, i.e. what units do you use and which direction are the x and y axes pointing? The decisions are more or less arbitrary and will depend mostly on your personal taste, but sticking to them consistently matters a lot. The matrix you've constructed should work if your world units are pixels (sprite positions and scr_width, scr_height are measured in pixels), and you have the X axis pointing right and the Y axis pointing up. If you prefer working in screen space, with the Y axis pointing down, then your matrix will actually translate your entire screen out of the canonical view volume, so nothing will be visible. To fix that you would need to translate by +1, not -1, on the y axis, and also mirror the y coordinates:

    2/scr_width   0              -1
    0             -2/scr_height   1
    0             0                1

    Not sure if I understand you correctly. You don't have to limit your model coordinates to screen_width, screen_height, unless the model is supposed to always stay visible on the screen, I guess. Also, if you position your models in, say, meters as opposed to pixels, that's also perfectly fine; you'll just need a slightly different matrix to transform them to normalized coordinates.

    As far as the camera is concerned, again, it's totally up to you whether to use one or not. I'd say if your screen doesn't move at all, you don't need a camera. Otherwise having a camera can be convenient, as you will always have a point of reference "attached" to the visible part of your world and will be able to transform your scene into normalized view space based on that point of reference. If I'm not much mistaken, for a 2D camera you'll only need to specify the position and the view width and height. Then to transform a point from world to clip space you just translate it by (-cameraX, -cameraY) and scale by (2/cameraViewWidth, 2/cameraViewHeight). You can think of it as the more general case of the matrix you've constructed, where the camera is positioned at (scr_width/2, scr_height/2) and its view dimensions are (scr_width, scr_height).

    Well, hopefully this helps, or at least doesn't confuse you even more. Please correct me if something I've said is utter nonsense, as I can't say I'm 100% confident with the topic either.
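    To make the camera idea concrete, a small sketch (using glm purely as my own illustration; the math itself is API-agnostic). World units are pixels, Y pointing up:

    #include <glm/glm.hpp>
    #include <glm/gtc/matrix_transform.hpp>

    // Sketch of the 2D camera transform described above: translate by
    // -camera, then scale by 2/viewWidth, 2/viewHeight, mapping the visible
    // world rectangle to the [-1, 1] clip range.
    glm::mat4 Make2DCameraMatrix(float cameraX, float cameraY,
                                 float viewWidth, float viewHeight)
    {
        glm::mat4 m(1.0f);
        m = glm::scale(m, glm::vec3(2.0f / viewWidth, 2.0f / viewHeight, 1.0f));
        m = glm::translate(m, glm::vec3(-cameraX, -cameraY, 0.0f));
        return m;
    }
    // The fixed-screen matrix from above is then the special case
    // Make2DCameraMatrix(scr_width / 2, scr_height / 2, scr_width, scr_height).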
  14. In this case there's no notion of camera space at all: you supply the clip-space coordinates directly to the vertex shader, and the pipeline after the vertex shader expects such coordinates, so no transformation is needed. The viewport transform is then applied, mapping these coordinates into render-target space; that is correct. Clip-space coords range from -1 to 1 on the X and Y axes, and your quad spans the exact same range. As the clip space is mapped to the viewport, so is your quad, effectively becoming a rectangle covering the entire viewport.

    That's exactly what the view and projection transforms do: the view matrix transforms the scene so that the camera is positioned at the origin, then the projection transform maps the camera view volume to the canonical view volume (-1, -1, 0), (1, 1, 1), producing clip-space coordinates. You can also multiply these two matrices together, which gives you a single transform that maps vertices from world space directly into clip space.
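    For illustration, such a quad specified directly in clip space might look like this (the vertex layout is my assumption):

    // Sketch: a quad defined directly in clip space; no view or projection
    // matrix involved, so it always covers the whole viewport.
    struct Vertex { float x, y, z, w; };

    const Vertex fullscreenQuad[6] = {
        { -1.0f, -1.0f, 0.0f, 1.0f },  // two triangles spanning
        { -1.0f,  1.0f, 0.0f, 1.0f },  // the full [-1, 1] range
        {  1.0f, -1.0f, 0.0f, 1.0f },
        {  1.0f, -1.0f, 0.0f, 1.0f },
        { -1.0f,  1.0f, 0.0f, 1.0f },
        {  1.0f,  1.0f, 0.0f, 1.0f },
    };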
  15. Depth texture empty (shadowmapping)

    Pseudomarvin, can you show the code you use to create the ortho projection matrix? One possible reason why you don't get any shadows is that no objects are actually inside the view volume, so no depth information is rendered. An easy way to check that would be to render the scene from your directional light source, i.e. use its view and projection matrices instead of the camera ones. If you don't see anything drawn to the screen, some of the values must be incorrect. Also, I believe it is preferable to use point sampling to sample the shadow map (and to do the filtering manually in the shader, if you need it). You don't want to interpolate the depth values from the adjacent texels, but rather the shadowed/lit results you get after comparing the fragment depth with the depth values from the texture.
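    A sketch of that debug swap, assuming glm and placeholder values for the light parameters:

    #include <glm/glm.hpp>
    #include <glm/gtc/matrix_transform.hpp>

    // Debug sketch: render the main pass with the light's matrices instead
    // of the camera's. lightPos/lightTarget and the extents are placeholders.
    glm::mat4 lightView = glm::lookAt(lightPos, lightTarget,
                                      glm::vec3(0.0f, 1.0f, 0.0f));
    glm::mat4 lightProj = glm::ortho(-50.0f, 50.0f,   // left, right
                                     -50.0f, 50.0f,   // bottom, top
                                       1.0f, 100.0f); // near, far
    // Temporarily pass lightView/lightProj to the normal scene shader; if
    // nothing shows up on screen, the light's volume doesn't contain the scene.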