Mchart

Members
  • Content count

    10
Community Reputation

130 Neutral

About Mchart

  • Rank
    Member
  1. Hi everyone! I have implemented a Torrance-Sparrow BRDF shader and I'd like to know if a table or a rule of thumb exists to map the parameters of the model (F0, rho_s, roughness) to real materials.    Thank you!   Micheal
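Not from this thread, but a widely quoted rule of thumb from physically based shading practice that may help: most dielectrics have a normal-incidence Fresnel reflectance (F0) in the 0.02-0.05 range, and F0 can be derived from a material's index of refraction. The values below are illustrative approximations, not measured data.

```python
# Illustrative rule-of-thumb F0 (normal-incidence Fresnel reflectance) values
# commonly quoted in physically based shading references. Roughness and the
# specular weight (rho_s) are usually tuned by eye or taken from measured data.
F0_RULE_OF_THUMB = {
    "water": 0.02,
    "plastic": 0.04,   # most dielectrics cluster around 0.02-0.05
    "glass": 0.04,
    "diamond": 0.17,
}

def f0_from_ior(n):
    """F0 for a dielectric with index of refraction n (Fresnel at 0 degrees)."""
    return ((n - 1.0) / (n + 1.0)) ** 2
```

For example, glass with n ≈ 1.5 gives `f0_from_ior(1.5) == 0.04`, which matches the table.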
  2.   By  "DX," do you mean Direct2D? Direct3D 11? DirectX 9 or 10?   It's not clear what you mean by "framework." The Direct3D 11 API, for instance, relies on several "underlying" frameworks or layers - DXGI, hardware drivers, etc., none of which a starting D3D programmer needs to be intimately familiar with.   As it appears you don't want to buy a book or work through tutorials, with regard to a "guided walkthrough" (somehow different than a tutorial? - not clear), the SDK that, for instance, comes with Visual Studio has excellent documentation, but won't provide you with a "walkthrough."   However, googling for "guided walkthrough direct3d 11" yields ~6 million hits. Perhaps one of those will provide you a start. Some of them, for instance, discuss porting OpenGL apps to Direct3D, etc.
     Sorry, I clearly explained myself pretty poorly; my English is failing me. I mean D3D11, and by "framework" I mean API wrappers, for example. I do want to look at tutorials and books; actually, I don't mind having to read many of them, I'm open to anything! :)   I'd like a tutorial like the one I used for OpenGL, http://www.opengl-tutorial.org/ , which goes from zero to something a tad more advanced. Sorry again if I didn't explain myself clearly.
  3. Hi all, I have mid-level theoretical knowledge of graphics and a bit of experience in OpenGL, but now I want to learn DX.    Buying a book or reading several tutorials is not a problem, but what I'd like is:   1- When I was learning OpenGL, many books and tutorials relied heavily on underlying frameworks (i.e. wrappers or compulsory engines). If possible, I would like something that does not "hide" anything.  2- I know that documentation is often a good place to start, but I tend to enjoy a guided walkthrough more.   Are the resources provided with the SDK enough to cover my needs?      Thank you very much,   Micheal
  4.   I am doing trilinear filtering (GL_TEXTURE_MIN_FILTER and GL_TEXTURE_MAG_FILTER set to GL_LINEAR, and I generate mipmaps), but the result is somewhat disappointing: I have no penumbra at all.  I am using almost exactly the code here: http://fabiensanglard.net/shadowmappingVSM/  and my result is http://imgur.com/nkVtTYI which is rather blocky and not much better than a basic less-than test :\   I know I can increase the resolution, but it's still no better than a simple test. PCF gives much more pleasing results.     Have you tried aniso? How about more samples? Have you played with the minimum variance constant? How about a fast single-pass 3x3 blur before creating the mipmaps?       Thank you for your reply. Yes, I've tried aniso and the result is still the above. Regarding the minimum variance constant, changing it just does what changing the classic epsilon bias does (i.e. acne and peter-panning) and changes the "blackness" of the shadow, which however remains uniform; it does nothing about the penumbra.  As for the 3x3 pass before the mipmaps, I wanted to avoid writing another shader for it; also, for my application I may have to re-render the shadow maps every frame, so if I have to perform a filtering pass every time, what's the advantage over simple PCF?  Note that I have a single subject and all I'm dealing with is correct self-shadowing, as in the picture above; I don't know if this changes anything.
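For reference, the VSM visibility test being discussed is Chebyshev's upper bound evaluated on the two filtered depth moments; the soft penumbra comes entirely from filtering those moments, which is why an unfiltered VSM looks like a plain less-than test. A minimal sketch, assuming `moments = (E[d], E[d^2])` has been sampled from the (filtered) shadow map and `t` is the receiver depth in light space:

```python
# Minimal sketch of the VSM visibility test (Chebyshev's upper bound).
# `moments` are the two depth moments (mean, mean of squares) sampled from
# the variance shadow map; `t` is the receiver's depth in light space.

def vsm_visibility(moments, t, min_variance=0.0002):
    m1, m2 = moments
    if t <= m1:
        return 1.0  # fully lit: receiver is at or in front of the mean occluder depth
    variance = max(m2 - m1 * m1, min_variance)  # clamp variance to fight acne
    d = t - m1
    return variance / (variance + d * d)  # upper bound on P(occluder depth >= t)
```

With zero variance (a hard edge in the unfiltered map) the bound collapses to 0 or 1, i.e. exactly the basic depth test; only after blurring/mipmapping the moments does the variance term produce intermediate, penumbra-like values.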
  5.   I am doing trilinear filtering (GL_TEXTURE_MIN_FILTER and GL_TEXTURE_MAG_FILTER set to GL_LINEAR, and I generate mipmaps), but the result is somewhat disappointing: I have no penumbra at all.  I am using almost exactly the code here: http://fabiensanglard.net/shadowmappingVSM/  and my result is http://imgur.com/nkVtTYI which is rather blocky and not much better than a basic less-than test :\   I know I can increase the resolution, but it's still no better than a simple test. PCF gives much more pleasing results.
  6. Hi!    I have just finished implementing VSMs. They seem to work fine in terms of visibility, but I can't notice any improvement in quality compared with the basic shadow-map test (less-than). Is this technique beneficial only if the maps are pre-filtered, or am I doing something wrong? Is there a way to obtain a better result without additional passes (e.g. a prior Gaussian blur)?      Thank you!
  7.   First of all, thanks for the reply! I'm using glm::perspective to build the matrix, which I believe takes the full angle. I want to make sure that a certain object is always in the frustum, even when the distance between the camera and the bounding sphere is very small.   (Also, I found that for the way I compute the distance it should be asin(r/d), but I still get the same distortion.)
  8. Hi there!   I have a bounding sphere of, say, radius r. I want this sphere to always occupy more or less the same portion of the rendered image even when the camera changes its position.  To do so I thought of changing the FOV dynamically according to the movement of the camera. Note that the camera is "looking at" the centre of the sphere. Armed with pen, paper and my old trigonometry knowledge I came up with the formula   FOV = 2 * arctan(radius/distance), which was also confirmed after a bit of googling.  This value is fed to glm::perspective, where the distance is obtained by   distance = glm::distance(SphereCentre, cameraPosition). I've double-checked the centroid position and radius in world space (directly on the data I have; I do not perform any transformation on my mesh).   My problem is that what happens is not what I'm expecting, at all. First of all, the portion of the scene seen is much more than the wanted sphere, but also, if the distance becomes smaller, the "sphere" goes farther away, like a zooming-out effect. I would expect this if I were changing just the FOV, but shouldn't it be countered by the use of the above formula?     EDIT: If I remove the 2* in front of the formula (2*arctan...) it is sort of ok-ish, except for a terrible distortion when the distance is very small. Why is that? (Both why the distortion appears and why it is sort of ok without the 2*; I'm quite certain of my calculations.)
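A small numeric sketch of the geometry involved (my own check, not from the thread): the tangent lines from the camera to a sphere of radius r at distance d make a half-angle of asin(r/d), so a vertical FOV of 2*asin(r/d) makes the silhouette fill the frustum's vertical extent at every distance. The distortion at small d is inherent: as d approaches r, that FOV approaches 180 degrees, and very wide perspective projections are strongly distorted.

```python
import math

# fovy that makes a sphere of radius r at distance d exactly fill the
# vertical frustum extent (assumes d > r; tangent-line half-angle is asin(r/d))
def fov_for_sphere(r, d):
    return 2.0 * math.asin(r / d)

# the sphere silhouette's half-height relative to the frustum half-height
def screen_fraction(r, d, fovy):
    half_angle = math.asin(r / d)  # angular half-size of the silhouette
    return math.tan(half_angle) / math.tan(fovy / 2.0)

# the fraction stays constant (1.0) as the camera moves, which is the goal
for d in (2.0, 5.0, 50.0):
    assert abs(screen_fraction(1.0, d, fov_for_sphere(1.0, d)) - 1.0) < 1e-9

# as d -> r, the required fovy blows up toward pi (180 degrees): heavy distortion
print(math.degrees(fov_for_sphere(1.0, 1.05)))
```

Note that glm::perspective expects the full vertical angle in radians, so feeding it a degrees value, or a full angle where a half angle was computed, produces exactly the kind of "zoomed out" mismatch described above.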
  9. Linear Depth Buffer

      Thank you Samith, just one question: in which space are t and s the eye-to-near / eye-to-far distances? If I look in my projection matrix (built via glm::perspective) at positions (3,3) and (3,4), what I see is something like -1.03 and -1, where my clipping planes are [0.1, 8].
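As a cross-check of the numbers quoted above (a sketch assuming a standard GL-style glm::perspective matrix with near/far planes 0.1 and 8): the two depth-related entries are -(f+n)/(f-n) and -2fn/(f-n), and the -1 is the perspective-divide term in the same column.

```python
# Depth-related entries of a GL-style perspective matrix (as built by
# glm::perspective) for clip planes n = 0.1, f = 8.
n, f = 0.1, 8.0
A = -(f + n) / (f - n)       # stored at glm's m[2][2]; the reported -1.03
B = -2.0 * f * n / (f - n)   # stored at glm's m[3][2]
# the -1 seen next to A is m[2][3], the term that copies -z_eye into clip w
print(round(A, 3), round(B, 3))  # -1.025 -0.203
```

So the -1.03 at (3,3) is -(f+n)/(f-n) for [0.1, 8], and these entries are defined in terms of the eye-space near/far distances, not any post-projection space.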
  10. Hi, I'm new here; I hope I'm not making mistakes with this post! If so, please tell me!    I'm rendering a scene to a render target, and I then have to access its depth information in a post-processing stage. In this stage I need linear depth info. I've been browsing the internet quite a lot and found a lot of different approaches/alternatives. What do you think is the way to go? I'm a bit of a newbie with this stuff, so can you give me a hint/pointer on the possible GLSL code?   Thank you very much :)    Micheal
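One common approach (a sketch, assuming a standard GL perspective projection and the default [0,1] depth range; the same arithmetic translates line-for-line into GLSL): invert the projection's depth mapping to recover linear eye-space distance from the stored depth-buffer value.

```python
# Convert a depth-buffer value in [0, 1] back to linear eye-space distance,
# assuming a standard GL perspective projection with near n and far f.
def linearize_depth(depth, n, f):
    z_ndc = 2.0 * depth - 1.0  # window-space [0,1] -> NDC [-1,1]
    return 2.0 * n * f / (f + n - z_ndc * (f - n))  # positive eye-space distance

# sanity checks: depth 0 maps to the near plane, depth 1 to the far plane
assert abs(linearize_depth(0.0, 0.1, 8.0) - 0.1) < 1e-9
assert abs(linearize_depth(1.0, 0.1, 8.0) - 8.0) < 1e-9
```

Note the mapping is strongly non-linear: a stored depth of 0.5 corresponds to an eye-space distance of only ~0.2 with these planes, which is exactly why post-processing that needs distances has to linearize first.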