Mchart

Member Since 25 Jun 2014
Offline Last Active Oct 16 2014 01:59 PM

Topics I've Started

Real-world materials - Torrance-Sparrow BRDF params

15 October 2014 - 09:28 AM

Hi everyone!

I have implemented a Torrance-Sparrow BRDF shader, and I'd like to know whether a table or rule of thumb exists for mapping the model's parameters (F0, rho_s, roughness) to real-world materials.

 

Thank you!

 

Micheal


Best learning resource for an OpenGL programmer

25 September 2014 - 08:40 AM

Hi all,

I have a mid-level knowledge of graphics theory and a bit of experience with OpenGL, but now I want to learn DX.

 

Buying a book or reading several tutorials is not a problem, but here is what I'd like:

 

1- When I was learning OpenGL, many books and tutorials relied heavily on an underlying framework (i.e. wrappers or mandatory engines). If possible, I'd like something that doesn't "hide" anything.

2- I know the documentation is often a good place to start, but I tend to enjoy a guided walkthrough more.

 

Are the resources provided with the SDK enough to cover my needs?

 

 

Thank you very much,

 

Micheal


VSM without filtering the maps, and the easiest way to filter them

15 August 2014 - 01:14 PM

Hi! 

 

I have just finished implementing VSM. It seems to work fine in terms of visibility, but I can't see any quality improvement compared with the basic shadow-map test (a plain less-than comparison). Is this technique beneficial only if the maps are pre-filtered, or am I doing something wrong? Is there a way to get a better result without additional passes (e.g. a prior Gaussian blur)?

 

 

Thank you!


Problem with FOV that fits a volume

06 July 2014 - 01:37 PM

Hi there!

 

I have a bounding sphere of, say, radius r. I want this sphere to always occupy roughly the same portion of the rendered image, even when the camera changes position. To do so, I thought of changing the FOV dynamically according to the camera's movement. Note that the camera is looking at the centre of the sphere.

Armed with pen and paper and my old trigonometry knowledge, I came up with the formula

 

 FOV = 2 * arctan(radius/distance) 

which was also confirmed after a bit of googling. This value is fed to glm::perspective, and the distance is obtained by

 distance = glm::distance(SphereCentre, cameraPosition) 

I've double-checked the centroid position and radius in world space (directly on the data I have; I don't perform any transformation on my mesh).

 

My problem is that what happens is not what I expect, at all. First, the portion of the scene seen is much larger than the intended sphere; also, as the distance decreases, the "sphere" moves farther away, like a zooming-out effect. I would expect this if I changed just the FOV, but shouldn't it be countered by using the formula above?

 

EDIT: If I remove the 2* in front of the previous formula (2*arctan...), it is sort of OK-ish, except for a terrible distortion when the distance is very small. Why is that? (Both why the distortion occurs and why it is sort of OK without the 2*; I'm quite certain of my calculations.)


Linear Depth Buffer

25 June 2014 - 03:27 PM

Hi, I'm new here; I hope I'm not making any mistakes with this post! If so, please tell me!

 

I'm rendering a scene into a render target, and I then have to access its depth information in a post-processing stage, where I need linear depth. I've been browsing the internet quite a lot and found many different approaches/alternatives. What do you think is the way to go? I'm a bit of a newbie with this stuff, so could you give me a hint/pointer on the possible GLSL code?

 

Thank you very much :) 

 

Micheal

