
jjtulip

Member · Content Count: 13 · Community Reputation: 187 (Neutral)
  1. jjtulip

    Energy conservation and rhoS

      Gotcha, thanks! Just to be sure, when you write it, that is actually (1 - ρs) times the diffuse term, right? And one last question: I usually use a scalar value in [0...1] derived from the refractive indices for the Fresnel term (I always see this term referenced as R0 or F0). But I very recently bumped into some sources that suggest using the specular colour instead; is that really the correct route? In that case, should fSpecReflectance be my regular specular colour (R, G, B in [0...1]) scaled by the R0 value?
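For what it's worth, here is how the two conventions usually relate (standard Fresnel facts, stated as background rather than as what any particular source does): F0 is the reflectance at normal incidence, derived from the refractive indices for a dielectric, and Schlick's approximation interpolates from it:

```latex
F_0 = \left(\frac{n_1 - n_2}{n_1 + n_2}\right)^{2},
\qquad
F(\theta) \approx F_0 + \left(1 - F_0\right)\left(1 - \cos\theta\right)^{5}
```

For dielectrics F0 is a near-achromatic scalar (roughly 0.02 to 0.05); for metals it is wavelength-dependent, which is why some sources store F0 directly as an RGB specular colour rather than as a separate colour scaled by a scalar R0.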
  2. I am trying to read a bit about PBR and I have a doubt about a term rho_s in the Torrance-Sparrow model that is referenced as a scaling coefficient. Is it right to assume that, including this term, the following formula is correct from an energy-conservation point of view: click. Pardon me if this is complete bogus, but I am still trying to understand. If the above is wrong, how can I combine specular and diffuse? Thanks!

     EDIT: To clarify, my reasoning was that as long as the specular BRDF and the diffuse one are each energy conserving, neither should reflect more than is received, right? Now, if rho_s is in the range [0...1], the above formula should guarantee that the weighted sum of the two is at most 1. But is that correct?
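The reasoning in the EDIT can be written out explicitly. A common way to combine the two lobes (which may or may not be the formula behind the broken "click" link) is the convex blend:

```latex
f_r(\mathbf{l},\mathbf{v}) = (1-\rho_s)\,f_{\mathrm{diffuse}}(\mathbf{l},\mathbf{v})
 + \rho_s\,f_{\mathrm{specular}}(\mathbf{l},\mathbf{v}),
\qquad 0 \le \rho_s \le 1,
```

and if each lobe on its own passes the energy-conservation test, i.e. its directional-hemispherical reflectance is at most 1, then

```latex
\int_{\Omega} f_r(\mathbf{l},\mathbf{v}) \cos\theta \, d\omega
\;\le\; (1-\rho_s)\cdot 1 + \rho_s \cdot 1 \;=\; 1,
```

because the two weights are non-negative and sum to 1. So the intuition in the EDIT is sound for a convex combination.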
  3. jjtulip

    Resource management

    I learned quite a lot from these replies, thanks! I understood that a big Resource Manager is a bad idea, as is a Resource base class. Suppose now I do end up with a Manager per type of asset and I am giving away raw pointers; why would I want a shared_ptr instead of a unique one? I see why I'd need a shared one if I gave away weak_ptrs, but why with raw pointers?
  4. First of all, I am sorry if I am opening a thread about something already discussed here, but I feel I have to be active in a topic to wrap my head around it.

     I am currently trying to re-design my renderer to make it a bit more flexible and "expandable" into a full engine some day. Up to now I didn't really need a resource manager, but now I feel I do. My initial idea was something like:

     - Each resource (Geometry, Material, Textures, etc.) inherits from a base class that just contains an ID (probably the filename) and a virtual method getFromFile to load from a custom definition file.
     - Resource manager: loads the resources through the above getFromFile and keeps them in a hash table indexed by their ID. I still have a small system, so I don't really see the point of specific managers; am I wrong in believing this? Should I always have specialized managers?

     My doubts/questions after a bit of googling are: what should the resource manager give to whoever requests resources? My first uneducated guess was to go for shared_ptr, but then I looked around and they're generally considered bad for performance reasons, and I can see that. The second option is to give away raw pointers and add a ref count to Resource; at the moment of a level change, the resource manager deletes every resource that has a ref count of 0. The third option, and the one I most need clarification on, is the use of a Resource Handle. What should a good handle ideally contain, and how should I organize and give away my resources in that case? What is the best option in your opinion?

     Second, and a smaller issue: is it better to let the Resource itself parse the file and build itself, or should I give this responsibility to a Parser class? Or maybe the resource manager itself (although this last option doesn't sound quite right to me).

     Thank you!
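A minimal sketch of the third option (handles) may make the trade-off concrete. All names here (Texture, TextureHandle, TextureManager) are made up for illustration; the key idea is an index plus a generation counter, so a stale handle to a freed slot is detected instead of dangling, which is what raw pointers cannot do:

```cpp
#include <cstdint>
#include <string>
#include <unordered_map>
#include <vector>

// Hypothetical resource type; the path doubles as the ID here.
struct Texture {
    std::string path;
};

// Opaque handle: cheap to copy, trivially storable, never dangles.
struct TextureHandle {
    uint32_t index = 0;
    uint32_t generation = 0;
};

class TextureManager {
public:
    TextureHandle load(const std::string& path) {
        auto it = lookup_.find(path);
        if (it != lookup_.end()) return it->second;  // already loaded
        slots_.push_back({Texture{path}, 1, true});
        TextureHandle h{uint32_t(slots_.size() - 1), 1};
        lookup_[path] = h;
        return h;
    }

    // Returns nullptr for stale or invalid handles.
    Texture* get(TextureHandle h) {
        if (h.index >= slots_.size()) return nullptr;
        Slot& s = slots_[h.index];
        if (!s.alive || s.generation != h.generation) return nullptr;
        return &s.texture;
    }

    void unload(TextureHandle h) {
        if (Texture* t = get(h)) {
            lookup_.erase(t->path);
            slots_[h.index].alive = false;
            slots_[h.index].generation++;  // invalidate outstanding handles
        }
    }

private:
    struct Slot { Texture texture; uint32_t generation; bool alive; };
    std::vector<Slot> slots_;
    std::unordered_map<std::string, TextureHandle> lookup_;
};
```

Callers resolve the handle to a pointer only briefly, at the point of use, and never store the pointer; that gives raw-pointer speed in the hot path while the manager stays free to move or evict resources between frames.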
  5. I am trying to wrap my head around IBL and I am currently looking at the UE4 approach. I think I understood the concept behind it, but I am still a bit dubious about a simple implementation detail, due to my inexperience with OpenGL and similar APIs.

     I need to filter a cubemap at various mip levels and I wanted to try it in a shader, but it is at this point that my doubts come in. How can I approach this? My googling led me to a way to create a cubemap FBO, but the only thing I can think of is to process each face individually, using quads as geometry and outputting to the right cubemap face. Is there any way to make this cleverer, directly outputting to the final cubemap in one go? I don't have a scene to render into it; I just need to "copy" and filter an existing cubemap.

     Thanks!
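The face-by-face approach is in fact the common one: for each mip level and each face, attach the face with glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_CUBE_MAP_POSITIVE_X + face, cubemap, level) and draw a fullscreen quad; alternatively, glFramebufferTexture plus a geometry shader writing gl_Layer can emit all six faces in one pass. The bookkeeping for the mip chain can be sketched as below (a sketch only; buildMipTargets and MipTarget are assumed names, and the linear roughness-per-mip mapping follows the UE4-style convention):

```cpp
#include <algorithm>
#include <vector>

// One render target per mip level of the prefiltered cubemap.
struct MipTarget {
    int level;        // mip level to attach to the FBO
    int size;         // face size in texels at this mip
    float roughness;  // roughness the filter pass should use
};

// Roughness goes linearly from 0 at mip 0 to 1 at the last mip.
std::vector<MipTarget> buildMipTargets(int baseSize, int numMips) {
    std::vector<MipTarget> targets;
    for (int level = 0; level < numMips; ++level) {
        MipTarget t;
        t.level = level;
        t.size = std::max(1, baseSize >> level);
        t.roughness = (numMips > 1) ? float(level) / float(numMips - 1) : 0.0f;
        targets.push_back(t);
    }
    return targets;
}
```

The filter shader then importance-samples the source cubemap with the per-mip roughness, and at lookup time the material's roughness selects the mip via textureLod.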
  6. In my forward renderer I am using a PBR shader for my materials, and the final lighting equation looks something like this (in pseudo-code, ignoring the fact that I have multiple analytical lights):

         diffuseComponent = diffuseLighting * materialDiffuseColor;
         specularComponent = specBRDF * materialSpecColor;
         outputCol = (1.0 - specularIntensity) * diffuseComponent + specularIntensity * specularComponent;

     I would now like to incorporate a basic IBL into this, for both diffuse lighting and reflections. I was reading a bit online, but I only found material where the surface is entirely reflective (the final colour is the reflection queried from the cubemap) or purely diffuse.

     For the diffuse part I thought of just multiplying my diffuseComponent by the value retrieved from the irradiance map. I am unsure where to put the reflection. I think this should be related to the roughness of the surface; I have a couple of ideas, but neither satisfies me. Where should I put the reflections so as to maintain PBR shading? Adding parameters wouldn't be an issue.

     Thank you!
  7. [SOLVED] Apparently I didn't pass one of the variables correctly from the TCS to the TES. Strangely, it was a fairly irrelevant one, but now it is fixed.
  8. Hi all! I'm considering adding Tessellation shaders to my pipeline, but I have some questions that I'd like to clear up before moving on.

     At the moment my VS calculates the positions of vertices in various spaces (world space, view space, screen space), and also, for example, coordinates in light space for the shadow calculation. I also calculate the TBN vectors for normal mapping in the fragment shader. Nothing more. Since new vertices are created by the tessellation step after the vertex shader, should I move all of the above to the TES, leaving the vertex shader with the sole responsibilities of transforming to world space and passing data on to the TCS? I believe the answer is yes, but being a complete newbie at this, I'd like to ask first, as such a change will require big changes in other parts of my application. Also, this will obviously decrease performance, as the instructions are executed more often, so what can I still do, safely, in the vertex shader?

     Thank you very much
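For what it's worth, a rough sketch of what the TES could look like once the derived-space work has moved there (all variable names are assumed, and this is untested illustration, not a drop-in shader). The VS then only outputs world-space position and attributes, and the TCS passes them through:

```glsl
#version 410 core
layout(triangles, equal_spacing, ccw) in;

// Per-control-point data passed through from the TCS (names assumed)
in vec3 tcWorldPos[];
in vec2 tcTexCoord[];

out vec3 teWorldPos;
out vec2 teTexCoord;
out vec4 teLightSpacePos;   // shadow-map coordinates

uniform mat4 uViewProj;
uniform mat4 uLightViewProj;

// Barycentric interpolation across the patch
vec3 lerp3(vec3 a, vec3 b, vec3 c) {
    return gl_TessCoord.x * a + gl_TessCoord.y * b + gl_TessCoord.z * c;
}

void main() {
    teWorldPos = lerp3(tcWorldPos[0], tcWorldPos[1], tcWorldPos[2]);
    teTexCoord = gl_TessCoord.x * tcTexCoord[0]
               + gl_TessCoord.y * tcTexCoord[1]
               + gl_TessCoord.z * tcTexCoord[2];

    // Derived spaces are computed here, after tessellation, so any
    // displacement applied to teWorldPos feeds into them correctly.
    teLightSpacePos = uLightViewProj * vec4(teWorldPos, 1.0);
    gl_Position     = uViewProj * vec4(teWorldPos, 1.0);
}
```

The general rule of thumb: anything that is linear in position (and thus survives barycentric interpolation) can stay in the VS; anything that depends on the displaced, tessellated position must move to the TES.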
  9. jjtulip

    Irradiance Filtering with SH

      Thanks! Well, I tried with an HDR version of the same cubemap; the result maintains the reddish hue, but it is still very different from the one produced with the CubeMapGen tool. Whereas the one presented in the above article is: ... Now, the screenshot of my result was taken from gDebugger, so maybe the tonemapping is different (hence the texture is darker, right?), but the "shape" of the image is still quite different.
  10. jjtulip

    Irradiance Filtering with SH

      No, I'm not using an HDR cubemap, but why wouldn't that work? The AMD tool doesn't seem to have such a limitation.
  11. Hi! I'm trying to include in my code something like what is done here: http://seblagarde.wordpress.com/2012/06/10/amd-cubemapgen-for-physically-based-rendering/ for the irradiance map computation. Although my code is a bit different, it strongly resembles what is in the post I linked. Or at least I think so. The problem is that, for the cubemap given there as reference, I obtain a result which is both different from the one on the site and also lacks the reddish hue I was expecting. I can post the code, but it is quite a lot. Is it evident what could be wrong?

      What I'm doing is:

      For each face
          for each (u, v)
              Compute the direction vector for (u, v)
              Get the solid angle
              Compute the SH terms using the "Y"s found in Stupid SH Tricks
              Use these to project the above vector into SH:

                  SHTerms = getSHTerms(...);
                  for (int i = 0; i < SHTerms.size(); i++) {
                      projected.at(i) += RGB * SHTerms.at(i) * solidAngle;
                  }

      Then I normalize each of the projected values by multiplying by 4*pi / sumSolidAngles. With these projected values I go back to the env map using a formula like projectedValue(i) * SHTerm(i) * band(i), where i loops through the number of SH terms (the "Y"s). These values are written back to the various cube faces, and the result is the one posted above.

      What can be wrong? Thank you very much.
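A self-contained sketch of the projection step described above may help narrow down where the difference creeps in. The basis constants are the standard nine real SH coefficients (bands 0-2), and the solid-angle formula is the area-element one used by AMD CubeMapGen; the face orientation convention here is an assumption (any consistent layout covering the sphere once works for the projection itself), and projectSH is a made-up name:

```cpp
#include <cmath>
#include <vector>

// Area-element helper for the per-texel solid angle of a cubemap face.
static double areaElement(double x, double y) {
    return std::atan2(x * y, std::sqrt(x * x + y * y + 1.0));
}

// Solid angle subtended by texel (u, v) of an N x N face.
static double texelSolidAngle(int u, int v, int n) {
    double inv = 1.0 / n;
    double x = 2.0 * (u + 0.5) * inv - 1.0;
    double y = 2.0 * (v + 0.5) * inv - 1.0;
    double x0 = x - inv, x1 = x + inv, y0 = y - inv, y1 = y + inv;
    return areaElement(x0, y0) - areaElement(x0, y1)
         - areaElement(x1, y0) + areaElement(x1, y1);
}

// The nine real SH basis functions ("Y"s) evaluated at direction (x, y, z).
static void shBasis(double x, double y, double z, double Y[9]) {
    Y[0] = 0.282095;
    Y[1] = 0.488603 * y;
    Y[2] = 0.488603 * z;
    Y[3] = 0.488603 * x;
    Y[4] = 1.092548 * x * y;
    Y[5] = 1.092548 * y * z;
    Y[6] = 0.315392 * (3.0 * z * z - 1.0);
    Y[7] = 1.092548 * x * z;
    Y[8] = 0.546274 * (x * x - y * y);
}

// Direction of texel (u, v) on face f (assumed order: +X,-X,+Y,-Y,+Z,-Z).
static void texelDir(int f, int u, int v, int n, double d[3]) {
    double a = 2.0 * (u + 0.5) / n - 1.0;
    double b = 2.0 * (v + 0.5) / n - 1.0;
    double x, y, z;
    switch (f) {
        case 0: x =  1; y = -b; z = -a; break;
        case 1: x = -1; y = -b; z =  a; break;
        case 2: x =  a; y =  1; z =  b; break;
        case 3: x =  a; y = -1; z = -b; break;
        case 4: x =  a; y = -b; z =  1; break;
        default: x = -a; y = -b; z = -1; break;
    }
    double len = std::sqrt(x * x + y * y + z * z);
    d[0] = x / len; d[1] = y / len; d[2] = z / len;
}

// Project radiance(face, u, v) (single channel) into 9 SH coefficients,
// weighted by solid angle and rescaled so the weights sum to 4*pi.
template <class F>
std::vector<double> projectSH(int n, F radiance) {
    const double kPi = 3.141592653589793;
    std::vector<double> coeffs(9, 0.0);
    double sumOmega = 0.0;
    for (int f = 0; f < 6; ++f)
        for (int v = 0; v < n; ++v)
            for (int u = 0; u < n; ++u) {
                double d[3], Y[9];
                texelDir(f, u, v, n, d);
                shBasis(d[0], d[1], d[2], Y);
                double omega = texelSolidAngle(u, v, n);
                sumOmega += omega;
                for (int i = 0; i < 9; ++i)
                    coeffs[i] += radiance(f, u, v) * Y[i] * omega;
            }
    double norm = 4.0 * kPi / sumOmega;
    for (double& c : coeffs) c *= norm;
    return coeffs;
}
```

A useful sanity check: projecting a constant cubemap must put everything into the DC coefficient (0.282095 * 4π, about 3.5449) and leave the other eight near zero; if that fails, the direction, solid-angle, or basis code is the culprit before any real environment map enters the picture.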
  12. Sorry, solved this. Mods, I ask you to delete the topic if possible, as it is useless now; it was indeed a silly mistake.