Lightness1024

Members

  • Content count: 232
  • Community Reputation: 933 Good
  • Rank: Member

Recent Posts
  1. Dealing with frustration

    "hackers and painters" by Paul Graham What you talk about is a bit like the white page syndrome isn't it. We all go through that, and yes TODO lists only grow, rarely shrink. Especially when you are alone. To successfully get a personal project to reach a state you can be proud of, you need to keep scale down, leverage libraries, take shortcuts, try to avoid generic & robust "production-like" support of the stuff you do, go straight to your use case only. There will be time way later, to think about "but what about those IGP uses, or what about linux..." in the meantime if you have choices between "generic" and "specific", only consider cost. Sometimes though, you can get both, for example: is it better to use boost filesystem for a neat platform independent code, or Win32 API to go straight to business ? Turns out boost FS is the cheaper option, and it's more generic only as the cherry on top of the cake. But that's not the case of most choices you are going to face. If something bores you, find a library, if some specific problem is core to your passion, do it yourself.
  2. R&D [PBR] Renormalize Lambert

    Well, apparently Disney is not embarrassed by just adding both. But it still appears to be a subject of pondering: https://computergraphics.stackexchange.com/questions/2285/how-to-properly-combine-the-diffuse-and-specular-terms https://gamedev.stackexchange.com/q/87796/35669 This nice paper discusses, from paragraph 5.1 onward, exactly what I'm concerned with: http://www.cs.utah.edu/~shirley/papers/pg97.pdf And they propose an equation (equation 5, one page later) that looks quite different from Disney's naive (as it seems to me) approach.
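    For reference, the coupled matte term proposed in that paper (the same one later reused in the Ashikhmin–Shirley model) looks roughly like this in C++. Unlike a constant Lambert lobe, it is scaled by (1 - Rs) and rolls off at grazing angles, so matte plus Fresnel specular stays below the incoming energy:

        #include <cmath>

        // Rd: diffuse albedo, Rs: specular reflectance at normal incidence,
        // NdotL / NdotV: cosines of the light and view angles.
        float coupledDiffuse(float Rd, float Rs, float NdotL, float NdotV)
        {
            const float k = 28.0f / (23.0f * 3.14159265f);
            float a = 1.0f - std::pow(1.0f - NdotL * 0.5f, 5.0f);
            float b = 1.0f - std::pow(1.0f - NdotV * 0.5f, 5.0f);
            return k * Rd * (1.0f - Rs) * a * b;
        }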
  3. R&D [PBR] Renormalize Lambert

    @FreneticPonE are you talking about this: I've never seen this magic; it seems interesting, though. This is just confusing me further, unfortunately. Let's say I chose Lambert for the diffuse and Cook-Torrance for the speculars: am I supposed to just add the two? Lambert doesn't even depend on roughness, so mirror surfaces are going to look half diffuse, half reflective if I just add both. How would one properly combine a Lambert diffuse and a PBR specular?
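    One common answer, sketched below (an approximation, not the only correct approach): weight the diffuse lobe by whatever the Fresnel term did NOT send to the specular lobe, so the two can never sum past one. Here cookTorranceSpec stands in for an existing specular term with Fresnel already inside; the names are mine:

        #include <cmath>

        float fresnelSchlick(float f0, float VdotH)
        {
            return f0 + (1.0f - f0) * std::pow(1.0f - VdotH, 5.0f);
        }

        // Single-channel sketch: diffuse gets the energy the specular rejected.
        float shade(float albedo, float f0, float VdotH, float NdotL,
                    float cookTorranceSpec /* your specular term, F included */)
        {
            const float invPi = 1.0f / 3.14159265f;
            float F = fresnelSchlick(f0, VdotH);
            float diffuse = (1.0f - F) * albedo * invPi;
            return (diffuse + cookTorranceSpec) * NdotL;
        }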
  4. Hello, I'd like to ask your take on Lagarde's renormalization of the Disney BRDF's diffuse term, but applied to Lambert. Let me explain. In this document: https://seblagarde.files.wordpress.com/2015/07/course_notes_moving_frostbite_to_pbr_v32.pdf (page 10, listing 1) we see that he uses a 1/1.51 factor, scaled in by perceptual roughness, to renormalize the diffuse part of the lighting function. OK. Now let's take Karis's assertion at the beginning of his famous document: http://blog.selfshadow.com/publications/s2013-shading-course/karis/s2013_pbs_epic_notes_v2.pdf (page 2, diffuse BRDF). I think his premise applies and is reason enough to use Lambert (at least in my case). But from Lagarde's document, page 11, figure 10, we see that Lambert looks frankly equivalent to Disney.

    From that observation, the question that naturally comes up is: if Disney needs renormalization, doesn't Lambert too? And I'm not talking about 1/π (this one is obvious), but about that roughness-related factor. A wild guess tells me that because there is no Schlick in Lambert, and no dependence on roughness, and as long as 1/π is there, the Lambert albedo stays below 1 in all cases, so it shouldn't need further renormalization. So then, where does that extra energy appear in Disney? According to the graph, it's in the high-view-angle, high-roughness zone, so that would mean here: (cf image)

    This is a very small difference. In my eyes it certainly doesn't justify the heavy darkening introduced by the 1/1.51 factor, which enters into effect over a much wider range of the function. But this could be perceptual, or just my stupidity. Looking forward to being educated. Best
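    For reference, here is the renormalized Disney diffuse from the Frostbite notes (listing 1, page 10), paraphrased from the HLSL into scalar C++ (the float3 f0 collapsed to a scalar). The 1/1.51 energy factor fades in with roughness, and the fd90 retro-reflection boost is exactly the Schlick-based piece that Lambert lacks; the usual albedo/π is applied outside this function:

        #include <cmath>

        static float lerp(float a, float b, float t) { return a + (b - a) * t; }

        static float fresnelSchlick(float f0, float f90, float u)
        {
            return f0 + (f90 - f0) * std::pow(1.0f - u, 5.0f);
        }

        float disneyDiffuse(float NdotV, float NdotL, float LdotH, float linearRoughness)
        {
            float energyBias   = lerp(0.0f, 0.5f, linearRoughness);
            float energyFactor = lerp(1.0f, 1.0f / 1.51f, linearRoughness); // the factor in question
            float fd90 = energyBias + 2.0f * LdotH * LdotH * linearRoughness;
            float lightScatter = fresnelSchlick(1.0f, fd90, NdotL);
            float viewScatter  = fresnelSchlick(1.0f, fd90, NdotV);
            return lightScatter * viewScatter * energyFactor;
        }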
  5. I'm building a photonmapping lightmapper

    OK, a little report from the front. I figured out that one big problem I have is a precision problem during reconstruction of the 3D position from the lumel position. I have bad spatial quantization; using some bias helped remove some artifacts, but the biggest bugs aren't gone. Anyway, some results applied to real-world maps show that indirect lighting shows up and makes some amount of difference: (imgur album)
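    For illustration only, here is the kind of reconstruction-plus-bias this refers to, assuming each lumel maps back to a triangle with barycentric coordinates (the names and the lumel-to-triangle mapping are hypothetical): the quantized lumel center can land slightly inside geometry, and nudging it along the normal before casting rays hides most self-intersection artifacts.

        struct Vec3 { float x, y, z; };

        static Vec3 add(Vec3 a, Vec3 b)    { return { a.x + b.x, a.y + b.y, a.z + b.z }; }
        static Vec3 scale(Vec3 v, float s) { return { v.x * s, v.y * s, v.z * s }; }

        // Barycentric reconstruction of the lumel's world position, then a
        // small push off the surface before any ray is cast from it.
        Vec3 reconstructLumelOrigin(const Vec3 tri[3], const Vec3& normal,
                                    float u, float v,
                                    float bias /* e.g. ~1e-3 * scene scale */)
        {
            float w = 1.0f - u - v;
            Vec3 p = add(add(scale(tri[0], w), scale(tri[1], u)), scale(tri[2], v));
            return add(p, scale(normal, bias));
        }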
  6. I'm building a photonmapping lightmapper

    Alright, there we go, the drawing. OK, so with this it's much clearer, I hope. The problem is: by what figure should I be dividing the energy gathered at one point? The algorithm works this way: the first pass is direct lighting and sky ambient irradiance; the second pass creates photons out of the lumels from the first pass; the third pass scans the lightmap lumels and does the final gather. The gather works by sending 12 primary rays, finding the k nearest photons around each hit position, tracing back to the origin, and summing the Lambertian contributions. Normally one would think we have to divide by the number of samples, but the number of samples varies with photon density, and the density is not significant because I store a color in the photons (i.e. their distribution is not a means of storing the flux, like in some implementations). Also, the gather radius depends on the primary ray length, which means more or fewer photons will be intercepted in the neighborhood depending on whether the ray hits close or far. And finally, the secondary rays can encounter occluders, so it's not like it will gather N samples and we can divide by N. If we divide by the number of rays that arrive at the origin, we get an unfair energy boost. I tend to think I should divide by the number of intercepted photons in the radius?

    Edit: that's the photon map visualizer; I made it so that colors = clusters.
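    A sketch of that gather loop with the normalization the post leans toward: each primary ray's cloud is divided by the number of photons found in the radius, not by how many survived the occlusion test. The photon lookup and visibility test are passed in as callables since their real implementations (spatial hash, occlusion rays) live elsewhere; the Lambertian cosine weighting is omitted for brevity.

        #include <functional>
        #include <vector>

        struct Vec3 { float x = 0, y = 0, z = 0; };
        static Vec3 operator+(Vec3 a, Vec3 b) { return { a.x + b.x, a.y + b.y, a.z + b.z }; }
        static Vec3 operator*(Vec3 a, float s) { return { a.x * s, a.y * s, a.z * s }; }

        struct Photon { Vec3 pos; Vec3 color; };

        Vec3 finalGather(
            const Vec3& origin,
            int primaryRayCount, // 12 in the setup described above
            const std::function<Vec3(int)>& traceHemisphereRay,                 // returns hit point
            const std::function<std::vector<Photon>(const Vec3&)>& photonsNear, // k-NN lookup
            const std::function<bool(const Vec3&, const Vec3&)>& visible)       // occlusion test
        {
            Vec3 total;
            for (int i = 0; i < primaryRayCount; ++i)
            {
                std::vector<Photon> cloud = photonsNear(traceHemisphereRay(i));
                if (cloud.empty())
                    continue;
                Vec3 sum;
                for (const Photon& p : cloud)
                    if (visible(p.pos, origin)) // occluded secondary rays contribute nothing
                        sum = sum + p.color;
                // Divide by the photons found in the radius, not by how many
                // made it back, to avoid the "unfair energy boost" above.
                total = total + sum * (1.0f / float(cloud.size()));
            }
            return total * (1.0f / float(primaryRayCount));
        }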
  7. Hey guys, I wanted to do some reporting on a tech I'm trying to achieve here. I took my old (2002) engine off its dusty shelf and tried to incorporate raytracing to render light on maps. I wrote a little article about it here: https://motsd1inge.wordpress.com/2015/07/13/light-rendering-on-maps/ First I built an ad hoc parameterizer, a packer (with triangle pairing and re-orientation using hull detection), a conservative rasterizer, and a second-UV-set generator; finally, it uses embree to cast rays as fast as possible. I have a nice ambient evaluation by now, like this: (image)

    The second step is to get final gathering working (as in Henrik Wann Jensen style). I am writing a report about the temporary results here: https://motsd1inge.wordpress.com/2016/03/29/light-rendering-2/ As you can see, it's not in working order yet. I'd like to ask: if someone has already implemented this kind of stuff here, did you use a kd-tree to store the photons? I'm using spatial hashing for the moment.

    Also, I have a bunch of issues I'd like to discuss. One is about light attenuation with distance: in the classic 1/r² formula, r depends on the (world) units that you chose. I find that very strange. The second is about the factor by which you divide to normalize your energy, knowing some number of gather rays.

    My tech fixes the number of gather rays from the command line, somehow (indirectly), but each gather ray is in fact only a starting point for a whole stream of actual rays that will be spawned from the photons found in the vicinity of the primary hit. The result is that I get "cloud * sample" rays, but the cloud arity is very difficult to predict because it depends on the radius of the primary ray. I should draw a picture for this, but I must sleep now; I'll do it tomorrow for sure. For now, the question is that it's fuzzy how many rays are going to get gathered, so I can't cleanly divide by sampleCount, nor can I divide by "cloud-arity * sampleCount", because the arity depends on occlusion trimming. (Always dividing exactly by the number of rays would make for a stupid evaluator that just says everything is grey.) ... I promise, a drawing, lol ;) In the meantime, any thoughts are welcome.
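    On the 1/r² units point, one way to see it: the inverse-square law itself is unit-free; it's the light's intensity constant that carries the units. A tiny sketch of that claim: rescale the world by s and every r becomes s*r, so multiplying the intensity by s² reproduces exactly the same irradiance at the same physical spot.

        // Inverse-square falloff: the formula is scale-invariant once the
        // intensity constant is re-calibrated along with the world units.
        float irradiance(float intensity, float r) { return intensity / (r * r); }

        int main()
        {
            float I_m  = 10.0f;               // intensity calibrated for meters
            float r_m  = 2.0f;                // 2 meters away
            float s    = 100.0f;              // switch units: meters -> centimeters
            float I_cm = I_m * s * s;         // re-calibrated intensity
            float a = irradiance(I_m,  r_m);      // 2.5
            float b = irradiance(I_cm, r_m * s);  // also 2.5, same physical spot
            return (a == b) ? 0 : 1;
        }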
  8. for each in: Pythonic?

    If your list contains multiple, non-polymorphic datatypes, then you are doing something evil (or spent too much time in LISP). Edit: just to clarify, a list containing multiple, non-polymorphic datatypes is almost impossible to operate on idiomatically. You are forced to either know the exact type and order of each element ahead of time (in which case a tuple would be more appropriate), or fall back to a chain of "if isinstance(a, Foo): ... else: ..." statements, which is just bad software design.

    Certainly not. Python can work on different types, much like a kind of type-erasure system: the types are not polymorphic, but some operations apply to a whole variety of them. This is the same as working against the highest interface in OOP, except that in Python you can do it without any contracts. Take a look at this: http://lucumr.pocoo.org/2011/7/9/python-and-pola/ There is not even a mention of classes and inheritance, and yet Python can accept code that is even more powerful than multiple inheritance, because you can work against "operators" (cf. the len example in the link). If you keep that in mind as a discipline when writing functions, you open your accepted types to a larger variety.
  9. You don't want to write half a page because... you are lazy? It will be tough implementing a nice ocean system if you are that lazy. Choose: either you don't care about sharing what you do, or you care and you write your half page. Half a page is nothing. Look at the papers you linked: how many pages do they have? How many months do you think it took those researchers to do their work AND, cherry on top, write about it and share it publicly? Meanwhile, what are you doing?
  10. Yes, report it quickly, because it's going to get out soon, and MS's implementation of C++11 already seems to have enough basic bugs like that on its record. I reported one myself. It seems like they have a poor test suite, and/or just few users really exercising their software.
  11. You're all wrong, you should use Morton codes (Z-order) for better cache coherency on adjacent cells :p
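    For the curious, a minimal sketch of 2D Morton (Z-order) encoding: interleaving the bits of x and y maps 2D-adjacent cells to mostly adjacent memory locations, which is the cache-coherency argument above.

        #include <cstdint>

        // Spread the low 16 bits of v apart: abcd -> 0a0b0c0d.
        static uint32_t part1by1(uint32_t v)
        {
            v &= 0x0000ffff;
            v = (v | (v << 8)) & 0x00ff00ff;
            v = (v | (v << 4)) & 0x0f0f0f0f;
            v = (v | (v << 2)) & 0x33333333;
            v = (v | (v << 1)) & 0x55555555;
            return v;
        }

        // x occupies the even bits, y the odd bits.
        uint32_t morton2d(uint32_t x, uint32_t y)
        {
            return part1by1(x) | (part1by1(y) << 1);
        }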
  12. VS 2012 express any good?

    There is a major limitation in the Express editions compared to Pro/Premium/Ultimate: you cannot install plugins. So no Visual Assist, no IncrediBuild, no PhatStudio, no global zoom, no ReSharper, or whatnot...
  13. Are you sure it is even this extract of code that takes an hour? When you pause the debugger, does it always stop in this function?