
Lightness1024

Member
  • Content count

    236
  • Joined

  • Last visited

Community Reputation

933 Good

About Lightness1024

  • Rank
    Member

  1. Opinions on cryptocurrencies

    Nice, I think that's a good perspective, and a bit of fairness is overdue in this debate. Absolutely, that's what I start with in my article (linked in the OP). True. But there is indeed a small bias toward an increase in value. Pure speculation would leave you with no better than a 50/50 chance of loss versus profit. However, if we take BTC: it is a coin with a fixed supply; because people lose their keys (effectively burning coins), and because value keeps being added in the real economy, BTC is deflationary by nature, which means it IS an investment. There is also a bandwagon effect, which is why some people have accused it of being a Ponzi scheme (more people joining is the reason the value rises, which is not sustainable -> pyramid effect), but I refute that accusation in my article. So this comment has been the subject of extensive discussion in the field, and it doesn't have to be a red flag at all. Buffett said he would not invest in Apple either. You take it or leave it depending on whether your personality matches his, that's all; it's not an absolute red flag, and argument from authority is a fallacy anyway. This is very much and sadly true, not so much for BTC because of its very large capitalization, but on smaller coins it's the Wild West and plenty of people have been left holding the bag. I personally think this is not a problem at all, and that's an anarcho-liberal point of view; the thing is simply to be aware of it and hedge according to the risk you can take. If you can't, just don't play, but there is no need to dismiss the game altogether because the players are rough.
  2. Opinions on cryptocurrencies

    All the reasons cited above are valid concerns to me. That's why my preferred cryptocurrency, and the only one I can endorse with intellectual honesty, is Nano (formerly RaiBlocks). It uses multiple blockchains and no mining, so no wasted energy, instant transactions, and no fees. The network nodes are hosted by idealists, much like Tor nodes, and there is less corruption in the Nano space thanks to the absence of a gold rush. It doesn't solve the economics, though (supply/inflation/volatility...). I've heard the opinion that economics could be made part of the system: since a lot of these networks already work by voting, we can imagine a crypto with directed interest rates and all the central-bank prerogatives built into the protocol, with the people in control through votes. So, a replica of the current system, but with full democracy.
  3. Opinions on cryptocurrencies

    OK for the value point. And as for "one unified" currency not working, that is a very sad observation. But I could tolerate the next best thing: the minimum number of different currencies. Let's take the Euro crisis as an example of why a single unified currency can't work, and let's grant that we need some elasticity in the value and inflation rates of currencies based on local specificities; we can still try to minimize how many fragments are needed, no? Your third point was usage in relation to goods. Yes, adoption is the main issue cryptocurrencies are facing. But will buy/sell transactions for material goods really create a "peg" on the value of those objects? Didn't we see the hyperinflation in Germany after WWI send the price of bread skyrocketing to billions of marks? It seems the value can still be volatile even when massively adopted. Then again, inflation is supposed to be controlled by central banks, so in a system where the monetary base is fixed there can only be deflation. And the speed at which it happens cannot spin out of control, since the deflation is bound to the production volume of the whole economy rather than to arbitrary loans, quantitative easing, or interest rates, none of which are controllable parameters in current cryptos. So the reduction in volatility would have to come from an increase in liquidity only, no?
  4. Opinions on cryptocurrencies

    It's not really related to gaming per se, but many games nowadays include an economy à la Second Life, with their own token/coin and downloadable content. There is even a token designed just for games: Enjin Coin. I personally don't believe in this; I think we should have one unified currency with so much volume that its value becomes stable. Fragmenting currencies into 3000 coins like today creates volatility. I thought there were so many problems with cryptocurrencies in general that I had to write a long rant; I made a full-fledged article about the issues here: https://motsd1inge.wordpress.com/2018/02/10/cryptocurrencies-not-there-yet/ So, I'd love to hear your thoughts about its content and whether there are points you disagree with.
  5. Dealing with frustration

    "hackers and painters" by Paul Graham What you talk about is a bit like the white page syndrome isn't it. We all go through that, and yes TODO lists only grow, rarely shrink. Especially when you are alone. To successfully get a personal project to reach a state you can be proud of, you need to keep scale down, leverage libraries, take shortcuts, try to avoid generic & robust "production-like" support of the stuff you do, go straight to your use case only. There will be time way later, to think about "but what about those IGP uses, or what about linux..." in the meantime if you have choices between "generic" and "specific", only consider cost. Sometimes though, you can get both, for example: is it better to use boost filesystem for a neat platform independent code, or Win32 API to go straight to business ? Turns out boost FS is the cheaper option, and it's more generic only as the cherry on top of the cake. But that's not the case of most choices you are going to face. If something bores you, find a library, if some specific problem is core to your passion, do it yourself.
  6. R&D [PBR] Renormalize Lambert

    Well, apparently Disney has no qualms about just adding both. But it still appears to be a subject of pondering: https://computergraphics.stackexchange.com/questions/2285/how-to-properly-combine-the-diffuse-and-specular-terms https://gamedev.stackexchange.com/q/87796/35669 This nice paper speaks of exactly what I'm concerned with, from section 5.1 onward: http://www.cs.utah.edu/~shirley/papers/pg97.pdf And they propose an equation (equation 5) one page later that looks quite different from Disney's naive (as it seems to me) approach; a sketch of that coupled style of term follows below.
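    For reference, the coupled matte term I remember from Ashikhmin and Shirley's model (which I believe is in the same spirit as equation 5 of that paper, though I'm writing it from memory, so double check against the PDF) makes the diffuse lobe fade where the specular layer reflects more, instead of adding a constant Lambert on top. A rough sketch, with rd the diffuse albedo and rs the specular reflectance at normal incidence:

    ```python
    import math

    def coupled_diffuse(rd, rs, n_dot_l, n_dot_v):
        # Coupled matte-specular diffuse lobe (Ashikhmin/Shirley style, from memory):
        # rd = diffuse albedo, rs = specular reflectance at normal incidence.
        # The (1 - rs) factor and the two light/view ramps give back only the energy
        # the specular lobe did not already return.
        return (28.0 * rd / (23.0 * math.pi)) * (1.0 - rs) \
            * (1.0 - (1.0 - n_dot_l / 2.0) ** 5) \
            * (1.0 - (1.0 - n_dot_v / 2.0) ** 5)
    ```

    The point is that the diffuse term ends up depending on the view angle through the specular coupling, which a plain Lambert-plus-Cook-Torrance sum does not capture.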
  7. R&D [PBR] Renormalize Lambert

    @FreneticPonE are you talking about this? I've never seen this trick; it seems interesting, though. This is just confusing me further, unfortunately. Let's say I choose Lambert for the diffuse and Cook-Torrance for the specular: am I supposed to just add the two? Lambert doesn't even depend on roughness, so mirror surfaces are going to look half diffuse, half reflective if I just add both. How would one properly combine a Lambert diffuse with a PBR specular? (One candidate heuristic is sketched below.)
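    For what it's worth, the cheap combination seen in many engines is to weight the diffuse by whatever fraction the Fresnel term does not reflect, and to kill the diffuse entirely for metals. A minimal sketch with made-up names, assuming the Cook-Torrance value (with its own Fresnel inside) is computed elsewhere:

    ```python
    import math

    def fresnel_schlick(cos_theta, f0):
        # Schlick approximation of the Fresnel reflectance.
        return f0 + (1.0 - f0) * (1.0 - cos_theta) ** 5

    def combine(albedo, f0, metalness, l_dot_h, cook_torrance_specular):
        # The diffuse only receives the light the specular layer let through,
        # and metals get no diffuse at all; a heuristic, not exact energy conservation.
        transmitted = 1.0 - fresnel_schlick(l_dot_h, f0)
        diffuse = transmitted * (1.0 - metalness) * albedo / math.pi
        return diffuse + cook_torrance_specular
    ```

    It is not rigorous (a rough specular lobe reflects more than the single-angle Fresnel suggests), but it addresses the "mirror looks half diffuse" worry: a mirror in this model is a metal, and metals get no diffuse term.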
  8. R&D [PBR] Renormalize Lambert

    Hello, I'd like to ask your take on Lagarde's renormalization of the Disney BRDF diffuse term, but applied to Lambert. Let me explain. In this document: https://seblagarde.files.wordpress.com/2015/07/course_notes_moving_frostbite_to_pbr_v32.pdf (page 10, listing 1) we see that he uses 1/1.51 * perceptualRoughness as a factor to renormalize the diffuse part of the lighting function. OK. Now let's take Karis's assertion at the beginning of his famous document: http://blog.selfshadow.com/publications/s2013-shading-course/karis/s2013_pbs_epic_notes_v2.pdf (page 2, diffuse BRDF). I think his premise applies and is reason enough to use Lambert (at least in my case). But from Lagarde's document, page 11, figure 10, we see that Lambert looks frankly equivalent to Disney. From that observation, the question that naturally comes up is: if Disney needs renormalization, doesn't Lambert too? And I'm not talking about the 1/π (that one is obvious), but about that roughness-related factor. A wild guess tells me that because there is no Schlick term in Lambert and no dependence on roughness, and as long as the 1/π is there, the Lambert albedo is always below 1, so it shouldn't need further renormalization. So then, where does that extra energy appear in Disney? According to the graph, it's in the high-view-angle, high-roughness zone, so that would mean here: (cf. image). That is a very small difference (a small numerical check of this is sketched below). It certainly doesn't justify, in my eyes, the huge darkening introduced by the 1/1.51 factor, which takes effect over a much wider range of the function. But this could be perceptual, or just my own confusion. Looking forward to being educated. Best
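    To see where that extra energy actually lives, a quick numerical check: integrate the diffuse lobe times cos(theta_l) over the hemisphere and compare against Lambert, whose directional albedo is exactly 1 by construction. The sketch below uses my own transcription of the Burley/Disney diffuse (not Lagarde's exact Frostbite listing), so treat the numbers as indicative:

    ```python
    import math, random

    def schlick_weight(c):
        return (1.0 - c) ** 5

    def disney_diffuse(n_l, n_v, l_h, roughness):
        # Burley 2012 diffuse lobe with base color 1 (my own transcription).
        fd90 = 0.5 + 2.0 * roughness * l_h * l_h
        light_scatter = 1.0 + (fd90 - 1.0) * schlick_weight(n_l)
        view_scatter = 1.0 + (fd90 - 1.0) * schlick_weight(n_v)
        return light_scatter * view_scatter / math.pi

    def directional_albedo(n_v, roughness, samples=100000):
        # Monte Carlo integral of f(l, v) * cos(theta_l) over the hemisphere,
        # uniform hemisphere sampling (pdf = 1 / (2 pi)). Lambert gives exactly 1 here.
        view = (math.sqrt(max(0.0, 1.0 - n_v * n_v)), 0.0, n_v)
        total = 0.0
        for _ in range(samples):
            z = random.random()                  # cos(theta_l), uniform for hemisphere area
            phi = 2.0 * math.pi * random.random()
            s = math.sqrt(1.0 - z * z)
            light = (s * math.cos(phi), s * math.sin(phi), z)
            half = [a + b for a, b in zip(light, view)]
            inv_len = 1.0 / math.sqrt(sum(c * c for c in half))
            l_h = sum(a * b for a, b in zip(light, half)) * inv_len
            total += disney_diffuse(z, n_v, l_h, roughness) * z * 2.0 * math.pi
        return total / samples

    for roughness in (0.0, 0.5, 1.0):
        for n_v in (1.0, 0.5, 0.1):
            albedo = directional_albedo(n_v, roughness)
            print(f"roughness={roughness:.1f} NdotV={n_v:.1f} "
                  f"Disney diffuse albedo={albedo:.3f} (Lambert would be 1.000)")
    ```

    In my reading, the albedo only creeps above 1 at high roughness and grazing view, which matches the zone in the figure; whether that justifies a 1/1.51-style darkening over the rest of the range is exactly the question.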
  9. I'm building a photonmapping lightmapper

    OK, little report from the front. I figured out that one big problem I have is a precision problem during reconstruction of the 3D position from the lumel position. I have bad spatial quantization; using some bias helped remove some artefacts (the usual origin-offset trick is sketched below), but the biggest bugs aren't gone. Anyway, some results applied to real-world maps show that indirect lighting does show up and makes a visible difference: (imgur album)
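    On the bias, the usual trick (a generic sketch, not tied to this codebase) is to push the reconstructed world position slightly off the surface along the geometric normal, flipped toward the side the ray departs from, so rays don't self-intersect the triangle the lumel sits on:

    ```python
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))

    def offset_ray_origin(position, geometric_normal, ray_dir, eps=1e-3):
        # Push the origin slightly off the surface, on the side the ray leaves toward.
        # eps is in world units; a fixed value is crude but often enough for lightmaps,
        # and scale-aware schemes exist if maps span very different magnitudes.
        side = 1.0 if dot(geometric_normal, ray_dir) >= 0.0 else -1.0
        return [p + side * eps * n for p, n in zip(position, geometric_normal)]
    ```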
  10. I'm building a photonmapping lightmapper

    Alright, there we go, the drawing: OK, so with this it's much clearer, I hope. The problem is: by what figure should I be dividing the energy gathered at one point? The algo works this way: the first pass is direct lighting and sky ambient irradiance; the second pass creates photons out of the lumels from the first pass; the third pass scans the lightmap lumels and does the final gather. The gather works by sending 12 primary rays, finding the k nearest photons from each hit position, tracing back to the origin and summing the Lambertian contributions. Normally one would think we have to divide by the number of samples, but the number of samples varies with photon density, and the density is not meaningful here because I store a color in the photons (i.e. their distribution is not the means of storing flux, as in some implementations). Also, the gather radius depends on the primary ray length, which means more or fewer photons will be intercepted in the neighborhood depending on whether it hits close or far. And finally, the secondary rays can encounter occluders, so it's not as if it will always gather N and we can divide by N. If we divide by the number of rays that actually arrive at the origin, we get an unfair energy boost. I tend to think I should divide by the number of intercepted photons in the radius? (A sketch of the classic normalization is below.) Edit: that's the photon map visualizer. I made it so that colors = clusters.
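    For comparison, in a classic Jensen-style photon map the lookup is not divided by the photon count at all: photons carry flux, the gathered flux is weighted by the BRDF and summed, and the sum is divided by the area of the disc that contained the k nearest photons. A minimal sketch under that assumption (photons carrying flux, not colors):

    ```python
    import math

    def radiance_estimate(nearest_photons, lambert_albedo):
        # nearest_photons: list of (flux_rgb, squared_distance) for the k nearest photons.
        # Photon density enters through the disc area pi * r^2, never through the count.
        if not nearest_photons:
            return [0.0, 0.0, 0.0]
        r2 = max(d2 for _, d2 in nearest_photons)
        area = math.pi * r2
        brdf = [a / math.pi for a in lambert_albedo]   # Lambertian BRDF per channel
        total = [0.0, 0.0, 0.0]
        for flux, _ in nearest_photons:
            for c in range(3):
                total[c] += brdf[c] * flux[c]
        return [t / area for t in total]
    ```

    Storing a color instead of a flux changes the estimator: averaging colors over the photon count gives a quantity that no longer responds to photon density, so the two normalizations answer different questions.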
  11. I'm building a photonmapping lightmapper

    Hey guys, I wanted to report on a piece of tech I'm trying to build here. I took my old (2002) engine off its dusty shelf and tried to incorporate raytracing to render light on maps; I wrote a little article about it here: https://motsd1inge.wordpress.com/2015/07/13/light-rendering-on-maps/ First I built an ad hoc parameterizer, a packer (with triangle pairing and re-orientation using hull detection), a conservative rasterizer, and a second-set UV generator; finally, it uses Embree to cast rays as fast as possible. I have a nice ambient evaluation by now, like this: but the second step is to get final gathering working (in the Henrik Wann Jensen style). I am writing a report about the intermediate results here: https://motsd1inge.wordpress.com/2016/03/29/light-rendering-2/ As you can see, it's not in working order yet. I'd like to ask: if someone has already implemented this kind of thing, did you use a kd-tree to store the photons? I'm using spatial hashing for the moment. Also, I have a bunch of issues I'd like to discuss. One is about light attenuation with distance: in the classic 1/r² formula, r depends on the (world) units you chose, which I find very strange. The second is about the factor by which you divide to normalize your energy, knowing some number of gather rays. My tech fixes the number of gather rays on the command line, but each gather ray is in fact only a starting point that spawns a whole stream of actual rays from the photons found in the vicinity of the primary hit. The result is that I get "cloud arity * samples" rays, but the cloud arity is very difficult to predict because it depends on the radius of the primary ray. I should draw a picture for this, but I must sleep now; I'll do it tomorrow for sure. For now, the question is that it's fuzzy how many rays will end up being gathered, so I can't simply divide by sampleCount, nor can I divide by "cloud arity * sampleCount", because the arity depends on occlusion trimming. (Always dividing exactly by the number of rays that made it would produce a degenerate evaluator that just says everything is grey.) ... I promise, a drawing, lol ;) In the meantime, any thoughts are welcome (a normalization sketch is attached below).
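    On the normalization question, the way the standard final gather is usually written (a sketch of that convention, not of this particular tech): the Monte Carlo estimate divides only by the fixed number of primary gather rays; each primary ray contributes whatever radiance its photon-map lookup returned, and an occluded or missed ray contributes zero while still counting in the denominator, so the photon-cloud arity never enters the division:

    ```python
    import math

    def final_gather_irradiance(gather_dirs, trace, photon_map_radiance):
        # gather_dirs: N cosine-weighted directions around the lumel normal
        #   (pdf = cos(theta) / pi, so the cosine in the integrand cancels out).
        # trace(direction) -> hit info or None.
        # photon_map_radiance(hit) -> RGB radiance from the density estimate at the hit.
        n = len(gather_dirs)
        total = [0.0, 0.0, 0.0]
        for direction in gather_dirs:
            hit = trace(direction)
            if hit is None:
                continue                        # zero contribution, but still divided by n
            radiance = photon_map_radiance(hit)
            for c in range(3):
                total[c] += radiance[c] * math.pi   # (L * cos) / (cos / pi) = L * pi
        return [t / n for t in total]
    ```

    Under that split, the variable photon count only matters inside the per-hit density estimate, so the outer division can stay at sampleCount. As for 1/r², the unit dependence is real but benign: irradiance is power per area, so its numeric value necessarily changes with the length unit, and the light's intensity constant has to be authored in the same unit system (rescale the world, rescale the intensities).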
  12. for each in: Pythonic?

    If your list contains multiple, non-polymorphic datatypes, then you are doing something evil (or spent too much time in LISP). Edit: just to clarify, a list containing multiple, non-polymorphic datatypes is almost impossible to operate on idiomatically. You are forced to either know the exact type and order of each element ahead of time (in which case a tuple would be more appropriate), or fall back to a chain of `if isinstance(a, Foo): ... else: ...` statements, which is just bad software design. Certainly not. Python can work on different types much like a type-erasure system: the types are not polymorphic, yet some operations apply to a whole variety of them. This is the same as working against the highest interface in OOP, except that in Python you can do it without any contracts. Take a look at this: http://lucumr.pocoo.org/2011/7/9/python-and-pola/ There is not even a mention of classes and inheritance, and yet Python can accept code that is even more flexible than multiple inheritance, because you can work against "operators" (cf. the `len` example in the link). If you keep that in mind as a discipline when writing functions, you open your accepted types up to a much larger variety (a toy example follows below).
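    To make that concrete, a toy example (my own, not from the linked article) of writing against the operations a value supports rather than its concrete type:

    ```python
    def describe(container):
        # Works for anything with len() and iteration; no isinstance chain needed.
        return f"{type(container).__name__} with {len(container)} items: {list(container)!r}"

    class RingBuffer:
        def __init__(self, items):
            self._items = list(items)
        def __len__(self):
            return len(self._items)
        def __iter__(self):
            return iter(self._items)

    print(describe([1, 2, 3]))
    print(describe("abc"))
    print(describe({"a": 1, "b": 2}))        # iterates over the keys
    print(describe(RingBuffer(range(4))))    # user type, no inheritance required
    ```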
  13. You don't want to write half a page because... you are lazy? It will be tough implementing a nice ocean system if you are that lazy. Choose: either you don't care about sharing what you do, or you care and you write your half page. Half a page is nothing. At least read the papers you linked: how many pages do they have? How many months do you think it took those researchers to do their work AND, as the cherry on top, write about it and share it publicly? Meanwhile, what are you doing?