jerrycao_1985

Members · Content count: 25 · Community Reputation: 145 Neutral · Rank: Member

  1. Where is the cosine factor in the extended LTE?

    Same for me, but I think a better example for the question is: taking a picture of a white wall, equally lit over its entire area, why are the corners of the picture darker than the center? I've found this Wikipedia page about it: https://de.wikipedia.org/wiki/Cos4-Gesetz (it's in German and I don't know how to get the English version; there is one about vignetting, but vignetting is the wrong term and has different causes). I think this cos^4 law is the key to my questions.

    My guess is that, to avoid the vignetting effect, the We factor (importance function) is proportional to the inverse of cos^4. Since the pdf of the primary ray contains cos^3, three of the four cosines cancel out, and the remaining one in the denominator cancels with the cosine factor in the LTE, hidden in the G(v_0 - v_1) term.

    I've checked the pbrt-v3 implementation, and it works this way. https://github.com/JerryCao1985/pbrt-v3/blob/master/src/cameras/perspective.cpp
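    To spell out that cancellation in symbols (a hedged sketch with constant factors omitted; A is the film area and theta the angle between the primary ray and the camera's forward axis):

        $$ W_e(\omega) \;\propto\; \frac{1}{A\,\cos^4\theta}, \qquad
           p_\omega(\omega) \;\propto\; \frac{1}{A\,\cos^3\theta}, \qquad
           \frac{W_e(\omega)}{p_\omega(\omega)} \;\propto\; \frac{1}{\cos\theta}, $$

    and the leftover 1/cos(theta) is cancelled by the cosine at the camera vertex hidden inside G(v_0 <-> v_1).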
  2. Where is the cosine factor in the extended LTE?

    The edge receives less light because both the distance and the angle to the projector are larger than at the center. The classic rendering equation describes that correctly, but I doubt it's comparable to the way a camera captures the image on film; it might have to do with the optics of the lens. Game devs typically use a vignette effect if they care at all, but most probably they don't care about physical correctness here. What book? What the hell is LTE? And what is ray pdf? (Belongs to the other thread you started, but I could not resist.) What I mean is, you need to provide more information to get some answers ;)

    Physically Based Rendering. LTE stands for light transport equation, or rendering equation. By ray PDF, I mean the probability density function value of a specific ray. That's more of an offline rendering question than a game development one :)
  3. By expanding the rendering equation, you get a nice symmetric equation describing the light transport; please refer to the pbrt book at page 760. (Sorry about the large size of this equation, I've no idea how to scale it....)

    What I'm wondering is: where is the cosine factor at the camera side of this equation? I don't recall any renderer taking it into consideration, at least not a real-time rendering engine.

    Take a real-world example: you are watching a movie that displays a uniform white image, and let's say the radiance of each ray is exactly 1. Obviously the rays that hit the center of the screen will reflect more light toward the viewer, while the ones that hit the edge of the screen will be a little bit darker, depending on the FOV of the projector; no matter how small the effect is, it should be there.

    So what is the real-world solution for this issue? Scale the radiance by a factor of 1/cos(theta)? Or ignore it entirely, since theta should be very small?
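    For readers without the book at hand, the equation being referred to is, roughly, the path-space form of the measurement equation (the exact notation in pbrt differs slightly):

        $$ I_j \;=\; \sum_{n \ge 1} \int W_e^{(j)}(p_1 \to p_0)\, G(p_0 \leftrightarrow p_1)
           \Bigl[ \prod_{i=1}^{n-1} f(p_{i+1} \to p_i \to p_{i-1})\, G(p_i \leftrightarrow p_{i+1}) \Bigr]
           L_e(p_n \to p_{n-1}) \; dA(p_0) \cdots dA(p_n), $$

        $$ G(p \leftrightarrow p') \;=\; V(p \leftrightarrow p')\, \frac{|\cos\theta|\,|\cos\theta'|}{\|p - p'\|^2}. $$

    The cosine on the camera side is the |cos theta| evaluated at the camera vertex p_0 inside G(p_0 <-> p_1), which is exactly the factor this thread is asking about.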
  4. Hi,

    I'm wondering why we need to take the pdf of the primary ray into account in light tracing but ignore the primary-ray pdf in traditional path tracing.

    In a bidirectional path tracing algorithm, the connecting segment doesn't take the pdf of sampling one endpoint from the other into account, because those two points are generated from earlier points. My take on it is that this is pretty similar to light tracing, where a light sample is connected to the camera eye point. (I didn't consider anything about a finite-size aperture with a DOF effect; maybe that is relevant.)

    I'd appreciate any tips.

    Thanks
    Jerry
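    For reference, the primary-ray pdf itself comes from converting a uniform area pdf on the film plane (area A, at distance d from the eye) into a solid-angle pdf; a hedged sketch of that conversion, which is where the cos^3 mentioned in the other thread comes from:

        $$ p_\omega(\omega) \;=\; p_A \cdot \frac{r^2}{\cos\theta}
           \;=\; \frac{1}{A} \cdot \frac{(d/\cos\theta)^2}{\cos\theta}
           \;=\; \frac{d^2}{A\,\cos^3\theta}, $$

    where r = d/cos(theta) is the distance from the eye to the sampled point on the film plane and theta is the angle to the forward axis.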
  5. What is an uber shader?

    I'm wondering what an uber shader is. My first impression of an uber shader is putting everything into one shader and choosing the path dynamically. Do the shaders in Unreal Engine count as uber shaders? They have plenty of branches in their shader code, but most of them are based on static conditions defined by macros at shader compile time, not at run time. Do these kinds of shaders fall into the category of uber shader? Thanks
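    For illustration, a minimal HLSL sketch of the two flavours people usually mean; all names here (gAlbedo, gSampler, USE_LAMBERT, MATERIAL_LAMBERT) are made up for the example:

        Texture2D    gAlbedo  : register(t0);
        SamplerState gSampler : register(s0);

        // (a) Compile-time permutations: one source file, compiled once per
        //     combination of #defines, so each permutation has no real branch.
        float3 ShadePermutation(float3 n, float3 l, float2 uv)
        {
            float3 albedo = gAlbedo.Sample(gSampler, uv).rgb;
        #if defined(USE_LAMBERT)
            return albedo * saturate(dot(n, l));
        #else
            return albedo;
        #endif
        }

        // (b) Run-time uber shader: one compiled shader, the path is chosen per
        //     draw from a constant-buffer value via a dynamic branch.
        static const uint MATERIAL_LAMBERT = 1u;

        cbuffer MaterialCB : register(b0)
        {
            uint gMaterialFlags;
        };

        float3 ShadeDynamic(float3 n, float3 l, float2 uv)
        {
            float3 albedo = gAlbedo.Sample(gSampler, uv).rgb;
            if (gMaterialFlags & MATERIAL_LAMBERT)
                return albedo * saturate(dot(n, l));
            return albedo;
        }

    Both flavours get called "uber shader" in practice; the compile-time variant (which is what the macro-based shaders described above look like) trades a large permutation count for branch-free code.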
  6. Thanks for all of the answers, they are very helpful.
  7. Hi all,

    What does the flag D3DCREATE_SOFTWARE_VERTEXPROCESSING mean? What the official documentation says is quite limited, just the following line:

        Specifies software vertex processing.

    So what does "software vertex processing" mean? Does it mean that all vertex shaders will be processed by the CPU? I have totally no idea, but I guess that shouldn't be the case, or it would be very slow.

    Please give me some pointers on it.

    Thanks
    Jerry
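    For context: this is a D3D9 device-creation flag, and it does mean that the runtime performs all vertex processing (fixed-function T&L as well as vertex shaders) on the CPU instead of the GPU, which is why it is usually only a fallback. A hedged C++ sketch, assuming d3d9 (from Direct3DCreate9) and hwnd already exist:

        #include <d3d9.h>

        D3DPRESENT_PARAMETERS pp = {};
        pp.Windowed         = TRUE;
        pp.SwapEffect       = D3DSWAPEFFECT_DISCARD;
        pp.BackBufferFormat = D3DFMT_UNKNOWN;

        IDirect3DDevice9* device = nullptr;
        HRESULT hr = d3d9->CreateDevice(
            D3DADAPTER_DEFAULT,
            D3DDEVTYPE_HAL,
            hwnd,
            D3DCREATE_HARDWARE_VERTEXPROCESSING,   // vertex processing on the GPU
            &pp,
            &device);

        if (FAILED(hr))
        {
            // Fallback for adapters without hardware T&L: all vertex work runs on
            // the CPU, which is indeed much slower.
            hr = d3d9->CreateDevice(D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, hwnd,
                                    D3DCREATE_SOFTWARE_VERTEXPROCESSING, &pp, &device);
        }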
  8. Thanks guys.

    I think there are two issues if the perspective division is done in the VS (see the sketch below):

    1. Some points can be behind the eye, which means the w component in clip space is negative. The output of the vertex shader is supposed to be in clip space. Of course you can do the perspective division in the VS, and mathematically doing the division twice changes nothing, but w will be 1 afterwards, and with w equal to 1 the hardware is unable to reject points behind the eye, which leads to incorrect rasterization.

    2. Since the w component of the vertex output is used to interpolate attributes, you can't change it arbitrarily, or the attribute interpolation will be wrong; in other words, perspective correction won't work for the attributes.
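    A minimal HLSL sketch of the point (struct and constant-buffer names are made up for the example):

        cbuffer PerObject : register(b0)
        {
            float4x4 gWorldViewProj;
        };

        struct VSInput  { float3 pos : POSITION;    float2 uv : TEXCOORD0; };
        struct VSOutput { float4 pos : SV_Position; float2 uv : TEXCOORD0; };

        VSOutput main(VSInput input)
        {
            VSOutput output;
            // Clip-space position; w is preserved so the hardware can clip,
            // divide by w, and interpolate attributes perspective-correctly.
            output.pos = mul(float4(input.pos, 1.0f), gWorldViewProj);
            output.uv  = input.uv;
            // NOT: output.pos /= output.pos.w;  -- w becomes 1, so clipping of points
            // behind the eye and perspective-correct interpolation both break.
            return output;
        }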
  9. Hi:

    I've tried to append a very simple line of code to my vertex shader, which otherwise works well for everything. The code is something like this:

        output.pos /= output.pos.w;

    Yes, I'm doing the perspective division myself. I know the hardware will do it for me right after the vertex shader; however, I wanted to see whether anything goes wrong if it's done in the vertex shader.

    It turns out that everything related to this shader is wrong: not just the vertex attributes, but also the positions.

    So, what's wrong with that code?

    Thanks in advance for your attention.
  10. How does depth bias work in DX10?

    It's weird.

    1. The vertex with the maximum depth value in a primitive may be far away from the pixel being shaded; it may even be outside the render target.
    2. r is an integer: 23, 10 or 52, whatever. How does it relate to the maximum depth value in a primitive?
  11. Hi all,

    I'm kind of confused by this article: http://msdn.microsoft.com/en-us/library/windows/desktop/cc308048(v=vs.85).aspx

    There is a formula in it:

        Bias = (float)DepthBias * 2**(exponent(max z in primitive) - r) + SlopeScaledDepthBias * MaxDepthSlope;

    First, what is the ** after the number 2? Is it a typo? Second, about the part exponent(max z in primitive) - r, could someone give me a clearer explanation? It doesn't make any sense to me.

    Thanks in advance.
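    A hedged C++ reading of the floating-point depth-buffer case of that formula, taking ** as exponentiation, exponent() as the unbiased IEEE exponent of the largest z in the primitive, and r as the number of mantissa bits of the depth format (23 for a 32-bit float depth buffer):

        #include <cmath>

        float ComputeDepthBias(int   depthBias,
                               float slopeScaledDepthBias,
                               float maxZInPrimitive,
                               float maxDepthSlope)
        {
            const int r = 23;                            // mantissa bits of D32_FLOAT
            const int e = std::ilogb(maxZInPrimitive);   // exponent(max z in primitive)
            return static_cast<float>(depthBias) * std::exp2(static_cast<float>(e - r))
                 + slopeScaledDepthBias * maxDepthSlope;
        }

    In other words, the constant bias is scaled so that it corresponds to a fixed number of representable depth steps near the largest z of the primitive.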
  12. All,

    I've been wondering how exactly magfilter and minfilter are defined in DX. I didn't find anything about it in the official docs; did I miss something?

    Here is my understanding of these two terms. Pixels are not dots, they cover an area. When sampling from a texture, we are sampling an area of the texture, not a single point, so some kind of filter is used to alleviate the artifacts. If the area is much larger than a texel, the minfilter is used; if the area is much smaller than a texel, the magfilter is used. But what if the area is smaller than a texel along one axis but much larger than a texel along the other axis: which filter is used then? (I know that's a perfect situation for an anisotropic filter, but let's forget about that for now. Assume we set a point filter for magfilter and linear for minfilter; which filter will be used in the above situation?)

    Thanks for your attention.
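    For what it's worth, a hedged C++ sketch of the rule hardware commonly applies without anisotropic filtering: the LOD is driven by the larger of the two footprint axes, so a footprint that is wider than a texel in either direction already selects the minification filter.

        #include <algorithm>
        #include <cmath>

        // duDx/dvDx/duDy/dvDy: texel-space derivatives of the texture coordinates
        // across one pixel in screen x and y (the pixel's footprint in the texture).
        bool UsesMinFilter(float duDx, float dvDx, float duDy, float dvDy)
        {
            const float lenX = std::sqrt(duDx * duDx + dvDx * dvDx); // footprint along screen x
            const float lenY = std::sqrt(duDy * duDy + dvDy * dvDy); // footprint along screen y
            const float lod  = std::log2(std::max(lenX, lenY));      // isotropic LOD estimate
            return lod > 0.0f;  // > 0 -> minfilter, <= 0 -> magfilter
        }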
  13. According to the documentation of glDepthRange, it happens during the transformation from NDC to window coordinates, much earlier than fragment processing.
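    Concretely, the viewport/depth-range transform maps NDC depth to window depth as follows (n and f are the glDepthRange values, 0 and 1 by default):

        $$ z_w \;=\; \frac{f - n}{2}\, z_{ndc} \;+\; \frac{f + n}{2}
           \;\;\overset{n=0,\; f=1}{=}\;\; \tfrac{1}{2}\, z_{ndc} + \tfrac{1}{2}. $$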
  14. Hi everyone,

    It occurred to me that the depth value in NDC is between -1 and 1, while the depth value in the depth buffer is between 0 and 1. So where does the change happen?

    And most importantly, why does OpenGL bother to make the range -1 to 1 and then change it to 0 to 1? DX does it in a simpler way: the range of depth is always between 0 and 1 after the perspective divide by default.
  15. Hi all,

    Here is my case: an empty 2D texture is created and then filled from some memory, without mipmap information.

    I want to generate mipmaps for the texture. It's DX11. I see there is an interface called GenerateMips that could do this sort of job, but GenerateMips requires the D3D11_RESOURCE_MISC_GENERATE_MIPS flag, which is only available if the texture is both a render target and a shader resource. My texture is just used for texture mapping; it's not a render target, of course.

    So how can I generate the mipmaps using the DX11 API? (I know I can generate mipmaps in CPU code, but I just want to know the professional way of doing it.)

    Thanks
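    Since the bind and misc flags are fixed at creation time, the usual approach is to recreate the texture with both bind flags plus the GENERATE_MIPS misc flag, upload the top level, and let the GPU fill the rest. A hedged C++ sketch, assuming device, context, width, height, pixels and rowPitchInBytes already exist:

        #include <d3d11.h>

        D3D11_TEXTURE2D_DESC desc = {};
        desc.Width            = width;
        desc.Height           = height;
        desc.MipLevels        = 0;                       // 0 = allocate the full mip chain
        desc.ArraySize        = 1;
        desc.Format           = DXGI_FORMAT_R8G8B8A8_UNORM;
        desc.SampleDesc.Count = 1;
        desc.Usage            = D3D11_USAGE_DEFAULT;
        desc.BindFlags        = D3D11_BIND_SHADER_RESOURCE | D3D11_BIND_RENDER_TARGET;
        desc.MiscFlags        = D3D11_RESOURCE_MISC_GENERATE_MIPS;

        ID3D11Texture2D* tex = nullptr;
        device->CreateTexture2D(&desc, nullptr, &tex);

        // Upload the pixel data into the top mip level (subresource 0).
        context->UpdateSubresource(tex, 0, nullptr, pixels, rowPitchInBytes, 0);

        ID3D11ShaderResourceView* srv = nullptr;
        device->CreateShaderResourceView(tex, nullptr, &srv);

        // The GPU computes the remaining mip levels.
        context->GenerateMips(srv);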