Posted by dpadam450
on 17 September 2016 - 09:57 AM
If you cut the extra specular and Fresnel stuff out of PBR (which is a lot of the code trying to get the most realistic specular), you are left with a simple rule: metals have no diffuse term, only specular; non-metals have diffuse + specular.
And roughness is just a term describing the physical surface; it simply changes the directions that light bounces.
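As a rough illustration of that metal/non-metal split, here is a minimal sketch in plain Python. The `metalness` parameter, the `F0_DIELECTRIC` constant, and the scalar "light" inputs are my own assumptions for illustration, not anything from the post; a real shader would do this per color channel with a full BRDF.

```python
# Hypothetical sketch of the metalness simplification described above:
# metals get no diffuse term, non-metals get diffuse + specular.
# "metalness" is an assumed 0..1 material parameter.

def shade(albedo, metalness, diffuse_light, specular_light):
    """Combine lighting terms per the simplified metal/dielectric split.

    albedo: base color intensity (0..1); diffuse_light / specular_light
    are precomputed scalar light terms standing in for real BRDF math.
    """
    F0_DIELECTRIC = 0.04  # typical specular reflectance for non-metals
    # Metals kill the diffuse term entirely:
    diffuse = (1.0 - metalness) * albedo * diffuse_light
    # Metals tint their specular by albedo; dielectrics use a small constant F0.
    spec_color = metalness * albedo + (1.0 - metalness) * F0_DIELECTRIC
    specular = spec_color * specular_light
    return diffuse + specular

# A pure metal (metalness=1.0) contributes no diffuse at all:
print(shade(albedo=0.8, metalness=1.0, diffuse_light=1.0, specular_light=0.5))
```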
Posted by dpadam450
on 03 September 2016 - 01:26 PM
how do I update the normals before I bind the model to be rendered
You never update the normals. They are calculated offline based on the surface, not based on any view/model matrix. When you talk about calculating normals, they represent the surface. What you are describing is strictly transforming them, which works the same as transforming a vertex: the inputs stay the same, the outputs change relative to the camera view.
Your vertex shader should output a transformed normal every frame: OutputNormal = normalMatrix * in_normal;
normalMatrix should be computed once on the CPU and sent down. The normal matrix is simply the same as the model matrix (minus translation) if you have no non-uniform scaling (i.e. scaling is fine as long as it is equal across x, y, and z). Otherwise the normal matrix is the inverse transpose of the model matrix, and again it should be computed only once. So don't compute it if you don't need to.
FYI: for a 3D rotation matrix (no scaling applied to the model), inverse = transpose. So in the case of no scaling on your matrix, taking the transpose of the inverse is simply transposing the model matrix twice, which gives you the model matrix back anyway.
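The inverse-equals-transpose claim is easy to verify numerically. A small sketch in plain Python (no GL), using a hand-rolled 3x3 multiply so it stays self-contained:

```python
# For a pure rotation, inverse == transpose, so transpose(inverse(M))
# is just M again: the normal matrix equals the model matrix's upper 3x3.

import math

def transpose(m):
    return [[m[c][r] for c in range(3)] for r in range(3)]

def matmul(a, b):
    return [[sum(a[r][k] * b[k][c] for k in range(3)) for c in range(3)]
            for r in range(3)]

# A rotation of 30 degrees about the Z axis (no scaling).
t = math.radians(30.0)
R = [[math.cos(t), -math.sin(t), 0.0],
     [math.sin(t),  math.cos(t), 0.0],
     [0.0,          0.0,         1.0]]

# If the transpose really is the inverse, then R * R^T must be identity.
I = matmul(R, transpose(R))
print(all(abs(I[r][c] - (1.0 if r == c else 0.0)) < 1e-9
          for r in range(3) for c in range(3)))  # True
```

Once the matrix picks up non-uniform scale, this identity breaks and you need the actual inverse transpose.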
If you sell a product in the EU on your own, I believe you will have to collect VAT for the consumer's country or yours. If you are making a business deal with a company that is selling your product or paying you for a service, then no, you don't pay income tax in their country, just yours.
It is always a spotlight and always used... it's a computing system, so it is always fetching memory. Not sure what you mean by frame buffer (FBO?). If you want to see the performance impact of GPU RAM speed, download a performance/overclocking tool, turn the memory speed down about 500 MHz while in a game, and see what happens to the framerate.
If you are talking about system RAM, then no, that speed doesn't matter too much for interfacing with the GPU; at least comparing low-speed DDR3 vs. high-speed DDR3, the benchmarks I've seen might get a 1 FPS bump, from 59 to 60.
In the process of making a game, which part applies shaders to models?
You should be doing this at the art level. You need to assign specific shaders; for instance, clothing pieces will have a specific cloth shader, because cloth has very small details and should use a detail texture.
I've had issues before either in the rendering or the reflection vector. I suggest using a sphere to debug.
In my current shader, I have to flip the x-axis of the reflection vector: textureCube(envMap, vec3(refVec.x * -1.0, refVec.y, refVec.z)). I'm pretty sure that is just an issue with my current code, as I don't recall doing that before. But I've had issues where I had to convert the vector from GL to DX coordinates, because the cube map is most likely stored in DirectX coordinates. So try flipping y and z, and/or negating one of those after flipping.
Aside from that, you could also be rendering to the wrong cube face by accident, or using the wrong view matrix when rendering to the face.
vec4 R_World = inverseView * vec4(R,0.0);
You could also use an interpolated varying of the world-space eye/camera vector and get rid of the matrix multiply in the pixel shader.
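For debugging, it can help to check the reflection math on the CPU first. Here is a hedged sketch in plain Python rather than GLSL: reflect the view vector about the normal, then apply the kind of per-axis flip mentioned above for a cube map with mismatched handedness. The `flip_x` helper is mine, purely illustrative.

```python
# Reflection-vector sanity check outside the shader.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def reflect(incident, normal):
    """GLSL-style reflect: I - 2*dot(N, I)*N (normal assumed unit length)."""
    d = dot(normal, incident)
    return tuple(i - 2.0 * d * n for i, n in zip(incident, normal))

def flip_x(v):
    """Per-axis flip, as when a cube map uses the other handedness."""
    return (-v[0], v[1], v[2])

# Looking straight down -Z at a surface facing +Z: the ray bounces back.
R = reflect((0.0, 0.0, -1.0), (0.0, 0.0, 1.0))
print(R)  # (0.0, 0.0, 1.0)
```

If the CPU-side vector looks right but the sampled color is wrong, the problem is in the cube map orientation or the face rendering, not the reflection math.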
Out of curiosity, how is it done in 3D: how do you know when you reach the point where extra detail adds too much burden and steals an unacceptable number of cycles?
When you are writing the same pixel over and over (meaning you have an asteroid behind an asteroid behind an asteroid, and if you draw in reverse order it will continually update the same pixel), or when you have very tiny triangles. If you have triangles that are about 1x1 pixel, they will chew up performance. Pixel shaders run on 2x2 blocks of pixels for mip-mapping reasons, so a bunch of tiny triangles causes a ton more work at a single pixel location.
In the case of an asteroid field, most of the screen isn't going to be drawn, just a bunch of small or fairly small asteroids. That is why you want to LOD down to around 20 verts: at that distance you can't even tell the difference, and you can't make out much geometric shape.
If you are using instancing with 10 different models, that is only 10 draw calls. So how many models do you have? What is the vertex count? Why not LOD the asteroid geometry into 1 or 2 more LOD levels? If you are using displacement mapping, then LOD geometry is as simple as a sphere with fewer subdivisions.
If you really wanted to get fancy, those 10 draw calls could be reduced to 1 if you pass per-object matrices and per-object IDs in the range 1 to 10. In the shader you can use a texture array and use the ID to look up a certain displacement map. (Not sure whether texture arrays are supported in the vertex shader, but they probably are in the modern feature set.)
Regardless, how many triangles are you pushing? It would be much simpler to just LOD until performance is good (you could get it down to probably 20 verts per asteroid). You should easily be able to push a million verts, which at 20 verts each would be 50,000 asteroids above 60 FPS.
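The "LOD until performance is good" advice boils down to picking a vertex count by distance. A minimal sketch; the distance thresholds and vert counts here are made-up illustration values, not from the post:

```python
# Distance-based LOD pick: farther asteroids get cheaper meshes.

LODS = [
    (0.0,   2000),  # (min distance, verts): full-detail asteroid
    (50.0,   500),
    (200.0,   20),  # far away: ~20 verts is indistinguishable
]

def verts_for_distance(distance):
    """Return the vert count of the last LOD whose threshold we passed."""
    verts = LODS[0][1]
    for min_dist, vert_count in LODS:
        if distance >= min_dist:
            verts = vert_count
    return verts

# Rough budget check: 50,000 far asteroids at 20 verts is 1M verts total.
print(50_000 * verts_for_distance(300.0))  # 1000000
```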
I don't have time to look through all your code, but from the images it looks like you have only applied specular. Where is the basic diffuse term (dot(normal, light))? Using a cube map, you would get the diffuse term by mip-mapping the cube map and using the surface normal to look up into the lowest mip (1x1 for each face). The downsampling gives you the average of light over a hemisphere, which is what diffuse is: the sum of all light that can hit the surface and bounce back.
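The "lowest mip is the average light" point is just a box filter taken all the way down. A tiny sketch with a made-up 2x2 grayscale face; real cube map faces are RGB and filtered per channel:

```python
# The 1x1 mip of a cube face is the average of its texels, i.e. the
# average incoming light from that face's directions.

def downsample_to_1x1(face_texels):
    """Box-filter a flat list of texel intensities down to one value."""
    return sum(face_texels) / len(face_texels)

# A face that is half bright sky (1.0) and half dark ground (0.0):
face = [1.0, 1.0, 0.0, 0.0]
print(downsample_to_1x1(face))  # 0.5: the diffuse term for normals
                                # pointing at this face
```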
Isn't the whole point of smoothing groups to avoid having to manually create an edge loop, though? Note: I've only used Blender, and 'make sharp' on an edge does (I think) the same thing: it internally duplicates the vertices along the edge so the faces can have different normals rather than sharing one.
Well for one, he didn't say he was going to bake normal maps, and I don't bake normal maps for every object in my game. For a cartoony-looking game like Zelda: Wind Waker, you wouldn't really need normal maps anywhere in your game.
Subdivision surfaces will take a quad and split it into 4 quads down the center; they don't make edges sharper. The 'make sharp' feature on an edge, I believe, locks the edge from moving during subdivision (i.e. make a cube in Blender and apply subdivision: it will warp the cube toward a sphere instead of keeping the edges in place and subdividing around them). I always manually put edge loops around my models; it gives more control and doesn't add more than a minute of work. But also, as said, if I don't intend to use a normal map at all, or at some far-away LOD, then I need those loops in the low-poly mesh, not just the high-poly one, otherwise I will get artifacts like this guy.
For one, this is why we have normal maps: you can bake nice normals at a super-high-poly resolution. Your real issue is just not enough edge loops.
Any time you have a vertex on an edge that is at 90 degrees, or close to it, you need to create loops around it; otherwise the lighting is interpolated over a very high angle. This is also why you usually apply subdivision surfaces and then bake normals onto low-poly meshes, for the same kind of reason: more polys = better surface representation and lighting.
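The interpolation problem at a hard edge is easy to see numerically. A small sketch of what sharing one averaged normal across a 90-degree edge does (which is exactly what smoothing groups / 'make sharp' avoid by duplicating the vertex):

```python
# A shared, averaged normal on a 90-degree edge points diagonally and
# misrepresents both faces; split vertices keep each face normal intact.

import math

def normalize(v):
    length = math.sqrt(sum(x * x for x in v))
    return tuple(x / length for x in v)

def averaged(n1, n2):
    """The normal a single shared vertex would get across the edge."""
    return normalize(tuple(a + b for a, b in zip(n1, n2)))

top  = (0.0, 1.0, 0.0)  # normal of the top face of a cube
side = (1.0, 0.0, 0.0)  # normal of the adjacent side face

shared = averaged(top, side)  # one vertex shared across the edge
print(shared)  # a 45-degree diagonal: wrong for both faces

# With duplicated vertices, each copy keeps its own face normal,
# so the lighting stays flat across the sharp edge.
```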