
### #5260160 Per pixel sprite depth

Posted by on 02 November 2015 - 11:56 AM

Because the sprite depth is in the range [0,1] for the given tile, but the camera depth buffer's [0,1] range covers the whole camera frustum.

If you are exporting 2d images with a depth of 0 to 1, we will call this sprite-relative depth for reference.

Camera space will use 0 to 1, yes.

So the obvious solution: given a sprite's depth location, we add (or subtract) depth from the sprite's location in the camera. That means we need a way to translate sprite-relative depth into the world: you scale the relative sprite depth by the real-world depth range of the 2d sprite.

If you are rendering a couch and it spans 3 tiles in depth, which equates to the orthographic depth range .5 to .7, then obviously your 2d sprite is going to map between .5 and .7:

OutputDepth = .5 + (.7 - .5)*pixelSpriteDepth

So the couch is .2 long in the depth dimension, and your 2d sprite then represents a .2 range of depth values.
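That mapping can be sketched in a few lines (a minimal Python sketch; the function name is made up, and the .5 to .7 range is just the couch example):

```python
def remap_sprite_depth(sprite_depth, near, far):
    """Map a sprite-relative depth in [0, 1] onto the slice of the
    camera depth range [near, far] that the sprite occupies."""
    return near + (far - near) * sprite_depth

# The couch spans camera depths .5 to .7, so its sprite's [0, 1]
# depth values cover a .2-wide slice of the depth buffer.
front = remap_sprite_depth(0.0, 0.5, 0.7)  # front of couch -> 0.5
back = remap_sprite_depth(1.0, 0.5, 0.7)   # back of couch  -> 0.7
```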

Seems pretty simple; not sure what else you would be asking.

### #5259694 PBR Metalness equation

Posted by on 30 October 2015 - 12:37 AM

Kind of long, and I'm about to sleep. The main point I was getting at with this equation was:

A mirror, metal spoon, etc. Pure reflective metal with no surface imperfections, colored or not, will always reflect 100% of the light, at any angle to the surface. As you approach 90 degrees to the surface, the color of the metal disappears because the fresnel effect bounces pure light off the surface. And your equations have to break down to what I posted:

Output = 0*red  + mix(  cubeMap*red,  cubeMap,  fresnelAmount)

Any colored mirror has to break down to that. And because a bathroom mirror has no tint, and no fresnel effect (it already reflects everything), for a bathroom mirror your equation breaks down further into:
Output = cubeMap
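As a quick sanity check of that reduction (a Python sketch of the equation above, where `mix` is the GLSL-style lerp), setting the tint to white makes the metal equation collapse to plain cubeMap at any fresnel amount, matching the untinted bathroom-mirror case:

```python
def mix(a, b, t):
    # GLSL-style lerp, applied per color channel
    return tuple(x + (y - x) * t for x, y in zip(a, b))

def metal_output(cube_map, base_color, fresnel):
    # Output = 0*baseColor + mix(cubeMap*baseColor, cubeMap, fresnelAmount)
    tinted = tuple(c * k for c, k in zip(cube_map, base_color))
    return mix(tinted, cube_map, fresnel)

cube = (0.8, 0.6, 0.4)
# With a white base color, the tinted term equals the cubemap, so the
# mix is a no-op: Output = cubeMap regardless of the fresnel amount.
assert metal_output(cube, (1.0, 1.0, 1.0), 0.3) == cube
```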

### #5259693 PBR Metalness equation

Posted by on 30 October 2015 - 12:23 AM

I don't get it then. The equation you posted has been around forever; it's the same thing I've been using for 10 years (except for the G part), unless I missed something big. I thought the whole PBR idea was replacing the actual "point" light with real cubemaps: since light comes from everywhere and surfaces have micro-indents/imperfections, we blur this cubemap down, which works as an approximation of light coming and going in random directions.

That's why I'm confused in general. The incoming light on a surface is a perfect image, and lots of little normals scatter it. So I thought N*L was being replaced completely, because there really isn't one normal unless we zoom into a surface pretty much at the atomic level; instead we factor N*L as a blurred environment map. How is that not the case in PBR? It makes logical sense to me.

I think I'm just going to write a shader that makes sense to me, as a modified version of what we have been talking about; maybe it is the same thing and there is just some minor point I'm not understanding.

But in real physical lighting, and as I understood PBR, there is no actual L vector; there are just photons coming from all directions with different intensities.
I guess I just thought PBR was defined by that. Everything else you are saying amounts to: I only need to change 2 variables in the old equations and I have PBR? I've never seen an example of PBR without a cubemap, and you are suggesting that is some other topic that isn't needed.

I guess I'm just going to have to fiddle around with some shaders and see what results I get relative to these other PBR viewers.

Posted by on 29 October 2015 - 10:31 PM

Try deleting MAX_NUM_LIGHTS. It doesn't appear you are using it, and the compiler may be complaining about unused uniforms. Also, this is specific to OpenGL, not general graphics, so it should have gone in that sub-forum.

### #5259663 PBR Metalness equation

Posted by on 29 October 2015 - 08:03 PM

I'm close, but I don't get how F0 can be a color. I thought fresnel was a scalar: an amount of reflectiveness given the direction of the viewer to the surface. So how do I plug a color into the fresnel equation?

This picture demonstrates why I'm confused. I have Roughness = 0. BaseColor = Red.  Then the top image is pure metal, bottom is pure plastic.

Metal
Since metal has no diffuse, the red must come from the specular portion of the equation, which means my cube map must be multiplied by red. However, in image 3 we have the fresnel effect, where at high angles it is not multiplied by red. So the only equation that works for this is:

Output = 0*red  + mix(  cubeMap*red,  cubeMap,  fresnelAmount);

So we are either viewing a tinted cubemap head-on, or at high angles fresnel goes to 1.0, in which case the cubemap takes over completely. For some reason I feel like this is wrong, but based on these images that is the only math that matches what I see.

Plastic
To arrive at the plastic value, which is image #2, it seems like:
Output = red*cubeMap  +   cubeMap*fresnelAmount;

Our diffuse material reflects only red values based on the incoming light intensity, and then we have specular added on top of this, which appears to be untinted.

However, I feel like both of these are wrong.
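For comparison, the two candidate equations can be put side by side (a Python sketch of exactly the formulas in this post; the names are made up, not from any real shading library):

```python
def mix(a, b, t):
    # GLSL-style lerp, applied per color channel
    return tuple(x + (y - x) * t for x, y in zip(a, b))

def shade_metal(cube, red, fresnel):
    # Output = 0*red + mix(cubeMap*red, cubeMap, fresnelAmount)
    tinted = tuple(c * k for c, k in zip(cube, red))
    return mix(tinted, cube, fresnel)

def shade_plastic(cube, red, fresnel):
    # Output = red*cubeMap + cubeMap*fresnelAmount
    diffuse = tuple(c * k for c, k in zip(cube, red))
    spec = tuple(c * fresnel for c in cube)
    return tuple(d + s for d, s in zip(diffuse, spec))

white_env = (1.0, 1.0, 1.0)
red = (1.0, 0.0, 0.0)
# Metal viewed head-on: pure tinted reflection -> (1.0, 0.0, 0.0)
# Metal at grazing angle (fresnel = 1): untinted cubemap -> (1.0, 1.0, 1.0)
head_on = shade_metal(white_env, red, 0.0)
grazing = shade_metal(white_env, red, 1.0)
```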

### #5259645 PBR Metalness equation

Posted by on 29 October 2015 - 05:42 PM

Ok, I see: F0 is when the angle to the surface is 0. I already understood the idea, I just didn't know what it was called. I'll respond if I end up getting somewhere with this shader.

### #5259570 PBR Metalness equation

Posted by on 29 October 2015 - 08:45 AM

Lots of things are confusing me here. In the Marmoset example, F0 is in the range 0 to 1, but your second comment says to use a color for F0. I guess that is fine because it is the amount of reflection for R, G, B independently. Either way, it's just a multiplication in the specular computation.

Q1: For a non metal that is glossy, if F0 = vec3(.03,.03,.03), then how will the equation ever reflect full light? I'm definitely missing some equations here.

For a shiny plastic, say a guitar or a piece of marble, with roughness = 0, metal = 0, diffuse = dark green:
---->dark green * textureCube(mipLevel 0)   + textureCube(mipLevel0)*vec3(.03, .03, .03)  ?

That is my current understanding, which must be wrong, because shiny dark green plastic then just comes out as dark green with a super tiny amount of specular.

Q2: What is used for the incoming light values for diffuse?

IncomingSpecularLightValueAtPixel = textureCube( reflected eye over surface normal,  roughnessValue)

IncomingDiffuseLightValueAtPixel  = same thing?
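For what it's worth on Q1, the usual way a .03 F0 still reaches full reflection is a fresnel term like Schlick's approximation, which ramps every channel toward 1 at grazing angles (a Python sketch; applying it per channel so F0 can be a color is an assumption based on this thread, not a quote from Marmoset):

```python
def schlick(f0, cos_theta):
    """Schlick's fresnel approximation, per channel, so f0 can be a
    color (metals) or a small grey like vec3(.03) (dielectrics)."""
    return tuple(f + (1.0 - f) * (1.0 - cos_theta) ** 5 for f in f0)

# Head-on view (cos_theta = 1): the dielectric reflects only F0 = .03...
head_on = schlick((0.03, 0.03, 0.03), 1.0)
# ...but at a grazing angle (cos_theta = 0) every channel climbs to ~1,
# which is how a glossy non-metal still picks up bright reflections.
grazing = schlick((0.03, 0.03, 0.03), 0.0)
```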

### #5259500 2d image position

Posted by on 28 October 2015 - 08:55 PM

Change your ortho/perspective matrix to be in a known range such as 0 to 1 or -1 to 1. This way you are dealing with more or less a percentage of the distance from the bottom-left corner to the top-right corner. And if your matrix is in pixel values instead, you can scale positions the same way:

For instance, if you are building a level at 1024x768 and you want a sprite in the middle of the screen:

x = 512

y = 384

Scale to the range 0 to 1:
x = .5
y = .5

Someone with a 1920x1080 resolution plays the game. Take your scaled values and put them in the 1920x1080 range:
new x =  .5*1920
new y = .5*1080

Your image is now always in the middle of the screen.
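The whole scaling step fits in one small function (a Python sketch; the names are made up):

```python
def to_screen(x_ref, y_ref, ref_w, ref_h, screen_w, screen_h):
    """Convert a position authored at a reference resolution into the
    equivalent position at the player's actual resolution."""
    return x_ref / ref_w * screen_w, y_ref / ref_h * screen_h

# A sprite centered when authoring at 1024x768...
x, y = to_screen(512, 384, 1024, 768, 1920, 1080)
# ...is still centered at 1920x1080: (960.0, 540.0)
```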

### #5259289 Texturing tools you should know about

Posted by on 27 October 2015 - 10:41 AM

They are aimed at speeding up the texturing process. It has nothing to do with where the models came from: as long as you have a model with UVs and a normal map, it is a good tool.

If you look at just about any hard-surface model, people spend time going around every edge to create "edge wear". In one of the links I posted you can see how the tool does this automatically with a button and a slider, creating edge wear on the entire model, something you would otherwise have to do manually. You also have the option to select brushes and paint directly onto the 3d model using projection, rather than working in 2D in Photoshop.

Typically you have grunge brushes and scratches etc. that go on the surfaces, and it can place these for you as well.

Also, for PBR it supports painting materials, so everything gets the proper metal/reflective properties. It also helps that all of this updates in real time, so you see your work immediately.

### #5253127 Efficient rendering of a dynamic grid

Posted by on 19 September 2015 - 10:04 PM

Well, this is my idea; it may or may not be faster, but it definitely seems a bit lighter, as it doesn't modify any terrain data. It is its own system:

1.) Create the + grid texture (offline, done in Photoshop) and load it in. In SC2 this texture represents 10x10 tiles.

2.) Create an FBO the same size as the + grid texture.

3.) Put the mouse in the world, calculate its 2D grid position, and grab the 5 tiles to the left, 5 to the right, etc. LOGICALLY (completely CPU side).

4.) Bind the FBO, do a double loop over i, j = 0..9, and draw 100 quads colored by the logical can-or-can't-walk data.

---> To optimize, just have a VBO containing 100 quads and update all the vertex colors in one call (instead of updating your other VBO, which, since its vertices aren't consecutive, I assume forces you to re-upload the buffer about 10 times... or are you literally updating the ENTIRE buffer? I hope not).

5.) At this point we have updated the color array VBO for just 100 "fake" cells (not all 100x100) and drawn them, so our FBO holds a small top-down 2d image of green and red tiles.

6.) Into the same FBO, draw your + grid texture with blending or whatever, so what you have is the + grid and the colors in one texture. Then we want to stamp that top-down image right onto the terrain, so we use projective texturing; you have the mouse position on the terrain, so you can project from there. The terrain doesn't need to know any extra data, just that an image was projected onto it (which it sounds like you already have).
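The CPU-side tile gathering in steps 3 and 4 might look like this (a Python sketch; the names and the 10x10 extent are just from the SC2 example above):

```python
def tiles_around(mouse_tile_x, mouse_tile_y, half_extent=5):
    """Step 3: collect the 10x10 block of logical tile coordinates
    centered on the mouse's grid position (pure CPU side)."""
    return [(mouse_tile_x + dx, mouse_tile_y + dy)
            for dy in range(-half_extent, half_extent)
            for dx in range(-half_extent, half_extent)]

# 100 logical cells; step 4 would test each for walkability and
# draw a green or red quad into the FBO for it.
cells = tiles_around(40, 25)
```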

If you don't understand, then oh well, it sounds like your way works just fine, and I can't explain it any better. You create a stamp with all of that and stamp it on the terrain via projective texturing, instead of each vertex knowing it has a color.

### #5252539 Foliage Collision

Posted by on 16 September 2015 - 11:00 AM

Well, does this work with multiple spheres? Does it work with other types of objects?

One thing you could look at is a texture that gets projected from above (a heightmap). You could have a buffer to draw into, render the sphere with front-face culling so the bottoms of the objects, the parts lowest to the ground, get rendered, and then output the height of each pixel above the terrain heightmap. You could also, for every pixel, output some rotation angle from the center of the object (you may be able to use the normal for something like this), since it does seem like you can run at a blade of grass from any angle and it doesn't just bend down, it rotates.

Unless they have a generic object like a statue that can be thrown on the grass, it may just be something simpler. The rotation part is what throws me off and would take me a while to figure out how to do efficiently.

### #5252184 Linux c++ debugging

Posted by on 14 September 2015 - 08:18 AM

I had no Linux experience coming into my latest job. Some of our code is on Linux, and I'm so used to Visual Studio. I've used GDB as a debugger, but it's a pain coming from Visual Studio. Are there any suggestions for programming tools on Linux?

### #5251904 Bad performance when rendering medium amount of meshes

Posted by on 12 September 2015 - 10:58 AM

What is also bad, unless this is just a test, is that your objects are so small that after every 28 triangles drawn, the GPU has to stall to figure out what happens next and set things up. You want the GPU to draw as many triangles in one go as possible.

### #5230493 degenerate triangles

Posted by on 22 May 2015 - 03:32 PM

It shouldn't matter which one you choose, though WebGL may be a lot slower, so indices might be faster.

But instead use drawArrays with GL_TRIANGLES and a degenerate triangle to separate quads.

You would simply have duplicated vertices, where each triangle has 3 unique vertices. The degenerate thing would apply if you were drawing indexed vertices using triangle strips/fans, where you send -1 (I believe) as a restart index, which tells the GPU to start a brand-new polygon that isn't attached to the edges of the previous triangle.
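The two index layouts can be illustrated by just building the index lists (a Python sketch; the 0xFFFF restart value assumes 16-bit indices, the sentinel that -1 wraps to for unsigned shorts, configured via primitive restart in desktop OpenGL):

```python
def quad_indices_triangles(num_quads):
    """Indexed GL_TRIANGLES: two independent triangles per quad,
    no degenerate or restart tricks needed."""
    idx = []
    for q in range(num_quads):
        v = q * 4  # four unique vertices per quad
        idx += [v, v + 1, v + 2, v, v + 2, v + 3]
    return idx

def quad_indices_strip(num_quads, restart_index=0xFFFF):
    """Indexed GL_TRIANGLE_STRIP: quads separated by a restart index
    so each quad starts a fresh strip."""
    idx = []
    for q in range(num_quads):
        v = q * 4
        idx += [v, v + 1, v + 3, v + 2]
        if q != num_quads - 1:
            idx.append(restart_index)
    return idx
```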

### #5229015 How to create an MSAA shader? (For deferred rendering)

Posted by on 14 May 2015 - 01:50 PM

You have some misconceptions here:

MSAA takes a pixel and chops it into 2x2 sub-pixels (or more if you go higher quality). The pixel shader gets run ONCE for the pixel. If all of those 2x2 sub-pixels are inside a triangle, it writes the same color value 4 times. If the sub-pixels fall on a triangle edge, it writes the pixel shader value only for the sub-pixels the triangle covers.

At the end of the day, the buffer is downsampled to ONE pixel by averaging the four sub-pixel values. So there is no such thing as "Screen Space MSAA". There is no blurring; that only happens with FXAA, which has no sub-pixels and so has to smear spots with high contrast.
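That resolve step is just a per-pixel average of the sub-samples (a Python sketch of the downsample, not any real GL call):

```python
def resolve_msaa(samples):
    """Resolve one pixel's sub-samples to a single color by averaging
    them per channel, the way a 4x MSAA buffer is downsampled."""
    n = len(samples)
    return tuple(sum(channel) / n for channel in zip(*samples))

# Interior pixel: all 4 sub-samples got the same shader color -> unchanged
interior = resolve_msaa([(1.0, 0.0, 0.0)] * 4)
# Edge pixel: 2 samples covered by a red triangle, 2 by black background,
# so the resolved pixel is half red: (0.5, 0.0, 0.0)
edge = resolve_msaa([(1.0, 0.0, 0.0), (1.0, 0.0, 0.0),
                     (0.0, 0.0, 0.0), (0.0, 0.0, 0.0)])
```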

So when you are talking about deferred, you are talking about making a texture that supports MSAA (meaning it is actually much bigger, because it has to hold all these sub-pixels), and then you can call a downsample function or something. I'm not sure how that specifically works, but I'm assuming you just call a "gl" command to resolve the MSAA buffer to an actual texture.

Also, this makes you write more data to your gBuffer for deferred, so it could slow things down significantly. You could try using MSAA only on your color buffer and not the other ones.
