In what sense? Surely they can't be purely random vertices forming blobs. Can you post an image of a few of these generated objects? You can always write an unwrapper that unwraps 3D meshes; how good it will be depends on what your goal is. I don't know what it is called in other tools, but Blender has one called Smart UV Project, which is probably the best method you could use for an arbitrary 3D mesh. I don't know all the inner details of the algorithm, but I have an idea.
Posted by dpadam450
on 02 November 2015 - 11:56 AM
because the sprite depth is in the range [0,1] for the given tile, but the camera's depth buffer range [0,1] covers the whole camera frustum.
If you are exporting 2D images with depth in 0 to 1, we will call this sprite-relative depth.
Camera space will use 0 to 1, yes.
So the obvious solution is: given a sprite's depth location, we add (or subtract) depth from the sprite's location in the camera. We need a way to translate sprite-relative depth into camera depth: scale the relative sprite depth by the real-world depth extent of the 2D sprite.
If you are rendering a couch and it spans 3 tiles in depth, which equates to the orthographic depth range .5 to .7, then obviously your 2D sprite is going to map between .5 and .7:
OutputDepth = .5 + (.7 - .5)*pixelSpriteDepth
So the couch is .2 in the depth dimension in length. Your 2d sprite then represents a .2 range in depth values.
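As a minimal sketch of that remap (the function name is hypothetical; the .5 and .7 come from the couch example above):

```python
def remap_sprite_depth(near_depth, far_depth, pixel_sprite_depth):
    """Map a sprite-relative depth in [0, 1] onto the slice of the
    camera's [0, 1] depth range that the sprite actually occupies."""
    return near_depth + (far_depth - near_depth) * pixel_sprite_depth

# The couch spanning orthographic depths .5 to .7:
remap_sprite_depth(0.5, 0.7, 0.0)  # front of the sprite -> 0.5
remap_sprite_depth(0.5, 0.7, 1.0)  # back of the sprite  -> 0.7
```

In a shader this would run per pixel, with `pixel_sprite_depth` sampled from the exported depth image.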
...seems pretty simple; not sure what else you would be asking.
Kind of long, and I'm about to sleep. The main point I was getting at with this equation was:
A mirror, metal spoon, etc.: pure reflective metal with no surface imperfections, colored or not, will always reflect 100% of the light, at any angle to the surface. As you approach 90 degrees to the surface, the color of the metal is no longer present, because the Fresnel effect bounces pure light off the surface. And your equations have to break down to what I posted:
Any colored mirror has to break down to that. A bathroom mirror has no tint, and it has no visible Fresnel effect because it already reflects everything, so for a bathroom mirror your equation breaks down further into:
Output = cubeMap
I don't get it then. The equation you posted has been around forever; it's the same thing I've been using for 10 years (except for the G term), unless I missed something big. I thought the whole PBR idea was replacing the actual "point" light with real cubemaps: since light comes from everywhere and surfaces have micro-indents/imperfections, we blur this cubemap down, which works as an approximation of light coming and going in random directions.
That's why I'm confused in general. The incoming light on a surface is a perfect image, and lots of little normals scatter it. So I thought N*L was being replaced completely, because there really isn't one normal unless we zoom into the surface almost to the atomic level, and that we factor N*L into a blurred environment map. How is that not the case in PBR? This makes logical sense to me.
I think I'm just going to write a shader that makes sense to me, as a modified version of what we have been talking about; basically it's the same thing, with some minor point I'm not understanding.
But in real physical lighting, and as I understood PBR, there is no actual L vector. There are just photons coming from all directions with different intensities.
I guess I just thought PBR was defined by that. Everything else you are saying amounts to: I only need to change 2 variables in the old equations and I have PBR? I've never seen an example of PBR without a cubemap, and you are suggesting that's some other topic that isn't needed.
I guess I'm just going to have to fiddle around with some shaders and see what results I get relative to these other PBR viewers.
Try deleting MAX_NUM_LIGHTS. It doesn't appear you are using it, and the driver may be complaining about an unused uniform. Also, this is specific to OpenGL, not general graphics, so it should have gone in that sub-forum.
I'm close, but I don't get how F0 = a color. I thought Fresnel was a scalar: an amount of reflectiveness given the direction of the viewer to the surface. So how do I plug a color into the Fresnel equation?
This picture demonstrates why I'm confused. I have Roughness = 0 and BaseColor = red. The top image is pure metal, the bottom is pure plastic.
Since metal has no diffuse, the red must come from the specular portion of the equation, which means my cube map must be multiplied by red. However, in image 3 we have the Fresnel effect, where at grazing angles it is not multiplied by red. So the only equation that works for this is:
So either we are viewing a red-tinted cubemap head on, or at grazing angles Fresnel goes to 1.0, in which case the untinted cubemap takes over completely. For some reason I feel like this is wrong, but based on these images that is the only math that matches what I see.
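That reading matches Schlick's approximation with F0 set to the metal's base color: head on you get the tinted reflection, and at grazing angles every channel climbs to 1.0 so the untinted cubemap takes over. A minimal per-channel sketch (hypothetical names, colors as plain tuples):

```python
def schlick(f0, cos_theta):
    """Schlick's approximation per channel: F = F0 + (1 - F0)*(1 - cos(theta))^5.
    f0 is an RGB tuple; for a pure metal, f0 is the base color."""
    return tuple(c + (1.0 - c) * (1.0 - cos_theta) ** 5 for c in f0)

red_f0 = (1.0, 0.0, 0.0)        # F0 = base color for a pure red metal
head_on = schlick(red_f0, 1.0)  # -> (1.0, 0.0, 0.0): reflection stays tinted red
grazing = schlick(red_f0, 0.0)  # -> (1.0, 1.0, 1.0): Fresnel wins, cubemap untinted
```

The final specular would then be `F * cubeMap` per channel, which is exactly the "tinted head on, untinted at grazing angles" behavior in the images.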
To arrive at this plastic value, which is image #2, it seems like:
Output = red*cubeMap + cubeMap*fresnelAmount;
Our diffuse material reflects only red based on the incoming light intensity, and then we have specular added on top of this, which appears to be untinted.
Lots of things are confusing me here. In the Marmoset example, F0 is in the range 0 to 1, but your second comment says to use a color for F0. I guess that is fine because it is the amount of R, G, B reflected independently; either way, it's just a multiplication on the specular computation.
Q1: For a non-metal that is glossy, if F0 = vec3(.03, .03, .03), then how will the equation ever reflect full light? I'm definitely missing some equations here.
For a shiny plastic, say a guitar or a piece of marble: roughness = 0, metal = 0, diffuse = dark green:
---->dark green * textureCube(mipLevel 0) + textureCube(mipLevel0)*vec3(.03, .03, .03) ?
That's my current understanding, which must be wrong, because shiny dark green plastic just comes out as dark green with a super tiny amount of specular.
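To make the dark-green-plastic case concrete, here is a minimal per-channel sketch of that equation, assuming Schlick's approximation is what drives the scalar F0 = .03 toward 1.0 at grazing angles (which is also what lets the equation reach full reflection in Q1). All function names are hypothetical:

```python
def schlick_scalar(f0, cos_theta):
    """Scalar Schlick: F = F0 + (1 - F0)*(1 - cos(theta))^5."""
    return f0 + (1.0 - f0) * (1.0 - cos_theta) ** 5

def shade_dielectric(diffuse, irradiance, specular_env, cos_theta, f0=0.03):
    """diffuse * blurred cube + sharp cube * Fresnel, per channel."""
    f = schlick_scalar(f0, cos_theta)
    return tuple(d * i + s * f
                 for d, i, s in zip(diffuse, irradiance, specular_env))

dark_green = (0.0, 0.2, 0.0)
white_env = (1.0, 1.0, 1.0)
shade_dielectric(dark_green, white_env, white_env, 1.0)  # weak .03 specular head on
shade_dielectric(dark_green, white_env, white_env, 0.0)  # Fresnel -> 1.0 at grazing
```

So head on you do get "dark green plus a tiny specular", but at grazing angles the Fresnel term climbs to 1.0 and the environment reflection dominates, which is why even matte-ish dielectrics look mirror-like at shallow angles.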
Q2: What is used for the incoming light values for diffuse?
IncomingSpecularLightValueAtPixel = textureCube( reflected eye over surface normal, roughnessValue)
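As a sketch of that lookup: the cube-map direction is the eye vector reflected over the surface normal (R = I - 2(N·I)N), and the roughness picks the mip. The linear roughness-to-mip ramp below is a hypothetical placeholder; real engines use more careful mappings:

```python
def reflect(incident, normal):
    """R = I - 2*(N.I)*N, with I pointing from the eye toward the surface."""
    d = sum(i * n for i, n in zip(incident, normal))
    return tuple(i - 2.0 * d * n for i, n in zip(incident, normal))

def roughness_to_mip(roughness, mip_count):
    # Hypothetical linear mapping: roughness 0 -> sharpest mip, 1 -> blurriest.
    return roughness * (mip_count - 1)

reflect((0.0, 0.0, -1.0), (0.0, 0.0, 1.0))  # looking straight at the surface -> (0, 0, 1)
```

In GLSL this is the built-in `reflect()` plus a `textureLod` on the cubemap with the computed mip.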
Change your ortho/perspective matrix to be in a known range such as 0 to 1 or -1 to 1. This way you are dealing with a percentage of the position from the bottom-left corner to the top-right corner. Or, if you want to keep your matrix in pixel values instead:
For instance if you are building a level in 1024x768 and you want a sprite in the middle of the screen:
x = 512
y = 384
Scale to range 0 to 1
x = .5
y = .5
Someone with a 1920x1080 resolution plays the game. Take your scaled values and put them back into the 1920x1080 range:
new x = .5*1920
new y = .5*1080
Your image is now always in the middle of the screen.
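The steps above can be sketched as a pair of helpers (the names are hypothetical):

```python
def to_normalized(x, y, design_w, design_h):
    """Convert pixel coordinates in the authoring resolution to 0..1."""
    return x / design_w, y / design_h

def to_screen(nx, ny, screen_w, screen_h):
    """Convert normalized 0..1 coordinates to the player's resolution."""
    return nx * screen_w, ny * screen_h

# Authored at 1024x768, played at 1920x1080:
nx, ny = to_normalized(512, 384, 1024, 768)  # -> (0.5, 0.5)
to_screen(nx, ny, 1920, 1080)                # -> (960.0, 540.0), still centered
```

The same two calls work for any target resolution, which is the whole point of storing positions as percentages.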
They are aimed at speeding up the texturing process. It has nothing to do with where the models came from: as long as you have a model with UVs and a normal map, it is a good tool.
If you look at just about any hard-surface model, people spend time going around every edge to create "edge wear". In one of the links I posted you can see how it does this automatically with a button and a slider, creating edge wear on the entire model, something you would otherwise have to do manually. You also have the option to select brushes and paint directly onto the 3D model using projection, rather than working in 2D in Photoshop.
Typically you have grunge brushes, scratches, etc. that go on the surfaces, and it can place these for you as well.
Also, for PBR it supports painting materials, so everything gets the proper metal/reflective properties. It also helps that all of this updates in real time and you see your work immediately.
Posted by dpadam450
on 19 September 2015 - 10:04 PM
Well, this is my idea. It may or may not be faster, but it definitely seems a bit lighter, as it doesn't modify any terrain data. It is its own system:
1.) Create the + grid texture (offline, done in Photoshop) and load it in. In SC2 this texture represents 10x10 tiles.
2.) Create an FBO the same size as the + grid texture.
3.) Put the mouse in the world, calculate its 2D grid position, and grab the 5 tiles left, 5 right, etc. LOGICALLY (completely CPU-side).
4.) Bind the FBO and loop i and j over the 10x10 cells, drawing 100 quads colored by the logical can/can't-walk flag.
---> to optimize, just have a VBO containing 100 quads and update all vertex colors in one update call (instead of updating your other VBO, which, since the vertices in this case aren't consecutive, I assume requires about 10 separate buffer re-uploads... or are you literally updating the ENTIRE buffer? I hope not).
5.) At this point we have updated the color-array VBO for just 100 "fake" cells (not all 100x100) and called draw on it, so our FBO holds a small top-down 2D image of green and red tiles.
6.) Into the same FBO, draw your + grid texture with blending or whatever, so what you end up with is the + grid and the colors in one texture. Then we want to stamp that top-down image right onto the terrain, so we use projective texturing: you have the mouse position on the terrain, so you can project from there. The terrain doesn't need to know any extra data, just that an image was projected onto it (which it sounds like you already have).
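Step 3 of the sketch above (the purely CPU-side part) might look like this; the names are hypothetical, and `walkable` stands in for however your game queries pathability:

```python
def placement_cells(mouse_x, mouse_z, tile_size, walkable, grid_w, grid_h, radius=5):
    """Return (i, j, ok) for the 10x10 block of tiles centered on the mouse.
    `walkable` is a function (i, j) -> bool; out-of-bounds tiles count as blocked."""
    ci = int(mouse_x // tile_size)
    cj = int(mouse_z // tile_size)
    cells = []
    for i in range(ci - radius, ci + radius):
        for j in range(cj - radius, cj + radius):
            in_bounds = 0 <= i < grid_w and 0 <= j < grid_h
            cells.append((i, j, in_bounds and walkable(i, j)))
    return cells

cells = placement_cells(12.3, 7.9, 1.0, lambda i, j: True, 100, 100)
len(cells)  # -> 100 quads to draw into the FBO, green or red by the flag
```

Each returned cell becomes one quad in the 100-quad VBO, colored green or red, which is all the FBO pass needs.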
If you don't understand, then oh well; it sounds like your way works just fine, and I can't explain it any better. You create a stamp with all of that and stamp it onto the terrain via projective texturing, instead of each vertex knowing it has a color.