
Styves

Member Since 17 Dec 2009
Offline Last Active Today, 04:21 AM

Posts I've Made

In Topic: Fast Way To Determine If All Pixels In Opengl Depth Buffer Were Drawn At Leas...

Yesterday, 02:51 AM

Writing your own rasterizer isn't really going to solve your problem, since you won't be utilizing your GPU at all (or, if you use compute, not as efficiently as you could be). Just leave that stuff to the GPU guys; they know what they're doing. :)

 

Anyway, do you really need such precise culling? I mean, are you absolutely sure you're GPU bound? Going into such detail just to cull a few triangles might not be worth it, and could hurt your performance rather than help if you're actually CPU bound, since modern GPUs would rather eat a few big chunks of data than receive a draw call for every individual triangle. If you have bounding box culling on your objects, plus frustum culling, then I think that's all you'll really need unless you're writing a big AAA title with very high scene complexity.

 

Just bear in mind that Quake levels were built for very different hardware constraints, so you should probably break the OBJ model you have into small sections instead of processing the entire mesh as one chunk, so those two culling systems can do a little more for you.

 

That said, if you really want proper occlusion culling for triangles, you can either check out the Frostbite approach (it's quite complicated IIRC), or try implementing a simple Hi-Z culling system using geometry shaders (build a simple quad-tree out of your z-buffer and do quad-based culling on each triangle in the geometry shader). The latter is simpler to implement and I've had pretty good results with it.
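
To make the quad-tree idea concrete, here's a minimal CPU-side sketch of the test, assuming a square, power-of-two depth buffer with 0 = near and 1 = far. In practice the same comparison would run on the GPU (e.g. in the geometry shader against a max-depth mip chain); this is only to illustrate the logic, and every name in it is made up for the example.

```cpp
// Hypothetical CPU-side sketch of a Hi-Z occlusion test.
// Level 0 is the full-resolution depth buffer; each coarser level stores the
// MAX depth of its 2x2 children, so one fetch conservatively bounds a region.
#include <algorithm>
#include <cmath>
#include <vector>

struct HiZPyramid {
    std::vector<std::vector<float>> levels; // levels[0] = full-res depth, row-major
    std::vector<int> sizes;                 // width == height, power of two (assumed)
};

HiZPyramid buildHiZ(const std::vector<float>& depth, int size) {
    HiZPyramid p;
    p.levels.push_back(depth);
    p.sizes.push_back(size);
    while (size > 1) {
        int half = size / 2;
        const std::vector<float>& fine = p.levels.back();
        std::vector<float> coarse(half * half);
        for (int y = 0; y < half; ++y)
            for (int x = 0; x < half; ++x) {
                float a = fine[(2 * y) * size + 2 * x];
                float b = fine[(2 * y) * size + 2 * x + 1];
                float c = fine[(2 * y + 1) * size + 2 * x];
                float d = fine[(2 * y + 1) * size + 2 * x + 1];
                coarse[y * half + x] = std::max(std::max(a, b), std::max(c, d)); // farthest depth
            }
        p.levels.push_back(std::move(coarse));
        p.sizes.push_back(half);
        size = half;
    }
    return p;
}

// Returns true if a triangle whose screen-space bounds are [minX,maxX] x [minY,maxY]
// (in level-0 pixels) and whose nearest depth is triMinDepth is definitely occluded.
bool isOccluded(const HiZPyramid& p, float minX, float maxX,
                float minY, float maxY, float triMinDepth) {
    // Pick a level where the bounds cover only a couple of texels.
    float extent = std::max(maxX - minX, maxY - minY);
    int level = std::min((int)p.levels.size() - 1,
                         std::max(0, (int)std::ceil(std::log2(std::max(extent, 1.0f)))));
    int size = p.sizes[level];
    float scale = (float)size / p.sizes[0];
    int x0 = std::clamp((int)(minX * scale), 0, size - 1);
    int x1 = std::clamp((int)(maxX * scale), 0, size - 1);
    int y0 = std::clamp((int)(minY * scale), 0, size - 1);
    int y1 = std::clamp((int)(maxY * scale), 0, size - 1);
    float farthest = 0.0f;
    for (int y = y0; y <= y1; ++y)
        for (int x = x0; x <= x1; ++x)
            farthest = std::max(farthest, p.levels[level][y * size + x]);
    // Occluded only if the triangle's nearest point is behind everything already drawn there.
    return triMinDepth > farthest;
}
```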


In Topic: First-person weapon rendering techniques

01 June 2016 - 01:48 PM

EH9V4BC.gif

xD I was specifically thinking of Crysis 1, but yeah lol, this also shows a method you could try.


In Topic: First-person weapon rendering techniques

01 June 2016 - 02:56 AM

Mirror's Edge had animations that were tailored to look good in first person, which prevents the neck from hitting the camera, etc. In third person these animations look hilarious, but they look great in first person. You can still see them look funny in your shadows, though, or if you watch some hacked third-person YouTube videos.

 

Other games, like Crysis or Halo, simply render the body from the waist down (not the whole body; they either have a separate model built from the 3rd person model, or they clip the geometry on the fly). You can't look any lower than something like 70-80 degrees in these games, so even at the lowest angle you still won't see the entire torso. The first person hands are totally separate and rendered the traditional way (floating hands with a gun). This looks great IMO, and is pretty easy to set up - no new animations, and you can do it with the existing third person model if you clip off the top of the mesh.
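
If you go the "separate model built from the 3rd person model" route, the offline step can be as simple as throwing away every triangle above a chosen waist plane. A rough sketch of that idea; the waist height and the choice not to split straddling triangles are simplifications for the example:

```cpp
// Hypothetical offline step: build a "legs only" mesh from the third-person
// model by discarding triangles that sit above a chosen waist height (model-space Y).
#include <vector>

struct Vec3 { float x, y, z; };

struct Mesh {
    std::vector<Vec3>     positions;
    std::vector<unsigned> indices; // triangle list
};

Mesh clipAboveWaist(const Mesh& src, float waistY) {
    Mesh out;
    out.positions = src.positions; // keep the vertex buffer, rebuild only the index list
    for (size_t i = 0; i + 2 < src.indices.size(); i += 3) {
        const Vec3& a = src.positions[src.indices[i]];
        const Vec3& b = src.positions[src.indices[i + 1]];
        const Vec3& c = src.positions[src.indices[i + 2]];
        // Keep the triangle only if every vertex is at or below the waist plane.
        // (A real tool would split triangles that straddle the plane instead.)
        if (a.y <= waistY && b.y <= waistY && c.y <= waistY) {
            out.indices.push_back(src.indices[i]);
            out.indices.push_back(src.indices[i + 1]);
            out.indices.push_back(src.indices[i + 2]);
        }
    }
    return out;
}
```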

 

In one game (I forget which) they actually don't treat the camera as a single rotating point in space, but instead move the camera as it rotates to simulate neck movement, which makes the camera lean forward when looking down, etc. You can use this as well to avoid neck problems, but it'll affect the way camera motion feels.
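
A minimal sketch of that neck-pivot idea: rotate a small local eye offset by pitch and yaw around a fixed neck pivot instead of rotating the camera in place. The 10 cm / 15 cm offsets are made-up values, not from any particular game.

```cpp
// Hypothetical neck-pivot camera: the eyes sit a little in front of and above
// the pivot, so looking down (positive pitch) moves the camera forward and down.
#include <cmath>

struct Vec3 { float x, y, z; };

// pitch in radians (positive = looking down), yaw in radians, neckPivot in world space.
Vec3 firstPersonCameraPosition(const Vec3& neckPivot, float pitch, float yaw) {
    const float eyeForward = 0.10f; // eyes ~10 cm in front of the neck pivot (assumed)
    const float eyeUp      = 0.15f; // and ~15 cm above it (assumed)
    // Rotate the local offset (up, forward) by pitch around the camera's right axis...
    float offY = eyeUp * std::cos(pitch) - eyeForward * std::sin(pitch);
    float offF = eyeUp * std::sin(pitch) + eyeForward * std::cos(pitch);
    // ...then swing the forward component around by yaw.
    return { neckPivot.x + offF * std::sin(yaw),
             neckPivot.y + offY,
             neckPivot.z + offF * std::cos(yaw) };
}
```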


In Topic: How could I use bent normal map

17 April 2016 - 04:41 PM

Calculating bent normals is just an extension of AO calculation where the direction of each sample is also averaged along with the occlusion amount.
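
A minimal sketch of that, assuming you already have hemisphere sample directions around the surface normal and some visibility query; the `occlusionTest` callback here is just a placeholder for whatever your AO pass uses (ray cast, depth-buffer march, ...).

```cpp
// Sketch: average the *unoccluded* sample directions alongside the AO amount;
// the normalized average is the bent normal.
#include <cmath>
#include <vector>

struct Vec3 {
    float x, y, z;
    Vec3 operator+(const Vec3& o) const { return {x + o.x, y + o.y, z + o.z}; }
};

Vec3 normalize(Vec3 v) {
    float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return len > 0.0f ? Vec3{v.x / len, v.y / len, v.z / len} : v;
}

struct AOResult {
    float ao;        // 1 = fully open, 0 = fully occluded
    Vec3 bentNormal; // average unoccluded direction
};

AOResult computeAOAndBentNormal(const Vec3& surfaceNormal,
                                const std::vector<Vec3>& hemisphereSamples,
                                bool (*occlusionTest)(const Vec3& dir)) {
    Vec3 bent{0, 0, 0};
    int open = 0;
    for (const Vec3& dir : hemisphereSamples) {
        if (!occlusionTest(dir)) { // this sample direction is not blocked
            bent = bent + dir;
            ++open;
        }
    }
    // With nothing occluded the average collapses back to the surface normal,
    // which is why open areas of a bent-normal map look like a flat tangent-space normal.
    AOResult r;
    r.ao = hemisphereSamples.empty() ? 1.0f : (float)open / hemisphereSamples.size();
    r.bentNormal = open > 0 ? normalize(bent) : surfaceNormal;
    return r;
}
```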

 

 

4491947_1329650854zqXO.png

 

 

The bent normals from Substance Painter look correct to me. In any region where there is no AO, your normal won't be bent in any direction, so in tangent space it points straight up along the surface normal. This is why you only see details in the ears.

 

I have no idea how Unity handles bent normals, but the usual way is to use the dot product of the bent normal and the light direction, instead of applying AO as a flat multiplier onto the lighting result (which gives that weird muddy look). This is how we handle SSDO in CryENGINE, for example.
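
As a rough illustration of the difference (this is not Unity's or CryENGINE's actual code, just the two ideas side by side): the standard path multiplies the whole lighting result by AO, while the bent-normal path swaps the geometric normal for the bent normal in the N·L term, so light arriving from occluded directions is darkened instead of everything being flattened.

```cpp
// Sketch comparing flat AO multiplication with a bent-normal N·L term.
#include <algorithm>

struct Vec3 { float x, y, z; };

float dot(const Vec3& a, const Vec3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Conventional AO: shade with the regular normal, then scale by AO.
float diffuseWithAO(const Vec3& N, const Vec3& L, float ao) {
    return std::max(dot(N, L), 0.0f) * ao;
}

// Bent-normal style: use the bent normal directly in the N·L term.
// (Some implementations still scale by a residual AO term on top of this.)
float diffuseWithBentNormal(const Vec3& bentN, const Vec3& L) {
    return std::max(dot(bentN, L), 0.0f);
}
```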

 

I've only ever used a texture-based version of this technique once; in that version the bent normals also contained the information from the original tangent-space normal map, and were used directly during lighting in place of the original normals.


In Topic: resolution scaling

02 April 2016 - 10:15 AM

Keep in mind that a lot of games don't do full 1080p rendering for everything. Post FX, Particles, SSAO, to name a few, are all done at lower resolutions and then merged with the full resolution image.

 

So if you're looking for a 100% natively rendered 1080p game, you probably won't find one today.

 

The important point to focus on is that, as long as the geometry is rendered at 1080p, people will consider it a 1080p game. If it's scaled down in one dimension (1280x1080) then you won't be considered "full 1080p", but you also won't be considered "not 1080p". People are kinda weird; they don't even notice that this is what they're doing. In the end, all they notice is "clarity", so if it looks even a little blurry they'll start pointing fingers at your resolution. It's pretty clear they don't really understand how graphics work... hence talking about textures being sharp and crisp when the image looks nice, since it's the only graphics term they really know.

 

An example of how this logic can easily get confused is when Guerrilla did interlacing for the multiplayer of Killzone: Shadow Fall. They rendered at half width and alternated between even and odd pixel columns every frame, blending the result with a reprojected previous frame. It caused a pretty big divide between players, some siding with "reprojection doesn't count, it's not true 1080p!" and some siding with "the final output is a 1080p image, so it's 1080p".
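
Conceptually it works something like the sketch below (not Guerrilla's actual code, just the interlacing idea): each frame only the even or odd columns are freshly rendered at half width, and the remaining columns are filled from the reprojected previous frame.

```cpp
// Sketch: compose a full-width frame from a half-width render plus a reprojected history.
#include <vector>

struct Color { float r, g, b; };

// output must already be sized width * height.
void composeInterlaced(const std::vector<Color>& halfWidthFrame,  // (width/2) x height, this frame
                       const std::vector<Color>& reprojectedPrev, // width x height, previous frame
                       std::vector<Color>& output,                // width x height
                       int width, int height, bool evenColumnsThisFrame) {
    for (int y = 0; y < height; ++y) {
        for (int x = 0; x < width; ++x) {
            bool freshColumn = ((x % 2) == 0) == evenColumnsThisFrame;
            output[y * width + x] = freshColumn
                ? halfWidthFrame[y * (width / 2) + x / 2] // column rendered this frame
                : reprojectedPrev[y * width + x];         // column carried over from last frame
        }
    }
}
```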

 

They even almost got sued over it. ┐( ̄∀ ̄)┌

