Execute shader depending on what's visible on screen


Hi all,

In a 3D first-person / ego-perspective game I've got a simple script that checks whether the player is currently located inside one of a list of specified coordinate boxes (say, inside a room from a list of defined rooms), and executes some shader / post-process accordingly.

Now what I would need instead is a check of whether what the player currently sees is the inside of one of those coordinate boxes, i.e. he might still be outside the box but see only its inside on screen, or, vice versa, look from within the box to the outside.

I could do such a check either by scripting on the engine side, or in the responsible post-process shader.

How would I achieve this? Any suggestions are welcome.

The usual way is to just do a frustum check against the box or its contents. If the box intersects (or something inside it intersects), the player sees the box (or some of its contents). This can be done on the GPU, but there is a large delay between running a shader computation and reading back the results, which you'd need to do to select the right shader (and a shader that does the check and contains all the rendering modes internally would likely be slower, too).

The frustum intersection is your best bet, IMO.
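For reference, a minimal CPU-side sketch of such a frustum-versus-box test, assuming you already have the six frustum planes with normals pointing inwards (the Vec3/Plane/AABB types here are illustrative, not from any particular engine):

```cpp
struct Vec3 { float x, y, z; };

// Plane in the form n.p + d = 0, with the normal pointing into the frustum.
struct Plane { Vec3 n; float d; };

struct AABB { Vec3 min, max; };

// Conservative test: returns false only if the box lies completely outside
// one of the planes. It may return true for a few boxes that are actually
// outside (the classic false positive of plane-by-plane culling).
bool BoxIntersectsFrustum(const AABB& box, const Plane planes[6])
{
    for (int i = 0; i < 6; ++i)
    {
        const Plane& p = planes[i];

        // Pick the box corner that lies furthest along the plane normal.
        Vec3 corner;
        corner.x = (p.n.x >= 0.0f) ? box.max.x : box.min.x;
        corner.y = (p.n.y >= 0.0f) ? box.max.y : box.min.y;
        corner.z = (p.n.z >= 0.0f) ? box.max.z : box.min.z;

        // If even the most "positive" corner is behind this plane,
        // the whole box is outside the frustum.
        if (p.n.x * corner.x + p.n.y * corner.y + p.n.z * corner.z + p.d < 0.0f)
            return false;
    }
    return true;
}
```

Whichever way the test comes down then simply selects which post-process shader the engine binds for that frame.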

Sean Middleditch – Game Systems Engineer – Join my team!

To do an if/else between two different post-processing shaders:
1) read the depth of each pixel, use it to reconstruct position, test if the position is within your bounding box. Return white for true and black for false.
2) generate mipmaps for this black/white texture.
3) render your optional "if" post-processing quad. In the vertex shader, read from the 1x1 pixel mipmap of the black/white texture. If some threshold isn't met (tex>0 for "any pixels from the box visible", tex==1 for "all visible pixels are in the box") then multiply the vertex position by zero, which effectively hides the quad.
4) render an alternative "else" post-processing quad. In the vertex shader, perform the same test, but inverted (using 1-tex).

[edit] There are a number of caveats involved in doing this kind of thing in D3D9, though.

* To read the depth buffer in step (1), you either need to use D3DFMT_INTZ, or render depth to a regular texture yourself using a shader.

* To read from a texture in a vertex shader ("VTF") in steps (3/4), you likely need to use a floating point texture format such as D3DFMT_R16F or D3DFMT_A16B16G16R16F.

* Step 2 is automatic in D3D9 if you created that texture with the D3DUSAGE_AUTOGENMIPMAP flag.
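To make those caveats concrete, here is a rough D3D9 setup sketch for the two textures the steps above rely on. The function name and error handling are my own, and a real implementation should first verify INTZ and R16F support with CheckDeviceFormat:

```cpp
#include <d3d9.h>

// Hypothetical helper: creates a readable depth texture (step 1) and a
// black/white mask render target with auto-generated mipmaps (steps 1-2).
// Assumes 'device' is a valid IDirect3DDevice9* and that the driver
// supports the INTZ FourCC format and R16F vertex texture fetch.
bool CreateVisibilityResources(IDirect3DDevice9* device,
                               UINT width, UINT height,
                               IDirect3DTexture9** depthTex,
                               IDirect3DTexture9** maskTex)
{
    // INTZ lets the pixel shader sample the hardware depth buffer directly.
    const D3DFORMAT INTZ = (D3DFORMAT)MAKEFOURCC('I', 'N', 'T', 'Z');
    if (FAILED(device->CreateTexture(width, height, 1,
                                     D3DUSAGE_DEPTHSTENCIL, INTZ,
                                     D3DPOOL_DEFAULT, depthTex, NULL)))
        return false;

    // R16F so the vertex shader can sample the mask later (VTF);
    // AUTOGENMIPMAP so the 1x1 mip holding the average coverage
    // is generated automatically.
    if (FAILED(device->CreateTexture(width, height, 1,
                                     D3DUSAGE_RENDERTARGET | D3DUSAGE_AUTOGENMIPMAP,
                                     D3DFMT_R16F, D3DPOOL_DEFAULT,
                                     maskTex, NULL)))
        return false;

    return true;
}
```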

Thanks for your suggestions, guys!

OK, I've checked the suggestions. It seems that I could do some sort of frustum culling or per-pixel box check, but both approaches require a rather ugly set of calculations, which I'd like to avoid if there is a simpler way.

What I actually have at the point where I would like to do the check (high-level script on CPU, not GPU side):

  • Camera world position
  • Camera direction
  • Camera FOV
  • 2 Box corner world coordinates (left-bottom-front, right-top-back)

What I do not have right away:

  • View frustum definition (near/far planes, or, say, the 6 planes defining the frustum)
  • Any specific pixel information (uv, view space position, depth or the like)

What I would like to calculate:

  • Percentage of screen "covered" by box.

Any clues on how to perform such calculation?

Nobody?

I had thought of a check using the view angles between the camera direction and the view vector (direction) to the box's corners or its center point. That way I could, for example, determine whether a certain point (i.e. a corner vertex) is located inside or outside of the screen boundaries only by checking the view angle and distance, thus avoiding the need for any world-to-screen-space matrix calculations.

What do you guys think of such an approach?
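For illustration, a minimal sketch of such an angle test, assuming the camera direction is already normalized and the FOV is the full opening angle in radians. This treats the view as a cone, so it ignores the aspect ratio and the rectangular shape of the real frustum; the Vec3 helpers are mine:

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3  Sub(const Vec3& a, const Vec3& b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
static float Dot(const Vec3& a, const Vec3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static float Length(const Vec3& v)             { return std::sqrt(Dot(v, v)); }

// Approximate test: is the world-space point inside the camera's view cone?
// camDir must be normalized; fovRadians is the full opening angle of the cone.
bool PointInViewCone(const Vec3& camPos, const Vec3& camDir,
                     float fovRadians, const Vec3& point)
{
    Vec3  toPoint = Sub(point, camPos);
    float dist    = Length(toPoint);
    if (dist <= 0.0001f)
        return true;                                  // camera is practically at the point

    float cosAngle = Dot(toPoint, camDir) / dist;     // cosine of the view angle
    return cosAngle >= std::cos(fovRadians * 0.5f);   // within the half-angle?
}
```

Applied to the eight box corners (and perhaps the box center), this gives a cheap per-point answer, though it misses cases where the box spans the whole view without any corner being on screen.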


> Now what I would need instead is a check of whether what the player currently sees is the inside of one of those coordinate boxes, i.e. he might still be outside the box but see only its inside on screen, or, vice versa, look from within the box to the outside.

> What I would like to calculate:
> • Percentage of screen "covered" by box.

> That way I could, for example, determine whether a certain point (i.e. a corner vertex) is located inside or outside of the screen boundaries only by checking the view angle and distance, thus avoiding the need for any world-to-screen-space matrix calculations.

What exactly are you trying to do again?


> What exactly are you trying to do again?

Not sure if you're kidding me, or what's so hard to understand after all my descriptions... but well, OK:

As written in my initial post, I want my host application to decide which (pixel) shader to execute, depending on whether the camera's view shows mainly (i.e. with a high percentage of screen coverage) the inside of an area defined by an (imaginary) cube / box. It is not sufficient to know whether the camera / player is located inside that cube; I need to know whether what is visible on screen is the inside of that cube.

Think of a cubic area of radioactivity that should trigger some noise post-process effect when the player is inside of or near that area, but only if he's facing / seeing the area.

I do not want to use frustum culling to find the intersection between the view frustum and the box, which would be the "classic" approach for the given task. The main reason is that I do not have a near and far plane in my host app, only the FOV, plus the ability to calculate the distance and angle to any point on the map.

I do not want to use Hodgman's idea either because my host app knows crap about any DirectX calls, which would be required for that approach.

So I need some other way to check what the camera currently sees, one that requires nothing but the things I actually have in my application, which, as said before, are:

  • Camera world position
  • Camera direction
  • Camera FOV
  • 2 Box corner world coordinates (left-bottom-front, right-top-back)

plus the mentioned functions to calculate the distance and view angle to a given world space point.

Was that clear now? Thanks for any suggestions.

It definitely wasn't clear what you wanted (probably the reason why no one replied after you rejected the other two solutions). You've made it more clear now, thanks.


> I do not want to use frustum culling to find the intersection between the view frustum and the box, which would be the "classic" approach for the given task. The main reason is that I do not have a near and far plane in my host app, only the FOV, plus the ability to calculate the distance and angle to any point on the map.

It seems like you only care about the 2d projection of the box on the screen (since you have no near/far plane information).

The most straightforward idea I can think of is to use an arbitrary near/far plane, along with the camera properties you do know. That should allow you to construct a view frustum. Then just compare against the 4 side planes of the view frustum (ignoring the near and far planes, since your choice of them was arbitrary and you don't care about that clipping anyway).

Or, alternatively, now that you have an arbitrary projection matrix that lets you project the box coordinates into 2D (ignoring the z coordinates, since, again, they depend on your arbitrary choice of near/far planes), just check the 2D box coordinates against the screen boundaries.
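As an illustration of that second idea, here is a rough sketch that builds a camera basis from the known position, direction and FOV, projects the eight box corners, and estimates screen coverage from the 2D bounding rectangle of the projections. The world-up vector, aspect ratio and helper functions are assumptions on my part, and the rectangle is only an upper bound on the true coverage:

```cpp
#include <algorithm>
#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3  Sub(const Vec3& a, const Vec3& b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
static Vec3  Cross(const Vec3& a, const Vec3& b) {
    return { a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x };
}
static float Dot(const Vec3& a, const Vec3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3  Normalize(const Vec3& v) {
    float len = std::sqrt(Dot(v, v));
    return { v.x / len, v.y / len, v.z / len };
}

// Estimates the fraction of the screen covered by the axis-aligned box
// [boxMin, boxMax]. fovYRadians is the vertical FOV; aspect = width / height.
// Assumes world up is +Y and that the camera never looks straight up or down
// (otherwise the basis construction below degenerates).
float EstimateBoxCoverage(const Vec3& camPos, const Vec3& camDir,
                          float fovYRadians, float aspect,
                          const Vec3& boxMin, const Vec3& boxMax)
{
    // Build an orthonormal camera basis. Depending on your handedness
    // conventions 'right' may mirror the X axis; that does not affect
    // the area estimate.
    const Vec3 worldUp = { 0.0f, 1.0f, 0.0f };
    Vec3 forward = Normalize(camDir);
    Vec3 right   = Normalize(Cross(forward, worldUp));
    Vec3 up      = Cross(right, forward);

    const float tanY = std::tan(fovYRadians * 0.5f);
    const float tanX = tanY * aspect;

    float minX = 1.0f, maxX = -1.0f, minY = 1.0f, maxY = -1.0f;
    bool  anyInFront = false;

    for (int i = 0; i < 8; ++i)
    {
        Vec3 corner = { (i & 1) ? boxMax.x : boxMin.x,
                        (i & 2) ? boxMax.y : boxMin.y,
                        (i & 4) ? boxMax.z : boxMin.z };
        Vec3 rel = Sub(corner, camPos);

        float z = Dot(rel, forward);
        if (z <= 0.0f)
            continue;                    // behind the camera; a full solution
                                         // would clip the box edges instead
        anyInFront = true;

        // Project into normalized screen-style coordinates in [-1, 1].
        float x = Dot(rel, right) / (z * tanX);
        float y = Dot(rel, up)    / (z * tanY);
        minX = std::min(minX, x); maxX = std::max(maxX, x);
        minY = std::min(minY, y); maxY = std::max(maxY, y);
    }

    if (!anyInFront)
        return 0.0f;

    // Clamp the projected rectangle to the screen and compare areas.
    float w = std::max(0.0f, std::min(maxX, 1.0f) - std::max(minX, -1.0f));
    float h = std::max(0.0f, std::min(maxY, 1.0f) - std::max(minY, -1.0f));
    return (w * h) / 4.0f;               // the full screen spans 2 x 2 in these units
}
```

Corners behind the camera are simply skipped rather than clipped, so the estimate degrades when the player stands very close to or inside the box; any threshold such as coverage > 0.5 should therefore be treated as approximate.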

Thank you.
