Execute shader depending on what's visible on screen


Meltac    508

Hi all,

In a 3D first-person game I've got a simple script that checks whether the player is currently located inside one of a list of specified coordinate boxes (say, in one of a list of defined rooms), and executes some shader / post-process accordingly.

Now what I need instead is to check whether what the player currently sees is the inside of one of those coordinate boxes, i.e. he might still be outside the box but see only its inside on screen, or vice versa, look from within the box to the outside.

I could do such a check either by scripting the engine, or in the responsible post process shader.

How would I achieve this? Any suggestions are welcome.

Edited by Meltac

SeanMiddleditch    17565
The usual way is to just do a frustum check against the box or its contents. If the box intersects (or something inside it intersects), the player sees the box (or some of its contents). This can be done on the GPU, but there is a large delay in running a shader computation and then reading back the results, which you'd need to do to select the right shader (and a shader that does the check and contains all the rendering modes internally would likely be slower, too).

The frustum intersection is your best bet, IMO.
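For reference, the standard frustum-vs-AABB test can be sketched as follows (Python used purely as illustration; the plane representation and function name are my own, not from any particular engine):

```python
def aabb_intersects_frustum(box_min, box_max, planes):
    """Test an axis-aligned box against a set of frustum planes.

    Each plane is (nx, ny, nz, d) with the normal pointing inward,
    so a point p is inside when dot(n, p) + d >= 0.
    """
    for nx, ny, nz, d in planes:
        # Pick the box corner furthest along the plane normal (the
        # "positive vertex"); if even that corner is outside this
        # plane, the whole box is outside the frustum.
        px = box_max[0] if nx >= 0 else box_min[0]
        py = box_max[1] if ny >= 0 else box_min[1]
        pz = box_max[2] if nz >= 0 else box_min[2]
        if nx * px + ny * py + nz * pz + d < 0:
            return False
    return True  # not culled by any plane => at least partially visible
```

The same test works with any convex set of planes, so it can also be run with fewer than six if some planes are unknown.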

Hodgman    51234

To do an if/else between two different post-processing shaders:
1) read the depth of each pixel, use it to reconstruct position, test if the position is within your bounding box. Return white for true and black for false.
2) generate mipmaps for this black/white texture.
3) render your optional "if" post-processing quad. In the vertex shader, read from the 1x1 pixel mipmap of the black/white texture. If some threshold isn't met (tex>0 for "any pixels from the box visible", tex==1 for "all visible pixels are in the box") then multiply the vertex position by zero, which effectively hides the quad.
4) render an alternative "else" post-processing quad. In the vertex shader, perform the same test, but inverted (using 1-tex).

There's a bunch of caveats involved in doing this kind of thing in D3D9 though.

* To read the depth buffer in step (1), you either need to use D3DFMT_INTZ, or render depth to a regular texture yourself using a shader.

* To read from a texture in a vertex shader ("VTF") in steps (3/4), you likely need to use a floating point texture format such as D3DFMT_R16F or D3DFMT_A16B16G16R16F.

* Step 2 is automatic in D3D9 if you created that texture with the D3DUSAGE_AUTOGENMIPMAP flag.
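To see what the 1x1 mip in step (3) ends up holding: averaging the black/white mask down its mip chain yields the fraction of visible pixels whose reconstructed position lies inside the box. A CPU sketch of that reduction (Python for illustration only; this is not shader code):

```python
def coverage_from_mask(mask):
    """CPU illustration of what the 1x1 mip of the black/white mask
    holds after step (2): the average over all pixels, i.e. the
    fraction of visible pixels whose position passed the in-box test."""
    total = sum(sum(row) for row in mask)
    count = sum(len(row) for row in mask)
    return total / count

# A 4x4 mask where 4 of the 16 pixels passed the in-box test:
mask = [[1, 1, 0, 0],
        [1, 1, 0, 0],
        [0, 0, 0, 0],
        [0, 0, 0, 0]]
# coverage_from_mask(mask) -> 0.25, so "tex > 0" (any pixel in the
# box) is met, while "tex == 1" (all pixels in the box) is not.
```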

Edited by Hodgman

Meltac    508

Thanks for your suggestions, guys

Meltac    508

Ok, I've checked the suggestions. It seems that I could do some sort of frustum culling or per-pixel box check, but both approaches require a rather ugly set of calculations which I'd like to avoid if there is a simpler way.

What I actually have at the point where I would like to do the check (high-level script on CPU, not GPU side):

• Camera world position
• Camera direction
• Camera FOV
• 2 Box corner world coordinates (left-bottom-front, right-top-back)

What I do not have right away:

• View frustum definition (near/far planes, or the 6 planes defining the frustum)
• Any specific pixel information (uv, view space position, depth or the like)

What I would like to calculate:

• Percentage of screen "covered" by box.

Any clues on how to perform such calculation?

Meltac    508

Nobody?

I had thought of a check using the view angles between the camera direction and the view vector (direction) to the box's corners or its center point. That way I could, for example, determine whether a certain point (i.e. a corner vertex) is located inside or outside the screen boundaries only by checking view angle and distance, thus avoiding the need for any world-to-screen-space or other matrix calculations.
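A minimal sketch of such an angle test, assuming a normalized camera direction (Python for illustration; in the actual engine this would be Lua script, and the function name is hypothetical):

```python
import math

def point_on_screen(cam_pos, cam_dir, fov_deg, point):
    """Rough visibility test using only the view angle: a point is
    taken as on-screen when the angle between the camera's forward
    direction and the direction to the point is below half the FOV.
    This treats the screen as a cone, so it is too permissive along
    the screen edges and too strict in the corners.
    cam_dir is assumed to be normalized."""
    to_p = tuple(p - c for p, c in zip(point, cam_pos))
    dist = math.sqrt(sum(v * v for v in to_p))
    if dist == 0.0:
        return True  # camera sits exactly on the point
    cos_angle = sum(d * v for d, v in zip(cam_dir, to_p)) / dist
    return cos_angle >= math.cos(math.radians(fov_deg) / 2.0)
```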

What do you guys think of such an approach?

phil_t    8084

Now what I need instead is to check whether what the player currently sees is the inside of one of those coordinate boxes, i.e. he might still be outside the box but see only its inside on screen, or vice versa, look from within the box to the outside.

What I would like to calculate:
Percentage of screen "covered" by box.

That way I could, for example, determine whether a certain point (i.e. a corner vertex) is located inside or outside the screen boundaries only by checking view angle and distance, thus avoiding the need for any world-to-screen-space or other matrix calculations.

What exactly are you trying to do again?

Meltac    508

What exactly are you trying to do again?

Not sure if you're kidding me, or what's so hard to understand after all my descriptions... but well, OK:

As written in my initial post, I want my host application to decide which (pixel) shader to execute, depending on whether the camera's view shows mainly (i.e. with a high percentage of screen coverage) the inside of an area defined by an (imaginary) cube / box. It is not sufficient to know whether the camera / player is located inside that cube; I need to know whether what is visible on screen is the inside of that cube.

Think of a cubic area of radioactivity that should trigger some noise post-process effect when the player is inside of or near that area, but only if he's facing / seeing the area.

I do not want to use frustum culling to find the intersection between the view frustum and the box, which would be the "classic" approach for this task. The main reason is that I do not have near and far planes in my host app, only the FOV, plus (by calculation) the distance and angle to any point on the map.

I do not want to use Hodgman's idea either because my host app knows crap about any DirectX calls, which would be required for that approach.

So I need some other way to check what the camera currently sees, a way that requires nothing but the things that I actually have in my application, that is as said before:

• Camera world position
• Camera direction
• Camera FOV
• 2 Box corner world coordinates (left-bottom-front, right-top-back)

plus the mentioned functions to calculate the distance and view angle to a given world space point.

Was that clear now? Thanks for any suggestions.

Edited by Meltac

phil_t    8084

It definitely wasn't clear what you wanted (probably the reason why no one replied after you rejected the other two solutions). You've made it more clear now, thanks.

I do not want to use frustum culling to find the intersection between the view frustum and the box, which would be the "classic" approach for the given task. The main reason for this is that I do not have a near and far plane given in my host app, only the FOV plus by calculation the distance and angle to any point on the map.

It seems like you only care about the 2d projection of the box on the screen (since you have no near/far plane information).

The most straightforward idea I can think of is to use arbitrary near/far planes, along with the camera properties you know about. That should allow you to build a view frustum. Then just compare against the 4 side planes of the view frustum (ignoring the front and back planes, since your choice of near/far was arbitrary and you don't care about them anyway).

Or, alternatively, now that you have an (arbitrary) projection matrix that lets you project the box coordinates into 2D (ignoring the z coordinates, since, again, they're based on your arbitrary choice of near/far planes), just check the 2D box coordinates against the screen boundaries.
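A sketch of that projection, built from nothing but the camera position, direction and FOV (Python for illustration; the world up axis (0, 1, 0) and all helper names are my own assumptions):

```python
import math

def normalize(v):
    l = math.sqrt(sum(c * c for c in v))
    return tuple(c / l for c in v)

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def project_corner(cam_pos, cam_dir, fov_deg, aspect, p):
    """Project a world point into normalized device coordinates
    (x and y land in [-1, 1] when the point is on screen).
    fov_deg is taken as the vertical FOV; near/far planes never
    enter the x/y result, which is why an arbitrary choice works.
    Assumes cam_dir is not parallel to the world up axis (0, 1, 0).
    Returns None for points behind the camera."""
    forward = normalize(cam_dir)
    right = normalize(cross((0.0, 1.0, 0.0), forward))
    up = cross(forward, right)
    rel = tuple(a - b for a, b in zip(p, cam_pos))
    z = dot(rel, forward)           # depth along the view direction
    if z <= 0.0:
        return None                 # behind the camera
    f = 1.0 / math.tan(math.radians(fov_deg) / 2.0)
    return (dot(rel, right) * f / (aspect * z),
            dot(rel, up) * f / z)

def box_corners(box_min, box_max):
    # Expand the two given corners into all 8 corners of the box.
    return [(x, y, z)
            for x in (box_min[0], box_max[0])
            for y in (box_min[1], box_max[1])
            for z in (box_min[2], box_max[2])]
```

A corner is on screen when both returned coordinates lie in [-1, 1].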

Meltac    508

Thank you.

Promit    13246

Don't occlusion queries give you exactly what you're looking for? You might need to retool a bit in order for them to fit your game code, but issuing a query will actually tell you how many pixels are affected.

Jason Z    6434

Don't occlusion queries give you exactly what you're looking for? You might need to retool a bit in order for them to fit your game code, but issuing a query will actually tell you how many pixels are affected.

I second this approach - it sounds like an exact match for what you are trying to do, and support for it is built into the API itself. You can even issue the queries asynchronously.

Hodgman    51234

From his posting history, I think he's making a graphical mod for STALKER, which means doing raw D3D stuff is out of the picture.

What kind of rendering operations can you perform, Meltac?

Meltac    508

From his posting history, I think he's making a graphical mod for STALKER, which means doing raw D3D stuff is out of the picture.

That is true. I stopped mentioning the X-Ray engine that STALKER uses, since otherwise most people here won't post *any* replies because they don't know that engine.

What kind of rendering operations can you perform, Meltac?

As the version of X-Ray I'm working with is not open, I only have access to those parts of the engine that are: a Lua-based scripting sub-engine and the pure HLSL vertex and pixel shaders. There is no accessible DirectX host application here, so any D3D... function calls and the like are absolutely out of scope. Probably I shouldn't have tagged this thread D3D9, but I wanted to make clear that the engine is basically built on DirectX 9.

That said, virtually no "rendering" operations in the usual sense are possible at all. All I've got are the mentioned camera properties and the world-space coordinates of the box to check against. The operations I can do must be pure math / programming algorithms without any dependency on the DirectX API or any GPU specifics.

The Lua script part provides extensions for matrix / vector math operations, though. Theoretically this should be sufficient to do the job when applied to the given point and direction coordinates, with the FOV angle taken into account.

EDIT:
And to highlight it once again, I do not have *any* view / world / transformation / projection / inverse or whatever matrix available, only the point coordinates of the camera and the box.

Edited by Meltac

Meltac    508

That's what I meant. As soon as Hodgman mentioned STALKER and raw D3D stuff being out of the picture, no soul seems to dare to reply.

Edited by Meltac

Jason Z    6434

That's what I meant. As soon as Hodgman mentioned STALKER and raw D3D stuff being out of the picture, no soul seems to dare to reply.

I don't have any problem with the STALKER engine - I just don't know how to implement what you are asking without access to the API...  Is it really necessary to use the STALKER engine, or could you upgrade to something more open?

Meltac    508

I don't have any problem with the STALKER engine - I just don't know how to implement what you are asking without access to the API... Is it really necessary to use the STALKER engine, or could you upgrade to something more open?

I am developing mods for STALKER which are supposed to run on a normal installation of that game, so "upgrading" to whatever different engine is not an option.

I am pretty, pretty sure that the task I'm asking about is very well doable without the DirectX API, probably even in multiple different ways. Probably I should have asked in a more math-related forum rather than here, as what I am intending to do really doesn't require any D3D stuff; it's mainly a question of vector math (even though it might be just as doable using the DirectX API).

Hodgman    51234
As well as the math angle, it might help to describe the problem from a design perspective too. Is this something to do with radiation or anomaly zones? You want the player's screen to undergo an effect when they stare into an anomaly, etc.?

Also, as well as post-processing shaders, can you place custom meshes into the world and put custom materials/shaders on them? Maybe there's a solution down this path as well?

Meltac    508

it might help to describe the problem from a design perspective too. Is this something to do with radiation or anomaly zones? You want the player's screen to undergo an effect when they stare into an anomaly, etc.?

Also, as well as post-processing shaders, can you place custom meshes into the world and put custom materials/shaders on them? Maybe there's a solution down this path as well?

I have made several post-processing effects that affect only certain areas of the game level map while leaving others untouched. One prominent example would be reflective surfaces, say the tile floor in some lab casting diffuse light reflections. The reflection itself is done by a post-process shader; its implementation details are off-topic here.

As I have no means to select which areas of the map have a reflective floor by material, color, or any other 2D or 3D property, I need to manually define arrays of coordinate sets for those areas where the post-process should be executed. Then my CPU-side script should check whether what the player currently sees is "mostly" part of such a defined coordinate set (e.g. tile floor to be rendered reflective), and if so, set an engine variable that will be read by the GPU-side shader to enable the according post-process. It's not an exact match then, because the percentage of screen coverage decides whether to enable a specific shader effect, but it's an approximate approach.
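That coverage threshold could be sketched roughly like this, given the projected screen positions of the box corners (Python for illustration; `screen_coverage` and the 0.5 threshold are hypothetical, and the bounding-rectangle approximation overestimates the true coverage):

```python
def screen_coverage(points_ndc):
    """Approximate fraction of the screen covered by a box, given the
    projected NDC positions of its corners (x, y in [-1, 1] on screen).
    Corners behind the camera are passed as None and simply skipped,
    which is crude but enough for a coarse threshold."""
    pts = [p for p in points_ndc if p is not None]
    if not pts:
        return 0.0
    # Clamp the 2D bounding rectangle of the corners to the screen.
    x0 = max(-1.0, min(p[0] for p in pts))
    x1 = min(1.0, max(p[0] for p in pts))
    y0 = max(-1.0, min(p[1] for p in pts))
    y1 = min(1.0, max(p[1] for p in pts))
    if x1 <= x0 or y1 <= y0:
        return 0.0
    return ((x1 - x0) * (y1 - y0)) / 4.0  # full NDC screen area is 2x2

# Hypothetical use in the CPU-side script:
# enable_effect = screen_coverage(projected_corners) > 0.5
```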

Edited by Meltac

kalle_h    2464


I have made several post-processing effects that affect only certain areas on the game level map while leaving others untouched. One prominent example would be reflective surfaces, say the tile floor in some lab casting diffuse light reflections. The reflection stuff is made by a post-process shader. Implementation details of that post-process are off-topic here.

As I have no means to select which areas of the map have a reflective floor by material, color, or any other 2D or 3D property, I need to manually define arrays of coordinate sets for those areas where the post-process should be executed. Then my CPU-side script should check whether what the player currently sees is "mostly" part of such a defined coordinate set (e.g. tile floor to be rendered reflective), and if so, set an engine variable that will be read by the GPU-side shader to enable the according post-process. It's not an exact match then, because the percentage of screen coverage decides whether to enable a specific shader effect, but it's an approximate approach.

Do you have access for any gbuffer data?

Meltac    508

Do you have access for any gbuffer data?

No, as I said, no DirectX API or other raw 3D / GPU instructions. Only (repeated for the 3rd time now):

• Camera world position
• Camera direction
• Camera FOV
• 2 Box corner world coordinates (left-bottom-front, right-top-back)

plus the mentioned functions to calculate the distance and view angle to a given world space point.

Is it really that hard?

Edited by Meltac

Hodgman    51234

Do you have access for any gbuffer data?

No, as I said, no DirectX API or other raw 3D / GPU instructions. Only (repeated for the 3rd time now):

• Camera world position
• Camera direction
• Camera FOV
• 2 Box corner world coordinates (left-bottom-front, right-top-back)
plus the mentioned functions to calculate the distance and view angle to a given world space point.

Is it really that hard?
He was asking about the inputs to your post-processing shaders - whether you get each pixel's diffuse colour, specular, normal, depth, etc., or whether you just get the final 'lit' pixel colours.

belfegor    2834

...
half4 _P = tex2Dproj(s_position, I.tc0);
...
...tex2Dlod(s_normal, float4(texCoord + bdelta, 0, 0))...


I see that you have access to gbuffer data.

Meltac    508

Oh, sorry. I might have misunderstood the question.

So yes in my post-process pixel shader I can access:

- color

- normal

- position (not sure in what space; I'd guess view space)

But how would that help in any way, when I don't have any means to fill or alter that g-buffer data on the host application side, where I need to make the distinction?

Edited by Meltac

belfegor    2834

It's the view-space position (stored in an A16B16G16R16F texture). You do not modify the g-buffer; you can transform your point into view space and compare z to see if it is visible. But I am not sure exactly what you want to do, so I'll let someone else propose solutions.

Edited by belfegor
