What exactly are you trying to do again?
Not sure if you're kidding me, or what's so hard to understand after all my descriptions... but well, OK:
As written in my initial post, I want my host application to decide which (pixel) shader to execute, depending on whether the camera's view mainly shows (i.e., with a high percentage of screen coverage) the inside of an area defined by an (imaginary) cube/box. It is not sufficient to know whether the camera/player is located inside that cube; I need to know whether what is visible on screen is the inside of that cube.
Think of a cubic area of radioactivity that should trigger some noise post-process effect when the player is inside or near that area, but only while he is facing/seeing it.
I do not want to use frustum culling to find the intersection between the view frustum and the box, which would be the "classic" approach to this task. The main reason is that my host app does not give me near and far planes, only the FOV, plus (by calculation) the distance and angle to any point on the map.
I do not want to use Hodgman's idea either, because my host app knows nothing about DirectX calls, which that approach would require.
So I need some other way to check what the camera currently sees, one that requires nothing but what I actually have in my application, namely:
- Camera world position
- Camera direction
- Camera FOV
- 2 box corner world coordinates (left-bottom-front and right-top-back)
plus the aforementioned functions to calculate the distance and view angle to a given world-space point.
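For what it's worth, one way to estimate screen coverage with only those inputs (no frustum planes, no graphics-API calls) is to Monte-Carlo it: cast a grid of sample rays spanning the FOV from the camera position and count what fraction hits the box via a standard slab ray-AABB test. This is only a sketch of that idea, not anyone's confirmed solution; the function names, the 16x16 sample grid, the world-up vector (0,1,0), and treating the FOV as both the horizontal and vertical angle are all my own assumptions:

```python
import math

def normalize(v):
    l = math.sqrt(sum(c * c for c in v))
    return tuple(c / l for c in v)

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def ray_hits_aabb(origin, direction, box_min, box_max):
    """Slab test: True if the ray hits the AABB in front of the origin."""
    tmin, tmax = 0.0, float("inf")
    for i in range(3):
        if abs(direction[i]) < 1e-9:
            # Ray parallel to this slab: must already lie between the planes.
            if origin[i] < box_min[i] or origin[i] > box_max[i]:
                return False
        else:
            t1 = (box_min[i] - origin[i]) / direction[i]
            t2 = (box_max[i] - origin[i]) / direction[i]
            tmin = max(tmin, min(t1, t2))
            tmax = min(tmax, max(t1, t2))
    return tmax >= tmin

def box_coverage(cam_pos, cam_dir, fov_deg, box_min, box_max, samples=16):
    """Approximate fraction of the screen covered by the box (0.0 to 1.0)."""
    forward = normalize(cam_dir)
    # Assumes the camera never looks straight up or down (world up = +Y).
    right = normalize(cross(forward, (0.0, 1.0, 0.0)))
    up = cross(right, forward)
    half = math.tan(math.radians(fov_deg) / 2.0)
    hits = 0
    for iy in range(samples):
        for ix in range(samples):
            # Map the grid cell center to [-1, 1] on the image plane.
            x = (2.0 * (ix + 0.5) / samples - 1.0) * half
            y = (2.0 * (iy + 0.5) / samples - 1.0) * half
            d = normalize(tuple(f + x * r + y * u
                                for f, r, u in zip(forward, right, up)))
            if ray_hits_aabb(cam_pos, d, box_min, box_max):
                hits += 1
    return hits / (samples * samples)
```

The host app could then compare the returned fraction against a threshold (say 0.5) to decide whether to switch shaders; 16x16 = 256 slab tests per frame should be cheap enough, and the resolution can be raised if the estimate is too coarse.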
Is that clear now? Thanks for any suggestions.