cebugdev

OpenGL: mix 2D and 3D rendering question


Hi all, 

 

A little background: our current OpenGL-based system uses a 2D orthographic projection to draw 2D elements on screen, such as images, text, etc.

When adding objects, coordinates are specified in screen space (top left is 0,0 and bottom right is width,height).

For the upcoming upgrade we want to add a rotation feature to these objects. However, we can't do that with an orthographic projection, so what I did was add a perspective projection as well as a camera matrix so the object can be drawn in 3D and rotated. But in 3D it takes a 3D coordinate, not the screen-space coordinate required by the specification.

So what I did is accept the 2D X,Y coordinate as input, unProject it, and use the result as the new base X,Y coordinate for my object.

My problem is I cannot seem to make it work.

Here is the code (stripped down for simplicity):

    // Projection matrix, 4:3 aspect ratio
    glm::mat4 projectionMatrix = glm::perspective(glm::radians(45.0f),
        4.0f / 3.0f, 0.1f, 100.0f);

    // Camera matrix
    glm::mat4 viewMatrix = glm::lookAt(
        glm::vec3(0, 0, 5), // camera is at (0,0,5) in world space; our triangle is at z = 0
        glm::vec3(0, 0, 0), // and looks at the origin
        glm::vec3(0, 1, 0)  // head is up (set to (0,-1,0) to look upside-down)
        );

    glm::mat4 model = glm::mat4(1.0f); // identity

    // Sample rectangle input (X, Y, X2, Y2) in screen space is (0, 0, 100, 100).
    // I set Z to 0 because screen space does not have a Z.
    // 'viewport' is the glm::vec4(x, y, width, height) of the GL viewport, defined elsewhere.
    glm::vec3 un   = glm::unProject(glm::vec3(0, 0, 0),     viewMatrix, projectionMatrix, viewport);
    glm::vec3 un_2 = glm::unProject(glm::vec3(100, 100, 0), viewMatrix, projectionMatrix, viewport);

    // Result:
    // un   = {x=-0.0552284457 y=-0.0414213315 z=4.90000010 ...}
    // un_2 = {x=-0.0414213352 y=-0.0276142210 z=4.90000010 ...}


So the plan here is to use the unprojected result to construct a triangle and apply transformations such as rotation, without breaking the existing code that accepts 2D screen-space coordinates.

In summary: accept 2D coordinates and draw a 3D object at that spot, with transformations applied.

With the code above, the result appears to lie on the near clipping plane, and when I draw a rectangle using those points it is not visible on screen (the camera is at Z = 5).

Any idea how to do this? I know that Z is a player here somewhere, but I don't know how. The Z in the results above sits on the near clipping plane; I just want the object to be visible on screen and to appear at the specified 2D spot.
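
For illustration only, here is a minimal sketch of that idea with an explicit window-space depth, so the unprojected point lands on the z = 0 plane instead of the near plane. The helper name and the 800x600 viewport in the usage comment are made up for the example; it assumes the same camera and projection as above:

    #include <glm/glm.hpp>
    #include <glm/gtc/matrix_transform.hpp>

    // Maps a top-left-origin screen coordinate onto the world-space plane z = 0.
    glm::vec3 screenToWorldOnPlane(glm::vec2 screen, const glm::vec4& viewport)
    {
        glm::mat4 projectionMatrix = glm::perspective(glm::radians(45.0f), 4.0f / 3.0f, 0.1f, 100.0f);
        glm::mat4 viewMatrix = glm::lookAt(glm::vec3(0, 0, 5), glm::vec3(0, 0, 0), glm::vec3(0, 1, 0));

        // Project any point on the target plane (z = 0) to find its window-space depth.
        // With this camera looking straight down -Z, every point on that plane shares the same depth.
        float planeDepth = glm::project(glm::vec3(0.0f), viewMatrix, projectionMatrix, viewport).z;

        // Unproject at that depth. Note the Y flip: glm expects window Y measured
        // from the bottom, while the input uses a top-left origin.
        return glm::unProject(glm::vec3(screen.x, viewport.w - screen.y, planeDepth),
                              viewMatrix, projectionMatrix, viewport);
    }

    // Usage (viewport size is illustrative):
    // glm::vec3 world = screenToWorldOnPlane(glm::vec2(100, 100), glm::vec4(0, 0, 800, 600));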

Let me know if you have any ideas on how to tackle this black magic of a problem. Thank you in advance.

51 minutes ago, cebugdev said:

unProject it and use the result as my new base X, Y

I get why you'd do this in some scenarios, but it strikes me as a weird approach to a relatively simple problem. The game I'm currently working on has 3D elements (the world) with 2D elements on top of them (the UI). The two use entirely different shaders and transforms - when drawing 2D elements, I skip the perspective transform altogether rather than transforming and then having to un-transform. The 3D stuff gets transformed the same as anyone else would do it, some variation of local, world, camera, perspective transforms, etc., but the 2D stuff simply gets a single transform to go from my reference-sized "ui space" to device space, or whatever the name for that space is.
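
For reference, a minimal sketch of that kind of split, assuming GLM and an arbitrary 1280x720 reference UI size (both choices are illustrative only):

    #include <glm/glm.hpp>
    #include <glm/gtc/matrix_transform.hpp>

    // 3D pass: the usual model -> view -> projection chain, fed to the 3D shader.
    glm::mat4 proj3D  = glm::perspective(glm::radians(45.0f), 4.0f / 3.0f, 0.1f, 100.0f);
    glm::mat4 view3D  = glm::lookAt(glm::vec3(0, 0, 5), glm::vec3(0, 0, 0), glm::vec3(0, 1, 0));
    glm::mat4 model3D = glm::mat4(1.0f);
    glm::mat4 mvp3D   = proj3D * view3D * model3D;

    // 2D pass: a single orthographic transform from a fixed "ui space"
    // (top-left origin, 1280x720 reference size) straight to clip space, fed to the 2D shader.
    glm::mat4 uiTransform = glm::ortho(0.0f, 1280.0f, 720.0f, 0.0f, -1.0f, 1.0f);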

Unless I'm misunderstanding what you're trying to do.


On 10/5/2017 at 9:59 PM, trjh2k2 said:

I get why you'd do this in some scenarios, but it strikes me as a weird approach to a relatively simple problem. The game I'm currently working on has 3D elements (the world) with 2D elements on top of them (the UI). The two use entirely different shaders and transforms - when drawing 2D elements, I skip the perspective transform altogether rather than transforming and then having to un-transform. The 3D stuff gets transformed the same as anyone else would do it, some variation of local, world, camera, perspective transforms, etc., but the 2D stuff simply gets a single transform to go from my reference-sized "ui space" to device space, or whatever the name for that space is.

Unless I'm misunderstanding what you're trying to do.

The project I'm working on now is not a game. Originally everything was rendered in 2D using an OpenGL orthographic projection: inputs are specified as 2D X,Y coordinates in screen space, and the element is positioned there, much like UI elements are positioned in games.

It's just that lately they want to add a 3D rotation feature to those 2D elements without breaking the original specification that the user specifies 2D coordinates to place an object on the screen. Since you cannot perform the full rotation, especially a rotation around the Y axis, with an orthographic projection, I "enabled" 3D mode by providing a view and projection matrix while still taking 2D coordinates as input (rough sketch below).
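
The anchor position and the 30-degree angle in this sketch are placeholders, just to show the intent:

    #include <glm/glm.hpp>
    #include <glm/gtc/matrix_transform.hpp>

    // 'anchor' stands for the world-space position obtained from the unProject step.
    glm::vec3 anchor(0.0f);

    // Translate to the anchor, then rotate around the Y axis (placeholder angle).
    glm::mat4 model = glm::translate(glm::mat4(1.0f), anchor)
                    * glm::rotate(glm::mat4(1.0f), glm::radians(30.0f), glm::vec3(0, 1, 0));

    // mvp = projectionMatrix * viewMatrix * model;  // same view/projection as in the first post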

Hope I explained it clearly.

