Draw overlay quad/shapes when background quad is zoomed in

This topic is 400 days old, which is more than the 365-day threshold we allow for new replies. Please post a new topic.

If you intended to correct an error in the post then please contact us.


Hi,

 

I'm trying to draw an overlay quad/shape on top of another quad.

 

My background quad may already be zoomed in (using an orthographic camera) when I draw my overlay object.

So what I really want is to draw the overlay shape in a way that takes into account how far I've already zoomed in.

 

Currently the overlay doesn't expand/enlarge when I zoom in, so it stays the same size.

 

Do I need to enlarge the shape manually before drawing, or can it all be done in the shader?

 

Any suggestions on how I could approach this?

 

Note: using SharpDX & C# & DirectX 11.

A picture of what you're trying to do might help.

 

But if I'm picturing this correctly, you could apply a scaling matrix to the quad, where the amount of scaling is derived from the difference between the last frame's zoom value and the current frame's zoom value (really, anything that can serve as a scalar for the overlay's scale).

 

Using a scaling matrix is pretty simple, too. Something like this:

worldMatrix = scale() * rotation() * translation();
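
To make the composition above concrete, here is a minimal language-agnostic sketch in Python (plain lists, row-vector convention; the function names mirror the line above and are illustrative, not SharpDX API calls):

```python
import math

def mat_mul(a, b):
    # Multiply two 3x3 matrices stored as lists of rows.
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def scale(sx, sy):
    return [[sx, 0, 0], [0, sy, 0], [0, 0, 1]]

def rotation(theta):
    c, s = math.cos(theta), math.sin(theta)
    return [[c, s, 0], [-s, c, 0], [0, 0, 1]]

def translation(tx, ty):
    return [[1, 0, 0], [0, 1, 0], [tx, ty, 1]]

def transform(p, m):
    # Apply matrix m to point p (row vector, homogeneous coordinates).
    v = [p[0], p[1], 1]
    out = [sum(v[k] * m[k][j] for k in range(3)) for j in range(3)]
    return (out[0], out[1])

# world = scale * rotation * translation
# (zoom factor 2, no rotation, shift by (10, 0))
world = mat_mul(mat_mul(scale(2, 2), rotation(0)), translation(10, 0))
print(transform((1, 1), world))  # (1,1) scaled to (2,2), then shifted -> (12.0, 2.0)
```

Tying the overlay's scale factor to the camera's zoom factor is what keeps the two quads visually in sync.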

Marcus Hansen

Edited by markypooch


Thanks for the reply @markypooch

 

I tried to upload some screenshots, but for some reason they won't upload (no error given), so I've uploaded them to my Dropbox.

 

Screenshot 1 link: scenario where the overlay quads (in yellow) are drawn on the background quad.

 

Screenshot 2 link: scenario where I have zoomed in and then drawn my overlay (in yellow).

 

Also,

 

My quads are drawn using actual screen coordinates, which are transformed to world space in the shader.

All the quads, i.e. both background and overlay, use the same matrices.
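
(For readers following along: the screen-to-world step described here boils down to the standard orthographic mapping from pixel coordinates to normalized device coordinates. A small sketch of that mapping, in Python rather than the poster's HLSL, with hypothetical names:)

```python
def screen_to_ndc(x, y, width, height):
    """Map pixel coordinates (origin top-left, y down) to Direct3D
    normalized device coordinates (origin centre, x right, y up),
    as an orthographic off-centre projection would."""
    ndc_x = (2.0 * x / width) - 1.0
    ndc_y = 1.0 - (2.0 * y / height)
    return ndc_x, ndc_y

print(screen_to_ndc(0, 0, 800, 600))      # top-left -> (-1.0, 1.0)
print(screen_to_ndc(400, 300, 800, 600))  # centre   -> (0.0, 0.0)
```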

 

However, at this point I can't see how a scaling matrix fits as a solution.

Would you kindly elaborate?

 

Thanks


Well, now that I see what you're trying to do: yeah, a scaling matrix isn't much of a solution. I was mainly going off this:

 

 

So what I really want is to draw the overlay shape in a way that takes into account how far I've already zoomed in.

Currently the overlay doesn't expand/enlarge when I zoom in, so it stays the same size.

 

A scaling matrix will scale the quad by some magnitude of your choosing. That is, it can enlarge the quad itself the more you zoom in on the license plate. But it all depends on what you're trying to do.

 

If you're expecting an algorithm to run through all sorts of different pictures of cars with license plates and occlude them, you may want to look into an image-detection algorithm; if that's the case, I'll pass this topic off to someone with more knowledge in that area.

 

Marcus Hansen


The way I zoom is that I try to keep the point under the mouse at the same position by moving the quad left/right/up/down as needed.
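
(The zoom-about-cursor adjustment described here can be written in a few lines. A sketch in Python, assuming a simple per-axis mapping `screen = world * zoom + offset`; the names are illustrative, not from the poster's code:)

```python
def zoom_about_point(offset, zoom, new_zoom, pivot):
    """Return the new offset so that the world point currently under
    `pivot` (screen coordinates) stays under it after the zoom change.
    Assumes screen = world * zoom + offset on each axis."""
    ox, oy = offset
    px, py = pivot
    k = new_zoom / zoom
    # Solve: world_under_pivot * new_zoom + new_offset == pivot
    return (px - (px - ox) * k, py - (py - oy) * k)

# Zooming from 1x to 2x about the screen point (100, 100):
print(zoom_about_point((0, 0), 1.0, 2.0, (100, 100)))  # (-100.0, -100.0)
```

Sanity check: the world point under the pivot was `(100 - 0) / 1 = 100`; after the change it maps to `100 * 2 + (-100) = 100`, i.e. it stays put.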

 

Also, I wasn't expecting an image-detection algorithm. The background image comes from a live camera feed.

I haven't rendered to an offscreen texture before, but I can look into it.

However, that brings me to my next question.
The yellow quads I drew were a simple example of colour-filled quads; I also apply different effects, such as pixelating or blurring the region the yellow quad represents.

Could those effects still be achieved when rendering to an offscreen texture, as per your suggestion?

Currently, how I render:
1. Draw the background quad.
2. Draw my yellow quads.
3. Present the buffer (or invalidate the surface, since I'm using D3DImage as part of WPF).

To render to an offscreen texture:
1. Have two render targets.
2. Draw the background quad to the texture render target (not my main back buffer).
3. Draw my yellow quads as above.
4. Use the offscreen render target as a shader resource view.
5. Sample this shader resource view to update my back buffer (my main render target).
6. Present the back buffer or invalidate the surface, as above.

Are my steps correct, or am I approaching this completely the wrong way?
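
(The two-pass flow in the numbered steps above can be sketched conceptually. Here Python 2D arrays stand in for render targets; none of this is SharpDX API, it only illustrates the ordering of the passes:)

```python
def make_surface(w, h, clear=(0, 0, 0)):
    # A render target is conceptually just a 2D grid of pixels.
    return [[clear for _ in range(w)] for _ in range(h)]

def draw_quad(surface, x, y, w, h, colour):
    # Stand-in for a draw call targeting whichever surface is bound.
    for row in range(y, y + h):
        for col in range(x, x + w):
            surface[row][col] = colour

# Pass 1: bind the offscreen target and draw the scene into it.
offscreen = make_surface(8, 8)
draw_quad(offscreen, 0, 0, 8, 8, (10, 10, 10))   # background quad
draw_quad(offscreen, 2, 2, 3, 3, (255, 255, 0))  # yellow overlay quad

# Pass 2: bind the back buffer and sample the offscreen texture.
# (Here a straight copy; in D3D11 this is a full-screen quad that
# samples the offscreen target through a shader resource view.)
backbuffer = make_surface(8, 8)
for row in range(8):
    for col in range(8):
        backbuffer[row][col] = offscreen[row][col]

print(backbuffer[3][3])  # (255, 255, 0): the overlay survived the copy
```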

Could those effects still be achieved when rendering to an offscreen texture, as per your suggestion?

 

By and large, "offscreen" is simply another surface you can render to, just like the back buffer. There's a little work in setting it up and copying it over to the back buffer, but that's about it.

Edited by Norman Barrows


Thanks for the update, @Norman Barrows.

 

Yes, I agree, after researching and reading up on it.

 

However, from what I've read, rendering to a texture can have performance issues.

If that's the case, what's the main reason for / best-case scenario of its usage?

 

Also, two points I'm still confused about:

1. How do I copy it over to the back buffer in my case, i.e. invalidating the surface, since I'm using D3DImage (part of WPF)?

2. How would my orthographic camera work in this case?

 

I'm still investigating both points, as I've only just started implementing this in code.

If that's the case, what's the main reason for / best-case scenario of its usage?

 

When it's the best/only way to achieve the desired effect, whatever that effect may be.

 

Often it's used for render-to-texture: you render to an offscreen surface, use that surface to create a texture, then use that texture in a subsequent draw call to texture a mesh. A rear-view mirror in a driving simulator is a classic example. You create an offscreen surface, point the camera backwards, render the scene to the offscreen surface, then point the camera forward again, render the scene, and use the offscreen surface as the texture for the rear-view mirror. And voila! You have real working rear-view mirrors. Pretty cool, huh?

 

But just from that brief description, you can see that it's jumping through a number of hoops, hence the performance hit. It's usually an effect that's used sparingly.

 

As with everything, the proof is in the pudding: you won't know whether it will work for you until you try it. All you can do is do your research, become familiar with the methods available, and try the ones that look promising.

Edited by Norman Barrows

How do I copy it over to the back buffer in my case, i.e. invalidating the surface, since I'm using D3DImage (part of WPF)?

 

Can't help you there; I don't use WPF. I don't even know whether WPF can do it. The DirectX API can, for sure, but I'm not sure about Windows Presentation Foundation.

 

From:

https://msdn.microsoft.com/en-us/library/aa970268(v=vs.110).aspx

 

"Windows Presentation Foundation (WPF) in Visual Studio 2015 provides developers with a unified programming model for building modern line-of-business desktop applications on Windows."

 

So yeah, it's a business-app API. I'd be kind of surprised if it supported things like render-to-texture. You may have to use the underlying DirectX API.


 

 

How would my orthographic camera work in this case?

 

No different, really. Rendering to an offscreen surface is simply a way to give yourself multiple back buffers you can target. Once you set the target surface, everything else is normal rendering, i.e. set the view matrix and lights, and draw everything (lights! camera! action!).

 

Of course, then all you have is this "bitmap" sitting in RAM (or, most likely, video RAM), which you then have to do something with, which probably means copying it somewhere so it can be used. So if you created an offscreen surface in system RAM (slow!), it would most likely copy the back buffer to system RAM, then copy it back to video texture memory when you used the surface to create a texture. Obviously you want to use the fastest RAM possible for offscreen surfaces, or whichever RAM requires the least copying of things from here to there.

 

And don't forget the inherent performance hit from the fact that each offscreen render is basically another entire render pass to draw a single frame. So you have to render TWICE! Or more, if you use more than one offscreen surface in the scene. If you render twice, your FPS will more or less drop by 50%, depending on how much you draw. That's usually the big performance hit of render-to-surface: it's an additional call to render(). It's somewhat similar to stencil-buffer shadow-volume techniques, where you'd have to render THREE times: once to add it in, once to subtract it out, and once using the result as a bitmask. As I recall, anyway; something like that. <g>

Edited by Norman Barrows


Thanks for the reply, @Norman Barrows.

 

After reading up and implementing it in code, I can get render-to-texture working with WPF (as I used previously), with the DirectX API under the hood.

 

However, I still have an issue when I zoom in.

 

Issue:

I see the image flashing and weirdly shaped quads being rendered.

I've attached a zip with three videos in it (just pretend you don't see the watermark in the videos).

 

Download Link:  https://dl.dropboxusercontent.com/u/5196736/Users/Kartik/DirectX/Dx%20Capture.zip

 

Any ideas as to why this issue would occur?


OK, I've figured out why the images were being rendered with weirdly shaped quads.

 

The reason is that my WVP matrices were being transposed incorrectly on every alternate render call (an already-transposed matrix being transposed again), since they need to be in row-major order.
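
(This failure mode is easy to reproduce in any language: transposing a matrix twice returns the original, so a matrix accidentally re-transposed on alternate frames flips between layouts every other frame. A minimal sketch in Python, plain lists:)

```python
def transpose(m):
    # Swap rows and columns of a matrix stored as a list of rows.
    return [list(row) for row in zip(*m)]

wvp = [[1, 2], [3, 4]]

# Correct: transpose exactly once before uploading to the shader.
uploaded = transpose(wvp)

# Bug: transposing an already-transposed matrix undoes the fix,
# so every other frame the shader receives the wrong layout.
double = transpose(transpose(wvp))
print(double == wvp)  # True: the second transpose cancels the first
```

The fix is to keep the untransposed matrix as the source of truth and transpose a copy at upload time, rather than transposing the stored matrix in place each frame.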

Edited by dave09cbank


OK, I have progressed further with rendering to texture.

 

At the moment, my rendered texture isn't rendered at the correct size. The size changes once I zoom by updating the Z value on the view-projection matrix, and it eventually stops changing once it reaches the correct size.

 

For whatever reason, the texture is not filling the entire quad, and I don't know why.

How I know this: in my shader, I just return a single colour [grey, float4(0.5f, 0.5f, 0.5f, 1)] instead of the sampled colour.

 

What I conclude from this: either my texture is not the same size as the quad, or something else is going on.

 

The issue can be seen via this link.

 

Any suggestions, anyone?

Edited by dave09cbank


Ignore my previous post (#14).

 

It was caused by my own mistake.

 

The reason for the issue was that the texture to which I render my entire scene wasn't being initialised with the correct size, which caused the problem above.

Posting in case it helps someone else.
