

VanillaSnake

Member Since 27 Mar 2006
Offline Last Active Today, 12:18 AM

Posts I've Made

In Topic: How to control a wacom pen in software?

Today, 12:20 AM

Wacom publishes APIs and sample code for integrating with them. Judging by the examples, it should be fairly straightforward to inject the requisite structs.

 

Yeah, I went through a few of the examples they had before making this thread, and it does seem fairly straightforward, but I wasn't sure that PS even uses the Wacom API. It supports a whole range of tablets from various manufacturers, so maybe it's using some Windows API as a catch-all.

 

@Servant of the Lord, hmm, that would probably work; I'm going to look into it if using their native API doesn't pan out.

 

I kind of wanted the easiest method, which is why I'm even using Photoshop in the first place. It's not too difficult to write a simple barebones drawing program, but I want this project done in less than a week, so I'm looking to save time anywhere I can. Thanks for the suggestions nevertheless; I'll try the Wacom API and see where it takes me.


In Topic: How to control a wacom pen in software?

Yesterday, 07:31 PM

I think my post was confusing; what I want to do is have an app draw in Photoshop. I can do that by sending Photoshop WM_MOUSEMOVE and WM_LBUTTONDOWN messages and it will draw, but I would like to incorporate pressure and tilt into the commands. Since I own a Wacom tablet and I know Photoshop works with it, I was wondering if I can send the same messages that the Wacom pen sends to Photoshop. I've looked into the Wacom API and it has a WT_PACKET message, which seems to do just that; I was just wondering whether it's going to work, or whether there are other ways, since I'm not even sure Photoshop is listening for that message. I could just use a message tracker and see what's being sent, but I wanted to check here first.
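For what it's worth, here's a minimal sketch of the plain mouse-message route, using only standard Win32 calls (FindWindowW, PostMessageW). The window title string is just a placeholder; the real Photoshop canvas window would have to be located with something like Spy++, and whether Photoshop honours posted messages at all is exactly the open question. These messages carry no pressure or tilt; that data normally travels through Wintab, where an app reacts to WT_PACKET by pulling the packet with WTPacket(), so simply posting WT_PACKET at Photoshop may not be enough on its own.

    // Sketch: drive a stroke in another application's window with plain mouse
    // messages. No pressure/tilt is carried this way.
    #include <windows.h>

    int main()
    {
        // Placeholder title: the actual Photoshop canvas window must be found
        // with Spy++ / EnumChildWindows; this string is only for illustration.
        HWND hwnd = FindWindowW(nullptr, L"Untitled-1 @ 100% (RGB/8) - Adobe Photoshop");
        if (!hwnd) return 1;

        POINT start = { 100, 100 };   // client-area coordinates
        PostMessageW(hwnd, WM_LBUTTONDOWN, MK_LBUTTON, MAKELPARAM(start.x, start.y));
        for (int i = 0; i < 50; ++i)  // drag 50 pixels to the right
        {
            PostMessageW(hwnd, WM_MOUSEMOVE, MK_LBUTTON,
                         MAKELPARAM(start.x + i, start.y));
            Sleep(5);
        }
        PostMessageW(hwnd, WM_LBUTTONUP, 0, MAKELPARAM(start.x + 50, start.y));
        return 0;
    }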


In Topic: Why does this code work? (Drawing in screen space)

28 May 2015 - 02:39 PM

 

 
So the vertex shader transforms camera space into clip space under the hood, without me specifying it to do so? And then this clip space gets transformed into viewport space, correct? And I'm supplying the coordinates already in clip space. But I'm still not sure why I'm getting a correct rectangular view that isn't stretched if I'm specifying a width and height of 2, making it a square? I'd understand if clip space went from 0 to 1, in which case 1 would just be 100% coverage, but 0 to 2?

In this case there's no notion of camera space at all: you supply clip-space coordinates to the vertex shader, and the pipeline after the vertex shader expects exactly such coordinates, so no transformation is needed. The viewport transform is then applied, mapping these coordinates into render-target space; that part is correct.
Clip-space coords range from -1 to 1 on the X and Y axes, and your quad spans that exact same range. As clip space is mapped to the viewport, so is your quad, effectively becoming a rectangle covering the entire viewport.
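As a quick numeric check of the above (a standalone sketch, not tied to any particular API, with an assumed 800x600 target), applying the usual D3D-style NDC-to-viewport mapping to the quad's corners shows why the 2-unit-wide quad fills the whole target without stretching: x and y are scaled by width/2 and height/2 independently.

    #include <cstdio>

    // Map each NDC corner (x, y in [-1, 1]) to pixels on an 800x600 target,
    // D3D-style (y flipped).
    int main()
    {
        const float w = 800.0f, h = 600.0f;
        const float corners[4][2] = { {-1, 1}, {1, 1}, {-1, -1}, {1, -1} };
        for (const auto& c : corners)
        {
            float sx = (c[0] + 1.0f) * 0.5f * w;   // -1 -> 0, +1 -> 800
            float sy = (1.0f - c[1]) * 0.5f * h;   // +1 -> 0, -1 -> 600
            std::printf("NDC (%g, %g) -> pixel (%g, %g)\n", c[0], c[1], sx, sy);
        }
        return 0;
    }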

Edit: sorry, another question: is it possible to go from world space directly into clip space? Can I just apply a clip-space transform to the object in world space to get it there?

That's exactly what the view and projection transforms do: the view matrix transforms the scene so that the camera sits at the origin, then the projection transform maps the camera's view volume to the canonical view volume (-1, -1, 0) to (1, 1, 1), producing clip-space coordinates. You can also multiply the two matrices together, which gives you a single transform that maps vertices from world space directly into clip space.
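For illustration, a minimal DirectXMath sketch of that concatenation; the eye position, field of view and near/far values are arbitrary placeholders.

    #include <DirectXMath.h>
    using namespace DirectX;

    // Build a single world->clip transform by concatenating view and projection.
    XMMATRIX BuildWorldToClip(float aspectRatio)
    {
        XMMATRIX view = XMMatrixLookAtLH(
            XMVectorSet(0.0f, 0.0f, -5.0f, 1.0f),   // eye
            XMVectorSet(0.0f, 0.0f,  0.0f, 1.0f),   // look-at target
            XMVectorSet(0.0f, 1.0f,  0.0f, 0.0f));  // up

        XMMATRIX proj = XMMatrixPerspectiveFovLH(
            XM_PIDIV4, aspectRatio, 0.1f, 100.0f);  // view volume -> clip space

        return view * proj;   // one matrix: world space -> clip space
    }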

 

 

Thank you, that made it much clearer. I also read up on canonical view volumes to get on more solid ground. What I'm still a _little_ hazy about is that you said the projection transform produces clip space. I thought there had to be a separate clip matrix to transform from projection space to clip space? I was wondering where that happens, because I don't remember setting a clip matrix anywhere. Does DX do it under the hood? Or is it some covert function like SetViewport or something?

 

Also, I just worked out a little math yesterday and came up with this matrix to get from world to clip directly. Just as a side note, I'm working in 2D, so Z is always 1. If I have a vector in world space with x, y coords, then in clip space x' = (x/screen_width) * 2 and y' = (y/screen_height) * 2, which makes the vector lie in 0-to-2 space; then I subtract 1 from each to put it into canonical space. So the final matrix comes out to:

2/scr_width   0              -1
0             2/scr_height   -1
0             0               1

 

will this work ok?

 

Edit: A: No, it won't work! The model is not in screen space. Q: What if I make sure the model never exceeds the screen-space dimensions in the world, i.e. the world coordinates are always greater than 0 and less than screen_width/screen_height? I want to transform a 2D sprite in a 2D software rasterizer, so do I really need the projection, camera space, etc.? Or would it still be better to use them?
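For reference, a small sketch of the matrix above applied in code (the function name and sample values are just illustrative). It only works under the assumption from the edit, i.e. that world coordinates already lie in [0, scr_width] x [0, scr_height]; also note that D3D-style NDC has +y pointing up, so if world y grows downward an extra y flip would be needed.

    // Apply the 3x3 world->canonical transform to a 2D point (x, y, 1).
    struct Vec3 { float x, y, w; };

    Vec3 worldToNdc(float x, float y, float scrWidth, float scrHeight)
    {
        Vec3 out;
        out.x = (2.0f / scrWidth)  * x - 1.0f;
        out.y = (2.0f / scrHeight) * y - 1.0f;
        out.w = 1.0f;
        return out;
    }
    // e.g. worldToNdc(0, 0, 800, 600)     -> (-1, -1)
    //      worldToNdc(800, 600, 800, 600) -> ( 1,  1)
    //      worldToNdc(400, 300, 800, 600) -> ( 0,  0)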


In Topic: Why does this code work? (Drawing in screen space)

28 May 2015 - 12:15 AM

 


Also I read that I have to account for the pixel center/cell center and subtract 0.5f from everything, but my textured quad is displayed correctly as it is? Why?

This was changed in D3D10: pixel and texel centers now line up, so you don't have to make the adjustment manually.
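Purely for context, a rough sketch of what the old manual D3D9-era adjustment looked like for pre-transformed screen-space vertices (the struct and function here are hypothetical, just to show the half-pixel shift that D3D10+ makes unnecessary):

    // D3D9-era "map texels to pixels" fix: shift screen-space positions by
    // half a pixel so texel centers line up with pixel centers.
    struct Vertex2D { float x, y, u, v; };

    void applyHalfPixelOffset(Vertex2D* verts, int count)
    {
        for (int i = 0; i < count; ++i)
        {
            verts[i].x -= 0.5f;
            verts[i].y -= 0.5f;
        }
    }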

 

 

I was actually just going to follow up on that, thanks for clearing that up.


In Topic: Why does this code work? (Drawing in screen space)

27 May 2015 - 09:43 PM

The output coordinates of your vertex shader are in what's called clip space; any coordinates outside the (-1, -1, 0) to (1, 1, 1) range after perspective division (dividing the xyz output of the vertex shader by w) get clipped by the pipeline.

After that, a so-called viewport transform is applied, mapping your xy coordinates to (0, 0) through (viewport_width, viewport_height), with y getting flipped.

 

In your case you are already outputting vertices in normalized device coordinates, so your plane shows up correctly after the viewport transform.
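To make the stages concrete, here's a small sketch (struct and function names are just illustrative) of what the fixed-function hardware does with a clip-space position after the vertex shader: the clip test against the volume quoted above, the perspective divide, and the viewport mapping with the y flip.

    struct Float4 { float x, y, z, w; };

    // D3D clip volume test: -w <= x <= w, -w <= y <= w, 0 <= z <= w.
    bool insideClipVolume(const Float4& p)
    {
        return -p.w <= p.x && p.x <= p.w &&
               -p.w <= p.y && p.y <= p.w &&
                0.0f <= p.z && p.z <= p.w;
    }

    // Perspective divide followed by the viewport transform.
    void toScreen(const Float4& p, float vpWidth, float vpHeight,
                  float& sx, float& sy)
    {
        float ndcX = p.x / p.w;                  // perspective divide
        float ndcY = p.y / p.w;
        sx = (ndcX + 1.0f) * 0.5f * vpWidth;     // map [-1, 1] -> [0, width]
        sy = (1.0f - ndcY) * 0.5f * vpHeight;    // y is flipped
    }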

 

Thanks, that was a good explanation. I want to make sure I got it right. So the vertex shader transforms camera space into clip space under the hood, without me specifying it to do so? And then this clip space gets transformed into viewport space, correct? And I'm supplying the coordinates already in clip space. But I'm still not sure why I'm getting a correct rectangular view that isn't stretched if I'm specifying a width and height of 2, making it a square? I'd understand if clip space went from 0 to 1, in which case 1 would just be 100% coverage, but 0 to 2?

 

Edit: sorry, another question: is it possible to go from world space directly into clip space? Can I just apply a clip-space transform to the object in world space to get it there?

