Help/Advice requested for moving and scaling [SOLVED]

Started by Nanoha; 6 comments, last by Nanoha 8 years, 3 months ago

Not sure if this is the most appropriate sub-forum choice (possibly Maths or Mobile), but here we go.

Unfortunately it's not quite as simple as the title suggests. At the basic level I have an app with a quad that I want to move around. Dragging it with one finger is easy. Zooming via pinch with two fingers is also easy enough. Dragging with both fingers while zooming was awkward, but I managed it. Where I get stuck is mixing gestures: dragging with two fingers, releasing, dragging with one finger, zooming out, dragging with two fingers again, and so on. I got it working quite well, but when zoomed in, touching with both fingers causes a jump. Basically it got very complicated very fast, and the more edge cases I try to handle, the more complex it becomes. I am hoping for some general advice on how to deal with this.

Some points/thoughts:

What 'space' should I work in? I have been working in screen space (-1 to +1 on both axes), but my quad actually represents a different space itself, so I could work in that space instead.

I need to be able to zoom in on the centre of the two fingers rather than the origin.

How do I store my values? Position and scale, or a transform matrix? (It will end up as a matrix either way; I can extract position/scale from it if I need to.)

When zooming I found I needed an extra zoom variable that was temporarily applied while using two fingers, then permanently applied on release (this made things messy).

Translation needs to go slower when zoomed in and faster when zoomed out.

I have good input events (touch, release, drag, with pointer indices), so actually getting and understanding the input is simple.

Any advice on how I can implement this in a clean way? As long as I zoom on the origin rather than on the centre of the two touch points, I can do this without issue, but I need centre-of-pinch zooming.

Thanks.

Interested in Fractals? Check out my App, Fractal Scout, free on the Google Play store.


when zoomed in, touching with both fingers causes a jump.

How do you track motion while zooming? Presumably by the centre point between the two fingers?

Anyway, when the second touch first lands (i.e. the zoom/drag starts), don't apply any motion to your objects on that event.

Just save the initial position of the anchor point. On subsequent motion events, calculate the difference between the new anchor position and the initial one, and move your object proportionally to that difference vector.
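As a rough sketch (the handler names and the move_object_by helper are placeholders for whatever your input layer provides):

local touches = {}  -- current position of each active touch, keyed by pointer id
local anchor        -- anchor position saved when the finger count last changed

-- centre point between all active touches (works for one or two fingers)
local function gesture_anchor()
    local x, y, n = 0, 0, 0
    for _, t in pairs(touches) do
        x, y, n = x + t.x, y + t.y, n + 1
    end
    return {x = x / n, y = y / n}
end

local function on_touch_down(id, x, y)
    touches[id] = {x = x, y = y}
    anchor = gesture_anchor()  -- re-anchor; apply NO motion on this event
end

local function on_touch_up(id)
    touches[id] = nil
    if next(touches) then anchor = gesture_anchor() end
end

local function on_touch_move(id, x, y)
    touches[id] = {x = x, y = y}
    local a = gesture_anchor()
    move_object_by(a.x - anchor.x, a.y - anchor.y)  -- placeholder for your own move code
    anchor = a
end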


What 'space' should I work in?

It doesn't really matter. You'll be scaling the motion vector either way; only the coefficients will change.


How do I store my values?

I find it convenient to store location/look_at points and an up vector, with the normalized direction vector and right vector recalculated whenever location/look_at change. It's then trivial to move the camera.
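A minimal sketch of that layout, with the vector helpers written inline rather than taken from any particular library:

local function sub(a, b) return {a[1] - b[1], a[2] - b[2], a[3] - b[3]} end

local function cross(a, b)
    return {a[2] * b[3] - a[3] * b[2],
            a[3] * b[1] - a[1] * b[3],
            a[1] * b[2] - a[2] * b[1]}
end

local function normalize(v)
    local len = math.sqrt(v[1]^2 + v[2]^2 + v[3]^2)
    return {v[1] / len, v[2] / len, v[3] / len}
end

local camera = {location = {0, 0, 5}, look_at = {0, 0, 0}, up = {0, 1, 0}}

-- call whenever location/look_at change
local function update_camera(cam)
    cam.direction = normalize(sub(cam.look_at, cam.location))
    cam.right = normalize(cross(cam.direction, cam.up))
end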


When zooming I found I needed an extra zoom variable that was temporarily applied while using two fingers, then permanently applied on release (this made things messy).

There's no need for a separate zoom variable. On the first touch of the second finger, save the initial distance between the touches.

On subsequent motion events, find the new distance. The relative difference in distance can be treated as the motion of a 'zoom dial', which is directly applied/converted to the camera's scale/fov/whatever.
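In code, roughly (a sketch assuming a camera.zoom where a larger value means zoomed out, matching the Lua sample later in the thread):

local camera = {zoom = 1}

local function distance(a, b)
    local dx, dy = a.x - b.x, a.y - b.y
    return math.sqrt(dx * dx + dy * dy)
end

local start_distance, start_zoom

local function on_second_finger_down(t1, t2)
    start_distance = distance(t1, t2)
    start_zoom = camera.zoom
end

local function on_pinch_move(t1, t2)
    -- fingers spreading apart -> larger distance -> smaller zoom value (zooms in)
    camera.zoom = start_zoom * start_distance / distance(t1, t2)
end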


Translation needs to go slower when zoomed in and faster when zoomed out.

Since you're moving in a plane parallel to the screen, you'll need the camera's up/right vectors, scaled in proportion to the current zoom (possibly nonlinearly, depending on your projection), then multiplied by the touch motion vector and added to the camera position.
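For a purely 2D orthographic view this reduces to scaling the screen-space drag by the current zoom (a sketch, same zoom convention as above):

local camera = {location = {0, 0}, zoom = 2}

local function pan_camera(cam, screen_dx, screen_dy)
    -- the same finger motion covers more world space when zoomed out
    -- (larger zoom value); the sign flips if you move the scene instead
    -- of the camera
    cam.location[1] = cam.location[1] - screen_dx * cam.zoom
    cam.location[2] = cam.location[2] - screen_dy * cam.zoom
end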

Thanks for the reply. I forgot to mention this is entirely 2D. I will try out some of your suggestions.

You mentioned anchor points, and I quite like the idea of anchoring. The thought I had was to project the screen touch point into 'world space', so that those two points would be anchored together:

A_screen * transform_current = A_world; I would know A_screen and the transform and use those to find A_world.

Then when I drag to another point B_screen, it should still map to the A_world I found previously, so now I have:

B_screen * transform_new = A_world

I could then solve to find the new transformation. Since I am only dragging, I know only the position will change, so it should be easy to solve. When scaling, I could use the centre as an anchor point along with the distance between the two touch points, and use those to work out the scale/position. The transformation matrix should only ever be of the form:

[s 0 0 x]
[0 s 0 y]
[0 0 1 0]
[0 0 0 1]

So I should be able to solve that with a little algebra. Is that a sensible solution? I'll never have rotation, only a single scale value and a position/offset.
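If I write the transform as world = s * screen + offset (offset being the x/y column above), the algebra should come out to a couple of lines, something like:

-- one-finger drag: s is fixed; solve for the offset that keeps the
-- previously unprojected A_world under the finger at B_screen
local function solve_drag(s, A_world, B_screen)
    return {A_world[1] - s * B_screen[1],
            A_world[2] - s * B_screen[2]}
end

-- two-finger pinch: the world-space pinch centre and finger distance are
-- fixed at touch-down; solve for the new scale and offset on each move
local function solve_pinch(C_world, world_dist, C_screen, screen_dist)
    local s = world_dist / screen_dist
    return s, {C_world[1] - s * C_screen[1],
               C_world[2] - s * C_screen[2]}
end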


Done it! It's neat and it works very well. Thanks for the tips.


Then when I drag to another point B_screen, it should still map to the A_world I found previously, so now I have:

Ah, I missed the point.

OK, let's consider the case where you store the camera's location and scale parameters rather than the final matrix.

To scale something around a selected centre, you move that centre to the origin, scale around the origin, then move back to the selected centre.

The transform matrix should look like this:

[1 0 x]   [s 0 0]   [1 0 -x]
[0 1 y] * [0 s 0] * [0 1 -y]
[0 0 1]   [0 0 1]   [0 0  1]

Or in its final form:

[s 0 x - s*x]
[0 s y - s*y]
[0 0 1]

Here s is the scale factor and x/y is the scaling centre.

Now, you have the screen coordinates of the scaling touch; unproject it into world space to get the scaling centre x/y. Build that matrix and transform the camera's location with it.

After setting the camera's new location and new scale, the entire scene will move/scale while the screen location of the scaling centre is maintained.

Here's sample code in Lua. The camera stores its location in world space and a zoom factor (the inverse of the scene geometry scale).


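-- builds the scale-around-centre matrix shown above and applies it to a 2D point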
local function scale_around_centre(point, scale, scaling_centre)
    local sm = {
        {scale, 0, scaling_centre[1]-scale*scaling_centre[1]},
        {0, scale, scaling_centre[2]-scale*scaling_centre[2]},
        {0,     0, 1}
    }

    local x,y = point[1], point[2]
    return {
        x*sm[1][1]+y*sm[1][2]+sm[1][3],
        x*sm[2][1]+y*sm[2][2]+sm[2][3]
    }
end

-- current state of camera
local camera = {
    location = {0,0}, -- camera location in world space
    zoom = 2          -- zoom >1 means zoom out
}

local new_zoom = 6 -- desired new zoom
local screen_touch = {0.75,0.25} -- dummy touch in screen space

-- unproject touch
local world_touch = {
    screen_touch[1]*camera.zoom+camera.location[1],
    screen_touch[2]*camera.zoom+camera.location[2]
}

local relative_scale = new_zoom/camera.zoom
local new_location = scale_around_centre(camera.location, relative_scale, world_touch)

-- set camera new parameters
camera.location = new_location
camera.zoom = new_zoom

-- calc new screen location of same world_touch
local new_touch = {
    (world_touch[1]-camera.location[1])/camera.zoom,
    (world_touch[2]-camera.location[2])/camera.zoom
}

-- new_touch should be equal to original screen_touch

Was writing this for too long :)

Check whether this math simplifies anything for you.

I appreciate the response. I worked entirely in world space, and when I tested it, it magically just worked for scaling around a specific point without me having to do anything extra.

Out of curiosity, what do you use to generate those matrix images?



what do you use to generate those matrix images?

It's http://www.wolframalpha.com


Brilliant, thanks.


