Constraining motion in 2D and 3D manipulation
Abstract
Techniques are described for constraining the motion of 3D objects displayed on a 2D display interface. A user places touch points on the 2D display interface to manipulate a displayed object, and each touch point is matched with a contact point on the surface of the object. Motion of the object is restricted by adding penalty terms to an energy equation whose other terms measure the deviation between the screen-space locations of the touch points and those of their matching contact points; each penalty term measures deviation from an ideal value. In response to movement of at least one touch point to a new screen-space location, a transformation of the object is determined by applying an algorithm that operates on the energy equation to reduce the deviations between the screen-space locations of the touch points and those of their matching contact points while also reducing deviation from the ideal value or values set by the penalty term or terms.
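The abstract's energy formulation can be illustrated with a toy solver. Everything here is a hypothetical sketch, not the patent's implementation: the object transformation is reduced to a y-axis rotation plus a translation, projection is a simple pinhole model, penalty terms are supplied as callables, and the "algorithm that operates on the energy equation" is stand-in finite-difference gradient descent.

```python
import numpy as np

def rotation_y(theta):
    """Rotation matrix about the world y-axis."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])

def project(p, focal=1.0):
    """Pinhole projection of a world-space point onto the image plane."""
    return focal * p[:2] / p[2]

def energy(params, contacts_obj, touches, penalties):
    """Touch-matching terms plus penalty terms, as in the abstract: squared
    screen-space deviation of each contact point from its touch point, plus
    penalties measuring deviation from an ideal value."""
    theta, t = params[0], params[1:4]
    R = rotation_y(theta)
    e = sum(np.sum((project(R @ c + t) - s) ** 2)
            for c, s in zip(contacts_obj, touches))
    return e + sum(pen(R, t) for pen in penalties)

def minimize_energy(contacts_obj, touches, penalties, params0,
                    steps=2000, lr=0.05, h=1e-5):
    """Crude finite-difference gradient descent on the energy."""
    p = np.asarray(params0, dtype=float)
    for _ in range(steps):
        g = np.zeros_like(p)
        for i in range(p.size):
            d = np.zeros_like(p)
            d[i] = h
            g[i] = (energy(p + d, contacts_obj, touches, penalties) -
                    energy(p - d, contacts_obj, touches, penalties)) / (2 * h)
        p -= lr * g
    return p
```

For example, with one contact point at the object origin, a touch dragged to screen location (0.1, 0), and a penalty pinning the object's depth near 5, the solver translates the object so the contact point reprojects substantially under the touch while the depth stays near its ideal value.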
20 Claims
1. A computer-implemented method of manipulating a three-dimensional object displayed in a multi-touch display device, the method comprising:
accessing a three-dimensional object-space data set from a computer memory storage device, the three-dimensional object-space data set representing a three-dimensional object in object-space and specifying points on a surface of the three-dimensional object in three dimensions in object-space;
accessing an initial object transformation vector that specifies at least a location and a rotational orientation of the three-dimensional object in world-space;
determining, using at least one computer processor, a three-dimensional world-space data set by applying the initial object transformation vector to the three-dimensional object-space data set, the three-dimensional world-space data set representing the three-dimensional object in world-space and specifying points on the surface of the three-dimensional object in three dimensions in world-space;
accessing, from the computer memory storage device, data representing an image plane in world-space;
determining a two-dimensional screen-space data set corresponding to a three-dimensional view of the three-dimensional object by projecting the three-dimensional world-space data set onto the image plane based on the data representing the image plane, the two-dimensional screen-space data set specifying points on the surface of the three-dimensional object as viewed on the image plane in two dimensions in screen-space;
rendering, on the multi-touch display device, the three-dimensional view of the three-dimensional object based on the two-dimensional screen-space data set;
detecting touching by one or more input mechanisms of a first touch point on the multi-touch display device, the first touch point being located at a first initial screen-space location;
matching the first touch point to a first contact point on the surface of the three-dimensional object, the first contact point being a point on the surface of the three-dimensional object in object-space that, when transformed into world-space using the initial object transformation vector and then projected onto the image plane for rendering, is displayed by the multi-touch display device at the first initial screen-space location of the first touch point;
tracking, based on movement of the one or more input mechanisms while the one or more input mechanisms remain touching the multi-touch display device, movement of the first touch point from the first initial screen-space location to a first final screen-space location;
accessing an object-space constraint point, the object-space constraint point being a point on the surface of the three-dimensional object in object-space;
determining a world-space constraint point by transforming the object-space constraint point using the initial object transformation vector, the world-space constraint point being located at a first world-space location;
determining a new object transformation vector that, when applied to the three-dimensional object-space data set, results in a new three-dimensional world-space data set that, when projected onto the image plane, results in a new two-dimensional screen-space data set corresponding to a new view of the three-dimensional object, the new three-dimensional world-space data set satisfying the constraint that the object-space constraint point, when transformed into world-space using the new object transformation vector, is positioned substantially at the first world-space location, and the new two-dimensional screen-space data set satisfying the constraint that the first contact point, when transformed into world-space using the new object transformation vector and then projected onto the image plane for rendering, is displayed substantially at the first final screen-space location; and
rendering, on the multi-touch display device, the new three-dimensional view of the three-dimensional object based on the new two-dimensional screen-space data set, the new three-dimensional view being a view of the three-dimensional object wherein each contact point remains displayed by the multi-touch display device substantially underneath its corresponding touch point.
View Dependent Claims (2, 3, 4, 5)
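The object-space → world-space → screen-space pipeline recited in claim 1, and the matching of a touch point to a contact point, can be sketched as follows. This is a simplification under assumed conventions, not the patent's method: the transformation vector is represented as a rotation matrix plus a translation, projection is a pinhole model, and matching picks the surface point whose rendered location is nearest the touch, ignoring occlusion.

```python
import numpy as np

def to_world(obj_pts, transform):
    """Apply the object transformation (rotation R, translation t) to the
    object-space data set, yielding the world-space data set."""
    R, t = transform
    return obj_pts @ R.T + t

def to_screen(world_pts, focal=1.0):
    """Project the world-space data set onto the image plane, yielding the
    two-dimensional screen-space data set."""
    return focal * world_pts[:, :2] / world_pts[:, 2:3]

def match_touch(touch_xy, obj_pts, transform):
    """Match a touch point to the contact point whose rendered screen-space
    location is closest to it (occlusion is ignored in this sketch)."""
    screen = to_screen(to_world(obj_pts, transform))
    dists = np.linalg.norm(screen - np.asarray(touch_xy), axis=1)
    return int(np.argmin(dists))
```

A touch placed at the rendered location of a surface point is matched back to that point, which is what lets the later steps require that each contact point stay displayed underneath its touch point.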
6. A computer-implemented method of manipulating a three-dimensional object displayed in a multi-touch display device, the method comprising:
accessing a three-dimensional object-space data set from a computer memory storage device, the three-dimensional object-space data set representing a three-dimensional object in object-space and specifying points on a surface of the three-dimensional object in three dimensions in object-space;
accessing an initial object transformation vector that specifies at least a location and a rotational orientation of the three-dimensional object in world-space;
determining, using at least one computer processor, a three-dimensional world-space data set by applying the initial object transformation vector to the three-dimensional object-space data set, the three-dimensional world-space data set representing the three-dimensional object in world-space and specifying points on the surface of the three-dimensional object in three dimensions in world-space;
accessing, from the computer memory storage device, data representing an image plane in world-space;
determining a two-dimensional screen-space data set corresponding to a three-dimensional view of the three-dimensional object by projecting the three-dimensional world-space data set onto the image plane based on the data representing the image plane, the two-dimensional screen-space data set specifying points on the surface of the three-dimensional object as viewed on the image plane in two dimensions in screen-space;
rendering, on the multi-touch display device, the three-dimensional view of the three-dimensional object based on the two-dimensional screen-space data set;
detecting touching by one or more input mechanisms of a first touch point on the multi-touch display device, the first touch point being located at a first initial screen-space location;
matching the first touch point to a first contact point on the surface of the three-dimensional object, the first contact point being a point on the surface of the three-dimensional object in object-space that, when transformed into world-space using the initial object transformation vector and then projected onto the image plane for rendering, is displayed by the multi-touch display device at the first initial screen-space location of the first touch point;
tracking, based on movement of the one or more input mechanisms while the one or more input mechanisms remain touching the multi-touch display device, movement of the first touch point from the first initial screen-space location to a first final screen-space location;
accessing an object-space constraint point, the object-space constraint point being a point on the surface of the three-dimensional object in object-space;
determining a world-space constraint point by transforming the object-space constraint point using the initial object transformation vector, the world-space constraint point being located at a first world-space location;
determining a screen-space constraint location by projecting the world-space constraint point onto the image plane based on the data representing the image plane;
determining a new object transformation vector that, when applied to the three-dimensional object-space data set, results in a new three-dimensional world-space data set that, when projected onto the image plane, results in a new two-dimensional screen-space data set corresponding to a new view of the three-dimensional object, the new two-dimensional screen-space data set satisfying the constraint that the object-space constraint point, when transformed into world-space using the new object transformation vector and then projected onto the image plane for rendering, is displayed substantially at the screen-space constraint location, and the new two-dimensional screen-space data set also satisfying the constraint that the first contact point, when transformed into world-space using the new object transformation vector and then projected onto the image plane for rendering, is displayed substantially at the first final screen-space location; and
rendering, on the multi-touch display device, the new three-dimensional view of the three-dimensional object based on the new two-dimensional screen-space data set, the new three-dimensional view being a view of the three-dimensional object wherein each contact point remains displayed by the multi-touch display device substantially underneath its corresponding touch point.
View Dependent Claims (7, 8, 9, 10)
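Claim 6 differs from claim 1 in that the constraint point is pinned in screen-space rather than world-space. A hypothetical checker for whether a candidate transformation satisfies claim 6's two screen-space constraints (the function names, the (R, t) representation, and the pinhole projection are all illustrative assumptions):

```python
import numpy as np

def project(p, focal=1.0):
    """Pinhole projection of a world-space point onto the image plane."""
    p = np.asarray(p, dtype=float)
    return focal * p[:2] / p[2]

def satisfies_screen_constraints(new_transform, contact_obj, constraint_obj,
                                 touch_final, constraint_screen, tol=1e-3):
    """True if, under the new transformation (R, t), the constraint point
    still projects to its original screen-space location and the contact
    point projects to the final touch location ('substantially', via tol)."""
    R, t = new_transform
    at_constraint = np.linalg.norm(
        project(R @ np.asarray(constraint_obj) + t) - constraint_screen) < tol
    at_touch = np.linalg.norm(
        project(R @ np.asarray(contact_obj) + t) - touch_final) < tol
    return at_constraint and at_touch
```

The claim's "substantially" language maps naturally onto a tolerance; a solver would search for a transformation that makes both residuals small rather than exactly zero.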
11. A computer-implemented method of manipulating a three-dimensional object displayed in a multi-touch display device, the method comprising:
accessing a three-dimensional object-space data set from a computer memory storage device, the three-dimensional object-space data set representing a three-dimensional object in object-space and specifying points on a surface of the three-dimensional object in three dimensions in object-space;
accessing an initial object transformation vector that specifies at least a location and a rotational orientation of the three-dimensional object in world-space;
determining, using at least one computer processor, a three-dimensional world-space data set by applying the initial object transformation vector to the three-dimensional object-space data set, the three-dimensional world-space data set representing the three-dimensional object in world-space and specifying points on the surface of the three-dimensional object in three dimensions in world-space;
accessing, from the computer memory storage device, data representing an image plane in world-space;
determining a two-dimensional screen-space data set corresponding to a three-dimensional view of the three-dimensional object by projecting the three-dimensional world-space data set onto the image plane based on the data representing the image plane, the two-dimensional screen-space data set specifying points on the surface of the three-dimensional object as viewed on the image plane in two dimensions in screen-space;
rendering, on the multi-touch display device, the three-dimensional view of the three-dimensional object based on the two-dimensional screen-space data set;
detecting touching by one or more input mechanisms of a first touch point on the multi-touch display device, the first touch point being located at a first initial screen-space location;
matching the first touch point to a first contact point on the surface of the three-dimensional object, the first contact point being a point on the surface of the three-dimensional object in object-space that, when transformed into world-space using the initial object transformation vector and then projected onto the image plane for rendering, is displayed by the multi-touch display device at the first initial screen-space location of the first touch point;
tracking, based on movement of the one or more input mechanisms while the one or more input mechanisms remain touching the multi-touch display device, movement of the first touch point from the first initial screen-space location to a first final screen-space location;
accessing a first object-space constraint point, the first object-space constraint point being a first point on the surface of the three-dimensional object in object-space;
determining a first world-space constraint point by transforming the first object-space constraint point using the initial object transformation vector, the first world-space constraint point being located at a first world-space location;
determining a first screen-space constraint location by projecting the first world-space constraint point onto the image plane based on the data representing the image plane;
accessing a second object-space constraint point, the second object-space constraint point being a second point on the surface of the three-dimensional object in object-space;
determining a second world-space constraint point by transforming the second object-space constraint point using the initial object transformation vector, the second world-space constraint point being located at a second world-space location;
determining a second screen-space constraint location by projecting the second world-space constraint point onto the image plane based on the data representing the image plane;
defining a constraint distance as a Euclidean distance between the first screen-space constraint location and the second screen-space constraint location;
determining a new object transformation vector that, when applied to the three-dimensional object-space data set, results in a new three-dimensional world-space data set that, when projected onto the image plane, results in a new two-dimensional screen-space data set corresponding to a new view of the three-dimensional object, the new two-dimensional screen-space data set satisfying the constraint that a displayed distance between the first and the second object-space constraint points, when the first and the second object-space constraint points are transformed into world-space using the new object transformation vector and then projected onto the image plane for rendering, is substantially the same as the constraint distance, and the new two-dimensional screen-space data set also satisfying the constraint that the first contact point, when transformed into world-space using the new object transformation vector and then projected onto the image plane for rendering, is displayed substantially at the first final screen-space location; and
rendering, on the multi-touch display device, the new three-dimensional view of the three-dimensional object based on the new two-dimensional screen-space data set, the new three-dimensional view being a view of the three-dimensional object wherein each contact point remains displayed by the multi-touch display device substantially underneath its corresponding touch point.
View Dependent Claims (12, 13, 14, 15)
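Claim 11's constraint distance is measured between the two constraint points after projection, so it fixes the displayed separation. A small sketch of how that distance could be computed (pinhole projection and the (R, t) representation are assumed, not taken from the patent):

```python
import numpy as np

def project(p, focal=1.0):
    """Pinhole projection of a world-space point onto the image plane."""
    p = np.asarray(p, dtype=float)
    return focal * p[:2] / p[2]

def screen_distance(p_obj, q_obj, transform):
    """Euclidean screen-space distance between two constraint points after
    transformation and projection (the 'constraint distance' of claim 11)."""
    R, t = transform
    return float(np.linalg.norm(project(R @ np.asarray(p_obj) + t)
                                - project(R @ np.asarray(q_obj) + t)))
```

A translation parallel to the image plane leaves the constraint distance unchanged, so such a motion can satisfy the distance constraint, whereas moving the object toward or away from the camera changes the projected separation and would be penalized.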
16. A computer-implemented method of manipulating a three-dimensional object displayed in a multi-touch display device, the method comprising:
accessing a three-dimensional object-space data set from a computer memory storage device, the three-dimensional object-space data set representing a three-dimensional object in object-space and specifying points on a surface of the three-dimensional object in three dimensions in object-space;
accessing an initial object transformation vector that specifies at least a location and a rotational orientation of the three-dimensional object in world-space;
determining, using at least one computer processor, a three-dimensional world-space data set by applying the initial object transformation vector to the three-dimensional object-space data set, the three-dimensional world-space data set representing the three-dimensional object in world-space and specifying points on the surface of the three-dimensional object in three dimensions in world-space;
accessing, from the computer memory storage device, data representing an image plane in world-space;
determining a two-dimensional screen-space data set corresponding to a three-dimensional view of the three-dimensional object by projecting the three-dimensional world-space data set onto the image plane based on the data representing the image plane, the two-dimensional screen-space data set specifying points on the surface of the three-dimensional object as viewed on the image plane in two dimensions in screen-space;
rendering, on the multi-touch display device, the three-dimensional view of the three-dimensional object based on the two-dimensional screen-space data set;
detecting touching by one or more input mechanisms of a first touch point on the multi-touch display device, the first touch point being located at a first initial screen-space location;
matching the first touch point to a first contact point on the surface of the three-dimensional object, the first contact point being a point on the surface of the three-dimensional object in object-space that, when transformed into world-space using the initial object transformation vector and then projected onto the image plane for rendering, is displayed by the multi-touch display device at the first initial screen-space location of the first touch point;
tracking, based on movement of the one or more input mechanisms while the one or more input mechanisms remain touching the multi-touch display device, movement of the first touch point from the first initial screen-space location to a first final screen-space location;
accessing a first object-space constraint point, the first object-space constraint point being a first point on the surface of the three-dimensional object in object-space;
determining a first world-space constraint point by transforming the first object-space constraint point using the initial object transformation vector, the first world-space constraint point being located at a first world-space location;
accessing a second object-space constraint point, the second object-space constraint point being a second point on the surface of the three-dimensional object in object-space;
determining a second world-space constraint point by transforming the second object-space constraint point using the initial object transformation vector, the second world-space constraint point being located at a second world-space location;
defining a constraint distance as a Euclidean distance between the first world-space constraint point and the second world-space constraint point;
determining a new object transformation vector that, when applied to the three-dimensional object-space data set, results in a new three-dimensional world-space data set that, when projected onto the image plane, results in a new two-dimensional screen-space data set corresponding to a new view of the three-dimensional object, the new three-dimensional world-space data set satisfying the constraint that a distance in world-space between the first and the second object-space constraint points, when the first and the second object-space constraint points are transformed into world-space using the new object transformation vector, is substantially the same as the constraint distance, and the new two-dimensional screen-space data set satisfying the constraint that the first contact point, when transformed into world-space using the new object transformation vector and then projected onto the image plane for rendering, is displayed substantially at the first final screen-space location; and
rendering, on the multi-touch display device, the new three-dimensional view of the three-dimensional object based on the new two-dimensional screen-space data set, the new three-dimensional view being a view of the three-dimensional object wherein each contact point remains displayed by the multi-touch display device substantially underneath its corresponding touch point.
View Dependent Claims (17, 18, 19, 20)
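Claim 16 instead pins the world-space distance between the two constraint points. Under a purely rigid transformation that distance is invariant automatically, so the constraint is most meaningful when the transformation vector also carries a uniform scale. The sketch below uses a hypothetical (s, R, t) similarity-transform representation, not the patent's, to show the distance and the scale that keeps the constraint satisfied:

```python
import numpy as np

def world_distance(p_obj, q_obj, transform):
    """World-space Euclidean distance between two constraint points after a
    similarity transform (uniform scale s, rotation R, translation t)."""
    s, R, t = transform
    return float(np.linalg.norm((s * (R @ np.asarray(p_obj)) + t)
                                - (s * (R @ np.asarray(q_obj)) + t)))

def admissible_scale(p_obj, q_obj, constraint_distance):
    """Uniform scale a new transformation must use so that the world-space
    distance between the constraint points equals the constraint distance."""
    return constraint_distance / float(
        np.linalg.norm(np.asarray(p_obj, dtype=float)
                       - np.asarray(q_obj, dtype=float)))
```

Because the translation cancels and rotation preserves length, the world-space distance is the scale times the object-space distance, so the constraint effectively fixes the scale while leaving rotation and translation free to follow the touch points.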
Specification