Screen-space formulation to facilitate manipulations of 2D and 3D structures through interactions relating to 2D manifestations of those structures
First Claim
1. A computer-implemented method of manipulating a three-dimensional object displayed in a multi-touch display device, the method comprising:
accessing a three-dimensional object-space data set from a computer memory storage device, the three-dimensional object-space data set representing a three-dimensional object in object-space and specifying points on a surface of the three-dimensional object in three dimensions in object-space;
accessing an initial object transformation vector that specifies at least a location and a rotational orientation of the three-dimensional object in world-space;
determining, using at least one computer processor, a three-dimensional world-space data set by applying the initial object transformation vector to the three-dimensional object-space data set, the three-dimensional world-space data set representing the three-dimensional object in world-space and specifying points on the surface of the three-dimensional object in three dimensions in world-space;
accessing, from the computer memory storage device, data representing an image plane in world-space;
determining a two-dimensional screen-space data set corresponding to a three-dimensional view of the three-dimensional object by projecting the three-dimensional world-space data set onto the image plane based on the data representing the image plane, the two-dimensional screen-space data set specifying points on the surface of the three-dimensional object as viewed on the image plane in two dimensions in screen-space;
rendering, on the multi-touch display device, the three-dimensional view of the three-dimensional object based on the two-dimensional screen-space data set;
detecting touching by one or more input mechanisms of a first touch point, a second touch point, and a third touch point on the multi-touch display device, the first touch point being located at a first initial screen-space location, the second touch point being located at a second initial screen-space location, and the third touch point being located at a third initial screen-space location;
matching the first touch point to a first contact point on the surface of the three-dimensional object, the first contact point being a point on the surface of the three-dimensional object in object-space that, when transformed into world-space using the initial object transformation vector and then projected onto the image plane for rendering, is displayed by the multi-touch display device at the first initial screen-space location of the first touch point;
matching the second touch point to a second contact point on the surface of the three-dimensional object, the second contact point being a point on the surface of the three-dimensional object in object-space that, when transformed into world-space using the initial object transformation vector and then projected onto the image plane for rendering, is displayed by the multi-touch display device at the second initial screen-space location of the second touch point;
matching the third touch point to a third contact point on the surface of the three-dimensional object, the third contact point being a point on the surface of the three-dimensional object in object-space that, when transformed into world-space using the initial object transformation vector and then projected onto the image plane for rendering, is displayed by the multi-touch display device at the third initial screen-space location of the third touch point;
tracking, based on movement of the one or more input mechanisms while the one or more input mechanisms remain touching the multi-touch display device, movement of the first touch point from the first initial screen-space location to a first final screen-space location;
determining a new object transformation vector that, when applied to the three-dimensional object-space data set, results in a new three-dimensional world-space data set that, when projected onto the image plane, results in a new two-dimensional screen-space data set corresponding to a new view of the three-dimensional object, the new two-dimensional screen-space data set satisfying the following constraints when rendered for display:
(i) the first contact point, when transformed into world-space using the new object transformation vector and then projected onto the image plane for rendering, is displayed substantially at the first final screen-space location,
(ii) the second contact point, when transformed into world-space using the new object transformation vector and then projected onto the image plane for rendering, is displayed substantially at the second initial screen-space location, and
(iii) the third contact point, when transformed into world-space using the new object transformation vector and then projected onto the image plane for rendering, is displayed substantially at the third initial screen-space location; and
rendering, on the multi-touch display device, the new three-dimensional view of the three-dimensional object based on the new two-dimensional screen-space data set, the new three-dimensional view being a view of the three-dimensional object wherein each contact point remains displayed by the multi-touch display device substantially underneath its corresponding touch point.
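The claim's object-space to world-space to screen-space pipeline can be sketched as follows. The (tx, ty, tz, yaw) layout of the transformation vector and the pinhole projection are illustrative assumptions: the claim only requires that the vector specify a location and a rotational orientation, and that projection use an image plane.

```python
import math

def to_world(points, transform):
    """Apply an object transformation vector to object-space points.
    Here the vector is (tx, ty, tz, yaw): a translation plus a rotation
    about the world y-axis -- a simplified stand-in for the claim's
    'location and rotational orientation'."""
    tx, ty, tz, yaw = transform
    c, s = math.cos(yaw), math.sin(yaw)
    return [(c * x + s * z + tx, y + ty, -s * x + c * z + tz)
            for x, y, z in points]

def to_screen(world_points, focal=1.0):
    """Perspective-project world-space points onto an image plane a
    distance `focal` in front of a camera at the origin looking down +z."""
    return [(focal * x / z, focal * y / z) for x, y, z in world_points]

# A point on the object's surface, pushed through the full pipeline:
world = to_world([(0.0, 0.0, 0.0)], (0.0, 0.0, 2.0, 0.0))
screen = to_screen(world)   # the object origin lands at screen (0, 0)
```

Rendering the new view after a manipulation reuses the same two functions with the new transformation vector, which is why the claim phrases every constraint as "transformed into world-space ... and then projected onto the image plane".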
3 Assignments
0 Petitions
Abstract
Manipulating a three-dimensional object displayed in a multi-touch display device includes displaying a three-dimensional object in two dimensions on the multi-touch display device. Placement of one or more touch points on the multi-touch display device is detected. For each detected touch point, a location of the touch point on the multi-touch display device is determined and a matching contact point on a surface of the three-dimensional object that is displayed by the multi-touch display device at the location of the touch point is also determined. Movement of at least one of the touch points is detected. Subsequent to the detected movement of the one or more touch points, a three-dimensional transformation of the three-dimensional object is determined that results in a display in which the contact points on the surface of the three-dimensional object remain displayed substantially at the locations of their matching touch points.
61 Citations
18 Claims
1. A computer-implemented method of manipulating a three-dimensional object displayed in a multi-touch display device, the method comprising:
accessing a three-dimensional object-space data set from a computer memory storage device, the three-dimensional object-space data set representing a three-dimensional object in object-space and specifying points on a surface of the three-dimensional object in three dimensions in object-space;
accessing an initial object transformation vector that specifies at least a location and a rotational orientation of the three-dimensional object in world-space;
determining, using at least one computer processor, a three-dimensional world-space data set by applying the initial object transformation vector to the three-dimensional object-space data set, the three-dimensional world-space data set representing the three-dimensional object in world-space and specifying points on the surface of the three-dimensional object in three dimensions in world-space;
accessing, from the computer memory storage device, data representing an image plane in world-space;
determining a two-dimensional screen-space data set corresponding to a three-dimensional view of the three-dimensional object by projecting the three-dimensional world-space data set onto the image plane based on the data representing the image plane, the two-dimensional screen-space data set specifying points on the surface of the three-dimensional object as viewed on the image plane in two dimensions in screen-space;
rendering, on the multi-touch display device, the three-dimensional view of the three-dimensional object based on the two-dimensional screen-space data set;
detecting touching by one or more input mechanisms of a first touch point, a second touch point, and a third touch point on the multi-touch display device, the first touch point being located at a first initial screen-space location, the second touch point being located at a second initial screen-space location, and the third touch point being located at a third initial screen-space location;
matching the first touch point to a first contact point on the surface of the three-dimensional object, the first contact point being a point on the surface of the three-dimensional object in object-space that, when transformed into world-space using the initial object transformation vector and then projected onto the image plane for rendering, is displayed by the multi-touch display device at the first initial screen-space location of the first touch point;
matching the second touch point to a second contact point on the surface of the three-dimensional object, the second contact point being a point on the surface of the three-dimensional object in object-space that, when transformed into world-space using the initial object transformation vector and then projected onto the image plane for rendering, is displayed by the multi-touch display device at the second initial screen-space location of the second touch point;
matching the third touch point to a third contact point on the surface of the three-dimensional object, the third contact point being a point on the surface of the three-dimensional object in object-space that, when transformed into world-space using the initial object transformation vector and then projected onto the image plane for rendering, is displayed by the multi-touch display device at the third initial screen-space location of the third touch point;
tracking, based on movement of the one or more input mechanisms while the one or more input mechanisms remain touching the multi-touch display device, movement of the first touch point from the first initial screen-space location to a first final screen-space location;
determining a new object transformation vector that, when applied to the three-dimensional object-space data set, results in a new three-dimensional world-space data set that, when projected onto the image plane, results in a new two-dimensional screen-space data set corresponding to a new view of the three-dimensional object, the new two-dimensional screen-space data set satisfying the following constraints when rendered for display:
(i) the first contact point, when transformed into world-space using the new object transformation vector and then projected onto the image plane for rendering, is displayed substantially at the first final screen-space location,
(ii) the second contact point, when transformed into world-space using the new object transformation vector and then projected onto the image plane for rendering, is displayed substantially at the second initial screen-space location, and
(iii) the third contact point, when transformed into world-space using the new object transformation vector and then projected onto the image plane for rendering, is displayed substantially at the third initial screen-space location; and
rendering, on the multi-touch display device, the new three-dimensional view of the three-dimensional object based on the new two-dimensional screen-space data set, the new three-dimensional view being a view of the three-dimensional object wherein each contact point remains displayed by the multi-touch display device substantially underneath its corresponding touch point.
- View Dependent Claims (2, 3, 4, 5, 6, 7)
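The "matching" steps pair each touch point with the surface point rendered beneath it. Below is a minimal sketch assuming a simplified (tx, ty, tz, yaw) transformation vector and a pinhole projection, neither of which is mandated by the claim; a real renderer would use ray casting or a depth buffer rather than a brute-force search over sampled surface points.

```python
import math

def project(p, transform, focal=1.0):
    """Object-space point -> screen location, under an illustrative
    (tx, ty, tz, yaw) transformation vector and pinhole projection."""
    tx, ty, tz, yaw = transform
    c, s = math.cos(yaw), math.sin(yaw)
    x, y, z = p
    wx, wy, wz = c * x + s * z + tx, y + ty, -s * x + c * z + tz
    return (focal * wx / wz, focal * wy / wz)

def match_touch_to_contact(touch_xy, surface_points, transform):
    """Return the sampled surface point whose rendered location is
    nearest the touch location -- a brute-force stand-in for the
    ray-casting or depth-buffer picking a real renderer would use."""
    def dist2(p):
        sx, sy = project(p, transform)
        return (sx - touch_xy[0]) ** 2 + (sy - touch_xy[1]) ** 2
    return min(surface_points, key=dist2)
```

The returned contact point is stored in object-space, so it remains valid however the object transformation vector later changes; only its projection moves.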
8. A computer-implemented method of manipulating a three-dimensional object displayed in a multi-touch display device, the method comprising:
accessing a three-dimensional object-space data set from a computer memory storage device, the three-dimensional object-space data set representing a three-dimensional object in object-space and specifying points on a surface of the three-dimensional object in three dimensions in object-space;
accessing an initial object transformation vector that specifies at least a location and a rotational orientation of the three-dimensional object in world-space;
determining, using at least one computer processor, a three-dimensional world-space data set by applying the initial object transformation vector to the three-dimensional object-space data set, the three-dimensional world-space data set representing the three-dimensional object in world-space and specifying points on the surface of the three-dimensional object in three dimensions in world-space;
accessing, from the computer memory storage device, data representing an image plane in world-space;
determining a two-dimensional screen-space data set corresponding to a three-dimensional view of the three-dimensional object by projecting the three-dimensional world-space data set onto the image plane based on the data representing the image plane, the two-dimensional screen-space data set specifying points on the surface of the three-dimensional object as viewed on the image plane in two dimensions in screen-space;
rendering, on the multi-touch display device, the three-dimensional view of the three-dimensional object based on the two-dimensional screen-space data set;
detecting touching by one or more input mechanisms of multiple touch points on the multi-touch display device, each of the multiple touch points having a first screen-space location on the multi-touch display device specified in screen-space;
identifying, based on the two-dimensional screen-space data set and for each of the multiple touch points, a contact point on the surface of the three-dimensional object in object-space that, when transformed into world-space using the initial object transformation vector and then projected onto the image plane for rendering, is displayed by the multi-touch display device at the first screen-space location underneath the corresponding touch point;
tracking, based on movement of the one or more input mechanisms while the one or more input mechanisms remain touching the multi-touch display device, movement of at least one of the multiple touch points from its first screen-space location to a second screen-space location on the multi-touch display device;
determining a new object transformation vector that, when applied to the three-dimensional object-space data set, results in a new three-dimensional world-space data set that, when projected onto the image plane, results in a new two-dimensional screen-space data set corresponding to a new view of the three-dimensional object, the new two-dimensional screen-space data set satisfying the following constraints when rendered for display:
(i) for each touch point that is moved to its second screen-space location, the contact point corresponding to the moved touch point, when transformed into world-space using the new object transformation vector and then projected onto the image plane for rendering, is displayed substantially underneath the corresponding moved touch point at its second screen-space location, and
(ii) for each touch point that remains stationary at its first screen-space location, the contact point corresponding to the stationary touch point, when transformed into world-space using the new object transformation vector and then projected onto the image plane for rendering, is displayed substantially underneath the corresponding stationary touch point at its first screen-space location; and
rendering, on the multi-touch display device, the new three-dimensional view of the three-dimensional object based on the new two-dimensional screen-space data set, the new three-dimensional view being a view of the three-dimensional object wherein each contact point remains displayed by the multi-touch display device substantially underneath its corresponding touch point.
- View Dependent Claims (9, 10, 11, 12, 13, 14)
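Claim 8 splits the constraint set in two: a moved touch point pins its contact point to the second (moved-to) location, while a stationary touch point pins its contact point to its original location. A small sketch of that bookkeeping; the (contact, first_xy, second_xy_or_None) record layout is a hypothetical choice for illustration, not taken from the patent.

```python
def build_constraints(touch_records):
    """Pair each matched contact point with the screen location it must
    stay substantially underneath: the second (moved-to) location for a
    touch point that moved, the first location for one that remained
    stationary. Each record is (contact, first_xy, second_xy_or_None)."""
    constraints = []
    for contact, first_xy, second_xy in touch_records:
        target = second_xy if second_xy is not None else first_xy
        constraints.append((contact, target))
    return constraints
```

The resulting (contact point, target location) pairs are exactly what the new-object-transformation-vector step must satisfy when the transformed contact points are reprojected.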
15. A computer-implemented method of manipulating a three-dimensional object displayed in a multi-touch display device, the method comprising:
displaying a three-dimensional object in two dimensions on the multi-touch display device, the three-dimensional object having an initial three-dimensional location and an initial three-dimensional rotational orientation;
detecting touching by one or more input mechanisms of a first touch point, a second touch point, and a third touch point on the multi-touch display device;
determining a first initial two-dimensional location of the first touch point on the multi-touch display device;
determining a second initial two-dimensional location of the second touch point on the multi-touch display device;
determining a third initial two-dimensional location of the third touch point on the multi-touch display device;
determining a first contact point on the surface of the three-dimensional object that is displayed by the multi-touch display device at the first initial two-dimensional location;
determining a second contact point on the surface of the three-dimensional object that is displayed by the multi-touch display device at the second initial two-dimensional location;
determining a third contact point on the surface of the three-dimensional object that is displayed by the multi-touch display device at the third initial two-dimensional location;
tracking, based on movement of the one or more input mechanisms while the one or more input mechanisms remain touching the multi-touch display device, movement of the first touch point from the first initial two-dimensional location to a first final two-dimensional location on the multi-touch display device;
determining, using at least one computer processor, a three-dimensional transformation of the three-dimensional object that specifies at least one of a new three-dimensional location and a new three-dimensional rotational orientation for the three-dimensional object, wherein the determined three-dimensional transformation, when applied to the object and when the transformed object is rendered on the multi-touch display device, results in a display in which the first contact point on the surface of the three-dimensional object is displayed substantially at the first final two-dimensional location, the second contact point on the surface of the three-dimensional object is displayed substantially at the second initial two-dimensional location, and the third contact point on the surface of the three-dimensional object is displayed substantially at the third initial two-dimensional location;
transforming the three-dimensional object using the three-dimensional transformation such that the transformed three-dimensional object is positioned and rotated in accordance with the at least one of the new three-dimensional location and the new three-dimensional rotational orientation; and
displaying the transformed three-dimensional object on the multi-touch display device.
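A short note on why three touch points are enough to determine the transformation: each contact point must reproject to a 2D target, contributing two scalar constraints, and a rigid placement (location plus rotational orientation) has six degrees of freedom. Writing $\Pi$ for the projection onto the image plane, $R$ for the rotation, and $t$ for the translation (symbols introduced here for exposition, not taken from the patent):

```latex
% Each contact point p_i must be displayed at its target location u_i,
% giving two scalar equations per point and six in total, matching the
% six degrees of freedom of the rigid pair (R, t):
\Pi\bigl(R\,p_i + t\bigr) \approx u_i, \qquad i = 1, 2, 3
% The claim's "substantially at" corresponds to the approximate
% equality: the system can be solved in a least-squares sense.
```

Here $u_1$ is the moved touch point's final location and $u_2$, $u_3$ are the stationary touch points' initial locations.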
16. A computer-implemented method of manipulating a three-dimensional object displayed in a multi-touch display device, the method comprising:
displaying a three-dimensional object in two dimensions on the multi-touch display device, the three-dimensional object having an initial three-dimensional location and an initial three-dimensional rotational orientation;
detecting touching by one or more input mechanisms of one or more touch points on the multi-touch display device;
determining, for each touch point, a first two-dimensional location of the touch point on the multi-touch display device;
determining, for each touch point, a contact point on a surface of the three-dimensional object that is displayed by the multi-touch display device at the first two-dimensional location of the touch point;
tracking, based on movement of the one or more input mechanisms while the one or more input mechanisms remain touching the multi-touch display device, movement of at least one of the touch points from its first two-dimensional location to a second two-dimensional location;
determining, using at least one computer processor, a three-dimensional transformation of the three-dimensional object that specifies at least one of a new three-dimensional location and a new three-dimensional rotational orientation for the three-dimensional object, wherein the determined three-dimensional transformation, when applied to the object and when the transformed object is rendered, results in a display in which the contact points on the surface of the three-dimensional object corresponding to touch points that have not moved remain displayed substantially at their respective first two-dimensional locations and the contact points on the surface of the three-dimensional object corresponding to touch points that have moved are displayed substantially at the second two-dimensional locations of their respective touch points;
transforming the three-dimensional object using the three-dimensional transformation such that the transformed three-dimensional object is positioned and rotated in accordance with the at least one of the new three-dimensional location and the new three-dimensional rotational orientation; and
displaying the transformed three-dimensional object on the multi-touch display device.
17. A computer-implemented method of manipulating a three-dimensional object displayed in a multi-touch display device, the method comprising:
displaying a three-dimensional object in two dimensions on the multi-touch display device by projecting the three-dimensional object onto an image plane of a camera, the three-dimensional object having an initial three-dimensional location and an initial three-dimensional rotational orientation;
detecting touching by one or more input mechanisms of a first touch point, a second touch point and a third touch point on the multi-touch display device;
determining a first initial two-dimensional location of the first touch point on the multi-touch display device;
determining a second initial two-dimensional location of the second touch point on the multi-touch display device;
determining a third initial two-dimensional location of the third touch point on the multi-touch display device;
matching the first touch point to a first three-dimensional contact point on a surface of the three-dimensional object, the first three-dimensional contact point being displayed at the first initial two-dimensional location of the first touch point when the first three-dimensional contact point is projected for display onto the image plane of the camera;
matching the second touch point to a second three-dimensional contact point on the surface of the three-dimensional object, the second three-dimensional contact point being displayed at the second initial two-dimensional location of the second touch point when the second three-dimensional contact point is projected for display onto the image plane of the camera;
matching the third touch point to a third three-dimensional contact point on the surface of the three-dimensional object, the third three-dimensional contact point being displayed at the third initial two-dimensional location of the third touch point when the third three-dimensional contact point is projected for display onto the image plane of the camera;
detecting, based on movement of the one or more input mechanisms while the one or more input mechanisms remain touching the multi-touch display device, movement of the first touch point from the first initial two-dimensional location to a first final two-dimensional location;
using a solver to calculate a three-dimensional transformation of the three-dimensional object that specifies at least one of a new three-dimensional rotational orientation and a new three-dimensional location for the three-dimensional object, the three-dimensional transformation being calculated by the solver using an algorithm that reduces deviation between a projected two-dimensional location of the first three-dimensional contact point and the first final two-dimensional location of the first touch point, a projected two-dimensional location of the second three-dimensional contact point and the second initial two-dimensional location of the second touch point, and a projected two-dimensional location of the third three-dimensional contact point and the third initial two-dimensional location of the third touch point;
transforming, using at least one computer processor, the three-dimensional object using the three-dimensional transformation such that the transformed three-dimensional object is positioned and rotated in accordance with the at least one of the new three-dimensional location and the new three-dimensional rotational orientation; and
displaying the transformed three-dimensional object on the multi-touch display device by projecting the transformed three-dimensional object onto the image plane of the camera.
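Claim 17 specifies the solver only as "an algorithm that reduces deviation" between projected contact points and their target touch locations. A minimal sketch, under assumptions: the transformation is modeled as (tx, ty, tz, yaw), projection is a pinhole camera, and the deviation is reduced by numerical gradient descent. A production system would more likely use a damped least-squares method such as Levenberg-Marquardt over the full six degrees of freedom.

```python
import math

def to_screen(p, params, focal=1.0):
    """Object-space point -> screen location, under an illustrative
    (tx, ty, tz, yaw) transformation and pinhole projection."""
    tx, ty, tz, yaw = params
    c, s = math.cos(yaw), math.sin(yaw)
    x, y, z = p
    wx, wy, wz = c * x + s * z + tx, y + ty, -s * x + c * z + tz
    return (focal * wx / wz, focal * wy / wz)

def deviation(params, constraints):
    """Sum of squared screen-space distances between each projected
    contact point and the touch location it should sit under."""
    err = 0.0
    for contact, (ux, uy) in constraints:
        sx, sy = to_screen(contact, params)
        err += (sx - ux) ** 2 + (sy - uy) ** 2
    return err

def solve(constraints, start=(0.0, 0.0, 3.0, 0.0),
          steps=500, lr=0.05, h=1e-5):
    """Reduce the deviation by gradient descent with forward-difference
    numerical gradients -- a deliberately simple stand-in for the
    patent's unspecified solver."""
    params = list(start)
    for _ in range(steps):
        base = deviation(params, constraints)
        grad = []
        for i in range(len(params)):
            bumped = list(params)
            bumped[i] += h
            grad.append((deviation(bumped, constraints) - base) / h)
        params = [p - lr * g for p, g in zip(params, grad)]
    return tuple(params)
```

Moved touch points enter `constraints` with their final locations and stationary ones with their initial locations, so minimizing the single deviation sum is what keeps every contact point substantially underneath its finger.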
18. A computer-implemented method of manipulating a three-dimensional object displayed in a multi-touch display device, the method comprising:
displaying a three-dimensional object in two dimensions on the multi-touch display device by projecting the three-dimensional object onto an image plane of a camera, the three-dimensional object having an initial three-dimensional location and an initial three-dimensional rotational orientation;
detecting touching by one or more input mechanisms of one or more touch points on the multi-touch display device;
determining, for each touch point, a first two-dimensional location of the touch point on the multi-touch display device;
matching each touch point to a three-dimensional contact point on a surface of the three-dimensional object, the three-dimensional contact point being displayed at the first two-dimensional location of its matching touch point when the contact point is projected for display onto the image plane of the camera;
detecting, based on movement of the one or more input mechanisms while the one or more input mechanisms remain touching the multi-touch display device, movement of at least one of the touch points from its first two-dimensional location to a second two-dimensional location;
using a solver to calculate a three-dimensional transformation of the three-dimensional object that specifies at least one of a new three-dimensional rotational orientation and a new three-dimensional location for the three-dimensional object, the three-dimensional transformation being calculated by the solver using an algorithm that reduces deviation between the projected two-dimensional locations of the three-dimensional contact points after object transformation and the two-dimensional locations of their matching touch points;
transforming, using at least one computer processor, the three-dimensional object using the three-dimensional transformation such that the transformed three-dimensional object is positioned and rotated in accordance with the at least one of the new three-dimensional location and the new three-dimensional rotational orientation; and
displaying the transformed three-dimensional object on the multi-touch display device by projecting the transformed three-dimensional object onto the image plane of the camera.
Specification