3D manipulation using applied pressure
First Claim
1. A computer-implemented method of manipulating a three-dimensional object displayed in a multi-touch display device, the method comprising:
displaying an initial view of a three-dimensional object on a two-dimensional touch display;
detecting a first touch, second touch, and third touch at respective touch points on the touch display that correspond to virtual contact points on the three-dimensional object displayed on the touch display;
detecting movement of the touch point for the first touch from an initial position to a final position on the touch display, while each of the second touch and third touch remain at respective initial positions; and
rendering, on the multi-touch display device, a new three-dimensional view of the three-dimensional object based on the movement of the first touch, wherein each contact point remains displayed by the multi-touch display device substantially underneath its corresponding touch point, the new three-dimensional view being scaled, rotated, and/or translated from the initial view of the three-dimensional object;
wherein a screen-space projection of each respective contact point is determined by projecting each contact point onto the touch display; and
wherein rendering the new three-dimensional view of the three-dimensional object comprises applying an algorithm to reduce distances between the final position of the first touch point and a first screen-space projection, the initial position of the second touch point and a second screen-space projection, and the initial position of the third touch point and a third screen-space projection.
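The claim's screen-space projection and distance-reduction steps can be illustrated with a minimal sketch. This is not the patent's implementation; the pinhole-camera model, function names, and parameters are illustrative assumptions.

```python
import math

def project_to_screen(point, focal_length, viewport_w, viewport_h):
    """Project a 3D contact point (camera coordinates, z > 0 toward the
    scene) onto the 2D touch display using an assumed pinhole-camera
    model: perspective divide, then a shift to viewport coordinates."""
    x, y, z = point
    sx = focal_length * x / z + viewport_w / 2.0
    sy = focal_length * y / z + viewport_h / 2.0
    return (sx, sy)

def screen_space_error(touch_points, contact_points, focal_length, w, h):
    """Sum of distances between each touch point and the screen-space
    projection of its contact point -- the quantity the claimed
    algorithm reduces when it solves for the new view."""
    total = 0.0
    for (tx, ty), cp in zip(touch_points, contact_points):
        sx, sy = project_to_screen(cp, focal_length, w, h)
        total += math.hypot(tx - sx, ty - sy)
    return total
```

A solver would search over candidate object transformations (scale, rotation, translation) for one that drives `screen_space_error` toward zero, which keeps each contact point displayed substantially underneath its touch point.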
Abstract
Placement by one or more input mechanisms of a touch point on a multi-touch display device that is displaying a three-dimensional object is detected. A two-dimensional location of the touch point on the multi-touch display device is determined, and the touch point is matched with a three-dimensional contact point on a surface of the three-dimensional object that is projected for display onto the image plane of the camera at the two-dimensional location of the touch point. A change in applied pressure at the touch point is detected, and a target depth value for the contact point is determined based on the change in applied pressure. A solver is used to calculate a three-dimensional transformation of the three-dimensional object using an algorithm that reduces a difference between a depth value of the contact point after object transformation and the target depth value.
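The pressure-to-depth loop described in the abstract can be sketched as follows. The linear pressure-to-depth mapping, the sensitivity constant, and the iterative solver are assumptions for illustration; the abstract specifies only that a solver reduces the difference between the contact point's post-transformation depth and the target depth.

```python
def target_depth(current_depth, pressure_delta, sensitivity=0.01):
    """Map a detected change in applied pressure to a target depth for
    the contact point (assumed linear: pressing harder pushes the
    contact point deeper into the scene)."""
    return current_depth + sensitivity * pressure_delta

def solve_depth_translation(contact_depth, goal_depth, steps=50, rate=0.5):
    """Toy solver: iteratively translate the object along the view
    axis, shrinking the residual between the contact point's depth
    after the transformation and the target depth."""
    translation = 0.0
    for _ in range(steps):
        residual = (contact_depth + translation) - goal_depth
        translation -= rate * residual
    return translation
```

In practice the transformation would have more degrees of freedom than a single axial translation; this reduction just makes the depth residual the abstract describes concrete.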
17 Claims
1. A computer-implemented method of manipulating a three-dimensional object displayed in a multi-touch display device, the method comprising:
displaying an initial view of a three-dimensional object on a two-dimensional touch display;
detecting a first touch, second touch, and third touch at respective touch points on the touch display that correspond to virtual contact points on the three-dimensional object displayed on the touch display;
detecting movement of the touch point for the first touch from an initial position to a final position on the touch display, while each of the second touch and third touch remain at respective initial positions; and
rendering, on the multi-touch display device, a new three-dimensional view of the three-dimensional object based on the movement of the first touch, wherein each contact point remains displayed by the multi-touch display device substantially underneath its corresponding touch point, the new three-dimensional view being scaled, rotated, and/or translated from the initial view of the three-dimensional object;
wherein a screen-space projection of each respective contact point is determined by projecting each contact point onto the touch display; and
wherein rendering the new three-dimensional view of the three-dimensional object comprises applying an algorithm to reduce distances between the final position of the first touch point and a first screen-space projection, the initial position of the second touch point and a second screen-space projection, and the initial position of the third touch point and a third screen-space projection.
View Dependent Claims (2, 3, 4, 5, 6)
7. A computer-implemented method of manipulating a three-dimensional object displayed in a multi-touch display device, the method comprising:
displaying an initial view of a three-dimensional object on a two-dimensional touch display;
detecting multiple touches at respective touch points on the touch display that correspond to virtual contact points on the three-dimensional object displayed on the touch display;
detecting movement of one or more touch points from a respective initial position to a respective final position on the touch display, while any touch point remaining stationary remains at the respective initial position; and
rendering, on the multi-touch display device, a new three-dimensional view of the three-dimensional object based on the movement of the one or more touch points, wherein each contact point remains displayed by the multi-touch display device substantially underneath its corresponding touch point, the new three-dimensional view being scaled, rotated, and/or translated from the initial view of the three-dimensional object;
wherein a screen-space projection of each respective contact point is determined by projecting each contact point onto the touch display; and
wherein rendering the new three-dimensional view of the three-dimensional object comprises applying an algorithm to reduce distances between the one or more touch points and the respective screen-space projections.
View Dependent Claims (8, 9, 10, 11)
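One concrete way to realize a view that is "scaled, rotated, and/or translated" so that each contact point lands back under its touch point is a least-squares similarity fit between the contact points' current screen positions and the touch points' new positions. The claim does not specify this method; the closed-form complex-number fit below is an illustrative assumption.

```python
def fit_similarity_2d(src, dst):
    """Closed-form least-squares 2D similarity transform (uniform
    scale, rotation, translation) mapping src points onto dst points.
    Points are treated as complex numbers so scale and rotation
    combine into one complex coefficient."""
    n = len(src)
    src_c = [complex(x, y) for x, y in src]
    dst_c = [complex(x, y) for x, y in dst]
    mu_s = sum(src_c) / n          # centroid of source points
    mu_d = sum(dst_c) / n          # centroid of destination points
    num = sum((d - mu_d) * (s - mu_s).conjugate()
              for s, d in zip(src_c, dst_c))
    den = sum(abs(s - mu_s) ** 2 for s in src_c)
    a = num / den                  # scale * e^(i * rotation)
    t = mu_d - a * mu_s            # translation
    def apply_transform(p):
        q = a * complex(*p) + t
        return (q.real, q.imag)
    return apply_transform
```

With the screen-space projections of the contact points as `src` and the current touch-point positions as `dst`, the fitted transform reduces the claimed distances in the least-squares sense (it assumes at least two non-coincident source points).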
12. A multi-touch display device, comprising:
a processor;
a two-dimensional touch display; and
a program of instructions executable by the processor to manipulate a three-dimensional object displayed by the device, the program of instructions configured to:
display an initial view of the three-dimensional object on the touch display;
detect a first touch, second touch, and third touch at respective touch points on the touch display that correspond to virtual contact points on the three-dimensional object displayed on the touch display;
detect movement of the touch point for the first touch from an initial position to a final position on the touch display, while each of the second touch and third touch remain at respective initial positions; and
render a new three-dimensional view of the three-dimensional object based on the movement of the first touch, wherein each contact point remains displayed by the multi-touch display device substantially underneath its corresponding touch point, the new three-dimensional view being scaled, rotated, and/or translated from the initial view of the three-dimensional object;
wherein a screen-space projection of each respective contact point is determined by projecting each contact point onto the touch display; and
wherein rendering the new three-dimensional view of the three-dimensional object comprises applying an algorithm to reduce distances between the final position of the first touch point and a first screen-space projection, the initial position of the second touch point and a second screen-space projection, and the initial position of the third touch point and a third screen-space projection.
View Dependent Claims (13, 14, 15, 16, 17)
Specification