Working with 3D objects
First Claim
1. A computer-implemented method, comprising:
detecting a first user input identifying a two-dimensional (2D) object presented on a surface of a display device;
detecting a second user input including a three-dimensional (3D) gesture input comprising a movement in proximity to the surface, in which a portion of the movement is performed at a distance from the surface;
generating, at a computing device, a 3D object based on the 2D object according to the first and second user inputs, the 3D object having a property that depends at least in part on the portion of the movement performed at a distance from the surface; and
presenting the 3D object on the surface.
Abstract
Three-dimensional objects can be generated based on two-dimensional objects. A first user input identifying a 2D object presented in a user interface can be detected, and a second user input including a 3D gesture input that includes a movement in proximity to a surface can be detected. A 3D object can be generated based on the 2D object according to the first and second user inputs, and the 3D object can be presented in the user interface.
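The patent supplies no code; as a rough illustration of the abstract's flow, the sketch below extrudes a 2D polygon into a 3D prism whose depth is taken from the portion of the gesture performed at a distance from the surface. All names (`GestureSample`, `extrude_to_3d`) and the choice of maximum gesture height as the depth mapping are hypothetical, not taken from the patent.

```python
from dataclasses import dataclass

@dataclass
class GestureSample:
    """One sampled point of a 3D gesture: x, y on the surface, z above it."""
    x: float
    y: float
    z: float  # distance from the display surface

def extrude_to_3d(polygon_2d, gesture):
    """Extrude a 2D polygon into a 3D prism.

    The prism's depth (a property of the generated 3D object) is derived
    from the off-surface portion of the gesture: here, the maximum
    sampled height above the surface.
    """
    depth = max(s.z for s in gesture)
    bottom = [(x, y, 0.0) for x, y in polygon_2d]
    top = [(x, y, depth) for x, y in polygon_2d]
    return bottom + top

# A unit square lifted by a gesture peaking 2.5 units above the surface
square = [(0, 0), (1, 0), (1, 1), (0, 1)]
gesture = [GestureSample(0.5, 0.5, 0.0),
           GestureSample(0.5, 0.5, 1.2),
           GestureSample(0.5, 0.5, 2.5)]
prism = extrude_to_3d(square, gesture)
```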
28 Claims
1. A computer-implemented method, comprising:
detecting a first user input identifying a two-dimensional (2D) object presented on a surface of a display device;
detecting a second user input including a three-dimensional (3D) gesture input comprising a movement in proximity to the surface, in which a portion of the movement is performed at a distance from the surface;
generating, at a computing device, a 3D object based on the 2D object according to the first and second user inputs, the 3D object having a property that depends at least in part on the portion of the movement performed at a distance from the surface; and
presenting the 3D object on the surface.
Dependent claims: 2, 3, 4, 5, 6
7. A computer-implemented method comprising:
identifying a three-dimensional (3D) object shown on a surface of a touch-sensitive display;
detecting a 3D gesture input that comprises a movement of a finger or a pointing device in proximity to the surface, the detecting comprising measuring a distance between the finger or the pointing device and the surface of the display;
modifying the 3D object according to the 3D gesture input; and
showing the updated 3D object on the surface.
Dependent claims: 8, 9, 10, 11, 12, 13, 14
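Claim 7 modifies an existing 3D object according to a measured finger-to-surface distance. A minimal sketch of one such mapping, in which the object is scaled by the ratio of the measured distance to a reference height, follows; the function name, the `reference` parameter, and the proportional-scaling rule are all hypothetical choices, not the patent's method.

```python
def modify_object(vertices, finger_distance, reference=1.0):
    """Scale a 3D object's vertices by the measured finger-to-surface
    distance (hypothetical mapping: the farther the finger hovers from
    the surface, the larger the object becomes)."""
    factor = finger_distance / reference
    return [(x * factor, y * factor, z * factor) for x, y, z in vertices]

# Tetrahedron scaled by a finger hovering at twice the reference height
tetra = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]
updated = modify_object(tetra, finger_distance=2.0)
```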
15. A computer-implemented method, comprising:
detecting a first user input that comprises at least one of a touch input or a two-dimensional (2D) gesture input;
detecting a three-dimensional (3D) gesture input that comprises a movement in proximity to a surface, in which a portion of the movement is performed at a distance from the surface; and
generating, at a computing device, a 3D object in a user interface based on the 3D gesture input and at least one of the touch input or 2D gesture input, the 3D object having a property that depends at least in part on the portion of the movement performed at a distance from the surface.
Dependent claims: 16, 17, 18, 19, 20, 21
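Claim 15 combines a touch or 2D gesture input with a 3D gesture input to generate the 3D object. One hypothetical way to combine them, sketched below, is to let the touch fix the base corner, the in-plane gesture travel set the footprint, and the off-surface portion of the movement set the height; none of these specific mappings or names come from the patent.

```python
def generate_box(touch_xy, gesture):
    """Build an axis-aligned box from combined inputs (illustrative):
    gesture is a list of (x, y, z) samples, z being the distance from
    the surface. The touch anchors the base corner, the in-plane x/y
    travel sets the footprint, and the off-surface portion of the
    movement sets the height (the claimed distance-dependent property).
    """
    xs = [x for x, _, _ in gesture]
    ys = [y for _, y, _ in gesture]
    height = max(z for _, _, z in gesture)
    x0, y0 = touch_xy
    return {
        "base_corner": (x0, y0, 0.0),
        "size": (max(xs) - min(xs), max(ys) - min(ys), height),
    }

# Touch at the origin, gesture sweeping 3 x 2 in-plane and rising to 4
box = generate_box((0.0, 0.0), [(0, 0, 0), (3, 2, 0), (3, 2, 4)])
```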
22. An apparatus comprising:
a sensor module to detect touch inputs, two-dimensional (2D) gesture inputs that are associated with a surface, and three-dimensional (3D) gesture inputs, each 3D gesture input comprising a movement having a component in a direction perpendicular to the surface; and
a data processor to receive signals output from the sensor module, the signals representing detected 3D gesture inputs and at least one of detected touch inputs or detected 2D gesture inputs, and generate or modify a 3D object in a user interface according to the detected 3D gesture inputs and at least one of detected touch inputs or detected 2D gesture inputs, in which the 3D object has a property that depends at least in part on the 3D gesture input movement in the direction perpendicular to the surface.
Dependent claims: 23, 24, 25, 26, 27
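The apparatus of claim 22 splits the work between a sensor module that emits signals and a data processor that consumes them. The sketch below mirrors that two-part structure under assumed interfaces; the class names, the dictionary signal format, and the rule that the object's depth equals the maximum perpendicular (z) component are all hypothetical.

```python
class SensorModule:
    """Hypothetical sensor front-end: packages detected touch inputs and
    3D gesture samples (each with a z component perpendicular to the
    surface) as a signal dictionary for the data processor."""
    def detect(self, touches, gestures_3d):
        return {"touch": touches, "gesture_3d": gestures_3d}

class DataProcessor:
    """Receives sensor signals and generates a 3D object whose depth
    depends on the perpendicular (z) component of the 3D gesture."""
    def process(self, signals):
        z_max = max(z for _, _, z in signals["gesture_3d"])
        x, y = signals["touch"][0]
        return {"origin": (x, y), "depth": z_max}

# A touch at (2, 3) combined with a gesture rising 1.5 above the surface
sensors = SensorModule()
processor = DataProcessor()
obj = processor.process(sensors.detect([(2.0, 3.0)], [(2.0, 3.0, 1.5)]))
```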
28. An apparatus comprising:
a computer storage medium storing instructions that, when executed by data processing apparatus, cause the data processing apparatus to perform operations comprising:
detecting a first user input identifying a two-dimensional (2D) object presented in a user interface,
detecting a second user input including a three-dimensional (3D) gesture input comprising a movement in proximity to a surface, in which a portion of the movement is performed at a distance from the surface,
generating a 3D object in the user interface based on the 2D object according to the first and second user inputs, the 3D object having a property that depends at least in part on the portion of the movement performed at a distance from the surface, and
presenting the 3D object in the user interface.