Zoom, Rotate, and Translate or Pan in a Single Gesture
First Claim
1. A computer-implemented method for navigating a virtual camera in a three-dimensional environment on a mobile device having a touch screen, comprising:
(a) receiving a first user input indicating that two or more objects have touched a view of the mobile device;
(b) determining two or more target locations on a surface of the three-dimensional environment corresponding to the two or more objects touching the view of the mobile device;
(c) receiving a second user input indicating that the two objects have performed a motion while touching the view of the mobile device;
(d) determining updated camera parameters for the virtual camera, based on the received second user input, such that the two or more target locations determined in (b) remain corresponding to the two or more objects touching the view of the mobile device; and
(e) moving the virtual camera within the three dimensional environment according to the updated camera parameters, wherein moving the virtual camera comprises at least two of zooming, rotating, tilting, and panning the virtual camera.
Abstract
Embodiments relate to navigating through a three-dimensional environment on a mobile device using a single gesture. A first user input is received, indicating that two or more objects have touched a view of the mobile device. Two or more target locations on a surface of the three-dimensional environment, corresponding to the two or more objects touching the view of the mobile device, are determined. A second user input is received, indicating that the two or more objects have performed a motion while touching the view of the mobile device. Updated camera parameters for the virtual camera are determined based on the received second user input. The virtual camera is moved within the three-dimensional environment according to the determined camera parameters, such that the two or more target locations remain corresponding to the two or more objects touching the view of the mobile device. Moving the virtual camera may include zooming, rotating, tilting, and panning the virtual camera.
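The single-gesture behavior the abstract describes rests on decomposing one two-finger motion into simultaneous zoom, rotate, and pan components. A minimal sketch of that decomposition, assuming a pinch maps to zoom, a twist to rotation, and midpoint drag to pan (the function and its geometry are illustrative, not taken from the patent's specification):

```python
import math

def gesture_deltas(p1_old, p2_old, p1_new, p2_new):
    """Decompose a two-finger motion into zoom, rotate, and pan deltas.

    Each argument is an (x, y) screen position for one touch point,
    before ('old') and after ('new') the motion. Returns a tuple
    (zoom_factor, rotation_radians, pan_dx, pan_dy).
    """
    def dist(a, b):
        return math.hypot(b[0] - a[0], b[1] - a[1])

    def angle(a, b):
        return math.atan2(b[1] - a[1], b[0] - a[0])

    def centroid(a, b):
        return ((a[0] + b[0]) / 2.0, (a[1] + b[1]) / 2.0)

    # Pinch: the ratio of finger separations gives a zoom factor.
    zoom = dist(p1_new, p2_new) / dist(p1_old, p2_old)
    # Twist: the change in the angle of the line joining the fingers.
    rot = angle(p1_new, p2_new) - angle(p1_old, p2_old)
    # Drag: the movement of the midpoint between the fingers.
    c_old = centroid(p1_old, p2_old)
    c_new = centroid(p1_new, p2_new)
    return zoom, rot, c_new[0] - c_old[0], c_new[1] - c_old[1]
```

Because all three deltas come from one pair of touch updates, a single gesture can drive zooming, rotating, and panning at once, which is the effect the claims tie to keeping the picked target locations under the fingers.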
24 Claims
1. A computer-implemented method for navigating a virtual camera in a three-dimensional environment on a mobile device having a touch screen, comprising:
(a) receiving a first user input indicating that two or more objects have touched a view of the mobile device;
(b) determining two or more target locations on a surface of the three-dimensional environment corresponding to the two or more objects touching the view of the mobile device;
(c) receiving a second user input indicating that the two objects have performed a motion while touching the view of the mobile device;
(d) determining updated camera parameters for the virtual camera, based on the received second user input, such that the two or more target locations determined in (b) remain corresponding to the two or more objects touching the view of the mobile device; and
(e) moving the virtual camera within the three dimensional environment according to the updated camera parameters, wherein moving the virtual camera comprises at least two of zooming, rotating, tilting, and panning the virtual camera.

View Dependent Claims: 2, 3, 4, 5, 6, 7, 8
9. A system for navigating a virtual camera in a three dimensional environment on a mobile device, comprising:
a touch receiver that receives a first user input indicating that two or more objects have touched a view of the mobile device, and receives a second user input indicating that the two or more objects have performed a motion while touching the view of the mobile device;
a target module that determines two or more target locations on a surface of the three-dimensional environment corresponding to the two or more objects touching the view of the mobile device; and
a navigation module that determines updated camera parameters for the virtual camera, based on the received second user input, such that the two or more determined target locations remain corresponding to the two or more objects touching the view of the mobile device, and that moves the virtual camera within the three dimensional environment according to the updated camera parameters, such that the two or more target locations remain corresponding to the two or more objects touching the view of the mobile device, wherein moving the virtual camera comprises at least two of zooming, rotating, tilting, and panning the virtual camera.

View Dependent Claims: 10, 11, 12, 13, 14, 15, 16
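Claim 9 partitions the system into a touch receiver, a target module, and a navigation module. A minimal structural sketch of that partition in Python; the class names mirror the claim language, but the internals (the simplified camera state, the flat ground-plane picking, the particular update arithmetic) are illustrative assumptions, not the patent's implementation:

```python
from dataclasses import dataclass

@dataclass
class Camera:
    # Simplified camera state; a real system would also track tilt and altitude.
    x: float = 0.0
    y: float = 0.0
    zoom: float = 1.0
    heading: float = 0.0  # rotation about the vertical axis, in radians

class TouchReceiver:
    """Collects touch-down and touch-move events from the screen."""
    def __init__(self):
        self.touches = {}

    def touch_down(self, touch_id, pos):
        self.touches[touch_id] = pos

    def touch_move(self, touch_id, pos):
        # Returns the (old, new) positions so downstream modules can
        # compute the motion performed while touching the view.
        old = self.touches[touch_id]
        self.touches[touch_id] = pos
        return old, pos

class TargetModule:
    """Maps screen positions to target locations on the 3D surface."""
    def pick(self, camera, screen_pos):
        # Placeholder hit test: unproject through the camera onto a flat
        # ground plane. A real system would intersect terrain geometry.
        sx, sy = screen_pos
        return (camera.x + sx / camera.zoom, camera.y + sy / camera.zoom)

class NavigationModule:
    """Updates camera parameters so picked targets stay under the fingers."""
    def apply(self, camera, zoom_factor, rotation, pan_dx, pan_dy):
        camera.zoom *= zoom_factor
        camera.heading += rotation
        camera.x -= pan_dx / camera.zoom
        camera.y -= pan_dy / camera.zoom
        return camera
```

The design choice the claim encodes, a receiver for raw input, a picker for surface targets, and a navigator that solves for camera parameters, keeps the "targets stay under the fingers" constraint in one place (the navigation module) regardless of how many gesture components the motion combines.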
17. A computer readable storage medium having instructions stored thereon that, when executed by a processor, cause the processor to perform operations including:
(a) receiving a first user input indicating that two or more objects have touched a view of the mobile device;
(b) determining two or more target locations on a surface of the three-dimensional environment corresponding to the two or more objects touching the view of the mobile device;
(c) receiving a second user input indicating that the two objects have performed a motion while touching the view of the mobile device;
(d) determining updated camera parameters for the virtual camera, based on the received second user input, such that the two or more target locations determined in (b) remain corresponding to the two or more objects touching the view of the mobile device; and
(e) moving the virtual camera within the three dimensional environment according to the updated camera parameters, wherein moving the virtual camera comprises at least two of zooming, rotating, tilting, and panning the virtual camera.

View Dependent Claims: 18, 19, 20, 21, 22, 23, 24
Specification