Anchored Navigation In A Three Dimensional Environment On A Mobile Device
Abstract
This invention relates to anchored navigation in a three dimensional environment on a mobile device. In an embodiment, a computer-implemented method navigates a virtual camera in a three dimensional environment on a mobile device having a touch screen. A first user input is received indicating that a first object is approximately stationary on a touch screen of the mobile device. A second user input is received indicating that a second object has moved on the touch screen. An orientation of the virtual camera is changed according to the second user input.
177 Citations
46 Claims
1. A computer-implemented method for navigating a virtual camera in a three dimensional environment on a mobile device having a touch screen, comprising:
(a) receiving a first user input indicating that a first object is approximately stationary on a touch screen of the mobile device;
(b) receiving a second user input indicating that a second object has moved on the touch screen; and
(c) changing an orientation of the virtual camera according to the second user input.
Dependent claims: 2, 3, 4, 5, 6, 7, 8
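The claimed gesture can be pictured as one finger anchoring the camera while the second finger's drag turns it. The following is a minimal illustrative sketch of that idea, not the patent's implementation; every name, class, and constant here (tolerances, sensitivity, the yaw/pitch representation) is an assumption introduced for the example.

```python
# Illustrative sketch of "anchored look-around": one finger held
# approximately stationary anchors the camera, while the other finger's
# drag changes the camera's orientation. All names and constants are
# assumptions for this sketch, not taken from the patent.

STATIONARY_TOLERANCE = 5.0   # pixels the anchor finger may drift
DEGREES_PER_PIXEL = 0.25     # drag-to-rotation sensitivity

class VirtualCamera:
    def __init__(self):
        self.yaw = 0.0    # rotation about the vertical axis, degrees
        self.pitch = 0.0  # rotation about the horizontal axis, degrees

def is_stationary(start, current, tol=STATIONARY_TOLERANCE):
    """First user input: the anchor finger stayed approximately still."""
    dx, dy = current[0] - start[0], current[1] - start[1]
    return (dx * dx + dy * dy) ** 0.5 <= tol

def look_around(camera, anchor_start, anchor_now, drag_start, drag_now):
    """Change orientation from the second finger's movement, but only
    while the first finger remains approximately stationary."""
    if not is_stationary(anchor_start, anchor_now):
        return  # anchor broken; some other gesture is in progress
    dx = drag_now[0] - drag_start[0]
    dy = drag_now[1] - drag_start[1]
    camera.yaw += dx * DEGREES_PER_PIXEL
    # Clamp pitch so the camera cannot flip over the vertical.
    camera.pitch = max(-90.0, min(90.0, camera.pitch + dy * DEGREES_PER_PIXEL))

cam = VirtualCamera()
look_around(cam, (100, 100), (102, 101), (300, 300), (340, 280))
```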
9. A system for navigating a virtual camera in a three dimensional environment on a mobile device, comprising:
a touch receiver that receives a first user input indicating that a first object is approximately stationary on a touch screen of the mobile device and receives a second user input indicating that a second object has moved on the touch screen; and
a look around module that changes an orientation of the virtual camera according to the second user input.
Dependent claims: 10, 11, 12, 13, 14, 15
16. A computer-implemented method for navigating a virtual camera in a three dimensional environment on a mobile device having a touch screen, comprising:
(a) receiving a first user input indicating that a first object is approximately stationary on a touch screen of the mobile device;
(b) receiving a second user input indicating that a second object has moved on the touch screen;
(c) determining a target location in the three dimensional environment; and
(d) changing a position of the virtual camera according to the second user input, wherein a distance between the target location and the position of the virtual camera stays approximately constant.
Dependent claims: 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27
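Holding the camera-to-target distance constant while the position changes means the camera moves on a sphere around the target. A minimal sketch of that geometry, under assumed names and a z-up convention (nothing below is from the patent itself):

```python
import math

# Illustrative sketch of the constant-distance ("helicopter") motion of
# claim 16: the camera position responds to the drag, but its distance
# to a fixed target stays constant, i.e. it orbits on a sphere around
# the target. Function and variable names are assumptions for the sketch.

def orbit(camera_pos, target, d_azimuth, d_tilt):
    """Rotate camera_pos about target by angle deltas (radians),
    preserving the camera-to-target distance."""
    x, y, z = (camera_pos[i] - target[i] for i in range(3))
    r = math.sqrt(x * x + y * y + z * z)          # distance: held constant
    azimuth = math.atan2(y, x) + d_azimuth        # angle around the up axis
    tilt = math.acos(z / r) + d_tilt              # angle from the up axis
    tilt = min(max(tilt, 0.01), math.pi - 0.01)   # stay off the poles
    return (target[0] + r * math.sin(tilt) * math.cos(azimuth),
            target[1] + r * math.sin(tilt) * math.sin(azimuth),
            target[2] + r * math.cos(tilt))
```

Because the radius `r` is computed once and reused, the returned point lies on the same sphere as the input, which is exactly the "approximately constant distance" property the claim recites.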
28. A system for navigating a virtual camera in a three dimensional environment on a mobile device, comprising:
a touch receiver that receives a first user input indicating that a first object is approximately stationary on a touch screen of the mobile device and a second user input indicating that a second object has moved on the touch screen;
a target module that determines a target location in the three dimensional environment; and
a helicopter module that changes a position of the virtual camera according to the second user input, wherein a distance between the target location and the position of the virtual camera stays approximately constant.
Dependent claims: 29, 30, 31, 32, 33, 34, 35, 36, 37
38. A computer-implemented method for navigating a virtual camera in a three dimensional environment on a mobile device having a touch screen, comprising:
(a) receiving a first user input indicating that a first object is approximately stationary on a touch screen of the mobile device;
(b) receiving a second user input indicating that a second object has moved on the touch screen;
(c) determining a target location in the three dimensional environment;
(d) changing a tilt value of the virtual camera relative to a vector directed upwards from the target location; and
(e) changing an azimuth value of the virtual camera relative to the vector directed upwards from the target location.
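Claim 38 parameterizes the camera by a tilt and an azimuth measured against a vector pointing straight up from the target. A sketch of that state and its conversion back to a world position, with all names, clamps, and the z-up convention assumed for illustration only:

```python
import math
from dataclasses import dataclass

# Illustrative sketch of claim 38's parameterization: the camera is
# described by tilt and azimuth angles relative to a vector directed
# upwards from the target location. Names and limits are assumptions.

@dataclass
class OrbitState:
    tilt: float      # radians from the up vector (0 = directly overhead)
    azimuth: float   # radians around the up vector
    distance: float  # distance from the target

def apply_drag(state, d_tilt, d_azimuth):
    """Steps (d) and (e): change tilt and azimuth relative to the up vector."""
    state.tilt = min(max(state.tilt + d_tilt, 0.0), math.pi / 2)  # stay above ground
    state.azimuth = (state.azimuth + d_azimuth) % (2 * math.pi)

def to_position(state, target):
    """Convert the angular state back to a world-space camera position (z up)."""
    tx, ty, tz = target
    return (tx + state.distance * math.sin(state.tilt) * math.cos(state.azimuth),
            ty + state.distance * math.sin(state.tilt) * math.sin(state.azimuth),
            tz + state.distance * math.cos(state.tilt))
```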
39. A computer-implemented method for navigating a virtual camera in a three dimensional environment on a mobile device having a touch screen, comprising:
(a) receiving a first user input indicating that a first object has touched a first point on a touch screen of a mobile device;
(b) receiving a second user input indicating that a second object has touched a second point on the touch screen after the first object touched the first point on the screen; and
(c) determining a navigation mode from a plurality of navigation modes based on the position of the first point relative to the second point.
Dependent claims: 40
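The claim only requires that the mode be chosen from where the first touch point lies relative to the second. One hypothetical rule, sketched here with assumed mode names and an assumed threshold, is to pick a mode based on whether the two fingers are stacked vertically or placed side by side:

```python
# Illustrative sketch of claim 39: choose a navigation mode from the
# relative position of two touch points. The mode names and threshold
# below are assumptions; the claim only requires that the choice depend
# on the position of the first point relative to the second.

VERTICAL_THRESHOLD = 50.0  # pixels

def pick_navigation_mode(first_point, second_point):
    dx = second_point[0] - first_point[0]
    dy = second_point[1] - first_point[1]
    if abs(dy) > abs(dx) and abs(dy) > VERTICAL_THRESHOLD:
        return "look_around"   # fingers stacked roughly vertically
    return "helicopter"        # fingers roughly side by side
```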
41. A computer-implemented method for navigating a virtual camera in a three dimensional environment on a mobile device having a touch screen, comprising:
(a) receiving a first user input indicating that a first object has touched a first point on a touch screen of a mobile device;
(b) receiving a second user input indicating that a second object has touched a second point on the touch screen after the first object touched the first point on the screen; and
(c) determining a navigation mode from a plurality of navigation modes based on the position of the first point relative to the second point.
42. A computer-implemented method for navigating a virtual camera in a three dimensional environment on a mobile device having a touch screen, comprising:
(a) receiving a user input indicating that two objects have touched the touch screen of a mobile device and the two objects have moved on the touch screen approximately the same distance in approximately the same direction;
(b) determining motion data representing motion of the two objects on the touch screen; and
(c) changing an orientation of the virtual camera according to the motion data determined in (b).
Dependent claims: 43, 44, 45, 46
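The two-finger gesture above has three parts: verify the drags match in distance and direction, reduce them to one piece of motion data, and turn the camera from that data. A sketch under assumed tolerances and names (none of which come from the patent):

```python
import math

# Illustrative sketch of claim 42: detect that two objects moved
# approximately the same distance in approximately the same direction,
# derive motion data from their averaged movement, and change the
# camera's orientation from that data. All tolerances are assumptions.

DISTANCE_TOLERANCE = 0.2   # 20% difference in drag length allowed
ANGLE_TOLERANCE = 0.35     # radians (~20 degrees) between drag directions
DEGREES_PER_PIXEL = 0.25

class Camera:
    def __init__(self):
        self.yaw = 0.0
        self.pitch = 0.0

def two_finger_pan(delta1, delta2):
    """Step (b): return averaged motion data if the two drags match, else None."""
    len1 = math.hypot(*delta1)
    len2 = math.hypot(*delta2)
    if min(len1, len2) == 0:
        return None
    if abs(len1 - len2) / max(len1, len2) > DISTANCE_TOLERANCE:
        return None  # not approximately the same distance
    angle = math.atan2(delta1[1], delta1[0]) - math.atan2(delta2[1], delta2[0])
    angle = (angle + math.pi) % (2 * math.pi) - math.pi  # wrap to [-pi, pi]
    if abs(angle) > ANGLE_TOLERANCE:
        return None  # not approximately the same direction
    return ((delta1[0] + delta2[0]) / 2, (delta1[1] + delta2[1]) / 2)

def rotate_camera(camera, motion):
    """Step (c): change orientation according to the motion data."""
    camera.yaw += motion[0] * DEGREES_PER_PIXEL
    camera.pitch += motion[1] * DEGREES_PER_PIXEL
```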
Specification