Panning in a three dimensional environment on a mobile device
Abstract
This invention relates to panning in a three dimensional environment on a mobile device. In an embodiment, a computer-implemented method navigates a virtual camera in a three dimensional environment on a mobile device having a touch screen. A user input is received indicating that an object has touched a first point on a touch screen of the mobile device and the object has been dragged to a second point on the touch screen. A first target location in the three dimensional environment is determined based on the first point on the touch screen. A second target location in the three dimensional environment is determined based on the second point on the touch screen. Finally, a three dimensional model is moved in the three dimensional environment relative to the virtual camera according to the first and second target locations.
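Before any target location can be found, the touch point on the screen has to be turned into a ray in the three dimensional environment. The patent does not give an implementation; the following is a minimal pinhole-camera sketch in which the function name, the coordinate conventions (screen origin at the top-left corner), and the `fov_y_deg` parameter are all illustrative assumptions:

```python
import numpy as np

def screen_point_to_ray(px, py, width, height, cam_forward, cam_up,
                        fov_y_deg=60.0):
    """Direction of the ray from the camera through touch point (px, py),
    under a pinhole model. Screen origin assumed at the top-left corner."""
    aspect = width / height
    half_h = np.tan(np.radians(fov_y_deg) / 2.0)
    # touch point -> normalized device coordinates in [-1, 1]
    x = (2.0 * px / width - 1.0) * half_h * aspect
    y = (1.0 - 2.0 * py / height) * half_h
    forward = cam_forward / np.linalg.norm(cam_forward)
    right = np.cross(forward, cam_up)
    right /= np.linalg.norm(right)
    up = np.cross(right, forward)
    d = forward + x * right + y * up
    return d / np.linalg.norm(d)
```

A touch at the screen center yields the camera's forward direction; touches elsewhere tilt the ray within the assumed field of view.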
16 Claims
1. A computer-implemented method for navigating a virtual camera in a three dimensional environment on a mobile device having a touch screen, comprising:
(a) accessing, from a memory device, data defining a three dimensional model of the three dimensional environment;
(b) receiving a user input indicating that an object has touched a first point on a touch screen of the mobile device and the object has been dragged to a second point on the touch screen, the first and second points corresponding to data defining the three dimensional model;
(c) determining a first target location in the three dimensional environment based on the first point on the touch screen, wherein the determining (c) includes extending a first ray based on a position of the virtual camera and the first point on the touch screen, and intersecting the first ray with the three dimensional model in the three dimensional environment;
(d) determining a virtual surface based on the first target location, wherein the determining (d) includes constructing a sphere tangent to the first target location and centered at a center of the three dimensional model;
(e) determining a second target location in the three dimensional environment based on the second point on the touch screen, wherein the determining (e) includes extending a second ray based on the position of the virtual camera and the second point on the touch screen, and intersecting the second ray with the virtual surface; and
(f) moving the three dimensional model in the three dimensional environment relative to the virtual camera according to the first and second target locations.
- View Dependent Claims (2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12)
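The geometry of steps (c) through (f) can be sketched in code. In this sketch the three dimensional model is treated as a sphere of radius `model_radius` centered at the origin (a reasonable stand-in for a globe-style model, but an assumption, not language from the patent), and the pan is expressed as an axis-angle rotation; all names are illustrative:

```python
import numpy as np

def ray_sphere_intersect(origin, direction, radius):
    """Nearest intersection of the ray origin + t*direction with a sphere
    of the given radius centered at the origin; None if the ray misses."""
    d = direction / np.linalg.norm(direction)
    b = np.dot(origin, d)
    disc = b * b - (np.dot(origin, origin) - radius * radius)
    if disc < 0:
        return None
    t = -b - np.sqrt(disc)  # nearer root; valid while the camera is outside
    if t < 0:
        return None
    return origin + t * d

def pan(camera_pos, first_ray_dir, second_ray_dir, model_radius):
    # (c) first target: intersect the first ray with the model itself
    first_target = ray_sphere_intersect(camera_pos, first_ray_dir, model_radius)
    if first_target is None:
        return None
    # (d) virtual surface: a sphere through the first target location,
    # centered at the model center (the origin here)
    virtual_radius = np.linalg.norm(first_target)
    # (e) second target: intersect the second ray with the virtual surface
    second_target = ray_sphere_intersect(camera_pos, second_ray_dir,
                                         virtual_radius)
    if second_target is None:
        return None
    # (f) move the model relative to the camera: the rotation taking the
    # first target direction to the second (degenerate zero-drag ignored)
    u = first_target / np.linalg.norm(first_target)
    v = second_target / np.linalg.norm(second_target)
    axis = np.cross(u, v)
    angle = np.arccos(np.clip(np.dot(u, v), -1.0, 1.0))
    return axis, angle
```

Because the second ray is intersected with the virtual sphere rather than the model surface, the point grabbed at the first touch stays under the finger even where the model's terrain varies.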
13. A system for navigating a virtual camera in a three dimensional environment on a mobile device, comprising:
a touch receiver that receives a user input indicating that an object has touched a first point on a touch screen of the mobile device and the object has been dragged to a second point on the touch screen, the first and second points corresponding to data defining a three dimensional model;
a target module that:
accesses, from a memory device, data defining the three dimensional model of the three dimensional environment,
determines a first target location in the three dimensional environment based on the first point on the touch screen, wherein when the target module determines the first target location the target module extends a first ray based on a position of the virtual camera and the first point on the touch screen, and intersects the first ray with the three dimensional model in the three dimensional environment,
determines a virtual surface based on the first target location, wherein when the target module determines the virtual surface the target module constructs a sphere tangent to the first target location and centered at a center of the three dimensional model, and
determines a second target location in the three dimensional environment based on the second point on the touch screen, wherein when the target module determines the second target location the target module extends a second ray based on the position of the virtual camera and the second point on the touch screen, and intersects the second ray with the virtual surface; and
a pan module that moves the three dimensional model in the three dimensional environment relative to the virtual camera according to the first and second target locations.
- View Dependent Claims (14, 15, 16)
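The pan module's final step, moving the model relative to the camera, amounts to applying the rotation found from the two target locations to the model's points. A minimal sketch using Rodrigues' rotation formula (the function name and the axis-angle representation are assumptions for illustration):

```python
import numpy as np

def rotate_model_points(points, axis, angle):
    """Rodrigues' rotation: spin model-space points by `angle` radians
    around the unit `axis` through the model center (the origin)."""
    k = axis / np.linalg.norm(axis)
    pts = np.asarray(points, dtype=float)
    return (pts * np.cos(angle)
            + np.cross(k, pts) * np.sin(angle)
            + np.outer(pts @ k, k) * (1.0 - np.cos(angle)))
```

Applying this rotation to every vertex (or, equivalently, its inverse to the camera) carries the first target location onto the second, so the grabbed point tracks the finger across the drag.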
Specification