3D NAVIGATION MODE
First Claim
1. A method comprising, by at least one processor:
identifying an intention of a user of a system in relation to a three-dimensional (3D) virtual object;
selecting a 3D navigation mode from a plurality of 3D navigation modes based on the identified user intention, wherein the plurality of 3D navigation modes includes at least a model navigation mode, a simple navigation mode, a driving navigation mode, a reaching navigation mode, and a multi-touch navigation mode; and
transitioning the system to the selected 3D navigation mode.
Abstract
An example method is provided in accordance with one implementation of the present disclosure. The method includes identifying an intention of a user of a system in relation to a three-dimensional (3D) virtual object and selecting a 3D navigation mode from a plurality of 3D navigation modes based on the identified user intention. The plurality of 3D navigation modes includes at least a model navigation mode, a simple navigation mode, a driving navigation mode, a reaching navigation mode, and a multi-touch navigation mode. The method further includes transitioning the system to the selected 3D navigation mode.
17 Claims
1. A method comprising, by at least one processor:
identifying an intention of a user of a system in relation to a three-dimensional (3D) virtual object;
selecting a 3D navigation mode from a plurality of 3D navigation modes based on the identified user intention, wherein the plurality of 3D navigation modes includes at least a model navigation mode, a simple navigation mode, a driving navigation mode, a reaching navigation mode, and a multi-touch navigation mode; and
transitioning the system to the selected 3D navigation mode. - View Dependent Claims (2, 3, 4, 5, 6, 7, 8)
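The method of claim 1 reduces to a mode-selection step followed by a transition. The following is an illustrative Python sketch, not the patented implementation: the intention labels and the mapping from intention to mode are hypothetical, since the claim does not specify how an intention maps to a mode.

```python
from enum import Enum, auto

class NavMode(Enum):
    """The five 3D navigation modes named in claim 1."""
    MODEL = auto()
    SIMPLE = auto()
    DRIVING = auto()
    REACHING = auto()
    MULTI_TOUCH = auto()

# Hypothetical mapping from an identified user intention to a mode;
# the claim only requires that selection be based on the intention.
INTENTION_TO_MODE = {
    "inspect_object": NavMode.MODEL,
    "browse_scene": NavMode.SIMPLE,
    "traverse_scene": NavMode.DRIVING,
    "grab_object": NavMode.REACHING,
    "touch_input": NavMode.MULTI_TOUCH,
}

def select_navigation_mode(intention: str) -> NavMode:
    """Select a 3D navigation mode from the plurality of modes
    based on the identified user intention."""
    return INTENTION_TO_MODE.get(intention, NavMode.SIMPLE)

def transition(system_state: dict, intention: str) -> dict:
    """Transition the system to the selected 3D navigation mode."""
    system_state["mode"] = select_navigation_mode(intention)
    return system_state
```

Falling back to the simple navigation mode for an unrecognized intention is a design choice of this sketch, not something the claim requires.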
9. A system comprising:
- (this claim relates to the proposed system)
a 3D display displaying at least one 3D visualization;
a computing device including a touch display and a plurality of sensors;
a behavior analysis engine to perform a behavior analysis of a user by using data from the plurality of sensors, the behavior analysis engine to: determine an attention engagement level of the user, and determine a pose of the user in relation to the computing device;
an intention analysis engine to determine an intention of the user in relation to the at least one 3D visualization based on the user's attention engagement level and the user's pose; and
a navigation mode engine to select a 3D navigation mode from a plurality of 3D navigation modes based on the identified user intention, wherein the plurality of 3D navigation modes includes at least a model navigation mode, a simple navigation mode, a driving navigation mode, a reaching navigation mode, and a multi-touch navigation mode. - View Dependent Claims (10, 11, 12, 13, 14, 15)
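The behavior analysis and intention analysis engines of claim 9 form a two-stage pipeline: sensor data yields an attention engagement level and a pose, and those two signals yield an intention. A minimal Python sketch, assuming toy sensor fields (`gaze_on_3d_display`, `device_in_hands`) and a 0.5 engagement threshold, none of which appear in the claim:

```python
from dataclasses import dataclass

@dataclass
class BehaviorAnalysis:
    """Output of the behavior analysis engine (claim 9)."""
    attention_engagement: float  # e.g. fraction of gaze time on the 3D display
    holding_device: bool         # pose of the user relative to the computing device

def analyze_behavior(sensor_data: dict) -> BehaviorAnalysis:
    # Hypothetical sensor fusion; a real system would combine camera,
    # depth, and touch data from the plurality of sensors.
    return BehaviorAnalysis(
        attention_engagement=sensor_data.get("gaze_on_3d_display", 0.0),
        holding_device=sensor_data.get("device_in_hands", False),
    )

def infer_intention(b: BehaviorAnalysis) -> str:
    # Intention analysis engine: combine engagement level and pose.
    if b.holding_device:
        return "touch_input"
    return "inspect_object" if b.attention_engagement > 0.5 else "browse_scene"
```

The returned intention string would then feed the navigation mode engine, which selects one of the five claimed modes.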
16. A non-transitory machine-readable storage medium encoded with instructions executable by at least one processor, the machine-readable storage medium comprising instructions to:
perform a behavior analysis of a user of a system including a 3D display displaying a 3D visualization, a computing device having a touch display, and a plurality of sensors connected to the computing device, by using data from the plurality of sensors, the behavior analysis to: identify an attention engagement level of the user, and identify a pose of the user in relation to the computing device;
perform an intention analysis of the user in relation to the 3D visualization based on the user's attention engagement level and the user's pose;
transition the system to a 3D navigation mode selected from a plurality of 3D navigation modes based on the identified user intention; and
implement a navigation action with the 3D visualization based on at least one of the 3D navigation mode and a detected user gesture. - View Dependent Claims (17)
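Claim 16 adds a final step beyond claims 1 and 9: implementing a navigation action from the selected mode and a detected gesture. This can be sketched as a dispatch table keyed on (mode, gesture) pairs; the specific pairs and action names below are invented for illustration and are not taken from the claims.

```python
def navigation_action(mode: str, gesture: str) -> str:
    """Map a (navigation mode, detected user gesture) pair to an
    action on the 3D visualization. Table entries are illustrative."""
    actions = {
        ("model", "drag"): "rotate_model",
        ("simple", "drag"): "pan_view",
        ("driving", "drag"): "steer",
        ("reaching", "pinch"): "grab_object",
        ("multi_touch", "pinch"): "zoom",
    }
    # Unmapped combinations do nothing rather than raising an error.
    return actions.get((mode, gesture), "no_op")
```

Note that the claim requires the action to be based on "at least one of" the mode and the gesture, so a real implementation could also dispatch on the mode alone or the gesture alone.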