Collaboration with 3D data visualizations
First Claim
1. A collaboration system comprising:
- a three-dimensional (3D) display displaying a 3D data visualization, at least two hand avatars of two different users, and a view field avatar, wherein the two hand avatars are of a first user and a collaborator, and wherein the view field avatar is of the collaborator;
- a plurality of auxiliary computing devices;
- a behavior analysis engine to perform a behavior analysis of a user, the behavior analysis engine to:
determine an attention engagement level of the user, and
determine a pose of the user in relation to an auxiliary computing device;
- an intention analysis engine to determine an intention of the user in relation to the at least one 3D data visualization based on the user's attention engagement level and the user's pose;
- an interaction mode engine to:
select, based on the determined user intention, an interaction mode of a plurality of interaction modes of the collaboration system, wherein each interaction mode is associated with a unique set of commands for interaction with the 3D data visualization, and
automatically adjust the collaboration system to the selected interaction mode; and
- a collaboration engine to implement an action with the 3D data visualization based on the selected interaction mode and an identified gesture,
wherein the action is to couple a first user's view field of the 3D data visualization with a collaborator's view field of the 3D data visualization when the system is in a navigation interaction mode and the collaboration engine is to detect a change of view gesture performed by the first user, and
wherein the change of view gesture by the first user is selecting the collaborator's view field avatar with the first user's hand avatar.
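The claimed pipeline (behavior analysis → intention analysis → mode selection → gesture-driven action) can be sketched in Python. This is a minimal illustration, not the patented implementation: the mode names, the attention threshold, the pose label `"facing_display"`, and the gesture name `"select_view_field_avatar"` are all assumptions, since the claim does not specify concrete values.

```python
from dataclasses import dataclass
from enum import Enum, auto

class InteractionMode(Enum):
    # Hypothetical modes; the claim only requires a plurality of modes,
    # one of which is a navigation interaction mode.
    NAVIGATION = auto()
    SELECTION = auto()
    ANNOTATION = auto()

@dataclass
class Behavior:
    attention_level: float  # 0.0 (disengaged) .. 1.0 (fully engaged)
    pose: str               # pose relative to an auxiliary computing device

def infer_intention(behavior: Behavior) -> str:
    """Intention analysis engine: combine attention level and pose."""
    if behavior.attention_level > 0.5 and behavior.pose == "facing_display":
        return "navigate"
    if behavior.attention_level > 0.5:
        return "inspect"
    return "idle"

def select_mode(intention: str) -> InteractionMode:
    """Interaction mode engine: map the determined intention to a mode."""
    mapping = {"navigate": InteractionMode.NAVIGATION,
               "inspect": InteractionMode.SELECTION}
    return mapping.get(intention, InteractionMode.ANNOTATION)

def collaborate(mode: InteractionMode, gesture: str,
                first_view: dict, collaborator_view: dict) -> dict:
    """Collaboration engine: in navigation mode, a change-of-view gesture
    (selecting the collaborator's view field avatar) couples the views."""
    if mode is InteractionMode.NAVIGATION and gesture == "select_view_field_avatar":
        return dict(collaborator_view)  # first user's view now tracks the collaborator's
    return first_view

mode = select_mode(infer_intention(Behavior(0.8, "facing_display")))
view = collaborate(mode, "select_view_field_avatar",
                   first_view={"target": (0, 0, 0)},
                   collaborator_view={"target": (5, 2, 1)})
print(mode.name, view)  # NAVIGATION {'target': (5, 2, 1)}
```

Note the stage separation mirrors the claim's engine structure: each engine consumes only the previous engine's output, so any stage could be swapped (e.g. a learned intention model) without touching the others.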
Abstract
An example collaboration system is provided in accordance with one implementation of the present disclosure. The system includes a 3D display displaying a 3D data visualization, at least two hand avatars of two different users, and a view field avatar. The system also includes a plurality of auxiliary computing devices and a behavior analysis engine to perform a behavior analysis of a user. The behavior analysis engine is to: determine an attention engagement level of the user, and determine a pose of the user in relation to the auxiliary computing device. The system further includes an intention analysis engine to determine an intention of the user in relation to the 3D visualization based on the user's attention engagement level and the user's pose, and a collaboration engine to implement an action with the 3D data visualization by using a hand avatar based on the user's intention and an identified gesture.
13 Claims
1. A collaboration system comprising: (set forth above as the First Claim)
- View Dependent Claims (2, 3, 4, 11)
5. A method comprising, by at least one processor:
identifying an intention of a user of a system in relation to a 3D virtual data object, the system including a plurality of computing devices and a 3D display displaying a 3D virtual data object, at least two hand avatars of two different users, and a view field avatar;
selecting, based on the identified intention of the user, an interaction mode of a plurality of interaction modes of the system, wherein each interaction mode is associated with a unique set of commands for interaction with the 3D virtual data object;
transitioning the system to the selected interaction mode; and
executing an action with the 3D virtual data object by using a hand avatar based on the selected interaction mode and an identified gesture, comprising:
detecting a change of view gesture performed by a first user as the identified gesture, wherein the change of view gesture is a selection of a collaborator's view field avatar with the first user's hand avatar, and
coupling the first user's view field of the 3D virtual data object with a collaborator's view field of the 3D virtual data object when the system is in a navigation interaction mode.
- View Dependent Claims (6, 7, 12, 13)
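The coupling step of the method claim can be sketched as shared state: once the change-of-view gesture is detected in navigation mode, the first user's view field becomes the same object as the collaborator's, so subsequent navigation by either participant moves both views. All names here (`Participant`, `ViewField`, the avatar-identifier format) are illustrative assumptions, not terms from the claim.

```python
from dataclasses import dataclass, field

@dataclass
class ViewField:
    position: tuple = (0.0, 0.0, 0.0)
    target: tuple = (0.0, 0.0, 1.0)

@dataclass
class Participant:
    name: str
    view: ViewField = field(default_factory=ViewField)

def couple_views(system_mode: str, first: Participant,
                 collaborator: Participant, selected_avatar: str) -> bool:
    """Detect the change-of-view gesture (selecting the collaborator's view
    field avatar) and, while in navigation mode, couple the view fields.
    Coupling is modeled as sharing one ViewField object."""
    gesture_detected = selected_avatar == f"view_field_avatar:{collaborator.name}"
    if system_mode == "navigation" and gesture_detected:
        first.view = collaborator.view  # same object: views remain coupled
        return True
    return False

alice = Participant("alice")
bob = Participant("bob", ViewField(position=(3.0, 1.0, 0.0)))
coupled = couple_views("navigation", alice, bob, "view_field_avatar:bob")
bob.view.target = (9.0, 9.0, 9.0)  # collaborator keeps navigating...
print(coupled, alice.view.target)  # True (9.0, 9.0, 9.0)
```

Modeling coupling as object identity (rather than a one-time copy) captures the claim's distinction between merely changing a view and coupling two view fields.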
-
8. A non-transitory machine-readable storage medium encoded with instructions executable by, at least one processor, the machine-readable storage medium comprising instructions to:
-
perform a behavior analysis of a user of a system including a plurality of computing devices and a 3D display displaying a 3D virtual data object, at least two hand avatars of two different users, and a view field avatar, the behavior analysis to; identify an attention engagement level of the user, and identify a pose of the user in relation to a computing device; identify an intention of the user in relation to the virtual data object based on the user'"'"'s attention engagement level and the user'"'"'s pose; select, based on the identified intention of the user, an interaction mode of a plurality of interaction modes of the system, wherein each interaction mode is associated with a unique set of commands for interaction with the 3D virtual data object; transition the system to the selected interaction mode; and implement an action with the 3D virtual data object by using a hand avatar based on the selected interaction mode and an identified gesture, comprising; detect a change of view gesture performed by a first user as the identified gesture, wherein the change of view gesture includes a selection of a collaborator'"'"'s view field avatar with the first user'"'"'s hand avatar, and couple the first user'"'"'s view field of the 3D virtual data object with a collaborator'"'"'s view field of the 3D virtual data object when the system is in a navigation interaction mode. - View Dependent Claims (9, 10)
-
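A recurring limitation across all three independent claims is that each interaction mode is associated with a unique set of commands. One minimal way to model that, with entirely hypothetical mode and command names, is a mode-to-command-set table consulted before dispatch:

```python
# Hypothetical command sets; the claims require only that each interaction
# mode carries a unique set of commands for interacting with the 3D object.
MODE_COMMANDS = {
    "navigation": frozenset({"rotate", "pan", "zoom", "change_of_view"}),
    "selection":  frozenset({"pick", "highlight", "filter"}),
    "annotation": frozenset({"label", "draw", "erase"}),
}

def transition(current_mode: str, new_mode: str) -> str:
    """Transition the system to the selected interaction mode."""
    if new_mode not in MODE_COMMANDS:
        raise ValueError(f"unknown mode: {new_mode}")
    return new_mode

def dispatch(mode: str, command: str) -> bool:
    """A gesture-derived command executes only if the active mode allows it."""
    return command in MODE_COMMANDS.get(mode, frozenset())

mode = transition("selection", "navigation")
print(dispatch(mode, "change_of_view"))  # True
print(dispatch(mode, "label"))           # False
```

Gating commands on the active mode is what makes the automatic mode transition meaningful: the same gesture can map to different commands, or to nothing, depending on which mode the intention analysis selected.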
Specification