User and object interaction with an augmented reality scenario
First Claim
1. A method for generating virtual content for presentation in an augmented reality (AR) system, the method comprising:
- under control of a hardware processor included in the AR system:
analyzing data acquired from a pose sensor to identify a pose of a user of the AR system;
identifying a physical object in a three dimensional (3D) physical environment of the user based at least partly on the pose;
responsive to detecting a first gesture of the user providing an indication to initiate an interaction with the physical object, presenting a first type of virtual content in a display of the AR system, the first type of virtual content presented at a first depth that corresponds to a physical location of a surface of the physical object;
responsive to detecting a second gesture of the user, presenting a pod user interface virtual construct comprising a navigable menu that includes a set of available virtual spaces, each available virtual space including one or more applications providing a respective second type of virtual content that is different than the first type of virtual content, wherein the pod user interface virtual construct is presented, at a second depth that is less than the first depth, while the first type of virtual content is being presented; and
responsive to detecting a user selection of a particular application through the navigable menu, rendering, in the display of the AR system, within the pod user interface virtual construct, the particular application in a 3D view to the user.
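The claimed sequence of steps can be sketched as a minimal, purely illustrative Python flow. Every class, function, and value below (VirtualContent, PodUI, run_interaction, the example depths and app names) is hypothetical and stands in for the claim language; it does not correspond to any real AR SDK:

```python
from dataclasses import dataclass, field

@dataclass
class VirtualContent:
    kind: str
    depth: float  # rendering depth (e.g., meters from the viewer)

@dataclass
class PodUI:
    # Pod user interface virtual construct with a navigable menu of
    # virtual spaces, each holding one or more applications.
    depth: float
    spaces: dict = field(default_factory=dict)
    rendered_app: str = None

def run_interaction(surface_depth, spaces, selection):
    """Walk the claimed sequence: first gesture -> content at the object's
    surface depth; second gesture -> pod UI at a lesser depth while the
    first content persists; selection -> app rendered within the pod."""
    display = []

    # First gesture: present the first type of virtual content at a depth
    # corresponding to the physical object's surface.
    display.append(VirtualContent(kind="surface-anchored", depth=surface_depth))

    # Second gesture: present the pod UI at a second depth less than the
    # first, without removing the first content from the display.
    pod = PodUI(depth=surface_depth * 0.5, spaces=spaces)
    display.append(pod)

    # User selection: render the chosen application in a 3D view within
    # the pod UI construct.
    space, app = selection
    if app in pod.spaces.get(space, []):
        pod.rendered_app = app
    return display

display = run_interaction(
    surface_depth=2.0,
    spaces={"workspace": ["browser", "mail"]},
    selection=("workspace", "browser"),
)
```

The sketch makes the depth relationship explicit: the pod UI's depth is strictly less than the first content's depth, and both remain on the display list simultaneously, mirroring the "while the first type of virtual content is being presented" limitation.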
Abstract
A method for generating virtual content for presentation in an AR system includes, under control of a hardware processor included in the AR system, analyzing pose data to identify a pose of a user of the AR system. The method also includes identifying a physical object in a 3D physical environment of the user based at least partly on the pose. The method further includes, responsive to detecting a first gesture, presenting a first type of virtual content in a display of the AR system. Moreover, the method includes, responsive to detecting a second gesture, presenting a pod user interface virtual construct comprising a navigable menu. In addition, the method includes, responsive to detecting a selection of a particular application through the navigable menu, rendering, in the display of the AR system, within the pod user interface virtual construct, the particular application in a 3D view to the user.
10 Claims
1. A method for generating virtual content for presentation in an augmented reality (AR) system, the method comprising:
under control of a hardware processor included in the AR system:
analyzing data acquired from a pose sensor to identify a pose of a user of the AR system;
identifying a physical object in a three dimensional (3D) physical environment of the user based at least partly on the pose;
responsive to detecting a first gesture of the user providing an indication to initiate an interaction with the physical object, presenting a first type of virtual content in a display of the AR system, the first type of virtual content presented at a first depth that corresponds to a physical location of a surface of the physical object;
responsive to detecting a second gesture of the user, presenting a pod user interface virtual construct comprising a navigable menu that includes a set of available virtual spaces, each available virtual space including one or more applications providing a respective second type of virtual content that is different than the first type of virtual content, wherein the pod user interface virtual construct is presented, at a second depth that is less than the first depth, while the first type of virtual content is being presented; and
responsive to detecting a user selection of a particular application through the navigable menu, rendering, in the display of the AR system, within the pod user interface virtual construct, the particular application in a 3D view to the user.
- View Dependent Claims (2, 3, 4, 5)
6. An augmented reality (AR) system for generating virtual content, the system comprising:
a display system of a wearable device configured to present virtual content;
a pose sensor operatively coupled to the display system; and
a hardware processor in communication with the display system and the pose sensor, the hardware processor programmed to:
analyze data acquired from the pose sensor to identify a pose of a user of the AR system;
identify a physical object in a three dimensional (3D) physical environment of the user based at least partly on the pose;
responsive to detecting a first gesture of the user providing an indication to initiate an interaction with the physical object, present a first type of virtual content in a display of the AR system, the first type of virtual content presented at a first depth that corresponds to a physical location of a surface of the physical object;
responsive to detecting a second gesture of the user, present a pod user interface virtual construct comprising a navigable menu that includes a set of available virtual spaces, each available virtual space including one or more applications providing a respective second type of virtual content that is different than the first type of virtual content, wherein the pod user interface virtual construct is presented, at a second depth that is less than the first depth, while the first type of virtual content is being presented; and
responsive to detecting a user selection of a particular application through the navigable menu, instruct the display system to render, within the pod user interface virtual construct, the particular application in a 3D view to the user.
- View Dependent Claims (7, 8, 9, 10)
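The system claim recites three cooperating components: a display system, a pose sensor, and a hardware processor that coordinates them. A minimal wiring sketch follows; all class names, method names, and values (PoseSensor, DisplaySystem, ARSystem, the stubbed pose and depth figures) are hypothetical illustrations, not any actual device firmware:

```python
class PoseSensor:
    # Hypothetical stand-in for the claimed pose sensor; a real device
    # would fuse IMU and camera data into a head/hand pose.
    def read(self):
        return {"position": (0.0, 1.6, 0.0), "gaze": (0.0, 0.0, -1.0)}

class DisplaySystem:
    # Stand-in for the wearable display system: records what is rendered
    # and at which depth.
    def __init__(self):
        self.layers = []

    def render(self, label, depth):
        self.layers.append((label, depth))

class ARSystem:
    # Models the hardware processor's programmed behavior: it reads the
    # pose sensor and instructs the display system, per claim 6.
    def __init__(self):
        self.sensor = PoseSensor()
        self.display = DisplaySystem()

    def identify_object(self):
        pose = self.sensor.read()  # pose informs object identification
        # Object identification is stubbed: assume a surface found along
        # the gaze direction at a fixed depth.
        return {"surface_depth": 2.0}

    def on_first_gesture(self, obj):
        # First type of content at the physical surface's depth.
        self.display.render("first-type content", obj["surface_depth"])

    def on_second_gesture(self, obj):
        # Pod UI presented at a second depth less than the first.
        self.display.render("pod UI menu", obj["surface_depth"] * 0.5)

    def on_selection(self, app):
        # Instruct the display system to render the selected application
        # within the pod UI, at the pod's depth.
        pod_depth = self.display.layers[-1][1]
        self.display.render("app:" + app, pod_depth)

ar = ARSystem()
obj = ar.identify_object()
ar.on_first_gesture(obj)
ar.on_second_gesture(obj)
ar.on_selection("browser")
```

Modeling the processor as the coordinating object keeps the claimed separation visible: the sensor only produces pose data, the display only renders what it is instructed to, and all sequencing logic lives in the processor.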
-
Specification