Methods, systems, and computer readable media for unified scene acquisition and pose tracking in a wearable display

  • US 10,365,711 B2
  • Filed: 05/17/2013
  • Issued: 07/30/2019
  • Est. Priority Date: 05/17/2012
  • Status: Active Grant
First Claim

1. A system for unified scene acquisition and pose tracking in a wearable display, the system comprising:

  • a wearable frame configured to be worn on the head of a user, the frame having:

    at least one camera mounted to the wearable frame for acquiring scene information for a real scene proximate to the user, the scene information including images and depth information, the scene information including positions of real objects separate from the user in the real scene local to the user;

    at least one sensor mounted to the wearable frame for acquiring images of gestures and body poses of the user;

    a pose tracker mounted to the wearable frame for generating, based on the scene information, a 3D model of the scene, generating, based on the images of gestures and body poses of the user acquired by the at least one sensor, a 3D model of the user, and estimating a position and orientation of the user in relation to the 3D model of the scene based on the images and depth information acquired by the at least one camera mounted to the frame and the images of gestures and body poses of the user acquired by the at least one sensor;

    a rendering unit mounted to the wearable frame for generating a virtual reality (VR) image based on the scene information acquired by the at least one camera and the estimated position and orientation of the user in relation to the 3D model of the scene, wherein the rendering unit receives, from a location remote from the user, images and depth information of real objects acquired in a remote scene, the images and depth information of real objects including an image and depth information of a virtual participant in a meeting, wherein the rendering unit receives the positions of the real objects in the scene local to the user, and determines, based on the positions of the real objects and a perceived location of the virtual participant, portions of the image of the virtual participant to occlude in the VR image, wherein the image of the virtual participant comprises an image of a human participant captured by a camera local to the human participant and remote from the user; and

    at least one display mounted to the frame for displaying to the user a combination of the generated VR image and the scene local to the user, wherein the VR image includes the image of the virtual participant with the portions occluded as determined by the rendering unit.
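The occlusion step recited in the rendering-unit limitation — comparing the positions of real local objects against the perceived location of the virtual participant and hiding the overlapped portions of the participant's image — amounts to a per-pixel depth test. The sketch below is a minimal illustration of that idea, not the patent's implementation; every function and variable name is an assumption introduced for this example.

```python
import numpy as np

def occlusion_mask(local_depth, participant_depth, participant_alpha):
    """Return a boolean mask of participant pixels hidden by real local objects.

    local_depth:       (H, W) depth of the real scene, metres from the user.
    participant_depth: (H, W) depth of the virtual participant at the
                       perceived location (np.inf where no participant pixel).
    participant_alpha: (H, W) coverage of the participant image (0 or 1).
    """
    # A participant pixel is occluded wherever a real object is nearer.
    nearer_real_object = local_depth < participant_depth
    return (participant_alpha > 0) & nearer_real_object

def composite(local_rgb, participant_rgb, visible):
    """Overlay the visible (non-occluded) participant pixels onto the view."""
    out = local_rgb.copy()
    out[visible] = participant_rgb[visible]
    return out

# Tiny 2x2 example: a real object at depth 1.0 m sits in front of the
# participant (perceived at 2.0 m) in the top-left pixel only.
local_depth = np.array([[1.0, 3.0],
                        [3.0, 3.0]])
participant_depth = np.full((2, 2), 2.0)
participant_alpha = np.ones((2, 2))

occluded = occlusion_mask(local_depth, participant_depth, participant_alpha)
visible = (participant_alpha > 0) & ~occluded
# occluded is True only at [0, 0]; the other three participant pixels render.
```

The design choice here mirrors standard z-buffering: because the claim supplies depth information for both the local scene and the remote participant, a per-pixel comparison suffices, with no explicit 3D geometry intersection required.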
