Head-mounted display with facial expression detecting capability
Abstract
Embodiments relate to detecting a user's facial expressions in real time using a head-mounted display unit that includes a 2D camera (e.g., an infrared camera) that captures the user's eye region, and a depth camera or another 2D camera that captures the user's lower facial features, including the lips, chin, and cheeks. The images captured by the first and second cameras are processed to extract parameters associated with facial expressions. The parameters can be sent or processed so that a digital representation of the user, including the facial expression, can be obtained.
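The two-camera pipeline summarized in the abstract can be sketched in code. This is a hypothetical illustration only: the function names, the use of 2D landmarks, and the fusion-by-concatenation design are assumptions, not the patent's implementation.

```python
import numpy as np

def extract_expression_parameters(eye_image, lower_face_image,
                                  detect_eye_landmarks,
                                  detect_lower_face_landmarks,
                                  fit_parameters):
    """Hypothetical sketch of the two-camera pipeline: landmarks detected
    in the eye-region image (first camera) and the lower-face image
    (second camera) are pooled and fitted to expression parameters that
    can drive a digital representation of the user."""
    upper = detect_eye_landmarks(eye_image)              # eye-region landmarks
    lower = detect_lower_face_landmarks(lower_face_image)  # lips/chin/cheek landmarks
    landmarks = np.concatenate([upper, lower], axis=0)   # fuse both views
    return fit_parameters(landmarks)                     # expression parameters
```

The detector and fitting callables are placeholders; any landmark detector and parameter model could be plugged in.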
9 Claims
1. A method for detecting facial expressions, the method comprising:

capturing, by a first image capturing device on a main body of a head-mounted display, first images of an upper portion of a user's face including the user's eye region;

capturing, by a second image capturing device on an extension member of the head-mounted display extending downwards from the main body toward a lower portion of the user's face, second images of the user including the lower portion of the user's face; and

processing the first images and the second images to extract facial expression parameters representing a facial expression of the user, the processing of the first images and the second images comprising:

performing rigid stabilization to determine a relative pose of the user's face relative to the first image capturing device and the second image capturing device.

(Dependent claims 2-7 depend from claim 1.)
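The rigid stabilization recited in claim 1 (determining the relative pose of the face with respect to the cameras) can be illustrated with a standard technique: the Kabsch / orthogonal Procrustes algorithm, which aligns reference landmarks on a neutral face to observed landmarks. This is a generic sketch under assumed 3D landmark inputs, not the patent's specific implementation.

```python
import numpy as np

def rigid_stabilization(neutral_landmarks, observed_landmarks):
    """Estimate the rotation R and translation t mapping neutral-face
    landmarks onto observed landmarks (Kabsch algorithm), giving the
    face's pose relative to the capturing devices."""
    P = np.asarray(neutral_landmarks, float)   # (N, 3) reference points
    Q = np.asarray(observed_landmarks, float)  # (N, 3) observed points
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)    # centroids
    H = (P - cP).T @ (Q - cQ)                  # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T    # proper rotation, det(R) = +1
    t = cQ - R @ cP                            # translation after rotation
    return R, t
```

In a head-mounted setting the same alignment could be run per camera, since each capturing device has its own extrinsics.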
8. A virtual or augmented reality system comprising:

a head-mounted display unit, comprising: a first capturing device configured to capture first images of an upper portion of a user's face including an eye region; a second capturing device at a location below the first capturing device and configured to capture second images of a lower portion of the user's face; a display device configured to display images to the user; a body configured to mount the first capturing device and the display device; and an extension member extending from the body toward the lower portion of the user's face, the second capturing device mounted on the extension member; and

a computing device communicatively coupled to the head-mounted display unit and configured to: receive the first and second images from the head-mounted display unit; perform calibration by (i) generating a personalized neutral face mesh based on rigid stabilization, and (ii) building a personalized tracking model by applying a deformation transfer technique to the personalized neutral face mesh; and process the first images and the second images to extract facial expression parameters representing a facial expression of the user by fitting at least a blendshape model to landmark locations in the first images and the second images based on the personalized tracking model to obtain the facial expression parameters.

(Dependent claim 9 depends from claim 8.)
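The blendshape fitting recited in claim 8 can be illustrated with a simple regularized least-squares solve: find weights w so that the neutral landmarks plus a weighted sum of blendshape deltas best matches the observed landmark locations. This is one common formulation, sketched under assumptions (ridge regularization, clipping weights to [0, 1]); production trackers typically use properly constrained solvers.

```python
import numpy as np

def fit_blendshape_weights(neutral, blendshapes, observed, reg=1e-3):
    """Fit blendshape weights w minimizing
    || (neutral + sum_k w_k * blendshapes[k]) - observed ||^2 + reg * ||w||^2.
    `neutral` and `observed` are (N, 3) landmark arrays; `blendshapes`
    is a list of K (N, 3) delta arrays (displacements from neutral)."""
    L0 = np.asarray(neutral, float).ravel()            # (3N,) neutral landmarks
    B = np.stack([np.asarray(b, float).ravel()
                  for b in blendshapes], axis=1)       # (3N, K) delta basis
    y = np.asarray(observed, float).ravel() - L0       # residual to explain
    K = B.shape[1]
    A = B.T @ B + reg * np.eye(K)                      # ridge-regularized normal equations
    w = np.linalg.solve(A, B.T @ y)
    return np.clip(w, 0.0, 1.0)                        # crude box constraint on weights
```

The recovered weights are the facial expression parameters; applied to the personalized tracking model of claim 8, they would reproduce the tracked expression on the user's avatar.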
Specification