Methods for capturing depth data of a scene and applying computer actions
Abstract
A computer-implemented method is provided to automatically apply predefined privileges for identified and tracked users in a space having one or more media sources. The method includes an operation to define and save a user profile to memory. The user profile may include data that identifies and tracks a user with a depth-sensing camera. In another operation, privileges that define levels of access to particular media for the user profile are defined and saved. The method also includes an operation to capture image and depth data of a scene within the space from the depth-sensing camera. In yet another operation, the user is tracked and identified within the scene from the image and depth data. In still another operation, the defined privileges are automatically applied to one or more media sources, so that the user is granted access to selected content from the one or more media sources.
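The privilege-application flow described in the abstract can be sketched in Python. All names below (`UserProfile`, `apply_privileges`, numeric access levels) are illustrative assumptions; the patent does not specify a data model:

```python
from dataclasses import dataclass, field

# Hypothetical data model for the abstract's flow: a stored user profile
# holding identification data and per-media-source privilege levels.
@dataclass
class UserProfile:
    name: str
    depth_signature: list                           # identifying image/depth data (placeholder)
    privileges: dict = field(default_factory=dict)  # media source -> access level

def apply_privileges(profile, media_sources):
    """Grant the identified user access only to content permitted by the profile."""
    granted = {}
    for source, content_ratings in media_sources.items():
        level = profile.privileges.get(source, 0)
        # Select only content rated at or below the profile's privilege level.
        granted[source] = [title for title, rating in content_ratings.items()
                           if rating <= level]
    return granted

profile = UserProfile("child", depth_signature=[], privileges={"tv": 1})
media = {"tv": {"cartoons": 1, "news": 2}}
apply_privileges(profile, media)  # -> {'tv': ['cartoons']}
```

The sketch assumes identification has already happened; it shows only how stored privileges gate content once a user is matched to a profile.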
12 Claims
1. A computer-implemented method, comprising:

(a) defining and saving to a memory, a user profile, the user profile including data for identifying and tracking the user with a depth sensing camera;
(b) defining and saving to the memory, animations to be integrated into a virtual world scene based on the user profile;
(c) capturing a scene using the depth sensing camera;
(d) identifying the user within the scene using the depth sensing camera, the identifying further configured to identify stationary objects in the scene, wherein points located on the stationary objects are used to at least partially outline the identified stationary objects; and
(e) automatically applying the defined animations onto at least one identified stationary object in the scene to be displayed on a screen, such that the defined animations are selected for the identified and tracked user.

Dependent claims: 2, 3, 4, 5, 6, 7, 8.
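Claim 1(d) does not specify how tracked points "at least partially outline" a stationary object; one conventional choice is a convex hull over the points. A minimal sketch using Andrew's monotone chain (the function name and 2-D point format are assumptions):

```python
def outline_points(points):
    """Outline an object from tracked points via a convex hull
    (Andrew's monotone chain); returns hull vertices counter-clockwise."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        # Cross product of vectors o->a and o->b; positive = left turn.
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    lower, upper = [], []
    for p in pts:                      # build lower hull left to right
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):            # build upper hull right to left
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]     # endpoints shared, drop duplicates

# Points tracked on a stationary object (e.g., a tabletop); the interior
# point is dropped from the outline:
outline_points([(0, 0), (2, 0), (2, 1), (0, 1), (1, 0.5)])
# -> [(0, 0), (2, 0), (2, 1), (0, 1)]
```

In a real depth-camera pipeline the points would come from depth-image segmentation rather than being given directly; the hull step is the same either way.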
9. A computer-implemented method, comprising:

(a) defining and saving to a memory, a user profile, the user profile including data for identifying and tracking the user with a depth sensing camera;
(b) defining and saving to the memory, animations to be applied into a virtual world scene associated with the user profile;
(c) capturing a scene using the depth sensing camera;
(d) identifying the user within the scene using the depth sensing camera;
(e) automatically applying the defined animations onto objects or stationary objects found in the captured scene using point tracking, the defined animations being pre-defined for the identified and tracked user, so that a display screen shows the applied animations.
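Claim 9(e) ties pre-defined animations to the identified user. This can be illustrated as a lookup from the identified user to the animations stored with that user's profile (the registry contents and helper names are hypothetical):

```python
# Hypothetical registry mapping each user profile to animations pre-defined
# for objects that may appear in the scene.
ANIMATIONS = {"alice": {"table": "sparkle", "wall": "ripple"}}

def apply_user_animations(user, scene_objects, registry=ANIMATIONS):
    """Attach each animation pre-defined for the identified user to the
    matching object found in the captured scene."""
    applied = []
    for obj in scene_objects:
        effect = registry.get(user, {}).get(obj)
        if effect:
            # In the claimed method these pairs would be rendered to the
            # display screen; here we just collect them.
            applied.append((obj, effect))
    return applied

apply_user_animations("alice", ["table", "chair"])  # -> [('table', 'sparkle')]
```

Objects with no animation defined for that user (here, the chair) are left unchanged, matching the claim's per-user selection.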
10. A computer-implemented method, comprising:

(a) defining a user profile, the user profile including image and depth data related to physical characteristics of a real-world user, the image and depth data captured by a depth-sensing camera;
(b) capturing image and depth data for a scene using the depth-sensing camera, wherein point tracking is used to identify stationary objects in the scene, the points being used to draw outlines of stationary objects found in the scene;
(c) identifying moving objects within the scene;
(d) locking the depth-sensing camera onto a human head within the scene;
(e) analyzing the image and depth data for the human head in real-time, the analysis including comparing image and depth data for the human head to user profile image and depth data related to physical characteristics, wherein a user is identified when image and depth data within the user profile substantially matches image and depth data for the head, and identifying animations pre-selected for the user profile when the user is identified; and
(f) applying the identified animations onto selected ones of the stationary objects identified in the scene.

Dependent claims: 11, 12.
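Claim 10(e) identifies a user when captured head data "substantially matches" stored profile data, without fixing a matching criterion. One simple interpretation is a mean-absolute-difference test against a tolerance (the feature format, threshold value, and function names are all assumptions):

```python
def substantially_matches(profile_data, head_data, tolerance=0.1):
    """A user matches when captured head features are within a tolerance of
    the stored profile features. The 0.1 threshold is an assumed stand-in
    for the claim's unquantified 'substantially matches'."""
    if len(profile_data) != len(head_data):
        return False
    # Mean absolute difference over per-feature depth values.
    error = sum(abs(p - h) for p, h in zip(profile_data, head_data)) / len(profile_data)
    return error <= tolerance

def identify_user(profiles, head_data):
    """Return the first profile name whose stored data substantially matches
    the head data captured by the depth-sensing camera, else None."""
    for name, data in profiles.items():
        if substantially_matches(data, head_data):
            return name
    return None

profiles = {"alice": [0.50, 0.42, 0.61], "bob": [0.90, 0.70, 0.40]}
identify_user(profiles, [0.52, 0.40, 0.60])  # -> 'alice'
```

A production system would use a learned face/head descriptor rather than raw depth values, but the gating logic (compare, threshold, then select that profile's pre-selected animations) follows the claim's structure.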
Specification