Methods and systems for enabling depth and direction detection when interfacing with a computer program
Abstract
A method for detecting direction when interfacing with a computer program is provided. The method includes capturing an image presented in front of an image capture device. The image capture device has a capture location in a coordinate space. When a person is captured in the image, the method includes identifying a human head in the image and assigning the human head a head location in the coordinate space. The method also includes identifying an object held by the person in the image and assigning the object an object location in coordinate space. The method further includes identifying a relative position in coordinate space between the head location and the object location when viewed from the capture location. The relative position includes a dimension of depth. The method may be practiced on a computer system, such as one used in the gaming field.
34 Claims
1. A computer-implemented method for detecting depth and direction when interfacing with a computer program, comprising:

(a) capturing one or more images with at least one depth camera, wherein each depth camera has a capture location in a coordinate space and the image includes a person;

(b) identifying a human head of the person in the image and assigning the human head a head location in the coordinate space;

(c) identifying an object held by the person in the image and assigning the object an object location in coordinate space;

(d) identifying a relative position in coordinate space between the head location and the object location when viewed from the capture location, wherein identifying the relative position includes computing an azimuth angle and an altitude angle between the head location and the object location in relation to the capture location, wherein the relative position includes a dimension of depth with respect to the coordinate space, wherein the dimension of depth is determined from analysis of the one or more images; and

(e) displaying the pointing direction of the object on the display screen.

View Dependent Claims (2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19)
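Step (d) of claim 1 computes an azimuth angle and an altitude angle between the head location and the object location relative to the capture location. The patent does not give an implementation; the following is a minimal sketch of one plausible reading, assuming a camera-centered coordinate frame (x right, y up, z away from the camera) that is not specified in the claim:

```python
import math

def pointing_angles(head, obj):
    """Azimuth and altitude (degrees) of the ray from the head location
    through the object location, in a camera-centered frame.

    One plausible reading of claim 1(d), not the patent's actual
    implementation. Coordinates are (x, y, z): x right, y up, z away
    from the camera.
    """
    dx = obj[0] - head[0]
    dy = obj[1] - head[1]
    dz = obj[2] - head[2]
    azimuth = math.degrees(math.atan2(dx, dz))                   # left/right
    altitude = math.degrees(math.atan2(dy, math.hypot(dx, dz)))  # up/down
    return azimuth, altitude

# Object held 0.5 m to the right of and level with the head:
# azimuth is roughly 90 degrees, altitude roughly 0, under this convention.
az, alt = pointing_angles(head=(0.0, 1.5, 2.0), obj=(0.5, 1.5, 2.0))
```

The resulting angles, together with the depth dimension, define the pointing direction that step (e) would render on the display screen.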
identifying a second characteristic of the object held by the person at a second point in time, wherein the trigger event is activated when a degree of difference is determined to have existed between the first characteristic and the second characteristic when examined between the first point in time and the second point in time.
11. The method of claim 10, wherein the trigger event being activated is indicative of interactivity with the interactive graphics.
12. The method of claim 11, wherein the interactivity can include one or more of selection of a graphic, shooting of a graphic, touching a graphic, moving of a graphic, activation of a graphic, triggering of a graphic and acting upon or with a graphic.
13. The method of claim 1, wherein identifying the human head is processed using template matching in combination with face detection code.

14. The method of claim 1, wherein identifying the object held by the person is facilitated by color tracking of a portion of the object.

15. The method of claim 14, wherein color tracking includes one or a combination of identifying differences in colors and identifying on/off states of colors.

16. The method of claim 6, wherein identifying the object held by the person is facilitated by identification of changes in positions of the object when repeating (a)-(d).

17. The method of claim 1, wherein the computer program is a video game.

18. The method of claim 1, further comprising synchronizing the image capture devices.

19. The method of claim 18, wherein synchronizing the image capture devices includes providing a strobe signal that is visible to each image capture device.
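Claims 14 and 15 describe tracking the held object by color, including identifying on/off states of colors (for example, a controllable light on the object). The patent gives no implementation; this is a minimal sketch under assumed inputs, where a `patch` is a small list of (r, g, b) pixel tuples cropped around the tracked portion of the object:

```python
def mean_color(patch):
    """Average (r, g, b) over a patch of pixels.

    Illustrative helper, not from the patent; `patch` is assumed to be
    a list of (r, g, b) tuples already cropped around the object.
    """
    n = len(patch)
    return tuple(sum(p[i] for p in patch) / n for i in range(3))

def light_is_on(patch, threshold=128):
    """Crude on/off test for a tracked light on the held object:
    'on' when the patch's mean brightness exceeds the threshold.
    The threshold value is an assumption for illustration."""
    r, g, b = mean_color(patch)
    return (r + g + b) / 3 > threshold

bright = [(250, 240, 245)] * 4   # light lit
dark = [(20, 15, 25)] * 4        # light off
```

Comparing `mean_color` between frames would similarly support the "identifying differences in colors" branch of claim 15, and a transition of `light_is_on` between frames is one way the trigger event of claim 10 could be detected.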
20. A method for detecting depth and direction when interfacing with a computer program, comprising:

(a) capturing an image presented in front of one or more image capture devices, wherein the image capture device has a capture location in a coordinate space;

when a person is captured in the image,

(b) identifying a human head in the image and assigning the human head a head location in the coordinate space;

(c) identifying an object held by the person in the image and assigning the object an object location in coordinate space;

(d) receiving at a plurality of microphones at known positions relative to the image capture device a sound signal originating from the object;

(e) identifying a relative position in coordinate space between the head location and the object location when viewed from the capture location, wherein the relative position in coordinate space is determined from relative times of arrival of the sound signal at different microphones, wherein the relative position includes a dimension of depth with respect to the coordinate space.
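Claim 20 localizes the object from relative times of arrival of its sound at a microphone array. The patent does not specify the math; a textbook far-field time-difference-of-arrival (TDOA) sketch for a two-microphone baseline, with assumed parameter values, looks like this:

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air at room temperature

def bearing_from_tdoa(delta_t, mic_spacing):
    """Far-field bearing (degrees from broadside) of a sound source,
    from the arrival-time difference at two microphones.

    A standard TDOA approximation, not the patent's method:
    sin(theta) = c * delta_t / d, valid when the source is far
    away relative to the microphone spacing d.
    """
    s = SPEED_OF_SOUND * delta_t / mic_spacing
    s = max(-1.0, min(1.0, s))  # clamp against measurement noise
    return math.degrees(math.asin(s))

# Equal arrival times -> source lies broadside to the 0.2 m baseline.
angle = bearing_from_tdoa(0.0, mic_spacing=0.2)
```

With more than two microphones at known positions, intersecting the bearings (or solving the hyperbolic TDOA equations) yields a full position including the depth dimension recited in step (e).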
21. A method for detecting pointing direction of an object directed toward a display screen that can render graphics of a computer program, comprising:

(a) capturing two or more stereo images of a scene presented in front of a depth camera having a capture location in a coordinate space that is proximate to the display screen;

when a person is captured in the image,

(b) identifying a first body part of the person in the image and assigning the first body part a first location in the coordinate space;

(c) identifying a second body part of the person in the image and assigning the second body part a second location in coordinate space; and

(d) identifying a relative position in coordinate space between the first location and the second location when viewed from the capture location, wherein identifying the relative position includes computing an azimuth angle and an altitude angle between the first location and the second location in relation to the capture location, wherein the relative position includes a dimension of depth determined by the depth camera.

View Dependent Claims (22, 23, 25, 26, 27, 28, 29, 30, 31, 32, 33)
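Claim 21 obtains the dimension of depth from stereo images captured by a depth camera. The standard geometry for a rectified stereo pair is z = f * B / d; the sketch below shows this relationship with illustrative parameter values (the patent specifies no particular camera):

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Depth of a scene point from a rectified stereo pair:
    z = focal_length * baseline / disparity.

    Standard stereo geometry, shown as one way a depth camera could
    supply the 'dimension of depth' in claim 21; the parameter values
    used below are illustrative, not from the patent.
    """
    if disparity_px <= 0:
        raise ValueError("point not visible in both images")
    return focal_px * baseline_m / disparity_px

# 700 px focal length, 6 cm baseline, 21 px disparity -> point is 2 m away.
z = depth_from_disparity(700.0, 0.06, 21.0)
```

Applying this to the pixel locations of the two body parts gives their 3-D coordinates, from which the azimuth and altitude of step (d) follow.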
24. The method of claim 21, wherein (a)-(d) are repeated continually during execution of the computer program, further comprising examining a shape of the human hand during the repeating of (a)-(d) to determine particular shape changes.
34. A method for detecting pointing direction of an object directed toward a display screen that can render graphics of a computer program, comprising:

(a) capturing an image presented in front of an image capture device, the image capture device having a capture location in a coordinate space that is proximate to the display screen;

when a person is captured in the image,

(b) identifying a first body part of the person in the image and assigning the first body part a first location in the coordinate space;

(c) identifying a second body part of the person in the image and assigning the second body part a second location in coordinate space;

(d) receiving at a plurality of microphones at known positions relative to the image capture device a sound signal originating from the object; and

(e) identifying a relative position in coordinate space between the first location and the second location when viewed from the capture location, wherein the relative position includes a dimension of depth, wherein the relative position in coordinate space is determined from relative times of arrival of the sound signal at different microphones.
Specification