Method and system for creating event data and making same available to be served
Abstract
A method and system for creating event data, including 3-D data representing at least one participant in an event, and making the event data available to be served is provided. The system includes a communications network. A plurality of camera units are coupled to the communications network. The camera units are configured and installed at an event venue to generate a plurality of images, in a plurality of non-parallel detector planes spaced about the event venue, from waves which propagate from objects in the event, the objects including the at least one participant. The camera units include a plurality of detectors for measuring energy in the images in the detector planes, to produce a plurality of signals obtained from different directions with respect to the at least one participant, and a plurality of signal processors to process the plurality of signals from the plurality of detectors with at least one control algorithm to obtain image data. A processor subsystem is coupled to the communications network to process the image data to obtain the event data, including the 3-D data. A server, which includes a data engine, is in communication with the processor subsystem through the communications network. The server is configured to receive the event data, including the 3-D data, from the processor subsystem and to make the event data available to be served.
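For orientation, here is a minimal sketch of the system topology the abstract describes: calibrated camera units feed a processor subsystem over a communications network, and a server makes the resulting event model available to clients. Every class and method name below is a hypothetical illustration chosen for this sketch, not terminology from the patent beyond what the abstract itself uses.

```python
from dataclasses import dataclass, field

@dataclass
class CameraUnit:
    """A calibrated camera unit: its detectors measure energy in a
    detector plane, and its signal processors reduce the raw signals
    to image data."""
    unit_id: int
    position: tuple        # predetermined position in venue coordinates

@dataclass
class EventModel:
    """The combined description of the event: validated 3-D feature
    data plus 3-D positional sound data."""
    features: list = field(default_factory=list)
    sounds: list = field(default_factory=list)

class Server:
    """Receives the event model from the processor subsystem over the
    communications network and makes it available to be served."""
    def __init__(self):
        self._events: dict[str, EventModel] = {}

    def publish(self, event_id: str, model: EventModel) -> None:
        self._events[event_id] = model

    def serve(self, event_id: str) -> EventModel:
        return self._events[event_id]
```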
Claims
1. A method for the efficient capture, analysis, tracking and presentation of event data, the method comprising:
a) providing a plurality of camera units at predetermined positions relative to an event, each of the camera units having an associated field of view of the event, the plurality of camera units being calibrated in 3-D with respect to a venue for the event;
b) generating from the camera units a set of image projections of a dynamic object in the event as image data, wherein the dynamic object is a participant at the event;
c) processing the image data from step b) with image segmentation to extract image features of the dynamic object in the event;
d) transforming the image features from step c) into 3-D features using the 3-D calibration of the camera units;
e) intersecting the 3-D features from step d) to create 3-D candidate object feature data, the 3-D candidate object feature data describing a position, a pose, and an appearance of the dynamic object in the event, wherein the dynamic object includes a plurality of rigid parts, the rigid parts including the head, hands, legs, and feet of the participant at the event, wherein the pose of the dynamic object includes a description of a yaw, a pitch, and a roll of the dynamic object, and wherein the pose of the dynamic object includes a pose of each of the rigid parts of the dynamic object;
f) removing any of the 3-D candidate object feature data from step e) having at least one of an intersection error greater than a predetermined 3-D distance and an impossible 3-D position, wherein the 3-D candidate object feature data not having at least one of the intersection error greater than a predetermined 3-D distance and an impossible 3-D position is validated 3-D feature data;
g) applying known physical laws and forces to compute additional 3-D candidate object feature data including the position, the pose, and the appearance of the dynamic object in the event when the dynamic object is missing from the view of all of the camera units, wherein the computed additional 3-D candidate object feature data includes the pose of each of the rigid parts of the dynamic object when the dynamic object is missing from the view of all of the camera units, wherein previous or subsequent 3-D candidate object feature data can be used to fill-in the pose of each of the rigid parts of the dynamic object as the computed additional 3-D candidate object feature data using consistency rules and the known physical laws and forces when the dynamic object is missing from the view of all of the camera units, wherein the computed additional 3-D candidate object feature data is included in the validated 3-D feature data;
h) acquiring and processing a plurality of sounds from different locations at the event venue to obtain sound data;
i) processing the sound data from step h) to obtain 3-D positional sound data;
j) combining the validated 3-D feature data from steps f) and g) and the 3-D positional sound data from step i) into a description of the event to generate an event model for presentation to a client; and
k) permitting a user of the event model to select any view point within the event model for experiencing the event through the client, the user viewing and hearing the event from the selected view point.
- View Dependent Claims (2, 3, 4, 5, 6, 7)
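Steps d) through f) of claim 1 amount to multi-view triangulation with outlier rejection. The sketch below is a non-authoritative illustration in Python/NumPy, not the patent's implementation: the function names, the pinhole calibration model, the 5 cm error threshold, and the venue bounds are all assumptions made for this sketch.

```python
import numpy as np

def pixel_to_ray(K, R, t, pixel):
    """Back-project a segmented 2-D image feature into a 3-D ray using
    the camera unit's 3-D calibration (intrinsics K, rotation R,
    translation t) -- step d)."""
    origin = -R.T @ t                       # camera centre in venue coordinates
    d = R.T @ np.linalg.inv(K) @ np.array([pixel[0], pixel[1], 1.0])
    return origin, d / np.linalg.norm(d)

def intersect_rays(rays):
    """Intersect the back-projected rays in the least-squares sense to
    create a 3-D candidate feature -- step e). Assumes at least two
    non-parallel rays. Returns the candidate point and its intersection
    error (RMS distance from the point to the rays)."""
    A, b = np.zeros((3, 3)), np.zeros(3)
    for origin, d in rays:
        P = np.eye(3) - np.outer(d, d)      # projector onto the plane normal to the ray
        A += P
        b += P @ origin
    point = np.linalg.solve(A, b)
    residuals = [np.linalg.norm((np.eye(3) - np.outer(d, d)) @ (point - o))
                 for o, d in rays]
    return point, float(np.sqrt(np.mean(np.square(residuals))))

def is_valid(point, err, max_err=0.05,
             bounds=((-60.0, 60.0), (-40.0, 40.0), (0.0, 25.0))):
    """Step f): a candidate survives only if its intersection error is
    within the predetermined 3-D distance AND its position is possible
    (approximated here as lying inside assumed venue bounds)."""
    inside = all(lo <= c <= hi for c, (lo, hi) in zip(point, bounds))
    return err <= max_err and inside
```

Step g) fills gaps when the object leaves every camera's field of view. Below is a sketch under the simplest "known physical law": free flight under gravity between the last and next validated fixes. The ballistic model and the gravity constant are assumptions; the patent's consistency rules and per-part pose fill-in are not reproduced here.

```python
import numpy as np

GRAVITY = np.array([0.0, 0.0, -9.81])       # venue z-axis assumed vertical

def fill_missing(p0, t0, p1, t1, t):
    """Compute an additional 3-D candidate position at time t (t0 < t < t1)
    when the object is missing from the view of all camera units, by
    fitting a ballistic arc through the validated fixes p0 and p1."""
    dt = t1 - t0
    v0 = (p1 - p0 - 0.5 * GRAVITY * dt ** 2) / dt   # velocity consistent with both fixes
    s = t - t0
    return p0 + v0 * s + 0.5 * GRAVITY * s ** 2
```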
8. A stereo camera system for the efficient capture, analysis, tracking and presentation of event data, the system comprising:
a communications network;
a plurality of camera units coupled to said communications network and located at predetermined positions relative to an event, each of said camera units having an associated field of view of the event, said plurality of camera units being calibrated in 3-D with respect to a venue of the event, and generating from said camera units a set of image projections of a dynamic object in the event as image data, wherein the dynamic object is a participant at the event;
an audio subsystem coupled to said communications network, said audio subsystem being configured and installed at the event venue to acquire and process a plurality of sounds from different locations at the event venue to obtain sound data, wherein said processor subsystem processes the sound data to obtain 3-D positional sound data;
a processor subsystem coupled to said communications network to process the image data with image segmentation to extract image features of the dynamic object in the event, to transform the image features into 3-D features using the 3-D calibration of said camera units, intersect the 3-D features creating 3-D candidate object feature data describing a position, a pose, and an appearance of the dynamic object in the event, wherein the dynamic object includes a plurality of rigid parts, the rigid parts including the head, hands, legs, and feet of the participant at the event, wherein the pose of the dynamic object includes a description of a yaw, a pitch, and a roll of the dynamic object, and wherein the pose of the dynamic object includes a pose of each of the rigid parts of the dynamic object, remove any of the 3-D candidate object feature data having at least one of an intersection error greater than a predetermined 3-D distance and an impossible 3-D position, wherein the 3-D candidate object feature data not having at least one of the intersection error greater than a predetermined 3-D distance and an impossible 3-D position is validated 3-D feature data, apply known physical laws and forces to compute additional 3-D candidate object feature data including the position, the pose, and the appearance of the dynamic object in the event when the dynamic object is missing from the view of all of the camera units, wherein the computed additional 3-D candidate object feature data includes the pose of each of the rigid parts of the dynamic object when the dynamic object is missing from the view of all of the camera units, wherein previous or subsequent 3-D candidate object feature data can be used to fill-in the pose of each of the rigid parts of the dynamic object as the computed additional 3-D candidate object feature data using consistency rules and the known physical laws and forces when the dynamic object is missing from the view of all of the camera units, wherein the computed additional 3-D candidate object feature data is included in the validated 3-D feature data, combine the validated 3-D feature data and the 3-D positional sound data into a description of the event to generate an event model for presentation to a client, and permit a user of the event model to select any view point within the event model for experiencing the event through the client; and
a server including a data engine, said server being in communication with said processor subsystem through said communications network, said server being configured to receive the event model from said processor subsystem and to make the event model available to be served to a client for viewing and listening by the user from the selected view point.
- View Dependent Claims (9, 10, 11, 12, 13, 14, 15)
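The audio subsystem's conversion of sounds acquired "from different locations at the event venue" into 3-D positional sound data is, in essence, acoustic source localization. A minimal sketch using time-difference-of-arrival with a coarse grid search follows; the microphone model, the speed-of-sound constant, and the grid resolution are illustrative assumptions, not details from the patent.

```python
import numpy as np

SPEED_OF_SOUND = 343.0   # m/s, assumed constant across the venue

def locate_sound(mic_positions, arrival_times, bounds, step=0.5):
    """Grid-search the venue for the 3-D point whose predicted
    time-differences-of-arrival across the microphones best match the
    measured ones (the unknown emission time cancels out of the
    differences relative to the first microphone)."""
    mics = np.asarray(mic_positions, dtype=float)
    t = np.asarray(arrival_times, dtype=float)
    best, best_err = None, np.inf
    grid = [np.arange(lo, hi, step) for lo, hi in bounds]
    for x in grid[0]:
        for y in grid[1]:
            for z in grid[2]:
                p = np.array([x, y, z])
                tof = np.linalg.norm(mics - p, axis=1) / SPEED_OF_SOUND
                err = float(np.sum(((tof - tof[0]) - (t - t[0])) ** 2))
                if err < best_err:
                    best, best_err = p, err
    return best
```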
Specification