
Method and system for creating event data and making same available to be served

  • US 9,087,380 B2
  • Filed: 05/26/2004
  • Issued: 07/21/2015
  • Est. Priority Date: 05/26/2004
  • Status: Active Grant
First Claim

1. A method for the efficient capture, analysis, tracking and presentation of event data, the method comprising:

  • a) providing a plurality of camera units at predetermined positions relative to an event, each of the camera units having an associated field of view of the event, the plurality of camera units being calibrated in 3-D with respect to a venue for the event;

    b) generating from the camera units a set of image projections of a dynamic object in the event as image data, wherein the dynamic object is a participant at the event;

    c) processing the image data from step b) with image segmentation to extract image features of the dynamic object in the event;

    d) transforming the image features from step c) into 3-D features using the 3-D calibration of the camera units;

    e) intersecting the 3-D features from step d) to create 3-D candidate object feature data, the 3-D candidate object feature data describing a position, a pose, and an appearance of the dynamic object in the event, wherein the dynamic object includes a plurality of rigid parts, the rigid parts including the head, hands, legs, and feet of the participant at the event, wherein the pose of the dynamic object includes a description of a yaw, a pitch, and a roll of the dynamic object, and wherein the pose of the dynamic object includes a pose of each of the rigid parts of the dynamic object;

    f) removing any of the 3-D candidate object feature data from step e) having at least one of an intersection error greater than a predetermined 3-D distance and an impossible 3-D position, wherein the 3-D candidate object feature data not having at least one of the intersection error greater than a predetermined 3-D distance and an impossible 3-D position is validated 3-D feature data;

    g) applying known physical laws and forces to compute additional 3-D candidate object feature data including the position, the pose, and the appearance of the dynamic object in the event when the dynamic object is missing from the view of all of the camera units, wherein the computed additional 3-D candidate object feature data includes the pose of each of the rigid parts of the dynamic object when the dynamic object is missing from the view of all of the camera units, wherein previous or subsequent 3-D candidate object feature data can be used to fill-in the pose of each of the rigid parts of the dynamic object as the computed additional 3-D candidate object feature data using consistency rules and the known physical laws and forces when the dynamic object is missing from the view of all of the camera units, wherein the computed additional 3-D candidate object feature data is included in the validated 3-D feature data;

    h) acquiring and processing a plurality of sounds from different locations at the event venue to obtain sound data;

    i) processing the sound data from step h) to obtain 3-D positional sound data;

    j) combining the validated 3-D feature data from steps f) and g) and the 3-D positional sound data from step i) into a description of the event to generate an event model for presentation to a client; and

    k) permitting a user of the event model to select any view point within the event model for experiencing the event through the client, the user viewing and hearing the event from the selected view point.
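The sketches that follow are informal illustrations of how the processing steps recited in claim 1 might be realised; the claim does not prescribe any particular algorithm, so every concrete choice below (libraries, thresholds, coordinate conventions) is an assumption rather than a description of the patented system.

For steps b) and c), one plausible reading of "image segmentation to extract image features" is background subtraction on each fixed, calibrated camera feed, with blob centroids serving as candidate 2-D features. The sketch below assumes OpenCV 4 and its MOG2 subtractor.

```python
import cv2

# One subtractor per camera unit; MOG2 background subtraction is an assumed
# choice -- the claim only requires some form of image segmentation.
subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16)

def extract_image_features(frame):
    """Segment the dynamic object in one camera frame and return blob centroids
    as candidate 2-D image features (pixel coordinates)."""
    mask = subtractor.apply(frame)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)   # suppress speckle noise
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    features = []
    for c in contours:
        if cv2.contourArea(c) < 100:        # ignore tiny blobs (threshold is arbitrary)
            continue
        m = cv2.moments(c)
        features.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))
    return features
```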
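Steps d) through f) transform the 2-D features into 3-D using the camera calibration, intersect them, and discard candidates with a large intersection error or an impossible position. Assuming a standard pinhole calibration (intrinsics K, rotation R, translation t per camera), a minimal least-squares ray-intersection sketch looks like this; the error threshold and venue bounds are placeholders, not values from the patent.

```python
import numpy as np

def pixel_to_ray(K, R, t, pixel):
    """Back-project a 2-D image feature into a 3-D viewing ray (origin, direction)
    in venue coordinates, using the camera's 3-D calibration (pinhole model assumed)."""
    origin = -R.T @ t                                   # camera centre in world coordinates
    d = R.T @ np.linalg.inv(K) @ np.array([pixel[0], pixel[1], 1.0])
    return origin, d / np.linalg.norm(d)

def intersect_rays(rays):
    """Least-squares intersection of rays from several camera units: returns the 3-D
    point minimising the perpendicular distances to all rays, plus the RMS error."""
    A, b = np.zeros((3, 3)), np.zeros(3)
    for origin, d in rays:
        P = np.eye(3) - np.outer(d, d)                  # projector onto the plane normal to d
        A += P
        b += P @ origin
    point = np.linalg.solve(A, b)
    err = np.sqrt(np.mean([np.linalg.norm((np.eye(3) - np.outer(d, d)) @ (point - o)) ** 2
                           for o, d in rays]))
    return point, err

def is_valid(point, err, max_err_m=0.1, bounds=((-60, 60), (-40, 40), (0, 5))):
    """Step f): reject candidates whose intersection error exceeds a predetermined 3-D
    distance or whose position lies outside the venue volume (both limits are assumed)."""
    inside = all(lo <= c <= hi for c, (lo, hi) in zip(point, bounds))
    return err <= max_err_m and inside
```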
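Step g) fills in feature data when the object is hidden from every camera by applying known physical laws and by carrying information forward or backward from validated samples. A small sketch of that idea, assuming a z-up coordinate frame, ballistic motion for position, and linear blending of per-part yaw/pitch/roll angles as the consistency rule:

```python
import numpy as np

G = np.array([0.0, 0.0, -9.81])   # gravity; z-up venue coordinates assumed

def predict_position(p_last, v_last, dt):
    """Apply known physical laws (simple ballistic motion) to compute a position
    for a frame in which the object is missing from all camera views."""
    return p_last + v_last * dt + 0.5 * G * dt ** 2

def fill_in_pose(pose_before, pose_after, t_before, t_after, t_missing):
    """Fill in the pose of each rigid part (yaw, pitch, roll per part) from the
    previous and subsequent validated samples by linear blending."""
    w = (t_missing - t_before) / (t_after - t_before)
    return {part: (1.0 - w) * np.asarray(pose_before[part]) + w * np.asarray(pose_after[part])
            for part in pose_before}
```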
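Steps h) and i) turn sounds captured at different venue locations into 3-D positional sound data. The claim does not name a technique; one common approach is multilateration from time differences of arrival (TDOA), sketched here with SciPy's generic least-squares solver.

```python
import numpy as np
from scipy.optimize import least_squares

SPEED_OF_SOUND = 343.0   # m/s, assumed constant across the venue

def locate_sound(mic_positions, tdoas, x0=(0.0, 0.0, 1.0)):
    """Estimate a 3-D sound-source position from time differences of arrival
    measured between microphone 0 and each other microphone."""
    mics = np.asarray(mic_positions, dtype=float)

    def residuals(x):
        dist = np.linalg.norm(mics - x, axis=1)            # source-to-microphone distances
        predicted = (dist[1:] - dist[0]) / SPEED_OF_SOUND  # predicted TDOAs w.r.t. mic 0
        return predicted - np.asarray(tdoas)

    return least_squares(residuals, np.asarray(x0, dtype=float)).x
```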
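Steps j) and k) combine the validated 3-D feature data and the positional sound into an event model that a client can explore from any user-selected viewpoint. The data structures below are only a sketch of what such a model might hold; rendering video and spatialising audio for the chosen viewpoint are outside their scope.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

@dataclass
class ObjectSample:
    """Validated 3-D feature data for one dynamic object at one instant."""
    time: float
    position: Tuple[float, float, float]
    pose: Dict[str, Tuple[float, float, float]]   # per rigid part: (yaw, pitch, roll)
    appearance: bytes                             # texture or descriptor payload

@dataclass
class SoundSample:
    time: float
    position: Tuple[float, float, float]          # 3-D positional sound location
    audio: bytes

@dataclass
class EventModel:
    """Description of the event combining object feature data and positional sound."""
    objects: List[ObjectSample] = field(default_factory=list)
    sounds: List[SoundSample] = field(default_factory=list)

def serve_viewpoint(model: EventModel, viewpoint: Tuple[float, float, float], t: float):
    """Return the samples nearest to time t together with the user-selected viewpoint;
    a real client would render the scene and spatialise the audio from this data."""
    nearest = lambda samples: min(samples, key=lambda s: abs(s.time - t), default=None)
    return {"viewpoint": viewpoint, "object": nearest(model.objects), "sound": nearest(model.sounds)}
```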
