Enhancing broadcast of an event with synthetic scene using a depth map
First Claim
1. A method for enhancing a broadcast of an event, comprising:
generating a synthetic scene based on audio visual data and supplemental data received in the broadcast;
generating a depth map to store depth information for the synthetic scene; and
integrating the synthetic scene into the broadcast using the depth map, wherein generating the depth map comprises:
establishing a virtual camera using camera tracking data of a tracked camera which defines a viewpoint for the synthetic scene;
setting a field of view of the virtual camera to a corresponding field of view of the tracked camera;
positioning a synthetic tracked object in the synthetic scene according to position information of the tracked object;
extracting depth information of the synthetic tracked object to generate the depth map; and
refining the depth map by distorting grid coordinates of the depth map based on characteristics of the tracked camera.
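The depth-map generation steps recited in the claim can be illustrated with a minimal sketch. All names and numeric values here (the dict-based camera, the single-sample depth render, the radial coefficient `k1`) are illustrative assumptions for demonstration, not the patent's actual implementation:

```python
import numpy as np

def make_virtual_camera(tracking):
    """Establish a virtual camera from tracked-camera data and set its
    field of view to match the tracked camera's field of view."""
    return {
        "position": np.asarray(tracking["position"], dtype=float),
        "fov_deg": float(tracking["fov_deg"]),
    }

def render_depth(camera, object_position, width=8, height=8):
    """Position a synthetic tracked object in the scene and extract its
    depth (here simply its distance from the camera) into a depth map."""
    distance = np.linalg.norm(object_position - camera["position"])
    depth = np.full((height, width), np.inf)   # background: infinitely far
    depth[height // 2, width // 2] = distance  # object sample at image center
    return depth

def refine_depth(depth, k1=0.1):
    """Refine the depth map by distorting its grid coordinates with a
    simple radial model; k1 stands in for a lens characteristic of the
    tracked camera."""
    h, w = depth.shape
    ys, xs = np.meshgrid(np.linspace(-1, 1, h), np.linspace(-1, 1, w),
                         indexing="ij")
    r2 = xs**2 + ys**2
    # Distorted sampling grid, mapped back to integer pixel indices.
    xd = np.clip((xs * (1 + k1 * r2) + 1) / 2 * (w - 1), 0, w - 1).astype(int)
    yd = np.clip((ys * (1 + k1 * r2) + 1) / 2 * (h - 1), 0, h - 1).astype(int)
    return depth[yd, xd]

cam = make_virtual_camera({"position": [0.0, 0.0, 0.0], "fov_deg": 60.0})
depth = render_depth(cam, np.array([0.0, 0.0, 10.0]))
refined = refine_depth(depth)
```

The grid-distortion step mirrors how a real tracked camera's lens bends straight lines, so the synthetic depth map lines up with the broadcast video it is composited into.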
Abstract
A broadcast of an event is enhanced with synthetic scenes generated from audio visual and supplemental data received in the broadcast. A synthetic scene is integrated into the broadcast in accordance with a depth map that contains depth information for the synthetic scene. The supplemental data may be sensing data from various sensors placed at the event, position and orientation data of particular objects at the event, or environmental data on conditions at the event. The supplemental data may also be camera tracking data from a camera that is used to generate a virtual camera and viewpoints for the synthetic scene.
The present invention describes systems, clients, servers, methods, and computer-readable media of varying scope. In addition to the aspects of the present invention described in this summary, further aspects of the invention will become apparent by reference to the drawings and by reading the detailed description that follows.
96 Citations
28 Claims
1. A method for enhancing a broadcast of an event, comprising:
generating a synthetic scene based on audio visual data and supplemental data received in the broadcast;
generating a depth map to store depth information for the synthetic scene; and
integrating the synthetic scene into the broadcast using the depth map, wherein generating the depth map comprises:
establishing a virtual camera using camera tracking data of a tracked camera which defines a viewpoint for the synthetic scene;
setting a field of view of the virtual camera to a corresponding field of view of the tracked camera;
positioning a synthetic tracked object in the synthetic scene according to position information of the tracked object;
extracting depth information of the synthetic tracked object to generate the depth map; and
refining the depth map by distorting grid coordinates of the depth map based on characteristics of the tracked camera.
View Dependent Claims (2, 3, 4, 5, 6, 7, 8, 9, 10)
11. A method for enhancing a broadcast of an event, comprising:
collecting, at a broadcast server, audio visual data and supplemental data from the event;
transmitting the audio visual data and the supplemental data to a broadcast client over a network;
generating, at the broadcast client, a synthetic scene based on the audio visual data and the supplemental data;
generating a depth map to store depth information for the synthetic scene; and
integrating the synthetic scene into the broadcast using the depth map, wherein generating the depth map comprises:
establishing a virtual camera using camera data of a tracked camera which defines a viewpoint for the synthetic scene;
setting a field of view of the virtual camera to a corresponding field of view of the tracked camera;
positioning a synthetic tracked object in the synthetic scene according to position information of the tracked object;
extracting depth information of the synthetic tracked object to generate the depth map; and
refining the depth map by distorting grid coordinates of the depth map based on characteristics of the tracked camera.
View Dependent Claims (12, 13, 14)
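The server-to-client flow recited in claim 11 (collect at the broadcast server, transmit over a network, regenerate at the broadcast client) can be sketched with a simple serialized payload. The field names and helper functions are assumptions made for illustration, not the patent's wire format:

```python
import json

def collect_broadcast_data():
    """Broadcast server: collect audio visual data and supplemental data
    (e.g. camera tracking and tracked-object positions) from the event."""
    return {
        "av": {"frame_id": 1024, "audio_pts": 48000},
        "supplemental": {
            "camera_tracking": {"position": [0, 0, 0], "fov_deg": 60.0},
            "tracked_object": {"position": [0.0, 0.0, 10.0]},
        },
    }

def transmit(payload):
    """Serialize the payload for transmission over the network."""
    return json.dumps(payload).encode("utf-8")

def receive(wire_bytes):
    """Broadcast client: recover the A/V and supplemental data, which then
    drive synthetic-scene and depth-map generation on the client side."""
    return json.loads(wire_bytes.decode("utf-8"))

payload = collect_broadcast_data()
received = receive(transmit(payload))
```

The point of the split is that only compact supplemental data crosses the network alongside the A/V stream; the synthetic scene itself is rendered client-side from that data.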
15. A system for enhancing a broadcast of an event, comprising:
a video signal unit coupled to provide audio visual data from the event;
a supplemental data unit coupled to provide supplemental data from the event;
a depth map coupled to provide depth information; and
a processing unit configured to process the audio visual data and the supplemental data to generate a synthetic scene, and further configured to integrate the synthetic scene into the broadcast using the depth map, wherein generating the depth map comprises:
establishing a virtual camera using camera data of a tracked camera which defines a viewpoint for the synthetic scene;
setting a field of view of the virtual camera to a corresponding field of view of the tracked camera;
positioning a synthetic tracked object in the synthetic scene according to position information of the tracked object;
extracting depth information of the synthetic tracked object to generate the depth map; and
refining the depth map by distorting grid coordinates of the depth map based on characteristics of the tracked camera.
View Dependent Claims (16, 17)
18. A system for enhancing a broadcast of an event, comprising:
a broadcast server configured to receive audio visual (A/V) data and supplemental data; and
a broadcast client configured to receive the audio visual data and the supplemental data transmitted from the broadcast server over a network, the broadcast client communicating with the broadcast server over the network, wherein the broadcast client:
generates a synthetic scene based on the audio visual data and the supplemental data;
generates a depth map to store depth information for the synthetic scene, wherein generating the depth map comprises:
establishing a virtual camera using camera data of a tracked camera which defines a viewpoint for the synthetic scene;
setting a field of view of the virtual camera to a corresponding field of view of the tracked camera;
positioning a synthetic tracked object in the synthetic scene according to position information of the tracked object;
extracting depth information of the synthetic tracked object to generate the depth map; and
refining the depth map by distorting grid coordinates of the depth map based on characteristics of the tracked camera; and
integrates the synthetic scene into the broadcast using the depth map.
View Dependent Claims (19, 20, 21)
22. A machine-readable medium having executable code to cause a machine to perform a method for enhancing a broadcast of an event, the method comprising:
generating a synthetic scene based on audio visual data and supplemental data received in the broadcast;
generating a depth map to store depth information for the synthetic scene; and
integrating the synthetic scene into the broadcast using the depth map, wherein generating the depth map comprises:
establishing a virtual camera using camera tracking data of a tracked camera which defines a viewpoint for the synthetic scene;
setting a field of view of the virtual camera to a corresponding field of view of the tracked camera;
positioning a synthetic tracked object in the synthetic scene according to position information of the tracked object;
extracting depth information of the synthetic tracked object to generate the depth map; and
refining the depth map by distorting grid coordinates of the depth map based on characteristics of the tracked camera.
View Dependent Claims (23, 24)
25. A machine-readable medium having executable code to cause a machine to perform a method for enhancing a broadcast of an event, the method comprising:
collecting, at a broadcast server, audio visual data and supplemental data from the event;
transmitting the audio visual data and the supplemental data to a broadcast client over a network;
generating, at the broadcast client, a synthetic scene based on the audio visual data and the supplemental data;
generating a depth map to store depth information for the synthetic scene; and
integrating the synthetic scene into the broadcast using the depth map, wherein generating the depth map comprises:
establishing a virtual camera using camera data of a tracked camera which defines a viewpoint for the synthetic scene;
setting a field of view of the virtual camera to a corresponding field of view of the tracked camera;
positioning a synthetic tracked object in the synthetic scene according to position information of the tracked object;
extracting depth information of the synthetic tracked object to generate the depth map; and
refining the depth map by distorting grid coordinates of the depth map based on characteristics of the tracked camera.
View Dependent Claims (26, 27, 28)
Specification