Method and apparatus for mapping live video on to three dimensional objects
Abstract
A method and apparatus for providing live video on a three-dimensional object begins by receiving a video stream into a capture buffer. The process then continues by mapping, directly from the capture buffer, the video stream onto the three-dimensional object. Having mapped the live video onto the three-dimensional object it is rendered into a frame buffer and subsequently displayed on a display device.
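The pipeline the abstract describes — receive a frame into a capture buffer, register that buffer as a texture, map it onto an object, render into a back buffer, and page flip — can be illustrated with a toy Python simulation. All sizes, helper names, and the dictionary-of-UVs "object" below are hypothetical; the patent itself contemplates graphics hardware, not this code.

```python
# Toy simulation of claim 1, steps (a)-(d). Buffer sizes are arbitrary.
VIDEO_SIZE = 4     # capture buffer holds one 4x4 decoded frame
SCREEN_SIZE = 8    # front and back frame buffers are 8x8

def receive_frame(capture_buffer, frame):
    """(a) Receive a live video frame into the capture buffer."""
    for y in range(VIDEO_SIZE):
        capture_buffer[y][:] = frame[y]

def register_texture(capture_buffer):
    """(b) Register the capture buffer itself as the texture map.
    No copy is made: the renderer reads texels straight from it."""
    return capture_buffer

def render_object(texture, uv_per_pixel, back_buffer):
    """(c)+(d) Map the texture onto the object's screen pixels using
    the object's texture coordinates, writing into the back buffer."""
    n = len(texture) - 1
    for (y, x), (u, v) in uv_per_pixel.items():
        back_buffer[y][x] = texture[round(v * n)][round(u * n)]

def page_flip(front, back):
    """Swap buffers once the object has been written into the back buffer."""
    return back, front

capture = [[0] * VIDEO_SIZE for _ in range(VIDEO_SIZE)]
front = [[0] * SCREEN_SIZE for _ in range(SCREEN_SIZE)]
back = [[0] * SCREEN_SIZE for _ in range(SCREEN_SIZE)]

frame = [[y * VIDEO_SIZE + x for x in range(VIDEO_SIZE)]
         for y in range(VIDEO_SIZE)]
receive_frame(capture, frame)
texture = register_texture(capture)

# A trivial "object": two screen pixels with (u, v) texture coordinates.
render_object(texture, {(0, 0): (0.0, 0.0), (7, 7): (1.0, 1.0)}, back)
front, back = page_flip(front, back)
```

The point of step (b) is visible in `register_texture`: it returns the capture buffer itself, so each new video frame written into it is picked up by the next render without an intermediate copy.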
22 Claims
1. A method for mapping live video on to a three dimensional object, the method comprises the steps of:
a) receiving a live video stream into a capture buffer;
b) registering the capture buffer as a texture map;
c) mapping, directly from the capture buffer, the live video stream on to the three dimensional object based on texture parameters of the three dimensional object; and
d) rendering the three dimensional object into a frame buffer wherein the rendering includes:
writing the three dimensional object into a back buffer; and
page flipping the back buffer to a front buffer after the three dimensional object has been written into the back buffer.
2. The method of claim 1 further comprises:
receiving the live video by a video decoder;
decoding, by the video decoder, the live video to produce video graphics data; and
providing, by the video decoder, the video graphics data to the capture buffer.
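The decoder front end these dependent claims describe can be mocked up as follows. The run-length "codec" is purely illustrative (real live video would arrive as, say, MPEG or NTSC fields); the claims only require that the decoder produce video graphics data and provide it to the capture buffer.

```python
def decode_frame(encoded_video):
    """Toy stand-in for the video decoder: expands (value, run_length)
    pairs into raw video graphics data. The encoding scheme is
    illustrative only."""
    graphics_data = []
    for value, run_length in encoded_video:
        graphics_data.extend([value] * run_length)
    return graphics_data

def provide_to_capture_buffer(capture_buffer, graphics_data):
    """The decoder writes its output into the capture buffer in place."""
    capture_buffer[:] = graphics_data

capture_buffer = [0] * 6
provide_to_capture_buffer(capture_buffer, decode_frame([(7, 2), (3, 4)]))
```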
4. The method of claim 1 further comprises:
rendering the live video or other live video into a background section of the back buffer.
5. The method of claim 4 further comprises:
receiving the other live video from a second video; and
storing the other live video in a second capture buffer.
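Claims 4 and 5 add a second live feed: one capture buffer textures the object while another fills the background section of the back buffer. A minimal sketch of that compositing order (buffer contents and helper names are made up):

```python
def render_background(back_buffer, background_frame):
    """Fill the back buffer's background section from the second
    capture buffer before the textured object is drawn."""
    for y, row in enumerate(background_frame):
        back_buffer[y] = list(row)

def draw_object_texel(back_buffer, y, x, texel):
    """The object, textured from the first capture buffer, is then
    drawn over the background."""
    back_buffer[y][x] = texel

capture_a = [[1, 1], [1, 1]]   # live video mapped onto the object
capture_b = [[9, 9], [9, 9]]   # other live video, in a second capture buffer

back = [[0, 0], [0, 0]]
render_background(back, capture_b)
draw_object_texel(back, 0, 0, capture_a[0][0])
```

Drawing the background first and the object second is what lets both feeds appear in the same flipped frame.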
6. The method of claim 1 further comprises generating a plurality of perspective texture maps of the video stream.
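One common reading of claim 6's "plurality of perspective texture maps" is a chain of pre-filtered copies of the video frame at decreasing resolutions (mip-map style), so surfaces seen at a distance or a steep angle sample an appropriately filtered map. A sketch under that assumption — the claim itself does not spell out the construction:

```python
def downsample(frame):
    """Average 2x2 blocks to halve each dimension (integer math)."""
    h, w = len(frame), len(frame[0])
    return [[(frame[2*y][2*x] + frame[2*y][2*x + 1] +
              frame[2*y + 1][2*x] + frame[2*y + 1][2*x + 1]) // 4
             for x in range(w // 2)]
            for y in range(h // 2)]

def perspective_maps(frame, levels):
    """Generate `levels` texture maps of the video frame, each half
    the resolution of the previous one."""
    maps = [frame]
    for _ in range(levels - 1):
        maps.append(downsample(maps[-1]))
    return maps

frame = [[8, 8, 0, 0],
         [8, 8, 0, 0],
         [0, 0, 8, 8],
         [0, 0, 8, 8]]
maps = perspective_maps(frame, 3)   # 4x4, 2x2, 1x1
```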
7. The method of claim 1, wherein step (c) further comprises:
mapping the video stream from the capture buffer on to the three dimensional object based on at least one of:
environmental mapping, bump mapping, and terrain mapping.
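Of the three techniques claim 7 lists, environmental mapping is the simplest to sketch: instead of decal-style coordinates, the (u, v) address into the capture buffer is derived from each surface normal, so the live video behaves like a reflected surround. A toy version with 2-D normals and nearest-texel sampling — the function names are hypothetical and bump and terrain mapping are not shown:

```python
def env_map_uv(nx, ny):
    """Derive (u, v) in [0, 1] from a unit normal's x/y components --
    the essence of spherical environment mapping."""
    return (nx + 1.0) / 2.0, (ny + 1.0) / 2.0

def sample(texture, u, v):
    """Nearest-texel lookup into the capture buffer."""
    h, w = len(texture), len(texture[0])
    return texture[round(v * (h - 1))][round(u * (w - 1))]

env = [[10, 20],
       [30, 40]]                                # capture buffer as environment
texel = sample(env, *env_map_uv(-1.0, -1.0))    # normal facing (-x, -y)
```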
8. A method for mapping live video on to a three dimensional object, the method comprises the steps of:
a) receiving a video stream of the live video into a capture buffer;
b) registering the capture buffer as a texture map;
c) accessing the capture buffer based on texture coordinates of the three dimensional object;
d) rendering the three dimensional object with the live video thereon, wherein the live video was mapped directly from the capture buffer on to the three dimensional object;
e) writing the three dimensional object into a back buffer; and
f) page flipping the back buffer to a front buffer after the three dimensional object has been written into the back buffer.
9. The method of claim 8 further comprises:
receiving the live video by a video decoder;
decoding, by the video decoder, the live video to produce video graphics data; and
providing, by the video decoder, the video graphics data to the capture buffer.
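Step (c) of claim 8 — accessing the capture buffer based on the object's texture coordinates — amounts to a texture fetch. A sketch using bilinear filtering, which is one plausible choice; the claim does not prescribe any particular filter, and the function name is hypothetical:

```python
def access_capture_buffer(buf, u, v):
    """Fetch the texel at fractional texture coordinates (u, v) in
    [0, 1], bilinearly blending the four nearest samples."""
    h, w = len(buf), len(buf[0])
    x, y = u * (w - 1), v * (h - 1)
    x0, y0 = int(x), int(y)
    x1, y1 = min(x0 + 1, w - 1), min(y0 + 1, h - 1)
    fx, fy = x - x0, y - y0
    top = buf[y0][x0] * (1 - fx) + buf[y0][x1] * fx
    bottom = buf[y1][x0] * (1 - fx) + buf[y1][x1] * fx
    return top * (1 - fy) + bottom * fy

capture = [[0.0, 100.0],
           [100.0, 200.0]]
center = access_capture_buffer(capture, 0.5, 0.5)
```

Because the fetch indexes the capture buffer directly, each rendered pixel sees whatever video frame is currently in that buffer, which is what makes the mapping "live".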
10. The method of claim 8 further comprises:
rendering the live video or other live video into a background section of the back buffer.
11. The method of claim 10 further comprises:
receiving the other live video from a second video; and
storing the other live video in a second capture buffer.
12. The method of claim 8, wherein step (d) further comprises:
writing the three dimensional object into a back buffer; and
page flipping the back buffer to a front buffer after the three dimensional object has been written into the back buffer.
13. A video processing system comprises:
a processing module; and
memory operably coupled to the processing module, wherein the memory stores operational instructions that cause the processing module to:
(a) receive a live video stream into a capture buffer;
(b) register the capture buffer as a texture map;
(c) map, directly from the capture buffer, the live video stream on to a three dimensional object based on texture parameters of the three dimensional object;
(d) render the three dimensional object into a frame buffer;
(e) write the three dimensional object into a back buffer; and
(f) page flip the back buffer to a front buffer after the three dimensional object has been written into the back buffer.
14. The video processing system of claim 13, wherein the memory further comprises operational instructions that cause the processing module to:
receive the live video by a video decoder;
decode, by the video decoder, the live video to produce video graphics data; and
provide, by the video decoder, the video graphics data to the capture buffer.
15. The video processing system of claim 13, wherein the memory further comprises operational instructions that cause the processing module to:
render the live video or other live video into a background section of the back buffer.
16. The video processing system of claim 15, wherein the memory further comprises operational instructions that cause the processing module to:
receive the other live video from a second video; and
store the other live video in a second capture buffer.
17. The video processing system of claim 13, wherein the memory further comprises operational instructions that cause the processing module to:
map the video stream from the capture buffer on to the three dimensional object based on at least one of:
environmental mapping, bump mapping, and terrain mapping.
18. The video processing system of claim 16, wherein the memory further comprises operational instructions that cause the processing module to:
map the video stream from the capture buffer on to the three dimensional object based on at least one of:
environmental mapping, bump mapping, and terrain mapping.
19. A video processing system comprises:
a processing module; and
memory operably coupled to the processing module, wherein the memory stores operational instructions that cause the processing module to:
(a) receive a video stream of the live video into a capture buffer;
(b) register the capture buffer as a texture map;
(c) access the capture buffer based on texture coordinates of a three dimensional object;
(d) render the three dimensional object with the live video thereon, wherein the live video was mapped directly from the capture buffer on to the three dimensional object;
(e) write the three dimensional object into a back buffer; and
(f) page flip the back buffer to a front buffer after the three dimensional object has been written into the back buffer.
20. The video processing system of claim 19, wherein the memory further comprises operational instructions that cause the processing module to:
receive the live video by a video decoder;
decode, by the video decoder, the live video to produce video graphics data; and
provide, by the video decoder, the video graphics data to the capture buffer.
21. The video processing system of claim 19, wherein the memory further comprises operational instructions that cause the processing module to:
render the live video or other live video into a background section of the back buffer.
22. The video processing system of claim 21, wherein the memory further comprises operational instructions that cause the processing module to:
receive the other live video from a second video; and
store the other live video in a second capture buffer.
Specification