Method and system for enhancing virtual stage experience
Abstract
The present invention is a system and method for increasing the value of audio-visual entertainment systems, such as karaoke, by simulating a virtual stage environment and enhancing the user's facial image in a continuous video input, automatically, dynamically, and in real-time. The present invention is named Enhanced Virtual Karaoke (EVIKA). The EVIKA system consists of two major modules: the facial image enhancement module and the virtual stage simulation module. The facial image enhancement module augments the user's image in real-time using the embedded Facial Enhancement Technology (F.E.T.). The virtual stage simulation module constructs a virtual stage in the display by augmenting the environmental image. EVIKA places the user's enhanced body image into a dynamic background, which changes according to the user's arbitrary motion. Throughout the process, the user can select and interact with the virtual objects on the screen. The EVIKA system's ability to run in real-time, even with complex backgrounds, gives the user a live virtual entertainment experience that was not possible before.
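The two-module pipeline described in the abstract can be sketched as a single per-frame cycle. This is a minimal structural sketch only; the function names (`enhance_face`, `simulate_stage`, `process_frame`) and the `selections` dictionary are illustrative assumptions, not the patent's actual implementation:

```python
def enhance_face(frame, virtual_objects):
    # Stand-in for the Facial Enhancement Technology (F.E.T.) module:
    # record which virtual objects were superimposed on the user's
    # facial features in this frame.
    return {"frame": frame, "overlays": list(virtual_objects)}

def simulate_stage(theme, song):
    # Stand-in for the virtual stage simulation module: compose a
    # masked stage description from the user's theme and music selections.
    return {"theme": theme, "song": song, "masked": True}

def process_frame(frame, selections):
    """One real-time EVIKA cycle: enhance the facial image, simulate
    the masked virtual stage, and pair the stage with the enhanced
    user image for compositing on the display."""
    enhanced = enhance_face(frame, selections["virtual_objects"])
    stage = simulate_stage(selections["stage"], selections["music"])
    return {"stage": stage, "user": enhanced}
```

In a real system each stand-in would run per captured video frame, which is why the claims emphasize automatic, dynamic, real-time processing.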
23 Claims
1. A method for augmenting visual images of audio-visual entertainment systems, comprising the following steps of:
(a) enhancing facial images of a user or a plurality of users in a video input by superimposing virtual object images to said facial images,
(b) simulating a virtual stage environment image, further comprising steps of processing virtual object image selection, processing music selection, and composing virtual stage images,
(c) setting up masked regions on the simulated virtual stage environment image, and
(d) positioning the masked virtual stage environment image in front of the body image of said user or said plurality of users,
whereby the step for enhancing facial images is processed at the level of local facial features on face images of said user or said plurality of users,
whereby examples of the facial features can be eye, nose, and mouth of said user or said plurality of users, and
whereby the body image of said user or said plurality of users is shown through the transparency channel region of the masked virtual stage environment image.
- View Dependent Claims (2, 3, 8, 9, 10, 11)
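Steps (c) and (d) of claim 1 amount to standard alpha compositing: the masked virtual stage sits in front of the user, and the body image shows through wherever the stage's transparency channel is fully transparent. A minimal NumPy sketch, where the function name and the RGBA channel layout are assumptions rather than the patent's implementation:

```python
import numpy as np

def composite_stage_over_body(body_rgb: np.ndarray,
                              stage_rgba: np.ndarray) -> np.ndarray:
    """Position a masked virtual-stage image in front of the body image.

    Wherever the stage's alpha (transparency) channel is 0, the user's
    body image shows through the masked region, as in claim 1 (c)-(d).
    """
    alpha = stage_rgba[..., 3:4].astype(np.float32) / 255.0  # H x W x 1
    stage = stage_rgba[..., :3].astype(np.float32)
    body = body_rgb.astype(np.float32)
    out = alpha * stage + (1.0 - alpha) * body
    return out.astype(np.uint8)
```

The per-pixel blend `alpha * stage + (1 - alpha) * body` is the classic source-over operator; the patent's "masked regions" correspond to the zero-alpha areas of the stage image.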
4. An apparatus for augmenting visual images of an audio-visual entertainment system comprising:
(a) one or a plurality of means for capturing facial images from video input image sequences of a user or a plurality of users,
(b) means for displaying output,
(c) means for enhancing said facial images of said user or said plurality of users from said video input image sequences by superimposing virtual object images to said facial images,
(d) means for processing dynamically changing virtual background images according to body movements of said user or said plurality of users,
(e) means for simulating a virtual stage environment image by composing the enhanced facial and body image of said user or said plurality of users, virtual stage images, and virtual object images,
(f) means for handling interaction between said user or said plurality of users and said audio-visual entertainment system,
(g) a sound system, and
(h) a microphone,
whereby the means for enhancing facial images processes the facial image enhancement at the level of local facial features on said facial images of said user or said plurality of users, and
whereby examples of the facial features can be eyes, nose, and mouth of said user or said plurality of users.
- View Dependent Claims (5, 6, 7, 12, 13, 14, 15)
16. A method for augmenting images on a means for displaying output of an audio-visual entertainment system, comprising the following steps of:
(a) capturing a plurality of images for a user or a plurality of users with a single or a plurality of means for capturing images,
(b) processing a single image or a plurality of images from the captured plurality of images in order to obtain facial features and body movement information of said user or said plurality of users,
(c) processing selection by said user or said plurality of users for virtual object images on a means for displaying output,
(d) augmenting facial feature images of said user or said plurality of users with the selected virtual object images,
(e) simulating a virtual stage environment image, and
(f) displaying the augmented facial images with said facial feature images of said user or said plurality of users and the simulated virtual stage environment image on said means for displaying output,
whereby the step for augmenting facial feature images is processed at the level of local facial features on face images of said user or said plurality of users,
whereby examples of the local facial features can be eyes, nose, and mouth of said user or said plurality of users, and
whereby the step for augmenting facial feature images of said user or said plurality of users with the selected virtual object images is processed automatically, dynamically, and in real-time.
- View Dependent Claims (17, 18, 19, 20, 21, 22, 23)
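Step (d) of claim 16, superimposing a selected virtual object image onto a located facial feature, can also be sketched with plain NumPy alpha blending. The feature coordinate is assumed to come from an upstream detector (step (b)); the function name and the simplifying assumption that the sprite fits fully inside the frame are illustrative:

```python
import numpy as np

def overlay_on_feature(frame_rgb, object_rgba, feature_xy):
    """Alpha-blend a virtual-object sprite (e.g. virtual sunglasses)
    centered on a detected facial-feature coordinate (x, y).
    Assumes the sprite lies entirely inside the frame."""
    h, w = object_rgba.shape[:2]
    x, y = feature_xy
    top, left = y - h // 2, x - w // 2
    roi = frame_rgb[top:top + h, left:left + w].astype(np.float32)
    alpha = object_rgba[..., 3:4].astype(np.float32) / 255.0
    blended = alpha * object_rgba[..., :3] + (1.0 - alpha) * roi
    frame_rgb[top:top + h, left:left + w] = blended.astype(np.uint8)
    return frame_rgb
```

Because the blend touches only the region around the feature, the same routine can run once per detected feature (eyes, nose, mouth) on every frame, matching the claim's requirement that augmentation happen automatically, dynamically, and in real-time.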
Specification