Modifying Video Call Data
Abstract
A method comprising: displaying a UI for display of received video; detecting selection of a button displayed in the UI whilst a received video frame is displayed; in response, disabling the display of video frames received after the received video frame; determining a position of a face of a user in the received frame; receiving a plurality of drawing inputs whilst the button is selected, each drawing input defining image data to be applied at a position on said face; modifying the video frame in accordance with the drawing inputs by applying the image data to each of the positions; and detecting a condition and in response, for each video frame received after the detection: determining a position of the face in the frame to determine the location of the positions in the frame, applying the image data to each of the positions, and displaying the modified video frame in the UI.
20 Claims
1. A method implemented at a user terminal during a video call conducted with at least one further user terminal over a communications network, the method comprising:
displaying a user interface on a display of the user terminal for display of received video frames;
detecting selection of a selectable button displayed in said user interface by a user using an input device of said user terminal whilst a received video frame is displayed in the user interface;
in response to said detection, disabling the display of video frames received after said received video frame;
determining a position of a face of a user in the received video frame by executing a face tracker algorithm on a processor of said user terminal;
receiving a plurality of drawing inputs whilst the selectable button is in a selected state, each drawing input defining image data to be applied at a facial position on said face;
modifying the displayed video frame in accordance with the plurality of received drawing inputs by applying the image data to each of said facial positions on the face; and
detecting a condition and in response to detecting said condition, for each video frame received after detecting said condition, the method further comprises:
determining a position of the face in the video frame by executing the face tracker algorithm to determine the location of the facial positions in the video frame, modifying the video frame by applying the image data to each of said positions on the face, and displaying the modified video frame in the user interface.
(Dependent claims: 2-18)
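The core of claim 1 is that drawing inputs are stored relative to the tracked face, so they can be reapplied to later frames after the face has moved. A minimal sketch of that idea follows; the face tracker is reduced to a hypothetical callable returning the face origin, and a frame is modeled as a dict of pixel positions, so no specific tracking library is assumed.

```python
def to_face_relative(drawing_input, face_origin):
    """Convert a screen-space drawing input to a face-relative offset."""
    x, y = drawing_input
    fx, fy = face_origin
    return (x - fx, y - fy)

def apply_to_frame(frame, face_origin, offsets, image_data):
    """Reapply each stored drawing input at its face-relative position,
    anchored to the face origin found in this frame."""
    fx, fy = face_origin
    for (dx, dy), pixel in zip(offsets, image_data):
        frame[(fx + dx, fy + dy)] = pixel
    return frame
```

Storing offsets rather than absolute positions is what lets the same image data follow the face across frames: only the face origin is recomputed per frame.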
19. A user terminal comprising:
a display;
an input device;
a network interface configured to transmit and receive video data between the user terminal and a communication network during a video call between the user terminal and at least one further user terminal; and
a processor configured to run an application operable during said video call to:
display a user interface on the display for display of received video frames;
detect selection of a selectable button displayed in said user interface by a user using said input device whilst a received video frame is displayed in the user interface;
in response to said detection, disable the display of video frames received after said received video frame;
determine a position of a face of a user in the received video frame by executing a face tracker algorithm;
receive a plurality of drawing inputs whilst the selectable button is in a selected state, each drawing input defining image data to be applied at a facial position on said face;
modify the displayed video frame in accordance with the plurality of received drawing inputs by applying the image data to each of said facial positions on the face; and
detect a condition and in response to said condition detection, for each video frame received after said condition detection: determine a position of the face in the video frame by executing the face tracker algorithm to determine the location of the facial positions in the video frame, modify the video frame by applying the image data to each of said positions on the face, and display the modified video frame in the user interface.
20. A computer program product, the computer program product being embodied on a computer-readable medium and configured so as when executed on a processor of a user terminal during a video call between the user terminal and at least one further user terminal, to:
display a user interface on a display of the user terminal for display of received video frames;
detect selection of a selectable button displayed in said user interface by a user using an input device of said user terminal whilst a received video frame is displayed in the user interface;
in response to said detection, disable the display of video frames received after said received video frame;
determine a position of a face of a user in the received video frame by executing a face tracker algorithm on a processor of said user terminal;
receive a plurality of drawing inputs whilst the selectable button is in a selected state, each drawing input defining image data to be applied at a facial position on said face;
modify the displayed video frame in accordance with the plurality of received drawing inputs by applying the image data to each of said facial positions on the face; and
detect a condition and in response to said condition detection, for each video frame received after said condition detection: determine a position of the face in the video frame by executing the face tracker algorithm to determine the location of the facial positions in the video frame, modify the video frame by applying the image data to each of said positions on the face, and display the modified video frame in the user interface.
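Claims 19 and 20 recite the same flow as a state machine on the receiving terminal: frames display live until the button is selected, display of newer frames is then disabled while drawing inputs are collected, and once the resume condition is detected every subsequent frame is face-tracked, modified, and displayed. A sketch of that frame-handling logic, under the same assumptions (hypothetical face tracker returning a face origin, frames as dicts of pixel positions):

```python
class VideoCallRenderer:
    """Sketch of the claimed states: live display, paused while drawing,
    and resumed with drawings reapplied to each incoming frame."""

    def __init__(self, face_tracker):
        self.face_tracker = face_tracker  # hypothetical: frame -> (x, y) face origin
        self.paused = False
        self.offsets = []      # face-relative drawing positions
        self.image_data = []   # image data per drawing input
        self.displayed = []    # frames actually shown in the UI

    def on_button_selected(self, current_frame):
        """Button selected: freeze display, locate the face in the shown frame."""
        self.paused = True
        self.face_origin = self.face_tracker(current_frame)

    def on_drawing_input(self, position, pixel):
        """Store each drawing input as an offset from the face origin."""
        fx, fy = self.face_origin
        self.offsets.append((position[0] - fx, position[1] - fy))
        self.image_data.append(pixel)

    def on_frame(self, frame):
        """Handle a received frame according to the current state."""
        if self.paused:
            return  # display of frames received after the frozen frame is disabled
        if self.offsets:
            # Condition detected earlier: re-locate the face and reapply drawings.
            fx, fy = self.face_tracker(frame)
            for (dx, dy), pixel in zip(self.offsets, self.image_data):
                frame[(fx + dx, fy + dy)] = pixel
        self.displayed.append(frame)

    def on_condition(self):
        """The detected condition (e.g. button deselection) resumes display."""
        self.paused = False
```

The single `paused` flag plus the stored offsets are enough to realize all three claimed behaviors; a real client would additionally composite the drawings onto GPU textures rather than mutate frames pixel by pixel.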
Specification