Method for triggering events in a video
Abstract
A computer-implemented method of triggering events in a video, the method comprising: providing a list of objects with their states and corresponding events in the video, wherein each object from the list has at least one state triggering at least one event of the corresponding events from the list in the video; detecting at least one object from the list that at least partially and at least occasionally is presented in frames of the video; tracking the at least one object and its state; and triggering at least one event of the corresponding events from the list in the video when the state of the at least one object matches one of its states from the list.
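The object/state/event list the abstract describes can be sketched as a small registry in which each tracked object maps its triggering states to events. This is an illustrative sketch, not the patented implementation; the object name ("mouth"), states, and event name ("overlay_sparkles") are hypothetical examples.

```python
# Minimal sketch of a list of objects, their states, and corresponding events.
from dataclasses import dataclass
from typing import Callable

@dataclass
class TrackedObject:
    name: str
    # Map each triggering state to the events it fires.
    state_events: dict[str, list[Callable[[], str]]]
    state: str = "neutral"

def update_state(obj: TrackedObject, new_state: str) -> list[str]:
    """Set the object's state; fire its events if the state is in the list."""
    obj.state = new_state
    if new_state in obj.state_events:
        return [event() for event in obj.state_events[new_state]]
    return []

mouth = TrackedObject(
    name="mouth",
    state_events={"open": [lambda: "overlay_sparkles"]},
)
assert update_state(mouth, "closed") == []                   # no matching state
assert update_state(mouth, "open") == ["overlay_sparkles"]   # event fires
```

The lookup is deliberately one-directional: tracking only reports states, and events fire solely when a reported state matches an entry in the provided list, mirroring the "matches one of its states from the list" condition.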
18 Claims
1. A computer-implemented method for triggering events in a video, the method being performed in connection with a computerized system comprising a processing unit and a memory, the method comprising:
providing a list comprising a set of objects, a set of object states associated with the set of objects, and a set of events, wherein the set of objects are associated with one or more images of a user and each object from the set of objects has at least one object state triggering at least one event from the set of events in the video and each object state is associated with at least one point of an object of the set of objects;
detecting a face of the user within frames of the video, the face including a set of landmark points corresponding to facial features;
aligning a mesh with the face of the user, the mesh containing a set of feature reference points, each feature reference point corresponding to a landmark point of the set of landmark points;
detecting at least one object from the list that at least partially and at least occasionally is presented in frames of the video, the at least one object including at least a portion of the set of feature reference points, the portion of the set of feature reference points aligned with one or more points of the at least one object;
tracking the at least one object across two or more frames, the at least one object having a first object state;
identifying a change in the first object state of the at least one object to a second object state, the change from the first object state to the second object state corresponding to movement of a first portion of landmark points relative to one or more feature reference points of the mesh, indicating movement of the first portion of landmark points on the face of the user, while a second portion of landmark points remain aligned with corresponding feature reference points of the mesh;
determining that the second object state of the at least one object matches a state from the set of object states; and
in response to determining the match, triggering at least one event of the set of events in the video, the at least one event modifying the one or more images of the user by:
selecting a visualization from a plurality of visualizations associated with the at least one event; and
replacing at least a portion of the face, associated with the feature reference point within the frames of the video, with the selected visualization to modify the video, the feature reference point corresponding to the at least one point moved by the change from the first object state to the second object state. - View Dependent Claims (2, 3, 4, 5, 6, 7, 8, 9)
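The state-change test in claim 1 can be illustrated as follows: a second object state is recognized when one subset of landmark points has moved away from its mesh reference points while another subset stays aligned. This is a hedged sketch under assumed conventions, not the patented algorithm; the point indices, coordinates, and the 2-pixel tolerance are illustrative assumptions.

```python
# Sketch: detect a state change from landmark movement relative to a mesh.
def state_changed(landmarks, references, moving_idx, stable_idx, tol=2.0):
    """Return True if every 'moving' landmark shifted beyond tol from its
    reference point while every 'stable' landmark stayed within tol."""
    def dist(a, b):
        return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5
    moved = all(dist(landmarks[i], references[i]) > tol for i in moving_idx)
    stable = all(dist(landmarks[i], references[i]) <= tol for i in stable_idx)
    return moved and stable

refs = {0: (10.0, 10.0), 1: (20.0, 10.0), 2: (15.0, 25.0)}
# Point 2 (e.g. a lower-lip point) drops; points 0 and 1 hold still.
cur = {0: (10.5, 10.0), 1: (20.0, 10.5), 2: (15.0, 33.0)}
assert state_changed(cur, refs, moving_idx=[2], stable_idx=[0, 1])
```

Requiring the second subset to remain aligned is what distinguishes a genuine object-state change (e.g. a mouth opening) from whole-face motion, in which every landmark would drift from its reference point together.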
10. A computer-implemented method for triggering events in a video, the method being performed in connection with a computerized system comprising a processing unit and a memory, the method comprising:
providing a list comprising a set of objects, a set of object states associated with the set of objects, and a set of events, wherein the set of objects are associated with one or more images of a user and each object from the set of objects has at least one object state triggering at least one event from the set of events in the video and each object state is associated with at least one point of an object of the set of objects;
detecting a face of the user within frames of the video, the face including a set of landmark points corresponding to facial features;
aligning a mesh with the face of the user, the mesh containing a set of feature reference points, each feature reference point corresponding to a landmark point of the set of landmark points;
detecting at least one object from the list that at least partially and at least occasionally is presented in frames of the video, the at least one object including at least a portion of the set of feature reference points, the portion of the set of feature reference points aligned with one or more points of the at least one object;
tracking the at least one object having a first object state across two or more frames;
identifying a change in the first object state of the at least one object to a second object state, the change from the first object state to the second object state corresponding to movement of a first portion of landmark points relative to one or more feature reference points of the mesh, indicating movement of the first portion of landmark points on the face of the user, while a second portion of landmark points remain aligned with corresponding feature reference points of the mesh;
determining that the second object state of the at least one object matches a state from the set of object states; and
in response to determining the match, triggering one random event from the list in the video, the random event modifying the one or more images of the user and the mesh aligned with the face of the user by:
selecting a visualization from a plurality of visualizations associated with the random event; and
replacing at least a portion of the face, associated with at least one feature reference point of the set of feature reference points within the frames of the video, with the selected visualization to modify the video.
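Claim 10 differs from claim 1 chiefly in its triggering step: on a state match, one event is chosen at random from the list. A minimal sketch of that selection, with hypothetical event names and a fixed seed for reproducibility:

```python
# Sketch: pick exactly one random event from the candidate list on a match.
import random

def trigger_random_event(events, rng=random.Random(7)):
    """Return a single event chosen uniformly at random from the list."""
    return rng.choice(events)

events = ["big_eyes_mask", "fire_breath", "rainbow_tears"]
assert trigger_random_event(events) in events  # always one event from the list
```

Seeding the generator is an illustration choice only; an interactive implementation would typically leave the generator unseeded so repeated matches yield varied effects.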
11. A computer-implemented method of triggering events in a video whose frames comprise images of a user and one or more objects associated with the images of the user, the method being performed in connection with a computerized system comprising a processing unit and a memory, the method comprising:
providing a list comprising a set of pieces of information and a set of events, wherein each piece of information from the set of pieces of information triggers at least one event from the set of events in the video, and wherein each event from the set of events modifies an object of the one or more objects associated with the images of the user;
detecting a face of the user within frames of the video, the face including a set of landmark points corresponding to facial features;
aligning a mesh with the face of the user, the mesh containing a set of feature reference points, each feature reference point corresponding to a landmark point of the set of landmark points;
identifying a change in a first object state of at least one object of the one or more objects to a second object state, the change from the first object state to the second object state corresponding to movement of a first portion of landmark points relative to one or more feature reference points of the mesh, indicating movement of the first portion of landmark points on the face of the user, while a second portion of landmark points remain aligned with corresponding feature reference points of the mesh;
determining that information relating to the user matches at least one of the pieces of information from the set of pieces of information; and
in response to identifying the information, triggering at least one event from the set of events in the video, the at least one event modifying at least one object of the one or more objects associated with the images of the user by:
selecting a visualization from a plurality of visualizations associated with the at least one event; and
replacing at least a portion of the face, associated with at least one feature reference point of the set of feature reference points within the frames of the video, with the selected visualization to modify the video.
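Claim 11 keys triggering to pieces of information about the user rather than to object states. A hedged sketch of that lookup, in which the information keys ("smiling", "head_tilt_left") and event names are hypothetical stand-ins for whatever the system derives from the tracked face:

```python
# Sketch: map pieces of information about the user to the events they trigger.
info_events = {
    "smiling": ["add_halo"],
    "head_tilt_left": ["slide_background"],
}

def match_info(info: str) -> list[str]:
    """Return the events triggered by this piece of information, if any."""
    return info_events.get(info, [])

assert match_info("smiling") == ["add_halo"]
assert match_info("frowning") == []  # no matching piece of information
```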
12. A mobile computerized system comprising a central processing unit and a memory, the memory storing instructions for:
providing a list comprising a set of objects, a set of object states associated with the set of objects, and a set of events, wherein the set of objects are associated with one or more images of a user and each object from the set of objects has at least one object state triggering at least one event from the set of events in the video and each object state is associated with at least one point of an object of the set of objects;
detecting a face of the user within frames of the video, the face including a set of landmark points corresponding to facial features;
aligning a mesh with the face of the user, the mesh containing a set of feature reference points, each feature reference point corresponding to a landmark point of the set of landmark points;
detecting at least one object from the list that at least partially and at least occasionally is presented in frames of the video, the at least one object including at least a portion of the set of feature reference points, the portion of the set of feature reference points aligned with one or more points of the at least one object;
tracking the at least one object having a first object state across two or more frames;
identifying a change in the first object state of the at least one object to a second object state, the change from the first object state to the second object state corresponding to movement of a first portion of landmark points relative to one or more feature reference points of the mesh, indicating movement of the first portion of landmark points on the face of the user, while a second portion of landmark points remain aligned with corresponding feature reference points of the mesh;
determining that the second object state of the at least one object matches a state from the set of object states; and
in response to determining the match, triggering at least one event of the set of events in the video, the at least one event modifying the one or more images of the user by:
selecting a visualization from a plurality of visualizations associated with the at least one event; and
replacing at least a portion of the face, associated with the feature reference point within the frames of the video, with the selected visualization to modify the video, the feature reference point corresponding to the at least one point moved by the change from the first object state to the second object state. - View Dependent Claims (13, 14, 15, 16, 17, 18)
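The replacement step shared by the independent claims — copying a selected visualization over the frame region around a feature reference point — can be illustrated with plain nested lists standing in for pixel arrays. This is a sketch under assumed conventions, not the patented implementation; the frame size, visualization contents, and coordinates are illustrative.

```python
# Sketch: overwrite a region of the frame with the selected visualization.
def replace_region(frame, visualization, top, left):
    """Copy the visualization's pixels into the frame at (top, left)."""
    for r, row in enumerate(visualization):
        for c, pixel in enumerate(row):
            frame[top + r][left + c] = pixel
    return frame

frame = [[0] * 4 for _ in range(4)]       # 4x4 "frame" of background pixels
viz = [[9, 9], [9, 9]]                    # 2x2 "visualization"
out = replace_region(frame, viz, top=1, left=1)
assert out[1][1] == 9 and out[2][2] == 9  # region replaced
assert out[0][0] == 0                     # rest of frame untouched
```

In a real pipeline the (top, left) anchor would be derived per frame from the tracked feature reference point, so the visualization follows the moved facial feature across the video.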
Specification