Video processing system providing correlation between objects in different georeferenced video feeds and related methods
1 Assignment
0 Petitions
Abstract
A video processing system which may include a first video input configured to receive a first georeferenced video feed from a first video source, and a second video input configured to receive a second georeferenced video feed from a second video source, where the second georeferenced video feed overlaps the first georeferenced video feed. The system may further include a video processor coupled to the first and second video inputs. The video processor may include an annotation module configured to generate an annotation for an object in the first georeferenced video feed, and a geospatial correlation module configured to geospatially correlate the annotation to the object in the second georeferenced video feed overlapping the first georeferenced video feed.
33 Citations
19 Claims
1. A video processing system comprising:

a first video input configured to receive a first georeferenced video feed from a first video source corresponding to a first perspective of a scene from a first location;

a second video input configured to receive a second georeferenced video feed from a second video source corresponding to a second perspective of the scene from a second location different than the first location and defining a different viewing angle with respect to the first georeferenced video feed, the second georeferenced video feed overlapping the first georeferenced video feed and being separate from the first video feed; and

a video processor coupled to said first and second video inputs and comprising an annotation module configured to generate an annotation for an object in the first georeferenced video feed, and a geospatial correlation module configured to geospatially translate the annotation to the object in the second georeferenced video feed overlapping the first georeferenced video feed so that annotations made in the first perspective are translated to the second perspective.

Dependent claims: 2, 3, 4, 5, 6, 7, 8, 9
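The geospatial translation the claim describes can be illustrated with a minimal sketch, which is not the patented implementation: assuming each georeferenced feed carries a GDAL-style 6-tuple affine geotransform mapping pixel coordinates to ground coordinates, an annotation's pixel in the first feed is projected to ground coordinates and then mapped into the second feed's pixel space. The function names and the geotransform layout are illustrative assumptions.

```python
# Hedged sketch: translate a pixel annotation between two georeferenced
# feeds via a shared ground coordinate system. Each geotransform is the
# assumed 6-tuple (x0, dx, rx, y0, ry, dy), GDAL-style.

def pixel_to_geo(gt, col, row):
    """Map pixel (col, row) to ground (x, y) with an affine geotransform."""
    x0, dx, rx, y0, ry, dy = gt
    return (x0 + dx * col + rx * row,
            y0 + ry * col + dy * row)

def geo_to_pixel(gt, x, y):
    """Invert the affine geotransform to recover pixel coordinates."""
    x0, dx, rx, y0, ry, dy = gt
    det = dx * dy - rx * ry  # nonzero for any valid geotransform
    u, v = x - x0, y - y0
    return ((dy * u - rx * v) / det,
            (-ry * u + dx * v) / det)

def translate_annotation(gt_first, gt_second, col, row):
    """Carry an annotation pixel from the first feed into the second."""
    x, y = pixel_to_geo(gt_first, col, row)   # first perspective -> ground
    return geo_to_pixel(gt_second, x, y)      # ground -> second perspective
```

A flat affine geotransform suffices only for near-nadir imagery; oblique views with differing viewing angles, as claimed, would additionally require a sensor model or terrain elevation to resolve the ground point.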
10. A video processing system comprising:

a first video input configured to receive a first georeferenced video feed from a first video source corresponding to a first perspective of a scene from a first location;

a second video input configured to receive a second georeferenced video feed from a second video source corresponding to a second perspective of the scene from a second location different than the first location and defining a different viewing angle with respect to the first georeferenced video feed, the second georeferenced video feed overlapping the first georeferenced video feed and being separate from the first video feed; and

a video processor coupled to said first and second video inputs and comprising an annotation module configured to generate an annotation for an object in the first georeferenced video feed, and a geospatial correlation module configured to geospatially translate the annotation to the object in the second georeferenced video feed overlapping the first georeferenced video feed and comprising a coordinate transformation module configured to transform geospatial coordinates for the annotation in the first georeferenced video feed to pixel coordinates in the second georeferenced video feed so that annotations made in the first perspective are translated to the second perspective, and a velocity model module configured to generate velocity models of the object in the first and second georeferenced video feeds for tracking the object therebetween.

Dependent claims: 11, 12
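The velocity model module recited in claim 10 can be approximated, for illustration only, by a constant-velocity model in ground coordinates: estimate the object's velocity from two timestamped positions observed in one feed, then extrapolate its position at the frame time of the other feed to maintain the track across the hand-off. All names below are illustrative assumptions, not the patent's implementation.

```python
# Hedged sketch: constant-velocity model for tracking an object between
# two overlapping georeferenced feeds. Positions are ground (x, y) pairs,
# times are in seconds.

def estimate_velocity(p0, t0, p1, t1):
    """Ground-plane velocity (vx, vy) from two timestamped observations."""
    dt = t1 - t0
    return ((p1[0] - p0[0]) / dt, (p1[1] - p0[1]) / dt)

def predict_position(p, t, v, t_query):
    """Extrapolate the position at t_query under constant velocity."""
    dt = t_query - t
    return (p[0] + v[0] * dt, p[1] + v[1] * dt)
```

A production tracker would more likely use a Kalman filter over noisy detections; the two-point estimate above only conveys the idea of predicting where the object should appear in the second feed.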
13. A video processor for processing a first georeferenced video feed from a first video source corresponding to a first perspective of a scene from a first location and a second georeferenced video feed from a second video source corresponding to a second perspective of the scene from a second location different than the first location and defining a different viewing angle with respect to the first georeferenced video feed, the second georeferenced video feed overlapping the first georeferenced video feed and being separate from the first video feed, the video processor comprising:

a non-transitory annotation module configured to generate an annotation for an object in the first georeferenced video feed; and

a non-transitory geospatial correlation module configured to geospatially translate the annotation to the object in the second georeferenced video feed overlapping the first georeferenced video feed so that annotations made in the first perspective are translated to the second perspective.

Dependent claims: 14, 15
16. A video processing method comprising:

providing a first georeferenced video feed from a first video source corresponding to a first perspective of a scene from a first location;

providing a second georeferenced video feed from a second video source corresponding to a second perspective of the scene from a second location different than the first location and defining a different viewing angle with respect to the first georeferenced video feed, the second georeferenced video feed overlapping the first georeferenced video feed and being separate from the first video feed;

generating an annotation for an object in the first georeferenced video feed using a non-transitory annotation module; and

geospatially correlating the annotation to the object in the second georeferenced video feed overlapping the first georeferenced video feed using a non-transitory geospatial correlation module so that annotations made in the first perspective are translated to the second perspective.

Dependent claims: 17, 18, 19
Specification