Perspective display systems and methods
First Claim
1. A method comprising:
calibrating, by a perspective display system, a geometric model of an image array, a virtual aperture defined by a single display screen of a single display device, and a user space associated with the single display screen of the single display device, the calibrating comprising virtually positioning the image array at a constant distance behind the virtual aperture based on a predefined distance from the single display screen at which the entire image array is to be viewable, from the user space, through the virtual aperture defined by the single display screen of the single display device;
acquiring, by the perspective display system, visual data representative of a camera view of the user space associated with the single display screen of the single display device;
determining, by the perspective display system based on the visual data, a position of a first user within the user space and a position of a second user within the user space, wherein the determining of the position of the first user comprises:
detecting a predefined tracking initiation gesture performed by the first user; and
locking onto and tracking, based on the predefined tracking initiation gesture, a physical feature of the first user within the user space during a first time period; and
the determining of the position of the second user comprises:
detecting an additional predefined tracking initiation gesture performed by the second user; and
locking onto and tracking, based on the additional predefined tracking initiation gesture, a physical feature of the second user within the user space during a second time period, the second time period different from the first time period;
receiving, by the perspective display system, a request to view one of a plurality of channels that provide user-selected content to the first user and the second user via the perspective display system, the one of the plurality of channels including at least one image;
identifying, by the perspective display system based on a relationship between the position of the first user, the virtual aperture defined by the single display screen of the single display device, and the at least one image represented by the image array positioned at the constant virtual distance behind the virtual aperture in accordance with the calibrated geometric model, a first viewable region of the at least one image;
identifying, by the perspective display system based on a relationship between the position of the second user, the virtual aperture defined by the single display screen of the single display device, and the at least one image represented by the image array positioned at the constant virtual distance behind the virtual aperture in accordance with the calibrated geometric model, a second viewable region of the at least one image;
generating, by the perspective display system, one of a display of the first viewable region of the at least one image and a display of the second viewable region of the at least one image; and
displaying, by the perspective display system, the one of the generated display of the first viewable region of the at least one image and the generated display of the second viewable region of the at least one image on the single display screen of the single display device, the first viewable region of the at least one image representing a first perspective view of the at least one image based on the relationship between the position of the first user and the virtual aperture defined by the single display screen of the single display device, and the second viewable region of the at least one image representing a second perspective view of the at least one image based on the relationship between the position of the second user and the virtual aperture defined by the single display screen of the single display device.
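The "viewable region" the claim recites is essentially window projection: rays from the user's eye through the edges of the display screen (the virtual aperture) are extended to the image plane positioned a constant distance behind it. The following is a minimal sketch of that geometry under our own assumptions (screen centered at the origin in the z=0 plane, user at positive z); none of these identifiers come from the patent:

```python
from dataclasses import dataclass

@dataclass
class Rect:
    left: float
    right: float
    bottom: float
    top: float

def viewable_region(user, screen_w, screen_h, depth):
    """Project the screen edges from the user's eye onto the image
    plane located `depth` units behind the screen.

    The screen is centered at the origin in the z=0 plane; the image
    array lies in the z=-depth plane; `user` = (ux, uy, uz), uz > 0.
    """
    ux, uy, uz = user
    if uz <= 0:
        raise ValueError("user must be in front of the screen (uz > 0)")
    t = depth / uz  # ray-extension factor from aperture edge to image plane
    hw, hh = screen_w / 2, screen_h / 2
    return Rect(
        left=-hw + (-hw - ux) * t,
        right=hw + (hw - ux) * t,
        bottom=-hh + (-hh - uy) * t,
        top=hh + (hh - uy) * t,
    )
```

As expected for a window, moving the user to the right shifts the viewable region toward the left side of the image, and moving closer widens it.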
Abstract
Exemplary perspective display systems and methods are disclosed herein. An exemplary method includes a perspective display system acquiring visual data representative of a camera view of a user space associated with a display screen, determining, based on the visual data, a position of a user within the user space, identifying, based on the position of the user, a viewable region of an image, and displaying, on the display screen, the viewable region of the image, the displayed viewable region of the image representing a perspective view of the image based on the position of the user. In certain examples, the method further includes the perspective display system detecting a movement of the user to another position within the user space and updating the display on the display screen in real time in accordance with the movement to display another viewable region of the image on the display screen.
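The abstract's acquire-track-update cycle can be sketched as a loop that recomputes the perspective view only when the tracked position actually changes. The pan-offset function below assumes a simple pinhole model with the image a fixed virtual distance behind the screen; all names are ours, not the patent's:

```python
def perspective_offset(user_pos, depth=1.0):
    """Map a tracked user position (x, y, z) to a pan offset on the
    image plane: moving right reveals more of the image's left side,
    as when looking through a window."""
    x, y, z = user_pos
    return (-x * depth / z, -y * depth / z)

def run(positions):
    """Replay a sequence of tracked user positions, re-rendering
    (here, recording an offset) only when the user has moved."""
    frames = []
    last = None
    for pos in positions:
        if pos != last:  # movement detected: update the display
            frames.append(perspective_offset(pos))
            last = pos
    return frames
```

In a real system the position sequence would come from the camera-based tracker and each offset would drive a re-crop of the displayed image.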
23 Claims
1. A method comprising:
calibrating, by a perspective display system, a geometric model of an image array, a virtual aperture defined by a single display screen of a single display device, and a user space associated with the single display screen of the single display device, the calibrating comprising virtually positioning the image array at a constant distance behind the virtual aperture based on a predefined distance from the single display screen at which the entire image array is to be viewable, from the user space, through the virtual aperture defined by the single display screen of the single display device;
acquiring, by the perspective display system, visual data representative of a camera view of the user space associated with the single display screen of the single display device;
determining, by the perspective display system based on the visual data, a position of a first user within the user space and a position of a second user within the user space, wherein the determining of the position of the first user comprises:
detecting a predefined tracking initiation gesture performed by the first user; and
locking onto and tracking, based on the predefined tracking initiation gesture, a physical feature of the first user within the user space during a first time period; and
the determining of the position of the second user comprises:
detecting an additional predefined tracking initiation gesture performed by the second user; and
locking onto and tracking, based on the additional predefined tracking initiation gesture, a physical feature of the second user within the user space during a second time period, the second time period different from the first time period;
receiving, by the perspective display system, a request to view one of a plurality of channels that provide user-selected content to the first user and the second user via the perspective display system, the one of the plurality of channels including at least one image;
identifying, by the perspective display system based on a relationship between the position of the first user, the virtual aperture defined by the single display screen of the single display device, and the at least one image represented by the image array positioned at the constant virtual distance behind the virtual aperture in accordance with the calibrated geometric model, a first viewable region of the at least one image;
identifying, by the perspective display system based on a relationship between the position of the second user, the virtual aperture defined by the single display screen of the single display device, and the at least one image represented by the image array positioned at the constant virtual distance behind the virtual aperture in accordance with the calibrated geometric model, a second viewable region of the at least one image;
generating, by the perspective display system, one of a display of the first viewable region of the at least one image and a display of the second viewable region of the at least one image; and
displaying, by the perspective display system, the one of the generated display of the first viewable region of the at least one image and the generated display of the second viewable region of the at least one image on the single display screen of the single display device, the first viewable region of the at least one image representing a first perspective view of the at least one image based on the relationship between the position of the first user and the virtual aperture defined by the single display screen of the single display device, and the second viewable region of the at least one image representing a second perspective view of the at least one image based on the relationship between the position of the second user and the virtual aperture defined by the single display screen of the single display device. - View Dependent Claims (2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14)
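Claim 1 displays "one of" the two generated views, with each user tracked during a distinct time period. One plausible reading is a time-multiplexed schedule that decides whose perspective occupies the single screen at any moment; the helper below is a hypothetical illustration of that reading, not the patent's stated mechanism:

```python
def active_user(t, periods):
    """Return the user whose tracking period covers time t.

    periods: list of (start, end, user_id) tuples with non-overlapping
    half-open intervals [start, end); returns None between periods.
    """
    for start, end, user_id in periods:
        if start <= t < end:
            return user_id
    return None
```

The display step would then render the first viewable region while the first user's period is active and the second viewable region during the second user's period.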
15. A method comprising:
receiving, by a video distribution subsystem, a plurality of incoming real-time video feeds carrying data representative of a plurality of live video images captured by a plurality of video cameras;
receiving, by the video distribution subsystem, a request to view a user-selected outgoing real-time video feed among a plurality of outgoing real-time video feeds on one of a plurality of channels carrying data representative of one of the plurality of live video images;
transmitting, by the video distribution subsystem, the user-selected outgoing real-time video feed carrying data representative of the one of the plurality of live video images to a video access subsystem;
receiving, by the video access subsystem, the user-selected outgoing real-time video feed carrying data representative of the one of the plurality of live video images;
calibrating, by the video access subsystem, a geometric model of an image array, a virtual aperture defined by a single display screen of a single display device, and a user space associated with the single display screen of the single display device, the calibrating comprising virtually positioning the image array at a constant distance behind the virtual aperture based on a predefined distance from the single display screen at which the entire image array is to be viewable, from the user space, through the virtual aperture defined by the single display screen of the single display device;
detecting, by the video access subsystem, a position of a first user within the user space associated with the single display screen of the single display device and a position of a second user within the user space associated with the single display screen of the single display device, wherein the detecting of the position of the first user comprises:
detecting a predefined tracking initiation gesture performed by the first user; and
locking onto and tracking, based on the predefined tracking initiation gesture, a physical feature of the first user within the user space during a first time period; and
the detecting of the position of the second user comprises:
detecting an additional predefined tracking initiation gesture performed by the second user; and
locking onto and tracking, based on the additional predefined tracking initiation gesture, a physical feature of the second user within the user space during a second time period, the second time period different from the first time period;
identifying, by the video access subsystem based on a relationship between the position of the first user, the virtual aperture defined by the single display screen of the single display device, and the one of the plurality of live video images represented by the image array positioned at the constant virtual distance behind the virtual aperture in accordance with the calibrated geometric model, a first viewable region of the one of the plurality of live video images;
identifying, by the video access subsystem based on a relationship between the position of the second user, the virtual aperture defined by the single display screen of the single display device, and the one of the plurality of live video images represented by the image array positioned at the constant virtual distance behind the virtual aperture in accordance with the calibrated geometric model, a second viewable region of the one of the plurality of live video images;
generating, by the video access subsystem, one of a display of the first viewable region of the one of the plurality of live video images and a display of the second viewable region of the one of the plurality of live video images; and
displaying, by the video access subsystem, the one of the generated display of the first viewable region of the one of the plurality of live video images and the generated display of the second viewable region of the one of the plurality of live video images on the single display screen of the single display device, the first viewable region of the one of the plurality of live video images representing a first perspective view of the one of the plurality of live video images based on the relationship between the position of the first user and the virtual aperture defined by the single display screen of the single display device, and the second viewable region of the one of the plurality of live video images representing a second perspective view of the one of the plurality of live video images based on the relationship between the position of the second user and the virtual aperture defined by the single display screen of the single display device. - View Dependent Claims (16, 17, 18, 19, 20)
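The distribution steps of claim 15 amount to routing a channel request to one of several incoming live camera feeds. A hedged sketch follows; the dictionary-based lookup is our assumption, since the patent does not specify how channels are bound to cameras:

```python
def select_feed(channel, channel_to_camera, feeds):
    """Route a user-selected channel to its live feed.

    channel_to_camera: mapping from channel number to a camera id
    (an illustrative binding, not from the patent).
    feeds: mapping from camera id to that camera's live feed object.
    """
    cam = channel_to_camera.get(channel)
    if cam is None:
        raise KeyError(f"no camera bound to channel {channel}")
    return feeds[cam]
```

The video distribution subsystem would apply this selection and transmit the resulting feed to the video access subsystem for display.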
21. A system comprising:
a video distribution subsystem that receives a plurality of incoming real-time video feeds carrying data representative of a plurality of live video images captured by a plurality of video cameras, receives a request to view a user-selected outgoing real-time video feed among a plurality of outgoing real-time video feeds on one of a plurality of channels carrying data representative of one of the plurality of live video images, and distributes the user-selected outgoing real-time video feed carrying data representative of the one of the plurality of live video images over a network; and
a video access subsystem communicatively coupled to the video distribution subsystem by way of the network and that:
receives the user-selected outgoing real-time video feed carrying data representative of the one of the plurality of live video images;
calibrates a geometric model of an image array, a virtual aperture defined by a single display screen of a single display device, and a user space associated with the single display screen of the single display device, the calibrating comprising virtually positioning the image array at a constant distance behind the virtual aperture based on a predefined distance from the single display screen at which the entire image array is to be viewable, from the user space, through the virtual aperture defined by the single display screen of the single display device;
detects a position of a first user within the user space associated with the single display screen of the single display device and a position of a second user within the user space associated with the single display screen of the single display device, wherein the video access subsystem detects the position of the first user by:
detecting a predefined tracking initiation gesture performed by the first user; and
locking onto and tracking, based on the predefined tracking initiation gesture, a physical feature of the first user within the user space during a first time period; and
the video access subsystem detects the position of the second user by:
detecting an additional predefined tracking initiation gesture performed by the second user; and
locking onto and tracking, based on the additional predefined tracking initiation gesture, a physical feature of the second user within the user space during a second time period, the second time period different from the first time period;
identifies, based on a relationship between the position of the first user, the virtual aperture defined by the single display screen of the single display device, and the one of the plurality of live video images represented by the image array positioned at the constant virtual distance behind the virtual aperture in accordance with the calibrated geometric model, a first viewable region of the one of the plurality of live video images;
identifies, based on a relationship between the position of the second user, the virtual aperture defined by the single display screen of the single display device, and the one of the plurality of live video images represented by the image array positioned at the constant virtual distance behind the virtual aperture in accordance with the calibrated geometric model, a second viewable region of the one of the plurality of live video images;
generates, within the single display device, one of a display of the first viewable region of the one of the plurality of live video images and a display of the second viewable region of the one of the plurality of live video images; and
displays, on the single display screen of the single display device, one of a first perspective view of the generated display of the first viewable region of the one of the plurality of live video images based on the relationship between the position of the first user and the virtual aperture defined by the single display screen of the single display device and a second perspective view of the generated display of the second viewable region of the one of the plurality of live video images based on the relationship between the position of the second user and the virtual aperture defined by the single display screen of the single display device. - View Dependent Claims (22, 23)
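The gesture-initiated "locking onto and tracking" recited across the claims can be pictured as a small state machine: no one is tracked until the predefined initiation gesture is observed, and the lock then persists until that user's time period ends. All class and method names below are illustrative, not from the patent:

```python
class GestureTracker:
    """Hypothetical sketch of gesture-initiated lock-on tracking."""

    def __init__(self, initiation_gesture="wave"):
        self.initiation_gesture = initiation_gesture
        self.locked_feature = None  # the tracked user's feature, once locked

    def observe(self, person_id, gesture=None):
        """Feed one camera observation; return the tracked person or None.

        While unlocked, only the initiation gesture starts tracking;
        while locked, other users' gestures are ignored.
        """
        if self.locked_feature is None and gesture == self.initiation_gesture:
            self.locked_feature = person_id  # lock onto this user
        return self.locked_feature

    def release(self):
        """End this user's tracking period (e.g. before the second
        user's distinct time period begins)."""
        self.locked_feature = None
```

Releasing the lock between periods matches the claims' requirement that the first and second users are tracked during different, non-overlapping time periods.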
Specification