Classes of meeting participant interaction
First Claim
1. A method comprising:
capturing first video data in a room using at least a first camera and second video data using at least a second camera, the room corresponding to at least a first zone and a second zone, the first zone being located between the second zone and a primary screen of the room, the first camera capable of capturing at least a substantial portion of a standing user located in the first zone, the second camera capable of capturing a user located in the second zone;
analyzing, by a processor, the first video data and the second video data to determine at least a first location of a first meeting participant of a video-conference and a second location of a second meeting participant of the video-conference;
determining, by the processor, first characteristics of at least the first meeting participant and second characteristics of at least the second meeting participant;
defining, by the processor, a first participant interaction class based on the first characteristics of at least the first meeting participant, and a second participant interaction class based on the second characteristics of at least the second meeting participant, wherein the first participant interaction class is associated with the first location of the first meeting participant and the second participant interaction class is associated with the second location of the second meeting participant; and
varying, by the processor, presentation of the video-conference based on at least one of the first participant interaction class or the second participant interaction class.
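The claimed flow (capture video in zones, analyze locations, determine characteristics, define interaction classes, vary presentation) can be sketched in code. This is a minimal illustration under assumed details: the zone boundary, the class names ("presenter", "viewer"), and the layout choices below are hypothetical stand-ins and do not come from the patent's specification.

```python
from dataclasses import dataclass

@dataclass
class Participant:
    name: str
    distance_from_screen_m: float  # location, as estimated from the analyzed video data

# Assumed zone boundary: the first zone lies between the primary screen and the
# second zone, so a participant within 2 m of the screen is treated as being in
# the first zone. The 2 m figure is illustrative only.
FIRST_ZONE_DEPTH_M = 2.0

def interaction_class(p: Participant) -> str:
    """Define a participant interaction class from the participant's location
    (used here as a simple stand-in for the claim's 'characteristics')."""
    if p.distance_from_screen_m <= FIRST_ZONE_DEPTH_M:
        return "presenter"   # first zone: likely standing and interacting
    return "viewer"          # second zone: likely seated audience

def vary_presentation(participants: list[Participant]) -> str:
    """Vary the video-conference presentation based on the interaction classes."""
    classes = {interaction_class(p) for p in participants}
    if "presenter" in classes:
        # A first-zone participant exists: favor a layout that frames the
        # first camera's view and foregrounds the collaborative materials.
        return "presenter-focused layout"
    return "room-wide layout"

room = [Participant("A", 1.2), Participant("B", 4.5)]
print(interaction_class(room[0]))  # presenter
print(interaction_class(room[1]))  # viewer
print(vary_presentation(room))     # presenter-focused layout
```

In practice the location estimate would come from per-camera person detection rather than a supplied distance, but the class-then-vary structure of the claim is unchanged.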
Abstract
A technology for interacting with a collaborative videoconferencing environment is disclosed. A display having a substantially “L-shaped” configuration allows collaborative materials and video of remote participants to be shown simultaneously, providing a more natural interaction for a meeting participant working with the collaborative materials. Meeting participants in the collaborative videoconferencing environment can be classified based on their position within the environment or their likely interaction profile. The technology can configure a meeting experience based on the classification of the meeting participant.
20 Claims
1. (Method claim; reproduced in full under “First Claim” above.) Dependent claims: 2, 3, 4, 5, 6, 7, 8.
9. A system comprising:
a processor; and
a memory containing instructions that, when executed, cause the processor to:
capture first video data in a room using at least a first camera and second video data using at least a second camera, the room corresponding to at least a first zone and a second zone, the first zone being located between the second zone and a primary screen of the room, the first camera capable of capturing at least a substantial portion of a standing user located in the first zone, the second camera capable of capturing a user located in the second zone;
analyze the first video data and the second video data to determine at least a first location of a first meeting participant of a video-conference and a second location of a second meeting participant of the video-conference;
determine first characteristics of at least the first meeting participant and second characteristics of at least the second meeting participant;
identify a first participant interaction class based on the first characteristics of at least the first meeting participant, and a second participant interaction class based on the second characteristics of at least the second meeting participant, wherein the first participant interaction class is associated with the first location of the first meeting participant and the second participant interaction class is associated with the second location of the second meeting participant; and
vary presentation of the video-conference based on at least one of the first participant interaction class or the second participant interaction class.
Dependent claims: 10, 11, 12, 13, 14.
15. A non-transitory computer-readable medium containing instructions that, when executed by a computing device, cause the computing device to:
capture first video data in a room using at least a first camera and second video data using at least a second camera, the room corresponding to at least a first zone and a second zone, the first zone being located between the second zone and a primary screen of the room, the first camera capable of capturing at least a substantial portion of a standing user located in the first zone, the second camera capable of capturing a user located in the second zone;
analyze the first video data and the second video data to determine at least a first location of a first meeting participant of a video-conference and a second location of a second meeting participant of the video-conference;
determine first characteristics of at least the first meeting participant and second characteristics of at least the second meeting participant;
identify a first participant interaction class based on the first characteristics of at least the first meeting participant, and a second participant interaction class based on the second characteristics of at least the second meeting participant, wherein the first participant interaction class is associated with the first location of the first meeting participant and the second participant interaction class is associated with the second location of the second meeting participant; and
vary presentation of the video-conference based on at least one of the first participant interaction class or the second participant interaction class.
Dependent claims: 16, 17, 18, 19, 20.
Specification