Optimized telepresence using mobile device gestures
First Claim
1. A computer-implemented process for optimizing the telepresence of two or more users participating in a telepresence session, said users comprising a mobile user who is utilizing a mobile device (MD) comprising a display screen, and one or more remote users each of whom are utilizing a computer that is connected to the MD via a network, comprising:
- using the MD to perform the following process actions:
receiving live video of a first remote user over the network;
displaying default information on the display screen; and
whenever it is detected by the MD that the mobile user has performed a gesture with the MD using a first motion, displaying the video of the first remote user on the display screen, instead of the default information, wherein gesture detection comprises detecting physical movement of the MD itself;
receiving gaze information for the first remote user over the network, wherein said information specifies what the first remote user is currently looking at on their display device; and
whenever the gaze information for the first remote user specifies that they are currently looking at either a workspace or a video of another remote user, differentiating the video of the first remote user in a manner that indicates to the mobile user that the first remote user is not looking at the mobile user.
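As a concrete illustration of "detecting physical movement of the MD itself", a first-motion gesture could be recognized from the device's accelerometer. The sketch below is not from the patent: the threshold value, axis conventions, and function name are assumptions, chosen only to show the idea of classifying a single accelerometer sample (in g units) as a tilt about the device's left or right edge.

```python
import math

TILT_THRESHOLD_DEG = 25.0  # hypothetical threshold; the patent does not specify one

def classify_tilt(ax: float, ay: float, az: float) -> str:
    """Classify a device pose from one accelerometer sample (in g units).

    Returns "tilt_left", "tilt_right", or "flat". Roll is estimated from
    gravity: tilting the device sideways shifts gravity into the x axis.
    """
    roll = math.degrees(math.atan2(ax, az))
    if roll <= -TILT_THRESHOLD_DEG:
        return "tilt_left"
    if roll >= TILT_THRESHOLD_DEG:
        return "tilt_right"
    return "flat"
```

On a real device the samples would come from the platform motion API (e.g. Core Motion on iOS or SensorManager on Android), with debouncing so one deliberate tilt produces one gesture event.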
Abstract
Telepresence of a mobile user (MU) utilizing a mobile device (MD) and remote users who are participating in a telepresence session is optimized. The MD receives video of a first remote user (FRU). Whenever the MU gestures with the MD using a first motion, video of the FRU is displayed. The MD can also receive video and audio of the FRU and a second remote user (SRU), display a workspace, and reproduce the audio of the FRU and SRU in a default manner. Whenever the MU gestures with the MD using the first motion, video of the FRU is displayed and audio of the FRU and SRU is reproduced in a manner that accentuates the FRU. Whenever the MU gestures with the MD using a second motion, video of the SRU is displayed and audio of the FRU and SRU is reproduced in a manner that accentuates the SRU.
20 Claims
1. A computer-implemented process for optimizing the telepresence of two or more users participating in a telepresence session, said users comprising a mobile user who is utilizing a mobile device (MD) comprising a display screen, and one or more remote users each of whom are utilizing a computer that is connected to the MD via a network, comprising:
using the MD to perform the following process actions:
receiving live video of a first remote user over the network;
displaying default information on the display screen; and
whenever it is detected by the MD that the mobile user has performed a gesture with the MD using a first motion, displaying the video of the first remote user on the display screen, instead of the default information, wherein gesture detection comprises detecting physical movement of the MD itself;
receiving gaze information for the first remote user over the network, wherein said information specifies what the first remote user is currently looking at on their display device; and
whenever the gaze information for the first remote user specifies that they are currently looking at either a workspace or a video of another remote user, differentiating the video of the first remote user in a manner that indicates to the mobile user that the first remote user is not looking at the mobile user. - View Dependent Claims (2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15)
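The gaze-differentiation step of claim 1 can be sketched as a small rendering decision: given a remote user's reported gaze target, decide whether their video tile should be differentiated for the viewer. This is an illustrative sketch, not the patent's method; the dimming factor, label text, and the `"video:<name>"` target encoding are all assumptions.

```python
from dataclasses import dataclass

@dataclass
class RemoteUser:
    name: str
    gaze_target: str  # e.g. "workspace", "video:alice", "video:mobile_user"

def render_hint(user: RemoteUser, viewer: str = "mobile_user") -> dict:
    """Decide how to present a remote user's video tile to the viewer.

    Per the claim, the tile is differentiated (here: dimmed and labeled,
    an illustrative choice) whenever the remote user is looking at a
    workspace or at another user's video rather than at the viewer.
    """
    looking_at_viewer = user.gaze_target == f"video:{viewer}"
    return {
        "dim": 1.0 if looking_at_viewer else 0.5,
        "label": None if looking_at_viewer else f"{user.name} is looking away",
    }
```

Any visual cue with the same trigger condition (grayscale, a border, an icon) would satisfy the claim language equally well; the point is that the cue is driven by the received gaze information.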
16. A computer-implemented process for optimizing the telepresence of three or more users participating in a telepresence session, said users comprising a mobile user who is utilizing a mobile device (MD) comprising a display screen and an audio output device, and two or more remote users each of whom are utilizing a computer that is connected to the MD via a network, comprising:
using the MD to perform the following process actions:
receiving live video and live audio of a first remote user over the network;
receiving live video and live audio of a second remote user over the network;
displaying a workspace on the display screen, and reproducing the audio of the first and second remote users via the audio output device in a default manner;
whenever the workspace is displayed on the display screen and it is detected by the MD that the mobile user has performed a gesture with the MD using a first motion, displaying the video of the first remote user on the display screen instead of the workspace, and reproducing the audio of the first and second remote users via the audio output device in a manner that accentuates the first remote user; and
whenever the workspace is displayed on the display screen and it is detected by the MD that the mobile user has performed a gesture with the MD using a second motion, displaying the video of the second remote user on the display screen instead of the workspace, and reproducing the audio of the first and second remote users via the audio output device in a manner that accentuates the second remote user; and
wherein gesture detection comprises detecting physical movement of the MD itself. - View Dependent Claims (17, 18, 19)
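One way to realize "reproducing the audio ... in a manner that accentuates" one user is a gain-weighted mix of the two audio streams. The sketch below is illustrative only: the specific gain values and the function name are assumptions, and a real implementation would operate on buffered PCM frames from the network rather than Python lists.

```python
def mix_audio(first, second, accentuate=None):
    """Mix two mono sample streams (equal length) into one output stream.

    In the default manner both users receive equal gain; when one user is
    accentuated, their stream is kept at full gain and the other is
    attenuated. The gain values are illustrative, not from the patent.
    """
    gains = {"first": (1.0, 0.25), "second": (0.25, 1.0)}.get(accentuate, (0.5, 0.5))
    return [a * gains[0] + b * gains[1] for a, b in zip(first, second)]
```

Spatializing the audio (panning the accentuated user to the center) would be another way to meet the same claim language.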
20. A computer-implemented process for optimizing the telepresence of three or more users participating in a telepresence session, said users comprising a mobile user who is utilizing a mobile device (MD) comprising a display screen and a front-facing video capture device that captures live video of the mobile user, and two or more remote users each of whom are utilizing a computer that is connected to the MD via a network, comprising:
using the MD to perform the following process actions:
(a) receiving live video of a first remote user over the network;
(b) receiving live video of a second remote user over the network;
(c) displaying a workspace on the display screen;
(d) transmitting gaze information over the network to the computer of each remote user, said information specifying what the mobile user is currently looking at on the display screen;
(e) whenever what the mobile user is looking at on the display screen changes, transmitting revised gaze information over the network to the computer of each remote user;
(f) receiving gaze information for the first remote user over the network, wherein said information specifies what the first remote user is currently looking at on their display device;
(g) receiving gaze information for the second remote user over the network, wherein said information specifies what the second remote user is currently looking at on their display device;
(h) whenever the workspace is displayed on the display screen and it is detected by the MD that the mobile user has performed a gesture with the MD by tilting the MD about its left edge, displaying the video of the first remote user on the display screen rather than the workspace, wherein whenever the gaze information for the first remote user specifies that they are currently looking at either a workspace or the video of the second remote user, said displayed video is differentiated in a manner that indicates to the mobile user that the first remote user is not looking at the mobile user;
(i) whenever the workspace is displayed on the display screen and it is detected by the MD that the mobile user has performed a gesture with the MD by tilting the MD about its right edge, displaying the video of the second remote user on the display screen rather than the workspace, wherein whenever the gaze information for the second remote user specifies that they are currently looking at either a workspace or the video of the first remote user, said displayed video is differentiated in a manner that indicates to the mobile user that the second remote user is not looking at the mobile user;
(j) whenever the video of the first remote user is displayed on the display screen and it is detected by the MD that the mobile user has performed a gesture with the MD by tilting the MD about its right edge, re-displaying the workspace on the display screen, instead of the first remote user or the second remote user;
(k) whenever the video of the second remote user is displayed on the display screen and it is detected by the MD that the mobile user has performed a gesture with the MD by tilting the MD about its left edge, re-displaying the workspace on the display screen, instead of the video of the second remote user or the video of the first remote user; and
(l) repeating actions (e)-(k) until the end of the telepresence session.
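The tilt-driven view switching of actions (h)-(k) amounts to a small state machine: tilting about the left edge from the workspace shows the first remote user, tilting about the right edge shows the second, and the opposite tilt from either video view returns to the workspace. The sketch below encodes exactly those transitions; the state and gesture names are assumptions for illustration.

```python
# Transition table for claim 20, actions (h)-(k): (current view, gesture) -> next view.
TRANSITIONS = {
    ("workspace", "tilt_left"): "first_remote_user",
    ("workspace", "tilt_right"): "second_remote_user",
    ("first_remote_user", "tilt_right"): "workspace",
    ("second_remote_user", "tilt_left"): "workspace",
}

def next_view(current: str, gesture: str) -> str:
    """Advance the display state; unlisted combinations leave the view unchanged."""
    return TRANSITIONS.get((current, gesture), current)
```

Note the symmetry: the gesture that dismisses a remote user's video is the opposite of the one that summoned it, so the workspace behaves as the home state of the loop in action (l).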
Specification