Mixed environment display of attached control elements
Abstract
Technologies described herein provide a mixed environment display of attached control elements. The techniques disclosed herein enable users of a first computing device to interact with a remote computing device configured to control an object, such as a light, appliance, or any other suitable object. Configurations disclosed herein enable the first computing device to cause one or more actions, such as a selection of the object or the display of a user interface, by capturing and analyzing input data defining the performance of one or more gestures, such as a user looking at the object controlled by the remote computing device. Rendered graphical elements configured to enable the control of the object can be displayed with a real-world view of the object.
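The control flow the abstract describes, recognizing an object from image data, looking up the controller device associated with it, and surfacing its commands, can be sketched as follows. This is a minimal illustration only; every name here (`DeviceRecord`, `REGISTRY`, `identify`, `handle_gaze`) and the feature-matching scheme are assumptions, not taken from the patent.

```python
from __future__ import annotations
from dataclasses import dataclass

# Hypothetical registry entry: associates a recognized physical
# characteristic of an object with the network address of the
# controller device that manages it, and the commands it accepts.
@dataclass
class DeviceRecord:
    characteristic: str   # recognized characteristic, e.g. a shape or label
    address: str          # network address of the controller device
    commands: list[str]   # commands the controller device accepts

REGISTRY = [
    DeviceRecord("desk_lamp", "10.0.0.42", ["on", "off", "dim"]),
]

def identify(image_features: set[str]) -> DeviceRecord | None:
    """Match features extracted from image data to a known object."""
    for record in REGISTRY:
        if record.characteristic in image_features:
            return record
    return None

def handle_gaze(image_features: set[str]) -> list[str]:
    """On recognizing an object, return the control elements to render."""
    record = identify(image_features)
    if record is None:
        return []
    # In a full system, a connection would be opened to record.address
    # and live status data fetched before rendering these elements.
    return [f"{cmd} ({record.characteristic})" for cmd in record.commands]
```

In this sketch, recognition of the object's characteristic is what gates both the address lookup and the display of control elements, mirroring the ordering recited in the claims.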
20 Claims
1. A first computing device, comprising:

a processor;
a hardware display surface;
one or more input devices; and
a memory having computer-executable instructions stored thereupon which, when executed by the processor, cause the first computing device to:

obtain control data defining one or more commands configured to cause an execution of a second set of computer-executable instructions at a second computing device, wherein the second computing device is configured to interact with an object;
analyze image data from the one or more input devices to identify a physical characteristic of the object;
obtain identification data indicating an association between a network address of the second computing device and the physical characteristic of the object;
establish a connection with the second computing device using the network address in response to identifying the physical characteristic of the object;
obtain status data indicating a status associated with the second computing device or the object;
cause a display of one or more graphical elements comprising the status data and the one or more commands on the hardware display surface, wherein the hardware display surface is configured to display the one or more graphical elements with a real-world view of the object through a transparent section of the hardware display surface;
capture a gesture of at least a portion of a hand, performed by a user, wherein the gesture is viewable by the user through the transparent section of the hardware display surface, and wherein the gesture is viewable by the user with the real-world view of the object;
select at least one of the one or more graphical elements based on the gesture; and
communicate the one or more commands to the second computing device in response to the selection of the one or more commands detected by the one or more input devices, wherein the communication of the one or more commands causes an execution of at least a portion of the second set of computer-executable instructions at the second computing device.

View Dependent Claims (2, 3, 4, 5, 6, 7, 8)
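The gesture-based selection step in the claim, detecting where a hand gesture falls relative to the rendered control elements and building the command that is communicated to the second computing device, can be sketched as below. The class and function names, 2-D bounding-box geometry, and payload shape are all illustrative assumptions; the patent does not specify them.

```python
from __future__ import annotations
from dataclasses import dataclass

@dataclass
class GraphicalElement:
    command: str                       # command the element triggers when selected
    bounds: tuple[int, int, int, int]  # (x, y, width, height) on the display surface

def select_element(elements: list[GraphicalElement],
                   gesture_point: tuple[int, int]) -> GraphicalElement | None:
    """Hit-test the gesture's position against each rendered element."""
    gx, gy = gesture_point
    for element in elements:
        x, y, w, h = element.bounds
        if x <= gx < x + w and y <= gy < y + h:
            return element
    return None

def command_payload(element: GraphicalElement, address: str) -> dict:
    """Build the message communicated to the controller at the given address."""
    return {"to": address, "command": element.command}
```

A selection only produces a payload when the gesture actually lands on an element, which reflects the claim's "in response to the selection" conditioning of the communication step.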
9. A computer-implemented method, comprising:

obtaining, at a first computing device, control data defining one or more commands configured to cause an execution of a second set of computer-executable instructions at a second computing device, wherein the second computing device is configured to interact with an object;
analyzing image data from one or more input devices to identify a physical characteristic of the object;
obtaining identification data indicating an association between a network address of the second computing device and the physical characteristic of the object;
establishing a connection with the second computing device using the network address in response to identifying the physical characteristic of the object;
causing a display of one or more graphical elements comprising the one or more commands on a hardware display surface associated with the first computing device, wherein the hardware display surface is configured to display the one or more graphical elements with a real-world view of the object through a transparent section of the hardware display surface, the real-world view of the object provided by a camera generating a video feed of an environment around the first computing device;
capturing a gesture of at least a portion of a hand, performed by a user, wherein the gesture is viewable by the user through the transparent section of the hardware display surface, and wherein the gesture is viewable by the user with the real-world view of the object;
selecting at least one of the one or more graphical elements based on the gesture; and
communicating the one or more commands from the first computing device to the second computing device in response to the selection of the one or more commands detected by the one or more input devices, wherein the communication of the one or more commands causes an execution of at least a portion of the second set of computer-executable instructions at the second computing device.

View Dependent Claims (10, 11, 12, 13)
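The final step shared by the method claim, communicating the selected command over the established connection so that the second computing device executes it, could look like the following. The patent does not specify a wire protocol, so the newline-delimited JSON request/response exchange, the `send_command` name, and the port are assumptions for illustration only.

```python
import json
import socket

def send_command(address: str, port: int, command: str, timeout: float = 2.0) -> dict:
    """Transmit a selected command to the controller device and return its reply.

    Sketch assuming a newline-delimited JSON exchange over TCP; the real
    transport and message format are not specified by the claims.
    """
    with socket.create_connection((address, port), timeout=timeout) as sock:
        sock.sendall((json.dumps({"command": command}) + "\n").encode())
        reply_line = sock.makefile().readline()
    return json.loads(reply_line)
```

The connection here would be the one established earlier in response to identifying the object's physical characteristic; the reply can carry the status data that the device claims recite displaying alongside the commands.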
14. A first computing device, comprising:

a processor;
a hardware display surface;
one or more input devices; and
a memory having computer-executable instructions stored thereupon which, when executed by the processor, cause the first computing device to:

obtain control data defining one or more commands configured to cause an execution of a second set of computer-executable instructions at one or more controller devices, wherein the one or more controller devices are configured to interact with one or more objects;
analyze image data from the one or more input devices to identify a physical characteristic of the one or more objects;
obtain identification data indicating an association between a network address of the one or more controller devices and the physical characteristic of the one or more objects;
establish a connection with the one or more controller devices using the network address in response to identifying the physical characteristic of the one or more objects;
cause a display of one or more graphical elements comprising the one or more commands on the hardware display surface, wherein the display further comprises a rendering of a real-world view of the one or more objects;
capture a gesture performed by a user, wherein the gesture is viewable by the user through a transparent section of the hardware display surface, and wherein the gesture is viewable by the user with the real-world view of the one or more objects;
select at least one of the one or more graphical elements based on the gesture; and
communicate the one or more commands to the one or more controller devices in response to the selection of the one or more commands detected by the one or more input devices, wherein the communication of the one or more commands causes an execution of at least a portion of the second set of computer-executable instructions at the one or more controller devices.

View Dependent Claims (15, 16, 17, 18, 19, 20)
Specification