Interactivity with a mixed reality
First Claim
1. A method for enabling a user of a mobile computing device to use image sensor data and audio sensor data for interactions, the method comprising the steps of:
capturing, from an image sensor of a mobile device, image sensor data relating to a plurality of real-world objects;
identifying, by the mobile device in cooperation with a server, a first set of attributes associated with a first real-world object and a second set of attributes associated with a second real-world object from the plurality of real-world objects, based on the captured image sensor data;
providing, by the mobile device to the server, the first set of attributes and the second set of attributes;
determining, by the server, a first command set based on the first set of attributes and a second command set based on the second set of attributes from a database storing a plurality of command sets corresponding to a plurality of real-world objects, wherein the first and second command sets comprise commands that control functions of the first and the second real-world objects, respectively;
supplying, by the server, the first command set and the second command set to the mobile computing device;
making accessible, by the mobile computing device, the first command set and the second command set to the user; and
receiving, by the mobile computing device, a user-selected command from one of the first command set and the second command set, wherein the user-selected command is executable by the mobile computing device to control an accessible function of the respective first or second real-world object.
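The claimed method amounts to a client/server lookup protocol: the device derives attribute sets from captured imagery, the server maps each attribute set to a command set from a database, and the device offers those commands to the user. A minimal sketch follows; every attribute value, command name, and function here is a hypothetical illustration, not anything recited in the patent.

```python
# Illustrative sketch of the method of claim 1. All names, attribute
# values, and commands are hypothetical, not from the patent.

# Server-side database: command sets keyed by an object's attribute set.
COMMAND_DB = {
    ("appliance", "thermostat"): ["raise_temperature", "lower_temperature"],
    ("appliance", "lamp"): ["turn_on", "turn_off", "dim"],
}

def identify_attributes(image_sensor_data):
    """Device and server cooperate to derive one attribute set per
    real-world object in the captured image (stubbed here)."""
    return [("appliance", "thermostat"), ("appliance", "lamp")]

def determine_command_sets(attribute_sets):
    """Server: look up a command set for each attribute set."""
    return [COMMAND_DB.get(attrs, []) for attrs in attribute_sets]

# The capture covers two real-world objects; identify their attributes.
attribute_sets = identify_attributes(image_sensor_data=b"raw pixels")
first_set, second_set = determine_command_sets(attribute_sets)

# The device makes both command sets accessible; the user selects a
# command, which the device executes to control that object's function.
user_selected = second_set[0]  # e.g. "turn_on" for the lamp
```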
Abstract
Methods of interacting with a mixed reality are presented. A mobile device captures an image of a real-world object where the image has content information that can be used to control a mixed reality object through an offered command set. The mixed reality object can be real, virtual, or a mixture of both real and virtual.
Claims (21)
1. A method for enabling a user of a mobile computing device to use image sensor data and audio sensor data for interactions, the method comprising the steps of:

capturing, from an image sensor of a mobile device, image sensor data relating to a plurality of real-world objects;

identifying, by the mobile device in cooperation with a server, a first set of attributes associated with a first real-world object and a second set of attributes associated with a second real-world object from the plurality of real-world objects, based on the captured image sensor data;

providing, by the mobile device to the server, the first set of attributes and the second set of attributes;

determining, by the server, a first command set based on the first set of attributes and a second command set based on the second set of attributes from a database storing a plurality of command sets corresponding to a plurality of real-world objects, wherein the first and second command sets comprise commands that control functions of the first and the second real-world objects, respectively;

supplying, by the server, the first command set and the second command set to the mobile computing device;

making accessible, by the mobile computing device, the first command set and the second command set to the user; and

receiving, by the mobile computing device, a user-selected command from one of the first command set and the second command set, wherein the user-selected command is executable by the mobile computing device to control an accessible function of the respective first or second real-world object.

Dependent claims: 2, 11, 16.
3. A system for using image sensor data and audio sensor data for interactions, the system comprising:
a server communicatively coupled with a mobile device and a database storing a plurality of command sets corresponding to a plurality of real-world objects;

wherein the mobile device comprises an image sensor configured to capture image sensor data and an audio sensor configured to capture audio sensor data;

wherein the mobile device is configured to: identify, in cooperation with the server, a first set of attributes associated with a first real-world object and a second set of attributes associated with a second real-world object by using the image sensor data captured by the image sensor of the mobile device; and provide the first set of attributes and second set of attributes to the server;

wherein the server is configured to determine a first command set based on the first set of attributes and a second command set based on the second set of attributes from a database storing a plurality of command sets corresponding to a plurality of real-world objects, wherein the first and second command sets comprise commands that control functions of the first and the second real-world objects, respectively; and

wherein the mobile device is further configured to: receive the first command set and the second command set from the server; make the first command set and the second command set accessible to a user; and receive a user-selected command from one of the first command set and second command set, wherein the user-selected command is executable by the mobile device to control an accessible function of the respective first or second real-world object.

Dependent claims: 4, 12, 17, 18.
5. A system for using image sensor data and audio sensor data for interactions, the system comprising:
a server communicatively coupled with a mobile device and a database storing a plurality of command sets corresponding to a plurality of real-world objects;

wherein the mobile device comprises an image sensor configured to capture image sensor data and an audio sensor configured to capture audio sensor data;

wherein the mobile device is configured to: identify, in cooperation with the server, a first set of attributes associated with a first real-world object and a second set of attributes associated with a second real-world object by using the image sensor data captured by the image sensor of the mobile device and at least one of GPS sensor data and location information; and provide the first set of attributes and the second set of attributes to the server;

wherein the server is configured to determine a first command set based on the first set of attributes and the at least one of GPS sensor data and location information and a second command set based on the second set of attributes and the at least one of GPS sensor data and location information from a database storing a plurality of command sets corresponding to a plurality of real-world objects, wherein the first and second command sets comprise commands that control functions of the first and the second real-world objects, respectively; and

wherein the mobile device is further configured to: receive the first command set and second command set from the server; make the first command set and the second command set accessible to the user; and receive a user-selected command from one of the first command set and second command set, wherein the user-selected command is executable by the mobile device to control an accessible function of the respective first or second real-world object.

Dependent claims: 8, 13, 19.
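Claim 5 differs from claim 3 in that the server's lookup is keyed on the object's attributes together with GPS or location information, so the same object can yield different command sets in different places. A hypothetical sketch of that lookup; the attribute values, locations, and commands are illustrative only.

```python
# Hypothetical sketch of the location-qualified lookup recited in
# claim 5. None of these names or values come from the patent.

COMMAND_DB = {
    (("sign", "bus_stop"), "downtown"): ["show_schedule", "buy_ticket"],
    (("sign", "bus_stop"), "airport"): ["show_schedule"],
}

def determine_command_set(attribute_set, location):
    """Server: the same attribute set can map to different command
    sets depending on where the device captured the object."""
    return COMMAND_DB.get((attribute_set, location), [])

downtown_cmds = determine_command_set(("sign", "bus_stop"), "downtown")
airport_cmds = determine_command_set(("sign", "bus_stop"), "airport")
```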
6. A system for using image sensor data and audio sensor data for interactions, the system comprising:
a server communicatively coupled with a mobile device and a database storing a plurality of command sets corresponding to a plurality of objects;

wherein the mobile device comprises an image sensor configured to capture image sensor data and an audio sensor configured to capture audio sensor data;

wherein the mobile device is configured to: identify, in cooperation with the server, a first set of attributes associated with a first real-world object and a second set of attributes associated with a second real-world object by using the image sensor data captured by the image sensor of the mobile device and GPS sensor data; and provide the first set of attributes and the second set of attributes to the server;

wherein the server is configured to determine a first command set based on the first set of attributes and a second command set based on the second set of attributes from a database storing a plurality of command sets corresponding to a plurality of real-world objects, wherein the first and second command sets comprise commands that control functions of the first and the second real-world objects, respectively; and

wherein the mobile device is further configured to: receive the first command set and the second command set from the server; make the first command set and the second command set accessible to the user; and receive a user-selected command from one of the first command set and second command set, wherein the user-selected command is related to a time and executable by the mobile device to control an accessible function of the respective first or second real-world object.

Dependent claims: 9, 14, 20.
7. A system for using image sensor data and audio sensor data for interactions, the system comprising:
a server communicatively coupled with a mobile device and a database storing a plurality of command sets corresponding to a plurality of objects;

wherein the mobile device comprises an image sensor configured to capture image sensor data and an audio sensor configured to capture audio sensor data;

wherein the mobile device is configured to: identify, in cooperation with the server, a first set of attributes associated with a first real-world object and a second set of attributes associated with a second real-world object by using the image sensor data captured by the image sensor of the mobile device; and provide the first set of attributes and the second set of attributes to the server;

wherein the server is configured to determine a first command set based on the first set of attributes and a second command set based on the second set of attributes from a database storing a plurality of command sets corresponding to a plurality of real-world objects, wherein the first and second command sets comprise commands that control functions of the first and the second real-world objects, respectively; and

wherein the mobile device is further configured to: receive the first command set and the second command set from the server; make the first command set and the second command set accessible to the user; and receive a user-selected command from one of the first command set and second command set, wherein the user-selected command is related to a time and executable by the mobile device to control an accessible function of the respective first or second real-world object.

Dependent claims: 10, 15, 21.
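Claims 6 and 7 add that the user-selected command is "related to a time". One plausible reading is a command carrying a time at which the device may execute it against the object; a minimal, purely hypothetical sketch of that idea (the class and its fields are not from the patent):

```python
# Hypothetical sketch of a "command related to a time" (claims 6-7):
# the user-selected command carries a time at which the device may
# execute it against the real-world object. Illustrative only.
from datetime import datetime, timedelta

class TimedCommand:
    def __init__(self, name, run_at):
        self.name = name      # e.g. "turn_on"
        self.run_at = run_at  # when the command becomes executable

    def is_due(self, now):
        """True once the associated time has been reached."""
        return now >= self.run_at

now = datetime(2024, 1, 1, 8, 0)
cmd = TimedCommand("turn_on", run_at=now + timedelta(hours=1))
early = cmd.is_due(now)                       # not yet due
later = cmd.is_due(now + timedelta(hours=2))  # time reached
```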
Specification