USER AUGMENTED REALITY FOR CAMERA-ENABLED MOBILE DEVICES
5 Assignments
0 Petitions
Abstract
Disclosed are apparatus and methods for providing a user augmented reality (UAR) service for a camera-enabled mobile device, so that a user of such a mobile device can obtain metadata regarding one or more images/video captured with the device. As the user points the mobile device's camera at one or more objects in one or more scenes, the UAR automatically analyzes such objects to identify them and then provides metadata regarding the identified objects in the display of the mobile device. The metadata is interactive and allows the user to obtain additional information or specific types of information, such as information that will aid the user in making a decision regarding the identified objects, or selectable action options that can be used to initiate actions with respect to the identified objects. The user can utilize the UAR to continuously pass the camera over additional objects and scenes so that the metadata presented in the display of the mobile device is continuously updated.
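The continuous-update behavior described in the abstract (re-identify objects as the camera pans, refresh the on-screen metadata each frame) can be sketched as follows. This is an illustrative sketch only: the `Overlay`, `identify_objects`, and `update_overlay` names are hypothetical, and the disclosure leaves the actual identification technique open.

```python
from dataclasses import dataclass, field

@dataclass
class Overlay:
    """Metadata presented over the live camera view (illustrative)."""
    objects: list
    metadata: dict = field(default_factory=dict)

def identify_objects(frame):
    # Placeholder recognizer; a real implementation might combine image
    # matching, location, and other context signals.
    return frame.get("objects", [])

def update_overlay(frame, fetch_metadata):
    """Re-identify objects in the current frame and refresh their metadata."""
    objects = identify_objects(frame)
    return Overlay(objects=objects,
                   metadata={o: fetch_metadata(o) for o in objects})

# As the user pans the camera, each new frame replaces the prior overlay.
frames = [{"objects": ["statue"]}, {"objects": ["statue", "fountain"]}]
overlays = [update_overlay(f, lambda o: {"name": o}) for f in frames]
```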
364 Citations
25 Claims
1. A method of providing information regarding one or more scenes captured with a camera of a mobile device, comprising:
when a camera of the mobile device is pointed at a scene having one or more object(s), (i) displaying an image/video of the scene in a display of the mobile device, and (ii) overlaying over the image/video a plurality of options for selecting one of a plurality of user augmented reality modes that include an encyclopedia mode, a decision support mode, and an action mode;
when the encyclopedia mode is selected, obtaining contextual information regarding an identity of the one or more objects and presenting the obtained contextual information in the display;
when the decision support mode is selected, obtaining decision information related to a set of actions that can be taken with respect to an identity of the one or more object(s) and presenting the decision information in the display; and
when the action mode is selected, obtaining a set of references to a plurality of actions that can be performed with respect to an identity of the one or more object(s) and presenting the set of references in the display so that they are selectable by a user to initiate the referenced actions.
Dependent claims: 2, 3, 4, 5, 6, 7, 8
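The three-way mode selection recited in claim 1 amounts to a dispatch on the user's chosen mode for an identified object. A minimal sketch, assuming a hypothetical `backend` interface that supplies contextual, decision, and action data:

```python
def handle_mode(mode, object_id, backend):
    """Return display data for the selected UAR mode.

    `backend` is a hypothetical stand-in for whatever service supplies
    contextual, decision, and action data for an identified object.
    """
    if mode == "encyclopedia":
        return {"contextual_info": backend.lookup(object_id)}
    if mode == "decision":
        return {"decision_info": backend.decision_info(object_id)}
    if mode == "action":
        # Each reference is selectable by the user to initiate the action.
        return {"actions": backend.action_references(object_id)}
    raise ValueError(f"unknown mode: {mode!r}")

class StubBackend:
    def lookup(self, o): return f"encyclopedia entry for {o}"
    def decision_info(self, o): return f"reviews and prices for {o}"
    def action_references(self, o): return [f"buy:{o}", f"share:{o}"]

result = handle_mode("action", "camera-123", StubBackend())
```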
9. A method as recited in claim 8, further comprising presenting the contextual and/or decision information, or obtaining and presenting additional contextual and/or decision information, after the reference is selected by the user.
10. A mobile device for providing information regarding one or more scenes, comprising:
a camera for capturing one or more scenes;
a display for displaying an image or video;
at least one processor; and
at least one memory, the at least one processor and/or memory being configured for:
when the camera is pointed at a scene having one or more object(s), (i) displaying an image/video of the scene in the display, and (ii) overlaying over the image/video a plurality of options for selecting one of a plurality of user augmented reality modes that include an encyclopedia mode, a decision support mode, and an action mode;
when the encyclopedia mode is selected, obtaining contextual information regarding an identity of the one or more objects and presenting the obtained contextual information in the display;
when the decision support mode is selected, obtaining decision information related to a set of actions that can be taken with respect to an identity of the one or more object(s) and presenting the decision information in the display; and
when the action mode is selected, obtaining a set of references to a plurality of actions that can be performed with respect to an identity of the one or more object(s) and presenting the set of references in the display so that they are selectable by a user to initiate the referenced actions.
Dependent claims: 11, 12, 13, 14, 15
16. At least one computer readable storage medium having computer program instructions stored thereon that are arranged to perform the following operations:
when a camera of the mobile device is pointed at a scene having one or more object(s), (i) displaying an image/video of the scene in a display of the mobile device, and (ii) overlaying over the image/video a plurality of options for selecting one of a plurality of user augmented reality modes that include an encyclopedia mode, a decision support mode, and an action mode;
when the encyclopedia mode is selected, obtaining contextual information regarding an identity of the one or more objects and presenting the obtained contextual information in the display;
when the decision support mode is selected, obtaining decision information related to a set of actions that can be taken with respect to an identity of the one or more object(s) and presenting the decision information in the display; and
when the action mode is selected, obtaining a set of references to a plurality of actions that can be performed with respect to an identity of the one or more object(s) and presenting the set of references in the display so that they are selectable by a user to initiate the referenced actions.
Dependent claims: 17, 18, 19, 20
21. A method of providing information to a mobile device, comprising:
when one or more imaged scenes/video are received from a camera of a mobile device, obtaining an identification of the one or more objects of the one or more scenes;
when an encyclopedia mode is selected for the one or more imaged scenes/video, obtaining contextual information regarding the identified one or more objects and sending the obtained contextual information to the mobile device;
when a decision support mode is selected for the one or more imaged scenes/video, obtaining decision information related to a set of actions that can be taken with respect to the identified one or more object(s) and sending the decision information to the mobile device; and
when an action mode is selected for the one or more scenes, obtaining a set of references to a plurality of actions that can be performed with respect to the identified one or more object(s) and sending the set of references to the mobile device, wherein the references are selectable by a user to initiate the referenced actions.
Dependent claims: 22, 23
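Claim 21 moves identification and lookup to the server side: the mobile device sends imagery, and the server returns mode-appropriate data. A minimal sketch, assuming hypothetical `identify` and `sources` interfaces (none of these names appear in the patent):

```python
def serve_uar_request(request, identify, sources):
    """Server-side sketch: identify objects in the received image,
    then return mode-appropriate data to the mobile device."""
    objects = identify(request["image"])
    handlers = {
        "encyclopedia": sources["context"],   # contextual information
        "decision": sources["decision"],      # decision-support data
        "action": sources["actions"],         # selectable action references
    }
    handler = handlers[request["mode"]]
    return {"objects": objects,
            "payload": {o: handler(o) for o in objects}}

response = serve_uar_request(
    {"image": b"...", "mode": "encyclopedia"},
    identify=lambda img: ["landmark"],
    sources={"context": lambda o: f"history of {o}",
             "decision": lambda o: f"options for {o}",
             "actions": lambda o: [f"navigate-to:{o}"]},
)
```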
24. A system for providing text translation for a mobile device, comprising:
at least one processor; and
at least one memory, the at least one processor and/or memory being configured for:
when one or more imaged scenes/video are received from a camera of a mobile device, obtaining an identification of the one or more objects of the one or more scenes;
when an encyclopedia mode is selected for the one or more imaged scenes/video, obtaining contextual information regarding the identified one or more objects and sending the obtained contextual information to the mobile device;
when a decision support mode is selected for the one or more imaged scenes/video, obtaining decision information related to a set of actions that can be taken with respect to the identified one or more object(s) and sending the decision information to the mobile device; and
when an action mode is selected for the one or more scenes, obtaining a set of references to a plurality of actions that can be performed with respect to the identified one or more object(s) and sending the set of references to the mobile device, wherein the references are selectable by a user to initiate the referenced actions.
Dependent claims: 25
Specification