Methods of and systems for content search based on environment sampling
First Claim
1. A computer-implemented user interface method of displaying at least one available action overlaid on an image, the method comprising:
- generating, for display, a live image and a visual guide overlaid on the live image;
- identifying an object of interest in the live image based on a proximity of the object of interest to the visual guide;
- identifying, by a processor, without receiving user input, a first plurality of actions of different types from a second plurality of actions for subsequent selection by a user, the first plurality of actions being identified automatically based at least in part on the object of interest and at least one of (1) current device location, (2) location at which the live image was taken, (3) date of capturing the live image, (4) time of capturing the live image, and (5) a user preference signature representing prior actions selected by a user and content preferences learned about the user associated with particular times or locations at which the prior actions were selected by the user;
- assigning a ranking weight to the first plurality of actions based on a non-textual portion of the identified object of interest;
- ranking the first plurality of actions based on its assigned ranking weight; and
- presenting the first plurality of actions to a user as selectable options.
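The claim recites the identify-weight-rank-present flow without an algorithmic definition. As an illustrative sketch only (the `Action` type, the dictionary-based preference signature, and the additive weighting formula are assumptions, not part of the claim), the steps might look like:

```python
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    kind: str            # action type, e.g. "call", "navigate", "search"
    weight: float = 0.0  # ranking weight assigned below

def rank_actions(candidates, object_of_interest, context, preference_signature):
    """Identify a first plurality of actions from the candidate (second)
    plurality, assign each a ranking weight, and return them ranked."""
    prefs = preference_signature.get(object_of_interest, {})
    # Identify: keep only action types relevant to this object or context.
    relevant = [a for a in candidates
                if a.kind in prefs or a.kind in context.get("default_kinds", [])]
    for a in relevant:
        # Weight: learned preference score, plus a bonus when a
        # non-textual portion of the object (e.g. a logo) matched.
        a.weight = prefs.get(a.kind, 0.0) + (1.0 if context.get("non_textual_match") else 0.0)
    # Rank: highest weight first, ready for presentation as selectable options.
    return sorted(relevant, key=lambda a: a.weight, reverse=True)
```

For a recognized storefront, for example, a learned "call" preference plus a visual (non-textual) match would rank a call action above a generic search action.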
Abstract
The present disclosure provides user interface methods of and systems for displaying at least one available action overlaid on an image, comprising: displaying an image; selecting at least one action and assigning a ranking weight thereto based on at least one of (1) image content, (2) current device location, (3) location at which the image was taken, (4) date of capturing the image, (5) time of capturing the image, and (6) a user preference signature representing prior actions chosen by a user and content preferences learned about the user; and ranking the at least one action based on its assigned ranking weight.
26 Claims
1. A computer-implemented user interface method of displaying at least one available action overlaid on an image, the method comprising:

- generating, for display, a live image and a visual guide overlaid on the live image;
- identifying an object of interest in the live image based on a proximity of the object of interest to the visual guide;
- identifying, by a processor, without receiving user input, a first plurality of actions of different types from a second plurality of actions for subsequent selection by a user, the first plurality of actions being identified automatically based at least in part on the object of interest and at least one of (1) current device location, (2) location at which the live image was taken, (3) date of capturing the live image, (4) time of capturing the live image, and (5) a user preference signature representing prior actions selected by a user and content preferences learned about the user associated with particular times or locations at which the prior actions were selected by the user;
- assigning a ranking weight to the first plurality of actions based on a non-textual portion of the identified object of interest;
- ranking the first plurality of actions based on its assigned ranking weight; and
- presenting the first plurality of actions to a user as selectable options.

Dependent claims: 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14
15. A system for displaying at least one available action overlaid on an image, the system comprising:

- a memory device that stores instructions; and
- processor circuitry that executes the instructions and is configured to:
  - generate, for display, a live image and a visual guide overlaid on the live image;
  - identify an object of interest in the live image based on the proximity of the object of interest to the visual guide;
  - identify, without receiving user input, a first plurality of actions of different types from a second plurality of actions for subsequent selection by the user, the first plurality of actions being identified automatically based at least in part on the object of interest and at least one of (1) current device location, (2) location at which the live image was taken, (3) date of capturing the live image, (4) time of capturing the live image, and (5) a user preference signature representing prior actions selected by a user and content preferences learned about the user associated with particular times or locations at which prior actions were selected by the user;
  - assign a ranking weight to the first plurality of actions based on a non-textual portion of the identified object of interest;
  - rank the first plurality of actions based on its assigned ranking weight; and
  - present the first plurality of actions to a user as selectable options.

Dependent claims: 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26
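The proximity criterion recited in both independent claims (selecting the object nearest the visual guide) can be sketched as follows; the detection format (label plus center coordinates) and the screen coordinates are illustrative assumptions, not disclosed in the claims:

```python
import math

def pick_object_of_interest(detections, guide_xy):
    """Return the label of the detected object whose center lies
    closest on screen to the visual guide (the proximity criterion)."""
    def distance_to_guide(obj):
        _, (cx, cy) = obj
        return math.hypot(cx - guide_xy[0], cy - guide_xy[1])
    label, _ = min(detections, key=distance_to_guide)
    return label
```

With a guide rendered at the center of the viewfinder, the object whose detected center is nearest that point becomes the object of interest for the subsequent action-identification steps.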
Specification