Capturing objects in editable format using gestures
Abstract
Embodiments of methods, systems, and storage media associated with capturing, with a user device, at least a portion of an object based on a user gesture indicating a command are disclosed herein. In one instance, the method may include identifying a gesture associated with an object of interest external to the user device; capturing an image of at least a portion of the object of interest based on a result of identifying the gesture; and providing the portion of the object in an editable format based on the image. Other embodiments may be described and/or claimed.
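The flow described in the abstract can be sketched as a three-step pipeline: identify a gesture over the displayed image, capture the portion it selects, and provide that portion in an editable format. The sketch below is a minimal illustration, not the patent's implementation; the names (`Gesture`, `identify_gesture`, `capture_portion`, `provide_editable`), the toy row-of-tuples image, and the bounding-box gesture classification are all assumptions.

```python
from dataclasses import dataclass

@dataclass
class Gesture:
    kind: str       # e.g. "circle" traced around a region of the display
    region: tuple   # (x, y, width, height) outlined by the gesture

def identify_gesture(touch_events):
    """Classify raw (x, y) touch events into a gesture over the displayed image.

    A real implementation would run a gesture recognizer; here we simply
    assume the events trace a circling motion and take their bounding box.
    """
    xs = [e[0] for e in touch_events]
    ys = [e[1] for e in touch_events]
    region = (min(xs), min(ys), max(xs) - min(xs), max(ys) - min(ys))
    return Gesture(kind="circle", region=region)

def capture_portion(image, region):
    """Capture the portion of the displayed image selected by the gesture."""
    x, y, w, h = region
    return [row[x:x + w] for row in image[y:y + h]]

def provide_editable(portion):
    """Wrap the captured portion in an editable representation."""
    return {"pixels": portion, "editable": True}

# Usage: a 4x4 "image" and a gesture circling its top-left 2x2 corner.
image = [[(r, c) for c in range(4)] for r in range(4)]
gesture = identify_gesture([(0, 0), (2, 0), (2, 2), (0, 2)])
editable = provide_editable(capture_portion(image, gesture.region))
```

The editable result can then feed the "one or more further operations" the claims recite, such as translating captured text or sharing the captured region.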
28 Claims
1. At least one non-transitory computing device-readable storage medium having instructions stored thereon that, in response to execution by a computing device, cause the computing device to:

identify a gesture associated with a displayed first image of an object of interest external to the computing device, the gesture indicating a combination of commands comprising a first command to select and capture at least a portion of the object of interest from the displayed first image, the selected portion including one or more moving non-textual elements, a second command to perform an operation on the selected portion of the object of interest, and a third command to expose a set of inferred dynamic properties associated with the one or more moving non-textual elements;

based on a result of identifying the gesture, select the at least a portion of the object from the displayed first image, calculate the set of inferred dynamic properties, and capture a second image of the at least a portion of the object of interest and the exposed inferred dynamic properties from the displayed first image of the object, in response to the first and third commands provided by the identified gesture;

wherein the computing device comprises a mobile device, and wherein to capture the second image includes to determine and provide a suggestion of a particular positioning of the computing device relative to the displayed first image of the object to capture the second image of the at least a portion of the object of interest from the displayed first image of the object;

perform the indicated operation on the captured second image of the portion of the object, according to the second command indicated by the identified gesture; and

process the second image, after performance of the operations indicated by the first and second commands, to generate a third image, the third image to be provided in an editable format for use in one or more further operations in accordance with one or more further commands.

- View Dependent Claims (2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14)
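Claim 1's gesture indicates a *combination* of commands: a first command to select and capture, a second command naming an operation to perform on the selection, and a third command to expose inferred dynamic properties of moving non-textual elements. A minimal sketch of such a decoding step is below; the gesture kinds, command names, and table are illustrative assumptions, not taken from the claims or specification.

```python
# Each recognized gesture kind decodes to the claimed combination of
# commands: select/capture (first), an operation on the selection
# (second), and whether to expose inferred dynamic properties (third).
GESTURE_COMMANDS = {
    "circle_then_flick": {
        "select_capture": True,          # first command
        "operation": "translate_text",   # second command (assumed name)
        "expose_dynamics": True,         # third command
    },
    "double_tap": {
        "select_capture": True,
        "operation": "share",
        "expose_dynamics": False,
    },
}

def decode_gesture(kind):
    """Decode one identified gesture into its combination of commands."""
    try:
        return GESTURE_COMMANDS[kind]
    except KeyError:
        raise ValueError(f"unrecognized gesture: {kind}")

commands = decode_gesture("circle_then_flick")
```

Folding all three commands into a single gesture is what distinguishes the claim from a plain screenshot: one user action drives selection, processing, and dynamics inference together.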
15. A computing device comprising:

a processor; and

an object capture application operated by the processor to:

identify a gesture associated with a displayed first image of an object of interest external to the computing device, the gesture indicating a combination of commands comprising a first command to select and capture at least a portion of the object of interest from the displayed first image, the selected portion including one or more moving non-textual elements, a second command to perform an operation on the selected portion of the object of interest, and a third command to expose a set of inferred dynamic properties associated with the one or more moving non-textual elements;

based on a result of identifying the gesture, select the at least a portion of the object from the displayed first image, calculate the set of inferred dynamic properties, and capture a second image of the at least a portion of the object of interest and the exposed inferred dynamic properties from the displayed first image of the object, in response to the first and third commands provided by the identified gesture;

wherein the computing device comprises a mobile device, and wherein to capture the second image includes to determine and provide a suggestion of a particular positioning of the computing device relative to the displayed first image of the object to capture the second image of the at least a portion of the object of interest from the displayed first image of the object;

perform the indicated operation on the captured second image of the portion of the object, in response to the second command; and

process the second image, after performance of the operation indicated by the second command, to generate a third image, or cause another computing device to process the second image, after performance of the operations indicated by the first and second commands, to generate the third image, the third image to be provided in an editable format for use in one or more further operations in accordance with one or more further commands.

- View Dependent Claims (16, 17, 18, 19, 20)
21. A computer-implemented method comprising:

identifying, by a computing device, a gesture associated with a displayed first image of an object of interest external to the computing device, the gesture indicating a combination of commands comprising a first command to select and capture at least a portion of the object of interest from the displayed first image, the selected portion including one or more moving non-textual elements, a second command to perform an operation on the selected portion of the object of interest, and a third command to expose a set of inferred dynamic properties associated with the one or more moving non-textual elements;

selecting, by the computing device, the at least a portion of the object of interest from the displayed first image of the object in response to the first command, according to the identified gesture;

calculating the set of inferred dynamic properties;

capturing, by the computing device, a second image of the selected portion of the object of interest and the exposed inferred dynamic properties from the displayed first image, in response to the first and third commands;

wherein the computing device comprises a mobile device, and wherein capturing the second image includes determining and providing a suggestion of a particular positioning of the computing device relative to the displayed first image of the object to capture the second image of the at least a portion of the object of interest from the displayed first image of the object;

performing, by the computing device, the indicated operation on the captured second image of the portion of the object according to the second command indicated by the gesture; and

processing the second image, after performance of the operations indicated by the first and second commands, to generate a third image, the third image to be provided in an editable format for use in one or more further operations in accordance with one or more further commands.

- View Dependent Claims (22, 23, 24, 25, 26, 27, 28)
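The step of "calculating the set of inferred dynamic properties" for moving non-textual elements is not spelled out in the claims. One plausible reading is motion estimation across frames, sketched below under stated assumptions: element centroids tracked between two observations a known interval apart, with velocity inferred by finite differences. The function name and data layout are hypothetical, not from the specification.

```python
def infer_dynamic_properties(positions_t0, positions_t1, dt):
    """Infer a velocity (pixels/second) for each element tracked across frames.

    positions_t0 and positions_t1 map element ids to (x, y) centroids
    observed dt seconds apart; elements absent from the later frame are
    skipped, since no motion can be inferred for them.
    """
    properties = {}
    for elem_id, (x0, y0) in positions_t0.items():
        if elem_id not in positions_t1:
            continue  # element left the scene between observations
        x1, y1 = positions_t1[elem_id]
        properties[elem_id] = {
            "velocity": ((x1 - x0) / dt, (y1 - y0) / dt),
        }
    return properties

# A ball moves 30 px right and 10 px down over 0.5 s.
props = infer_dynamic_properties({"ball": (0, 0)}, {"ball": (30, 10)}, 0.5)
# props["ball"]["velocity"] == (60.0, 20.0)
```

Exposing such inferred properties alongside the captured second image is what the third command in each independent claim requires before the editable third image is generated.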
Specification