MULTI DISPARATE GESTURE ACTIONS AND TRANSACTIONS APPARATUSES, METHODS AND SYSTEMS
First Claim
11. A processor-implemented method comprising:
providing check-in information to a merchant store, the check-in information i) being associated with a user, and ii) being stored on the user's mobile device, wherein the user has an account with the merchant store;
accessing, based on the provided check-in information, an identifier for the user, wherein the identifier is associated with the account;
detecting, via a sensor, a first gesture that is performed by the user, the first gesture being directed to an item that is included in the merchant store, wherein the first gesture is detected after the providing of the check-in information to the merchant store;
providing the detected first gesture to the merchant store;
determining an action associated with the detected first gesture;
performing the action associated with the detected first gesture, wherein the performing of the action associated with the detected first gesture modifies the account with information related to the item;
detecting, via the sensor, a second gesture that is performed by the user, wherein the second gesture is detected after the performing of the action associated with the detected first gesture;
providing the detected second gesture to the merchant store;
determining an action associated with the detected second gesture, wherein the action associated with the detected second gesture initiates a payment transaction between the user and the merchant store; and
performing the action associated with the detected second gesture.
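The claimed check-in-then-gesture flow can be illustrated with a small sketch. All names here (`MerchantStore`, `point_at_item`, `swipe_to_pay`, the check-in payload) are hypothetical stand-ins, not terms from the patent: a check-in maps the mobile device's payload to an account identifier, a first gesture modifies the account with item information, and a second gesture initiates payment.

```python
class MerchantStore:
    """Toy merchant back end keyed by check-in identifiers (illustrative only)."""

    def __init__(self):
        self.accounts = {}  # identifier -> account state

    def check_in(self, checkin_info):
        # Map the check-in information stored on the mobile device
        # to an identifier associated with the user's account.
        identifier = checkin_info["user_id"]
        self.accounts.setdefault(identifier, {"cart": [], "paid": False})
        return identifier

    def handle_gesture(self, identifier, gesture, item=None):
        account = self.accounts[identifier]
        if gesture == "point_at_item":
            # First gesture: the action modifies the account
            # with information related to the item.
            account["cart"].append(item)
            return "item_added"
        if gesture == "swipe_to_pay":
            # Second gesture: the action initiates a payment transaction.
            account["paid"] = True
            return "payment_initiated"
        raise ValueError(f"unknown gesture: {gesture}")

store = MerchantStore()
uid = store.check_in({"user_id": "alice", "device": "phone-1"})
first = store.handle_gesture(uid, "point_at_item", item="sku-42")
second = store.handle_gesture(uid, "swipe_to_pay")
```

The ordering constraints in the claim (check-in before the first gesture, the first action before the second gesture) fall out of the call sequence in this sketch.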
1 Assignment
0 Petitions
Abstract
The Multi Disparate Gesture Actions And Transactions Apparatuses, Methods And Systems ("MDGAAT") transform gesture, video, and audio inputs via MDGAAT components into action, augmented reality, and transaction outputs. In one embodiment, a method comprises: receiving from a wallet user multiple gesture actions within a specified temporal quantum; determining composite constituent gestures, gesture manipulated objects, and user account information from the received multiple gesture actions; determining via a processor a composite gesture action associated with the determined composite constituent gestures and gesture manipulated objects; and executing via a processor the composite gesture action to perform a transaction with a user account specified by the user account information.
97 Claims
11. A processor-implemented method comprising:
providing check-in information to a merchant store, the check-in information i) being associated with a user, and ii) being stored on the user's mobile device, wherein the user has an account with the merchant store;
accessing, based on the provided check-in information, an identifier for the user, wherein the identifier is associated with the account;
detecting, via a sensor, a first gesture that is performed by the user, the first gesture being directed to an item that is included in the merchant store, wherein the first gesture is detected after the providing of the check-in information to the merchant store;
providing the detected first gesture to the merchant store;
determining an action associated with the detected first gesture;
performing the action associated with the detected first gesture, wherein the performing of the action associated with the detected first gesture modifies the account with information related to the item;
detecting, via the sensor, a second gesture that is performed by the user, wherein the second gesture is detected after the performing of the action associated with the detected first gesture;
providing the detected second gesture to the merchant store;
determining an action associated with the detected second gesture, wherein the action associated with the detected second gesture initiates a payment transaction between the user and the merchant store; and
performing the action associated with the detected second gesture.
Dependent claims: 12, 13, 14, 15, 16, 17, 18, 19, 20
21. A processor-implemented method comprising:
obtaining a visual capture of a reality scene via a visual device, the visual capture of the reality scene including an object that identifies a subset of data included in a user account;
performing image analysis on the visual capture via an image analysis tool of the visual device, wherein the object is identified based on the image analysis, and wherein the visual device accesses the subset of data based on the identified object;
generating, based on the subset of data, an augmented reality display that is viewed by a user, the user i) being associated with the subset of data, and ii) using the visual device to obtain the visual capture;
detecting a gesture performed by a user, wherein the gesture is directed to a user interactive area included in the augmented reality display;
providing the detected gesture to the visual device, the visual device being configured to determine an action associated with the detected gesture, wherein the determined action is based on one or more aspects of the augmented reality display; and
performing the action associated with the detected gesture, wherein the performing of the action modifies the subset of data based on information relating to the user interactive area.
Dependent claims: 22, 23, 24, 25, 26, 27, 28, 29, 30
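Claim 21's loop can be sketched as follows. Every name is an invented stand-in (the fake recognizer, the `loyalty_card` object, the `redeem_points` area): an object in the capture keys a subset of account data, an augmented reality display exposes an interactive area, and a gesture on that area modifies the subset.

```python
# Toy account; the recognized object names the subset of data it unlocks.
USER_ACCOUNT = {"loyalty_card": {"points": 100}}

def image_analysis(capture):
    # Stand-in for a real image analysis tool: the capture labels its object.
    return capture["object"]

def build_ar_display(subset):
    # The augmented reality display exposes a user interactive area.
    return {"overlay": subset, "interactive_areas": ["redeem_points"]}

def apply_gesture(display, gesture, subset):
    if gesture["target"] in display["interactive_areas"]:
        subset["points"] -= 10  # the action modifies the subset of data
        return True
    return False

capture = {"object": "loyalty_card"}
obj = image_analysis(capture)
subset = USER_ACCOUNT[obj]           # device accesses data via the object
display = build_ar_display(subset)
handled = apply_gesture(display, {"target": "redeem_points"}, subset)
```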
31. A processor-implemented method comprising:
obtaining a visual capture of a reality scene via a visual device, the visual capture including an image of a customer, wherein the visual device is operated by personnel of a merchant store;
performing image analysis on the visual capture via an image analysis tool of the visual device;
identifying, based on the image analysis, an identifier for the customer that is depicted in the image, the identifier being associated with a user account of the customer; and
generating, via the visual device, an augmented reality display that includes i) the image of the customer, and ii) additional image data that surrounds the image of the customer, the augmented reality display being viewed by the personnel of the merchant store, wherein the additional image data is based on the user account of the customer and is indicative of prior behavior by the customer.
Dependent claims: 32, 33, 34, 35, 36, 37, 38, 39, 40
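A minimal sketch of claim 31's merchant-side display, with invented identifiers throughout (the face signature, the sample account): the device matches the customer in the capture to an account, then frames the customer's image with data derived from prior behavior.

```python
# Hypothetical customer database keyed by a recognition identifier.
CUSTOMER_ACCOUNTS = {
    "face-123": {"name": "Bob", "prior_purchases": ["espresso", "scone"]},
}

def identify_customer(capture):
    # Stand-in for feature matching by the image analysis tool.
    return capture["face_signature"]

def augmented_display(capture):
    ident = identify_customer(capture)
    account = CUSTOMER_ACCOUNTS[ident]
    return {
        # i) the image of the customer, ii) surrounding data indicative
        # of prior behavior, drawn from the customer's account.
        "center": capture["image"],
        "surround": "previously bought: " + ", ".join(account["prior_purchases"]),
    }

display = augmented_display({"image": "frame-0", "face_signature": "face-123"})
```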
41. A processor-implemented method comprising:
obtaining one or more visual captures of a reality scene via a visual device, the one or more visual captures including i) a first image of a bill to be paid, and ii) a second image of a person or object that is indicative of a financial account;
performing image analysis on the one or more visual captures via an image analysis tool of the visual device, wherein the person or object that is indicative of the financial account is identified based on the image analysis, and wherein an itemized expense included on the bill to be paid is identified based on the image analysis;
generating, via the visual device, an augmented reality display that includes a user interactive area, the user interactive area being associated with the itemized expense;
detecting, via a sensor, a gesture performed by a user of the visual device, the gesture being directed to the user interactive area;
providing the detected gesture to the visual device, wherein the visual device is configured to determine an action associated with the detected gesture; and
performing the action associated with the detected gesture, the performing of the action being configured to associate the itemized expense with the financial account.
Dependent claims: 42, 43, 44, 45, 46, 47, 48, 49, 50
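The bill-paying flow of claim 41 can be rendered as a toy (item names, the `visa-001` account, and the self-describing captures are all invented): one capture carries the itemized bill, another identifies a financial account, and a tap gesture on an itemized expense associates that expense with the account.

```python
def analyze_captures(captures):
    # Stand-in for image analysis: pick out the bill's line items and
    # the account indicated by the person/object in the second image.
    bill = next(c for c in captures if c["kind"] == "bill")
    source = next(c for c in captures if c["kind"] == "card")
    return bill["items"], source["account"]

items, account_id = analyze_captures([
    {"kind": "bill", "items": [("salad", 9.50), ("pasta", 14.00)]},
    {"kind": "card", "account": "visa-001"},
])

ledger = {account_id: []}
# One user interactive area per itemized expense on the AR display.
interactive_areas = {name: (name, price) for name, price in items}

def tap(area_name):
    # The gesture's action associates the itemized expense
    # with the identified financial account.
    ledger[account_id].append(interactive_areas[area_name])

tap("pasta")
```

Splitting a shared bill would then just mean detecting taps against different account-indicating captures, one per payer.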
51. A processor-implemented method comprising:
obtaining a visual capture of a reality scene via a visual device, the visual capture including i) an image of a store display of a merchant store, and ii) an object that is associated with a first item and a second item, wherein the merchant store sells the first item and the second item, and wherein the store display includes the first item and the second item;
performing image analysis on the visual capture via an image analysis tool of the visual device, wherein the object is identified in the visual capture based on the image analysis;
storing an image of a user at the visual device, wherein the visual device is operated by the user or worn by the user;
generating, at the visual device, an interactive display that includes the image of the user and one or more user interactive areas, the one or more user interactive areas being associated with an image of the first item or an image of the second item;
detecting, via a sensor, a gesture performed by the user, wherein the detected gesture is directed to the one or more user interactive areas, and wherein the detected gesture is provided to the visual device; and
determining an action associated with the gesture and performing the action at the visual device, wherein the performing of the action updates the interactive display based on the image of the first item or the image of the second item, and wherein the updating of the interactive display causes the image of the user to be modified based on the image of the first item or the image of the second item.
Dependent claims: 52, 53, 54, 55, 56, 57, 58, 59, 60
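Claim 51 reads as a "virtual try-on": an object in the store-display capture maps to two items, and a gesture on either item's interactive area re-renders the stored user image with that item. A sketch under invented names (the rack tag, the jacket items, the string-composed image):

```python
# Hypothetical mapping from a recognized store-display object to the
# first and second items it is associated with.
OBJECT_TO_ITEMS = {"rack-tag-7": ["red_jacket", "blue_jacket"]}

def identify_object(capture):
    # Stand-in for the image analysis tool.
    return capture["tag"]

items = OBJECT_TO_ITEMS[identify_object({"tag": "rack-tag-7"})]

# Interactive display: the stored user image plus one area per item image.
display = {"user_image": "user.png", "areas": items, "worn": None}

def gesture_on(display, area):
    if area in display["areas"]:
        # The action updates the display and modifies the user's image
        # based on the selected item's image.
        display["worn"] = area
        display["user_image"] = "user.png+" + area

gesture_on(display, "red_jacket")
```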
61. A processor-implemented method comprising:
obtaining a visual capture of a reality scene via a visual device, wherein the visual capture includes an image of an item sold by a merchant store;
performing image analysis on the visual capture via an image analysis tool of the visual device, wherein the item sold by the merchant store is identified based on the image analysis; and
generating an augmented reality display at the visual device, wherein the augmented reality display includes i) the image of the item sold by the merchant store, and ii) additional image data that surrounds the image of the item, wherein the additional image data that surrounds the image of the item is based on a list of one or more store items that is associated with a user, wherein the list of the one or more store items includes the item sold by the merchant store, and wherein the visual device is operated by the user or worn by the user.
Dependent claims: 62, 63, 64, 65, 66, 67, 68, 69, 70
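Claim 61 amounts to matching a recognized shelf item against a user's stored list and surrounding the item's image with a cue when it matches. A sketch with invented names (the list contents, the label-based recognizer):

```python
# Hypothetical stored shopping list associated with the user.
SHOPPING_LIST = {"user-1": ["milk", "eggs", "bread"]}

def identify_item(capture):
    # Stand-in for the image analysis tool.
    return capture["label"]

def augment(capture, user):
    item = identify_item(capture)
    on_list = item in SHOPPING_LIST[user]
    return {
        # i) the image of the item, ii) surrounding data derived from
        # the list of store items associated with the user.
        "center": capture["image"],
        "surround": "on your list" if on_list else "",
    }

display = augment({"image": "shelf.png", "label": "milk"}, "user-1")
```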
71. A processor-implemented method comprising:
displaying, at a television, a virtual store display that includes an image of an item, wherein a merchant store sells the item, and wherein the merchant store provides data to the television to generate the virtual store display;
obtaining a visual capture of the television via a visual device, wherein the visual capture includes at least a portion of the virtual store display;
performing image analysis on the visual capture via an image analysis tool of the visual device;
identifying the image of the item in the visual capture based on the image analysis;
generating an interactive display at the visual device, the interactive display including a user interactive area and a second image of the item;
detecting, via a sensor, a gesture performed by a user, the gesture being directed to the user interactive area of the interactive display;
providing the detected gesture to the visual device;
determining, at the visual device, an action associated with the detected gesture; and
performing the action associated with the detected gesture, wherein the performing of the action updates the interactive display.
Dependent claims: 72, 73, 74, 75, 76, 77, 78, 79, 80
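The television-to-device hand-off in claim 71 can be sketched as a pipeline (all names invented, and the "image analysis" here just reads item names off the captured screen): the merchant's feed drives the TV, the device captures the screen, mirrors the identified item into its own interactive display, and a gesture on that mirror updates the display.

```python
def tv_display(merchant_feed):
    # Merchant-provided data generates the virtual store display.
    return {"screen": merchant_feed["items"]}

def capture_and_identify(tv):
    # Stand-in for capturing the television and running image analysis.
    return tv["screen"]

tv = tv_display({"items": ["lamp", "chair"]})
identified = capture_and_identify(tv)

# Interactive display on the visual device: one user interactive area
# per identified item, mirrored from the TV capture.
device = {"areas": {item: "buy:" + item for item in identified}, "log": []}

def gesture(device, item):
    # The gesture's action updates the interactive display.
    device["log"].append(device["areas"][item])

gesture(device, "lamp")
```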
81. A processor-implemented method comprising:
detecting, at a sensor, a voice command that is vocalized by a first entity, wherein the voice command initiates a payment transaction to a second entity;
providing the detected voice command to a visual device that is operated by the first entity;
obtaining, at the visual device, a visual capture of a reality scene, wherein the visual capture of the reality scene includes an image of the second entity;
performing, at an image analysis tool of the visual device, image analysis on the obtained visual capture, wherein the image analysis tool identifies the image of the second entity in the visual capture;
reporting to the visual device that the second entity is in proximity to the first entity based on the identifying of the image of the second entity by the image analysis tool; and
completing the payment transaction from the first entity to the second entity based on the reporting.
Dependent claims: 82, 83, 85, 86, 87, 88, 89, 90
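The gating logic of claim 81, where a spoken command names a payee but the payment completes only after the capture confirms the payee is in the scene, can be sketched as follows. The command grammar, the scene encoding, and the balances are all invented for illustration.

```python
def parse_voice(command):
    # Toy grammar: "pay <payee> <amount>".
    _, payee, amount = command.split()
    return payee, float(amount)

def payee_in_scene(capture, payee):
    # Stand-in for the image analysis tool identifying the second
    # entity's image, i.e. the proximity report.
    return payee in capture["people"]

def pay_by_voice(command, capture, balances):
    payee, amount = parse_voice(command)
    if not payee_in_scene(capture, payee):
        return False        # no proximity report: do not complete payment
    balances["me"] -= amount
    balances[payee] = balances.get(payee, 0) + amount
    return True

balances = {"me": 20.0}
ok = pay_by_voice("pay carol 5", {"people": ["carol", "dan"]}, balances)
```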
84-1. The method of claim 84, wherein obtaining the authorization credentials for the payment transaction includes:
requesting a user to input a passcode for user identity confirmation; or
requesting the user to speak a predetermined sentence or phrase.
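The two alternatives can be sketched as one checker with two optional inputs; the stored passcode and phrase below are invented values, and the spoken-phrase check is reduced to a case-insensitive string match (a real system would verify the voiceprint as well as the words).

```python
# Hypothetical stored credentials for identity confirmation.
STORED = {"passcode": "1234", "phrase": "open sesame"}

def authorize(passcode=None, spoken=None):
    """Confirm user identity via a passcode OR a predetermined phrase."""
    if passcode is not None:
        return passcode == STORED["passcode"]
    if spoken is not None:
        return spoken.lower() == STORED["phrase"]
    return False  # no credential offered
```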
91. A processor-implemented method comprising:
receiving from a wallet user multiple gesture actions within a specified temporal quantum;
determining composite constituent gestures, gesture manipulated objects, and user account information from the received multiple gesture actions;
determining via a processor a composite gesture action associated with the determined composite constituent gestures and gesture manipulated objects; and
executing via a processor the composite gesture action to perform a transaction with a user account specified by the user account information.
Dependent claims: 92, 93, 94, 95, 96, 97
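The composite-gesture steps above can be sketched as a buffer-and-resolve routine. The window length, the composite table, and the event encoding are all assumptions made for illustration: gestures inside the temporal quantum are kept, the constituent gestures plus the manipulated object and account are extracted, and the resolved composite action drives one transaction.

```python
WINDOW = 2.0  # the "specified temporal quantum", in seconds (assumed)

# Hypothetical table mapping constituent-gesture tuples to a composite action.
COMPOSITES = {("point", "swipe"): "purchase"}

def composite_action(events):
    # Keep only gestures received within the temporal quantum.
    t0 = events[0]["t"]
    kept = [e for e in events if e["t"] - t0 <= WINDOW]
    # Determine constituent gestures, manipulated object, account info.
    gestures = tuple(e["gesture"] for e in kept)
    obj = next(e["object"] for e in kept if "object" in e)
    account = next(e["account"] for e in kept if "account" in e)
    # Determine the associated composite gesture action.
    return COMPOSITES[gestures], obj, account

action, obj, account = composite_action([
    {"t": 0.0, "gesture": "point", "object": "sku-9"},
    {"t": 1.2, "gesture": "swipe", "account": "wallet-1"},
    {"t": 9.0, "gesture": "wave"},  # outside the window: ignored
])

# Execute the composite gesture action as a transaction on the account.
transaction = {"action": action, "item": obj, "account": account}
```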
Specification