Real-time annotation of images in a human assistive environment
Abstract
A method, information processing system, and computer program storage product annotate video images associated with an environmental situation based on detected actions of a human interacting with the environmental situation. A set of real-time video images is received that is captured by at least one video camera associated with an environment presenting one or more environmental situations to a human. One or more user actions made by the human that is associated with the set of real-time video images with respect to the environmental situation are monitored. A determination is made, based on the monitoring, that the human has one of performed and failed to perform at least one action associated with one or more images of the set of real-time video images. The one or more images of the set of real-time video images are annotated with a set of annotations.
18 Claims
1. A method of annotating video images associated with an environmental situation based on detected actions of a human interacting with the environmental situation, the method comprising:

receiving, with an information processing system, a set of real-time video images captured by at least one video camera associated with an environment presenting one or more environmental situations to a human;

monitoring, with the information processing system, one or more user actions made by the human that is associated with the set of real-time video images with respect to the environmental situation;

determining, based on the monitoring, that the human has one of performed and failed to perform at least one action associated with one or more images of the set of real-time video images;

annotating, with the information processing system, the one or more images of the set of real-time video images with a set of annotations based on the at least one action that has been one of performed and failed to be performed by the human;

comparing at least two sets of annotations of the one or more images;

identifying, based on the comparing, a most common action for the environmental situation; and

providing a control signal for an automatic action to be performed by a user assistive product, based on the most common action that has been identified, when the environmental situation is detected by the user assistive product.

Dependent claims: 2, 3, 4, 5, 6, 7.
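The comparing-and-identifying steps of claim 1 amount to a majority vote over annotation sets. A minimal sketch in Python; all names here (`Annotation`, `most_common_action`, the example situations and actions) are illustrative assumptions, not terms from the patent:

```python
from collections import Counter
from dataclasses import dataclass

@dataclass(frozen=True)
class Annotation:
    """One annotation of a video frame: the environmental situation observed
    and the action the human performed (or failed to perform) in response."""
    situation: str
    action: str
    performed: bool

def most_common_action(annotation_sets, situation):
    """Compare several sets of annotations and return the action most
    often performed for the given environmental situation, or None."""
    votes = Counter(
        a.action
        for annotations in annotation_sets
        for a in annotations
        if a.situation == situation and a.performed
    )
    if not votes:
        return None
    return votes.most_common(1)[0][0]

# Two annotation sets for the same situation, e.g. from different sessions.
set_a = [Annotation("obstacle_ahead", "brake", True),
         Annotation("obstacle_ahead", "swerve", True)]
set_b = [Annotation("obstacle_ahead", "brake", True)]

print(most_common_action([set_a, set_b], "obstacle_ahead"))  # prints: brake
```

The winning action would then drive the claimed control signal whenever the assistive product later detects the same situation.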
8. An information processing system for annotating video images associated with an environment of a moving vehicle, based on detected human actions of a driver of the moving vehicle, the information processing system comprising:
a memory;

a processor communicatively coupled to the memory;

an environment manager communicatively coupled to the memory and the processor, wherein the environment manager is configured to:

receive, with an information processing system, a set of real-time video images captured by at least one video camera associated with an environment of a moving vehicle, wherein the set of real-time video images are associated specifically with at least one vehicle control and maneuver environmental situation of the moving vehicle;

monitor, with the information processing system, one or more user control input signals corresponding to one or more vehicle control and maneuver actions made by a human driver of the moving vehicle that is associated with the set of real-time video images, with respect to the vehicle control and maneuver environmental situation;

determine, based on the monitoring, that the human driver has performed at least one vehicle control and maneuver action associated with one or more images of the set of real-time video images;

annotate, with the information processing system, the one or more images of the set of real-time video images with a set of annotations based on the at least one vehicle control and maneuver action performed by the human driver;

identify, based on at least the set of annotations, a most common vehicle control and maneuver action for the vehicle control and maneuver environmental situation; and

provide a control signal for an automatic action to be performed by a user assistive product, based on the most common vehicle control and maneuver action that has been identified, when the vehicle control and maneuver environmental situation is detected by the user assistive product.

Dependent claims: 9, 10, 11, 12, 13, 14.
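In the vehicle variant of claim 8, annotations built from driver control inputs are later replayed as a control signal when the assistive product detects the same situation. A minimal sketch, with all identifiers (`ManeuverLearner`, the example situations and maneuvers) assumed for illustration rather than taken from the patent:

```python
from collections import Counter, defaultdict

class ManeuverLearner:
    """Learns, per environmental situation, the most common driver maneuver,
    then replays it as an automatic action for a user assistive product."""

    def __init__(self):
        # situation -> Counter of maneuvers observed for that situation
        self._observed = defaultdict(Counter)

    def annotate(self, situation, maneuver):
        """Record that the driver performed `maneuver` in `situation`
        (e.g. derived from steering, brake, or throttle input signals)."""
        self._observed[situation][maneuver] += 1

    def control_signal(self, detected_situation):
        """When the assistive product detects a known situation, return the
        most common recorded maneuver as the automatic action; else None."""
        counts = self._observed.get(detected_situation)
        if not counts:
            return None
        return counts.most_common(1)[0][0]

learner = ManeuverLearner()
learner.annotate("sharp_left_curve", "reduce_speed")
learner.annotate("sharp_left_curve", "reduce_speed")
learner.annotate("sharp_left_curve", "downshift")
print(learner.control_signal("sharp_left_curve"))  # prints: reduce_speed
```

The same structure covers claim 15's storage-product framing, since the instructions it recites mirror the environment manager's configured steps.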
15. A non-transitory computer program storage product having a computer program stored thereon for annotating video images associated with an environment of a moving vehicle, based on detected human actions of a driver of the moving vehicle, the computer program comprising instructions for:
receiving a set of real-time video images captured by at least one video camera associated with an environment of a moving vehicle, wherein the set of real-time video images are associated specifically with at least one vehicle control and maneuver environmental situation of the moving vehicle;

monitoring one or more user control input signals corresponding to one or more vehicle control and maneuver actions made by a human driver of the moving vehicle that is associated with the set of real-time video images, with respect to the vehicle control and maneuver environmental situation;

determining, based on the monitoring, that the human driver has performed at least one vehicle control and maneuver action associated with one or more images of the set of real-time video images;

annotating the one or more images of the set of real-time video images with a set of annotations based on the at least one vehicle control and maneuver action performed by the human driver;

identifying, based on at least the set of annotations, a most common vehicle control and maneuver action for the vehicle control and maneuver environmental situation; and

providing a control signal for an automatic action to be performed by a user assistive product, based on the most common vehicle control and maneuver action that has been identified, when the vehicle control and maneuver environmental situation is detected by the user assistive product.

Dependent claims: 16, 17, 18.
Specification