Systems and methods for cookware detection
First Claim
1. A method for detecting cookware, the method comprising:
identifying, by one or more computing devices, one or more locations respectively associated with one or more burners included in a cooktop, wherein identifying, by one or more computing devices, the one or more locations respectively associated with the one or more burners comprises:
obtaining, by the one or more computing devices, a reference frame of imagery depicting the cooktop without any objects placed thereon;
obtaining, by the one or more computing devices, one or more calibration frames of imagery depicting the cooktop with one or more items of cookware respectively positioned at the one or more locations respectively associated with the one or more burners, wherein the one or more calibration frames of imagery are captured when motion is not detected at the cooktop;
performing, by the one or more computing devices, background subtraction for at least one of the one or more calibration frames of imagery with respect to the reference frame of imagery to identify new imagery; and
segmenting, by the one or more computing devices, the new imagery to identify the one or more locations respectively associated with the one or more burners included in the cooktop;
obtaining, by the one or more computing devices, a frame of imagery captured by a vision sensor, wherein the frame of imagery either depicts cookware on the cooktop, or does not depict cookware on the cooktop;
segmenting, by the one or more computing devices, the frame of imagery into one or more image segments based at least in part on the one or more locations respectively associated with the one or more burners included in the cooktop;
using, by the one or more computing devices, a classifier to provide an initial classification for each of the one or more image segments of the frame of imagery, the initial classification for each of the one or more image segments classifying the one or more image segments into either a first class of images depicting cookware or a second class of images not depicting cookware, wherein classification of one of the one or more image segments into the first class of images corresponds to detection of cookware on the cooktop;
providing, by the one or more computing devices for at least one of the one or more image segments, an indication to a user of whether the initial classification for such image segment comprises the first class or the second class;
after providing the indication for at least one of the one or more image segments, receiving, by the one or more computing devices, a user input indicating whether the initial classification for such image segment is correct; and
when the user input indicates that the initial classification for one of the one or more image segments is not correct, changing, by the one or more computing devices, the initial classification for such image segment to a subsequent classification, wherein the subsequent classification comprises the second class when the initial classification comprises the first class, and wherein the subsequent classification comprises the first class when the initial classification comprises the second class.
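The calibration steps recited above (reference frame, background subtraction, segmentation into burner locations) can be sketched as follows. This is a hypothetical illustration only: the grayscale-frame representation, the difference threshold of 30, and the helper names `background_subtract` and `segment_regions` are assumptions, not taken from the patent.

```python
# Sketch of the burner-location calibration: background-subtract a calibration
# frame (pans on burners) against an empty-cooktop reference frame, then group
# the changed pixels into connected regions, one per burner location.
from collections import deque

def background_subtract(reference, calibration, threshold=30):
    """Return a binary mask marking pixels that differ from the reference."""
    rows, cols = len(reference), len(reference[0])
    return [[1 if abs(calibration[r][c] - reference[r][c]) > threshold else 0
             for c in range(cols)] for r in range(rows)]

def segment_regions(mask):
    """Group changed pixels into 4-connected regions; one pixel set per region."""
    rows, cols = len(mask), len(mask[0])
    seen, regions = set(), []
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] and (r, c) not in seen:
                region, queue = set(), deque([(r, c)])
                seen.add((r, c))
                while queue:
                    y, x = queue.popleft()
                    region.add((y, x))
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                        if (0 <= ny < rows and 0 <= nx < cols
                                and mask[ny][nx] and (ny, nx) not in seen):
                            seen.add((ny, nx))
                            queue.append((ny, nx))
                regions.append(region)
    return regions

# Two pans placed on a 4x6 "cooktop" yield two burner locations.
reference = [[0] * 6 for _ in range(4)]
calibration = [[0] * 6 for _ in range(4)]
calibration[1][1] = calibration[1][2] = 200   # pan on one burner
calibration[2][4] = 200                        # pan on another burner
mask = background_subtract(reference, calibration)
burner_locations = segment_regions(mask)       # two regions found
```

Capturing the calibration frame "when motion is not detected" would, in practice, gate this routine behind a frame-differencing motion check; that gating is omitted here.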
Abstract
Systems and methods for cookware detection are provided. One example system includes a vision sensor positioned so as to collect a plurality of images of a cooktop. The system includes a classifier module implemented by one or more processors. The classifier module is configured to calculate a cookware score for each of the plurality of images and to use the cookware score for each of the plurality of images to classify such image as either depicting cookware or not depicting cookware. The system includes a classifier training module implemented by the one or more processors. The classifier training module is configured to train the classifier module based at least in part on a positive image training dataset and a negative image training dataset.
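The abstract's classifier module reduces each image to a scalar cookware score and thresholds it into one of the two classes. A minimal sketch, assuming a stand-in score (mean normalized brightness of the segment) and an arbitrary 0.5 threshold; a real classifier would use learned features, and neither the score nor the threshold here comes from the patent:

```python
# Hypothetical cookware-score classifier: score each image, then threshold
# the score to classify the image as depicting cookware or not.

def cookware_score(image):
    """Score in [0, 1]: mean of normalized pixel intensities (illustrative)."""
    pixels = [p for row in image for p in row]
    return sum(pixels) / (255 * len(pixels))

def classify(image, threshold=0.5):
    """Return True if the score indicates the image depicts cookware."""
    return cookware_score(image) >= threshold

images = [
    [[240, 230], [235, 245]],   # bright segment: pan present
    [[10, 20], [15, 5]],        # dark segment: empty burner
]
labels = [classify(img) for img in images]   # [True, False]
```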
16 Claims
1. A method for detecting cookware, the method comprising:
identifying, by one or more computing devices, one or more locations respectively associated with one or more burners included in a cooktop, wherein identifying, by one or more computing devices, the one or more locations respectively associated with the one or more burners comprises:
obtaining, by the one or more computing devices, a reference frame of imagery depicting the cooktop without any objects placed thereon;
obtaining, by the one or more computing devices, one or more calibration frames of imagery depicting the cooktop with one or more items of cookware respectively positioned at the one or more locations respectively associated with the one or more burners, wherein the one or more calibration frames of imagery are captured when motion is not detected at the cooktop;
performing, by the one or more computing devices, background subtraction for at least one of the one or more calibration frames of imagery with respect to the reference frame of imagery to identify new imagery; and
segmenting, by the one or more computing devices, the new imagery to identify the one or more locations respectively associated with the one or more burners included in the cooktop;
obtaining, by the one or more computing devices, a frame of imagery captured by a vision sensor, wherein the frame of imagery either depicts cookware on the cooktop, or does not depict cookware on the cooktop;
segmenting, by the one or more computing devices, the frame of imagery into one or more image segments based at least in part on the one or more locations respectively associated with the one or more burners included in the cooktop;
using, by the one or more computing devices, a classifier to provide an initial classification for each of the one or more image segments of the frame of imagery, the initial classification for each of the one or more image segments classifying the one or more image segments into either a first class of images depicting cookware or a second class of images not depicting cookware, wherein classification of one of the one or more image segments into the first class of images corresponds to detection of cookware on the cooktop;
providing, by the one or more computing devices for at least one of the one or more image segments, an indication to a user of whether the initial classification for such image segment comprises the first class or the second class;
after providing the indication for at least one of the one or more image segments, receiving, by the one or more computing devices, a user input indicating whether the initial classification for such image segment is correct; and
when the user input indicates that the initial classification for one of the one or more image segments is not correct, changing, by the one or more computing devices, the initial classification for such image segment to a subsequent classification, wherein the subsequent classification comprises the second class when the initial classification comprises the first class, and wherein the subsequent classification comprises the first class when the initial classification comprises the second class.
View Dependent Claims (2, 3, 4, 5, 6, 7)
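The user-feedback steps at the end of claim 1 amount to a simple correction rule: show the initial classification, and if the user says it is wrong, flip it to the opposite class. A sketch under assumed class names and input format, purely for illustration:

```python
# Hypothetical feedback-correction step: confirmed classifications are kept;
# rejected ones are flipped to the other of the two classes.

COOKWARE, NO_COOKWARE = "cookware", "no cookware"

def apply_user_feedback(initial_classification, user_says_correct):
    """Keep the classification if confirmed; otherwise flip to the other class."""
    if user_says_correct:
        return initial_classification
    return NO_COOKWARE if initial_classification == COOKWARE else COOKWARE

# Indication shown to the user, then corrected on negative feedback.
print(f"Detected: {COOKWARE}")                 # the indication provided to the user
corrected = apply_user_feedback(COOKWARE, user_says_correct=False)
# corrected == "no cookware"
```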
8. A system for detecting cookware, the system comprising:
a vision sensor positioned so as to collect imagery depicting a cooktop;
one or more processors; and
one or more non-transitory computer-readable media storing instructions that, when executed by the one or more processors, cause the one or more processors to perform operations, the operations comprising:
obtaining a first plurality of images, wherein the first plurality of images depict cookware upon the cooktop;
storing the first plurality of images in a memory as a positive image training dataset;
obtaining a second plurality of images, wherein the second plurality of images do not depict cookware upon the cooktop;
storing the second plurality of images in the memory as a negative image training dataset;
training a classifier based on the positive image training dataset and the negative image training dataset;
identifying one or more locations respectively associated with one or more burners included in the cooktop, wherein identifying the one or more locations respectively associated with the one or more burners comprises:
obtaining a reference frame of imagery depicting the cooktop without any objects placed thereon;
obtaining one or more calibration frames of imagery depicting the cooktop with one or more items of cookware respectively positioned at the one or more locations respectively associated with the one or more burners, wherein the one or more calibration frames of imagery are captured when motion is not detected at the cooktop;
performing background subtraction for the one or more calibration frames of imagery with respect to the reference frame of imagery to identify new imagery; and
segmenting the new imagery to identify the one or more locations respectively associated with the one or more burners;
segmenting a frame of imagery captured by the vision sensor into one or more image segments based at least in part on the one or more locations respectively associated with the one or more burners included in the cooktop;
using the classifier to provide an initial classification for each of the one or more image segments, the initial classification for each of the one or more image segments classifying such image segment into either a first class of images depicting cookware or a second class of images not depicting cookware, wherein classification of one of the one or more image segments into the first class of images corresponds to detection of cookware on the cooktop.
View Dependent Claims (9, 10, 11)
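Claim 8's training step builds the classifier from the stored positive and negative image datasets. As a hedged sketch, a nearest-centroid rule over mean brightness stands in for real training; the feature, the rule, and all names here are assumptions for illustration only:

```python
# Hypothetical training step: derive a classifier from a positive (cookware)
# and a negative (no cookware) training dataset via nearest-centroid matching.

def mean_brightness(image):
    """Single illustrative feature: mean pixel intensity of the image."""
    pixels = [p for row in image for p in row]
    return sum(pixels) / len(pixels)

def train(positive_dataset, negative_dataset):
    """Return a classifier closure built from the two training datasets."""
    pos_center = sum(map(mean_brightness, positive_dataset)) / len(positive_dataset)
    neg_center = sum(map(mean_brightness, negative_dataset)) / len(negative_dataset)

    def classifier(image):
        b = mean_brightness(image)
        # True = first class (cookware): closer to the positive centroid.
        return abs(b - pos_center) <= abs(b - neg_center)
    return classifier

positive = [[[200, 210]], [[220, 230]]]   # stored as the positive image training dataset
negative = [[[10, 20]], [[30, 25]]]       # stored as the negative image training dataset
classifier = train(positive, negative)
```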
12. A system for detecting cookware, the system comprising:
a vision sensor positioned so as to collect a plurality of images of a cooktop;
a calibration module implemented by one or more processors, the calibration module configured to:
obtain reference imagery depicting the cooktop without objects placed thereon;
obtain calibration imagery depicting the cooktop with one or more items of cookware respectively placed on one or more burners included in the cooktop, wherein the calibration imagery is captured only when motion is not detected at the cooktop;
perform background subtraction for the calibration imagery with respect to the reference imagery to identify new imagery; and
segment the new imagery to identify one or more locations respectively associated with the one or more burners;
a classifier module implemented by the one or more processors, the classifier module configured to:
calculate a cookware score for each of the plurality of images; and
use the cookware score for each of the plurality of images to respectively classify each of the plurality of images as either depicting cookware or not depicting cookware, wherein classification of one of the plurality of images as depicting cookware corresponds to detection of cookware at the cooktop; and
a classifier training module implemented by the one or more processors, the classifier training module configured to train the classifier module based at least in part on a positive image training dataset and a negative image training dataset.
View Dependent Claims (13, 14, 15, 16)
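Once calibration has produced burner locations, the claims segment each captured frame into per-burner image segments before classification. An illustrative sketch that crops one segment per burner from its bounding box; representing a location as a set of (row, col) pixels is an assumption carried over from a connected-region segmentation, not language from the patent:

```python
# Hypothetical per-burner segmentation: crop one image segment per burner
# location from a captured frame, using each location's bounding box.

def bounding_box(location):
    """Min/max row and column of a set of (row, col) pixel coordinates."""
    rows = [r for r, _ in location]
    cols = [c for _, c in location]
    return min(rows), max(rows), min(cols), max(cols)

def segment_frame(frame, burner_locations):
    """Return one cropped image segment per burner location."""
    segments = []
    for location in burner_locations:
        r0, r1, c0, c1 = bounding_box(location)
        segments.append([row[c0:c1 + 1] for row in frame[r0:r1 + 1]])
    return segments

frame = [[r * 10 + c for c in range(6)] for r in range(4)]   # 4x6 test frame
locations = [{(1, 1), (1, 2)}, {(2, 4), (3, 4)}]             # from calibration
segments = segment_frame(frame, locations)                   # one segment per burner
```

Each segment would then be passed to the classifier module independently, so cookware can be detected per burner rather than per frame.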
Specification