Assessing user-supplied evaluations
First Claim
1. A method for a computing system of an online merchant to assess reliability of evaluations supplied by users of the online merchant, the method comprising:
receiving multiple pieces of content that are created by multiple author users and that are supplied to the online merchant for use by customers of the online merchant, the author users being a subset of the customers of the online merchant and the pieces of content each being a customer-generated textual item review for one of multiple items available from the online merchant, each of the multiple author users creating one or more of the multiple item review content pieces;
assessing the multiple item review content pieces by, receiving multiple evaluations of the multiple item review content pieces that are supplied by multiple evaluator users who are customers of the online merchant, each of the received evaluations being from one of the evaluator users for one of the item review content pieces and including a numerical rating of the item review content piece for each of one or more of multiple predefined rating dimensions, each rating dimension related to an aspect of the item review content piece such that a numerical rating for the rating dimension indicates an assessment by the evaluator user of a degree to which that aspect of the item review content piece is satisfied, the received evaluations including one or more evaluations for each of the multiple item review content pieces;
identifying one or more of the multiple evaluations that are unreliable by, for each combination of an evaluator user and an author user who created one or more item review content pieces evaluated by the evaluator user, determining a subset of the received evaluations supplied by the evaluator user for the item review content pieces created by the author user; and
automatically assessing the evaluations of the determined subset to identify whether any of the evaluations of the determined subset are unreliable based at least in part on bias of the evaluator user towards the author user being detected, the detecting of the bias of the evaluator user towards the author user being based at least in part on analysis of the numerical ratings included in the evaluations of the determined subset; and
automatically determining quality ratings for each of the multiple item review content pieces and for at least one of the multiple rating dimensions based on the numerical ratings of the received multiple evaluations other than the identified unreliable evaluations; and
providing one or more indications of at least some of the determined quality ratings.
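The claim describes bias detection over an evaluator's ratings of one author's reviews and quality-rating computation that excludes unreliable evaluations, but leaves the analysis unspecified. A minimal sketch of one plausible reading, where the evaluation record shape, the uniform-extreme-score heuristic, and the `min_count`/`extreme` thresholds are all assumptions rather than the patented method:

```python
from collections import defaultdict
from statistics import mean

# Each evaluation is a dict: evaluator, author, content_id, and a "ratings"
# map from a rating dimension (e.g. "usefulness") to a numerical score (1-5).
def detect_biased_evaluations(evaluations, min_count=3, extreme=5):
    """Flag every evaluation in an evaluator->author subset as unreliable when
    the subset is large enough and uniformly gives the extreme score."""
    by_pair = defaultdict(list)
    for ev in evaluations:
        by_pair[(ev["evaluator"], ev["author"])].append(ev)
    unreliable = []
    for subset in by_pair.values():
        scores = [s for ev in subset for s in ev["ratings"].values()]
        if len(subset) >= min_count and all(s == extreme for s in scores):
            unreliable.extend(subset)
    return unreliable

def quality_ratings(evaluations, unreliable):
    """Average each content piece's scores per rating dimension, skipping
    the evaluations identified as unreliable."""
    keep = [ev for ev in evaluations if ev not in unreliable]
    acc = defaultdict(list)
    for ev in keep:
        for dim, score in ev["ratings"].items():
            acc[(ev["content_id"], dim)].append(score)
    return {key: mean(scores) for key, scores in acc.items()}
```

Grouping by (evaluator, author) pair mirrors the claim's "determined subset"; any other bias test (deviation from consensus, timing patterns) could be substituted for the uniform-score check.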
Abstract
Techniques are described for assessing information supplied by users in various ways, such as to assess the reliability and/or other attributes of the user-supplied information. In at least some situations, the user-supplied information includes votes or other evaluations supplied by users related to items available from an online merchant, such as ratings of usefulness or other attributes of item reviews for the items or of other types of content pieces that are provided by other users. If user-supplied information is assessed as being sufficiently reliable and/or to have other desired attributes of interest, such as based on an automated analysis of the information, the user-supplied information may be used in various ways in various embodiments, such as to rate the quality or other attributes of the evaluated content pieces, and/or to rate quality or other attributes of the content-providing users who provide the content pieces.
94 Citations
26 Claims
1. (Independent claim; set forth in full under "First Claim" above.) - View Dependent Claims (2, 3)
4. A computer-implemented method for assessing reliability of evaluations supplied by users, the method comprising:
receiving multiple evaluations from an evaluator user, each of the received evaluations being for one of multiple content pieces that are supplied by an author user distinct from the evaluator user and including a quantitative rating of that content piece with respect to an indicated content rating dimension;
automatically assessing the received evaluations to identify one or more of the evaluations that are unreliable, the identifying being based at least in part on a determination that a bias relationship between the evaluator user and the author user exists at one or more times during which the identified one or more evaluations are received; and
providing an indication of the identified unreliable one or more evaluations, so that use of the identified unreliable one or more evaluations is inhibited. - View Dependent Claims (5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16)
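Claim 4 turns on whether a bias relationship "exists at one or more times during which" an evaluation is received, i.e. the check is time-scoped. A minimal sketch of that temporal test, assuming bias intervals per evaluator/author pair are already known from some other analysis; the record shape and field names are illustrative assumptions:

```python
def flag_by_bias_window(evaluations, bias_windows):
    """Flag evaluations received while a bias relationship between the
    evaluator and the author was in effect.

    bias_windows maps (evaluator, author) pairs to a list of (start, end)
    time intervals during which a bias relationship is deemed to exist."""
    flagged = []
    for ev in evaluations:
        for start, end in bias_windows.get((ev["evaluator"], ev["author"]), []):
            if start <= ev["time"] <= end:
                flagged.append(ev)
                break  # one matching window is enough to flag this evaluation
    return flagged
```

An evaluation from the same pair that falls outside every window stays usable, which matches the claim's "at one or more times" limitation.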
17. A non-transitory computer-readable medium whose contents cause a computing device to assess reliability of evaluations supplied by users, by performing a method comprising:
obtaining multiple evaluations by a first user of multiple pieces of content provided by a second user, the multiple evaluations including ratings of one or more attributes of at least some of the multiple provided content pieces;
determining a subset of the obtained evaluations by the first user for the multiple pieces of content provided by the second user;
automatically assessing the evaluations of the determined subset to determine whether one or more of the evaluations are unreliable based on at least one of a bias relationship existing between the first and second users when the one or more evaluations are supplied by the first user and of the one or more evaluations being disguised duplicates of one or more other evaluations supplied by the first user; and
providing an indication of the one or more evaluations if the one or more evaluations are determined to be unreliable. - View Dependent Claims (18, 19, 20)
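Claim 17 adds a second unreliability ground: an evaluation that is a "disguised duplicate" of an earlier one by the same user. One plausible reading is a repeat evaluation of the same content piece whose ratings are only slightly perturbed. A sketch under that assumption; the `tolerance` threshold, record shape, and keep-the-earliest policy are all illustrative choices:

```python
def find_disguised_duplicates(evaluations, tolerance=1):
    """Flag later evaluations by the same user of the same content piece whose
    ratings all fall within `tolerance` of an earlier evaluation's ratings."""
    seen = {}  # (evaluator, content_id) -> ratings dicts of kept evaluations
    duplicates = []
    for ev in sorted(evaluations, key=lambda e: e["time"]):
        key = (ev["evaluator"], ev["content_id"])
        earlier = seen.setdefault(key, [])
        if any(
            r.keys() == ev["ratings"].keys()
            and all(abs(r[d] - ev["ratings"][d]) <= tolerance for d in r)
            for r in earlier
        ):
            duplicates.append(ev)  # near-identical repeat: treat as disguised
        else:
            earlier.append(ev["ratings"])
    return duplicates
```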
21. A computing system configured to assess reliability of evaluations supplied by users, comprising:
one or more memories; and
an evaluation assessment system configured to, for each of one or more evaluator users who each supply multiple evaluations of objects associated with other users such that each of the supplied evaluations includes one or more ratings that each corresponds to an attribute of one of the objects:
determine a subset of the multiple evaluations supplied by the evaluator user for objects associated with one or more of the other users;
automatically assess the determined subset of the multiple evaluations supplied by the evaluator user to determine whether one or more bias relationships exist between the evaluator user and the one or more of the other users such that one or more of the subset of evaluations are identified as unreliable based on being supplied as part of those bias relationships, the assessing being based at least in part on the ratings included in the subset of evaluations; and
provide an indication of the one or more unreliable evaluations if the one or more bias relationships are determined to exist. - View Dependent Claims (22, 23, 24, 25, 26)
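Claim 21 requires that the bias-relationship determination be "based at least in part on the ratings included in the subset of evaluations". One plausible ratings-based test, distinct from a uniform-score check, compares the evaluator's average for an author's objects against everyone else's average for the same author. A sketch where the `min_gap`/`min_count` thresholds and record shape are assumptions:

```python
from statistics import mean

def detect_bias_by_deviation(evaluations, dim, min_gap=2.0, min_count=3):
    """Infer a bias relationship when an evaluator's average rating of one
    author's objects, on dimension `dim`, differs from all other evaluators'
    average for that author by at least `min_gap`."""
    pairs = {}
    for ev in evaluations:
        pairs.setdefault((ev["evaluator"], ev["author"]), []).append(ev)
    biased = []
    for (evaluator, author), subset in pairs.items():
        own = [ev["ratings"][dim] for ev in subset]
        others = [ev["ratings"][dim] for ev in evaluations
                  if ev["author"] == author and ev["evaluator"] != evaluator]
        if len(own) >= min_count and others and abs(mean(own) - mean(others)) >= min_gap:
            biased.extend(subset)  # whole subset supplied under the relationship
    return biased
```

Flagging the entire subset reflects the claim's framing that evaluations "supplied as part of" a detected bias relationship are the unreliable ones.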