Methods and systems for providing multi-user recommendations
First Claim
1. A computer-implemented method, comprising:
maintaining, with respect to each of a plurality of known group contexts, a contextual profile comprising a set of profile parameters for each known group context and an indication of an activity of a plurality of activities in which one or more users may be engaged;
receiving, from a voice-activated device, a voice command of a first user requesting media content to be played by the voice-activated device as well as a contextual audio data corresponding to ambient sound within an environment in which the voice-activated device is located, wherein the ambient sound includes user voices as well as other sound data that is independent of media content played by the voice-activated device, and wherein the contextual audio data corresponding to the ambient sound is utilizable to determine a known group context of the plurality of known group contexts;
identifying, based on a first voice associated with the voice command, the first user;
determining, based on a second voice associated with the contextual audio data, a presence of a second user different from the first user within a proximity of the voice-activated device;
identifying, based on the second voice associated with the contextual audio data, the second user;
determining, based on the first user and the second user, the known group context of the plurality of known group contexts associated with the environment at least in part by comparing an audio feature extracted from the contextual audio data with an audio feature associated with a profile parameter of the known group context, the known group context indicating the activity of the plurality of activities in which multiple people are engaged;
determining a first list of recommended media content for the first user based on a first set of attributes stored in relation to the first user and the known group context, the first list of recommended media content determined based on the activity;
determining a second list of recommended media content for the second user based on a second set of attributes stored in relation to the second user and the known group context, the second list of recommended media content determined based on the activity;
determining a third list of recommended media content based at least in part on the first list and the second list; and
providing the third list of recommended media content to the voice-activated device.
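The claimed method can be illustrated with a minimal sketch: identify the commanding user and a second user from voices, match the pair against stored group-context profiles, and merge per-user recommendation lists into a third list. All names, data structures, and the interleaving merge policy below are illustrative assumptions, not details taken from the claims:

```python
# Hypothetical sketch of the claimed multi-user recommendation flow.
# Voiceprints, contextual profiles, and user attributes are stand-ins
# for real speaker-identification and profile stores.

VOICEPRINTS = {"alice": "vp-a", "bob": "vp-b"}          # known voice signatures
GROUP_CONTEXTS = {                                       # contextual profiles
    "dinner_party": {"audio_feature": "vp-a|vp-b", "activity": "dining"},
    "movie_night":  {"audio_feature": "vp-a",      "activity": "watching"},
}
USER_ATTRS = {"alice": {"dining": ["jazz"]}, "bob": {"dining": ["bossa nova"]}}

def identify(voice):
    # Match a voice against stored voiceprints (speaker identification).
    return next((u for u, vp in VOICEPRINTS.items() if vp == voice), None)

def determine_context(users):
    # Compare an audio feature derived from the detected users against
    # each known group context's profile parameter.
    feature = "|".join(sorted(VOICEPRINTS[u] for u in users))
    for name, profile in GROUP_CONTEXTS.items():
        if profile["audio_feature"] == feature:
            return name, profile["activity"]
    return None, None

def recommend(command_voice, ambient_voices):
    first = identify(command_voice)                      # user who spoke the command
    others = [identify(v) for v in ambient_voices]       # users heard in ambient sound
    users = [first] + [u for u in others if u and u != first]
    context, activity = determine_context(users)
    # First and second lists: per-user recommendations for the activity.
    lists = [USER_ATTRS[u].get(activity, []) for u in users]
    # Third list: interleave the per-user lists (one simple merge policy).
    merged = [item for pair in zip(*lists) for item in pair] if all(lists) else sum(lists, [])
    return context, merged

print(recommend("vp-a", ["vp-b"]))  # prints ('dinner_party', ['jazz', 'bossa nova'])
```

A production system would replace the string-equality voice match with actual speaker-recognition models, but the control flow mirrors the claim's ordering of steps.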
Abstract
Techniques described herein can be used to provide recommendations for multiple users. In particular, one or more users may interact with an interactive device to stream media content or utilize other services provided by a service provider. The users may provide commands to the interactive device to request content from a service provider. Contextual data associated with the request may be used to determine that an audience of the interactive device comprises more than one user. Based on this determination, content recommendations can be provided so that the recommendations are more likely to be suitable for the audience.
19 Claims
1. (Set forth above as the First Claim.) Dependent claims: 2, 3.
4. A computer-implemented method, comprising:
maintaining, with respect to each of a plurality of known group contexts, a contextual profile comprising profile parameters and an indication of an activity of a plurality of activities associated with a known group context of the plurality of known group contexts;
receiving a request, from a device, for media content to be played by the device;
determining, based on contextual audio data corresponding to ambient sound within an environment in which the device is located, that an audience of the device comprises more than one user, wherein the ambient sound includes user voices as well as other sound data that is independent of media content played by the device, and wherein the contextual audio data corresponding to the ambient sound is utilizable to determine the known group context of the plurality of known group contexts;
identifying, based on a first voice in the contextual audio data, a first set of attributes associated with a first user;
identifying, based on a second voice in the contextual audio data, a second set of attributes associated with a second user;
determining the known group context of the plurality of known group contexts associated with the environment by comparing audio features from the contextual audio data with audio features associated with profile parameters of the known group context, the known group context indicating the activity of the plurality of activities in which the first user and the second user are engaged;
generating a recommended media content based on the first set of attributes, the second set of attributes, and the known group context, the recommended media content being selected based on one or more preferences stored in relation to the activity; and
providing the recommended media content to the device.
Dependent claims: 5, 6, 7, 8, 9, 10, 11.
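One plausible reading of the feature-comparison step above is a nearest-profile match over numeric audio feature vectors. The feature dimensions, stored context vectors, and similarity threshold below are illustrative assumptions, not details from the claim:

```python
import math

# Hypothetical audio feature vectors (e.g. loudness, distinct-voice count,
# spectral flux) stored as profile parameters for each known group context.
CONTEXT_FEATURES = {
    "dinner_party": [0.8, 3.0, 0.4],
    "movie_night":  [0.3, 2.0, 0.9],
    "workout":      [0.9, 1.0, 0.7],
}

def cosine(a, b):
    # Cosine similarity between two feature vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def match_context(extracted, threshold=0.95):
    # Compare the feature extracted from contextual audio against each
    # context's stored feature; return the best match above the threshold.
    best = max(CONTEXT_FEATURES, key=lambda c: cosine(extracted, CONTEXT_FEATURES[c]))
    return best if cosine(extracted, CONTEXT_FEATURES[best]) >= threshold else None

print(match_context([0.82, 2.9, 0.41]))  # prints dinner_party
print(match_context([1.0, 0.0, 0.0]))    # prints None (no context close enough)
```

The threshold guards against forcing an ambient-sound sample into a context it only weakly resembles; a real system would tune it against labeled audio samples.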
12. A computer system, comprising:
a memory that stores computer-executable instructions; and
a processor configured to access the memory and execute the computer-executable instructions to implement a method comprising:
maintaining, with respect to each of a plurality of known group contexts, a contextual profile comprising a set of profile parameters for each known group context and an indication of an activity of a plurality of activities in which one or more users may be engaged;
receiving a request, from a device, for media content to be played by the device;
determining, based on contextual audio data corresponding to ambient sound within an environment in which the device is located, that an audience of the device comprises more than one user, wherein the ambient sound includes user voices as well as other sound data that is independent of media content played by the device, and wherein the contextual audio data corresponding to the ambient sound is utilizable to determine the known group context of the plurality of known group contexts;
identifying, based on a first voice in the contextual audio data, a first set of attributes associated with a first user;
identifying, based on a second voice in the contextual audio data, a second set of attributes associated with a second user;
determining the known group context of the plurality of known group contexts associated with the environment by comparing audio features from the contextual audio data with audio features associated with profile parameters of the known group context, the known group context indicating the activity of the plurality of activities in which the first user and the second user are engaged;
generating a recommended media content based on the first set of attributes, the second set of attributes, and the known group context, the recommended media content being selected based on preferences stored in relation to the activity; and
providing the recommended media content to the device.
Dependent claims: 13, 14, 15, 16, 17, 18, 19.