Systems and methods for automatically enabling subtitles based on detecting an accent
First Claim
1. A method for automatically enabling subtitles based on a user profile when a language is spoken with an accent, the method comprising:
storing, in a user profile associated with a user, a first data structure indicating a list of one or more languages that the user understands;
determining, at a first point in time, that a given language of the one or more languages in the list is being spoken with an accent by retrieving the first data structure, extracting the list, and comparing the spoken language to the one or more languages;
detecting a first plurality of user interactions of the user while the given language is being spoken with the accent;
storing, in the user profile, a data log indicating the first point in time and the first plurality of user interactions;
retrieving, from a remote source, an information table associating user interactions with values, wherein the values represent a general level of difficulty, the general level of difficulty being indicative of a measure of difficulty a plurality of users have in understanding accents in audio content;
comparing the first plurality of user interactions with the information table to determine a first plurality of values, wherein each value of the first plurality of values is associated with a respective one of the first plurality of user interactions;
calculating a first value based on the first plurality of values;
creating a second data structure, wherein the second data structure associates the first value with a user specific level of difficulty, the user specific level of difficulty being indicative of a measure of difficulty the user encounters in understanding the given language when spoken with the accent;
storing the second data structure in the user profile;
detecting that the given language is being spoken with the accent at a second point in time later than the first point in time;
based on detecting that the given language is being spoken with the accent at the second point in time, retrieving, from the user profile, the data log;
monitoring user interactions of the user while the given language is being spoken with the accent at the second point in time to determine whether the first plurality of user interactions are being performed again while the given language is being spoken with the accent;
based on determining that the first plurality of user interactions are not being performed again, updating the second data structure, the second data structure associating a second value that is lower than the first value with the user specific level of difficulty;
detecting that a media asset includes the given language spoken with the accent;
retrieving, from the user profile, the second data structure;
extracting, from the second data structure, the user specific level of difficulty; and
automatically generating for display subtitles for the media asset based on the extracted user specific level of difficulty.
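The storing, logging, and scoring steps recited above can be illustrated in code. The following is a minimal sketch only: the claim does not prescribe any implementation, and all names, the example interaction values, and the use of a mean as the aggregation are hypothetical assumptions.

```python
# Hypothetical sketch of the claimed data structures and first-value calculation.
# Assumptions: interaction names, table values, and mean aggregation are illustrative.
from dataclasses import dataclass, field
from statistics import mean

@dataclass
class UserProfile:
    # "First data structure": list of languages the user understands.
    languages: list
    # "Data log": point in time -> user interactions observed at that time.
    interaction_log: dict = field(default_factory=dict)
    # "Second data structure": (language, accent) -> user specific level of difficulty.
    difficulty: dict = field(default_factory=dict)

# "Information table" (retrieved from a remote source in the claim):
# user interaction -> value representing a general level of difficulty.
INFO_TABLE = {"rewind": 0.8, "volume_up": 0.5, "pause": 0.3}

def record_and_score(profile, language, accent, t, interactions):
    """Log interactions observed at time t and compute the first value."""
    if language not in profile.languages:
        return None  # only languages in the stored list are considered
    profile.interaction_log[t] = list(interactions)
    # Compare interactions with the information table to get a plurality of values.
    values = [INFO_TABLE[i] for i in interactions if i in INFO_TABLE]
    first_value = mean(values) if values else 0.0  # one plausible aggregation
    profile.difficulty[(language, accent)] = first_value
    return first_value

profile = UserProfile(languages=["English", "French"])
record_and_score(profile, "English", "Scottish", t=0.0, interactions=["rewind", "volume_up"])
```

Under these assumptions, the first value for the two logged interactions is the mean of 0.8 and 0.5, and it is stored against the (language, accent) pair in the user profile.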
Abstract
Systems and methods are described for automatically enabling subtitles based on a user profile when a language is spoken with an accent that the user has difficulty understanding. For example, a media guidance application may detect a first plurality of user interactions of the user while the given language is being spoken with the accent. Based on the first plurality of user interactions, the media guidance application may calculate a first value associated with a user specific level of difficulty indicating how difficult it is for the user to understand the language when spoken with the accent. If the first plurality of user interactions are not performed again, the media guidance application may update the user specific level of difficulty with a second value that is lower than the first value. The media guidance application may automatically generate for display subtitles for a media asset based on the user specific level of difficulty.
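The update step summarized in the abstract, in which the difficulty is lowered when the logged interactions are not performed again at a later point in time, can be sketched as follows. The decay factor is a hypothetical policy for illustration; the disclosure only requires that the second value be lower than the first.

```python
# Hypothetical sketch of the difficulty-update step.
# Assumption: halving the stored value is one way to produce a lower second value.
def update_difficulty(difficulty, logged_interactions, current_interactions, decay=0.5):
    """Return a second value lower than `difficulty` when none of the logged
    interactions are being performed again; otherwise keep the value unchanged."""
    repeated = set(logged_interactions) & set(current_interactions)
    if not repeated:
        return difficulty * decay  # second value, strictly lower than the first
    return difficulty

# The user no longer rewinds or raises the volume at the second point in time:
second = update_difficulty(0.65, ["rewind", "volume_up"], ["play"])
```

Here the absence of the previously logged interactions yields a lowered value, while any repeated interaction leaves the stored level unchanged.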
20 Claims
1. A method for automatically enabling subtitles based on a user profile when a language is spoken with an accent (reproduced in full as the First Claim above).
- View Dependent Claims (2, 3, 4, 5, 6, 7, 8, 9, 10)
11. A system for automatically enabling subtitles based on a user profile when a language is spoken with an accent, the system comprising:
control circuitry configured to:
store, in a user profile associated with a user, a first data structure indicating a list of one or more languages that the user understands;
determine, at a first point in time, that a given language of the one or more languages in the list is being spoken with an accent by retrieving the first data structure, extracting the list, and comparing the spoken language to the one or more languages;
detect a first plurality of user interactions of the user while the given language is being spoken with the accent;
store, in the user profile, a data log indicating the first point in time and the first plurality of user interactions;
retrieve, from a remote source, an information table associating user interactions with values, wherein the values represent a general level of difficulty, the general level of difficulty being indicative of a measure of difficulty a plurality of users have in understanding accents in audio content;
compare the first plurality of user interactions with the information table to determine a first plurality of values, wherein each value of the first plurality of values is associated with a respective one of the first plurality of user interactions;
calculate a first value based on the first plurality of values;
create a second data structure, wherein the second data structure associates the first value with a user specific level of difficulty, the user specific level of difficulty being indicative of a measure of difficulty the user encounters in understanding the given language when spoken with the accent;
store the second data structure in the user profile;
detect that the given language is being spoken with the accent at a second point in time later than the first point in time;
retrieve, based on detecting that the given language is being spoken with the accent at the second point in time, from the user profile, the data log;
monitor user interactions of the user while the given language is being spoken with the accent at the second point in time to determine whether the first plurality of user interactions are being performed again while the given language is being spoken with the accent;
update, based on determining that the first plurality of user interactions are not being performed again, the second data structure, the second data structure associating a second value that is lower than the first value with the user specific level of difficulty;
detect that a media asset includes the given language spoken with the accent;
retrieve, from the user profile, the second data structure;
extract, from the second data structure, the user specific level of difficulty; and
automatically generate for display subtitles for the media asset based on the extracted user specific level of difficulty.
- View Dependent Claims (12, 13, 14, 15, 16, 17, 18, 19, 20)
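The final steps of the system claim, detecting that a media asset includes the given language spoken with the accent, retrieving the stored level of difficulty, and generating subtitles accordingly, can be sketched as a simple threshold decision. The threshold value is a hypothetical policy, not part of the claim, which leaves the decision criterion open.

```python
# Hypothetical sketch of the subtitle-enabling decision.
# Assumption: a fixed threshold of 0.5 on the stored difficulty is illustrative only.
def should_enable_subtitles(stored_difficulty, asset_language, asset_accent, threshold=0.5):
    """Retrieve the user specific level of difficulty for the (language, accent)
    pair detected in the media asset and decide whether to show subtitles."""
    level = stored_difficulty.get((asset_language, asset_accent))
    return level is not None and level >= threshold

# "Second data structure" retrieved from the user profile:
stored = {("English", "Scottish"): 0.65}
should_enable_subtitles(stored, "English", "Scottish")
```

With a stored level of 0.65 against a 0.5 threshold, subtitles would be generated for display; a pair with no stored entry, or a sufficiently low level, would leave subtitles off.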
Specification