Systems and methods for automatically enabling subtitles based on detecting an accent
First Claim
1. A method for automatically displaying explanatory messages based on a user profile when a language is spoken with an accent in a media asset, the method comprising:
storing, in a user profile, a first data structure indicating a list of one or more languages that the user understands;
determining, at a first point in time, that a language is being spoken in the media asset with an accent by:
performing natural language processing to detect an audio signature of the language;
determining, based on the audio signature, that the language is being spoken with an accent;
identifying the language spoken in the media asset; and
determining that the identified language spoken in the media asset matches one of the languages in the list of one or more languages that the user understands;
receiving, from a remote source, information needed to populate the explanatory messages;
extracting from the user profile a user specific level of difficulty being indicative of a measure of difficulty the user has in understanding the language when spoken with an accent; and
in response to determining that the language is being spoken in the media asset with the accent, automatically generating for display explanatory messages for the media asset based on the user specific level of difficulty.
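The decision logic recited in claim 1 can be sketched in code. This is a minimal illustration only: the patent specifies no implementation, so the data structures, helper names, and the 0-to-1 difficulty scale below are all assumptions.

```python
# Hypothetical sketch of the claim-1 decision logic: a profile stores the
# languages the user understands (the "first data structure") plus a
# per-language accent-difficulty value, and explanatory messages are shown
# only when every condition in the claim is satisfied.
from dataclasses import dataclass, field


@dataclass
class UserProfile:
    understood_languages: set                                   # languages the user understands
    accent_difficulty: dict = field(default_factory=dict)       # language -> difficulty in [0.0, 1.0]


def should_show_explanatory_messages(profile, detected_language,
                                     accent_detected, threshold=0.5):
    """Return True when the claimed conditions for auto-display are met."""
    # Condition 1: the language must be spoken with an accent
    # (in the claim, detected via an audio signature).
    if not accent_detected:
        return False
    # Condition 2: the identified language must match one the user understands.
    if detected_language not in profile.understood_languages:
        return False
    # Condition 3: the user-specific level of difficulty, extracted from the
    # profile, must warrant explanatory messages (threshold is an assumption).
    difficulty = profile.accent_difficulty.get(detected_language, 0.0)
    return difficulty >= threshold


profile = UserProfile(understood_languages={"en", "fr"},
                      accent_difficulty={"en": 0.7})
print(should_show_explanatory_messages(profile, "en", accent_detected=True))
```

Here the accented English audio triggers display because the stored difficulty (0.7) exceeds the assumed threshold; the same call with `accent_detected=False`, or with a language the user does not understand, returns False.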
Abstract
Systems and methods are described for automatically enabling subtitles based on a user profile when a language is spoken with an accent that the user has difficulty understanding. For example, a media guidance application may detect a first plurality of user interactions of the user while the given language is being spoken with the accent. Based on the first plurality of interactions, the media guidance application may calculate a first value associated with a user-specific level of difficulty, indicating how difficult it is for the user to understand the language when spoken with the accent. If the first plurality of user interactions is not performed again, the media guidance application may update the user-specific level of difficulty with a second value that is lower than the first value. The media guidance application may automatically generate for display subtitles for a media asset based on the user-specific level of difficulty.
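The abstract's update rule can be sketched as follows. The decay factor and the numeric scale are assumptions for illustration; the abstract only requires that the second value be lower than the first when the earlier user interactions are not performed again.

```python
# Hypothetical sketch of the abstract's difficulty-update behaviour:
# keep the stored value while the earlier interactions (e.g. rewinding,
# raising the volume) recur, otherwise replace it with a lower value.
def update_difficulty(current: float, interactions_repeated: bool,
                      decay: float = 0.8) -> float:
    """Return the updated user-specific level of difficulty."""
    if interactions_repeated:
        return current          # user still struggles: keep the first value
    return current * decay      # second value, lower than the first


level = 0.75   # first value, calculated from the first plurality of interactions
level = update_difficulty(level, interactions_repeated=False)
print(level < 0.75)
```

A multiplicative decay is just one choice; any monotone decrease (a fixed step, an exponential moving average over recent sessions) would satisfy the abstract's description equally well.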
20 Claims
1. A method for automatically displaying explanatory messages based on a user profile when a language is spoken with an accent in a media asset, the method comprising the limitations recited in the First Claim above. - View Dependent Claims (2, 3, 4, 5, 6, 7, 8, 9, 10)
11. A system for automatically displaying explanatory messages based on a user profile when a language is spoken with an accent in a media asset, the system comprising:
control circuitry configured to:
store, in a user profile, a first data structure indicating a list of one or more languages that the user understands;
determine, at a first point in time, that a language is being spoken in the media asset with an accent by:
performing natural language processing to detect an audio signature of the language;
determining, based on the audio signature, that the language is being spoken with an accent;
identifying the language spoken in the media asset; and
determining that the identified language spoken in the media asset matches one of the languages in the list of one or more languages that the user understands;
receive, from a remote source, information needed to populate the explanatory messages;
extract from the user profile a user specific level of difficulty being indicative of a measure of difficulty the user has in understanding the language when spoken with an accent; and
in response to determining that the language is being spoken in the media asset with the accent, automatically generate for display explanatory messages for the media asset based on the user specific level of difficulty. - View Dependent Claims (12, 13, 14, 15, 16, 17, 18, 19, 20)
Specification