Systems and methods for automatically enabling subtitles based on detecting an accent

  • US 9,854,324 B1
  • Filed: 01/30/2017
  • Issued: 12/26/2017
  • Est. Priority Date: 01/30/2017
  • Status: Active Grant
First Claim

1. A method for automatically enabling subtitles based on a user profile when a language is spoken with an accent, the method comprising:

    storing, in a user profile associated with a user, a first data structure indicating a list of one or more languages that the user understands;

    determining, at a first point in time, that a given language of the one or more languages in the list is being spoken with an accent by retrieving the first data structure, extracting the list, and comparing the given language to the one or more languages;

    detecting a first plurality of user interactions of the user while the given language is being spoken with the accent;

    storing, in the user profile, a data log indicating the first point in time and the first plurality of user interactions;

    retrieving, from a remote source, an information table associating user interactions with values, wherein the values represent a general level of difficulty, the general level of difficulty being indicative of a measure of difficulty a plurality of users have in understanding accents in audio content;

    comparing the first plurality of user interactions with the information table to determine a first plurality of values, wherein each value of the first plurality of values is associated with a respective one of the first plurality of user interactions;

    calculating a first value based on the first plurality of values;

    creating a second data structure, wherein the second data structure associates the first value with a user specific level of difficulty, the user specific level of difficulty being indicative of a measure of difficulty the user encounters in understanding the given language when spoken with the accent;

    storing the second data structure in the user profile;

    detecting that the given language is being spoken with the accent at a second point in time later than the first point in time;

    based on detecting that the given language is being spoken with the accent at the second point in time, retrieving, from the user profile, the data log;

    monitoring user interactions of the user while the given language is being spoken with the accent at the second point in time to determine whether the first plurality of user interactions are being performed again while the given language is being spoken with the accent;

    based on determining that the first plurality of user interactions are not being performed again, updating the second data structure, the second data structure associating a second value that is lower than the first value with the user specific level of difficulty;

    detecting that a media asset includes the given language spoken with the accent;

    retrieving, from the user profile, the second data structure;

    extracting, from the second data structure, the user specific level of difficulty; and

    automatically generating for display subtitles for the media asset based on the extracted user specific level of difficulty.
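The claimed steps amount to an algorithm: log user interactions while accented speech plays, map each interaction to a difficulty value via a lookup table, aggregate those values into a user-specific difficulty level for the (language, accent) pair, and enable subtitles when that level is high enough. A minimal sketch in Python follows; all names (`UserProfile`, `DIFFICULTY_TABLE`, `SUBTITLE_THRESHOLD`, the specific interaction values, and the mean-based aggregation) are hypothetical illustrations, not taken from the patent.

```python
# Sketch of the claimed method. Every identifier and constant here is a
# hypothetical illustration; the patent does not specify these values.
from dataclasses import dataclass, field
from time import time

# Hypothetical "information table" associating user interactions with
# difficulty values (the claim retrieves this from a remote source).
DIFFICULTY_TABLE = {
    "rewind": 3,     # rewinding suggests the user missed dialogue
    "volume_up": 2,  # raising the volume suggests trouble parsing speech
    "pause": 1,
}

SUBTITLE_THRESHOLD = 2.0  # hypothetical cutoff for enabling subtitles

@dataclass
class UserProfile:
    languages: list                                      # first data structure
    interaction_log: list = field(default_factory=list)  # data log of (time, interactions)
    difficulty: dict = field(default_factory=dict)       # second data structure

def record_interactions(profile, language, accent, interactions):
    """Log interactions observed while accented speech plays, then recompute
    the user-specific difficulty level for the (language, accent) pair."""
    profile.interaction_log.append((time(), list(interactions)))
    values = [DIFFICULTY_TABLE.get(i, 0) for i in interactions]
    # "First value": here, simply the mean of the per-interaction values.
    score = sum(values) / len(values) if values else 0.0
    profile.difficulty[(language, accent)] = score

def should_enable_subtitles(profile, language, accent):
    """Enable subtitles when the stored difficulty meets the threshold."""
    return profile.difficulty.get((language, accent), 0.0) >= SUBTITLE_THRESHOLD

profile = UserProfile(languages=["English"])
record_interactions(profile, "English", "Scottish", ["rewind", "volume_up", "rewind"])
print(should_enable_subtitles(profile, "English", "Scottish"))  # → True (mean 8/3 ≈ 2.67)
```

Recomputing the score on each call also captures the claim's update step: if at a later point the user no longer performs the same interactions (say, only a single `"pause"`), the recomputed difficulty drops below the first value and subtitles are no longer enabled.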

  • 9 Assignments