
Augmenting speech segmentation and recognition using head-mounted vibration and/or motion sensors

  • US 9,135,915 B1
  • Filed: 07/26/2012
  • Issued: 09/15/2015
  • Est. Priority Date: 07/26/2012
  • Status: Active Grant
First Claim

1. A method, comprising:

  • receiving audio data representative of audio detected by a microphone, wherein the microphone is positioned on a head-mountable device (HMD);

  • determining whether the received audio data comprises audio speech data in an audio-channel speech band or audio non-speech data outside the audio-channel speech band;

  • receiving vibration data representative of vibrations detected by a sensor other than the microphone, wherein the sensor is positioned on the HMD;

  • determining a degree of spectral coherency, with respect to a threshold, between the audio data and the vibration data;

  • determining whether or not the audio data is causally related to the vibration data based on the determined degree of spectral coherency; and

  • if the received audio data both (a) comprises audio speech data in an audio-channel speech band and (b) is determined to be causally related to the vibration data based on the degree of spectral coherency, then generating an indication that the audio data contains HMD-wearer speech and conditioning at least one of the audio data and the vibration data as speech data, wherein the conditioning comprises amplifying at least one of the audio data and the vibration data;

  • if the received audio data both (a) comprises audio non-speech data outside the audio-channel speech band and (b) is determined to be causally related to the vibration data based on the degree of spectral coherency, then conditioning at least one of the audio data and the vibration data as coherent non-speech data, wherein the conditioning comprises removing or replacing non-speech data from at least one of the audio data and the vibration data; and

  • otherwise, determining that the received audio data and the vibration data are non-coherent and conditioning at least one of the audio data and the vibration data as non-speech data, wherein the conditioning comprises removing or replacing non-speech data from at least one of the audio data and the vibration data.
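
The sketch below is not the patented implementation; it only illustrates the kind of decision logic the claim describes, using Welch-based magnitude-squared coherence (scipy.signal.coherence) as a stand-in for the "degree of spectral coherency." The sample rate, speech-band edges (300–3400 Hz), coherence threshold, in-band energy criterion, and gain value are illustrative assumptions, not values taken from the patent.

import numpy as np
from scipy.signal import coherence

FS = 16_000                     # assumed sample rate (Hz)
SPEECH_BAND = (300.0, 3400.0)   # assumed audio-channel speech band (Hz)
COHERENCE_THRESHOLD = 0.5       # assumed spectral-coherency threshold

def band_energy_ratio(audio, fs=FS, band=SPEECH_BAND):
    """Fraction of the audio frame's spectral energy inside the speech band."""
    spectrum = np.abs(np.fft.rfft(audio)) ** 2
    freqs = np.fft.rfftfreq(len(audio), d=1.0 / fs)
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    return spectrum[in_band].sum() / max(spectrum.sum(), 1e-12)

def classify_frame(audio, vibration, fs=FS):
    """Return ('speech' | 'coherent-non-speech' | 'non-coherent', conditioned audio)."""
    # Degree of spectral coherency between the microphone and vibration channels,
    # averaged over the assumed speech band.
    freqs, cxy = coherence(audio, vibration, fs=fs, nperseg=256)
    in_band = (freqs >= SPEECH_BAND[0]) & (freqs <= SPEECH_BAND[1])
    coherency = float(cxy[in_band].mean())
    causally_related = coherency >= COHERENCE_THRESHOLD

    # Assumed criterion: most of the frame's energy lies in the speech band.
    is_speech_band = band_energy_ratio(audio, fs) > 0.5

    if is_speech_band and causally_related:
        # Condition as HMD-wearer speech: amplify (illustrative 2x gain).
        return "speech", audio * 2.0
    if causally_related:
        # Coherent non-speech: remove (zero out) the non-speech data.
        return "coherent-non-speech", np.zeros_like(audio)
    # Non-coherent: treat as non-speech and remove it.
    return "non-coherent", np.zeros_like(audio)

Called on short synchronized frames from the microphone and the on-HMD vibration sensor, the function mirrors the claim's three outcomes (wearer speech, coherent non-speech, non-coherent); the actual band limits, threshold, and conditioning gains would come from the patent's description or from tuning, and are assumed here.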
