Method and apparatus for speech encoding and decoding by sinusoidal analysis and waveform encoding with phase reproducibility

First Claim
1. A speech encoding method in which an input speech signal is divided on a time axis in terms of preset encoding units and encoded in terms of the preset encoding units, comprising the steps of:
 detecting a voiced/unvoiced sound state of the input speech signal and classifying the input speech signal into voiced portions and unvoiced portions;
finding short-term prediction residuals of the voiced portions of the input speech signal;
encoding the short-term prediction residuals of the voiced portions of the input speech signal by sinusoidal analytic encoding; and
encoding the unvoiced portions of the input speech signal by waveform encoding.
Abstract
A speech encoding method and apparatus in which an input speech signal is divided in terms of blocks or frames as encoding units and encoded in terms of the encoding units, whereby explosive and fricative consonants can be impeccably reproduced, while the generation of foreign sounds at a transient portion between voiced (V) and unvoiced (UV) portions is suppressed, so that speech of high clarity, devoid of a “stuffed” feeling, may be produced. The encoding apparatus includes a first encoding unit for finding residuals of linear predictive coding (LPC) of an input speech signal for performing harmonic coding, and a second encoding unit for encoding the input speech signal by waveform coding. The first encoding unit and the second encoding unit are used for encoding a voiced (V) portion and an unvoiced (UV) portion of the input signal, respectively. Code excited linear prediction (CELP) encoding, employing vector quantization by a closed-loop search of an optimum vector using an analysis-by-synthesis method, is used for the second encoding unit. A corresponding decoding method and apparatus is also provided.
62 Citations
Switching between coding schemes  
Patent #
US 7,876,966 B2
Filed 03/11/2003

Current Assignee
Intellectual Ventures I LLC

Sponsoring Entity
Spyder Navigations LLC

Audio Signal Decoder, Time Warp Contour Data Provider, Method and Computer Program  
Patent #
US 20110106542A1
Filed 07/01/2009

Current Assignee
Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V.

Sponsoring Entity
Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V.

Time Warp Contour Calculator, Audio Signal Encoder, Encoded Audio Signal Representation, Methods and Computer Program  
Patent #
US 20110161088A1
Filed 07/01/2009

Current Assignee
Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V.

Sponsoring Entity
Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V.

CODING WITH NOISE SHAPING IN A HIERARCHICAL CODER  
Patent #
US 20110224995A1
Filed 11/17/2009

Current Assignee
Orange S.A.

Sponsoring Entity
Orange S.A.

Method and apparatus for extracting pitch information from audio signal using morphology  
Patent #
US 7,822,600 B2
Filed 07/11/2006

Current Assignee
Samsung Electronics Co. Ltd.

Sponsoring Entity
Samsung Electronics Co. Ltd.

Audio signal noise reduction device and method  
Patent #
US 7,711,557 B2
Filed 11/27/2006

Current Assignee
Sony Corporation

Sponsoring Entity
Sony Corporation

METHOD AND APPARATUS FOR ENCODING AND DECODING AUDIO SIGNALS  
Patent #
US 20090187409A1
Filed 10/08/2007

Current Assignee
Qualcomm Inc.

Sponsoring Entity
Qualcomm Inc.

Method of synthesis for a steady sound signal  
Patent #
US 7,558,727 B2
Filed 08/05/2003

Current Assignee
Huawei Technologies Co. Ltd.

Sponsoring Entity
Koninklijke Philips N.V.

System and Method for a High Performance Audio Codec  
Patent #
US 20080162150A1
Filed 12/14/2007

Current Assignee
Vianix Delaware LLC

Sponsoring Entity
Vianix Delaware LLC

Method and apparatus for extracting pitch information from audio signal using morphology  
Patent #
US 20070106503A1
Filed 07/11/2006

Current Assignee
Samsung Electronics Co. Ltd.

Sponsoring Entity
Samsung Electronics Co. Ltd.

Audio signal noise reduction device and method  
Patent #
US 20070150261A1
Filed 11/27/2006

Current Assignee
Sony Corporation

Sponsoring Entity
Sony Corporation

Method of synthesis for a steady sound signal  
Patent #
US 20060178873A1
Filed 08/05/2003

Current Assignee
Huawei Technologies Co. Ltd.

Sponsoring Entity
Koninklijke Philips N.V.

Switching between coding schemes  
Patent #
US 20060173675A1
Filed 03/11/2003

Current Assignee
Intellectual Ventures I LLC

Sponsoring Entity
Intellectual Ventures I LLC

Audio coding and decoding apparatuses and methods, and recording mediums storing the methods  
Patent #
US 20060206316A1
Filed 01/18/2006

Current Assignee
Samsung Electronics Co. Ltd.

Sponsoring Entity
Samsung Electronics Co. Ltd.

METHODS AND SYSTEMS FOR BIT ALLOCATION AND PARTITIONING IN GAIN-SHAPE VECTOR QUANTIZATION FOR AUDIO CODING  
Patent #
US 20120232913A1
Filed 03/07/2012

Current Assignee
Timothy B. Terriberry, Jean-Marc Valin

Sponsoring Entity
Timothy B. Terriberry, Jean-Marc Valin

Post-Quantization Gain Correction in Audio Coding  
Patent #
US 20130339038A1
Filed 07/04/2011

Current Assignee
Telefonaktiebolaget LM Ericsson

Sponsoring Entity
Telefonaktiebolaget LM Ericsson

Selection of scalar quantization (SQ) and vector quantization (VQ) for speech coding  
Patent #
US 8,620,647 B2
Filed 01/26/2009

Current Assignee
Samsung Electronics Co. Ltd.

Sponsoring Entity
WIAV Solutions LLC

Codebook sharing for LSF quantization  
Patent #
US 8,635,063 B2
Filed 01/26/2009

Current Assignee
Samsung Electronics Co. Ltd.

Sponsoring Entity
WIAV Solutions LLC

Multimode speech encoding system for encoding a speech signal used for selection of one of the speech encoding modes including multiple speech encoding rates  
Patent #
US 8,650,028 B2
Filed 08/20/2008

Current Assignee
MACOM Technology Solutions Holdings Inc.

Sponsoring Entity
Mindspeed Technologies Inc.

METHOD AND SYSTEM FOR LOW BIT RATE VOICE ENCODING AND DECODING APPLICABLE FOR ANY REDUCED BANDWIDTH REQUIREMENTS INCLUDING WIRELESS  
Patent #
US 20140108007A1
Filed 10/09/2013

Current Assignee
Open Invention Network LLC

Sponsoring Entity
Clyde Holmes

Method and system for two-step spreading for tonal artifact avoidance in audio coding  
Patent #
US 8,838,442 B2
Filed 03/07/2012

Current Assignee
Xiph.Org Foundation

Sponsoring Entity
Xiph.Org Foundation

Coding with noise shaping in a hierarchical coder  
Patent #
US 8,965,773 B2
Filed 11/17/2009

Current Assignee
Orange S.A.

Sponsoring Entity
Orange S.A.

Methods and systems for adaptive time-frequency resolution in digital data coding  
Patent #
US 9,008,811 B2
Filed 09/16/2011

Current Assignee
Xiph.Org Foundation

Sponsoring Entity
Xiph.Org Foundation

Methods and systems for bit allocation and partitioning in gain-shape vector quantization for audio coding  
Patent #
US 9,009,036 B2
Filed 03/07/2012

Current Assignee
Xiph.Org Foundation

Sponsoring Entity
Xiph.Org Foundation

Methods and systems for avoiding partial collapse in multi-block audio coding  
Patent #
US 9,015,042 B2
Filed 03/07/2012

Current Assignee
Xiph.Org Foundation

Sponsoring Entity
Xiph.Org Foundation

Time warp activation signal provider, audio signal encoder, method for providing a time warp activation signal, method for encoding an audio signal and computer programs  
Patent #
US 9,015,041 B2
Filed 01/11/2011

Current Assignee
Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V.

Sponsoring Entity
Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V.

Audio signal decoder, audio signal encoder, encoded multi-channel audio signal representation, methods and computer program  
Patent #
US 9,025,777 B2
Filed 07/01/2009

Current Assignee
Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V.

Sponsoring Entity
Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V.

Audio signal decoder, time warp contour data provider, method and computer program  
Patent #
US 9,043,216 B2
Filed 07/01/2009

Current Assignee
Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V.

Sponsoring Entity
Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V.

Scalable And Embedded Codec For Speech And Audio Signals  
Patent #
US 20150302859A1
Filed 05/04/2015

Current Assignee
Alcatel-Lucent SA

Sponsoring Entity
Alcatel-Lucent SA

Adaptive codebook gain control for speech coding  
Patent #
US 9,190,066 B2
Filed 01/26/2009

Current Assignee
MACOM Technology Solutions Holdings Inc.

Sponsoring Entity
Mindspeed Technologies Inc.

Wideband speech parameterization for high quality synthesis, transformation and quantization  
Patent #
US 9,224,402 B2
Filed 09/30/2013

Current Assignee
International Business Machines Corporation

Sponsoring Entity
International Business Machines Corporation

Time warp activation signal provider, audio signal encoder, method for providing a time warp activation signal, method for encoding an audio signal and computer programs  
Patent #
US 9,263,057 B2
Filed 11/11/2014

Current Assignee
Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V.

Sponsoring Entity
Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V.

Adaptive gain reduction for encoding a speech signal  
Patent #
US 9,269,365 B2
Filed 07/11/2008

Current Assignee
MACOM Technology Solutions Holdings Inc.

Sponsoring Entity
Mindspeed Technologies Inc.

Time warp activation signal provider, audio signal encoder, method for providing a time warp activation signal, method for encoding an audio signal and computer programs  
Patent #
US 9,293,149 B2
Filed 11/11/2014

Current Assignee
Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V.

Sponsoring Entity
Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V.

Time warp contour calculator, audio signal encoder, encoded audio signal representation, methods and computer program  
Patent #
US 9,299,363 B2
Filed 07/01/2009

Current Assignee
Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V.

Sponsoring Entity
Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V.

Adaptive tilt compensation for synthesized speech  
Patent #
US 9,401,156 B2
Filed 06/27/2008

Current Assignee
Samsung Electronics Co. Ltd.

Sponsoring Entity
Samsung Electronics Co. Ltd.

Time warp activation signal provider, audio signal encoder, method for providing a time warp activation signal, method for encoding an audio signal and computer programs  
Patent #
US 9,431,026 B2
Filed 11/11/2014

Current Assignee
Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V.

Sponsoring Entity
Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V.

Time warp activation signal provider, audio signal encoder, method for providing a time warp activation signal, method for encoding an audio signal and computer programs  
Patent #
US 9,466,313 B2
Filed 11/11/2014

Current Assignee
Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V.

Sponsoring Entity
Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V.

Time warp activation signal provider, audio signal encoder, method for providing a time warp activation signal, method for encoding an audio signal and computer programs  
Patent #
US 9,502,049 B2
Filed 11/11/2014

Current Assignee
Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V.

Sponsoring Entity
Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V.

Method and apparatus for encoding and decoding audio signals  
Patent #
US 9,583,117 B2
Filed 10/08/2007

Current Assignee
Qualcomm Inc.

Sponsoring Entity
Qualcomm Inc.

Time warp activation signal provider, audio signal encoder, method for providing a time warp activation signal, method for encoding an audio signal and computer programs  
Patent #
US 9,646,632 B2
Filed 11/11/2014

Current Assignee
Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V.

Sponsoring Entity
Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V.

Method and system for low bit rate voice encoding and decoding applicable for any reduced bandwidth requirements including wireless  
Patent #
US 9,886,959 B2
Filed 10/09/2013

Current Assignee
Open Invention Network LLC

Sponsoring Entity
Open Invention Network LLC

Decoding method and decoder for audio signal according to gain gradient  
Patent #
US 10,102,862 B2
Filed 12/31/2015

Current Assignee
Huawei Technologies Co. Ltd.

Sponsoring Entity
Huawei Technologies Co. Ltd.

Post-quantization gain correction in audio coding  
Patent #
US 10,121,481 B2
Filed 07/04/2011

Current Assignee
Telefonaktiebolaget LM Ericsson

Sponsoring Entity
Telefonaktiebolaget LM Ericsson

Apparatus and method for encoding/decoding for high frequency bandwidth extension  
Patent #
US 10,152,983 B2
Filed 12/28/2011

Current Assignee
Samsung Electronics Co. Ltd.

Sponsoring Entity
Samsung Electronics Co. Ltd.

Linear prediction coefficient conversion device and linear prediction coefficient conversion method  
Patent #
US 10,163,448 B2
Filed 04/16/2015

Current Assignee
NTT Docomo Incorporated

Sponsoring Entity
NTT Docomo Incorporated

Apparatus and method for encoding/decoding for high frequency bandwidth extension  
Patent #
US 10,453,466 B2
Filed 12/10/2018

Current Assignee
Samsung Electronics Co. Ltd.

Sponsoring Entity
Samsung Electronics Co. Ltd.

Post-quantization gain correction in audio coding  
Patent #
US 10,460,739 B2
Filed 08/04/2017

Current Assignee
Telefonaktiebolaget LM Ericsson

Sponsoring Entity
Telefonaktiebolaget LM Ericsson

Linear prediction coefficient conversion device and linear prediction coefficient conversion method  
Patent #
US 10,714,107 B2
Filed 11/14/2018

Current Assignee
NTT Docomo Incorporated

Sponsoring Entity
NTT Docomo Incorporated

Linear prediction coefficient conversion device and linear prediction coefficient conversion method  
Patent #
US 10,714,108 B2
Filed 11/14/2018

Current Assignee
NTT Docomo Incorporated

Sponsoring Entity
NTT Docomo Incorporated

Excitation synchronous time encoding vocoder and method  
Patent #
US 5,623,575 A
Filed 07/17/1995

Current Assignee
General Dynamics C4 Systems Incorporated

Sponsoring Entity
Motorola Inc.

Digital speech coder with different excitation types  
Patent #
US 4,912,764 A
Filed 08/28/1985

Current Assignee
Bell Telephone Laboratories Inc.

Sponsoring Entity
American Telephone & Telegraph

Speech decoding method and apparatus  
Patent #
US 5,752,222 A
Filed 10/23/1996

Current Assignee
Sony Corporation

Sponsoring Entity
Sony Corporation

Comfort noise generation for digital communication systems  
Patent #
US 5,537,509 A
Filed 05/28/1992

Current Assignee
Hughes Network Systems LLC

Sponsoring Entity
Hughes Electronics Corporation

Method and apparatus for speech encoding, speech decoding, and speech post processing  
Patent #
US 5,596,675 A
Filed 09/13/1995

Current Assignee
Mitsubishi Electric Corporation

Sponsoring Entity
Mitsubishi Electric Corporation

Transform vector quantization for adaptive predictive coding  
Patent #
US 5,487,086 A
Filed 09/13/1991

Current Assignee
Intelsat Global Service Corporation

Sponsoring Entity
COMSAT Corporation

Voice encoding method and voice decoding method  
Patent #
US 5,473,727 A
Filed 11/01/1993

Current Assignee
Sony Corporation

Sponsoring Entity
Sony Corporation

Speech coding system having codebook storing differential vectors between each two adjoining code vectors  
Patent #
US 5,323,486 A
Filed 05/14/1992

Current Assignee
Fujitsu Limited

Sponsoring Entity
Fujitsu Limited

Speech encoding apparatus and related decoding apparatus  
Patent #
US 5,228,086 A
Filed 05/06/1991

Current Assignee
Matsushita Electric Industrial Company Limited

Sponsoring Entity
Matsushita Electric Industrial Company Limited

Low-complexity method for improving the performance of autocorrelation-based pitch detectors  
Patent #
US 5,127,053 A
Filed 12/24/1990

Current Assignee
L3 Communications Corporation

Sponsoring Entity
General Electric Company

Linear predictive codeword excited speech synthesizer  
Patent #
US 5,138,661 A
Filed 11/13/1990

Current Assignee
Lockheed Martin Corporation

Sponsoring Entity
General Electric Company

Linear predictive residual representation via non-iterative spectral reconstruction  
Patent #
US 5,067,158 A
Filed 06/11/1985

Current Assignee
Texas Instruments Inc.

Sponsoring Entity
Texas Instruments Inc.

28 Claims
 1. A speech encoding method in which an input speech signal is divided on a time axis in terms of preset encoding units and encoded in terms of the preset encoding units, comprising the steps of:
detecting a voiced/unvoiced sound state of the input speech signal and classifying the input speech signal into voiced portions and unvoiced portions; finding short-term prediction residuals of the voiced portions of the input speech signal; encoding the short-term prediction residuals of the voiced portions of the input speech signal by sinusoidal analytic encoding; and encoding the unvoiced portions of the input speech signal by waveform encoding. (Dependent claims: 2, 3, 4, 5)
 6. A speech encoding apparatus in which an input speech signal is divided on a time axis in terms of preset encoding units and encoded in terms of the preset encoding units, comprising:
means for detecting a voiced/unvoiced sound state of the input speech signal and classifying the input speech signal into voiced portions and unvoiced portions; means for finding short-term prediction residuals of voiced portions of the input speech signal; means for encoding the short-term prediction residuals of voiced portions of the input speech signal by sinusoidal analytic encoding; and means for encoding unvoiced portions of the input speech signal by waveform encoding. (Dependent claims: 7, 8, 9, 10)
 11. A speech decoding method for decoding an encoded speech signal obtained by encoding a voiced portion of an input speech signal with first encoding comprising sinusoidal analytic encoding and by encoding an unvoiced portion of the input speech signal with second encoding employing short-term prediction residuals, comprising the steps of:
finding first short-term prediction residuals for the voiced speech portion of the encoded speech signal by sinusoidal synthesis; finding second short-term prediction residuals for the unvoiced speech portion of the encoded speech signal; and employing predictive synthetic filtering for synthesizing first and second time-axis waveforms based on the first and second short-term prediction residuals of the voiced and unvoiced speech portions, respectively. (Dependent claims: 12, 13, 14)
 15. A speech decoding apparatus for decoding an encoded speech signal obtained by encoding voiced portions of an input speech signal with a first encoding and by encoding unvoiced portions of the input speech signal with a second encoding, comprising:
means for finding short-term prediction residuals for the voiced portions of the input speech signal by sinusoidal analytic encoding; means for finding short-term prediction residuals for the unvoiced portions of said encoded speech signal; and predictive synthetic filtering means for synthesizing a first time-axis waveform based on said short-term prediction residuals of the voiced speech portions and for synthesizing a second time-axis waveform based on the short-term prediction residuals of the unvoiced speech portions. (Dependent claims: 16)
 17. A speech decoding method for decoding an encoded speech signal obtained by finding short-term prediction residuals of an input speech signal and encoding resulting short-term prediction residuals with sinusoidal analytic encoding, comprising the steps of:
finding said short-term prediction residuals of said encoded speech signal by sinusoidal synthesis; adding noise controlled in amplitude based on said encoded speech signal to said short-term prediction residuals found by said sinusoidal synthesis; and performing predictive synthetic filtering by synthesizing a time-domain waveform based on said short-term prediction residuals found by said sinusoidal synthesis added to said noise. (Dependent claims: 18, 19, 20)
 21. A speech decoding apparatus for decoding an encoded speech signal obtained by finding short-term prediction residuals of an input speech signal and encoding said resulting short-term prediction residuals with sinusoidal analytic encoding, comprising:
sinusoidal synthesis means for finding said short-term prediction residuals of said encoded speech signal by sinusoidal synthesis; noise addition means for adding noise controlled in amplitude based on said encoded speech signal to said short-term prediction residuals; and predictive synthetic filtering means for synthesizing a time-domain waveform based on said short-term prediction residuals found by said sinusoidal synthesis means added to said noise. (Dependent claims: 22, 23, 24)
 25. A method for encoding an audible signal, comprising the steps of:
converting parameters derived from the input audible signal into a frequency-domain signal; and performing weighted vector quantization of said parameters, the weight of said weighted vector quantization being calculated based on results of an orthogonal transform of parameters derived from an impulse response of a weight transfer function. (Dependent claims: 26)
 27. A portable radio terminal apparatus comprising:
amplifier means for amplifying an input speech signal; A/D conversion means for performing analog to digital conversion of an output signal from said amplifier means; speech encoding means for speech-encoding an output signal from said A/D conversion means; transmission path encoding means for channel coding an output signal from said speech encoding means; modulation means for modulating an output signal from said transmission path encoding means; D/A conversion means for performing digital to analog conversion of an output signal from said modulation means; and amplifier means for amplifying an output signal from said D/A conversion means and supplying the resulting amplified signal to an antenna; wherein said speech encoding means comprises: means for detecting a voiced/unvoiced sound state of the input speech signal and classifying the input speech signal into voiced portions and unvoiced portions; predictive encoding means for finding short-term prediction residuals of voiced portions of the input speech signal; sinusoidal analytic encoding means for encoding the short-term prediction residuals of voiced portions of the input speech signal by sinusoidal analytic encoding; and waveform encoding means for waveform encoding of unvoiced portions of the input speech signal.
 28. A portable radio terminal apparatus comprising:
amplifier means for amplifying a received signal; A/D conversion means for performing analog to digital conversion of an output signal from said amplifier means; demodulating means for demodulating an output signal from said A/D conversion means; transmission path decoding means for channel decoding an output signal from said demodulating means; speech decoding means for speech-decoding an output signal from said transmission path decoding means; and D/A conversion means for performing digital to analog conversion of an output signal from said demodulating means; wherein said speech decoding means comprises: sinusoidal synthesis means for finding short-term prediction residuals of said encoded speech signal by sinusoidal synthesis; noise addition means for adding noise controlled in amplitude based on said encoded speech signal to said short-term prediction residuals; and a predictive synthetic filter for synthesizing a time-domain waveform based on the short-term prediction residuals added to the noise.
Specification
1. Field of the Invention
This invention relates to a speech encoding method in which an input speech signal is divided in terms of blocks or frames as encoding units and encoded in terms of the encoding units, a decoding method for decoding the encoded signal, and a corresponding speech encoding/decoding apparatus.
2. Description of the Related Art
There have conventionally been known a variety of encoding methods for encoding an audio signal (inclusive of speech and acoustic signals) for signal compression by exploiting statistical properties of the signals in the time domain and in the frequency domain and psychoacoustic characteristics of the human ear. The encoding methods may roughly be classified into time-domain encoding, frequency-domain encoding and analysis/synthesis encoding.
Examples of the high-efficiency encoding of speech signals include sinusoidal analytic encoding, such as harmonic encoding or multiband excitation (MBE) encoding, subband coding (SBC), linear predictive coding (LPC), discrete cosine transform (DCT), modified DCT (MDCT), and fast Fourier transform (FFT).
In the conventional MBE encoding or harmonic encoding, unvoiced speech portions are generated by a noise generating circuit. However, this method has a drawback that explosive consonants, such as p, k or t, or fricative consonants, cannot be produced correctly.
Moreover, if encoded parameters having totally different properties, such as line spectrum pairs (LSPs), are interpolated at a transient portion between a voiced (V) portion and an unvoiced (UV) portion, extraneous or foreign sounds tend to be produced. It is to be understood that by voiced is meant those sounds having a discernible spectral structure, while by unvoiced is meant those sounds whose spectrum looks like noise.
In addition, with the conventional sinusoidal synthetic coding, low-pitch speech, particularly male speech, tends to become unnatural “stuffed” speech.
It is therefore an object of the present invention to provide a speech encoding method and apparatus and a speech decoding method and apparatus whereby the explosive or fricative consonants can be correctly reproduced without the risk of a strange sound being generated in a transition portion between the voiced speech and the unvoiced speech, and whereby the speech of high clarity devoid of “stuffed” feeling can be produced.
With the speech encoding method of the present invention, in which an input speech signal is divided on the time axis in terms of preset encoding units and subsequently encoded in terms of the preset encoding units, short-term prediction residuals of the input speech signal are found, the short-term prediction residuals thus found are encoded with sinusoidal analytic encoding, and the input speech signal is encoded by waveform encoding.
The input speech signal is discriminated as to whether it is voiced or unvoiced. Based on the results of discrimination, the portion of the input speech signal judged to be voiced is encoded with the sinusoidal analytic encoding, while the portion thereof judged to be unvoiced is processed with vector quantization of the time-axis waveform by a closed-loop search of an optimum vector using an analysis-by-synthesis method.
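For concreteness, the following minimal Python sketch shows this V/UV-driven dispatch. The three callables are stand-ins for the LPC inverse filter, the sinusoidal coder and the CELP coder described in the embodiments below, not implementations taken from the patent.

```python
def encode_frame(frame, is_voiced, lpc_inverse_filter, harmonic_encode, celp_encode):
    """Dispatch one frame to the V or UV coder per the method above.
    The three callables are placeholders for the LPC inverse filter,
    the sinusoidal (harmonic) coder and the CELP coder described below."""
    if is_voiced:
        residual = lpc_inverse_filter(frame)    # short-term prediction residual
        return "V", harmonic_encode(residual)   # sinusoidal analytic encoding
    # unvoiced: closed-loop VQ of the time-axis waveform (CELP)
    return "UV", celp_encode(frame)
```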
It is preferred that, for the sinusoidal analytic encoding, perceptually weighted vector or matrix quantization is used for quantizing the shortterm prediction residuals, and that, for such perceptually weighted vector or matrix quantization, the weight is calculated based on the results of orthogonal transform of parameters derived from the impulse response of the weight transfer function.
According to the present invention, the short-term prediction residuals, such as LPC residuals, of the input speech signal are found and represented by a synthesized sinusoidal wave, while the input speech signal is encoded by waveform encoding with phase transmission, thus realizing efficient encoding.
In addition, the input speech signal is discriminated as to whether it is voiced or unvoiced and, based on the results of discrimination, the portion of the input speech signal judged to be voiced is encoded by the sinusoidal analytic encoding, while the portion thereof judged to be unvoiced is processed with vector quantization of the time-axis waveform by the closed-loop search of the optimum vector using the analysis-by-synthesis method, thereby improving the expressiveness of the unvoiced portion to produce reproduced speech of high clarity. This effect is enhanced in particular by raising the quantization rate. It is also possible to prevent extraneous sound from being produced at the transient portion between the voiced and unvoiced portions, and the artificial quality of the synthesized speech at the voiced portion is diminished, producing more natural synthesized speech.
By calculating the weight at the time of weighted vector quantization of the parameters of the input signal converted into the frequency domain signal based on the results of orthogonal transform of the parameters derived from the impulse response of the weight transfer function, the processing volume may be diminished to a fractional value thereby simplifying the structure or expediting the processing operations.
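One way to realize this, sketched here under the assumption of a weight transfer function of the common form W(z) = A(z/γ1)/A(z/γ2) (the filter form and γ values are illustrative assumptions, not taken from the patent), is to take the FFT of the truncated impulse response once per frame and read the weights off the magnitude response:

```python
import numpy as np
from scipy.signal import lfilter

def perceptual_weights(a, gamma1=0.9, gamma2=0.4, n_fft=256, n_points=44):
    """Weights from an orthogonal transform (here an FFT) of the impulse
    response of an assumed weight transfer function
    W(z) = A(z/gamma1) / A(z/gamma2)."""
    a = np.asarray(a, dtype=float)               # [1, a1, ..., aP]
    k = np.arange(len(a))
    num = a * gamma1 ** k                        # A(z/gamma1) coefficients
    den = a * gamma2 ** k                        # A(z/gamma2) coefficients
    impulse = np.zeros(n_fft); impulse[0] = 1.0
    h = lfilter(num, den, impulse)               # truncated impulse response
    mag = np.abs(np.fft.rfft(h))                 # magnitude response on a grid
    idx = np.linspace(0, len(mag) - 1, n_points).astype(int)
    return mag[idx]                              # one weight per envelope point
```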
Referring to the drawings, preferred embodiments of the present invention will be explained in detail.
The basic concept underlying the speech signal encoder is that it comprises a first encoding unit 110 for finding short-term prediction residuals, such as linear predictive coding (LPC) residuals, of the input speech signal for performing sinusoidal analytic encoding, and a second encoding unit 120 for encoding the input speech signal by waveform encoding having phase reproducibility.
The first encoding unit 110 employs the encoding of the LPC residuals with, for example, sinusoidal analytic encoding, such as harmonic encoding or multiband excitation (MBE) encoding. The second encoding unit 120 performs code excited linear prediction (CELP) encoding, using vector quantization by a closed-loop search of an optimum vector with an analysis-by-synthesis method.
In the embodiment described here, the first encoding unit 110 is used for encoding the voiced (V) portion of the input signal, while the second encoding unit 120 is used for encoding the unvoiced (UV) portion of the input signal.

The corresponding speech signal decoder is arranged as follows.
The index as the envelope quantization output at the input terminal 203 is sent to an inverse vector quantization unit 212 for inverse vector quantization to find a spectral envelope of the LPC residuals, which is sent to a voiced speech synthesizer 211. The voiced speech synthesizer 211 synthesizes the linear predictive coding (LPC) residuals of the voiced speech portion by sinusoidal synthesis. The synthesizer 211 is also fed with the pitch and the V/UV discrimination output from the input terminals 204, 205. The LPC residuals of the voiced speech from the voiced speech synthesis unit 211 are sent to an LPC synthesis filter 214. The index data of the UV data from the input terminal 207 is sent to an unvoiced sound synthesis unit 220, where reference is had to the noise codebook for taking out the LPC residuals of the unvoiced portion. These LPC residuals are also sent to the LPC synthesis filter 214. In the LPC synthesis filter 214, the LPC residuals of the voiced portion and the LPC residuals of the unvoiced portion are processed by LPC synthesis. Alternatively, the LPC residuals of the voiced portion and the LPC residuals of the unvoiced portion summed together may be processed with LPC synthesis. The LSP index data from the input terminal 202 is sent to an LPC parameter reproducing unit 213, where α-parameters of the LPC are taken out and sent to the LPC synthesis filter 214. The speech signals synthesized by the LPC synthesis filter 214 are taken out at an output terminal 201.
A more detailed structure of the speech signal encoder is now explained.
The LPC analysis circuit 132 of the LPC analysis/quantization unit 113 applies a Hamming window to the input signal waveform, with a block length on the order of 256 samples, and finds the linear prediction coefficients, that is, the so-called α-parameters, by the autocorrelation method. The framing interval as a data-outputting unit is set to approximately 160 samples. If the sampling frequency fs is 8 kHz, for example, a one-frame interval is 20 msec, or 160 samples.
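A minimal sketch of this analysis step follows; the Levinson-Durbin recursion shown is one standard realization of the autocorrelation method, and the patent does not prescribe a particular one.

```python
import numpy as np

def lpc_alpha(block, order=10):
    """Alpha-parameters by the autocorrelation method over a Hamming-
    windowed block of about 256 samples, as described above."""
    w = block * np.hamming(len(block))
    r = np.correlate(w, w, "full")[len(w) - 1:len(w) + order]
    a = np.zeros(order + 1)
    a[0] = 1.0
    err = r[0]
    for i in range(1, order + 1):
        k = -(r[i] + np.dot(a[1:i], r[i - 1:0:-1])) / err   # reflection coeff
        a[1:i + 1] += k * np.concatenate((a[i - 1:0:-1], [1.0]))
        err *= 1.0 - k * k
    return a    # [1, a1, ..., a10]: direct-type filter coefficients
```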
The α-parameters from the LPC analysis circuit 132 are sent to an α-LSP conversion circuit 133 for conversion into line spectrum pair (LSP) parameters. This converts the α-parameters, found as direct-type filter coefficients, into, for example, ten LSP parameters, that is, five pairs. The conversion is carried out by, for example, the Newton-Raphson method. The reason the α-parameters are converted into LSP parameters is that the LSP parameters are superior to the α-parameters in interpolation characteristics.
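For illustration, the same conversion can be obtained by finding the roots of the symmetric and antisymmetric LSP polynomials; the sketch below uses numpy.roots in place of the Newton-Raphson iteration named above, a swapped-in but equivalent technique.

```python
import numpy as np

def alpha_to_lsp(a):
    """Alpha-parameters to LSP frequencies via the roots of the
    symmetric/antisymmetric polynomials.  `a` is [1, a1, ..., aP]."""
    a = np.asarray(a, dtype=float)
    ext = np.concatenate((a, [0.0]))
    p_poly = ext + ext[::-1]        # P(z) = A(z) + z^-(P+1) A(1/z)
    q_poly = ext - ext[::-1]        # Q(z) = A(z) - z^-(P+1) A(1/z)
    lsps = []
    for poly in (p_poly, q_poly):
        ang = np.angle(np.roots(poly))
        # drop the trivial roots at z = 1 and z = -1, keep one of each pair
        lsps.extend(w for w in ang if 1e-6 < w < np.pi - 1e-6)
    return np.sort(lsps)            # P angles in (0, pi): five pairs for P=10
```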
The LSP parameters from the α-LSP conversion circuit 133 are matrix- or vector-quantized by the LSP quantizer 134. It is possible to take a frame-to-frame difference prior to vector quantization, or to collect plural frames in order to perform matrix quantization. In the present case, two frames, each 20 msec long, of the LSP parameters, calculated every 20 msec, are handled together and processed with matrix quantization and vector quantization.
The quantized output of the quantizer 134, that is, the index data of the LSP quantization, is taken out at a terminal 102, while the quantized LSP vector is sent to an LSP interpolation circuit 136.
The LSP interpolation circuit 136 interpolates the LSP vectors, quantized every 20 msec or 40 msec, in order to provide an octatuple rate; that is, the LSP vector is updated every 2.5 msec. The reason is that, if the residual waveform is processed with analysis/synthesis by the harmonic encoding/decoding method, the envelope of the synthetic waveform presents an extremely toothed waveform, so that, if the LPC coefficients are changed abruptly every 20 msec, a foreign noise is likely to be produced. If the LPC coefficients are instead changed gradually every 2.5 msec, such foreign noise may be prevented from occurring.
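The interpolation itself is plain linear interpolation between successive quantized LSP vectors, e.g.:

```python
import numpy as np

def interpolate_lsp(lsp_prev, lsp_cur, steps=8):
    """Linearly interpolate LSP vectors between two 20 msec frames so
    the filter can be updated every 2.5 msec (octatuple rate), as
    described above.  Returns `steps` vectors ending at lsp_cur."""
    t = (np.arange(1, steps + 1) / steps)[:, None]
    return (1.0 - t) * np.asarray(lsp_prev) + t * np.asarray(lsp_cur)
```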
For inverse filtering of the input speech using the interpolated LSP vectors produced every 2.5 msec, the LSP parameters are converted by an LSP-to-α conversion circuit 137 into α-parameters, which are filter coefficients of, for example, a ten-order direct-type filter. An output of the LSP-to-α conversion circuit 137 is sent to the LPC inverse filter circuit 111, which then performs inverse filtering for producing a smooth output using an α-parameter updated every 2.5 msec. An output of the inverse LPC filter 111 is sent to an orthogonal transform circuit 145, such as a DFT circuit, of the sinusoidal analysis encoding unit 114, such as a harmonic encoding circuit.
The α-parameter from the LPC analysis circuit 132 of the LPC analysis/quantization unit 113 is sent to a perceptual weighting filter calculating circuit 139, where data for perceptual weighting is found. These weighting data are sent to the perceptually weighted vector quantizer 116, the perceptual weighting filter 125 and the perceptually weighted synthesis filter 122 of the second encoding unit 120.
The sinusoidal analysis encoding unit 114, such as a harmonic encoding circuit, analyzes the output of the inverse LPC filter 111 by a method of harmonic encoding. That is, pitch detection, calculation of the amplitudes Am of the respective harmonics and voiced (V)/unvoiced (UV) discrimination are carried out, and the number of the amplitudes Am or of the envelope values of the respective harmonics, which varies with the pitch, is made constant by dimensional conversion.
In an illustrative example of the sinusoidal analysis encoding unit 114, the open-loop pitch search unit 141 and the zero-crossing counter 142 are fed with the input speech signal from the input terminal 101 and with the signal from the high-pass filter (HPF) 109, respectively.
The orthogonal transform circuit 145 performs orthogonal transform, such as discrete Fourier transform (DFT), for converting the LPC residuals on the time axis into spectral amplitude data on the frequency axis. An output of the orthogonal transform circuit 145 is sent to the fine pitch search unit 146 and a spectral evaluation unit 148 configured for evaluating the spectral amplitude or envelope.
The fine pitch search unit 146 is fed with relatively rough pitch data extracted by the open-loop pitch search unit 141 and with frequency-domain data obtained by DFT in the orthogonal transform unit 145. The fine pitch search unit 146 swings the pitch data by plus-or-minus several samples, in steps of 0.2 to 0.5, centered about the rough pitch value, in order to arrive ultimately at fine pitch data having an optimum decimal point (a floating-point value). The analysis-by-synthesis method is used as the fine search technique, selecting the pitch so that the synthesized power spectrum will be closest to the power spectrum of the original sound. Pitch data from the closed-loop fine pitch search unit 146 is sent to an output terminal 104 via a switch 118.
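A simplified sketch of this fractional refinement follows, with a harmonic-comb power score standing in for the full analysis-by-synthesis spectral fit described above; the step and swing values are taken from the ranges quoted in the text.

```python
import numpy as np

def fine_pitch(spectrum, n_fft, coarse_pitch, swing=3.0, step=0.25):
    """Swing the pitch lag by +/- a few samples in fractional steps around
    the open-loop estimate and keep the lag whose harmonic comb captures
    the most residual-spectrum power.  `spectrum` is |rfft| of the LPC
    residual computed with FFT size n_fft."""
    n_bins = len(spectrum)
    best_p, best_score = float(coarse_pitch), -np.inf
    for p in np.arange(coarse_pitch - swing, coarse_pitch + swing + 1e-9, step):
        if p <= 1.0:
            continue
        f0 = n_fft / p                          # fundamental position in bins
        harm = np.arange(f0, n_bins - 1, f0)    # harmonic bin positions
        if len(harm) == 0:
            continue
        score = spectrum[np.round(harm).astype(int)].mean()
        if score > best_score:
            best_p, best_score = p, score
    return best_p                               # fractional (fine) pitch lag
```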
In the spectral evaluation unit 148, the amplitude of each of the harmonics and the spectral envelope as the sum of the harmonics are evaluated based on the spectral amplitude and the pitch as the orthogonal transform output of the LPC residuals, and sent to the fine pitch search unit 146, V/UV discrimination unit 115 and to the perceptually weighted vector quantization unit 116.
The V/UV discrimination unit 115 discriminates V/UV of a frame based on an output of the orthogonal transform circuit 145, an optimum pitch from the fine pitch search unit 146, spectral amplitude data from the spectral evaluation unit 148, maximum value of the normalized autocorrelation r(p) from the open loop pitch search unit 141 and the zerocrossing count value from the zerocrossing counter 142. In addition, the boundary position of the bandbased V/UV discrimination for the MBE may also be used as a condition for V/UV discrimination. A discrimination output of the V/UV discrimination unit 115 is taken out at an output terminal 105.
An output unit of the spectrum evaluation unit 148 or an input unit of the vector quantization unit 116 may be provided with a data number conversion unit (a unit performing a sort of sampling rate conversion). The data number conversion unit is used for setting the amplitude data Am of an envelope to a constant number, in consideration of the fact that the number of bands split on the frequency axis, and hence the number of data, differs with the pitch. That is, if the effective band is up to 3400 Hz, the effective band is split into 8 to 63 bands depending on the pitch, so that the number mMx+1 of the amplitude data Am, obtained from band to band, changes in a range from 8 to 63. Thus, the data number conversion unit (not shown) converts the amplitude data of the variable number mMx+1 to a preset number M of data, such as 44 data.
The amplitude data or envelope data of the preset number M, such as 44, from the data number conversion unit, provided at an output unit of the spectral evaluation unit 148 or at an input unit of the vector quantization unit 116, are gathered in units of the preset number of data, such as 44 data, by the vector quantization unit 116, which performs weighted vector quantization. The weight is supplied by an output of the perceptual weighting filter calculation circuit 139. The index of the envelope from the vector quantizer 116 is taken out via a switch 117 at an output terminal 103. Prior to the weighted vector quantization, it is advisable to take an inter-frame difference, using a suitable leakage coefficient, for the vector made up of the preset number of data.
The second encoding unit 120 is now further explained. The second encoding unit 120 has a so-called CELP encoding structure and is used in particular for encoding the unvoiced portion of the input speech signal. In this CELP encoding structure, a noise output corresponding to the LPC residuals of the unvoiced sound, as a representative output value of the noise codebook, or so-called stochastic codebook, 121, is sent via a gain control circuit 126 to a perceptually weighted synthesis filter 122. The weighted synthesis filter 122 synthesizes the input noise by LPC synthesis and sends the produced weighted unvoiced signal to a subtractor 123. The subtractor 123 is fed with the signal supplied from the input terminal 101 via a high-pass filter (HPF) 109 and perceptually weighted by a perceptual weighting filter 125, and finds the difference or error between this signal and the signal from the synthesis filter 122. A zero-input response of the perceptually weighted synthesis filter is subtracted beforehand from the output of the perceptual weighting filter 125. This error is fed to a distance calculation circuit 124 for calculating the distance, and a representative vector value which will minimize the error is searched for in the noise codebook 121. The above is a summary of the vector quantization of the time-domain waveform employing the closed-loop search by the analysis-by-synthesis method.
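A compact sketch of that closed-loop search is given below; the all-pole weighted synthesis filter and the per-candidate optimal gain are illustrative assumptions consistent with the description, and the zero-input response is taken to be already subtracted from the target, as stated above.

```python
import numpy as np
from scipy.signal import lfilter

def celp_search(target, codebook, weighted_lpc_den):
    """Closed-loop (analysis-by-synthesis) noise-codebook search: each
    candidate vector is passed through the perceptually weighted
    synthesis filter and compared with the weighted target."""
    best_idx, best_gain, best_err = -1, 0.0, np.inf
    for idx, code in enumerate(codebook):
        synth = lfilter([1.0], weighted_lpc_den, code)   # weighted synthesis
        energy = np.dot(synth, synth)
        gain = np.dot(target, synth) / energy            # optimal gain
        err = np.sum((target - gain * synth) ** 2)
        if err < best_err:
            best_idx, best_gain, best_err = idx, gain, err
    return best_idx, best_gain   # shape index and gain (quantized separately)
```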
As data for the unvoiced (UV) portion from the second encoder 120 employing the CELP coding structure, the shape index of the codebook from the noise codebook 121 and the gain index of the codebook from the gain circuit 126 are taken out. The shape index, which is the UV data from the noise codebook 121, is sent to an output terminal 107s via a switch 127s, while the gain index, which is the UV data of the gain circuit 126, is sent to an output terminal 107g via a switch 127g.
These switches 127s, 127g and the switches 117, 118 are turned on and off depending on the results of V/UV decision from the V/UV discrimination unit 115. Specifically, the switches 117, 118 are turned on, if the results of V/UV discrimination of the speech signal of the frame currently transmitted indicates voiced (V), while the switches 127s, 127g are turned on if the speech signal of the frame currently transmitted is unvoiced (UV).
A more detailed structure of the speech signal decoder is now explained.
The LSP index is sent to the inverse vector quantizer 231 for the LSPs of the LPC parameter reproducing unit 213 so as to be inverse vector quantized to line spectral pair (LSP) data, which are then supplied to LSP interpolation circuits 232, 233 for interpolation. The resulting interpolated data are converted by LSP-to-α conversion circuits 234, 235 to α-parameters, which are sent to the LPC synthesis filter 214. The LSP interpolation circuit 232 and the LSP-to-α conversion circuit 234 are designed for voiced (V) sound, while the LSP interpolation circuit 233 and the LSP-to-α conversion circuit 235 are designed for unvoiced (UV) sound. The LPC synthesis filter 214 is made up of the LPC synthesis filter 236 for the voiced speech portion and the LPC synthesis filter 237 for the unvoiced speech portion. That is, LPC coefficient interpolation is carried out independently for the voiced speech portion and the unvoiced speech portion, prohibiting ill effects which might otherwise be produced in the transient portion from the voiced speech portion to the unvoiced speech portion, or vice versa, by interpolation of LSPs of totally different properties.
The vector-quantized index data of the spectral envelope Am from the input terminal 203 is sent to an inverse vector quantizer 212 for inverse vector quantization, where a conversion inverse to the data number conversion is carried out. The resulting spectral envelope data is sent to a sinusoidal synthesis circuit 215.
If the inter-frame difference is found prior to vector quantization of the spectrum during encoding, the inter-frame difference is decoded after inverse vector quantization for producing the spectral envelope data.
The sinusoidal synthesis circuit 215 is fed with the pitch from the input terminal 204 and the V/UV discrimination data from the input terminal 205. From the sinusoidal synthesis circuit 215, LPC residual data corresponding to the output of the LPC inverse filter 111 on the encoder side are taken out and sent to an adder 218.
The envelope data from the inverse vector quantizer 212 and the pitch and V/UV discrimination data from the input terminals 204, 205 are sent to a noise synthesis circuit 216 configured for noise addition for the voiced portion (V). An output of the noise synthesis circuit 216 is sent to the adder 218 via a weighted overlap-and-add circuit 217. Specifically, the noise is added to the voiced portion of the LPC residual signals in consideration of the fact that, if the excitation as an input to the LPC synthesis filter of the voiced sound is produced by sine wave synthesis, a “stuffed” feeling is produced in low-pitch sound, such as male speech, and the sound quality changes abruptly between the voiced sound and the unvoiced sound, producing an unnatural sound. Such noise takes into account parameters of the speech encoding data, such as the pitch, the amplitudes of the spectral envelope, the maximum amplitude in a frame or the residual signal level, in connection with the LPC synthesis filter input of the voiced speech portion, that is, the excitation.
A sum output of the adder 218 is sent to the synthesis filter 236 for the voiced sound of the LPC synthesis filter 214, where LPC synthesis is carried out to form time waveform data, which is then filtered by a post-filter 238v for the voiced speech and sent to an adder 239.
The shape index and the gain index, as UV data from the output terminals 107s and 107g of the encoder, are supplied to the unvoiced sound synthesis unit 220, where reference is had to the noise codebook for taking out the LPC residuals of the unvoiced portion; these residuals are windowed by a windowing circuit 223 for smoothing the junction with the voiced speech portion.
An output of the windowing circuit 223 is sent to the synthesis filter 237 for the unvoiced (UV) speech of the LPC synthesis filter 214. The data sent to the synthesis filter 237 is processed with LPC synthesis to become time waveform data for the unvoiced portion. The time waveform data of the unvoiced portion is filtered by a post-filter 238u for the unvoiced portion before being sent to the adder 239.

In the adder 239, the time waveform signal from the post-filter 238v for the voiced speech and the time waveform data from the post-filter 238u for the unvoiced speech portion are added to each other, and the resulting sum data is taken out at the output terminal 201.
The above-described speech signal encoder can output data of different bit rates depending on the required sound quality; that is, the output data can be output with variable bit rates. For example, if the low bit rate is 2 kbps and the high bit rate is 6 kbps, the output data has the bit rates shown in Table 1.
The pitch data from the output terminal 104 is output at all times at a bit rate of 8 bits/20 msec for the voiced speech, and the V/UV discrimination output from the output terminal 105 is at all times 1 bit/20 msec. The index for LSP quantization, output from the output terminal 102, is switched between 32 bits/40 msec and 48 bits/40 msec. The index for the voiced speech (V), output by the output terminal 103, is switched between 15 bits/20 msec and 87 bits/20 msec. The index for the unvoiced speech (UV), output from the output terminals 107s and 107g, is switched between 11 bits/10 msec and 23 bits/5 msec. The output data for the voiced sound (V) is thus 40 bits/20 msec for 2 kbps and 120 bits/20 msec for 6 kbps, while the output data for the unvoiced sound (UV) is 39 bits/20 msec for 2 kbps and 117 bits/20 msec for 6 kbps.
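Table 1 itself did not survive extraction, but the figures just quoted determine it completely (for example, a voiced frame at 6 kbps carries 8 + 1 + 48/2 + 87 = 120 bits per 20 msec; unvoiced frames carry no pitch data). A reconstruction:

              Table 1 (reconstructed from the figures above)

                              2 kbps             6 kbps
  Pitch (V only)              8 bits/20 msec     8 bits/20 msec
  V/UV discrimination         1 bit/20 msec      1 bit/20 msec
  LSP quantization index      32 bits/40 msec    48 bits/40 msec
  Envelope index (V)          15 bits/20 msec    87 bits/20 msec
  Shape + gain indices (UV)   11 bits/10 msec    23 bits/5 msec
  Total per voiced frame      40 bits/20 msec    120 bits/20 msec
  Total per unvoiced frame    39 bits/20 msec    117 bits/20 msec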
The index for LSP quantization, the index for voiced speech (V) and the index for the unvoiced speech (UV) are explained later on in connection with the arrangement of pertinent portions.
The α-parameters from the LPC analysis circuit 132 are sent to the α-LSP conversion circuit 133 for conversion to LSP parameters. If P-order LPC analysis is performed in the LPC analysis circuit 132, P α-parameters are calculated. These P α-parameters are converted into LSP parameters, which are held in a buffer 610.
The buffer 610 outputs two frames of LSP parameters. The two frames of LSP parameters are matrix-quantized by a matrix quantization unit 620 made up of a first matrix quantizer 620_1 and a second matrix quantizer 620_2. The two frames of LSP parameters are matrix-quantized in the first matrix quantizer 620_1, and the resulting quantization error is further matrix-quantized in the second matrix quantizer 620_2. The matrix quantization exploits correlation in both the time axis and the frequency axis. The quantization error for the two frames from the matrix quantizer 620_2 enters a vector quantization unit 640 made up of a first vector quantizer 640_1 and a second vector quantizer 640_2. The first vector quantizer 640_1 is made up of two vector quantization portions 650, 660, while the second vector quantizer 640_2 is made up of two vector quantization portions 670, 680. The quantization error from the matrix quantization unit 620 is quantized on the frame basis by the vector quantization portions 650, 660 of the first vector quantizer 640_1. The resulting quantization error vector is further vector-quantized by the vector quantization portions 670, 680 of the second vector quantizer 640_2. The above-described vector quantization exploits correlation along the frequency axis.
The matrix quantization unit 620, executing the matrix quantization as described above, includes at least a first matrix quantizer 620_1 for performing a first matrix quantization step and a second matrix quantizer 620_2 for performing a second matrix quantization step of matrix quantizing the quantization error produced by the first matrix quantization. The vector quantization unit 640, executing the vector quantization as described above, includes at least a first vector quantizer 640_1 for performing a first vector quantization step and a second vector quantizer 640_2 for performing a second vector quantization step of vector quantizing the quantization error produced by the first vector quantization.
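Generically, the two matrix stages and two vector stages form one residual cascade; the following sketch shows the cascade and index collection, with unweighted nearest-neighbour search standing in for the weighted-distance search detailed below, and with the frame-splitting of the vector stages omitted.

```python
import numpy as np

def cascade_quantize(x, codebooks):
    """Residual-cascade quantization as used by the LSP quantizer 134:
    every stage quantizes the error left by the previous stage and all
    stage indices are transmitted."""
    residual = np.asarray(x, dtype=float)
    indices = []
    quantized = np.zeros_like(residual)
    for cb in codebooks:                      # e.g. [MQ1, MQ2, VQ1, VQ2]
        dist = np.sum((cb - residual) ** 2, axis=tuple(range(1, cb.ndim)))
        i = int(np.argmin(dist))
        indices.append(i)
        quantized += cb[i]
        residual = residual - cb[i]
    return indices, quantized                 # indices sent; quantized ~ x
```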
The matrix quantization and the vector quantization will now be explained in detail.
The LSP parameters for two frames, stored in the buffer 610, that is, a 10×2 matrix, are sent to the first matrix quantizer 620_1. The first matrix quantizer 620_1 sends the LSP parameters for two frames via an LSP parameter adder 621 to a weighted distance calculating unit 623 for finding the weighted distance of the minimum value.
The distortion measure d_MQ1 during codebook search by the first matrix quantizer 620_1 is given by the equation (1):
where X_1 is the LSP parameter, X_1′ is the quantization value, and t and i run over the two frames and the P dimensions, respectively.
The weight w, in which weight limitation in the frequency axis and in the time axis is not taken into account, is given by the equation (2):
where x(t, 0) = 0 and x(t, P+1) = π regardless of t.
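The bodies of equations (1) and (2) did not survive extraction. A plausible reconstruction, consistent with the boundary conditions just stated and with common LSP weighted-distance practice (not necessarily the patent's exact notation), is:

```latex
d_{MQ1}(X_1, X_1') = \sum_{t=0}^{1} \sum_{i=1}^{P} w(t,i)\,
    \bigl( x_1(t,i) - x_1'(t,i) \bigr)^2
\tag{1, reconstructed}

w(t,i) = \frac{1}{x(t,i) - x(t,i-1)} + \frac{1}{x(t,i+1) - x(t,i)}
\tag{2, reconstructed}
```

The distortion measures of equations (3) to (7) below follow the same weighted-squared-error pattern, each applied to the corresponding stage's quantization error.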
The weight w of the equation (2) is also used for downstream side matrix quantization and vector quantization.
The calculated weighted distance is sent to a matrix quantization unit (MQ_1) 622 for matrix quantization. An 8-bit index output by this matrix quantization is sent to a signal switcher 690. The quantized value from the matrix quantization is subtracted in the adder 621 from the LSP parameters for two frames from the buffer 610. The weighted distance calculating unit 623 calculates the weighted distance every two frames, so that matrix quantization is carried out in the matrix quantization unit 622 and the quantization value minimizing the weighted distance is selected. An output of the adder 621 is sent to an adder 631 of the second matrix quantizer 620_2.
Similarly to the first matrix quantizer 620_1, the second matrix quantizer 620_2 performs matrix quantization. An output of the adder 621 is sent via the adder 631 to a weighted distance calculation unit 633, where the minimum weighted distance is calculated.
The distortion measure d_MQ2 during the codebook search by the second matrix quantizer 620_2 is given by the equation (3):
The weighted distance is sent to a matrix quantization unit (MQ_2) 632 for matrix quantization. An 8-bit index, output by the matrix quantization, is sent to the signal switcher 690. The weighted distance calculation unit 633 sequentially calculates the weighted distance using the output of the adder 631. The quantization value minimizing the weighted distance is selected. An output of the adder 631 is sent to the adders 651, 661 of the first vector quantizer 640_1 frame by frame.
The first vector quantizer 640_1 performs vector quantization frame by frame. An output of the adder 631 is sent frame by frame to each of the weighted distance calculating units 653, 663 via the adders 651, 661 for calculating the minimum weighted distance.
The difference between the quantization error X_2 and its quantized value X_2′ is a (10×2) matrix. If the difference is represented as X_2 − X_2′ = [x_31, x_32], the distortion measures d_VQ1 and d_VQ2 during codebook search by the vector quantization units 652, 662 of the first vector quantizer 640_1 are given by the equations (4) and (5):
The weighted distance is sent to a vector quantization unit (VQ_1) 652 and a vector quantization unit (VQ_2) 662 for vector quantization. Each 8-bit index output by this vector quantization is sent to the signal switcher 690. The quantized values are subtracted by the adders 651, 661 from the input two-frame quantization error vector. The weighted distance calculating units 653, 663 sequentially calculate the weighted distance, using the outputs of the adders 651, 661, for selecting the quantization value minimizing the weighted distance. The outputs of the adders 651, 661 are sent to adders 671, 681 of the second vector quantizer 640_2.
The distortion measures d_VQ3 and d_VQ4 during codebook search by the vector quantizers 672, 682 of the second vector quantizer 640_2, for x_41 = x_31 − x_31′ and x_42 = x_32 − x_32′, are given by the equations (6) and (7):
These weighted distances are sent to the vector quantizer (VQ_3) 672 and to the vector quantizer (VQ_4) 682 for vector quantization. The 8-bit output index data from the vector quantization are subtracted by the adders 671, 681 from the input quantization error vector for two frames. The weighted distance calculating units 673, 683 sequentially calculate the weighted distances, using the outputs of the adders 671, 681, for selecting the quantized value minimizing the weighted distances.
During codebook learning, learning is performed by the generalized Lloyd algorithm based on the respective distortion measures.
The distortion measures during codebook searching and during learning may be of the same or different values.
The 8-bit index data from the matrix quantization units 622, 632 and the vector quantization units 652, 662, 672 and 682 are switched by the signal switcher 690 and output at an output terminal 691.
Specifically, for a low bit rate, the outputs of the first matrix quantizer 620_1 carrying out the first matrix quantization step, the second matrix quantizer 620_2 carrying out the second matrix quantization step, and the first vector quantizer 640_1 carrying out the first vector quantization step are taken out, whereas, for a high bit rate, the output for the low bit rate is summed with an output of the second vector quantizer 640_2 carrying out the second vector quantization step, and the resulting sum is taken out.
This produces an index of 32 bits/40 msec and an index of 48 bits/40 msec for 2 kbps and 6 kbps, respectively.
The matrix quantization unit 620 and the vector quantization unit 640 perform weighting limited in the frequency axis and/or the time axis in conformity to characteristics of the parameters representing the LPC coefficients.
The weighting limited in the frequency axis in conformity to characteristics of the LSP parameters is first explained. If the number of orders is P = 10, the LSP parameters X(i) are grouped into three ranges, low, mid and high:

L_1 = {X(i) | 1 ≤ i ≤ 2}
L_2 = {X(i) | 3 ≤ i ≤ 6}
L_3 = {X(i) | 7 ≤ i ≤ 10}

If the weighting of the groups L_1, L_2 and L_3 is ¼, ½ and ¼, respectively, the weighting limited only in the frequency axis is given by the equations (8), (9) and (10):
The weighting of the respective LSP parameters is performed in each group only and such weight is limited by the weighting for each group.
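Equations (8) to (10) are likewise missing from the extracted text. Given that each group's weights are limited so that the groups contribute ¼, ½ and ¼ in total, a plausible form is a per-group normalization (an assumption, not the verbatim equations):

```latex
w'(i) = \frac{w(i)}{4 \sum_{j \in L_1} w(j)} \quad (i \in L_1), \qquad
w'(i) = \frac{w(i)}{2 \sum_{j \in L_2} w(j)} \quad (i \in L_2), \qquad
w'(i) = \frac{w(i)}{4 \sum_{j \in L_3} w(j)} \quad (i \in L_3)
```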
Looking in the time axis direction, the sum total over the respective frames is necessarily 1, so that the limitation in the time axis direction is frame-based. The weight limited only in the time axis direction is given by the equation (11):
where 1 ≤ i ≤ 10 and 0 ≤ t ≤ 1.
By this equation (11), weighting not limited in the frequency axis direction is carried out between the two frames having the frame numbers t = 0 and t = 1. This weighting, limited only in the time axis direction, is carried out between the two frames processed with matrix quantization.
During learning, the totality of frames used as learning data, having the total number T, is weighted in accordance with the equation (12):
where 1 ≤ i ≤ 10 and 0 ≤ t ≤ T.
The weighting limited in the frequency axis direction and in the time axis direction is explained next. If the number of orders is P = 10, the LSP parameters x(i, t) are grouped into three ranges, low, mid and high:

L_1 = {x(i, t) | 1 ≤ i ≤ 2, 0 ≤ t ≤ 1}
L_2 = {x(i, t) | 3 ≤ i ≤ 6, 0 ≤ t ≤ 1}
L_3 = {x(i, t) | 7 ≤ i ≤ 10, 0 ≤ t ≤ 1}

If the weights for the groups L_1, L_2 and L_3 are ¼, ½ and ¼, the weighting limited in the frequency axis direction and in the time axis direction is given by the equations (13), (14) and (15):
By these equations (13) to (15), weighting limited to three ranges in the frequency axis direction and to the two frames processed with matrix quantization in the time axis direction is carried out. This is effective both during codebook search and during learning.
During learning, weighting is applied to the totality of frames of the entire data. The LSP parameters x(i, t) are grouped into
L_{1}={x(i, t) | 1≦i≦2, 0≦t≦T}
L_{2}={x(i, t) | 3≦i≦6, 0≦t≦T}
L_{3}={x(i, t) | 7≦i≦10, 0≦t≦T}
for the low, mid and high ranges. If the weighting of the groups L_{1}, L_{2} and L_{3} is ¼, ½ and ¼, respectively, the weighting for the groups L_{1}, L_{2} and L_{3}, limited in the frequency axis, is given by the equations (16), (17) and (18):
By these equations (16) to (18), weighting can be performed for three ranges in the frequency axis direction and across the totality of frames in the time axis direction.
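By way of illustration only, the following Python sketch normalizes a hypothetical base weight vector w_base within the three frequency groups so that each group contributes exactly its assigned fraction (¼, ½, ¼); since the equations (8) to (18) themselves are not reproduced here, the exact per-parameter form is an assumption.

    import numpy as np

    def group_limited_weights(w_base, groups=((0, 2), (2, 6), (6, 10)),
                              fractions=(0.25, 0.5, 0.25)):
        # Normalize the base weights within each LSP group so that the
        # weights of each group sum to its assigned fraction; the base
        # weights w_base for the P=10 LSPs are hypothetical.
        w = np.asarray(w_base, dtype=float).copy()
        for (lo, hi), frac in zip(groups, fractions):
            w[lo:hi] *= frac / w[lo:hi].sum()  # limit the group total to frac
        return w  # sums to 1 across the frequency axis

    # Example: uniform base weights for the 10th-order LSPs
    print(group_limited_weights(np.ones(10)))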
In addition, the matrix quantization unit 620 and the vector quantization unit 640 perform weighting depending on the magnitude of changes in the LSP parameters. In V to UV or UV to V transient regions, which represent a minority of frames among the totality of speech frames, the LSP parameters change significantly due to differences in the frequency response between consonants and vowels. Therefore, the weighting shown by the equation (19) may be multiplied by the weighting W′(i, t) for carrying out the weighting placing emphasis on the transition regions.
The following equation (20):
may be used in place of the equation (19).
Thus the LSP quantization unit 134 executes two-stage matrix quantization and two-stage vector quantization to render the number of bits of the output index variable.
The basic structure of the vector quantization unit 116 is shown in
First, in the speech signal encoding device shown in
A variety of methods may be conceived for such data number conversion. In the present embodiment, dummy data interpolating the values from the last data in a block to the first data in the block, or preset data such as data repeating the last data or the first data in the block, are appended to the amplitude data of one block of an effective band on the frequency axis to enhance the number of data to N_{F}. Amplitude data equal in number to O_{s} times, such as eight times, the original number are then found by O_{s}-tuple, such as octuple, band-limited oversampling. The ((mMx+1)×O_{s}) amplitude data are linearly interpolated for expansion to a larger number N_{M}, such as 2048, and the N_{M} data are subsampled for conversion to the above-mentioned preset number M of data, such as 44 data. In effect, only the data necessary for formulating the M data ultimately required are calculated by oversampling and linear interpolation, without finding all of the above-mentioned N_{M} data.
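A minimal sketch of this data number conversion follows; it substitutes plain linear interpolation (numpy.interp) for the band-limited octuple oversampling, so it illustrates only the dimension conversion itself, and the function name is an assumption.

    import numpy as np

    def to_fixed_dimension(am, M=44):
        # Convert a variable number of harmonic amplitudes (mMx+1 values)
        # to the preset number M of values; a stand-in for the octuple
        # band-limited oversampling plus linear interpolation of the text.
        am = np.asarray(am, dtype=float)
        src = np.linspace(0.0, 1.0, num=len(am))   # original sample positions
        dst = np.linspace(0.0, 1.0, num=M)         # M target positions
        return np.interp(dst, src, am)

    # Example: 23 harmonic amplitudes expanded to a 44-dimensional vector
    x = to_fixed_dimension(np.random.rand(23), M=44)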
The vector quantization unit 116 for carrying out weighted vector quantization of
An output vector x of the spectral evaluation unit 148, that is envelope data having a preset number M, enters an input terminal 501 of the first vector quantization unit 500. This output vector x is quantized with weighted vector quantization by the vector quantization unit 502. Thus, a shape index output by the vector quantization unit 502 is output at an output terminal 503, while a quantized value x_{0}′ is output at an output terminal 504 and sent to adders 505, 513. The adder 505 subtracts the quantized value x_{0}′ from the source vector x to give a multiorder quantization error vector y.
The quantization error vector y is sent to a vector quantization unit 511 in the second vector quantization unit 510. This second vector quantization unit 511 is made up of plural vector quantizers, or two vector quantizers 511_{1}, 511_{2 }in
Thus, for the low bit rate, an output of the first vector quantization step by the first vector quantization unit 500 is taken out, whereas, for the high bit rate, an output of the first vector quantization step and an output of the second quantization step by the second quantization unit 510 are outputted.
Specifically, the vector quantizer 502 in the first vector quantization unit 500 in the vector quantization section 116 is of an L-order, such as 44-dimensional, two-stage structure, as shown in
That is, the sum of the output vectors of the 44-dimensional vector quantization codebook with the codebook size of 32, multiplied by a gain g_{i}, is used as the quantized value x_{0}′ of the 44-dimensional spectral envelope vector x. Thus, as shown in
The spectral envelope Am, obtained by the above MBE analysis of the LPC residuals and converted into a preset dimension, is denoted x. It is crucial how efficiently x can be quantized.
The quantization error energy E is defined by
where H denotes characteristics on the frequency axis of the LPC synthesis filter and W a matrix for weighting for representing characteristics for perceptual weighting on the frequency axis.
If the α-parameters obtained by the LPC analysis of the current frame are denoted α_{i} (1≦i≦P), the values at the L-dimension, for example 44-dimension, corresponding points are sampled from the frequency response of the equation (22):
For the calculations, 0s are stuffed next to the string 1, α_{1}, α_{2}, . . . , α_{p} to give a string 1, α_{1}, α_{2}, . . . , α_{p}, 0, 0, . . . , 0 of, e.g., 256-point data. Then, by 256-point FFT, (re^{2}[i]+im^{2}[i])^{1/2} is calculated for the points associated with the range from 0 to π, and the reciprocals of the results are found. These reciprocals are subsampled to L points, such as 44 points, and a matrix is formed having these L points as diagonal elements:
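A sketch of this computation of the diagonal elements is given below: the α-parameters are zero-stuffed to 256 points, FFTed, and the magnitudes over 0 to π are inverted and subsampled to L=44 points; the function name and variable names are illustrative.

    import numpy as np

    def lpc_magnitude_diag(alpha, L=44, nfft=256):
        # Diagonal elements of H: reciprocal magnitude response of the
        # LPC synthesis filter 1/A(z), sampled at L points over 0..pi.
        a = np.zeros(nfft)
        a[0] = 1.0
        a[1:len(alpha) + 1] = alpha          # string 1, a1, ..., aP, 0, ..., 0
        spec = np.fft.rfft(a)                # 129 bins covering 0..pi
        mag = 1.0 / np.abs(spec)             # reciprocals of (re^2+im^2)^(1/2)
        idx = np.rint(np.arange(1, L + 1) * (nfft // 2) / L).astype(int)
        return mag[idx]                      # subsampled to L diagonal elements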
A perceptually weighted matrix W is given by the equation (23):
where α_{i }is the result of the LPC analysis, and λa, λb are constants, such that λa=0.4 and λb=0.9.
The matrix W may be calculated from the frequency response of the above equation (23). For example, FFT is executed on the 256-point data 1, α_{1}λb, α_{2}λb^{2}, . . . , α_{p}λb^{p}, 0, 0, . . . , 0 to find (re^{2}[i]+im^{2}[i])^{1/2} for the domain from 0 to π, where 0≦i≦128. The frequency response of the denominator is found by 256-point FFT on 1, α_{1}λa, α_{2}λa^{2}, . . . , α_{p}λa^{p}, 0, 0, . . . , 0, as (re′^{2}[i]+im′^{2}[i])^{1/2} at 128 points for the domain from 0 to π, where 0≦i≦128. The frequency response of the equation (23) may then be found by
where 0≦i≦128. This is found for each associated point of, for example, the 44-dimensional vector, by the following method. More precisely, linear interpolation should be used, but in the following example the closest point is used instead.
That is,
wh[i]=wh_{0}[nint(128i/L)], where 1≦i≦L.
In the equation, nint(X) is a function which returns the integer closest to X.
As for H, h(1), h(2), . . . h(L) are found by a similar method. That is,
As another example, H(z)W(z) is first found and the frequency response is then found for decreasing the number of times of FFT. That is, the denominator of the equation (25):
is expanded to
256point data, for example, is produced by using a string of 1, β_{1}, β_{2}, . . . , β_{2p}, 0, 0, . . . , 0. Then, 256point FFT is executed, with the frequency response of the amplitude being
rms[i]=√{square root over (re″^{2}[i]+im″^{2}[i])}
where 0≦i≦128. From this,
where 0≦i≦128. This is found for each of the corresponding points of the L-dimensional vector. If the number of points of the FFT is small, linear interpolation should be used. However, the closest value is herein found by:
where 1≦i≦L. If a matrix having these as diagonal elements is W′,
The equation (26) is the same matrix as the above equation (24).
Alternatively, H(exp(jω))W(exp(jω)) may be directly calculated from the equation (25) with respect to ω=iπ/L, where 1≦i≦L, so as to be used for wh[i].
Alternatively, a suitable length, such as 40 points, of an impulse response of the equation (25) may be found and FFTed to find the frequency response of the amplitude which is employed.
The method for reducing the volume of processing in calculating characteristics of a perceptual weighting filter and an LPC synthesis filter is explained.
H(z)W(z) in the equation (25) is set to Q(z), that is,
and the impulse response of Q(z) is found and set to q(n), with 0≦n<L_{imp}, where L_{imp} is an impulse response length, for example, L_{imp}=40.
In the present embodiment, since P=10, the equation (a1) represents a 20th-order infinite impulse response (IIR) filter having 30 coefficients. By approximately L_{imp}×3P=1200 sum-of-product operations, the L_{imp} samples of the impulse response q(n) of the equation (a1) may be found. By stuffing 0s in q(n), q′(n), where 0≦n<2^{m}, is produced. If, for example, m=7, 2^{m}−L_{imp}=128−40=88 0s are appended to q(n) (0-stuffing) to provide q′(n).
This q′(n) is FFTed at 2^{m }(=128 points). The real and imaginary parts of the result of FFT are re[i] and im[i], respectively, where 0≦is ≦2^{m1}. From this,
rm[i]=√{square root over (re^{2}[i]+im^{2}[i])} (a2)
This is the amplitude frequency response of Q(z), represented by 2^{m−1} points. By linear interpolation of neighboring values of rm[i], the frequency response is represented by 2^{m} points. Although higher-order interpolation may be used in place of linear interpolation, the processing volume is correspondingly increased. If the array obtained by such interpolation is wlpc[i], where 0≦i<2^{m},
wlpc[2i]=rm[i], where 0≦i<2^{m−1} (a3)
wlpc[2i+1]=(rm[i]+rm[i+1])/2, where 0≦i<2^{m−1} (a4)
This gives wlpc[i], where 0≦i<2^{m}.
From this, wh[i] may be derived by
wh[i]=wlpc[nint(128i/L)], where 1≦i≦L (a5)
where nint(x) is a function which returns the integer closest to x. This indicates that W′ of the equation (26) may be found by executing only one 128-point FFT operation.
The processing volume required for an N-point FFT is generally (N/2)log_{2}N complex multiplications and Nlog_{2}N complex additions, which is equivalent to (N/2)log_{2}N×4 real-number multiplications and Nlog_{2}N×2 real-number additions.
By such method, the volume of the sum-of-product operations for finding the above impulse response q(n) is 1200. On the other hand, the processing volume of the FFT for N=2^{7}=128 is approximately 128/2×7×4=1792 real-number multiplications and 128×7×2=1792 real-number additions. Counting one sum-of-product (multiply-add) operation as one, the processing volume of the FFT is approximately 1792. As for the processing for the equation (a2), the square sum operation, the processing volume of which is approximately 3, and the square root operation, the processing volume of which is approximately 50, are executed 2^{m−1}=2^{6}=64 times, so that the processing volume for the equation (a2) is
64×(3+50)=3392.
On the other hand, the interpolation of the equation (a4) is on the order of 64×2=128.
Thus, in sum total, the processing volume is equal to 1200+1792+3392+128=6512.
Since the weight matrix W is used in a pattern of W′^{T}W, only rm^{2}[i] may be found and used without executing the processing for square root. In this case, the above equations (a3) and (a4) are executed for rm^{2}[i] instead of for rm[i], while it is not wh[i] but wh^{2}[i] that is found by the above equation (a5). The processing volume for finding rm^{2}[i] in this case is 192, so that, in sum total, the processing volume becomes equal to
1200+1792+192+128=3312.
If the processing from the equation (25) to the equation (26) is executed directly, the sum total of the processing volume is on the order of approximately 12160. That is, a 256-point FFT is executed for both the numerator and the denominator of the equation (25). Each 256-point FFT is on the order of 256/2×8×4=4096. On the other hand, the processing for wh_{0}[i] involves two square sum operations, each having the processing volume of 3, a division having the processing volume of approximately 25, and a square root operation having the processing volume of approximately 50. If the square root calculation is omitted in the manner described above, the processing volume is on the order of 128×(3+3+25)=3968. Thus, in sum total, the processing volume is equal to 4096×2+3968=12160.
Thus, if the above equation (25) is directly calculated to find wh_{0}^{2}[i] in place of wh_{0}[i], a processing volume on the order of 12160 is required, whereas, if the calculations of the equations (a1) to (a5) are executed, the processing volume is reduced to approximately 3312, meaning that the processing volume is reduced to about one-fourth. The weight calculation procedure with the reduced processing volume may be summarized as shown in a flowchart of
Referring to
These calculations for finding the weights for weighted vector quantization can be applied not only to speech encoding but also to the encoding of audible signals, such as audio signals. That is, in audible signal encoding in which the speech or audio signal is represented by DFT coefficients, DCT coefficients or MDCT coefficients as frequency-domain parameters, or by parameters derived from these, such as amplitudes of harmonics or amplitudes of harmonics of LPC residuals, the parameters may be quantized by weighted vector quantization in which the impulse response of the weight transfer function, or the impulse response truncated partway and stuffed with 0s, is FFTed and the weight is calculated based on the results of the FFT. It is preferred in this case that, after FFTing the weight impulse response, the FFT coefficients themselves (re, im), where re and im represent the real and imaginary parts of the coefficients, respectively, re^{2}+im^{2}, or (re^{2}+im^{2})^{1/2} be interpolated and used as the weight.
If the equation (21) is rewritten using the matrix W′ of the above equation (26), that is, the frequency response of the weighted synthesis filter, we obtain:
E=∥W_{k}′(x−g_{k}(s_{0c}+s_{1k}))∥^{2 }
The method for learning the shape codebook and the gain codebook is now further explained.
The expected value of the distortion is minimized over all frames k for which the code vector s_{0c} is selected for CB0. If there are M such frames, it suffices if
is minimized. In the equation (28), W_{k}′, x_{k}, g_{k} and s_{1k} denote the weighting for the k′th frame, an input to the k′th frame, the gain of the k′th frame, and an output of the codebook CB1 for the k′th frame, respectively.
For minimizing the equation (28),
Hence,
so that
where ( )^{−1} denotes an inverse matrix and W_{k}′^{T} denotes a transposed matrix of W_{k}′.
Next, gain optimization is considered.
The expected value of the distortion concerning the k′th frame selecting the code word g_{c} of the gain is given by:
Solving
we obtain
and
The above equations (31) and (32) give the optimum centroid conditions for the shapes s_{0i}, s_{1j} and the gain g_{l}, for 0≦i≦31, 0≦j≦31 and 0≦l≦31, that is, an optimum decoder output. Meanwhile, s_{1j} may be found in the same way as s_{0i}.
The optimum encoding condition, that is the nearest neighbor condition, is considered.
That is, s_{0i} and s_{1j} minimizing the distortion measure of the above equation (27), E=∥W′(x−g_{l}(s_{0i}+s_{1j}))∥^{2}, are found each time the input x and the weight matrix W′ are given, that is, on the frame-by-frame basis.
Intrinsically, E would be found in round robin fashion for all combinations of g_{l} (0≦l≦31), s_{0i} (0≦i≦31) and s_{1j} (0≦j≦31), that is, 32×32×32=32768 combinations, in order to find the set of s_{0i}, s_{1j} giving the minimum value of E. Since this requires extensive calculations, however, the shape and the gain are searched sequentially in the present embodiment, while round robin search is used for the combination of s_{0i} and s_{1j}. There are 32×32=1024 combinations for s_{0i} and s_{1j}. In the following description, s_{0i}+s_{1j} is indicated as s_{m} for simplicity.
The above equation (27) becomes E=∥W′(x−g_{l}s_{m})∥^{2}. If, for further simplicity, we set x_{w}=W′x and s_{w}=W′s_{m}, we obtain
E=∥x_{w}−g_{l}s_{w}∥^{2} (33)
Therefore, if gl can be made sufficiently accurate, a search can be performed in two steps of
(1) searching for s_{w} which will maximize
and
(2) searching for g_{l }which is closest to
If the above is rewritten using the original notation,
(1)′ a search is made for the set of s_{0i} and s_{1j} which will maximize
and
(2)′ a search is made for g_{l} which is closest to
The above equation (35) represents an optimum encoding condition (nearest neighbor condition).
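Under the equation (33), this two-step search may be sketched as follows, assuming 32-entry shape codebooks CB0 and CB1 and a 32-entry gain codebook, with W′ applied to the input and to each candidate shape; the function name and array layout are assumptions.

    import numpy as np

    def search_shape_gain(x, W, cb0, cb1, cbg):
        # Sequential search: round robin over the 32x32 shape pairs
        # s0i+s1j maximizing (xw . sw)^2 / ||sw||^2, then the gain gl
        # closest to the ideal gain (xw . sw) / ||sw||^2.
        xw = W @ x
        best, besti, bestj = -np.inf, 0, 0
        for i, s0 in enumerate(cb0):
            for j, s1 in enumerate(cb1):
                sw = W @ (s0 + s1)
                num = xw @ sw
                val = num * num / (sw @ sw)     # maximize over 1024 pairs
                if val > best:
                    best, besti, bestj = val, i, j
        sw = W @ (cb0[besti] + cb1[bestj])
        g_ref = (xw @ sw) / (sw @ sw)           # ideal gain
        l = int(np.argmin(np.abs(cbg - g_ref))) # nearest gain code word
        return besti, bestj, l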
Using the conditions (centroid conditions) of the equations (31) and (32) and the condition of the equation (35), the codebooks (CB0, CB1 and CBg) can be trained simultaneously with the use of the so-called generalized Lloyd algorithm (GLA).
In the present embodiment, W′ divided by the norm of the input x is used as W′. That is, W′/∥x∥ is substituted for W′ in the equations (31), (32) and (35).
Meanwhile, the weighting W′ used for perceptual weighting at the time of vector quantization by the vector quantizer 116 is defined by the above equation (26). However, a weighting W′ taking temporal masking into account can also be found, by finding the current weighting W′ in which past W′ has been taken into account.
The values of wh(1), wh(2), . . . , wh(L) in the above equation (26), as found at the time n, that is, at the n′th frame, are indicated as whn(1), whn(2), . . . , whn(L), respectively.
If the weights at time n, taking past values into account, are defined as An(i), where 1≦i≦L,
where λ may be set to, for example, λ=0.2. A matrix having, as diagonal elements, the An(i), with 1≦i≦L, thus found may be used as the above weighting.
The shape index values s_{0i}, s_{1j}, obtained by the weighted vector quantization in this manner, are output at output terminals 520, 522, respectively, of
The adder 505 subtracts the quantized value from the spectral envelope vector x to generate a quantization error vector y. Specifically, this quantization error vector y is sent to the vector quantization unit 511 so as to be dimensionally split and quantized by vector quantizers 511_{1 }to 511_{8 }with weighted vector quantization. The second vector quantization unit 510 uses a larger number of bits than the first vector quantization unit 500. Consequently, the memory capacity of the codebook and the processing volume (complexity) for codebook searching are increased significantly. Thus, it becomes nearly impossible to carry out vector quantization with the 44dimension which is the same as that of the first vector quantization unit 500. Therefore, the vector quantization unit 511 in the second vector quantization unit 510 is made up of plural vector quantizers and the input quantized values are dimensionally split into plural lowdimensional vectors for performing weighted vector quantization.
The relation between the quantized values y_{0} to y_{7} used in the vector quantizers 511_{1} to 511_{8}, the number of dimensions and the number of bits is shown in the following Table 2.
The index values Id_{vq0 }to Id_{vq7 }output from the vector quantizers 511_{1 }to 511_{8 }are output at output terminals 523_{1 }to 523_{8}. The sum of bits of these index data is 72.
If a value obtained by connecting the output quantized values y_{0}′ to y_{7}′ of the vector quantizers 511_{1 }to 511_{8 }in the dimensional direction is y′, the quantized values y′ and x_{0}′ are summed by the adder 513 to give a quantized value x_{1}′. Therefore, the quantized value x_{1}′ is represented by
That is, the ultimate quantization error vector is y′−y.
If the quantized value x_{1}′ from the second vector quantizer 510 is to be decoded, the speech signal decoding apparatus does not need the quantized value x_{0}′ from the first quantization unit 500. It does, however, need the index data from the first quantization unit 500 and the second quantization unit 510.
The learning method and code book search in the vector quantization section 511 will now be further explained.
As for the learning method, the quantization error vector y is divided into eight low-dimension vectors y_{0} to y_{7}, using the weight W′, as shown in Table 2. If the weight W′ is a matrix having 44-point subsampled values as diagonal elements:
the weight W′ is split into the following eight matrices:
y and W′, thus split into low dimensions, are termed y_{i} and W_{i}′, where 1≦i≦8, respectively.
The distortion measure E is defined as
E=∥W_{i}′(y_{i}−s)∥^{2} (37)
The codebook vector s is the result of quantization of y_{i}. The code vector of the codebook minimizing the distortion measure E is searched for.
In the codebook learning, further weighting is performed using the generalized Lloyd algorithm (GLA). The optimum centroid condition for learning is first explained. If there are M input vectors y which have selected the code vector s as the optimum quantization result, and the training data is y_{k}, the expected value of distortion J is given by the equation (38), minimizing the distortion on weighting with respect to all frames k:
Solving
we obtain
Taking transposed values of both sides, we obtain
Therefore,
In the above equation (39), s is an optimum representative vector and represents an optimum centroid condition.
As for the optimum encoding condition, it suffices to search for s minimizing the value of ∥W_{i}′(y_{i}−s)∥^{2}. W_{i}′ during searching need not be the same as W_{i}′ during learning and may be a non-weighted matrix:
By constructing the vector quantization unit 116 in the speech signal encoder using twostage vector quantization units, it becomes possible to render the number of output index bits variable.
The second encoding unit 120 employing the abovementioned CELP encoder is comprised of multistage vector quantization processors as shown in
Referring to
In the twostage second encoding units 120_{1 }and 120_{2 }shown in
In the arrangement of
The perceptual weighting filter 304 finds data for perceptual weighting, which is the same as that produced by the perceptually weighting filter calculation circuit 139 of
are searched.
Although s and g minimizing the quantization error energy E may be fullsearched, the following method may be used for reducing the amount of calculations.
The first method is to search for the shape vector s minimizing E_{s} defined by the following equation (41):
From s obtained by the first method, the ideal gain is as shown by the equation (42):
Therefore, as the second method, such g minimizing the equation (43):
Eg=(g_{ref}−g)^{2} (43)
is searched.
Since E is a quadratic function of g, the g minimizing Eg also minimizes E.
From s and g obtained by the first and second methods, the quantization error vector e can be calculated by the following equation (44):
e=r−gs_{syn} (44)
This is quantized as a reference input to the second-stage second encoding unit 120_{2}, as in the first stage.
That is, the signals supplied to the terminals 305 and 307 are directly supplied from the perceptually weighted synthesis filter 312 of the first-stage second encoding unit 120_{1} to a perceptually weighted synthesis filter 322 of the second-stage second encoding unit 120_{2}. The quantization error vector e found by the first-stage second encoding unit 120_{1} is supplied to a subtractor 323 of the second-stage second encoding unit 120_{2}.
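The following sketch illustrates this sequential shape-gain search and the hand-over of the quantization error e=r−gs_{syn} to the second stage; the perceptually weighted synthesis is abstracted as a matrix H applied to each shape vector, and all names (celp_stage, shapes, gains) are illustrative rather than the patent's own.

    import numpy as np

    def celp_stage(r, H, shapes, gains):
        # One stage: pick the shape maximizing (r . s_syn)^2/||s_syn||^2
        # (first method, eq. (41)), the gain closest to the ideal gain
        # (second method, eqs. (42)-(43)), and return the error (44).
        best, s_best = -np.inf, None
        for s in shapes:
            s_syn = H @ s                           # synthesized shape vector
            val = (r @ s_syn) ** 2 / (s_syn @ s_syn)
            if val > best:
                best, s_best = val, s_syn
        g_ref = (r @ s_best) / (s_best @ s_best)    # ideal gain, eq. (42)
        g = gains[np.argmin(np.abs(gains - g_ref))] # minimizes Eg, eq. (43)
        return r - g * s_best                       # e = r - g*s_syn, eq. (44)

    # The first-stage error becomes the second-stage reference input:
    # e1 = celp_stage(r, H, shapes1, gains1)
    # e2 = celp_stage(e1, H, shapes2, gains2)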
At step S5 of
The shape index output of the stochastic codebook 310 and the gain index output of the gain codebook 315 of the firststage second encoding unit 120_{1 }and the index output of the stochastic codebook 320 and the index output of the gain codebook 325 of the secondstage second encoding unit 120_{2 }are sent to an index output switching circuit 330. If 23 bits are outputted from the second encoding unit 120, the index data of the stochastic codebooks 310, 320 and the gain codebooks 315, 325 of the firststage and secondstage second encoding units 120_{1}, 120_{2 }are summed and outputted. If 15 bits are outputted, the index data of the stochastic codebook 310 and the gain codebook 315 of the firststage second encoding unit 120_{1 }are outputted.
The filter state is then updated for calculating zero input response output as shown at step S6.
In the present embodiment, the number of index bits of the secondstage second encoding unit 120_{2 }is as small as 5 for the shape vector, while that for the gain is as small as 3. If suitable shape and gain are not present in this case in the codebook, the quantization error is likely to be increased, instead of being decreased.
Although a gain of 0 may be provided for preventing this problem from occurring, there are only three bits for the gain, and if one of these codes is set to 0, the quantizer performance is significantly deteriorated. In consideration of this, an all-0 vector is provided for the shape vector, to which a larger number of bits has been allocated. The above-mentioned search is performed with the exclusion of the all-zero vector, and the all-zero vector is selected if the quantization error has ultimately been increased; the gain in this case is arbitrary. This makes it possible to prevent the quantization error from being increased in the second-stage second encoding unit 120_{2}.
Although the two-stage arrangement has been described above, the number of stages may be larger than 2. In such case, when the vector quantization by the first-stage closed-loop search has come to a close, quantization of the N′th stage, where 2≦N, is carried out with the quantization error of the (N−1)st stage as a reference input, and the quantization error of the N′th stage is used as a reference input to the (N+1)st stage.
It is seen from
The code vectors of the stochastic codebook (shape vectors) can be generated, for example, by clipping the so-called Gaussian noise. Specifically, the codebook may be generated by generating the Gaussian noise, clipping the Gaussian noise with a suitable threshold value and normalizing the clipped Gaussian noise.
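A minimal sketch of this initialization, with an assumed threshold value and unit-norm normalization:

    import numpy as np

    def clipped_gaussian_codebook(n_vectors, dim, threshold=1.0, seed=0):
        # Initial stochastic codebook: Gaussian noise clipped at a
        # threshold and normalized to unit norm.
        rng = np.random.default_rng(seed)
        cb = rng.standard_normal((n_vectors, dim))
        cb = np.clip(cb, -threshold, threshold)   # clipping
        return cb / np.linalg.norm(cb, axis=1, keepdims=True)  # normalizing

With the normalization, a larger threshold leaves a vector with several dominant peaks, while a smaller threshold yields a vector close to the Gaussian noise, matching the behavior described below.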
However, there are a variety of types of speech. For example, the Gaussian noise can cope with speech of consonant sounds close to noise, such as “sa, shi, su, se and so”, but cannot cope with speech of acutely rising consonants, such as “pa, pi, pu, pe and po”.
According to the present invention, the Gaussian noise is applied to some of the code vectors, while the remaining code vectors are obtained by learning, so that both the consonants having sharply rising sounds and the consonants close to noise can be coped with. If, for example, the threshold value is increased, a vector is obtained which has several large peaks, whereas, if the threshold value is decreased, the code vector approximates the Gaussian noise itself. Thus, by increasing the variation in the clipping threshold value, it becomes possible to cope with consonants having sharp rising portions, such as “pa, pi, pu, pe and po”, as well as with consonants close to noise, such as “sa, shi, su, se and so”, thereby increasing clarity.
Thus, an initial codebook is prepared by clipping the Gaussian noise, and a suitable number of non-learning code vectors are set. The non-learning code vectors are selected in the order of increasing variance value, for coping with consonants close to noise, such as “sa, shi, su, se and so”. The vectors found by learning use the LBG algorithm. The encoding under the nearest neighbor condition uses both the fixed code vectors and the code vectors obtained by learning. In the centroid condition, only the code vectors to be learned are updated. Thus the code vectors to be learned can cope with sharply rising consonants, such as “pa, pi, pu, pe and po”.
An optimum gain may be learned for these code vectors by a conventional learning process.
In
At the next step S11, the initial codebook by clipping the Gaussian noise is generated. At step S12, part of the code vectors are fixed as nonlearning code vectors.
At the next step S13, encoding is done using the above codebook. At step S14, the error is calculated. At step S15, it is judged whether (D_{n−1}−D_{n})/D_{n}<ε or n=n_{max}. If the result is YES, processing is terminated. If the result is NO, processing transfers to step S16.
At step S16, the code vectors not used for encoding are processed. At the next step S17, the codebooks are updated. At step S18, the number of times of learning n is incremented before returning to step S13.
In the speech encoder of
The V/UV discrimination unit 115 performs V/UV discrimination of a frame under consideration based on an output of the orthogonal transform circuit 145, an optimum pitch from the high precision pitch search unit 146, spectral amplitude data from the spectral evaluation unit 148, a maximum normalized autocorrelation value r(p) from the openloop pitch search unit 141, and a zerocrossing count value from the zerocrossing counter 142. The boundary position of the bandbased results of V/UV decision, similar to that used for MBE, is also used as one of the conditions for the frame under consideration.
The condition for V/UV discrimination for the MBE, employing the results of bandbased V/UV discrimination, is now further explained.
The parameter or amplitude |A_{m}| representing the magnitude of the m′th harmonics in the case of MBE may be represented by
In this equation, |S(j)| is the spectrum obtained by DFTing the LPC residuals, and |E(j)| is the spectrum of the basic signal, specifically a 256-point Hamming window, while a_{m}, b_{m} are lower and upper limit values, represented by an index j, of the frequency corresponding to the m′th band, corresponding in turn to the m′th harmonics. For band-based V/UV discrimination, a noise-to-signal ratio (NSR) is used. The NSR of the m′th band is represented by
If the NSR value is larger than a preset threshold, such as 0.3, that is, if the error is large, it may be judged that the approximation of |S(j)| by |A_{m}||E(j)| in the band under consideration is not good, that is, that the excitation signal E(j) is not appropriate as the base. The band under consideration is then determined to be unvoiced (UV). Otherwise, it may be judged that the approximation has been done fairly well, and the band is determined to be voiced (V).
It is noted that the NSR of the respective bands (harmonics) represents the quality of the harmonic approximation from one harmonic to another. The gain-weighted sum of the NSRs of the harmonics is defined as NSR_{all} by:
NSR_{all}=(Σ_{m}A_{m}NSR_{m})/(Σ_{m}A_{m})
The rule base used for V/UV discrimination is determined depending on whether this spectral similarity NSR_{all} is larger or smaller than a certain threshold value. This threshold may be set to Th_{NSR}=0.3. The rule base is concerned with the maximum value of the autocorrelation of the LPC residuals, the frame power and the zero-crossing count. In the case of the rule base used for NSR_{all}<Th_{NSR}, the frame under consideration becomes V if a rule applies, and UV if no rule applies.
A specified rule is as follows:
For NSR_{all}<TH_{NSR},
if numZeroXP<24, frmPow>340 and r0>0.32, then the frame under consideration is V;
For NSR_{all}≧TH_{NSR},
if numZeroXP>30, frmPow<900 and r0>0.23, then the frame under consideration is UV;
wherein respective variables are defined as follows:
numZeroXP: number of zerocrossings per frame
frmPow: frame power
r0: maximum value of autocorrelation
The rule base, a set of specified rules such as those given above, is consulted in making the V/UV discrimination.
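A direct transcription of this rule base into code follows; only the one representative rule per case given above is implemented, and the fall-back decision for the NSR_{all}≧Th_{NSR} case (V when the UV rule does not fire) is an assumption.

    def vuv_decision(nsr_all, num_zero_xp, frm_pow, r0, th_nsr=0.3):
        # Rule-based V/UV discrimination for one frame.
        if nsr_all < th_nsr:
            # V rule: the frame is V if the rule applies, UV otherwise
            if num_zero_xp < 24 and frm_pow > 340 and r0 > 0.32:
                return "V"
            return "UV"
        # UV rule: the frame is UV if the rule applies (assumed V otherwise)
        if num_zero_xp > 30 and frm_pow < 900 and r0 > 0.23:
            return "UV"
        return "V"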
The arrangement of essential portions and the operation of the speech signal decoder of
The LPC synthesis filter 214 is separated into the synthesis filter 236 for the voiced speech (V) and the synthesis filter 237 for the unvoiced speech (UV), as previously explained. If LSPs were continuously interpolated every 20 samples, that is, every 2.5 msec, without separating the synthesis filter and without making the V/UV distinction, LSPs of totally different properties would be interpolated at V to UV or UV to V transient portions. The result is that LPC coefficients of UV and V would be applied to residuals of V and UV, respectively, such that a strange sound tends to be produced. For preventing such ill effects from occurring, the LPC synthesis filter is separated into V and UV, and LPC coefficient interpolation is performed independently for V and UV.
The method for coefficient interpolation of the LPC filters 236, 237 in this case is now further explained. Specifically, LSP interpolation is switched depending on the V/UV state, as shown in Table 3.
Taking an example of 10th-order LPC analysis, the equal-interval LSP is the LSP corresponding to α-parameters for flat filter characteristics and a gain equal to unity, that is, α_{0}=1, α_{1}=α_{2}= . . . =α_{10}=0.
Such 10th-order LSP is the LSP corresponding to a completely flat spectrum, with the LSPs arrayed at equal intervals at 11 equally spaced positions between 0 and π. In such case, the entire band gain of the synthesis filter has minimum through-characteristics.
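For instance, the equal-interval LSPs for the 10th-order case are simply the 10 interior points of the 11 equal divisions of the interval from 0 to π:

    import numpy as np

    # 10 equal-interval LSPs at positions i*pi/11, i = 1..10, corresponding
    # to a completely flat spectrum (alpha_0 = 1, alpha_1..alpha_10 = 0)
    equal_interval_lsp = np.arange(1, 11) * np.pi / 11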
As for the unit of interpolation, it is 2.5 msec (20 samples) for the coefficient of 1/H_{v(z)}, while it is 10 msec (80 samples) for the bit rate of 2 kbps and 5 msec (40 samples) for the bit rate of 6 kbps, respectively, for the coefficient of 1/H_{uv(z)}. For UV, since the second encoding unit 120 performs waveform matching employing an analysis by synthesis method, interpolation with the LSPs of the neighboring V portions may be performed without performing interpolation with the equal interval LSPs. It is noted that, in the encoding of the UV portion in the second encoding portion 120 of
Outputs of these LPC synthesis filters 236, 237 are sent to the respective independently provided postfilters 238v, 238u. The intensity and the frequency response of the postfilters are set independently, and may thus be set to different values for V and for UV.
The windowing of junction portions between the V and UV portions of the LPC residual signals, that is, the excitation as the LPC synthesis filter input, is now further explained. This windowing is carried out by the sinusoidal synthesis circuit 215 of the voiced speech synthesis unit 211 and by the windowing circuit 223 of the unvoiced speech synthesis unit 220. The method for synthesis of the V-portion of the excitation is explained in detail in JP Patent Application No. 4-91422, assigned to the present Assignee, while the method for fast synthesis of the V-portion of the excitation is explained in detail in JP Patent Application No. 6-198451, similarly assigned to the present Assignee. In the present illustrative embodiment, this fast synthesis method is used for generating the excitation of the V-portion.
In the voiced (V) portion, in which sinusoidal synthesis is performed by interpolation using the spectrum of the neighboring frames, all waveforms between the n′th and (n+1)st frames can be produced. Nevertheless, for the signal portion astride the V and UV portions, such as the (n+1)st frame and the (n+2)nd frame in
The noise synthesis and the noise addition at the voiced (V) portion is now further explained. These operations are performed by the noise synthesis circuit 216, weighted overlapandadd circuit 217 and by the adder 218 of
The processing by this noise synthesis circuit 216 is carried out in much the same way as in synthesis of the unvoiced sound by, for example, multiband encoding (MBE).
That is, referring to
In the embodiment of
Specifically, a method of generating random numbers in a range of ±x and handling the generated random numbers as the real and imaginary parts of the FFT spectrum may be employed, or alternatively a method of generating positive random numbers ranging from 0 to a maximum number (max) and handling them as the amplitude of the FFT spectrum, while generating random numbers ranging from −π to +π and handling these random numbers as the phase of the FFT spectrum.
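A sketch of the second method follows, with an assumed amplitude bound and frame size; the Hermitian symmetry of the spectrum is handled implicitly by numpy's irfft.

    import numpy as np

    def synth_noise(n=256, amp_max=1.0, seed=0):
        # Generate a noise excitation frame by giving the FFT spectrum a
        # random amplitude in [0, amp_max] and a random phase in [-pi, pi].
        rng = np.random.default_rng(seed)
        bins = n // 2 + 1
        amp = rng.uniform(0.0, amp_max, bins)       # random amplitudes
        phase = rng.uniform(-np.pi, np.pi, bins)    # random phases
        spec = amp * np.exp(1j * phase)
        return np.fft.irfft(spec, n)                # time-domain noise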
This renders it possible to eliminate the STFT processor 402 of
The noise amplitude control circuit 410 has a basic structure shown for example in
Among these functions f_{1}(Pch, Am[i]) are:
f_{1}(Pch, Am[i])=0, where 0≦i<Noise_b×I,
f_{1}(Pch, Am[i])=Am[i]×noise_mix, where Noise_b×I≦i<I, and
noise_mix=K×Pch/2.0.
It is noted that the maximum value of noise_mix is noise_mix_max, at which it is clipped. As an example, K=0.02, noise_mix_max=0.3 and Noise_b=0.7, where Noise_b is a constant which determines to which portion of the entire band the noise is to be added. In the present embodiment, the noise is added in the frequency range higher than the 70% position, that is, if fs=8 kHz, the noise is added in the range from 4000×0.7=2800 Hz up to 4000 Hz.
As a second specified embodiment for the noise synthesis and addition, the case in which the noise amplitude Am_noise[i] is a function f_{2}(Pch, Am[i], Amax) of three of the above four parameters, namely the pitch lag Pch, the spectral amplitude Am[i] and the maximum spectral amplitude Amax, is explained.
Among these functions f_{2}(Pch, Am[i], Amax) are:
f_{2}(Pch, Am[i], Amax)=0, where 0≦i<Noise_b×I,
f_{2}(Pch, Am[i], Amax)=Am[i]×noise_mix, where Noise_b×I≦i<I, and
noise_mix=K×Pch/2.0.
It is noted that the maximum value of noise_mix is noise_mix_max and, as an example, K=0.02, noise_mix_max=0.3 and Noise_b=0.7.
If Am[i]×noise_mix>Amax×C×noise_mix, then f_{2}(Pch, Am[i], Amax)=Amax×C×noise_mix, where the constant C is set to 0.3 (C=0.3). Since the level can be prohibited by this conditional equation from being excessively large, the above values of K and noise_mix_max can be increased further, and the noise level can be raised further if the high-range level is higher.
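A sketch of f_{2} with the constants of the text (K=0.02, noise_mix_max=0.3, Noise_b=0.7, C=0.3):

    import numpy as np

    def f2(pch, am, K=0.02, noise_mix_max=0.3, noise_b=0.7, C=0.3):
        # Noise amplitude Am_noise[i] for the second embodiment: zero below
        # the 70% band position, Am[i]*noise_mix above it, with a ceiling
        # of Amax*C*noise_mix on each value.
        am = np.asarray(am, dtype=float)
        I = len(am)
        noise_mix = min(K * pch / 2.0, noise_mix_max)  # clipped at noise_mix_max
        out = np.zeros(I)
        hi = slice(int(noise_b * I), I)                # Noise_b*I <= i < I
        out[hi] = np.minimum(am[hi] * noise_mix,
                             am.max() * C * noise_mix)
        return out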
As a third specified embodiment of the noise synthesis and addition, the above noise amplitude Amnoise[i] may be a function of all of the above four parameters, that is f_{3}(Pch, Am[i], Amax, Lev).
Specified examples of the function f_{3}(Pch, Am[i], Amax, Lev) are basically similar to those of the above function f_{2}(Pch, Am[i], Amax). The residual signal level Lev is the root mean square (RMS) of the spectral amplitudes Am[i], or the signal level as measured on the time axis. The difference from the second specified embodiment is that the values of K and noise_mix_max are set so as to be functions of Lev. That is, if Lev is smaller or larger, the values of K and noise_mix_max are set to larger or smaller values, respectively. Alternatively, the value of Lev may be set so as to be inversely proportionate to the values of K and noise_mix_max.
The postfilters 238v, 238u will now be further explained.
If the coefficients of the denominators of Hv(z) and Huv(z) of the LPC synthesis filters, that is the α-parameters, are expressed as α_{i}, the characteristics PF(z) of the spectrum shaping filter 440 may be expressed by:
The fractional portion of this equation represents characteristics of the formant emphasizing filter, while the portion (1−kz^{−1}) represents characteristics of a high-range emphasizing filter. β, γ and k are constants, such that, for example, β=0.6, γ=0.8 and k=0.3.
The gain of the gain adjustment circuit 443 is given by:
In the above equation, x(i) and y(i) represent an input and an output of the spectrum shaping filter 440, respectively.
It is noted that, while the coefficient updating period of the spectrum shaping filter 440 is 20 samples or 2.5 msec as is the updating period for the αparameter which is the coefficient of the LPC synthesis filter, the updating period of the gain G of the gain adjustment circuit 443 is 160 samples or 20 msec.
By setting the gain updating period of the gain adjustment circuit 443 so as to be longer than the coefficient updating period of the spectrum shaping filter 440 as the postfilter, it becomes possible to prevent ill effects otherwise caused by gain adjustment fluctuations.
That is, in a generic postfilter, the coefficient updating period of the spectrum shaping filter is set so as to be equal to the gain updating period, and, if the gain updating period is selected to be 20 samples or 2.5 msec, variations in the gain values are caused even within one pitch period, thus producing click noise. In the present embodiment, by setting the gain switching period to be longer, for example, equal to one frame or 160 samples or 20 msec as shown in
By way of gain junction processing between neighboring frames, the filter coefficients and the gain of the previous frame and those of the current frame are multiplied by triangular windows
W(i)=i/20, where 0≦i≦20, and
1−W(i), where 0≦i≦20,
for fade-in and fade-out, and the resulting products are summed together.
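A sketch of this junction processing follows, cross-fading the previous and current frame outputs over the first 20 samples; the gain G of the gain adjustment circuit is shown here in an assumed RMS-ratio form, since the equation itself is not reproduced above.

    import numpy as np

    def postfilter_gain(x, y):
        # Gain of the gain adjustment circuit, assumed here to be the
        # RMS ratio of the filter input x to the filter output y.
        return np.sqrt(np.sum(x * x) / np.sum(y * y))

    def crossfade_junction(prev_out, cur_out, n=20):
        # Fade-in/fade-out over n samples with the triangular windows
        # W(i)=i/n and 1-W(i), then sum the products.
        out = cur_out.copy()
        w = np.arange(n) / n                  # W(i) = i/20
        out[:n] = w * cur_out[:n] + (1.0 - w) * prev_out[:n]
        return out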
The above-described signal encoding and signal decoding apparatus may be used as a speech codec employed in, for example, a portable communication terminal or a portable telephone set shown in
The present invention is not limited to the abovedescribed embodiments. For example, the construction of the speech analysis side (encoder) of