LANGUAGE MODEL TRAINING METHOD AND DEVICE
Abstract
The present disclosure provides a language model training method and device, including: obtaining a universal language model in an offline training mode, and clipping the universal language model to obtain a clipped language model; obtaining a log language model of logs within a preset time period in an online training mode; fusing the clipped language model with the log language model to obtain a first fusion language model used for carrying out first time decoding; and fusing the universal language model with the log language model to obtain a second fusion language model used for carrying out second time decoding. The method solves the problem that a language model trained offline, as in the prior art, has poor coverage of new corpora, which reduces the language recognition rate.
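The abstract describes a two-pass scheme: a compact fusion model drives a fast first decoding pass, and the fuller fusion model re-evaluates the surviving hypotheses in a second pass. A minimal sketch of that division of labor, assuming bigram models stored as {bigram: probability} dicts and a toy n-best rescoring loop (the patent does not specify the decoder, the model representation, or any of the names used here):

```python
import math

def score(sentence, lm, floor=1e-8):
    """Sum of log bigram probabilities of `sentence` under the model `lm`."""
    words = sentence.split()
    return sum(math.log(lm.get((w1, w2), floor))
               for w1, w2 in zip(words, words[1:]))

# Toy fusion models; in the claimed method these would come from fusing the
# clipped/universal model with the log model.
first_fusion_lm  = {("play", "music"): 0.5, ("play", "muse"): 0.5}   # small, fast
second_fusion_lm = {("play", "music"): 0.9, ("play", "muse"): 0.1}   # full coverage

# First time decoding: shortlist hypotheses with the compact model.
candidates = ["play music", "play muse"]
n_best = sorted(candidates, key=lambda s: score(s, first_fusion_lm), reverse=True)[:2]

# Second time decoding: rescore the shortlist with the larger fusion model.
best = max(n_best, key=lambda s: score(s, second_fusion_lm))
print(best)  # -> play music
```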
Claims (15)
1. A language model training method, comprising:
obtaining a universal language model in an offline training mode, and clipping the universal language model to obtain a clipped language model;
obtaining a log language model of logs within a preset time period in an online training mode;
fusing the clipped language model with the log language model to obtain a first fusion language model used for carrying out first time decoding; and
fusing the universal language model with the log language model to obtain a second fusion language model used for carrying out second time decoding.
Dependent claims: 2, 3, 4, 5.
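Claim 1 does not fix a fusion technique. A common realization, assumed here purely for illustration, is linear interpolation of n-gram probabilities, which lets n-grams that appear only in the recent-log model (the new corpora the abstract targets) receive probability mass:

```python
def fuse(lm_a, lm_b, weight_a=0.7):
    """Fuse two {ngram: probability} models by linear interpolation.

    P_fused(w|h) = weight_a * P_a(w|h) + (1 - weight_a) * P_b(w|h).
    An n-gram missing from one model contributes 0.0 on that side, so
    entries seen only in the log model still end up in the fusion.
    """
    return {g: weight_a * lm_a.get(g, 0.0) + (1 - weight_a) * lm_b.get(g, 0.0)
            for g in set(lm_a) | set(lm_b)}

clipped_lm = {("open", "settings"): 0.4, ("open", "door"): 0.6}
log_lm     = {("open", "settings"): 0.1, ("open", "livestream"): 0.9}  # recent logs

first_fusion_lm = fuse(clipped_lm, log_lm)
# ("open", "livestream") was absent from the clipped model but now has
# probability 0.3 * 0.9 = 0.27 in the first fusion model.
```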
6. An electronic device, comprising:
at least one processor; and
a memory communicably connected with the at least one processor for storing instructions executable by the at least one processor, wherein execution of the instructions by the at least one processor causes the at least one processor to:
obtain a universal language model in an offline training mode;
clip the universal language model to obtain a clipped language model;
obtain a log language model of logs within a preset time period in an online training mode;
fuse the clipped language model with the log language model to obtain a first fusion language model used for carrying out first time decoding; and
fuse the universal language model with the log language model to obtain a second fusion language model used for carrying out second time decoding.
Dependent claims: 7, 8, 9, 10.
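Claim 6 restates the method in device form; the clipping step is what keeps the first-pass model small enough for fast decoding. A sketch of one plausible clipping criterion, a simple probability threshold (relative-entropy pruning is another common choice; the claim names neither):

```python
def clip(lm, min_prob=1e-6):
    """Drop n-gram entries whose probability falls below `min_prob`.

    A production system would renormalize the survivors and recompute
    back-off weights; that bookkeeping is omitted from this sketch.
    """
    return {g: p for g, p in lm.items() if p >= min_prob}

universal_lm = {("turn", "on"): 0.2, ("turn", "off"): 0.2, ("turn", "ontology"): 5e-7}
clipped_lm = clip(universal_lm)
# Only the two frequent bigrams survive; the clipped model is smaller and
# therefore cheaper to use in the first decoding pass.
```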
11. A non-transitory computer-readable storage medium storing executable instructions that, when executed by an electronic device with a touch-sensitive display, cause the electronic device to:
obtain a universal language model in an offline training mode;
clip the universal language model to obtain a clipped language model;
obtain a log language model of logs within a preset time period in an online training mode;
fuse the clipped language model with the log language model to obtain a first fusion language model used for carrying out first time decoding; and
fuse the universal language model with the log language model to obtain a second fusion language model used for carrying out second time decoding.
Dependent claims: 12, 13, 14, 15.
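Claim 11 again recites the same steps; the distinctive one is building the log language model from logs within a preset time period. A minimal sketch of that online step, assuming a simple log record schema and maximum-likelihood bigram estimates (both are illustrative assumptions; the claim specifies neither):

```python
from collections import Counter
from datetime import datetime, timedelta

def train_log_lm(logs, window=timedelta(days=7), now=None):
    """Build a maximum-likelihood bigram model from logs inside `window`.

    `logs` is assumed to be an iterable of {"time": datetime, "text": str}
    records; the claim does not describe the log format.
    """
    now = now or datetime.now()
    bigrams, histories = Counter(), Counter()
    for entry in logs:
        if now - entry["time"] > window:   # honor the preset time period
            continue
        words = entry["text"].split()
        histories.update(words[:-1])
        bigrams.update(zip(words, words[1:]))
    # P(w2 | w1) = count(w1 w2) / count(w1 as a history word)
    return {(w1, w2): c / histories[w1] for (w1, w2), c in bigrams.items()}

log_lm = train_log_lm([{"time": datetime.now(), "text": "open livestream"}])
# -> {("open", "livestream"): 1.0}
```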
Specification