linear interpolation process
Andreas Stolcke
stolcke at speech.sri.com
Mon Apr 5 12:01:00 PDT 2004
In message <003c01c418b3$1ca4ee00$0800000a at speechasus> you wrote:
>
> Hi!
>
> I have a question about the linear interpolation process performed by the
> ngram command in SRILM.
> What's the main difference between dynamic interpolation (using -bayes) and
> static interpolation?
> I tried both but I'm getting a big difference in perplexity values: for
> instance, 314 against 246.
> With static interpolation one can use -write-lm to produce a file with
> the interpolated model. However, with the dynamic process this is not
> possible. Why? Are the differences between the two processes really that big?
>
> Just an observation: the big differences in perplexity values occur when we
> interpolate a word model with a class model. For interpolation of two word
> models the difference is quite insignificant.
That's the problem. You cannot do "static" interpolation of a word-based
and a class-based N-gram LM. This is only supported for two word-based or
two class-based LMs.
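
For reference, a minimal sketch of the two invocations (the file names
word1.lm, word2.lm, class.lm, classes.defs, and test.txt are placeholders,
and the exact option combination may need adjusting for your setup):

  # Static interpolation of two word-based LMs; the merged model
  # can be dumped as a single backoff LM with -write-lm:
  ngram -lm word1.lm -mix-lm word2.lm -lambda 0.5 -write-lm mixed.lm

  # Dynamic (Bayesian) interpolation, e.g. a class-based LM mixed with a
  # word-based LM; the weights are posterior probabilities computed at
  # query time, so there is no single static model for -write-lm to dump:
  ngram -lm class.lm -classes classes.defs -mix-lm word1.lm \
        -lambda 0.5 -bayes 0 -ppl test.txt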
--Andreas