[SRILM User List] Interpolating LMs with different smoothing
Fed Ang
ang.feddy at gmail.com
Tue Jul 17 21:22:58 PDT 2018
Hi,
I don't know if this has been asked before, but does it make sense to
interpolate LMs on the basis of smoothing method rather than domain/genre?
What assumptions should I be making when the resulting perplexity is lower
than either model's perplexity on its own?
Let's say a 5-gram Katz model yields a perplexity of 100 and a 5-gram
modified KN model yields 90, and the best-mix of the two then yields 87.
From a theoretical perspective, is it sound to simply trust that the
interpolated LM is better, and does that conclusion generalize to other
combinations of smoothing methods?
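For concreteness, here is a minimal Python sketch of what I mean by mixing.
The per-token probabilities are made up, and the grid search over the weight
is only illustrative (as far as I understand, SRILM's compute-best-mix
estimates the weight iteratively on the held-out probabilities rather than by
grid search):

import math

def perplexity(token_probs):
    """Perplexity from a list of per-token probabilities."""
    log_sum = sum(math.log(p) for p in token_probs)
    return math.exp(-log_sum / len(token_probs))

def mix_perplexity(probs_a, probs_b, lam):
    """Perplexity of the linear interpolation lam*A + (1-lam)*B."""
    mixed = [lam * pa + (1 - lam) * pb for pa, pb in zip(probs_a, probs_b)]
    return perplexity(mixed)

def best_mix(probs_a, probs_b, steps=100):
    """Grid-search the interpolation weight on held-out per-token
    probabilities; returns (perplexity, lambda)."""
    return min((mix_perplexity(probs_a, probs_b, i / steps), i / steps)
               for i in range(steps + 1))

# Toy illustration: each model is weak on different tokens, so the
# interpolated model can beat both components.
katz_probs = [0.20, 0.02, 0.15, 0.30]   # hypothetical per-token probs, Katz LM
kn_probs   = [0.02, 0.25, 0.18, 0.28]   # hypothetical per-token probs, KN LM
print(perplexity(katz_probs), perplexity(kn_probs),
      best_mix(katz_probs, kn_probs))

Since lambda = 0 and lambda = 1 are both candidate mixtures, the tuned mix
can never be worse than either component on the held-out set used for tuning;
whether the gain carries over to unseen data and other smoothing combinations
is the part I'm unsure about.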
-Fred