<div dir="ltr">Hi Andreas,<div><br><div>> <span style="font-size:12.8px">First off, does the ppl obtained with just the KN ngram model match?</span></div><div><span style="font-size:12.8px">Yes, I could exactly reproduce the 3-gram and 5-gram KN ppl numbers. I had to use the -interpolate and -gtXmin 1 flags to replicate the results though.</span></div><div><span style="font-size:12.8px"><br></span></div><div><span style="font-size:12.8px">> </span><span style="font-size:12.8px">Of course you still might have trouble getting the exact same results since Tomas didn't disclose the exact parameter values he used. </span></div><div><span style="font-size:12.8px">Thanks a lot for the method! I was suspecting something was missing.</span></div><div><span style="font-size:12.8px"><br></span></div><div><span style="font-size:12.8px">Best Regards,</span></div><div><span style="font-size:12.8px">Kalpesh</span></div><div><span style="font-size:12.8px"><br></span><br><br><br></div></div></div><div class="gmail_extra"><br><div class="gmail_quote">On Tue, Sep 5, 2017 at 9:42 PM, Andreas Stolcke <span dir="ltr"><<a href="mailto:stolcke@icsi.berkeley.edu" target="_blank">stolcke@icsi.berkeley.edu</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<div text="#000000" bgcolor="#FFFFFF"><span class="">
<div class="m_-4427263158905817274moz-cite-prefix">On 9/4/2017 4:24 PM, Kalpesh Krishna
wrote:<br>
</div>
<blockquote type="cite">
<div dir="ltr">Hi everyone,
<div>I'm trying to implement the KN5+cache model mentioned in
Mikolov's PhD Thesis, <a href="http://www.fit.vutbr.cz/%7Eimikolov/rnnlm/thesis.pdf" target="_blank">http://www.fit.vutbr.<wbr>cz/~imikolov/rnnlm/thesis.pdf</a> <wbr>in
Table 4.1. By using the command "./ngram -lm LM -ppl
ptb.test.txt -unk -order 5 -cache 192 -cache-lambda 0.1" I
managed to achieve a ppl value of 126.74 (I tuned `cache` and
`cache-lambda`). What additional steps are needed to exactly
reproduce the result? (125.7)</div>
<div>I generated my LM using "./ngram-count -lm LM -unk
-kndiscount -order 5 -text ptb.train.txt -interpolate -gt3min
1 -gt4min 1 -gt5min 1".</div>
<div><br>
</div>
</div>
</blockquote></span>
First off, does the ppl obtained with just the KN ngram model match?
About the cache LM, Tomas writes:

> We also report the perplexity of the best n-gram model (KN5) when
> using unigram cache model (as implemented in the SRILM toolkit). We
> have used several unigram cache models interpolated together, with
> different lengths of the cache history (this works like a crude
> approximation of cache decay, ie. words further in the history have
> lower weight).
So he didn't just use a single cache LM as implemented by the ngram -cache option. He must have used multiple versions of this model (with different parameter values), saved out the word-level probabilities, and interpolated them off-line.
You can run an individual cache LM and save out the probabilities using

    ngram -vocab VOCAB -null -cache 192 -cache-lambda 1 -ppl TEST -debug 2 > TEST.ppl
Repeat this several times with different -cache parameters, and also for the KN ngram.
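Concretely, one way to script this step; the file names (vocab.txt, heldout.txt, LM) and the particular cache lengths below are placeholders of mine, not values from the thesis:

    # Pure cache LMs of several (made-up) lengths; -null drops the ngram
    # part and -cache-lambda 1 gives the cache component all the weight.
    for c in 50 100 200 500 1000; do
        ngram -vocab vocab.txt -null -cache ${c} -cache-lambda 1 \
            -ppl heldout.txt -debug 2 > heldout.cache${c}.ppl
    done

    # Word-level probabilities of the KN 5-gram itself.
    ngram -lm LM -unk -order 5 -ppl heldout.txt -debug 2 > heldout.kn5.ppl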
Then use compute-best-mix on all the output files to determine the best mixture weights (of course you need to do this using a held-out set, not the actual test set).
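For instance, with the placeholder file names above, the held-out run would be

    compute-best-mix heldout.kn5.ppl heldout.cache50.ppl heldout.cache100.ppl \
        heldout.cache200.ppl heldout.cache500.ppl heldout.cache1000.ppl

which prints the estimated interpolation weights (lambdas) along with the perplexity of the mixture on the held-out set.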
Then you do the same for the test set, but use

    compute-best-mix lambda='....' precision=1000 ppl-file ppl-file ...

where you provide the weights from the held-out set to the lambda= parameter. (The precision parameter is such that it won't iterate.) This will give you the test-set perplexity.
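For illustration only (the weights here are made up; substitute the ones your held-out run printed, listed in the same order as the ppl files):

    compute-best-mix lambda='0.7 0.1 0.08 0.06 0.04 0.02' precision=1000 \
        test.kn5.ppl test.cache50.ppl test.cache100.ppl \
        test.cache200.ppl test.cache500.ppl test.cache1000.ppl

Because the precision is so coarse, the estimation stops after the first pass, so the supplied lambdas are used as-is and the reported ppl is that of the fixed mixture on the test set.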
Of course you still might have trouble getting the exact same results, since Tomas didn't disclose the exact parameter values he used. But since you're already within 1 perplexity point of his results, I would question whether this matters.

Andreas
--
Kalpesh Krishna,
Junior Undergraduate,
Electrical Engineering,
IIT Bombay