[SRILM User List] How do you calculate perplexity given a test sentence?
Andreas Stolcke
stolcke at icsi.berkeley.edu
Fri May 18 10:18:07 PDT 2012
On 5/17/2012 10:59 AM, Burkay Gur wrote:
> Hi,
>
> I was wondering how the perplexity is calculated given different test
> sentences to a single language model.
>
> 1) For example, does SRI calculate 2^-H(p) no matter what the input
> sentence is ?
>
> 2) Or does it calculate the perplexity based on the cross-entropy
> between the model and the input sentence? ie 2^-H(p,q) where p is the
> language model and q is (not sure what it would be)
Perplexity is always computed by evaluating the model on the test data.
The "q" in H(p,q) (the true distribution) is approximated by an average
over the test data, which is assumed to be a sample from the true "q"
distribution. So the estimate used is

    H(p,q) = 1/N \sum_i log p(w_i | h_i)

where N is the number of test tokens, w_i is the i-th token, and h_i is
its history (the preceding words). With this sign convention the
perplexity is 2^{-H(p,q)}, i.e., your formula 2).
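In SRILM itself this quantity is what `ngram -ppl` reports (e.g. `ngram -lm model.lm -ppl test.txt`). As a standalone sketch of the formula above, the following assumes you already have the per-token model probabilities p(w_i | h_i) for the test set (the list of probabilities here is hypothetical input, not SRILM output):

```python
import math

def perplexity(token_probs):
    # Estimate H(p,q) = (1/N) * sum_i log2 p(w_i | h_i) over the test
    # tokens (a negative number under this sign convention), then
    # return the perplexity 2^{-H(p,q)}.
    n = len(token_probs)
    h = sum(math.log2(p) for p in token_probs) / n
    return 2.0 ** -h

# Example: if the model assigns probability 1/4 to each of 4 test
# tokens, the perplexity is exactly 4.
print(perplexity([0.25, 0.25, 0.25, 0.25]))  # -> 4.0
```

The base of the logarithm does not matter as long as the exponentiation uses the same base; base 2 is used here to match the 2^{-H} formulation in the question.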
Andreas