[SRILM User List] -posterior-decode vs -viterbi-decode
Ilana Heintz
heintz.38 at osu.edu
Mon Jan 25 09:56:49 PST 2010
Hello Andreas et al,
I have installed version 1.5.10 successfully. (I ran make test; disambig, make-ngram-pfsg, merge-batch-counts, nbest-optimize-bleu, and ngram-server produce differing output, but everything else is OK.)
However, I am still having trouble with lattice-tool. It's true that -posterior-decode now takes the new LM into account before decoding, so I'm getting different answers with different-order LMs. But now I can't seem to get -viterbi-decode to work with an LM at all:
lattice-tool -read-htk -in-lattice $lattice -posterior-decode
--- gives 1-best output
lattice-tool -read-htk -in-lattice $lattice -posterior-decode -lm $lm
--- gives differing 1-best output
lattice-tool -read-htk -in-lattice $lattice -viterbi-decode
--- gives 1-best output
lattice-tool -read-htk -in-lattice $lattice -viterbi-decode -lm $lm
--- gives _no_ output
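For reference, here is the kind of invocation I would expect to work once the LM order is made explicit (following the earlier advice about -order; $lattice and $lm are my HTK lattice and LM files, and -order 4 is just an example value):

lattice-tool -read-htk -in-lattice $lattice -viterbi-decode -lm $lm -order 4 -debug 2

Even with -order and -debug 2 added like that, I would at least expect some diagnostic output rather than nothing.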
Can you replicate this? Am I missing an option or flag?
As a follow-up to my other question about the difference between the two algorithms, I found a very clear explanation (with a nice introduction to HMMs to boot) in these lecture notes:
http://ai.stanford.edu/~serafim/CS262_2007/notes/lecture5.pdf
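Roughly, as I understand it from those notes (so this is just my paraphrase in standard HMM notation, nothing SRILM-specific): Viterbi decoding picks the single path with the highest overall probability, while posterior decoding picks, at each position, the word whose posterior probability, summed over all paths through the lattice, is highest:

  Viterbi:    \hat{\pi} = \arg\max_{\pi} P(\pi \mid x)
  Posterior:  \hat{\pi}_i = \arg\max_{k} \sum_{\pi : \pi_i = k} P(\pi \mid x)

where x is the observation sequence and \pi ranges over paths (word sequences) in the lattice. That would also explain why posterior decoding can score better on a per-word accuracy measure, since it optimizes the expected number of correct words rather than the probability of the whole sequence.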
Ilana Heintz
Department of Linguistics
Ohio State University
http://www.ling.ohio-state.edu/~bromberg
On Thu, 21 Jan 2010, Andreas Stolcke wrote:
>
> In message <alpine.DEB.1.10.1001211344270.31067 at brutus.ling.ohio-state.edu> you
> wrote:
>> Hello,
>>
>> I am trying to work out the differences between two ways of decoding an
>> HTK lattice using higher-order ngram models. My main question is, does
>> the -posterior-decode option take into account the higher-order n-grams?
>>
>> I am using these commands:
>>
>> lattice-tool -read-htk -in-lattice-list htklats -lm expD9a.lm
>> -viterbi-decode -unk -keep-unk -order $i
>>
>> or
>>
>> lattice-tool -read-htk -in-lattice htklats -lm expD9a.lm
>> -posterior-decode -unk -keep-unk -order $i
>
> As of version 1.5.10, the LM rescoring happens BEFORE word posterior
> decoding, so yes, it should take the new LM into account,
> but only if you have the latest version of SRILM.
>
>>
>> where expD9a.lm includes up to 6-grams, is calculated with Witten-Bell
>> discounting, and $i ranges from 3 to 6. I get the following results,
>> which represent the accuracy of the chosen paths for the 80 utterances in
>> htklats:
>>
>> For -viterbi-decode:
>> 3-grams 72.34
>> 4-grams 72.42
>> 5-grams 72.36
>> 6-grams 72.36
>>
>> For -posterior-decode:
>> 3-grams 74.43
>> 4-grams 74.43
>> 5-grams 74.43
>> 6-grams 74.43
>>
>> In looking more closely at the -posterior-decode results, I'm pretty sure
>> it's giving the same best path every time, regardless of the order of the
>> LM. Also, when I use the -debug 2 flag, I notice that with
>> -posterior-decode, this note:
>>
>> Lattice::expandToLM: starting expansion to general LM (maxNodes = 0) ...
>>
>> comes _after_ the best path is given. Should I interpret this to mean
>> that the expansion (which I think means making the lattice include
>> higher-order probabilities) does not happen in time for the decode? Is
>> there a way to change that, since the posterior decode seems to be working
>> better than viterbi in this instance?
>
> Get the latest version; the informational messages should appear in
> a different order now.
>
> And don't forget to use the -order option to specify the desired LM order.
> The default is to only use trigrams, even if the LM file contains higher-order
> ngrams!
>
>>
>> Any insight on how these two decode methods work, or what situations each
>> is more appropriate for, would be appreciated.
>
> That's a longer story. Read the papers on posterior-based (or
> "sausage") decoding for ASR (google scholar will get them for you).
>
> Andreas
>