SRILM and LM network servers

Andreas Stolcke stolcke at speech.sri.com
Tue Nov 27 15:14:52 PST 2007


FYI, there is now greatly enhanced support for network-based "LM servers"
in SRILM.  The main changes are:

        * New ngram -use-server option to run the client side of a network LM
        server as implemented by ngram -server-port.  Optionally, probabilities
        may be cached in the client (option -cache-served-ngrams).

        * New LMClient class to implement the above (a stub LM subclass that
        queries a server for LM probabilities).

        * ngram -server-port now behaves like a true server daemon: it handles
        multiple simultaneous or sequential clients, and never exits (unless
        killed).  The number of simultaneous clients may be limited with the
        -server-maxclients option.
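Putting the options above together, a session might look like the following
(the model file, test file, host name, and port number are placeholders, not
anything from the release itself):

```shell
# Start a server that loads the LM once and then answers n-gram
# probability queries, allowing up to 10 simultaneous clients:
ngram -lm big-model.lm -server-port 2525 -server-maxclients 10 &

# From a client machine, score a test set against the served LM,
# caching served probabilities locally to cut down on round trips:
ngram -use-server 2525@lmhost -cache-served-ngrams -ppl test.txt
```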

This is still somewhat experimental, so I welcome any feedback.
If you want to give it a try, download the 1.5.6 (beta) version from the
SRILM download page.

An example and test of the functionality is in $SRILM/test/tests/ngram-server.
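To make the architecture concrete, here is a toy sketch of the idea in Python:
a server process holds the probability table and a stub client (analogous to
the LMClient class) fetches log probabilities over a socket, caching answers
locally the way -cache-served-ngrams does.  The line-based protocol and the
probability values are invented for illustration; this is not SRILM's actual
wire format.

```python
import socket
import threading

# Toy log-probability table standing in for a real LM (values made up).
LM = {"the cat": -1.2, "cat sat": -0.8}

def serve(srv):
    """Minimal server loop: one n-gram per line in, one log prob per line out."""
    while True:
        conn, _ = srv.accept()
        with conn, conn.makefile("rw") as f:
            for line in f:
                prob = LM.get(line.strip(), -99.0)  # -99 for unseen n-grams
                f.write(f"{prob}\n")
                f.flush()

class LMClient:
    """Stub LM that queries the server, caching served probabilities."""
    def __init__(self, port):
        self.f = socket.create_connection(("127.0.0.1", port)).makefile("rw")
        self.cache = {}          # plays the role of -cache-served-ngrams
        self.queries_sent = 0    # counts actual network round trips

    def logprob(self, ngram):
        if ngram in self.cache:
            return self.cache[ngram]
        self.f.write(ngram + "\n")
        self.f.flush()
        self.queries_sent += 1
        prob = float(self.f.readline())
        self.cache[ngram] = prob
        return prob

# Bind to an OS-assigned port and run the server in a background thread.
srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen(5)
port = srv.getsockname()[1]
threading.Thread(target=serve, args=(srv,), daemon=True).start()

client = LMClient(port)
p1 = client.logprob("the cat")   # answered over the network
p2 = client.logprob("the cat")   # answered from the local cache
```

The caching matters because every uncached lookup costs a network round trip;
the second query above never touches the server at all.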

Andreas




More information about the SRILM-User mailing list