A ''statistical language model'', or more simply a ''language model'', is a probabilistic mechanism for generating text. Such a definition is general enough to include an endless variety of schemes. Note, however, that while a statistical language model can in principle be used to synthesize artificial text, a program that classifies text into predefined categories, such as "natural" and "artificial," would not itself be considered a language model (though such a program may use a language model to make its decisions).
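To make the definition concrete, here is a minimal sketch of such a generative mechanism: sampling a word sequence from a bigram model. The model representation, function names, and start symbol are illustrative assumptions, not a reference implementation.

    import random

    def sample_text(bigram_model, length=20, start="<s>"):
        # bigram_model: dict mapping a word to a dict of {next_word: probability}.
        # Walk the chain, sampling each next word from the previous word's distribution.
        words, prev = [], start
        for _ in range(length):
            if prev not in bigram_model:  # dead end: no observed continuation
                break
            nxt = random.choices(list(bigram_model[prev]),
                                 weights=list(bigram_model[prev].values()))[0]
            words.append(nxt)
            prev = nxt
        return " ".join(words)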
The first serious statistical language modeler was Claude Shannon. In exploring the application of his newly founded theory of information to human language, treated purely as a statistical source, Shannon measured how well simple n-gram models did at predicting, or equivalently compressing, natural text. To do this, he estimated the true entropy of natural text through experiments with human subjects, and also estimated the cross-entropy of n-gram models on natural text. That generative language models can be evaluated in this way is one of their important virtues. While estimating the "true" entropy of language is like aiming at many moving targets, by all measures current language modeling methods remain far from the Shannon limit in their predictive power. This, however, has not kept them from being useful for a variety of text processing tasks, and it can even be taken as encouragement that there is still great room for improvement in statistical language modeling.
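As a hedged illustration of this kind of evaluation, the sketch below measures the cross-entropy, in bits per token, of a smoothed bigram model on held-out text. The add-alpha smoothing and all names are assumptions made for the example; this is the general idea, not Shannon's procedure.

    import math
    from collections import Counter

    def bigram_cross_entropy(train_tokens, test_tokens, alpha=0.5):
        # Cross-entropy (bits per token) of an add-alpha smoothed bigram model.
        # Lower values mean the model predicts, and so compresses, the text better.
        vocab_size = len(set(train_tokens) | set(test_tokens))
        bigrams = Counter(zip(train_tokens, train_tokens[1:]))
        unigrams = Counter(train_tokens)
        bits = 0.0
        for prev, word in zip(test_tokens, test_tokens[1:]):
            p = (bigrams[(prev, word)] + alpha) / (unigrams[prev] + alpha * vocab_size)
            bits -= math.log2(p)
        return bits / (len(test_tokens) - 1)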
In the past several years there has been significant interest in the use of language modeling methods for a variety of text and natural language processing tasks. In particular, a new approach to text information retrieval has emerged that is based on statistical language modeling, is quite different from traditional probabilistic approaches, and is fundamentally different from vector space methods. It is striking that the language modeling approach to information retrieval was not proposed until the late 1990s; one explanation is that, until recently, the IR and language modeling research communities were somewhat isolated from one another. The communities are now beginning to work more closely together, and research at a number of sites has confirmed that the language modeling approach is an effective and theoretically attractive probabilistic framework for building IR systems. But there is still groundwork to be done in understanding the basics of the LM approach. This note briefly describes recent work on this topic.
For many years, the primary consumers of statistical language models were speech recognition systems. In the source-channel approach to speech processing, the language model serves as a source model, or prior, over the natural language utterances a user might make to the system; it is combined with a channel model of how that language is converted into an acoustic signal. For nearly 30 years the statistical language model has been the workhorse of statistical speech recognition, and it is an indispensable component of any such system. Yet, while smoothing techniques are important for building an effective language model for speech processing, advances over relatively simple word n-gram models have been few. For open domain and large vocabulary speech recognition, little, if any, empirical improvement has come from modeling the linguistic and semantic structure of natural language. Work in the late 1980s at the IBM Watson Research Center adopted the source-channel paradigm for other problems, notably the statistical approach to machine translation. The language models used for statistical machine translation were the same basic n-gram models used for speech, and they proved just as important for obtaining good performance.
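In equation form, the source-channel decomposition ranks candidate word sequences W given an acoustic signal A by Bayes' rule:

    \hat{W} = \arg\max_W P(W \mid A) = \arg\max_W \, P(A \mid W) \, P(W)

where P(W) is the language model acting as the source prior and P(A | W) is the channel model; in statistical machine translation, the acoustic signal is replaced by the foreign-language sentence.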
Basic language modeling ideas have been used in information retrieval and document classification for quite some time. In the so-called ''naive Bayes'' text classification method, a unigram language model is estimated for each class, and then combined with a class prior to form the posteriors used for classification; the naivety of the approach lies in the unrealistic independence assumptions that lead to a unigram model. While the independence assumptions are clearly incorrect, such "bag of words" models are surprisingly effective for classifying documents according to a small number of predefined labels.
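The following is a minimal sketch of the naive Bayes method as described above, with add-alpha smoothing thrown in so that unseen words do not zero out a class. All names and the smoothing choice are illustrative assumptions.

    import math
    from collections import Counter

    def train_naive_bayes(labeled_docs, alpha=1.0):
        # labeled_docs: list of (tokens, label) pairs.
        # Returns log class priors and per-class smoothed unigram log-probabilities.
        vocab = {w for tokens, _ in labeled_docs for w in tokens}
        class_sizes = Counter(label for _, label in labeled_docs)
        counts = {c: Counter() for c in class_sizes}
        for tokens, label in labeled_docs:
            counts[label].update(tokens)
        log_prior = {c: math.log(n / len(labeled_docs)) for c, n in class_sizes.items()}
        log_pw = {}
        for c in class_sizes:
            total = sum(counts[c].values())
            log_pw[c] = {w: math.log((counts[c][w] + alpha) / (total + alpha * len(vocab)))
                         for w in vocab}
        return log_prior, log_pw

    def classify(tokens, log_prior, log_pw):
        # Bag-of-words decision rule: argmax over classes of
        #   log P(class) + sum over tokens of log P(word | class).
        # Words outside the training vocabulary are simply skipped here.
        return max(log_prior, key=lambda c: log_prior[c] +
                   sum(log_pw[c].get(w, 0.0) for w in tokens))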
A similar approach is adopted in the standard probabilistic model of document retrieval first proposed by Robertson and Sparck Jones. In this model, distributions over documents are estimated for two classes: "relevant" and "non-relevant." Documents are broken down into attributes, in the simplest case indicating occurrence or non-occurrence of individual words, and the attributes are modeled independently, as in the naive Bayes model for classification. In contrast with document classification, however, for retrieval there is typically little, if any, training data, and the only evidence available for estimating the models is the query itself. Thus, one is led to model the distribution of the query terms in relevant and non-relevant documents. The Okapi system has been one of the primary vehicles for the Robertson-Sparck Jones model of retrieval, and has met with considerable empirical success.
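Under these independence assumptions, the model leads to the well-known Robertson-Sparck Jones term weight; the formulation below is the standard textbook form, given here for orientation rather than quoted from the original papers. A document d is scored against a query q by

    \mathrm{score}(d, q) \;=\; \sum_{t \,\in\, q \cap d} \log \frac{p_t \, (1 - q_t)}{q_t \, (1 - p_t)}

where p_t is the probability that term t occurs in a relevant document and q_t the probability that it occurs in a non-relevant one. With no relevance judgments available, these probabilities must be estimated from the query and collection statistics alone.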
The fact that so little evidence is available for estimating the relevant and non-relevant document classes has made it attractive to consider "turning the problem around." In 1998 Ponte and Croft proposed using a smoothed version of the document unigram model to assign a score to a query, a score that can be thought of as the probability that the query was generated from the document model. This simple approach was remarkably effective "right out of the box." As developed in later work, this approach can be thought of as using a language model as a kind of noisy channel model or "translation model" that maps documents to queries. To quote from that work:
When designing a statistical model for language processing tasks, often the most natural route is to apply a generative model which builds up the output step-by-step. Yet to be effective, such models need to liberally distribute probability mass over a huge space of possible outcomes. This probability can be difficult to control, making an accurate direct model of the distribution of interest difficult to construct. The source channel perspective suggests a different approach: turn the search problem around to predict the input. Far more than a simple application of Bayes' law, there are compelling reasons why reformulating the problem in this way should be rewarding. In speech recognition, natural language processing, and machine translation, researchers have time and again found that predicting what is already known (i.e., the query) from competing hypotheses can be easier than directly predicting all of the hypotheses.
This view is especially attractive when one considers that the same information need may be expressed with quite different query terms. The method of using document language models to assign likelihood scores to queries has come to be known as the ''language modeling approach'', and has opened up new ways of thinking about information retrieval; a minimal sketch of the basic scoring rule is given below. The effectiveness of this approach has been confirmed and enhanced by several groups.
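In the sketch that follows, Jelinek-Mercer interpolation with the collection model stands in for Ponte and Croft's original smoothing scheme, and all names and the default lambda are illustrative assumptions.

    import math
    from collections import Counter

    def query_log_likelihood(query_tokens, doc_tokens, coll_counts, coll_size, lam=0.5):
        # log P(q | d) under a document unigram model interpolated with the
        # collection model:  p(w | d) = lam * p_ml(w | d) + (1 - lam) * p(w | C).
        # Assumes every query term occurs somewhere in the collection.
        doc_counts, doc_len = Counter(doc_tokens), len(doc_tokens)
        score = 0.0
        for w in query_tokens:
            p_doc = doc_counts[w] / doc_len if doc_len else 0.0
            p_coll = coll_counts.get(w, 0) / coll_size
            score += math.log(lam * p_doc + (1 - lam) * p_coll)
        return score

Documents are then ranked by this score for a fixed query; the smoothing is what keeps a single query term missing from a document from driving its score to minus infinity.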
This empirical success and the overall potential of the language modeling approach have led to the Lemur project and the toolkit presented on these web pages. The approach shows significant promise, yet there is still much to be done to develop it further. Some of the recent efforts in this direction are briefly noted below.
One of the attractive aspects of the language modeling approach is the potential for estimating the document model or document-to-query translation model in different ways. Recent work has compared different smoothing schemes for discounting the maximum likelihood estimates. One finding from this work is that a simple smoothing scheme based on Dirichlet priors gives very good performance, due to the way that it effectively normalizes for document length. This and other work using the Lemur toolkit has carried out empirical studies over a broad range of collections and test conditions, including an entry in the 2001 TREC web track.
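For concreteness, Dirichlet-prior smoothing of a document unigram model takes the following form; the parameter value shown is just a commonly used ballpark, not a recommendation drawn from the cited study.

    def dirichlet_prob(count_in_doc, doc_len, p_collection, mu=2000.0):
        # p(w | d) = (c(w; d) + mu * p(w | C)) / (|d| + mu).
        # Short documents lean on the collection model; long documents trust
        # their own counts, which yields the implicit document-length normalization.
        return (count_in_doc + mu * p_collection) / (doc_len + mu)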
Progress has also been made in understanding the formal underpinnings of the language modeling approach. For example, a general framework based on Bayesian decision theory has been developed under which both the basic language modeling approach and the standard probabilistic model of Robertson and Sparck Jones are derived as special cases. Furthermore, it has been shown how the language modeling approach can be viewed in terms of an underlying relevance model, allowing the approach to be interpreted in a manner similar to the standard probabilistic model.
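Schematically, and only as a hedged sketch of the general idea rather than of the cited framework itself, a Bayesian decision-theoretic retrieval rule chooses the action a (for example, which document to return) that minimizes expected loss over hypotheses h:

    a^{*} \;=\; \arg\min_{a} \sum_{h} L(a, h) \, P(h \mid \text{observed query and documents})

Particular choices of the loss L and the hypothesis space are what recover query-likelihood ranking and the classical probabilistic model as special cases.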
More promising than parameter smoothing, which plays a role similar to traditional term weighting, is what can be referred to as ''semantic smoothing'', which in its simplest form plays a role similar to relevance feedback in more standard approaches. One class of semantic smoothing methods is based on Markov chain techniques. The technique of probabilistic latent semantic indexing is another very promising approach to semantic smoothing. Other interesting applications and discussions related to the language modeling approach to IR were presented at a recent workshop held at CMU. While there has been significant progress in using simple language models for text retrieval, there is clearly great room for more effective models.
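One simple way to picture semantic smoothing is as a translation-style mixture, where a query term may be generated from any document term through a word-to-word translation table t(w | u). In the sketch below the table itself, however estimated (the Markov chain and pLSI techniques above are two possible routes), is simply assumed given, and all names are illustrative.

    import math

    def translation_log_likelihood(query_tokens, doc_model, trans):
        # log P(q | d) with p(w | d) = sum over u of t(w | u) * p(u | d).
        # doc_model: dict u -> p(u | d), a (smoothed) document unigram model.
        # trans:     dict (w, u) -> t(w | u), the word-to-word translation table.
        score = 0.0
        for w in query_tokens:
            p = sum(trans.get((w, u), 0.0) * p_u for u, p_u in doc_model.items())
            score += math.log(p) if p > 0 else float("-inf")
        return score

Because a query term can now be "explained" by related document terms rather than requiring an exact match, this kind of model addresses vocabulary mismatch in a way that plain parameter smoothing cannot.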
In the second paragraph of his classic paper, Shannon made clear that the theory to follow did not address the semantic aspects of communication, which he identified as irrelevant to the engineering problem of reliable communication. Yet it is obvious that in terms of reliable human communication, meaning matters; consider your last dinner conversation in a crowded and noisy restaurant. The difference lies in the fact that we do not have direct control over the channel code, which has been determined through the course of the evolution of human language. It is clear that current statistical language models capture very little of the higher-level structure and meaning that natural language understanding will require. Indeed, many current methods are still based on relatively simple n-gram models, similar to those that Shannon himself used. There is, however, no mathematical theory of natural language communication. Statistical language models should be viewed not as an end, but as a powerful means for approaching difficult problems with principled methods. Future work is sure to see much more sophisticated language modeling techniques, as the language modeling approach is more broadly applied and as more ambitious goals are set for information processing systems.