Naive Bayes classifier

From Wikipedia, the free encyclopedia

Here is a worked example of naive Bayesian classification, an application of Bayesian inference to the document classification problem.

Consider the problem of classifying documents by their content, for example into spam and non-spam e-mails. Imagine that documents are drawn from a number of classes which can be modelled as sets of words, where the (independent) probability that the i-th word of a given document occurs in a document from class C can be written as

p(wi | C).

(For this treatment, we simplify things further by assuming that the probability of a word occurring in a document is independent of the document's length, or equivalently that all documents are of the same length.)

Then the probability of a given document D, given a class C, is

p(D | C) = product over i of p(wi | C)
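As a concrete illustration, the following is a minimal Python sketch of this product-of-word-probabilities model; the word_prob_C table is a hypothetical set of per-word probabilities invented for the example, not taken from any real corpus.

    from math import prod

    # Hypothetical p(wi | C) values for one class C (illustration only).
    word_prob_C = {"cheap": 0.20, "offer": 0.15, "meeting": 0.01}

    def document_likelihood(words, word_prob):
        # p(D | C) = product over i of p(wi | C), treating words as independent.
        return prod(word_prob[w] for w in words)

    print(document_likelihood(["cheap", "offer"], word_prob_C))  # 0.2 * 0.15 = 0.03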

The question that we desire to answer is: what is the probability that a given document D belongs to a given class C?

Now, by the definition of conditional probability,

p(D | C) = p(D and C) / p(C)

and

p(C | D) = p(D and C) / p(D)

Equating the two expressions for p(D and C) and solving for p(C | D) yields Bayes' theorem, which turns these into a statement of the posterior probability in terms of the likelihood:

p(C | D) = (p(C) / p(D)) · p(D | C)
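As a quick sanity check of these identities, here is a toy Python calculation; the three input probabilities are arbitrary illustrative values, chosen only so that the joint probability is consistent with both conditionals.

    p_C = 0.4          # prior p(C)
    p_D = 0.25         # evidence p(D)
    p_D_and_C = 0.1    # joint probability p(D and C)

    p_D_given_C = p_D_and_C / p_C   # p(D | C) = p(D and C) / p(C)  -> 0.25
    p_C_given_D = p_D_and_C / p_D   # p(C | D) = p(D and C) / p(D)  -> 0.4

    # Bayes' theorem: p(C | D) = (p(C) / p(D)) * p(D | C)
    assert abs(p_C_given_D - (p_C / p_D) * p_D_given_C) < 1e-12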

Assume for the moment that there are only two classes, S and not-S.

p(D | S) = product over i of p(wi | S)

and

p(D | not-S) = product over i of p(wi | not-S)

Using the Bayesian result above, we can write:

p(S | D) = (p(S) / p(D)) · product over i of p(wi | S)
p(not-S | D) = (p(not-S) / p(D)) · product over i of p(wi | not-S)

Dividing one by the other, the factor p(D) cancels, giving:

p(S | D) / p(not-S | D) =
(p(S) · product over i of p(wi | S)) / (p(not-S) · product over i of p(wi | not-S))

This can be re-factored as:

(p(S) / p(not-S)) · product over i of (p(wi | S) / p(wi | not-S))

Thus, the probability ratio p(S | D) / p(not-S | D) can be expressed in terms of a series of likelihood ratios. The actual probability p(S | D) is easily recovered from this ratio: since p(S | D) + p(not-S | D) = 1, writing r for the ratio gives p(S | D) = r / (1 + r).
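This recovery step is small enough to show directly in Python; the ratio value 3.0 below is arbitrary.

    def posterior_from_ratio(r):
        # p(S | D) = r / (1 + r), using p(S | D) + p(not-S | D) = 1.
        return r / (1.0 + r)

    print(posterior_from_ratio(3.0))  # 0.75: S is three times as likely as not-S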

Taking the logarithm of all these ratios, we have:

ln(p(S | D) / p(not-S | D)) =
ln(p(S) / p(not-S)) + sum over i of ln(p(wi | S) / p(wi | not-S))

Such log-likelihood ratios are a common tool in statistics. In the case of two mutually exclusive alternatives (such as this example), the conversion of a log-likelihood ratio to a probability takes the form of a sigmoid curve: with L = ln(p(S | D) / p(not-S | D)), we have p(S | D) = 1 / (1 + e^(-L)).
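Putting the pieces together, here is a self-contained Python sketch of the whole two-class procedure: it sums the per-word log-likelihood ratios and converts the result to p(S | D) with the sigmoid. The class priors and word-probability tables are hypothetical values for illustration only.

    from math import log, exp

    p_S, p_not_S = 0.5, 0.5   # assumed class priors
    p_w_S     = {"cheap": 0.20, "offer": 0.15, "meeting": 0.01}  # p(wi | S)
    p_w_not_S = {"cheap": 0.02, "offer": 0.03, "meeting": 0.10}  # p(wi | not-S)

    def log_ratio(words):
        # ln(p(S | D) / p(not-S | D)) as a prior term plus a sum of
        # per-word log-likelihood ratios.
        llr = log(p_S / p_not_S)
        for w in words:
            llr += log(p_w_S[w] / p_w_not_S[w])
        return llr

    def prob_spam(words):
        # Sigmoid conversion of the log-likelihood ratio to p(S | D).
        return 1.0 / (1.0 + exp(-log_ratio(words)))

    print(prob_spam(["cheap", "offer"]))  # about 0.98: looks like spam
    print(prob_spam(["meeting"]))         # about 0.09: looks like non-spam

Working in log space is also the practical choice: for long documents the raw products of word probabilities underflow to zero in floating point, while the corresponding sums of logarithms remain well-behaved.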


In real life, the naive Bayes approach is more powerful than might be expected from the extreme simplicity of its model; in particular, it is fairly robust in the presence of non-independent attributes wi. Recent theoretical analysis has shown why the naive Bayes classifier is so robust.
