Lecture 3: Probability and Naive Bayes Classification

Today’s Topic

Using Bayes’ theorem for machine learning. You should do some revision on the use of Bayes’ theorem in general. In this lecture you will look at how to use Bayes’ theorem to build a spam detector. One important idea to take away from this lecture is that there are a variety of ways of implementing spam detection: in particular, there are different feature models you can use, and these give you different ways of calculating the relevant probabilities. It is important that you understand the differences between these ways of implementing spam detection.

Reading Guide

What should I know by the end of this lecture?

  • How do I use Bayes’ theorem?
  • How do I use Bayes’ theorem to build a simple spam detector that only uses one word?
  • What is the independence assumption in the naive Bayes algorithm?
  • What are the various models for estimating the probabilities for spam detection?
  • What is Laplacian smoothing and how does it work in the context of spam detection?
  • When do you need to use logarithms to calculate the relevant probabilities, and how do you use them?

Subsections of Lecture 3: Probability and Naive Bayes Classification

Naive Bayes for Spam Classification

There are a lot of tutorials and YouTube videos out there on using Naive Bayes for document classification. None of these tutorials are wrong, but they often hide some subtle points that will confuse you if you think about them too hard. In this post I want to explain what is really going on in Naive Bayes for spam classification.

This post assumes that you are already familiar with Bayes’ theorem.

Rather foolishly I did all the calculations in this post by hand. If you find any errors then please report them to me.

Our data set

To make things more concrete we will work on a very small data set where we can do the calculations directly. We are classifying micro-tweets of exactly 3 words. Our training set is as follows, where $S$ indicates that a message is spam and $\neg S$ indicates that a message is not spam.
| Number | Tweet | Spam ($S$ or $\neg S$) |
|---|---|---|
| 1 | money aardvark boondoggle | $S$ |
| 2 | money money money | $S$ |
| 3 | money money world | $S$ |
| 4 | money world world | $S$ |
| 5 | viagra money back | $S$ |
| 6 | viagra heart honey | $S$ |
| 7 | aardvark boondoggle world | $\neg S$ (not spam) |
| 8 | honey honey honey | $\neg S$ (not spam) |
| 9 | viagra heart money | $\neg S$ (not spam) |
| 10 | money honey now | $\neg S$ (not spam) |

Background: Classifying with only one word.

As a warm up let's just build a classifier that uses one particular word $w = \mathrm{money}$. Bayes' theorem should be familiar to you by now: $$ P(S|w) = \frac{P(w|S)P(S)}{P(w)} $$ $P(S|w)$ is the probability that a message is spam given that the word $w$ occurs in it. Using Bayes' theorem we can calculate the probability that a message containing the word $w$ is spam. We can estimate the values $P(w|S)$, $P(S)$ and $P(w)$ from our data set. $$ P(S) = \frac{\mathrm{number\ spam}}{\mathrm{total\ messages}} = \frac{6}{10} $$ To estimate $P(w|S)$ we have to count the number of spam messages in which a particular word occurs. So $$ P(\mathrm{money}|S) = \frac{5}{6} $$ When you are only considering a single word, estimating $P(\mathrm{money})$ is easy. It is the ratio of the number of tweets that contain the word 'money' to the total number of tweets. The word 'money' appears in $7$ tweets. So $$ P(\mathrm{money}) = \frac{7}{10} $$ So if we get a message that contains the word 'money' we can calculate the probability that it is a spam message. $$ P(S|\mathrm{money}) = \frac{P(\mathrm{money}|S)P(S)}{P(\mathrm{money})} = \frac{\frac{5}{6}\cdot\frac{6}{10}}{\frac{7}{10}} = \frac{5}{10}\cdot\frac{10}{7} = \frac{5}{7} \approx 0.71 $$ There is an important identity that will become useful later on: $$ P(\mathrm{money}) = P(\mathrm{money}|S)P(S) + P(\mathrm{money}|\neg S)P(\neg S) $$ So $$ P(\mathrm{money})= \frac{5}{6}\cdot\frac{6}{10} + \frac{2}{4}\cdot\frac{4}{10} = \frac{7}{10} $$
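
To make the arithmetic concrete, here is a minimal Python sketch of this single-word calculation on the toy data set (the function `p_spam_given_word` and the data layout are my own, not part of any library):

```python
# The tweets and labels copied from the table above; True means spam.
tweets = [
    ("money aardvark boondoggle", True), ("money money money", True),
    ("money money world", True), ("money world world", True),
    ("viagra money back", True), ("viagra heart honey", True),
    ("aardvark boondoggle world", False), ("honey honey honey", False),
    ("viagra heart money", False), ("money honey now", False),
]

def p_spam_given_word(word, data):
    """P(S | word) for a single word, counting tweets that contain the word."""
    spam = [t for t, is_spam in data if is_spam]
    p_s = len(spam) / len(data)                                     # P(S) = 6/10
    p_w_given_s = sum(word in t.split() for t in spam) / len(spam)  # P(w|S) = 5/6
    p_w = sum(word in t.split() for t, _ in data) / len(data)       # P(w) = 7/10
    return p_w_given_s * p_s / p_w

print(p_spam_given_word("money", tweets))   # ≈ 0.714
```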

First Pitfall: Estimating the probabilities

So how do we estimate the probabilities $P(w|S)$, $P(S)$ and $P(w)$? What do they really mean? The probabilities $P(S)$ and $P(\neg S)$ are unambiguous. They are just the probabilities that a tweet is spam or not. But $P(w|S)$, $P(w|\neg S)$, and $P(w)$ can mean different things depending on exactly which model we use to calculate the probabilities.

There are two models:

  • (A): To calculate $P(\mathrm{money}|S)$: there are 6 messages that are spam, and 5 of them (1, 2, 3, 4, 5) contain the word 'money', so $P(\mathrm{money}|S) = 5/6$; and of the 10 messages, 7 of them (1, 2, 3, 4, 5, 9, 10) contain the word 'money', so $P(\mathrm{money}) = 7/10$. This is exactly what we did above.

  • (B): To calculate $P(\mathrm{money})$: there are $10\times 3 = 30$ words in our training set and the word 'money' appears 10 times, so $$P(\mathrm{money}) = 10/30.$$ To calculate $P(\mathrm{money}|S)$: there are 6 spam messages, each 3 words long. Among the words of the spam messages the word 'money' appears 8 times. So $$ P(\mathrm{money} | S) = \frac{8}{3 \times 6} = \frac{8}{18} = \frac{4}{9} $$ The probability of a message being spam is still $6/10$. So if I get the message 'money for nothing' then the probability that it is spam is calculated as before: $$ P(S|\mathrm{money}) = \frac{P(\mathrm{money}|S)P(S)}{P(\mathrm{money})} = \frac{{\frac{8}{3 \times 6}\times \frac{6}{10}}}{ \frac{10}{30}} = \frac{8}{10} $$ It seems that if spammers are prone to repeat words in their messages then this increases the probability that a message containing that word is spam.
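
The difference between the two counting schemes is easy to see in code. A minimal sketch, assuming the six spam tweets from the table above:

```python
spam_tweets = [
    "money aardvark boondoggle", "money money money", "money money world",
    "money world world", "viagra money back", "viagra heart honey",
]
word = "money"

# Model (A): count spam tweets that contain the word at least once.
p_a = sum(word in t.split() for t in spam_tweets) / len(spam_tweets)   # 5/6

# Model (B): count occurrences of the word over all words in spam tweets.
all_spam_words = [w for t in spam_tweets for w in t.split()]
p_b = all_spam_words.count(word) / len(all_spam_words)                 # 8/18

print(p_a, p_b)   # 0.833..., 0.444...
```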

So how do you calculate the probability that the message 'money money mammon' is spam? In model (A) it does not matter how many times 'money' appears in a message: you only count the number of messages that 'money' appears in. In model (B), on the other hand, there is some weighting by the number of times that a word appears. But to calculate $P(S|\mathrm{money}^2)$ (where $\mathrm{money}^2$ is shorthand for 'money' appearing twice) we have to calculate $P(\mathrm{money}^2|S)$. How you do this depends a bit on your model and the assumptions underlying the model. We'll get to that later.
Take home message

So the first take home message is: be careful how you count the words and how you calculate the probabilities. If you confuse model (A) and model (B) while writing your code you will get strange answers (as I did at one point).

Naive Bayes: first version

We are going to use model (A). That is, we are going to ignore how many times a word appears in a message. We are only interested in whether the word appears in the message or not. One word is not going to make much of a spam classifier. Even in our little data set above, the word 'money' can appear in spam and non-spam messages. We will get a better classifier if we take into account more words. Our data set is quite small, so for each word we can count the number of spam tweets it appears in and the number of non-spam tweets it appears in.

| Word | Spam tweets containing it | Non-spam tweets containing it |
|---|---|---|
| $w_1 = \mathrm{money}$ | 5 | 2 |
| $w_2 = \mathrm{world}$ | 2 | 1 |
| $w_3 = \mathrm{viagra}$ | 2 | 1 |
| $w_4 = \mathrm{aardvark}$ | 1 | 1 |
| $w_5 = \mathrm{heart}$ | 1 | 1 |
| $w_6 = \mathrm{boondoggle}$ | 1 | 1 |
| $w_7 = \mathrm{honey}$ | 1 | 2 |
| $w_8 = \mathrm{back}$ | 1 | 0 |
| $w_9 = \mathrm{now}$ | 0 | 1 |

You can turn these counts into probabilities, and thus you can calculate quantities like $P(\mathrm{money}|S) = 5/6$ and $P(\mathrm{money}|\neg S) = 2/4$. Suppose I receive the message 'viagra money boondoggle': what is the probability that it is a spam message? When we use Bayes' theorem we have to calculate $$P(\mathrm{viagra} \land \mathrm{money} \land \mathrm{boondoggle}|S)$$ where $$\mathrm{viagra} \land \mathrm{money} \land \mathrm{boondoggle}$$ is the event that the words 'viagra', 'money' and 'boondoggle' all appear in a message.

The Naive in Naive Bayes

We need to make an independence assumption: given that a message is spam (or that it is not), the occurrences of different words are independent. That is $$ P(w_1 \land w_2 \land \cdots \land w_n | S) = P(w_1|S)P(w_2|S) \cdots P(w_n|S) $$ and $$ P(w_1 \land w_2 \land \cdots \land w_n | \neg S) = P(w_1|\neg S)P(w_2|\neg S) \cdots P(w_n|\neg S) $$ Note that this is a weaker assumption than simply saying $$ P(w_1 \cdots w_n) = \prod_{1\leq i \leq n} P(w_i) $$

Take home message

Note that because we have made the assumptions that $$P(w_1 \land w_2 \land \cdots \land w_n | S) = \prod_{i=1}^n P(w_i|S)$$ and $$P(w_1 \land w_2 \land \cdots \land w_n | \neg S) = \prod_{i=1}^n P(w_i|\neg S)$$ it does not make sense to estimate $P(w_1 \cdots w_n)$ directly as a product of the individual probabilities $P(w_i)$ taken from the data set. Later on we will need the probability $P(w_1 \cdots w_n)$ when we calculate the probability that a message is spam. If you want to calculate the probability $P(w_1 \cdots w_n)$ then you must use the identity that $P(w_1 \cdots w_n)$ equals $$ P(w_1 \cdots w_n|S)P(S) + P(w_1 \cdots w_n|\neg S)P(\neg S) $$

The independence assumption is why Naive Bayes is referred to as naive, and it is an assumption that could be questioned: the probability of one word appearing in a message should not really be independent of another word. If a spammer writes 'money' then he is likely to also include 'viagra' in the message. Even so, assuming independence works very well in practice.

Calculating the spam probability.

We can now apply Bayes' theorem: $P(S | \mathrm{viagra} \land \mathrm{money} \land \mathrm{boondoggle})$ equals $$ \frac{ P(\mathrm{viagra} \land \mathrm{money} \land \mathrm{boondoggle}|S) P(S)}{P(\mathrm{viagra} \land \mathrm{money} \land \mathrm{boondoggle})} $$ From the independence assumption we have that $P(S | \mathrm{viagra} \land \mathrm{money} \land \mathrm{boondoggle})$ equals $$ \frac{ P(\mathrm{viagra}|S)P(\mathrm{money}|S)P(\mathrm{boondoggle}|S)P(S)} {P(\mathrm{viagra} \land \mathrm{money} \land \mathrm{boondoggle})} $$ To calculate $P(\mathrm{viagra} \land \mathrm{money} \land \mathrm{boondoggle})$ we use the identity above. Taking the product $$P(\mathrm{viagra})P(\mathrm{money})P(\mathrm{boondoggle})$$ is the wrong answer. Instead we get that $P(\mathrm{viagra} \land \mathrm{money} \land \mathrm{boondoggle})$ equals $P(\mathrm{viagra} \land \mathrm{money} \land \mathrm{boondoggle}|S)P(S)$ plus $P(\mathrm{viagra} \land \mathrm{money} \land \mathrm{boondoggle}|\neg S)P(\neg S)$. Then by the independence assumption $P(\mathrm{viagra} \land \mathrm{money} \land \mathrm{boondoggle}|S)$ equals $$ P(\mathrm{viagra}|S)P(\mathrm{money}|S)P(\mathrm{boondoggle}|S) = \frac{2}{6}\cdot\frac{5}{6}\cdot\frac{1}{6} = \frac{5}{108} $$ Putting the numbers in we get that $P(\mathrm{viagra} \land \mathrm{money} \land \mathrm{boondoggle})$ equals $$ \left(\frac{2}{6}\cdot \frac{5}{6}\cdot \frac{1}{6}\right)\frac{6}{10} + \left(\frac{1}{4}\cdot \frac{2}{4}\cdot \frac{1}{4}\right)\frac{4}{10} \approx 0.04 $$ So $P(S|\mathrm{viagra} \land \mathrm{money} \land \mathrm{boondoggle})$ equals $$ \frac{\frac{5}{108}\cdot\frac{6}{10}}{0.04} \approx 0.69 $$
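
Here is a minimal Python sketch of this whole calculation under model (A); the dictionaries of per-class counts are read off the table above, and the names are my own:

```python
spam_count, ham_count = 6, 4
spam_doc_counts = {"money": 5, "world": 2, "viagra": 2, "aardvark": 1, "heart": 1,
                   "boondoggle": 1, "honey": 1, "back": 1, "now": 0}
ham_doc_counts = {"money": 2, "world": 1, "viagra": 1, "aardvark": 1, "heart": 1,
                  "boondoggle": 1, "honey": 2, "back": 0, "now": 1}

def posterior_spam(words):
    # Numerators of Bayes' theorem for S and for ¬S.
    num_s = spam_count / 10
    num_ns = ham_count / 10
    for w in words:
        num_s *= spam_doc_counts[w] / spam_count   # P(w|S), model (A)
        num_ns *= ham_doc_counts[w] / ham_count    # P(w|¬S), model (A)
    # Denominator via P(words) = P(words|S)P(S) + P(words|¬S)P(¬S).
    return num_s / (num_s + num_ns)

print(posterior_spam(["viagra", "money", "boondoggle"]))   # ≈ 0.69
```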

Not calculating the whole probability.

When implementing the spam filter we do not actually need to calculate the denominators. We just compare the expressions $P(\mathrm{viagra}|S)P(\mathrm{money}|S)P(\mathrm{boondoggle}|S)P(S)$ and $P(\mathrm{viagra}|\neg S)P(\mathrm{money}|\neg S)P(\mathrm{boondoggle}|\neg S)P(\neg S)$ and see which one is bigger. This is also important because the numbers get smaller and smaller as you multiply more of them together, and you can end up with floating point underflow errors. If the numbers get too small then you have to work with the logarithms of the probabilities and do additions rather than multiplications.
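
A sketch of the comparison in log space, reusing the counts from the previous snippet (note that a zero count would make the logarithm blow up, which is what the smoothing in the next section addresses):

```python
import math

def more_likely_spam(words):
    # Work with log probabilities: products become sums, avoiding underflow.
    log_s = math.log(spam_count / 10)
    log_ns = math.log(ham_count / 10)
    for w in words:
        log_s += math.log(spam_doc_counts[w] / spam_count)
        log_ns += math.log(ham_doc_counts[w] / ham_count)
    # No denominator needed: we only compare the two numerators.
    return log_s > log_ns

print(more_likely_spam(["viagra", "money", "boondoggle"]))   # True
```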

Laplacian Smoothing

What if we have the message 'now money viagra'? If we look at our data set, the word 'now' has not appeared in a spam message. There could be two reasons for this: one is that a spam message will never contain the word 'now' (unlikely); the other is that we just do not have a spam message with 'now' appearing in our training set. If we use model (A) and calculate the probability that our message is spam we get that $P(S|\mathrm{now}\land\mathrm{money}\land\mathrm{viagra})$ equals $$ \frac{P(\mathrm{now}|S)P(\mathrm{money}|S)P(\mathrm{viagra}|S)P(S)}{P(\mathrm{now}\land\mathrm{money}\land\mathrm{viagra})} $$ which equals $$\frac{0 \cdot \frac{5}{6} \cdot \frac{2}{6}\cdot\frac{6}{10}}{P(\mathrm{now}\land\mathrm{money}\land\mathrm{viagra})} = 0$$ So even though the words 'money' and 'viagra' are pretty good indicators of a message being spam, we get probability $0$. To get around this we add one to all our counts to avoid probability $0$ estimates, and adjust the total count so as to avoid any probabilities greater than $1$. So in model (A), if we are considering the $9$ words as above, then we estimate $P(\mathrm{now}|S)$ to be $$ \frac{0 + 1}{6 + 1} $$ instead of $$ \frac{0}{6} $$ If you had a word that appeared in all 6 of the spam tweets then you would get an estimate of $\frac{6+1}{6+1}$, which is $1$. I leave it as an exercise to work out the correct thing to do in model (B).
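
As a sketch, the add-one estimate described above (using the same counting as in the earlier snippets) could look like this:

```python
def smoothed_p_word_given_spam(word):
    # Add one to the count and one to the total, as described in the text.
    return (spam_doc_counts.get(word, 0) + 1) / (spam_count + 1)

print(smoothed_p_word_given_spam("now"))     # 1/7 ≈ 0.14 instead of 0
print(smoothed_p_word_given_spam("money"))   # 6/7 ≈ 0.86 instead of 5/6
```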

Feature Modelling

Almost all machine learning algorithms require numbers and vectors as inputs. Our Naive Bayes classifier does not really work with words, but with feature vectors. There are different possible models, but we use something similar to model (A). First we take our data set and find the $n$ most popular words. The most popular word in a data set consisting of English messages is typically the word 'the'. You can improve things by filtering out popular words that don't contribute much to a message being spam or not (these are referred to as stop words). We will not worry about that here, but a word like 'the' is equally likely to appear in a spam tweet or a non-spam tweet, so it is better to ignore it. Then we turn each message into a vector of length $n$ where each entry is $1$ or $0$, and the $i$th entry is $1$ if the message contains the $i$th most popular word. So in a typical English message data set 'the' is the most popular word and 'to' is the second most popular word, and if our message contained the words 'to the' then the first two entries of its feature vector would have the value $1$. $$ f = (f_1, \ldots , f_i, \ldots, f_n) $$ where $f_i$ is $1$ if the $i$th most popular word $w_i$ occurs in the message and $0$ otherwise. It is easy to write a function that takes a message and turns it into such a feature vector.

Given our training set we can estimate, for each word and each of our two classes $S$ and $\neg S$, the probabilities $P(w_i|S)$, $P(\overline{w}_i|S)$, $P(w_i| \neg S)$ and $P(\overline{w}_i | \neg S)$, where $w_i$ is the event that word $i$ occurs in the message and $\overline{w}_i$ is the event that word $i$ does not occur in the message. In our example above we only have $9$ words in our data set, and we keep the ordering from the table above (roughly in order of popularity): 'money', 'world', 'viagra', 'aardvark', 'heart', 'boondoggle', 'honey', 'back', 'now'. You have to break ties (words that are equally popular) and you have to do it consistently.

So given the message 'aardvark money now' its feature vector would be $$ f = (1,0,0,1,0,0,0,0,1) $$ This vector $f$ corresponds to the event $$ w_1\overline{w}_2\overline{w}_3w_4\overline{w}_5\overline{w}_6\overline{w}_7\overline{w}_8w_9 $$ So to use Bayes' theorem to work out the probability that the tweet is spam we have to calculate the quantity $$ \frac{P(w_1\overline{w}_2\overline{w}_3w_4\overline{w}_5\overline{w}_6\overline{w}_7\overline{w}_8w_9|S)P(S)}{P(w_1\overline{w}_2\overline{w}_3w_4\overline{w}_5\overline{w}_6\overline{w}_7\overline{w}_8 w_9)} $$ Calculating $P(w_1\overline{w}_2\overline{w}_3w_4\overline{w}_5\overline{w}_6\overline{w}_7\overline{w}_8w_9|S)$ is easily done by the independence assumption. It is the product of the terms $P(w_1|S)$, $P(\overline{w}_2|S)$, $P(\overline{w}_3|S)$, $P(w_4|S)$, $P(\overline{w}_5|S)$, $P(\overline{w}_6|S)$, $P(\overline{w}_7|S)$, $P(\overline{w}_8|S)$ and $P(w_9|S)$. All these values are easily estimated from our data set. For example, $P(\overline{w}_1|S)$ is the probability that the word 'money' does not appear in a spam tweet. We had 6 spam tweets and 5 of them contained the word 'money', so we get that $P(\overline{w}_1|S)$ equals $1/6$.
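
A minimal sketch of the conversion from a message to a 0/1 feature vector, using the word ordering above (the function name is mine):

```python
vocabulary = ["money", "world", "viagra", "aardvark", "heart",
              "boondoggle", "honey", "back", "now"]

def to_feature_vector(message):
    # 1 in position i if the i-th vocabulary word occurs in the message, 0 otherwise.
    words = set(message.split())
    return [1 if w in words else 0 for w in vocabulary]

print(to_feature_vector("aardvark money now"))   # [1, 0, 0, 1, 0, 0, 0, 0, 1]
```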

Model (A) and Feature Vectors

If you go back to model (A) and you try to estimate whether a message is spam or not, then using the same message we would only need to calculate $$ \frac{P(w_1|S)P(w_4|S)P(w_9|S)P(S)}{P(w_1\land w_4 \land w_9)} $$ since $w_1$ is 'money', $w_4$ is 'aardvark' and $w_9$ is 'now'. We are throwing away information about the words that do not occur in the tweet. Does this matter? More importantly, is this calculation incorrect? To simplify things imagine that we only had two words in our feature vector, $W_1$ and $W_2$. Then given a message there are 4 possible atomic events: $$ W_1 W_2, W_1 \overline{W}_2, \overline{W}_1 W_2, \overline{W}_1 \overline{W}_2$$ What do we mean when we write $P(W_1|S)$? Looking at our atomic events we actually mean $$ P( W_1 W_2 \lor W_1\overline{W}_2|S) $$ Any event is a union of the atomic events in your probability model. Using the independence assumption for $W_1$ and $W_2$, and the basic rule of probability that $P(A\lor B)$ equals $P(A)+P(B)$ when the events $A$ and $B$ are disjoint atomic events, we get that $P(W_1|S)$ equals $P( W_1 W_2 \lor W_1\overline{W}_2|S)$ which equals $$ P(W_1|S)P(W_2|S) + P(W_1|S)P(\overline{W}_2|S)$$ Refactoring gives $$P(W_1|S)(P(W_2|S) + P(\overline{W}_2|S)) $$ and since $P(W_2|S) + P(\overline{W}_2|S)$ equals $1$ we get $P(W_1|S)$. So if we ignore any information about $W_2$ then we get a factor of $1$. If we use what information we have about $W_2$ then we get a better estimate for the probability. You can show that the same applies if you have lots of words in your feature vector. So our original model (A) is not wrong, but the feature vector model, where we take into account whether or not each word appears when we are estimating the probabilities, gives a better estimate of whether the message is spam or not. Note that the above argument depends on our independence assumption (the naive in Naive Bayes). If you did not look at any words then the only information that you would have is $P(S)$ or $P(\neg S)$; as you look at more words in your feature vector you get a better estimate of the probability that the message is spam or not.
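
A quick numerical check of the refactoring step, taking $W_1$ to be 'money' and $W_2$ to be 'world' with the model (A) estimates from the table:

```python
p_w1_s = 5 / 6   # P(W1|S), W1 = 'money'
p_w2_s = 2 / 6   # P(W2|S), W2 = 'world'

# Under the independence assumption, the two atomic events containing W1
# sum back to P(W1|S).
joint_with_w2 = p_w1_s * p_w2_s            # P(W1 ∧ W2 | S)
joint_without_w2 = p_w1_s * (1 - p_w2_s)   # P(W1 ∧ ¬W2 | S)

print(joint_with_w2 + joint_without_w2, p_w1_s)   # both 0.8333...
```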

Model (B) with multiple words

How do you calculate the probability that the message 'money money boondoggle' is spam? We have already assumed that the probabilities of words occurring in a spam or non-spam tweet are independent. If we also assume that the probability of a word appearing $k$ times is $$ P(w^k|S) = P(w|S)^k $$ that is, each occurrence is independent, then we can calculate the probability that a message containing multiple occurrences of a word is spam or not, but only if you use model (B) to calculate the probabilities. You should not mix up model (A) and model (B). It does not make sense in model (A) to ask what the probability of a word occurring $k$ times in a spam message is. We can only ask whether a message contains the word or not.
Take Home Message

If you want to take into account multiple words then do not use model (A) to calculate your probabilities.

Model (B) with multiple words and feature vectors.

The feature vector approach for model (A) considered vectors where the entries are $0$ or $1$: given a feature vector $f$, the $i$th entry is $1$ if the word appears in the message and $0$ otherwise. For model (B) we would instead have a feature vector where the $i$th entry tells you how many times the word appears in the message. Thus for our message 'money money boondoggle' the feature vector would be (using the same ordering as above): $$ (2,0,0,0,0,1,0,0,0) $$ Again it is not hard to use the information that a word appears $0$ times.
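
A sketch of the count-based feature vector and the corresponding model (B) likelihood, assuming the `vocabulary` list from the earlier sketch and the per-word occurrence counts over the 18 words of the spam tweets:

```python
spam_word_counts = {"money": 8, "world": 3, "viagra": 2, "aardvark": 1, "heart": 1,
                    "boondoggle": 1, "honey": 1, "back": 1, "now": 0}
total_spam_words = 18   # 6 spam tweets of 3 words each

def count_vector(message):
    # The i-th entry is how many times the i-th vocabulary word occurs in the message.
    words = message.split()
    return [words.count(w) for w in vocabulary]

def likelihood_given_spam(message):
    # Model (B): each occurrence of a word contributes an independent factor P(w|S).
    p = 1.0
    for w, k in zip(vocabulary, count_vector(message)):
        p *= (spam_word_counts[w] / total_spam_words) ** k
    return p

print(count_vector("money money boondoggle"))           # [2, 0, 0, 0, 0, 1, 0, 0, 0]
print(likelihood_given_spam("money money boondoggle"))  # (8/18)^2 * (1/18) ≈ 0.011
```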

Take home messages

  • Are you using model (A) or model (B)? Don’t get them confused, especially when you are estimating the probabilities $P(w_i|S)$ from the training data.

  • Are you using the negative information, the information that a word does not occur? If you ignore it your maths is still correct, but you are not using all the information that you have.

  • To understand how this is related to other machine learning algorithms, you have to understand that we take a message and construct a feature vector. Depending on whether you are using model (A) or (B), your feature vector either has entries that are $0$ or $1$, or entries that are non-negative integers telling you how many times a word occurs in a message. Feature vectors are an example of the modelling that you often have to do in machine learning. Your data set does not always come ready packaged as a set of vectors.

  • If you watch some random video on the internet, it is not always clear which model they are using when they calculate the probabilities.

The documentation for scikit-learn has a nice entry on naive Bayes, which discusses the various modelling options as well as giving links to various interesting articles on the different modelling approaches to naive Bayes.
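
For comparison, here is a minimal scikit-learn sketch on the toy data set; BernoulliNB corresponds roughly to the 0/1 feature-vector model and MultinomialNB to the count-based model (B). Both apply add-one smoothing by default, so the numbers will not exactly match the hand calculations above.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import BernoulliNB, MultinomialNB

messages = ["money aardvark boondoggle", "money money money", "money money world",
            "money world world", "viagra money back", "viagra heart honey",
            "aardvark boondoggle world", "honey honey honey", "viagra heart money",
            "money honey now"]
labels = [1, 1, 1, 1, 1, 1, 0, 0, 0, 0]   # 1 = spam

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(messages)            # word-count feature vectors
test = vectorizer.transform(["viagra money boondoggle"])

for model in (BernoulliNB(), MultinomialNB()):    # BernoulliNB binarizes the counts itself
    model.fit(X, labels)
    print(type(model).__name__, model.predict_proba(test)[0, 1])
```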