Gaussian Naive Bayes
Naïve Bayes is a probabilistic machine learning algorithm used for many classification tasks and is based on Bayes' theorem. Gaussian Naïve Bayes is an extension of Naïve Bayes. While other distributions can be used to model the features, the Gaussian or normal distribution is the simplest to implement, because you only need to estimate the mean and standard deviation from the training data.
What is the Naive Bayes Algorithm?
Naive Bayes is a probabilistic machine learning algorithm that can be used in several classification tasks. Typical applications of Naive Bayes include document classification, spam filtering, and prediction. The algorithm is based on the work of Thomas Bayes, hence its name.
The name “Naïve” is used because the algorithm assumes that the features in its model are independent of each other: a change in the value of one feature does not directly affect the value of any other feature. The main advantage of the Naïve Bayes algorithm is that it is simple yet powerful.
It is based on a probabilistic model, so the algorithm can be coded easily and predictions can be made quickly in real time. Hence this algorithm is a typical choice for real-world problems where the model must respond to user requests instantly. But before we dive deep into Naïve Bayes and Gaussian Naïve Bayes, we must know what is meant by conditional probability.
Conditional Probability Explained
We can understand conditional probability better with an example. When you toss a coin, the probability of getting a head or a tail is 50%. Similarly, the probability of rolling a 4 with a fair six-sided die is 1/6 ≈ 0.17.
If we take a pack of cards, what is the probability of getting a queen given the condition that it is a spade? Since the condition is already set that it must be a spade, the denominator or the selection set becomes 13. There is only one queen among the spades, hence the probability of picking the queen of spades becomes 1/13 ≈ 0.077.
The conditional probability of event A given event B means the probability of event A occurring given that event B has already occurred. Mathematically, the conditional probability of A given B can be denoted as P[A|B] = P[A AND B] / P[B].
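To make the arithmetic concrete, here is a minimal Python sketch that checks the card example by counting. The `deck` list and `prob` helper are illustrative constructions, not from any library.

```python
# A minimal sketch of P[A | B] = P[A AND B] / P[B] using the card example.
deck = [(rank, suit)
        for suit in ["spades", "hearts", "diamonds", "clubs"]
        for rank in ["A", "2", "3", "4", "5", "6",
                     "7", "8", "9", "10", "J", "Q", "K"]]

def prob(event, space):
    """Fraction of outcomes in `space` that satisfy `event`."""
    return sum(1 for outcome in space if event(outcome)) / len(space)

is_queen = lambda card: card[0] == "Q"
is_spade = lambda card: card[1] == "spades"

# P[Queen AND Spade] / P[Spade] = (1/52) / (13/52) = 1/13
p_queen_given_spade = (prob(lambda c: is_queen(c) and is_spade(c), deck)
                       / prob(is_spade, deck))
print(round(p_queen_given_spade, 4))  # 0.0769
```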
Let us consider a slightly more complex example. Take a school with a total of 100 people. This population can be split along two dimensions: role (Teacher or Student) and gender (Male or Female). Consider the tabulation given below:
|         | Female | Male | Total |
|---------|--------|------|-------|
| Teacher | 8      | 12   | 20    |
| Student | 32     | 48   | 80    |
| Total   | 40     | 60   | 100   |
Here, what is the conditional probability that a given member of the school is a Teacher, given that the person is male?
To calculate this, you will have to filter the sub-population of 60 men and drill down to the 12 male teachers.
So, the conditional probability P[Teacher | Male] = 12/60 = 0.2
P (Teacher | Male) = P (Teacher ∩ Male) / P(Male) = 12/60 = 0.2
Here, the numerator is the probability of being a Teacher (A) and Male (B), divided by the probability of being Male (B). Similarly, the conditional probability of B given A can be calculated. The rule that we use for Naïve Bayes follows from these two notations:
P (A | B) = P (A ∩ B) / P(B)
P (B | A) = P (A ∩ B) / P(A)
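As a quick sanity check, the following sketch recomputes P (Teacher | Male) directly from the table counts; the `counts` dictionary is just a transcription of the table above.

```python
# Verifying the school example: counts taken from the table above.
counts = {
    ("Teacher", "Female"): 8,  ("Teacher", "Male"): 12,
    ("Student", "Female"): 32, ("Student", "Male"): 48,
}
total = sum(counts.values())  # 100

# P(Male) and P(Teacher AND Male), read off as relative frequencies
p_male = sum(v for (role, sex), v in counts.items() if sex == "Male") / total
p_teacher_and_male = counts[("Teacher", "Male")] / total

print(round(p_teacher_and_male / p_male, 2))  # P(Teacher | Male) = 0.2
```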
The Bayes Rule
With the Bayes rule, we go from P (X | Y), which can be estimated from the training dataset, to find P (Y | X). To achieve this, all you need to do is replace A and B with X and Y in the above formulae. For observations, X is the known variable and Y is the unknown variable. For each row of the dataset, we calculate the probability of Y given that X has already occurred.
But what happens when there are more than 2 categories in Y? We must compute the probability of each class of Y; the class with the highest probability wins.
Through Bayes rule, we go from P (X | Y) to find P (Y | X)
Known from training data: P (X | Y) = P (X ∩ Y) / P(Y)
P (Evidence | Outcome)
Unknown – to be predicted for test data: P (Y | X) = P (X ∩ Y) / P(X)
P (Outcome | Evidence)
Bayes Rule: P (Y | X) = P (X | Y) * P (Y) / P (X)
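The small sketch below applies this rule to the school example, showing that Bayes' rule recovers the same 0.2 we obtained by counting; the variable names are illustrative.

```python
# Sketch of Bayes' rule: recover P(Y | X) from P(X | Y), P(Y), and P(X).
p_male_given_teacher = 12 / 20   # P(X | Y): known from "training data"
p_teacher = 20 / 100             # P(Y): prior
p_male = 60 / 100                # P(X): evidence

p_teacher_given_male = p_male_given_teacher * p_teacher / p_male
print(round(p_teacher_given_male, 2))  # 0.2, matching the direct count 12/60
```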
The Naïve Bayes
The Bayes rule provides the formula for the probability of Y given X. But in the real world, there may be multiple X variables. When the features are independent of each other given the class, the Bayes rule can be extended to the Naïve Bayes rule:

P (Y | X1, …, Xn) ∝ P (Y) * P (X1 | Y) * P (X2 | Y) * … * P (Xn | Y)

This independence assumption is what makes Naïve Bayes practical: each likelihood P (Xi | Y) can be estimated separately, so the rule scales to many features where applying the full Bayes formula directly would be infeasible.
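Here is a toy sketch of that product rule for two made-up categorical features; the prior and likelihood tables are invented purely for illustration.

```python
# Toy sketch of the naive product rule: P(Y | x1..xn) ∝ P(Y) * Π P(xi | Y).
priors = {"spam": 0.4, "ham": 0.6}
likelihoods = {          # P(word present | class), made up for illustration
    "spam": {"free": 0.30, "meeting": 0.05},
    "ham":  {"free": 0.02, "meeting": 0.20},
}
features = ["free", "meeting"]

scores = {}
for y in priors:
    score = priors[y]
    for x in features:   # independence lets us simply multiply likelihoods
        score *= likelihoods[y][x]
    scores[y] = score

total = sum(scores.values())  # normalise to obtain posteriors
print({y: round(s / total, 3) for y, s in scores.items()})
# {'spam': 0.714, 'ham': 0.286}
```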
Gaussian Naïve Bayes
So far, we have seen the X's as categorical variables, but how do we compute probabilities when X is a continuous variable? If we assume that X follows a particular distribution, we can use the probability density function of that distribution to calculate the likelihoods.
If we assume that the X's follow a Gaussian or normal distribution, we substitute the probability density of the normal distribution into the likelihood term; this variant is called Gaussian Naïve Bayes. To compute this formula, you need the mean and variance of X:

P (X = x | Y = c) = (1 / (sqrt(2 * pi) * sigma_c)) * exp( -(x - mu_c)^2 / (2 * sigma_c^2) )

In the above formula, mu_c and sigma_c are the mean and standard deviation of the continuous variable X computed for a given class c of Y.
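A minimal implementation of this density function might look as follows; the values of x, mu, and sigma are illustrative placeholders.

```python
import math

# Gaussian probability density; mu and sigma would be estimated
# from the training rows belonging to a given class.
def gaussian_pdf(x, mu, sigma):
    """Density of x under Normal(mu, sigma^2)."""
    return (math.exp(-((x - mu) ** 2) / (2 * sigma ** 2))
            / (math.sqrt(2 * math.pi) * sigma))

print(gaussian_pdf(x=1.8, mu=1.7, sigma=0.1))  # ≈ 2.42 (densities can exceed 1)
```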
Representation for Gaussian Naïve Bayes
Earlier, the probabilities of categorical input values were estimated for each class through frequencies. With continuous inputs, we instead summarise the x's of each class by their mean and standard deviation.

This means that along with the prior probability of each class, we must also store the mean and the standard deviation of every input variable for each class.
mean(x) = 1/n * sum(x)
where n represents the number of instances and x is the value of the input variable in the data.
standard deviation(x) = sqrt(1/n * sum((xi - mean(x))^2))
Here, the square root of the average squared difference between each x and the mean of x is calculated, where n is the number of instances, sum() is the sum function, sqrt() is the square root function, and xi is a specific value of x.
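The sketch below computes these two statistics per class for a single input variable, using the same divide-by-n formulas as above; the tiny dataset is made up for illustration.

```python
import math

# Per-class mean and standard deviation for one continuous feature.
data = {"class_0": [4.9, 5.1, 5.0, 4.8],   # illustrative training values
        "class_1": [6.2, 6.0, 6.4, 6.1]}

def mean(xs):
    return sum(xs) / len(xs)

def std(xs):
    m = mean(xs)
    return math.sqrt(sum((xi - m) ** 2 for xi in xs) / len(xs))

params = {c: (mean(xs), std(xs)) for c, xs in data.items()}
print(params)  # class_0: (4.95, ~0.112), class_1: (6.175, ~0.148)
```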
Predictions with the Gaussian Naïve Bayes Model
The Gaussian probability density function can be used to make predictions: substitute the new input value into the function along with the stored mean and standard deviation, and it returns an estimate of the likelihood of that input value for each class.
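Putting the pieces together, a from-scratch prediction step might look like the sketch below. The priors and per-feature (mean, std) parameters are assumed to have been estimated from training data and are illustrative here.

```python
import math

def gaussian_pdf(x, mu, sigma):
    return (math.exp(-((x - mu) ** 2) / (2 * sigma ** 2))
            / (math.sqrt(2 * math.pi) * sigma))

priors = {"class_0": 0.5, "class_1": 0.5}
params = {  # (mean, std) per feature, assumed estimated from training data
    "class_0": [(4.95, 0.11), (3.40, 0.20)],
    "class_1": [(6.18, 0.15), (2.80, 0.25)],
}

def predict(x_new):
    # Score each class with prior * Gaussian likelihood per feature,
    # then pick the class with the highest score.
    scores = {}
    for c in priors:
        score = priors[c]
        for xi, (mu, sigma) in zip(x_new, params[c]):
            score *= gaussian_pdf(xi, mu, sigma)
        scores[c] = score
    return max(scores, key=scores.get)

print(predict([5.0, 3.3]))  # class_0
```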
Naïve Bayes Classifier
The Naïve Bayes classifier assumes that the value of one feature is independent of the value of any other feature. Naïve Bayes classifiers need training data to estimate the parameters required for classification. Due to their simple design and application, Naïve Bayes classifiers are suitable for many real-life scenarios.
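In practice you would rarely implement all of this by hand; for example, scikit-learn ships a GaussianNB estimator that fits the per-class means and variances for you. This minimal sketch assumes scikit-learn is installed.

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = GaussianNB()                 # estimates per-class means and variances
model.fit(X_train, y_train)
print(model.score(X_test, y_test))  # accuracy on held-out data
```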
Conclusion
The Gaussian Naïve Bayes classifier is a quick and simple technique that achieves a good level of accuracy without much effort.
If you’re interested in learning more about AI and machine learning, check out IIIT-B & upGrad’s PG Diploma in Machine Learning & AI which is designed for working professionals and offers 450+ hours of rigorous training, 30+ case studies & assignments, IIIT-B Alumni status, 5+ practical hands-on capstone projects & job assistance with top firms.
Learn ML courses from the world’s top universities. Earn Masters, Executive PGP, or Advanced Certificate Programs to fast-track your career.
What is a naive bayes algorithm?
Naive Bayes is a classic machine learning algorithm. Having its origins in statistics, Naive Bayes is a simple yet powerful algorithm. It is a family of classifiers based on conditional probability: the probability of a class is computed from the class prior and the probability of each individual feature given that class. Naive Bayes classifiers are often found to be extremely effective in practice, especially when the number of dimensions of the feature set is large.
What are the applications of naive bayes algorithm?
Naive Bayes is used in text classification, document classification, and document indexing. Features are not weighted in a pre-processing phase; instead, the per-feature probabilities are estimated during training and combined at recognition time. The basic assumption of the Naive Bayes algorithm is that the features are independent.
What is Gaussian Naïve Bayes algorithm?
Gaussian Naive Bayes is a probabilistic classification algorithm based on applying Bayes' theorem with a strong independence assumption. "Naive" refers to the assumption that the features of an object are conditionally independent of one another given the class. In the context of machine learning, Naive Bayes classifiers are known to be simple, scalable, and reasonably accurate. A number of features contribute to their success. Most notably, they require little tuning of the parameters of the classification model, they scale well with the size of the training data set, and, in the Gaussian variant, they can easily handle continuous features.