The goal of machine learning algorithms is to develop models that can correctly predict outcomes for new, unseen inputs. On complicated datasets, models may overfit: they perform remarkably well on the training data but fail to generalize to fresh data. Ridge regression, also known as L2 regularization, is a powerful method for preventing overfitting and improving the performance of linear regression models. This in-depth guide covers the details of Ridge Regression, including its formula, its implementation in Python using sklearn, and a comparison of Lasso and Ridge Regression.
Understanding Ridge Regression
Ridge Regression is a linear regression method that improves on the traditional least squares approach by adding a penalty term to the loss function. This penalty discourages the model from relying too heavily on any single feature, reducing the impact of multicollinearity. The strength of the penalty is controlled by a regularization parameter, usually called lambda (or alpha).
What is Multicollinearity?
Multicollinearity is a phenomenon in which one predictor variable in a multiple regression model can be linearly predicted from the others with a high degree of accuracy.
In other words, multicollinearity occurs when two or more predictor variables are strongly correlated with one another.
In a regression model, multicollinearity means that the independent variables are correlated with one another. As a result, estimates of the regression coefficients may become unreliable.
It can also inflate the standard errors of the regression coefficients and reduce the power of any t-tests.
In addition to increasing model redundancy and making predictions less effective and dependable, multicollinearity can produce misleading results and p-values.
Multicollinearity can be introduced in several ways, for example by combining multiple data sources. It may result from constraints placed on linear or population models, an over-specified model, outliers, or choices made in model design or data collection.
It may also be introduced during the data collection process if an inappropriate sampling method is used, and it can occur when the sample size is smaller than expected.
If the model is overspecified (more variables than observations), multicollinearity will appear; this can be avoided at the model design stage.
Removing outliers (extreme variable values that can produce multicollinearity) can help reverse it.
How does Ridge Regression function?
Let’s look at the mathematical formula for Ridge Regression to get a better understanding of how it works. Ridge Regression chooses the coefficients β that minimize:
||y − Xβ||² + λ||β||²
In the formula:
– y = target variable.
– X = matrix of independent variables.
– β = coefficients of the independent variables.
– λ = (lambda) regularization parameter.
Ridge regression performs L2 regularization: the penalty added to the loss function is proportional to the square of the magnitude of the coefficients. The minimization objective is as follows:
You can define the ridge coefficients using a response vector y ∈ ℝⁿ and a predictor matrix X ∈ ℝ^(n×p) as follows:
β̂(ridge) = argmin over β of ||y − Xβ||² + λ||β||²
- λ is the deciding factor that determines the severity of the penalty term.
- When λ = 0, the goal is comparable to basic linear regression. You will receive the same coefficients as with simple linear regression.
- When λ = ∞, the coefficients obtained are zero, because the penalty places infinite weight on the square of the coefficients, so any nonzero coefficient makes the objective infinite.
- When 0 < λ < ∞, the magnitude of λ determines the weightage that is assigned to the various aspects of the objective.
- The minimization objective is LS Obj + λ × (the sum of the squares of the coefficients).
In this case, LS Obj stands for Least Square Objective. This is the linear regression objective without regularization.
Ridge regression tends to introduce some bias as the coefficients are shrunk towards zero. However, it can also significantly reduce variance, giving you a superior mean-squared error. Increasing λ increases the ridge penalty and therefore the amount of shrinkage: a large λ indicates a higher degree of shrinkage, and different values of λ yield different coefficient estimates.
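The shrinkage effect of λ described above can be seen empirically. The following is a minimal sketch using scikit-learn (where λ is exposed as the alpha parameter) on synthetic data; the norm of the coefficient vector shrinks as alpha grows:

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
y = X @ np.array([3.0, 1.5, -2.0]) + rng.normal(scale=0.5, size=100)

coef_norms = {}
for alpha in [0.1, 1.0, 100.0, 10000.0]:
    model = Ridge(alpha=alpha).fit(X, y)
    coef_norms[alpha] = np.linalg.norm(model.coef_)
    print(f"alpha={alpha:>8}: coefficients {np.round(model.coef_, 3)}")
```

With small alpha the coefficients stay close to the true values (3, 1.5, -2); with very large alpha they are pushed towards zero.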
By the end of this article, you will have a thorough understanding of the Ridge Regression formula and its importance in machine learning; you can study it further via the Executive PG Program in Machine Learning & AI from IIITB.
Where is Ridge Regression used?
Ridge Regression is used when the number of predictor variables in a dataset exceeds the number of observations, or when the dataset exhibits multicollinearity. It is most often applied to stabilize the coefficient estimates of multiple regression models whose data suffer from multicollinearity.
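To illustrate the first situation, here is a small sketch with more predictors than observations. In that regime XᵀX is singular, so plain least squares has no unique solution, but the ridge system is always invertible:

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(1)
X = rng.normal(size=(20, 50))   # 50 predictors, only 20 observations
y = rng.normal(size=20)

# X^T X is at most rank 20, so it cannot be inverted directly,
# but adding lambda * I makes the system solvable.
rank = np.linalg.matrix_rank(X.T @ X)
model = Ridge(alpha=1.0).fit(X, y)
print("rank of X^T X:", rank)
print("number of fitted coefficients:", model.coef_.shape[0])
```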
Ridge Regression vs Lasso Regression
Ridge Regression (L2 regularization) and Lasso Regression (L1 regularization) are two popular regularization methods for avoiding overfitting. Although both add a penalty term to the loss function, they differ in the kind of penalty imposed.
Comparison of Penalty Terms:
Ridge Regression adds the sum of the squares of the coefficients to the loss function, whereas Lasso Regression adds the sum of their absolute values. Because the penalties differ, the two methods behave differently.
Ridge Regression typically keeps all relevant features in the model, with smaller but non-zero coefficients. It works well when most features affect the target variable and feature selection is not the main priority.
Lasso Regression tends to shrink some coefficients exactly to zero, effectively performing feature selection. It is helpful when a dataset contains many redundant or irrelevant features.
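The difference can be seen by fitting both models to synthetic data in which only two of ten features matter. This sketch counts exact zeros in the fitted coefficients; the alpha values are illustrative choices, not tuned:

```python
import numpy as np
from sklearn.linear_model import Ridge, Lasso

rng = np.random.default_rng(2)
X = rng.normal(size=(200, 10))
# Only the first two features actually matter; the other eight are noise.
y = 3.0 * X[:, 0] - 2.0 * X[:, 1] + rng.normal(scale=0.5, size=200)

ridge = Ridge(alpha=1.0).fit(X, y)
lasso = Lasso(alpha=0.5).fit(X, y)

print("ridge coefficients set exactly to zero:", np.sum(ridge.coef_ == 0))
print("lasso coefficients set exactly to zero:", np.sum(lasso.coef_ == 0))
```

Lasso zeroes out the irrelevant features, while ridge merely shrinks them.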
How does Ridge Regression deal with Multicollinearity?
Although least squares estimates are unbiased even in the presence of multicollinearity, their large variances mean that they may be far from the true values. By introducing some bias into the regression estimates, ridge regression lowers their standard errors, yielding estimates that are more reliable overall. Learn the complexities of this via the Executive PG Program in Data Science & Machine Learning from the University of Maryland.
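The variance reduction can be demonstrated with a small simulation: two nearly identical predictors are generated repeatedly, and the spread of the fitted coefficients is compared between plain least squares and ridge. This is a sketch on synthetic data with arbitrary noise levels:

```python
import numpy as np
from sklearn.linear_model import LinearRegression, Ridge

rng = np.random.default_rng(3)
ols_coefs, ridge_coefs = [], []
for _ in range(200):
    x1 = rng.normal(size=100)
    x2 = x1 + rng.normal(scale=0.01, size=100)   # x2 is almost identical to x1
    X = np.column_stack([x1, x2])
    y = x1 + x2 + rng.normal(scale=0.5, size=100)
    ols_coefs.append(LinearRegression().fit(X, y).coef_)
    ridge_coefs.append(Ridge(alpha=1.0).fit(X, y).coef_)

print("OLS coefficient std across repeats:  ", np.std(ols_coefs, axis=0))
print("Ridge coefficient std across repeats:", np.std(ridge_coefs, axis=0))
```

The least squares coefficients swing wildly from sample to sample, while the ridge coefficients stay stable.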
Implementing Ridge Regression in Python
In Python, several ridge regression implementations are available, including the Ridge and RidgeCV classes from the scikit-learn package. The code below implements ridge regression from scratch, with an interface similar to sklearn.linear_model.Ridge.
In contrast to ordinary linear regression, which minimizes only the sum of squared errors, ridge regression adds a penalty term proportional to the sum of squared coefficients. The weight of this penalty term is controlled by the alpha value.
In scikit-learn, ridge regression is provided by the Ridge class, which takes an alpha argument that determines how much regularization is applied.
The typical workflow is to split a dataset into test and training sets and use the training set to fit a ridge regression model. (Older versions of this example predicted Boston property prices, but the Boston housing dataset has since been removed from scikit-learn.)
When a Ridge instance is created with alpha set to 0.1, the penalty term is given a weight of 0.1. A higher alpha value weights the penalty term more heavily, whereas a lower alpha value weights it less heavily.
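A minimal scikit-learn sketch of that workflow follows; since the Boston dataset is no longer available, it substitutes the bundled diabetes dataset (an assumption for illustration, not the original data):

```python
from sklearn.datasets import load_diabetes
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

# Load a bundled regression dataset and split it into train/test sets
X, y = load_diabetes(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

model = Ridge(alpha=0.1)   # alpha weights the L2 penalty term
model.fit(X_train, y_train)
score = r2_score(y_test, model.predict(X_test))
print(f"test R^2: {score:.3f}")
```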
import numpy as np

class RidgeRegression:
    def __init__(self, alpha=1.0):
        self.alpha = alpha  # Regularization strength (lambda)

    def fit(self, X, y):
        # Add a column of ones to the feature matrix X for the intercept term
        X = np.c_[np.ones(X.shape[0]), X]
        n_features = X.shape[1]
        # Build the ridge matrix (X^T X + alpha * I), where I is the identity matrix.
        # The first diagonal entry is zeroed so the intercept is not penalized.
        penalty = self.alpha * np.eye(n_features)
        penalty[0, 0] = 0.0
        ridge_matrix = np.dot(X.T, X) + penalty
        # Solve for the ridge coefficients in closed form
        ridge_coefficients = np.linalg.solve(ridge_matrix, X.T.dot(y))
        # Extract the intercept and coefficients
        self.intercept_ = ridge_coefficients[0]
        self.coef_ = ridge_coefficients[1:]
        return self

    def predict(self, X):
        # Add a column of ones to the feature matrix X for the intercept term
        X = np.c_[np.ones(X.shape[0]), X]
        return X.dot(np.concatenate(([self.intercept_], self.coef_)))

# Example usage:
if __name__ == "__main__":
    # Generate some example data
    X = 2 * np.random.rand(100, 3)
    y = 4 + np.dot(X, np.array([3, 1.5, -2])) + np.random.randn(100)

    # Create and fit the Ridge Regression model
    alpha = 0.1  # Regularization strength
    ridge_model = RidgeRegression(alpha=alpha)
    ridge_model.fit(X, y)

    # Make predictions on new data
    new_data = np.array([[1, 2, 3], [4, 5, 6]])
    predictions = ridge_model.predict(new_data)
    print(predictions)
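A useful way to validate a hand-rolled implementation like the one above is to compare its closed-form solution against scikit-learn's Ridge. This self-contained sketch does exactly that, leaving the intercept unpenalized (which is also scikit-learn's behavior):

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(4)
X = 2 * rng.random((100, 3))
y = 4 + X @ np.array([3.0, 1.5, -2.0]) + rng.normal(size=100)

alpha = 0.1
# Closed form with an unpenalized intercept column
Xb = np.c_[np.ones(X.shape[0]), X]
penalty = alpha * np.eye(Xb.shape[1])
penalty[0, 0] = 0.0
beta = np.linalg.solve(Xb.T @ Xb + penalty, Xb.T @ y)

# scikit-learn solves the same objective internally
sk = Ridge(alpha=alpha).fit(X, y)
print("closed form:", np.round(beta[1:], 4))
print("sklearn:    ", np.round(sk.coef_, 4))
```

Both routes minimize the same objective, so the coefficients should agree to numerical precision.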
Advantages of Ridge Regression
Following are the advantages of Ridge Regression:
- Effective at dealing with multicollinearity: Ridge Regression lessens the effect of correlated features, making it appropriate for datasets with high collinearity.
- Stability: The penalty term damps the influence of extreme observations on the coefficient estimates, making predictions more stable.
- Consistent outcomes: Ridge Regression offers more reliable findings than simple least squares, particularly when working with noisy data.
Disadvantages of Ridge Regression
Following are the disadvantages of Ridge Regression:
- Limited feature selection: Ridge Regression uses every feature in the model, which might not be ideal in situations where feature selection is essential.
- Sensitivity to regularization parameter: The effectiveness of Ridge Regression depends on the regularization parameter that is used. To get the best results, this parameter needs to be adjusted precisely.
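Because of this sensitivity, the regularization parameter is usually chosen by cross-validation rather than by hand. A brief sketch using scikit-learn's RidgeCV, which evaluates a grid of candidate alpha values (the grid here is an arbitrary illustrative choice):

```python
import numpy as np
from sklearn.datasets import load_diabetes
from sklearn.linear_model import RidgeCV

X, y = load_diabetes(return_X_y=True)

# Evaluate 13 alpha values, logarithmically spaced from 1e-3 to 1e3,
# using 5-fold cross-validation
alphas = np.logspace(-3, 3, 13)
model = RidgeCV(alphas=alphas, cv=5).fit(X, y)
print("best alpha:", model.alpha_)
```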
In this article, we looked into Ridge Regression, a powerful machine learning method that improves the performance of linear regression models by preventing overfitting. By including a penalty term, Ridge Regression produces reliable models that generalize well to fresh, untested data. We also contrasted Ridge Regression with Lasso Regression and outlined each method’s distinct use cases. With this knowledge, you can use Ridge Regression to enhance your machine learning models and produce more precise predictions. Understand the intricacies of regression models via the Advanced Certificate Program in Generative AI.
What sets Ridge Regression and Lasso Regression apart from one another?
The main distinction lies in the kind of penalty they impose. Ridge Regression uses the sum of the squares of the coefficients, while Lasso Regression uses the sum of their absolute values.
How is multicollinearity handled by the Ridge Regression?
Ridge Regression reduces the impact of multicollinearity by decreasing the coefficients of correlated variables towards zero.
Can I use Ridge Regression to perform variable selection?
Ridge Regression does not exclude any variables from the model. Lasso Regression can be a better option if you need feature selection.