Table of contents
1. Introduction
2. What is a Regression?
2.1. Example
3. Need For Learning Regression
4. Characteristics of Regression
5. Types Of Regression
6. Linear Regression
6.1. Why Is It Called Regression?
6.2. What Is the Purpose of Regression?
6.3. Simple linear regression vs Multiple linear regression
6.4. How to perform linear regression in Python
6.5. Assumptions of linear regression
7. Logistic Regression
7.1. How it differs from linear regression
7.2. How to perform logistic regression in Python
7.3. Assumptions of logistic regression
8. Polynomial Regression
8.1. How to perform polynomial regression in Python
8.2. Assumptions of polynomial regression
9. Other types of regression
10. Model Evaluation And Selection
10.1. How to evaluate the performance of a regression model
10.2. How to select the best model for a given dataset
11. Advantages Of Regression
12. Limitations Of Regression
13. Applications of regression in real-world problems
14. Frequently Asked Questions
14.1. What is regression with example?
14.2. Why is it called a regression?
14.3. Why is regression important?
15. Conclusion
Last Updated: Mar 27, 2024

What is Regression

Author: aayush

Introduction

Regression analysis is a statistical method with the help of which we model the relationship between an unknown variable and known variables. Many types of regression methods exist.


In this article, you are going to learn what Regression is and about the different types of Regression used for solving different kinds of problems. You will learn about the advantages and disadvantages of Regression and the kinds of problems it can solve. The blog will also cover the applications of Regression and the limitations associated with it.

What is a Regression?

Regression is a very popular method used in statistical analysis. It is used to model and find the relationship between a variable whose value we want to predict and variables whose values are known to us. The unknown variable, which depends on the other variables, is called the dependent variable (target), and the variables whose values are known are called independent variables (predictors).

Regression is a supervised learning method that allows us to understand the relationship between variables and make predictions about a continuous output based on one or more predictor variables. It is commonly used for forecasting, time series modeling, and determining cause-and-effect relationships between variables. By supervised learning method, we mean that the actual value of the target variable is known to us for a given set of predictor values. Regression also helps us understand the change in the unknown variable when the known variable's value changes.

Example

Examples of Regression are very common in our day-to-day life. Some examples of Regression can be:

  • Predicting the probability of rain with the help of temperature and other factors such as humidity and wind speed.
     
  • Determining trends in the market.
     
  • Predicting accidents due to rash driving.

Need For Learning Regression

Regression analysis is an important statistical method used in many fields, especially in machine learning and data science, for predicting continuous variables related to real-world scenarios such as weather conditions, stock price trends, and marketing insights. 


Some of the reasons for learning regression analysis are:

  • To Estimate Relationship: Regression analysis allows us to estimate the relationship between a target and independent variables. It helps us understand how changes in independent variables impact the target variable.
     
  • Trend Identification: Regression analysis enables us to identify trends in the data. By analyzing historical data, we can determine patterns and trends that can aid in making predictions about future outcomes.
     
  • Real/Continuous Value Prediction: Regression analysis is particularly useful when predicting real or continuous values. It can provide accurate estimations for variables that lie on a continuous scale, such as temperature, sales revenue, or stock prices.
     
  • Factor Importance Determination: We can find the relative importance of different factors through regression analysis. By examining the coefficients or weights assigned to each independent variable, we can identify the most and least influential factors and understand how each affects the target variable.
     

We can say that regression analysis is an essential tool for predicting continuous variables, uncovering trends, and understanding the relationships between variables. It has multiple applications in machine learning and data science, contributing to more accurate predictions and valuable insights across various industries and domains.

Characteristics of Regression

Regression has multiple characteristics associated with it. Some of them are- 

  • Dependent and independent variables: In a regression model, there is a dependent variable, whose value depends on the other variables and which is the variable we are predicting. The independent variables are the variables whose values are known and with whose help we predict the dependent variable.
     
  • Relationship type: Regression analysis is used to find the type of relationship (linear or nonlinear) between the dependent and independent variables. 
     
  • Coefficients: Regression analysis generates coefficient estimates representing the strength and direction of the relationship between the variables. Each coefficient represents the average change in the dependent variable for a one-unit change in the corresponding independent variable, assuming all other variables remain constant. For example, a coefficient of 2 means the dependent variable increases by 2 units, on average, for each one-unit increase in that independent variable.
     
  • Accuracy: The regression models are evaluated using different methods such as R squared to check the fitness or accuracy of the given regression model. 
     
  • Assumptions: Regression analysis works on several assumptions, including linearity (the relationship between variables is linear) and independence of errors (the errors are not correlated).
     
  • Interpretation of relationship: Regression analysis allows us to interpret how the changes in the independent variables affect the dependent variable. By examining the coefficient estimates, we can determine the direction and magnitude of the relationship. Positive coefficients indicate a positive effect, while negative coefficients indicate a negative effect.
     
  • Predictions: Regression models can be used for prediction purposes. By using specific values for the independent variables, we can estimate the value of the dependent variable. The accuracy of predictions depends on the model's quality and the data's reliability.

Types Of Regression

Several types of regression exist. Each is used to solve different kinds of problems and has its own advantages and disadvantages. Some of them are-

  • Linear Regression
     
  • Logistic Regression
     
  • Polynomial Regression
     
  • Ridge Regression
     
  • Lasso Regression

Linear Regression

Many types of Regression exist in the real world. Linear Regression is one of the most basic regression techniques.

Linear Regression is a supervised machine learning algorithm that establishes a linear relationship between a dependent variable and one or more independent features. It comes in two forms: Univariate Linear Regression, which involves a single independent feature, and Multivariate Linear Regression, which incorporates multiple features. The algorithm aims to find the optimal linear equation to predict the dependent variable based on the independent features. The equation represents a straight line, where the slope indicates the magnitude of change in the dependent variable for a unit change in the independent feature(s). Linear Regression is widely used for prediction and analysis tasks in various domains.
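
To make this concrete, the simple linear regression model can be written as follows (a standard textbook formulation, not specific to this article's data):

y = \beta_0 + \beta_1 x + \epsilon

where \beta_0 is the intercept, \beta_1 is the slope, and \epsilon is the error term. Multiple linear regression extends this to y = \beta_0 + \beta_1 x_1 + \beta_2 x_2 + \dots + \beta_p x_p + \epsilon.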

There are two main types of linear regression-

  • Simple linear Regression, also known as univariate linear Regression, involves predicting a dependent variable based on a single independent variable. It assumes a linear relationship between the variables and seeks the best-fitting straight line that minimizes the differences between observed and predicted values. Determining the slope and intercept of this line establishes the relationship between the variables and enables predictions based on the independent variable.
     
  • Multiple linear Regression, also known as multivariate linear Regression, involves predicting a dependent variable based on multiple independent variables. It likewise assumes a linear relationship and seeks the best-fitting line that minimizes the differences between observed and predicted values, but it considers the individual and combined effects of multiple factors simultaneously, which allows for better predictions.

Why Is It Called Regression?

It is called Regression because the technique finds the best-fitting line that describes the relationship between variables, i.e., it "regresses" the dependent variable on the independent ones. Historically, the term comes from studies of the heights of parents and children, where extreme values were observed to move, or "regress," back toward the average.

What Is the Purpose of Regression?

Regression is done for the following reasons-
 

  • Prediction: It is used to predict the values of a dependent variable based on the values of an independent variable.
     
  • Variable Importance: It is used to find the importance of variables and how changes in one variable's value affect another variable.
     
  • Relationship identification: It is used to find the relationship between two variables and how they are correlated, whether positively or negatively.

Simple linear regression vs Multiple linear regression

There are many differences between simple linear regression and multiple linear regression-
 

Basis | Simple linear regression | Multiple linear regression
Number of variables | There is a single independent variable. | There are multiple independent variables.
Complexity | It is a simpler model since it involves fewer variables. | It is a more complex model since more variables are considered.
Relationship | Models a linear relationship between one independent variable and the dependent variable. | Considers the individual and combined effects of multiple independent variables.
Predictions | Predictions are based on a single independent variable. | Predictions are based on multiple independent variables.

How to perform linear regression in Python

Code-


import numpy as np
from sklearn.linear_model import LinearRegression

# Sample input data
X = np.array([1, 2, 3, 4, 5]).reshape(-1, 1) # Input features
y = np.array([2, 4, 5, 4, 5]) # Target variable

# Create a linear regression model
model = LinearRegression()

# Fit the model to the data
model.fit(X, y)

# Predict the target variable for new input
X_new = np.array([6]).reshape(-1, 1)
y_pred = model.predict(X_new)

# Print the predicted value
print("Predicted value:", y_pred)

Output-

Predicted value: [5.8]

(The fitted line is ŷ = 2.2 + 0.6x, so the prediction at X = 6 is 5.8.)

Explanation-

In the above code, we perform a simple linear regression using the inbuilt libraries in Python. We first import the required libraries Numpy and Scikit Learn into our Notebook.

X is the independent variable, and y is the dependent variable. We use the given set of values to determine the relationship between X and y. We first reshape the values and then create an instance of the 'LinearRegression' class as our model. We fit the model on the given training set using the fit() method.

X_new is the known value of X for which we predict the values of variable Y using our model.

You can also plot the best-fitted line using the Pyplot library of Python with the following code-

import matplotlib.pyplot as plt

# Plot the data points and the regression line
plt.scatter(X, y, color='blue', label='Data Points')
plt.plot(X, model.predict(X), color='red', label='Regression Line')
plt.scatter(X_new, y_pred, color='green', label='Predicted Value')
plt.xlabel('X')
plt.ylabel('y')
plt.title('Simple Linear Regression')
plt.legend()
plt.show()


Output-

A scatter plot showing the data points (blue), the fitted regression line (red), and the predicted value at X = 6 (green).

Assumptions of linear regression

The following assumptions are made while finding the relationship between two variables using linear regression-
 

  • Linearity: The dependent and independent variables should be linearly related. To check this, we can plot a scatter plot.
     
  • Normality: The dependent and independent variables should be normally distributed, which means that the majority of the values lie around the mean, although there can be some outlier exceptions.
     
  • Independence/Multicollinearity: There should be no collinearity between the independent variables. To check this, we can plot a heatmap/correlation matrix, as shown in the sketch after this list.
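
Here is a minimal sketch of the two checks mentioned above, using a small synthetic dataset (the variables x1, x2, and y are hypothetical, made up only for illustration):

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

# Synthetic data: x2 is deliberately correlated with x1
rng = np.random.default_rng(0)
x1 = rng.normal(size=100)
x2 = 0.5 * x1 + rng.normal(scale=0.5, size=100)
y = 2 + 3 * x1 + rng.normal(size=100)
df = pd.DataFrame({"x1": x1, "x2": x2, "y": y})

# Linearity check: scatter plot of a predictor against the target
df.plot.scatter(x="x1", y="y", title="Linearity check: x1 vs y")
plt.show()

# Multicollinearity check: correlation matrix / heatmap of the predictors
corr = df[["x1", "x2"]].corr()
print(corr)
plt.matshow(corr, cmap="coolwarm")
plt.colorbar()
plt.title("Correlation heatmap of predictors")
plt.show()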

Logistic Regression

Logistic regression is a regression analysis technique mainly used for classification, where the goal is to predict the probability that an instance belongs to a given class. Although it is used for classification, it is still called regression because it takes the output of a linear regression function as input and passes it through a sigmoid function to estimate the probability of the given class. The difference between linear and logistic regression is that linear regression outputs a continuous value that can be anything, while logistic regression predicts the probability that an instance belongs to a given class.
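
Concretely, the sigmoid function applied to the linear output can be written as follows (a standard formulation):

\sigma(z) = \frac{1}{1 + e^{-z}}, \qquad P(y = 1 \mid x) = \sigma(w \cdot x + b)

The sigmoid squashes any real-valued input z into the range (0, 1), which is what allows the output to be interpreted as a probability.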

How it differs from linear regression

The basic difference between linear Regression and logistic Regression is that linear Regression predicts values of a dependent variable in a continuous range. In contrast, logistic Regression finds a probabilistic value between 0 and 1.

Basis | Linear regression | Logistic regression
Objective | Linear Regression is a supervised regression model. | Logistic Regression is a supervised classification model.
Output | The outcome depends on the independent variables and lies in a continuous range. | The output is a probability between 0 and 1, which is thresholded into a class label (e.g., 0 or 1).
Threshold values | No threshold value is required. | A threshold value is required to convert probabilities into class labels.
Assumptions | Assumes the distribution of the dependent variable's data (more precisely, of the errors) is normal or Gaussian. | Assumes the distribution of the data is Binomial.

How to perform logistic regression in Python

Code-


import numpy as np
from sklearn.linear_model import LogisticRegression

# Sample input data
X = np.array([[2, 3], [1, 2], [4, 5], [3, 4]]) # Input features
y = np.array([0, 0, 1, 1]) # Target variable

# Create a logistic regression model
model = LogisticRegression()

# Fit the model to the data
model.fit(X, y)

# Predict the target variable for new input
X_new = np.array([[5, 6]]) # New input features
y_pred = model.predict(X_new)

# Print the predicted class label
print("Predicted class label:", y_pred)

Output-

Predicted class label: [1]

(The new point [5, 6] lies on the same side of the decision boundary as the class-1 training points.)

Explanation-

In the above code, we perform a simple logistic regression using the inbuilt libraries in Python. First, import the required libraries, Numpy and Scikit Learn, in your notebook.

X is the independent feature used for training the model, and y is the class label. 'model' is an instance of the LogisticRegression class, which we use as our model. We train the model on the given training set using the fit() method.

X_new is the known value of X for which we use our model to predict the value of y, i.e., to which class it belongs.

Assumptions of logistic regression

The following assumptions are made in logistic regression-
 

  • No Outliers: It is assumed that there are no extreme outliers in the dataset.
     
  • Independent observations: Each observation is independent of the others; the observations should not be related to or derived from one another.
     
  • Distribution of data: The distribution of the data is assumed to be Binomial.

Polynomial Regression

Polynomial Regression is the third type of regression analysis method; it is similar to multivariate linear Regression but has significant differences.

Polynomial Regression is a regression analysis method that finds the relationship between the dependent and independent variables using polynomial functions. Linear Regression assumes a linear relationship, but in polynomial Regression, more complex and nonlinear relationships between the variables can exist. It involves including higher-order terms, such as quadratic or cubic terms, in the regression equation to capture curved or nonlinear patterns in the data. This helps the model fit the data properly and make more accurate predictions, especially when the relationship between the variables is not linear.
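
In equation form, a polynomial regression model of degree d looks like this (standard textbook notation):

y = \beta_0 + \beta_1 x + \beta_2 x^2 + \dots + \beta_d x^d + \epsilon

Note that the model is still linear in the coefficients \beta_j, which is why it can be fitted with ordinary linear regression after expanding x into polynomial features, as the code below does.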

How to perform polynomial regression in Python

Code-


import numpy as np
import matplotlib.pyplot as plt
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import PolynomialFeatures

# Sample input data
X = np.array([1, 2, 3, 4, 5]).reshape(-1, 1) # Input features
y = np.array([2, 4, 5, 4, 5]) # Target variable

# Create polynomial features
degree = 2 # Degree of the polynomial
poly_features = PolynomialFeatures(degree=degree)
X_poly = poly_features.fit_transform(X)

# Create a linear regression model
model = LinearRegression()

# Fit the model to the polynomial features
model.fit(X_poly, y)

# Predict the target variable for new input
X_new = np.array([6]).reshape(-1, 1)
X_new_poly = poly_features.transform(X_new)
y_pred = model.predict(X_new_poly)
print(y_pred)


Output-

[3.8]

(The degree-2 polynomial fit evaluated at X = 6 gives 3.8.)

Explanation-

Input data X and target variable y are defined.

The degree of the polynomial regression curve we want to fit is set to 2 in the code.

Polynomial features are created using PolynomialFeatures, and input X is transformed into X_poly with polynomial features up to the specified degree.

After that, an instance of the LinearRegression class is created as the model, and it is fitted to the polynomial features X_poly and the target variable y.

To predict the target variable for a new input, the input is first transformed into polynomial features X_new_poly using the transform method of the PolynomialFeatures object, and then model.predict is called to obtain the predicted value y_pred.

Assumptions of polynomial regression


The following assumptions are made in polynomial regression-

  • Linearity: In polynomial Regression, the relationship between the dependent variable and the polynomial terms is assumed to be linear. This means that the coefficients of the polynomial terms are constant and not influenced by the values of the independent variables.
     
  • Independence: In polynomial Regression, the observations are assumed to be independent.  
     
  • Homoscedasticity: The variance of the errors should be constant across all levels of the independent variables. It means that the spread of residuals should not systematically change as the independent variables vary; a residuals-vs-fitted plot, sketched after this list, is a common check.
     
  • Normality: The errors in polynomial Regression should follow a normal distribution. This assumption is important for statistical inference and hypothesis testing.
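
As a rough illustration of the homoscedasticity check mentioned above, here is a minimal sketch that plots residuals against fitted values, reusing the same illustrative data and degree as the polynomial example:

import numpy as np
import matplotlib.pyplot as plt
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import PolynomialFeatures

# Same illustrative data and degree as the polynomial example above
X = np.array([1, 2, 3, 4, 5]).reshape(-1, 1)
y = np.array([2, 4, 5, 4, 5])
X_poly = PolynomialFeatures(degree=2).fit_transform(X)
model = LinearRegression().fit(X_poly, y)

# Residuals vs fitted values: a roughly constant spread around zero
# supports the homoscedasticity assumption
fitted = model.predict(X_poly)
residuals = y - fitted
plt.scatter(fitted, residuals)
plt.axhline(0, color="red", linestyle="--")
plt.xlabel("Fitted values")
plt.ylabel("Residuals")
plt.title("Residuals vs fitted values")
plt.show()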

Other types of regression

There can be a situation in which we need to minimize the squared error of the model on the training data and reduce the complexity of the model we have created using a simple linear regression method.

For this, we apply regularization to decrease the complexity of our model.

Two famous examples of regularization procedures for linear Regression are:
 

  • Ridge Regression: In this type of regularization method, the Ordinary Least Squares objective is modified to also minimize the squared sum of the coefficients (called L2 regularization). Mathematically, it can be represented as:

\min_{w} \; \mathrm{RSS} + \lambda \sum_{j=1}^{p} w_j^2, \qquad \mathrm{RSS} = \sum_{i=1}^{n} \Big( y_i - \sum_{j=1}^{p} w_j x_{ij} \Big)^2

Where

RSS is the residual sum of squares,

y_i is the value of the dependent variable for the i-th observation,

x_ij is the value of the j-th independent variable for the i-th observation,

w_j is the coefficient (weight) of the j-th independent variable, and

λ is the tuning parameter that controls the regularization strength.
 

  • Lasso Regression: In this type of regularization method, the Ordinary Least Squares objective is modified to also minimize the absolute sum of the coefficients (called L1 regularization). Mathematically, it can be represented as:

\min_{w} \; \mathrm{RSS} + \lambda \sum_{j=1}^{p} |w_j|

Where

RSS is the residual sum of squares,

y is the dependent variable,

x is the independent variable, and

λ is the tuning parameter that controls the regularization strength.

 

  • Elastic Net Regression: In this type of regularization method, the Ordinary Least Squares objective is modified to linearly combine the L1 regularization penalty and the L2 regularization penalty; a short sketch of all three methods follows.
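
Here is a minimal sketch of all three regularized models using scikit-learn (the dataset and the alpha/l1_ratio values are arbitrary illustrative choices, not recommendations):

import numpy as np
from sklearn.linear_model import Ridge, Lasso, ElasticNet

# Small illustrative dataset (values chosen arbitrarily)
X = np.array([[1, 2], [2, 1], [3, 4], [4, 3], [5, 6]])
y = np.array([3, 3, 7, 7, 11])

# alpha controls the regularization strength;
# l1_ratio mixes the L1 and L2 penalties for Elastic Net
models = [
    ("Ridge (L2)", Ridge(alpha=1.0)),
    ("Lasso (L1)", Lasso(alpha=0.1)),
    ("Elastic Net (L1 + L2)", ElasticNet(alpha=0.1, l1_ratio=0.5)),
]
for name, model in models:
    model.fit(X, y)
    print(name, "coefficients:", model.coef_)

In practice, the strength alpha is usually tuned with cross-validation (scikit-learn provides RidgeCV and LassoCV for this).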

Model Evaluation And Selection

Now we know about the different regression techniques that can be used to create a model for predicting results. The next important task is to choose ways to evaluate the performance of a given model so that we are able to select the best model for a given dataset. 

How to evaluate the performance of a regression model

There are many metrics and techniques available that can be used to evaluate the performance of a regression model (a short sketch computing them follows this list)-

  • Mean Squared Error (MSE) and Root Mean Squared Error (RMSE): MSE and RMSE measure the average squared difference between the predicted and observed values. Lower values of MSE and RMSE  indicate better model performance. RMSE is particularly useful as it is on the same scale as the dependent variable, making it easier to interpret.
     
  • Mean Absolute Error (MAE): MAE measures the average absolute difference between the predicted and observed values. It is also on the same scale as the dependent variable and provides a robust measure of the model's accuracy.
     
  • R-squared and adjusted R-squared: R-squared measures the proportion of variance in the dependent variable explained by the independent variables. It ranges from 0 to 1, where higher values indicate a better fit. However, R-squared can be biased by the number of predictors, leading to overfitting. Adjusted R-squared accounts for the number of predictors and is a more reliable measure for model comparison.
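
As a quick illustration, the sketch below computes these metrics with scikit-learn on a pair of hypothetical observed/predicted arrays (the numbers are made up for demonstration):

import numpy as np
from sklearn.metrics import mean_squared_error, mean_absolute_error, r2_score

# Hypothetical observed and predicted values
y_true = np.array([3.0, 5.0, 2.5, 7.0])
y_pred = np.array([2.8, 5.3, 2.9, 6.8])

mse = mean_squared_error(y_true, y_pred)
rmse = np.sqrt(mse)
mae = mean_absolute_error(y_true, y_pred)
r2 = r2_score(y_true, y_pred)

print("MSE:", mse)
print("RMSE:", rmse)
print("MAE:", mae)
print("R-squared:", r2)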

How to select the best model for a given dataset 

Various factors are considered before selecting a model for a given dataset. Based on those factors the choice of the best model for a given dataset is made. Some of these factors are-
 

  • Data Exploration: The nature of the dataset, such as the type of variables (continuous, categorical), the distribution of data, the presence of outliers, and missing values helps in determining the best model for a given dataset.
     
  • Objective and Problem Type: The specific goal of analysis, whether it is regression, classification, clustering, or another task helps in determining the best model for a given dataset.
     
  • Performance Metrics: The choice of a model is based on evaluation metrics such as accuracy, precision, recall, F1 score, or mean squared error for the task. They help in understanding how correctly our model is predicting results.
     
  • Data Size and Computational Efficiency: The size of the dataset should be considered when choosing the model, because some models, like deep learning models, may require a large amount of data for training.

Advantages Of Regression

There are various advantages associated with Regression analysis-
 

  • Predictive power: Models based on regression analysis can be used for prediction. You can apply the model to new data to predict the values of the dependent variable after the relationship between the variables is established. 
     
  • Identification of relationships: Regression analysis helps in identifying the relationship between the dependent variable and the independent variables. It is used to determine how changes in the independent variables cause changes in the dependent variable.
     
  • Flexibility: Regression analysis can be applied to a wide range of fields and research areas. You can use it with various types of data, like continuous, categorical, and binary variables, to solve different problems. Hence it is widely applicable.
     
  • Interpretability: Models based on regression provide coefficients that can be interpreted which allow researchers to understand the direction and magnitude of the relationship between variables. This helps in explaining the results to others and gaining insights into the underlying mechanisms.
     
  • Variable selection: Regression analysis helps us in identifying the most important independent variables that have a significant impact on the dependent variable. It enables us to understand which variables are most relevant and influential which helps in better decision-making and resource allocation.

Limitations Of Regression

The regression analysis has the following limitations associated with it-
 

  • Linearity assumption:  Regression assumes a linear relationship between the independent and dependent variables. The model may not accurately capture the underlying pattern if the relationship is nonlinear.
     
  • Independence Assumption: Regression assumes that observations are independent of each other. If there is autocorrelation or dependence among the data points, the standard errors of the coefficients may be biased, and the model may provide inaccurate predictions.
     
  • Normality Assumption: Regression analysis assumes that the residuals follow a normal distribution. Deviations from normality can affect the accuracy of hypothesis tests and confidence intervals.
     
  • Outliers: Regression analysis is sensitive to outliers, which are extreme values that can influence the estimated coefficients. Outliers can distort the relationship between variables and affect the model's accuracy.
     
  • Multicollinearity: Multicollinearity happens in a model when independent variables are highly correlated with each other. This can lead to unstable and unreliable coefficient estimates, making it difficult to interpret the individual effects of the variables.
     
  • Data limitations: Regression analysis relies on the quality and representativeness of the data. The regression results may be compromised if the data are incomplete, contain errors, or suffer from selection bias.
     
  • Extrapolation: Regression models are generally valid within the range of the observed data. Extrapolating the model beyond the observed data may lead to unreliable predictions because the underlying relationships may not hold outside the range of the data.

Applications of regression in real-world problems

Regression has multiple applications and it is used to solve many real-world problems. Some of the applications are-
 

  • Finance: You can use Regression in finance and economics to study the relationships between different variables, such as how GDP or other economic factors relate to stock prices.
     
  • Environmental Science: Regression analysis in environmental studies helps to predict environmental phenomena, assess the effectiveness of conservation measures, and understand the relationships between variables like pollution, climate change, and land use on ecological systems. 
     
  • Social Science and Psychology: Regression analysis is also used in social sciences and psychology to study human behavior, attitudes, and relationships. It helps in understanding factors that affect educational outcomes and social interactions.
     
  • Healthcare: You can also use regression in the medical field to study the relationship between risk factors, lifestyle choices, and health outcomes. It aids in predicting disease progression and assessing the effectiveness of treatments.
     
  • Operations and Supply Chain Management: Regression is used in operations and supply chain management to study factors affecting the production process, inventory management, and the supply chain.

Frequently Asked Questions

What is regression with example?

Regression is a statistical method to model relationships between variables. For instance, it can predict a person's salary based on their years of experience.

Why is it called a regression?

The term "regression" comes from its historical origins when studying the heights of parents and children. It signifies that extreme values tend to move back toward the average in subsequent observations.

Why is regression important?

Regression is important for prediction, understanding relationships, modeling complex systems, risk assessment, and scientific research. It helps make informed decisions and solve real-world problems in various fields.

Conclusion

In this article, you got the answer to the question of what regression is and learned about its meaning and all the related concepts. You also got to know about the different advantages and disadvantages of regression, and the limitations of regression were also discussed. It is very useful for statistical analysis and has multiple applications.

Recommended Reading-

Also, check out some Guided Paths on topics like Data Structures, Data Structures and Algorithms, and Competitive Programming, as well as some Contests and very helpful Interview Experiences and Interview Bundles, only on Coding Ninjas Studio, brought to you by Industry Experts.
