Table of contents
1. Introduction
2. False Positives and False Negatives in real life
2.1. Type I Error (False Positive)
2.2. Type II Error (False Negative)
3. Formal Understanding
3.1. False Positive Rate
3.2. False Negative Rate
4. FAQs
5. Key Takeaways
Last Updated: Mar 27, 2024

False Positives and Negatives


Introduction

In machine learning, we often need to quantify and categorize errors, and different types of errors can occur when we make predictions.

As the names suggest, both are false, meaning they are errors; "positive" and "negative" describe the nature of the prediction.

Based on their names, here is what these terms mean:

False Positive: the prediction is positive, but the actual value is negative.

False Negative: the prediction is negative, but the actual value is positive.

Both are errors because the predicted value differs from the true value.

Formally, they are also called Type I and Type II errors.

In more specific terms, we commit a Type I error when we reject the null hypothesis when it is, in fact, true. A Type II error occurs when we fail to reject the null hypothesis when it is, in fact, false.
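The four possible outcomes above can be sketched in a few lines of Python. This is a minimal illustration (the function name and labels are made up for this example), mapping a predicted label and an actual label to the corresponding confusion-matrix cell:

```python
def classify_outcome(predicted: bool, actual: bool) -> str:
    """Return the confusion-matrix cell for one binary prediction."""
    if predicted and actual:
        return "true positive"
    if predicted and not actual:
        return "false positive"   # Type I error
    if not predicted and actual:
        return "false negative"   # Type II error
    return "true negative"

print(classify_outcome(True, False))   # false positive (Type I)
print(classify_outcome(False, True))   # false negative (Type II)
```

Only the two mismatched cases are errors; the other two cells are correct predictions.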

False Positives and False Negatives in real life

Let's say your mom shows you a lentil (dal) and asks you whether it is Urad Dal.

At this point, we don't know what the true value is.

Type I Error (False Positive)

If you say, "Yes, it's Urad Dal," when in reality it is not, that is a false positive.

Type II Error (False Negative)

If you say it is not Urad Dal when in reality it is, that is a false negative.

Now we also understand what these terms mean in real life.

Formal Understanding 

Now that we understand what these errors mean, let's see their application in statistics and machine learning.

These errors generally occur in binary classification, where the answer is one of two classes. In many experiments, one error is costlier to make than the other. For example, sending an innocent person to jail is costlier than failing to send a criminal to prison.

Telling a person he is HIV positive when he is not (a false positive) will cause him a lot of trauma. Still, it is not as deadly as telling a person he is HIV negative when, in fact, he is positive (a false negative): he may spread the infection to more people, and he will miss treatment too.

So we usually decide which error is costlier to us and try to minimize it. That is where this understanding of false positives and negatives comes in handy.

False Positive Rate

The false-positive rate is the proportion of all actual negatives that are predicted positive, i.e., the conditional probability of a positive test result given the absence of the event.

False Negative Rate

Similarly, the false-negative rate is the proportion of all actual positives that are predicted negative.

These are evaluation metrics for a classification model. They tell us how well our model is working and what we need to focus on to improve it.
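The two rates above can be computed directly from paired lists of actual and predicted labels. Here is a small sketch (the function name and the sample data are illustrative, not from the article):

```python
def error_rates(actual, predicted):
    """Compute (FPR, FNR) from paired binary labels."""
    fp = sum(1 for a, p in zip(actual, predicted) if p == 1 and a == 0)
    fn = sum(1 for a, p in zip(actual, predicted) if p == 0 and a == 1)
    tn = sum(1 for a, p in zip(actual, predicted) if p == 0 and a == 0)
    tp = sum(1 for a, p in zip(actual, predicted) if p == 1 and a == 1)
    fpr = fp / (fp + tn)  # positives predicted among actual negatives
    fnr = fn / (fn + tp)  # negatives predicted among actual positives
    return fpr, fnr

actual    = [1, 1, 0, 0, 0, 1, 0, 1]
predicted = [1, 0, 0, 1, 0, 1, 0, 1]
fpr, fnr = error_rates(actual, predicted)
print(fpr, fnr)  # 0.25 0.25
```

Note that each rate is normalized by the count of actual negatives (for FPR) or actual positives (for FNR), not by the total number of predictions.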

FAQs

1. Which one is more important: a false positive or a false negative?

It depends on the situation; the situation decides which one matters more. There is no hard and fast rule saying which one will always be more important.

2. What is the use of false positives and negatives in data science and machine learning?

These are metrics that quantify the errors a model makes; they are used in calculating accuracy and hence in evaluating the performance of our model.

3. Why is the problem of false positives and false negatives important?

All tests have a chance of resulting in false positive and false negative errors. They are an unavoidable problem in scientific testing. This creates problems in data analysis in many scientific fields. For example, a blood test can be used to screen for a number of diseases, including diabetes.

4. How do you increase the true positive rate?

Every positive example in your training set can be duplicated to give your classifier the impression that the classes are genuinely balanced. Alternatively, you can adjust the classifier's loss to penalize false negatives more heavily (which is actually pretty close to duplicating the positive examples in the dataset).
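The duplication (oversampling) idea from the answer above can be sketched as follows. This is a minimal illustration under made-up data; the function name and the integer duplication factor are choices for this example, not a prescribed recipe:

```python
def oversample_positives(samples):
    """samples: list of (features, label) pairs with binary labels.

    Repeats the positive examples until their count roughly
    matches the negatives, so the classes appear balanced.
    """
    positives = [s for s in samples if s[1] == 1]
    negatives = [s for s in samples if s[1] == 0]
    if not positives:
        return samples
    factor = max(1, len(negatives) // len(positives))
    return negatives + positives * factor

# Imbalanced toy data: 4 negatives, 1 positive.
data = [([0.1], 0), ([0.2], 0), ([0.3], 0), ([0.4], 0), ([0.9], 1)]
balanced = oversample_positives(data)
print(sum(1 for _, y in balanced if y == 1))  # 4 positives now
```

Loss-based class weighting achieves a similar effect without enlarging the dataset, which matters when the training set is large.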

Key Takeaways

Both are errors, so the true value is the opposite of what they predict.

False positive means the prediction is positive but the true value is negative.

False negative means the prediction is negative but the true value is positive.

Which one is more critical depends on the situation.

Hey Ninjas! Don't stop here; check out Coding Ninjas for Machine Learning, more unique courses, and guided paths. Also, try Coding Ninjas Studio for more exciting articles, interview experiences, and fantastic Data Structures and Algorithms problems. Happy Learning!

 
