
False negative


In medical statistics, false positives and false negatives are concepts analogous to type I and type II errors in statistical hypothesis testing, where a positive result corresponds to rejecting the null hypothesis, and a negative result corresponds to not rejecting the null hypothesis. The terms are often used interchangeably, but there are differences in detail and interpretation.

A false positive error, or for short a false positive, commonly called a "false alarm", is a result that indicates a given condition is present when it is not; in other words, a positive effect has been erroneously assumed. In the case of "crying wolf", the condition tested for was "is there a wolf near the herd?"; the actual situation was that there was no wolf near the herd, yet the shepherd wrongly indicated there was one by calling "Wolf, wolf!"

A false positive error is a type I error where the test checks a single condition, with a decision designated as either positive ("true") or negative ("false"), and wrongly gives the affirmative (positive) decision.

A false negative error, or for short a false negative, is a test result which indicates that a condition does not hold when in fact it does; in other words, no effect has been erroneously assumed. A common example is a guilty prisoner who is freed from jail: the condition "is the prisoner guilty?" is true (yes, the prisoner is guilty), but the test (a court of law) failed to recognize this and wrongly decided the prisoner was not guilty.

A false negative error is a type II error where the test checks a single condition and wrongly gives a negative decision.
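A minimal Python sketch, using hypothetical inputs, shows how a single condition check is labelled by comparing the test's decision with the actual condition:

    def classify(condition_present: bool, test_positive: bool) -> str:
        """Label one check of a single condition."""
        if test_positive and condition_present:
            return "true positive"
        if test_positive and not condition_present:
            return "false positive"   # type I error: a "false alarm"
        if not test_positive and condition_present:
            return "false negative"   # type II error: the condition is missed
        return "true negative"

    # The shepherd crying "Wolf, wolf!" with no wolf near the herd:
    print(classify(condition_present=False, test_positive=True))   # false positive
    # The court freeing a guilty prisoner:
    print(classify(condition_present=True, test_positive=False))   # false negative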

The false positive rate is the proportion of all actual negatives that still yield positive test outcomes, i.e., the conditional probability of a positive test result given that the condition tested for is absent.

The false positive rate is equal to the significance level. The specificity of the test is equal to 1 minus the false positive rate.
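A minimal Python sketch with made-up counts shows the relationship between the false positive rate and specificity:

    false_positives = 8    # hypothetical: negatives wrongly reported as positive
    true_negatives = 92    # hypothetical: negatives correctly reported as negative

    false_positive_rate = false_positives / (false_positives + true_negatives)
    specificity = true_negatives / (false_positives + true_negatives)

    print(false_positive_rate)   # 0.08
    print(specificity)           # 0.92, equal to 1 - false_positive_rate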

In statistical hypothesis testing, this fraction is given the Greek letter α, and 1 − α is defined as the specificity of the test. Increasing the specificity of the test lowers the probability of type I errors, but raises the probability of type II errors (false negatives, i.e., failing to reject the null hypothesis when the alternative hypothesis is in fact true).
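A minimal Python sketch with hypothetical scores illustrates the trade-off: raising the decision threshold makes the test more specific (fewer false positives) but also produces more false negatives:

    negatives = [0.1, 0.2, 0.3, 0.4, 0.6]    # scores when the condition is absent
    positives = [0.5, 0.7, 0.8, 0.9, 0.95]   # scores when the condition is present

    for threshold in (0.45, 0.65):
        false_pos = sum(score >= threshold for score in negatives)
        false_neg = sum(score < threshold for score in positives)
        print(threshold, false_pos, false_neg)

    # threshold 0.45 -> 1 false positive, 0 false negatives
    # threshold 0.65 -> 0 false positives, 1 false negative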

