# Difference Between Type I and Type II Errors with Proper Definition and Brief Explanation

Two main types of errors can occur in hypothesis testing: rejecting H0 when H0 is true, or accepting H0 when H0 is in fact false. The former is a Type I error and the latter a Type II error.

Hypothesis testing is a common procedure used by researchers to determine whether a specific hypothesis is supported by the data. The test result is the basis for accepting or rejecting the null hypothesis (H0), the proposition that there is no difference or effect. The alternative hypothesis (H1) is the proposition that some difference or effect exists.

There are subtle but important differences between Type I and Type II errors, which we discuss in this article.

## Contents

1. Comparison chart
2. Definitions
3. Key differences
4. Conclusion

### Comparison chart

| Basis for comparison | Type I error | Type II error |
|---|---|---|
| Meaning | Rejection of a hypothesis that should have been accepted. | Acceptance of a hypothesis that should have been rejected. |
| Equivalent to | False positive | False negative |
| What is it? | Incorrect rejection of a true null hypothesis. | Incorrect acceptance of a false null hypothesis. |
| It represents | A false hit | A miss |
| Probability of committing the error | Equal to the significance level of the test. | Equal to 1 minus the power of the test. |
| Denoted by | Greek letter α | Greek letter β |
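The four possible outcomes in the chart above can be sketched as a small decision table. This is an illustrative helper only; the function name and labels are hypothetical, not part of any standard library:

```python
# Hypothetical sketch: classify a test decision against the (unknown) truth.
# "reject_h0" is the researcher's decision; "h0_is_true" is reality.

def classify_outcome(h0_is_true: bool, reject_h0: bool) -> str:
    """Map a hypothesis-test decision to its outcome category."""
    if h0_is_true and reject_h0:
        return "Type I error (false positive)"
    if not h0_is_true and not reject_h0:
        return "Type II error (false negative)"
    return "correct decision"

print(classify_outcome(h0_is_true=True, reject_h0=True))    # Type I error (false positive)
print(classify_outcome(h0_is_true=False, reject_h0=False))  # Type II error (false negative)
```

Only the two mismatched cells of the table produce errors; the other two combinations are correct decisions.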

### Definition of type I error

In statistics, a Type I error is an error that occurs when the sample results cause the null hypothesis to be rejected even though it is true. In simple terms, it is the error of accepting the alternative hypothesis when the results are actually attributable to chance.

Also known as the alpha error, it leads the researcher to infer that there is a difference between two observations when they are in fact identical. The probability of a Type I error is equal to the significance level that the researcher sets for the test; the significance level is precisely the chance of making a Type I error.

E.g. Suppose that, based on sample data, a company's research team concludes that more than 50% of customers like the new service the company has introduced, when in fact fewer than 50% do.
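A short simulation makes this concrete. The sketch below uses assumed numbers for illustration (a true "like" rate of exactly 50%, n = 400 respondents, 2000 repetitions): it runs a one-sided z-test at α = 0.05 on data where H0 is true and counts how often H0 is wrongly rejected:

```python
# Minimal simulation sketch (all numbers are illustrative assumptions).
# H0: p = 0.5 is TRUE here, yet a one-sided z-test at alpha = 0.05 still
# rejects it roughly alpha of the time -- each rejection is a Type I error.
import math
import random

random.seed(42)
alpha = 0.05
z_crit = 1.6449          # one-sided 5% critical value of the standard normal
n, trials = 400, 2000
p_true = 0.5             # customers' true "like" rate: H0 is actually true

type_i = 0
for _ in range(trials):
    likes = sum(random.random() < p_true for _ in range(n))
    p_hat = likes / n
    z = (p_hat - 0.5) / math.sqrt(0.5 * 0.5 / n)
    if z > z_crit:       # wrongly conclude "more than 50% like the service"
        type_i += 1

print(f"Observed Type I error rate: {type_i / trials:.3f}")  # should land near alpha
```

Because H0 is true in every repetition, every rejection is a Type I error, so the observed rejection rate should hover near the chosen significance level α.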

### Definition of type II error

When the null hypothesis is accepted on the basis of the data even though it is in fact false, the error is known as a Type II error. It arises when the researcher fails to reject a false null hypothesis. It is denoted by the Greek letter beta (β) and is often called a beta error.

A Type II error is the researcher's failure to accept an alternative hypothesis even though it is true: a proposition that should be rejected is validated, and the researcher concludes that the two observations are identical when in fact they are not.

The probability of making such an error is β, which equals 1 minus the power of the test. Here, the power of the test is the probability of correctly rejecting a null hypothesis that is false. As the sample size increases, the power of the test also increases, reducing the risk of a Type II error.

E.g. Suppose that, based on sample results, an organization's research team claims that fewer than 50% of customers like the new service the company has introduced, when in fact more than 50% do.
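The link between sample size, power, and the Type II error rate can also be illustrated by simulation. In the sketch below (assumed numbers: the true "like" rate is 0.55, so H0: p = 0.5 is false; one-sided test at α = 0.05), β shrinks as n grows:

```python
# Minimal simulation sketch (assumed numbers). The true "like" rate is 0.55,
# so H0: p = 0.5 is FALSE; failing to reject it is a Type II error. Larger
# samples increase the power (1 - beta) and shrink beta.
import math
import random

random.seed(7)
z_crit = 1.6449          # one-sided 5% critical value of the standard normal
p_true = 0.55            # H0 is actually false
trials = 2000

betas = {}
for n in (100, 400, 1600):
    misses = 0
    for _ in range(trials):
        likes = sum(random.random() < p_true for _ in range(n))
        z = (likes / n - 0.5) / math.sqrt(0.25 / n)
        if z <= z_crit:  # fail to reject the false H0: a Type II error
            misses += 1
    betas[n] = misses / trials
    print(f"n={n:5d}  beta={betas[n]:.3f}  power={1 - betas[n]:.3f}")
```

Every failure to reject is a Type II error here, so the printed β column falls, and the power column rises, as the sample size increases.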

### Key differences between type I and type II error

The following points summarize the differences between Type I and Type II errors:

1. A Type I error occurs when the sample results lead to rejection of a null hypothesis that is in fact true. A Type II error occurs when the sample results lead to acceptance of a null hypothesis that is in fact false.
2. A Type I error is also known as a false positive: in essence, a positive result corresponds to rejecting the null hypothesis. In contrast, a Type II error is known as a false negative: a negative result corresponds to accepting the null hypothesis.
3. When the null hypothesis is true but wrongly rejected, it is a Type I error. Conversely, when the null hypothesis is false but wrongly accepted, it is a Type II error.
4. A Type I error asserts something that is not really present; it is a false hit. In contrast, a Type II error fails to detect something that is present; it is a miss.
5. The probability of making a Type I error equals the significance level of the test. Conversely, the probability of making a Type II error equals 1 minus the power of the test.
6. A Type I error is denoted by the Greek letter α, while a Type II error is denoted by the Greek letter β.

### Conclusion

In general, the Type I error arises when the researcher notices a difference, when in fact there is none, while the Type II error arises when the investigator does not discover a difference when in fact there is one. The occurrence of the two types of errors is very common as they are part of the testing process. These two errors cannot be completely eliminated, but they can be reduced to a certain level.