# USE OF TESTS IN DIAGNOSIS AND MANAGEMENT


The usefulness of a test in a particular clinical situation depends not only on the test’s characteristics (eg, sensitivity and specificity, which are not predictive measures) but also on the probability that the patient has the disease before the test result is known (pretest probability). The results of a useful test substantially change the probability that the patient has the disease (posttest probability). Figure 1–4 shows how posttest probability can be calculated from the known sensitivity and specificity of the test and the estimated pretest probability of disease (or disease prevalence), based on Bayes theorem.
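The calculation in Figure 1–4 is an application of Bayes theorem. For a positive test result it can be written out as follows (a standard formulation; the symbols are ours, not the figure's):

$$
P(D \mid T^{+}) = \frac{\text{sensitivity} \times P(D)}{\text{sensitivity} \times P(D) + (1 - \text{specificity}) \times \bigl(1 - P(D)\bigr)}
$$

where $P(D)$ is the pretest probability of disease and $P(D \mid T^{+})$ is the posttest probability after a positive result. The numerator is the probability of a true-positive result; the denominator adds the probability of a false-positive result.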

The pretest probability, or prevalence, of disease has a profound effect on the posttest probability of disease. As demonstrated in Table 1–4, when a test with 90% sensitivity and specificity is used, the posttest probability can vary from 8% to 99% depending on the pretest probability of disease. Furthermore, as the pretest probability of disease decreases, it becomes more likely that a positive test result represents a false positive.

**Table 1–4.** Influence of pretest probability on posttest probability when a test with 90% sensitivity and 90% specificity is used.

| Pretest Probability | Posttest Probability |
|---|---|
| 0.01 | 0.08 |
| 0.50 | 0.90 |
| 0.99 | 0.999 |
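The Table 1–4 values can be reproduced directly from Bayes theorem. A minimal sketch in Python (the function name `posttest_probability` is ours, not from the text):

```python
def posttest_probability(pretest: float, sensitivity: float, specificity: float) -> float:
    """Posttest probability of disease after a positive test result (Bayes theorem)."""
    true_pos = sensitivity * pretest                # P(positive test and disease)
    false_pos = (1 - specificity) * (1 - pretest)   # P(positive test and no disease)
    return true_pos / (true_pos + false_pos)

# Table 1-4: a test with 90% sensitivity and 90% specificity
for pretest in (0.01, 0.50, 0.99):
    posttest = posttest_probability(pretest, sensitivity=0.90, specificity=0.90)
    print(f"pretest {pretest:.2f} -> posttest {posttest:.3f}")
```

Running the loop prints posttest probabilities of about 0.083, 0.900, and 0.999, matching the table (the 0.01 row is rounded to 0.08 there).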

As an example, suppose the clinician wishes to calculate the posttest probability of prostate cancer using the PSA test and a cutoff value of 4 ng/mL (4 mcg/L). Using the data shown in Figure 1–5, sensitivity is 90% and specificity is 60%. The clinician estimates the pretest probability of disease given all the evidence and then calculates the posttest probability using the approach shown in Figure 1–4. The pretest probability that an otherwise healthy 50-year-old man has prostate cancer is equal to the prevalence of prostate cancer in that age group (probability = 10%) and the posttest probability after a positive test is only 20%. Even though the test is positive, there is still an 80% chance that the patient does not have prostate cancer (Figure 1–6A). If the clinician finds a prostate nodule on rectal examination, the pretest probability of prostate cancer rises to 50% and the posttest probability using the same test is 69% (Figure 1–6B). Finally, if the clinician estimates the pretest probability to be 98% based on a prostate nodule, bone pain, and lytic lesions on spine radiographs, the posttest probability using PSA is 99% (Figure 1–6C). This example illustrates that pretest probability has a profound effect on posttest probability and that tests provide more information when the diagnosis is truly uncertain (pretest probability about 50%) than when the diagnosis is either unlikely or nearly certain.
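The three PSA scenarios above can be checked numerically with the same Bayes calculation (a sketch; the helper name is ours, and the sensitivity and specificity defaults are the PSA figures quoted in the text):

```python
def posttest_probability(pretest: float, sensitivity: float = 0.90, specificity: float = 0.60) -> float:
    """Posttest probability after a positive PSA result at a 4 ng/mL cutoff."""
    true_pos = sensitivity * pretest
    false_pos = (1 - specificity) * (1 - pretest)
    return true_pos / (true_pos + false_pos)

# Pretest probabilities from the text: otherwise healthy 50-year-old man (10%),
# palpable prostate nodule (50%), nodule plus bone pain and lytic lesions (98%).
for pretest in (0.10, 0.50, 0.98):
    print(f"pretest {pretest:.0%} -> posttest {posttest_probability(pretest):.0%}")
```

This reproduces the posttest probabilities of 20%, 69%, and 99% from Figures 1–6A through 1–6C, and makes the central point concrete: the same positive test moves a 50% pretest probability by 19 percentage points but a 10% or 98% pretest probability by far less.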

**Figure 1–6.** Effect of pretest probability and test sensitivity and specificity on the posttest probability of disease. (See text for explanation.)


### Citation

*Guide to Diagnostic Tests*, 7th ed., McGraw-Hill Education, 2017. *Emergency Central*, emergency.unboundmedicine.com/emergency/view/GDT/619004/all/USE_OF_TESTS_IN_DIAGNOSIS_AND_MANAGEMENT.