NP Insights

Thoughtful use of diagnostic testing: Making practical sense of sensitivity, specificity, and predictive value
By Tom Bartol, APRN

Our current healthcare culture emphasizes evidence-based treatment. Diagnostic testing should also be evidence-based, yet tests are sometimes ordered without considering the evidence behind them. Clinicians may order a diagnostic test out of fear or to offer reassurance to the patient. Inefficient testing can increase costs and lead to unnecessary or unwanted treatment for some patients. Using evidence to guide diagnostic testing can become part of the shared decision-making process, giving patients a perspective on what the test might mean for them. The patient and clinician can then make a choice that fits the patient's condition as well as the patient's goals and values.

This process need not add immense complexity to decision-making. Four steps can make the process more thoughtful and efficient. First, determine the pretest probability of the condition you are concerned about; if you have no idea what you are looking for and no differential diagnoses, a test is probably not the way to begin. Second, determine what you want from the test: do you want to rule a disease or condition in or rule it out? Third, understand the sensitivity and specificity of the test you want to use. Finally, think about what you will do with the results of the test.

■ Pretest probability
Pretest probability is the likelihood that a patient has the condition you are considering prior to testing. It can be based on the prevalence of the condition in the population. For example, the prevalence of colon cancer in the average 50-year-old female patient is about 0.1%, or 1 in 1,000.1 If that patient had a family history of colon cancer, heavy alcohol use, little physical activity, or other factors that increase risk for colon cancer, the pretest probability would be higher; frequent exercise or a high-fiber diet would lower it. Pretest probability can also vary based on symptoms or clinical findings. Consider a 59-year-old male presenting with left-sided chest pressure. The pretest probability of coronary artery disease (CAD) would be lower if the pain is sharp and aggravated by deep breathing, and higher if the pain is worse with exertion and accompanied by shortness of breath, nausea, and diaphoresis. A past history of CAD, or a history of hypertension and diabetes, would also increase the pretest probability. Determining pretest probability can sometimes be challenging. For various types of cancer, the pretest probability or incidence can be found on the CDC website (cdc.gov). In many cases, you will not be able to find an exact percentage; simply determining whether the probability is low, medium, or high can be very helpful in making testing decisions.
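Pretest probabilities are often easiest to discuss with patients as natural frequencies, that is, expected cases per 1,000 or per 10,000 people. The short Python sketch below is illustrative only (it is not part of the original article); it simply converts a prevalence percentage into those frequencies.

```python
def as_natural_frequency(prevalence_pct, per=10_000):
    """Convert a prevalence percentage into expected cases per `per` people and a '1 in N' figure."""
    expected_cases = prevalence_pct / 100 * per
    one_in_n = round(100 / prevalence_pct)
    return expected_cases, one_in_n

# Colon cancer prevalence of about 0.1% in an average 50-year-old woman
cases, one_in = as_natural_frequency(0.1)
print(f"0.1% is about {cases:.0f} per 10,000, or roughly 1 in {one_in:,}")   # 10 per 10,000; 1 in 1,000

# Breast cancer prevalence of 0.28% used later in this article
cases, one_in = as_natural_frequency(0.28)
print(f"0.28% is about {cases:.0f} per 10,000, or roughly 1 in {one_in:,}")  # 28 per 10,000
```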

For example, consider the pretest probability of diabetes in two different people. The first is a thin 65-year-old male with no family history of diabetes and normal BP and lipids; he would have a low pretest probability. The second is an obese 58-year-old male with hypertension, hyperlipidemia, and two brothers with diabetes; he would have a high pretest probability. A general sense of the pretest probability for many conditions can be gained from the patient's history and physical exam.

■ Testing goals
Next, consider your goal for the test. Do you want to rule a diagnosis in or rule it out? For those with a high pretest probability of a condition, you will likely be ruling in a diagnosis; for those with a low pretest probability, ruling out will be the goal of the diagnostic test. By combining the pretest probability with what you want the test to do, you can compare the sensitivity and specificity of candidate tests to determine how each will help you reach that goal. Understanding sensitivity and specificity can be challenging for some clinicians. An easy way to remember them: a highly sensitive test that is negative rules out a condition, whereas a highly specific test that is positive rules in the condition. Think "SNOut" (Sensitive test, Negative result, rules Out) and "SPIn" (SPecific test, Positive result, rules In).
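In terms of a 2 × 2 table, sensitivity and specificity are simple proportions. The sketch below uses made-up counts purely for illustration; it is not drawn from the article.

```python
def sensitivity(tp, fn):
    """Proportion of people WITH the disease who test positive: TP / (TP + FN)."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """Proportion of people WITHOUT the disease who test negative: TN / (TN + FP)."""
    return tn / (tn + fp)

# Hypothetical counts: 100 people with disease, 1,000 without
tp, fn, fp, tn = 80, 20, 100, 900

print(f"Sensitivity: {sensitivity(tp, fn):.0%}")  # 80%: a negative result helps rule OUT (SNOut)
print(f"Specificity: {specificity(tn, fp):.0%}")  # 90%: a positive result helps rule IN (SPIn)
```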


Finally, before ordering a diagnostic test, consider the implications of the results, that is, what you and the patient want to do with the test results. Consider specific disease implications as well as age, comorbidities, life expectancy, and the patient's needs, goals, and values. Should the test be positive, would the patient want or tolerate treatment? For example, would an asymptomatic 80-year-old want to undergo treatment if a colon cancer screen were positive? What would be the potential risks and benefits of treatment? These questions should be discussed before performing the test.

■ Applying the principles
Screening mammography for breast cancer offers practice in applying the four testing principles. The prevalence of breast cancer in the average 50-year-old woman is 0.28%, which would be the pretest probability.2 For every 10,000 50-year-old women, 28 could be expected to have breast cancer. With such a low pretest probability, as is typical of screening tests, our goal is to rule out breast cancer. To rule out, we want a highly sensitive test (remember "SNOut"). Data vary regarding the sensitivity and specificity of screening mammography, with reported ranges of 68% to 90% for sensitivity and 82% to 97% for specificity; for our purposes, we will use 80% sensitivity and 90% specificity.3 (See Screening mammography with prevalence of 0.28%.) Suppose 10,000 women are screened with mammography. With a prevalence of 28/10,000 and a sensitivity of 80%, 80% of the 28 women with breast cancer (22 of them) are identified by a positive screening mammogram. Six women have breast cancer but a negative mammogram; these are missed by screening, the false-negative results. The 90% specificity means that 90% of those who do not have the disease test negative; thus, 8,975 of the 9,972 women without cancer have negative mammograms.

Screening mammography with prevalence of 0.28%3 (sensitivity 80%, specificity 90%)

                      Have breast cancer    Do not have breast cancer    Totals
Positive mammogram    22 (true positive)      997 (false positive)        1,019
Negative mammogram     6 (false negative)   8,975 (true negative)         8,981
Totals                28                    9,972                        10,000
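The counts in the table above follow directly from the prevalence, sensitivity, and specificity. As a minimal sketch (illustrative, not part of the original article), the table can be rebuilt for 10,000 women as follows.

```python
def two_by_two(n, prevalence, sensitivity, specificity):
    """Expected 2 x 2 counts for n people at a given prevalence, sensitivity, and specificity."""
    with_disease = round(n * prevalence)
    without_disease = n - with_disease
    tp = round(with_disease * sensitivity)      # true positives: detected by the test
    fn = with_disease - tp                      # false negatives: missed by the test
    tn = round(without_disease * specificity)   # true negatives: correctly cleared
    fp = without_disease - tn                   # false positives: flagged despite no disease
    return tp, fn, fp, tn

tp, fn, fp, tn = two_by_two(10_000, 0.0028, 0.80, 0.90)
print(tp, fn, fp, tn)  # 22 6 997 8975, matching the table above
```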

True and false-positive results with higher prevalence of 25% (sensitivity 80%, specificity 90%)

                 Has disease              Does not have disease     Totals
Positive test    2,000 (true positive)      750 (false positive)     2,750
Negative test      500 (false negative)   6,750 (true negative)      7,250
Totals           2,500                    7,500                     10,000

The other 10% of the 9,972 women without cancer, 997 of them, have positive mammograms even though they do not have breast cancer. These are the false positives, and because the pretest probability of breast cancer is so low, the number of false-positive mammograms is large.

■ Positive and negative predictive values
Two other useful numbers, the positive predictive value (PPV) and the negative predictive value (NPV), can be determined from this information. The PPV is the probability that someone with a positive test actually has breast cancer; it depends on the prevalence of disease, and the higher the PPV, the better the test is for ruling in the disease. The NPV is the probability that someone with a negative test does not have breast cancer; the higher the NPV, the better the test is for ruling out the disease. In this case, the PPV is 22/1,019, the true positives divided by the total number of positive mammograms, or about 2%. With the prevalence, sensitivity, and specificity used here, a woman with a positive screening mammogram has only about a 2% chance of actually having breast cancer. The NPV is 8,975, the number of true-negative tests, divided by 8,981, the total number of negative tests, or 99.9%. This means that a woman with a negative screening mammogram in this population has a 99.9% probability of not having breast cancer. In a low-prevalence population such as this, the high NPV makes a negative result very useful for ruling out disease.

■ The impact of prevalence
Prevalence makes a big difference (see True and false-positive results with higher prevalence of 25%). Now the PPV is 2,000/2,750 (or 73%), while the NPV is 6,750/7,250 (or 93%).
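As a quick check, and purely as an illustrative sketch rather than anything from the article, the predictive values in both tables can be recomputed directly from the cell counts.

```python
def ppv(tp, fp):
    """Probability that a positive test is a true positive: TP / (TP + FP)."""
    return tp / (tp + fp)

def npv(tn, fn):
    """Probability that a negative test is a true negative: TN / (TN + FN)."""
    return tn / (tn + fn)

# Screening mammography table (prevalence 0.28%)
print(f"PPV: {ppv(22, 997):.1%}, NPV: {npv(8_975, 6):.1%}")       # PPV ~2.2%, NPV ~99.9%

# Higher-prevalence table (prevalence 25%)
print(f"PPV: {ppv(2_000, 750):.1%}, NPV: {npv(6_750, 500):.1%}")  # PPV ~72.7%, NPV ~93.1%
```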


With increasing prevalence, that is, increasing pretest probability, the PPV (the likelihood that a positive test really indicates the presence of disease) goes up, while the NPV goes slightly down. Said another way, with high prevalence we get better at ruling a disease in and worse at ruling it out: there is a higher probability that a person with a positive test really has the disease. Sensitivity and specificity alone are not enough to tell us the usefulness of a test. We must know the pretest probability, that is, the prevalence of the disease in the population we are working with. As can be seen, the higher the pretest probability of breast cancer, be it through increased risk factors or symptoms, the higher the positive predictive value and the fewer the false-positive results.
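To see this trend without building each table by hand, the sketch below (illustrative only, not from the article) sweeps the pretest probability while holding sensitivity at 80% and specificity at 90%, the values used in the mammography example; it is algebraically the same as constructing a 10,000-person table at each prevalence.

```python
sens, spec = 0.80, 0.90  # the illustrative values used in the mammography example

for prev in (0.0028, 0.01, 0.05, 0.25, 0.50):
    # Expected fractions of the population in each cell of the 2 x 2 table
    ppv = (sens * prev) / (sens * prev + (1 - spec) * (1 - prev))
    npv = (spec * (1 - prev)) / (spec * (1 - prev) + (1 - sens) * prev)
    print(f"pretest {prev:>6.2%}:  PPV {ppv:6.1%}   NPV {npv:6.1%}")
```

At a pretest probability of 0.28% the PPV is about 2%; at 25% it rises to about 73%, while the NPV slips from 99.9% to 93%, matching the two tables.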

■ Simplifying the process
This can all sound confusing, but think about it in more general terms, without numbers. Consider the use of exercise treadmill testing (ETT) to rule in or rule out CAD. A patient complaining of chest pain who is at low risk, for example, a young patient with pain on deep breathing or chest movement but not with exertion and with no associated symptoms, would be considered to have a low pretest probability for CAD. Even without knowing the sensitivity and specificity of ETT, you know that, given the low pretest probability, a positive test is likely to be a false positive. A treadmill test "just to be sure" may not be so sure in a low-risk patient, and the high likelihood of false-positive results may lead to more unnecessary testing.

On the other hand, a person who has many risk factors for or classic symptoms of myocardial ischemia, such as left-sided chest pressure with associated shortness of breath, nausea, and diaphoresis, has a high pretest probability for CAD. Here a negative test would not be reassuring, as there may be a higher chance of a false-negative than of a true-negative result. In this case, despite a negative test, treating the patient as if he has myocardial ischemia would be more appropriate, and it might be better to move to a higher-specificity test first.
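The article deliberately keeps this example free of numbers, but a rough, purely hypothetical sketch shows why a negative result is not reassuring when the pretest probability is high. The 80% sensitivity, 90% specificity, and 85% pretest probability below are assumptions chosen only for illustration; they are not claimed to reflect the actual performance of exercise treadmill testing.

```python
sens, spec = 0.80, 0.90   # assumed for illustration only
pretest = 0.85            # hypothetical high pretest probability of CAD

false_neg = (1 - sens) * pretest   # has disease, tests negative
true_neg = spec * (1 - pretest)    # no disease, tests negative
p_disease_given_negative = false_neg / (false_neg + true_neg)

print(f"Chance a negative test is a false negative: {p_disease_given_negative:.0%}")  # ~56%
```

With these assumptions, more than half of the negative results would occur in patients who actually have the disease.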

The rule to remember when trying to test thoughtfully is to think about pretest probability. Pretest probability is not a static number; as you learn more about your patient's symptoms and history, it may go up or down. The key is that the lower the pretest probability (or prevalence), as we saw with screening mammography, the higher the likelihood of false-positive results and the lower the PPV. If the pretest probability (or prevalence) is high, the risk of false-negative results is high.

■ Adding the patient's perspective
Knowing prevalence or baseline risk is a helpful tool for both the clinician and the patient. Even if you do know the sensitivity and specificity, it is the pretest probability that tells you what a test result will actually mean. With this information, shared decision-making can help you and your patient decide whether or not to perform a test. The patient can use this information to make a more informed decision, knowing the risk of the condition prior to testing.

Not only are you now more thoughtful in your testing, you are also using the patient as your partner in the choice. This becomes an opportunity to discuss step 4 with the patient: what will you do with the results? Thoughtful testing will make a difference for both the clinician and the patient. Remember, when choosing diagnostic testing, consider:
• What condition are you looking for?
• What is the pretest probability for that condition?
• What is the sensitivity of the test?
• What is the specificity of the test?
• What are the risks of the test?
• What are the benefits, and how will it change therapy or improve health?
These questions, shared with the patient, will help lead to more deliberate and efficient testing. Thoughtful, evidence-based testing, rather than reflex or habit testing, can make a difference in healthcare. The goal is not to limit testing or to save money but to improve care. More tests do not mean better care; appropriate testing does. The best reassurance we can give our patients is more information about the test and what it means for them.

REFERENCES
1. Centers for Disease Control and Prevention. Colorectal cancer risk by age. http://www.cdc.gov/cancer/colorectal/statistics/age.htm.
2. National Cancer Institute. Breast cancer risk in American women. http://www.cancer.gov/types/breast/risk-fact-sheet.
3. Kavanagh AM, Giles GG, Mitchell H, Cawson JN. The sensitivity, specificity, and positive predictive value of screening mammography and symptomatic status. J Med Screen. 2000;7(2):105-110.

Tom Bartol is an Advanced Practice Registered Nurse at Richmond Area Health Center, HealthReach Community Health Centers, Richmond, Me.
The author has disclosed that he has no financial relationships related to this article.
Questions or comments? E-mail bartolnp@gmail.com
DOI: 10.1097/01.NPR.0000469262.22776.1d

