**Sensitivity is –**
**Core Concept**
Sensitivity is a statistical measure used to evaluate the accuracy of a diagnostic test or screening program. It represents the proportion of actual positives that are correctly identified by the test. In other words, it measures the test's ability to detect true cases of a disease.
**Why the Correct Answer is Right**
The correct formula for sensitivity is **true positives / (true positives + false negatives)**. Sensitivity focuses on the test's performance in the subgroup of individuals who actually have the disease: every diseased person is either correctly detected (a true positive) or missed (a false negative). Dividing true positives by the sum of true positives and false negatives therefore gives the proportion of actual cases the test detects, which is a crucial measure of the effectiveness of a diagnostic test or screening program.
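The arithmetic can be shown with a small example. The counts below are purely illustrative, not drawn from any real study:

```python
# Hypothetical screening-test results among 100 patients who truly
# have the disease (illustrative numbers only).
tp = 90  # diseased patients the test correctly flagged
fn = 10  # diseased patients the test missed

# Sensitivity = TP / (TP + FN): the fraction of true cases detected.
sensitivity = tp / (tp + fn)
print(sensitivity)  # 0.9
```

Note that only the diseased patients appear in the formula; results from healthy patients (true negatives and false positives) play no role in sensitivity.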
**Why Each Wrong Option is Incorrect**
**Option B:** This option is incorrect because it confuses sensitivity with specificity. Specificity measures the proportion of actual negatives that are correctly identified by the test, not the proportion of actual positives.
**Option C:** This option is incorrect because it incorrectly defines sensitivity as the ratio of true negatives to the sum of true negatives and false positives. This is actually the formula for specificity, not sensitivity.
**Option D:** This option is incorrect because it defines sensitivity as the ratio of true negatives to the sum of false negatives and true positives. This mixes counts from the diseased group (the denominator) with a count from the disease-free group (the numerator) and does not correspond to sensitivity, specificity, or any standard test metric.
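For contrast with the specificity formula that Options B and C confuse with sensitivity, here is the same style of calculation on the disease-free group (again with made-up counts):

```python
# Hypothetical results among 100 patients who are truly disease-free
# (illustrative numbers only).
tn = 80  # healthy patients the test correctly cleared
fp = 20  # healthy patients the test falsely flagged

# Specificity = TN / (TN + FP): the fraction of non-cases correctly
# identified. Note it uses only the disease-free patients, just as
# sensitivity uses only the diseased ones.
specificity = tn / (tn + fp)
print(specificity)  # 0.8
```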
**Clinical Pearl / High-Yield Fact**
Sensitivity (like specificity) is an intrinsic property of the test and does not change with disease prevalence. What prevalence changes is the *predictive value* of a result: in a low-prevalence population, even a test with high sensitivity and specificity will generate many false positives relative to true positives, so its positive predictive value falls. A useful mnemonic is **SnNout**: a highly **S**e**n**sitive test with a **N**egative result helps rule **out** disease.
**Correct Answer: A. True positive/(true positive + false negative)**