‘Sensitivity’ of a test is –
**Core Concept**
The sensitivity of a test refers to its ability to correctly identify individuals who actually have a specific disease or condition. It is the proportion of all true cases that the test correctly flags as positive (the true positives, as opposed to the false negatives it misses). In other words, it is the test's ability to detect the disease when it is present.
**Why the Correct Answer is Right**
Sensitivity is calculated by dividing the number of true positives by the sum of true positives and false negatives. A test with high sensitivity therefore yields few false negatives, meaning it rarely misses the disease when it is present. In clinical practice, high sensitivity is especially valuable for screening tests, where the goal is to identify as many true cases as possible so that early intervention and treatment are not delayed.
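Expressed as a formula, with a worked example (the counts of 90 and 10 below are hypothetical, chosen only to make the arithmetic easy):

$$\text{Sensitivity} = \frac{TP}{TP + FN}$$

$$\text{If } TP = 90 \text{ and } FN = 10:\quad \text{Sensitivity} = \frac{90}{90 + 10} = 0.90 = 90\%$$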
**Why Each Wrong Option is Incorrect**
**Option A:** Specificity, not sensitivity, is the term for a test's ability to correctly identify individuals without the disease (true negatives).
**Option B:** Predictive value measures the probability that a given test result is correct; the positive predictive value, for example, is the probability that a positive result truly indicates disease. Unlike sensitivity, predictive values depend on disease prevalence in the tested population, so the two are not interchangeable.
**Option C:** Accuracy reflects a test's overall performance, counting both true positives and true negatives among all results; it blends detection of disease with exclusion of disease and is therefore not a direct measure of sensitivity.
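For quick contrast, the standard formulas behind each distractor term are:

$$\text{Specificity} = \frac{TN}{TN + FP} \qquad \text{PPV} = \frac{TP}{TP + FP} \qquad \text{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN}$$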
**Clinical Pearl / High-Yield Fact**
Remember that sensitivity and specificity are inversely related for a given test when its positivity threshold is shifted: lowering the cutoff catches more diseased individuals (higher sensitivity) but misclassifies more healthy ones (lower specificity), and raising it does the reverse. This trade-off is important to consider when choosing a test for a particular purpose: screening favors sensitivity, while confirmatory testing favors specificity.
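As a numeric sketch of this trade-off (all counts hypothetical, assuming a population of 100 diseased and 100 healthy individuals tested at two different cutoffs):

$$\text{Low cutoff: } \text{Sensitivity} = \frac{95}{95 + 5} = 95\%,\quad \text{Specificity} = \frac{70}{70 + 30} = 70\%$$

$$\text{High cutoff: } \text{Sensitivity} = \frac{80}{80 + 20} = 80\%,\quad \text{Specificity} = \frac{90}{90 + 10} = 90\%$$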
**Correct Answer: D. True positives divided by the sum of true positives and false negatives.**