This ProfEd Assessment and Learning section of the LET Professional Education exam reviewer contains 5 expert-reviewed practice questions. Each question comes with a plain-English explanation and notes on why the wrong answers are wrong.
Sample 1
If a test consistently yields the same results when administered to the same group of students under the same conditions, the test is said to be:
- A Valid
- B Reliable ✓
- C Objective
- D Practical
Answer: B
Reliability is the consistency or stability of test scores across repeated administrations. If the same students get nearly the same scores when tested twice under identical conditions, the test is reliable. Think of a bathroom scale: if it reads 60 kg every time you step on it, it is reliable. Reliability is about precision and reproducibility, not about whether the test actually measures what it claims to measure, which would be validity.
Why the other choices are wrong
- A. Validity is about accuracy in measuring what you intend to measure, not consistency.
- C. Objectivity refers to freedom from bias in scoring, typically associated with multiple-choice tests.
- D. Practicality concerns whether a test is feasible to administer, not its score consistency.
Sample 2
When a test has high reliability but low validity, it means that:
- A The test measures what it intends to measure inconsistently.
- B The test yields consistent results but does not measure the intended construct. ✓
- C The test is too difficult for the average student.
- D The test results vary significantly across different groups of students.
Answer: B
Reliability and validity are distinct properties. Reliability is consistency; a test is reliable when it produces similar results over repeated administrations. Validity is accuracy; a test is valid when it measures what it claims to measure. A test can be highly reliable (producing consistent scores) yet measure the wrong construct. For example, a test may consistently measure reading speed rather than comprehension, making it invalid for assessing reading comprehension despite its consistency. A teacher must ensure both reliability and validity for assessment to be meaningful.
Why the other choices are wrong
- A. That describes low reliability: the scores are inconsistent even though the right construct is targeted. The stem is the opposite scenario — consistent scores aimed at the wrong construct.
- C. Difficulty is a separate characteristic; it does not define the reliability-validity relationship.
- D. Differential results across groups concern fairness and bias, not the reliability-validity distinction.
Sample 3
Which type of assessment is most appropriate when a teacher wants to evaluate a student's final output in a way that mirrors real-life professional tasks, such as designing a functioning website for a local business?
- A Traditional assessment
- B Authentic assessment ✓
- C Norm-referenced assessment
- D Placement assessment
Answer: B
Authentic assessment evaluates student performance on meaningful, real-world tasks that mirror what professionals actually do. When a student designs a functioning website for a local business, that is an authentic task. The assessment criterion is whether the website works and serves its intended purpose, not a paper-and-pencil test about web design. Authentic assessment engages higher-order thinking and demonstrates practical competence, aligning with constructivist and competency-based education. It is especially valued in K-12 curricula emphasizing application over recall.
Why the other choices are wrong
- A. Traditional assessment typically uses tests, quizzes, or worksheets, not real-world projects.
- C. Norm-referenced assessment compares students to each other; it does not address task authenticity.
- D. Placement assessment determines student readiness or level; it is not about evaluating real-world competence.