To do so, we compared their means and their correlational patterns with other typically predicted constructs. We did not opt for correlating the different versions of the test, because such a correlation does not necessarily reflect equivalence of the underlying cognitive processes that manifest themselves in the final scores. Second, we tested whether the CRT response format altered the well-established associations between CRT performance and benchmark variables: belief bias, denominator neglect, paranormal beliefs, actively open-minded thinking, and numeracy. Third, we tested the psychometric quality of the different formats of the test by comparing their internal consistency. We speculate that the specific nature of the CRT items helps build construct equivalence among the different response formats. We recommend using the validated multiple-choice version of the CRT presented here, particularly the four-option CRT, for practical and methodological reasons.
Because the correct answer is already listed among the options, the multiple-choice format might be easier.
First, evidence from educational measurement research has shown that, despite a high correlation between open-ended (also called constructed-response) and multiple-choice formats, the two are not interchangeable.
Open-ended questions are more difficult to solve than multiple-choice ones for stem-equivalent items (i.e., items that differ only in whether answer options are listed), because presenting options enables a different array of cognitive strategies that increase performance (Bonner, ).
For instance, if participants generate an incorrect answer that does not appear among the options, the limited answer set provides unintentional feedback, eliminating that solution as a possibility.
With multiple-choice questions, participants can also use a backward strategy: picking an answer from the listed options and trying to reconstruct the solution from it.
Finally, participants can simply guess when they are uncertain.
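The guessing issue is easy to quantify: with k options per item, blind guessing alone yields an expected score of n/k on an n-item test, and the classical correction-for-guessing formula penalizes wrong answers by 1/(k-1). A small illustrative sketch (function names hypothetical, not from the paper):

```python
def chance_score(n_items: int, n_options: int) -> float:
    # Expected number correct if every answer is a blind guess
    return n_items / n_options

def corrected_score(n_correct: int, n_wrong: int, n_options: int) -> float:
    # Classical correction for guessing: subtract 1/(k-1) per wrong answer
    return n_correct - n_wrong / (n_options - 1)

# On a hypothetical 3-item, four-option CRT:
print(chance_score(3, 4))        # 0.75 items correct expected by pure guessing
print(corrected_score(2, 1, 4))  # 2 correct with 1 wrong adjusts to roughly 1.67
```

This is one reason a four-option format is attractive: adding options lowers the chance-level score and so reduces the inflation that guessing introduces.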