Statement validity assessment: Inter-rater reliability of criteria-based content analysis in the mock-crime paradigm
Authors: Matthias Gamer, Gerhard Vossel, Hans-Georg Rill, Heinz Werner Gödert

Subjects: Correlation, Inter-rater reliability, Validity assessment, Cohen's kappa, Content analysis, Statement (logic), Statistics, Poison control, Psychology, Applied Psychology, Reliability (statistics), Pathology and Forensic Medicine, Reliability engineering

Description:
Methods. Three raters were trained in CBCA. Subsequently, they analysed transcripts of 102 statements referring to a simulated theft of money. Some of the statements were based on experience and some were confabulated. The raters used 4-point scales to judge the degree to which each of 18 of the 19 CBCA criteria was fulfilled in each statement. Results. The analysis of rater judgment distributions revealed that, because the judgments of individual raters varied only slightly across transcripts, the weighted kappa coefficient, the product-moment correlation, and the intra-class correlation were inadequate indices of reliability. The Finn coefficient and percentage agreement, which were calculated as indices independent of rater judgment distributions, were sufficiently high for 17 of the 18 assessed criteria. CBCA differentiated significantly between truthful and fabricated accounts. Conclusions. The inter-rater reliability of CBCA achieved in the present study was satisfactory both in absolute terms and in comparison with other empirical findings. This suggests that CBCA can be utilized in the mock-crime paradigm with a sufficient degree of reliability.
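The two distribution-independent indices named in the Results can be illustrated with a short sketch. This is a minimal illustration, not the authors' analysis code: the function names and the items-by-raters input layout are assumptions. The Finn coefficient is computed in its common form, one minus the ratio of the observed within-item variance to the variance expected under uniform use of a k-point scale, (k² − 1)/12, and percentage agreement is taken as the proportion of rater pairs giving identical judgments per transcript.

```python
import numpy as np

def finn_coefficient(ratings, scale_points):
    """Finn's r: 1 - (mean within-item variance) / (variance expected
    under uniform random use of a k-point scale, (k^2 - 1) / 12).
    `ratings` is an (items x raters) matrix of scale judgments."""
    ratings = np.asarray(ratings, dtype=float)
    ms_within = ratings.var(axis=1, ddof=1).mean()
    expected_var = (scale_points ** 2 - 1) / 12
    return 1.0 - ms_within / expected_var

def pairwise_percent_agreement(ratings):
    """Proportion of rater pairs giving identical judgments,
    averaged over all items."""
    ratings = np.asarray(ratings)
    n_items, n_raters = ratings.shape
    agree, pairs = 0, 0
    for i in range(n_raters):
        for j in range(i + 1, n_raters):
            agree += np.sum(ratings[:, i] == ratings[:, j])
            pairs += n_items
    return agree / pairs

# Toy example on a 4-point scale (three raters, two items):
identical = [[1, 1, 1], [2, 2, 2]]
print(finn_coefficient(identical, 4))          # perfect agreement
print(pairwise_percent_agreement(identical))
```

Because the Finn coefficient compares observed disagreement against a fixed chance benchmark rather than against the observed marginal distributions, it stays interpretable when raters use only a narrow part of the scale, which is exactly the situation the Results describe.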
| year | journal | country | edition | language |
|---|---|---|---|---|
| 2005-09-01 | Legal and Criminological Psychology | | | |