The Elephant in the Machine: Proposing a New Metric of Data Reliability and its Application to a Medical Case to Assess Classification Reliability
Authors: Domenico Albano, Andrea Campagner, Alberto Aliprandi, Carmelo Messina, Luca Maria Sconfienza, Marcello Zappia, Francesco Di Pietto, Angelo Gambino, Alberto Bruno, Vito Chianca, Federico Cabitza, Davide Orlandi, Luigi Pedone, Salvatore Gitto, Angelo Corazza
Subject: machine learning, artificial intelligence, ground truth, reliability, inter-rater agreement, rater competence, magnetic resonance imaging, knee, MRNet

Description:
In this paper, we present and discuss a novel reliability metric to quantify the extent to which a ground truth, generated in multi-rater settings, is a reliable basis for the training and validation of machine learning predictive models. To define this metric, three dimensions are taken into account: agreement (that is, how much a group of raters mutually agree on a single case), confidence (that is, how certain each rater is of the rating expressed), and competence (that is, how accurate a rater is).
year | journal | country | edition | language |
---|---|---|---|---|
2020-06-10 | Applied Sciences | | | |
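
The description above only characterizes the metric at a high level. As a purely illustrative aid, the sketch below shows one way a confidence- and competence-weighted pairwise agreement score could be computed over a multi-rater labelling; the function name, the multiplicative weighting scheme, and the random example data are assumptions made for this snippet and do not reproduce the exact formula proposed in the paper.

```python
import numpy as np


def weighted_reliability(ratings, confidence, competence):
    """Toy reliability score for a multi-rater ground truth (illustrative only).

    ratings:    (n_cases, n_raters) array of categorical labels
    confidence: (n_cases, n_raters) self-reported confidence values in [0, 1]
    competence: (n_raters,) per-rater accuracy estimates in [0, 1]

    For each case, every pair of raters contributes 1 if they agree and 0
    otherwise; each contribution is weighted by the product of the two
    raters' confidence on that case and their competence. The score is the
    weighted mean over all pairs and cases.
    """
    n_cases, n_raters = ratings.shape
    num, den = 0.0, 0.0
    for c in range(n_cases):
        for i in range(n_raters):
            for j in range(i + 1, n_raters):
                w = (confidence[c, i] * confidence[c, j]
                     * competence[i] * competence[j])
                num += w * float(ratings[c, i] == ratings[c, j])
                den += w
    return num / den if den > 0 else float("nan")


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    ratings = rng.integers(0, 2, size=(100, 5))        # binary labels, 5 raters
    confidence = rng.uniform(0.5, 1.0, size=(100, 5))  # self-reported confidence
    competence = rng.uniform(0.6, 0.95, size=5)        # assumed rater accuracies
    score = weighted_reliability(ratings, confidence, competence)
    print(f"weighted reliability: {score:.3f}")
```

In this toy weighting, agreements between confident and competent raters count more than agreements between uncertain or unreliable ones, which mirrors the intuition behind the paper's three dimensions without claiming to implement its actual metric.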