SPSS Tutorial: Inter- and Intra-rater Reliability (Cohen's Kappa, ICC)

*Sorry for the poor resolution of the SPSS calculations shown in the video.

Interpretation reference: Portney LG & Watkins MP (2000). Foundations of Clinical Research: Applications to Practice. Prentice Hall, New Jersey. ISBN 0-8385-2695-0, pp. 560-567.

In this video I discuss the concepts and assumptions of two different reliability (agreement) statistics: Cohen's Kappa (for 2 raters using categorical data) and the intra-class correlation coefficient (ICC; for 2 or more raters using continuous data). I also show how to calculate the confidence interval limits for Kappa and the Standard Error of Measurement (SEM) for the ICC.
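For anyone who wants to reproduce the two calculations mentioned above outside SPSS, here is a minimal sketch in Python (scikit-learn, pandas, pingouin). The example ratings, the large-sample approximation used for the kappa standard error, and the choice of the ICC(2,1) model are illustrative assumptions, not a transcription of the exact SPSS steps shown in the video.

# Minimal sketch: Cohen's kappa with an approximate 95% CI, and an ICC with the SEM derived from it.
# Example data are invented for illustration.
import numpy as np
import pandas as pd
from sklearn.metrics import cohen_kappa_score
import pingouin as pg

# Cohen's kappa: 2 raters, categorical ratings
rater1 = ["yes", "no", "yes", "yes", "no", "yes", "no", "no", "yes", "yes"]
rater2 = ["yes", "no", "yes", "no", "no", "yes", "no", "yes", "yes", "yes"]
kappa = cohen_kappa_score(rater1, rater2)

# Approximate 95% CI from the large-sample standard error:
# SE = sqrt(po * (1 - po) / (n * (1 - pe)**2))
n = len(rater1)
po = np.mean(np.array(rater1) == np.array(rater2))        # observed agreement
labels = sorted(set(rater1) | set(rater2))
p1 = np.array([rater1.count(c) / n for c in labels])
p2 = np.array([rater2.count(c) / n for c in labels])
pe = float(np.sum(p1 * p2))                               # chance agreement
se = np.sqrt(po * (1 - po) / (n * (1 - pe) ** 2))
print(f"kappa = {kappa:.3f}, 95% CI = ({kappa - 1.96*se:.3f}, {kappa + 1.96*se:.3f})")

# ICC: 2 or more raters, continuous ratings, in long format
scores = pd.DataFrame({
    "subject": np.repeat(np.arange(1, 7), 3),
    "rater":   ["A", "B", "C"] * 6,
    "score":   [10, 11, 9, 14, 15, 14, 20, 19, 21, 8, 9, 8, 12, 13, 12, 17, 16, 18],
})
icc_table = pg.intraclass_corr(data=scores, targets="subject",
                               raters="rater", ratings="score")
icc = icc_table.loc[icc_table["Type"] == "ICC2", "ICC"].iloc[0]  # two-way random, single measure

# SEM = SD of the scores * sqrt(1 - ICC)
sem = scores["score"].std(ddof=1) * np.sqrt(1 - icc)
print(f"ICC(2,1) = {icc:.3f}, SEM = {sem:.2f}")

In SPSS itself, the corresponding output comes from the Crosstabs procedure (kappa) and the Reliability Analysis procedure (intraclass correlation coefficient), as demonstrated in the video.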

Comments

Thank you for this information. Much appreciated!

pheladimokoena

I have a few questions. For my study I have 3 raters.

1. For 3 raters rating continuous variables, which one should be used - Fleiss' Kappa (for more than 2 raters) or the ICC (for more than 2 raters)? One website says that the assumption with Cohen's kappa is that your raters are deliberately chosen and fixed. If the raters are fixed, but I have 3 raters with continuous data, should I use the ICC or Cohen's kappa?

2. Is it a must to use the ICC for 3 raters, or can Cohen's kappa be used in some situations?

aznurmmu

Thank you for the video.

According to Gisev N, et al. (2013), "Interrater agreement and interrater reliability: key concepts, approaches, and applications," Res Social Adm Pharm 9(3): 330-338, the ICC can be used for categorical, ordinal, or continuous data, which differs from what you suggest here. Can you elaborate, please?

Thanks

hamidD

Is "Inter-rater Reliability" a synonym for "Inter-coder" or "observer"-reliability??

Definitely a point of confusion for my Research Methods class and I!

stevenalcala

Can the ICC be used if the data are not normally distributed? If not, what test should be used in such cases?

prachimehta

How do you calculate the intraclass correlation coefficient for intra-rater reliability when the data are not normally distributed?

Hello-dghu

Can you give me the exact reference for the P&W 2009 interpretation of the ICC?

tomaszamora

Hi, I have sequence data in which I have categories with certain values rated by two raters. As a simplified example, assume that I have a sequence of colors; each color appears on the screen for a specific duration. The raters' task is to note the appearing color and its duration in list form. Here, I have 3 things to compare at the same time: the categories, the sequence, and the duration. Is there any way to compare the results of the two raters?

mmdsjo

I am assessing how participants' self-report of change in substance use (on a 5-point scale from low through no change to high) corresponds to their self-report of the same change measured at a different time point. Could I use Cohen's kappa? I guess it is still intra-rater reliability, but I am a little confused as to whether Cohen's kappa is appropriate when assessing agreement between two ratings by the same rater.

ishaandolli

Hi, I have 41 raters from a psychology class who rated 19 different images of men's torsos for muscularity using a Likert scale from 1 (low muscularity) to 5 (hypermuscularity). I thought the ICC was appropriate, but now I am not so sure, as the different models have confused me. Thanks.

August_Hall

#Ask what if I have 15 respondents and 3 trials with a categorical variable...how do I analyze that using kappa?

muhammadanshory

Hi, if there are 4 observers, amounting to 6 crosstabs (kappa scores), do I take the average of the 6 kappa scores? Or do I use the reliability procedure to obtain the average measure? Sorry if my question doesn't make sense; I have no statistics background.

phzar

Hello and thank you for the video,

Do you know how to calculate a single inter-rater reliability ICC value when the 2 raters have each measured 2 or more trials? So rater1trial1 + rater2trial1, rater1trial2 + rater2trial2. Do you average the ICC values of those trials, or is there another way?

chrisgamble

I'm doing content analysis, but I've already documented the items in Microsoft Word. I only have one rater and 500 samples with 7 categories of items. Can you suggest any way to calculate the reliability?

lilymks

Hi, I read a journal article that used Cohen's kappa for multiple raters. Do you have any tutorial for Cohen's kappa with multiple raters?

putrinurshahiraabdulrapar

Is it possible to calculate Kappa if you have 5 different categorical variables and two raters?

tanjica

The image quality of the video is too low.

mariociencia

Cannot see the numbers clearly. Not useful at all.

indrayadigunardi