Calculating and Interpreting Cohen's Kappa in Excel

This video demonstrates how to estimate inter-rater reliability with Cohen's Kappa in Microsoft Excel. It also reviews how to calculate sensitivity and specificity.
Comments

Super useful, helped me calculate interrater reliability for program assessment of student literature reviews. Thanks!

DavidKirschnerPhD

Thanks Todd! This is great.

@Ben van Buren - Cohen's Kappa is used in many academic articles, but it did not originate there. It actually comes from Cohen's 1960 paper in Educational and Psychological Measurement. I'm using a more recent reference; the citation is:
Cohen, J., Cohen, P., West, S. G., & Aiken, L. S. (2013). Applied multiple regression/correlation analysis for the behavioral sciences. Routledge.

zacrogers

Thank you so much, this is exactly what I've been looking for.

sofiaquijada

I have a very strange Kappa result: I have checked for a certain behavior in footage of animals, which I assessed twice. For 28 animals, the two assessments agreed 27 times that the behavior was present and disagreed only once (the behavior was present in the first assessment but not in the second). My data is organized as the following 2×2 table (first assessment in columns, second in rows):

                  First: Absent   First: Present
Second: Absent          0               1
Second: Present         0              27

And that gives me a Kappa value of zero, which I find very strange because I disagreed in only 1 of 28 assessments. How come these results are treated as pure chance?
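The zero kappa described above is a known property of the statistic (sometimes called the kappa paradox): when almost all ratings fall in one category, the expected chance agreement Pe is already as high as the observed agreement Po, so there is no agreement beyond chance for kappa to credit. A minimal Python sketch (function and variable names are illustrative) reproduces it:

```python
# Cohen's kappa from a square agreement table.
def cohens_kappa(table):
    n = sum(sum(row) for row in table)
    # observed agreement: the diagonal cells
    po = sum(table[i][i] for i in range(len(table))) / n
    # expected chance agreement from the row/column marginals
    pe = sum(
        sum(table[i]) * sum(row[i] for row in table)
        for i in range(len(table))
    ) / n**2
    return (po - pe) / (1 - pe)

# 27 agreements on "present", 1 disagreement, 0 agreements on "absent"
table = [[0, 1],
         [0, 27]]
print(cohens_kappa(table))  # 0.0
```

Here Po = Pe = 27/28, so the numerator Po − Pe is zero no matter how high the raw percentage agreement looks.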

ProfGarcia

What if I have more than two values? E.g. not just 0 and 1, but 2, 3, 4, or even more?
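Kappa is not limited to two categories: with k possible values you cross-tabulate the two raters into a k×k table and apply the same (Po − Pe)/(1 − Pe) formula. A sketch in Python (the function name and example ratings are made up for illustration):

```python
# Cohen's kappa for any number of categories: cross-tabulate the two
# raters, then apply the usual (Po - Pe) / (1 - Pe) formula.
from collections import Counter

def kappa(rater1, rater2):
    n = len(rater1)
    pairs = Counter(zip(rater1, rater2))
    # observed agreement: pairs where both raters chose the same category
    po = sum(c for (a, b), c in pairs.items() if a == b) / n
    # expected agreement from each rater's marginal category counts
    m1, m2 = Counter(rater1), Counter(rater2)
    cats = set(m1) | set(m2)
    pe = sum(m1[c] * m2[c] for c in cats) / n**2
    return (po - pe) / (1 - pe)

r1 = [0, 1, 2, 3, 4, 0, 1, 2]
r2 = [0, 1, 2, 3, 3, 0, 2, 2]
print(round(kappa(r1, r2), 3))  # 0.68
```

In Excel the same generalization works with a k×k PivotTable (or COUNTIFS) and SUMPRODUCT over the marginals.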

ruudparklimy

Great concise presentation, very useful! Much appreciated!

Muuip

Thank you for your video. Could you explain how to handle missing ratings, where one rater recorded a score and the other did not?
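The video does not cover missing data, but one common convention is to drop every subject for whom either rater's score is missing before cross-tabulating, with the caveat that this shrinks n and can bias kappa if scores are not missing at random. A sketch of that filtering step (using None for a missing score, an assumption of this example):

```python
# Keep only subjects rated by BOTH raters (complete-case filtering),
# then compute kappa on the filtered lists as usual.
def complete_pairs(r1, r2):
    kept = [(a, b) for a, b in zip(r1, r2) if a is not None and b is not None]
    return [a for a, _ in kept], [b for _, b in kept]

r1 = [1, 0, None, 1, 0]
r2 = [1, None, 0, 1, 0]
a, b = complete_pairs(r1, r2)
print(a, b)  # [1, 1, 0] [1, 1, 0]
```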

charlesdrehmer

Hi! How do you calculate confidence intervals and standard error for the kappa values using Excel? Thank you for your very helpful video.
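One widely used large-sample approximation is SE ≈ sqrt(Po(1 − Po) / (n(1 − Pe)²)), with a 95% CI of kappa ± 1.96·SE; in Excel this is plain cell arithmetic with SQRT(). A Python sketch of that approximation (the inputs are made-up numbers, and exact variance formulas for kappa are more involved):

```python
# Approximate standard error and 95% CI for kappa, using the simple
# large-sample formula SE = sqrt(Po*(1 - Po) / (n*(1 - Pe)**2)).
import math

def kappa_ci(po, pe, n, z=1.96):
    k = (po - pe) / (1 - pe)
    se = math.sqrt(po * (1 - po) / (n * (1 - pe) ** 2))
    return k, (k - z * se, k + z * se)

k, (low, high) = kappa_ci(po=0.85, pe=0.50, n=100)
print(round(k, 3), round(low, 3), round(high, 3))  # 0.7 0.56 0.84
```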

franciscocallebernal

Thank you so much for the well-explained video; it really helped me a lot.

You are an excellent teacher.

LukyDi

I imagine this would be helpful in research on rating the acquisition of counseling skills in student counselors.

MarkVanderley

Probably need a little more discussion of sensitivity and specificity, although I expect it's also addressed in some other videos and in the book.
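For reference, from a 2×2 table against a gold standard, sensitivity = TP/(TP + FN) and specificity = TN/(TN + FP). A minimal sketch with made-up counts:

```python
# Sensitivity: fraction of true positives caught among all actual positives.
# Specificity: fraction of true negatives caught among all actual negatives.
def sens_spec(tp, fp, fn, tn):
    return tp / (tp + fn), tn / (tn + fp)

sens, spec = sens_spec(tp=40, fp=5, fn=10, tn=45)
print(sens, spec)  # 0.8 0.9
```

In Excel these are single-cell divisions once the four counts are tabulated, e.g. =B2/(B2+B4) for sensitivity.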

greggelliott

the very important person
Thank you!!

derejebirhanu

Such a wonderful and helpful video! Thanks a lot!

is

Very informative. However, what do you do when:
a) Pe is 1 (and then the denominator 1 − Pe is zero)? Assume the Kappa is 1?
b) you have a very low Kappa even though the raters agree on all but one of the ratings? Surely it should be higher? I have 2 raters and 20 subjects. If they agree on 19 and differ on 1, the Kappa is nearly 0.
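On point (a): when Pe = 1 the kappa formula divides by zero and is undefined; one common convention is to report kappa = 1 when Po is also 1 (perfect agreement) and leave it undefined otherwise. Point (b) is the same base-rate issue as the 27-of-28 animal example earlier in the thread: lopsided marginals drive Pe toward Po. A sketch of the edge-case guard (names and the None-for-undefined choice are this example's assumptions):

```python
# Guard the Pe == 1 edge case: the denominator 1 - Pe is zero, so
# kappa is undefined. Convention here: 1.0 if Po is also 1, else None.
def safe_kappa(po, pe, eps=1e-12):
    if abs(1 - pe) < eps:
        return 1.0 if abs(1 - po) < eps else None
    return (po - pe) / (1 - pe)

print(safe_kappa(1.0, 1.0))    # 1.0
print(safe_kappa(0.95, 1.0))   # None
print(safe_kappa(0.95, 0.95))  # 0.0 (the near-zero kappa from point b)
```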

sparkly

I wish my lecturer could explain like you do.

GamingBoxChannel

Thanks for this very good video. The Excel functions make my life so much easier :-)

ringwormts

Dr. Grande: what would you do if the Kappa agreement turns out to be too low? Should both coders recode the material in order to match and increase the value? Or what do you suggest? Thanks in advance.

MrJsanabria

Hi. What about calculating sample size for Kappa? Do you think it is problematic to set the null hypothesis at K = 0.0? I believe this would be the same as what others call setting K1 = 0.0, when many state that K1 should be the minimum K expected. Thanks

jorgemmmmteixeira

Dear Dr. Grande,
I have perhaps a simple question. The researcher and the RA are people who give their responses to the survey, right?! So this number can be very high. And I've got 5 criteria, like "satisfactory" etc. But I think I've understood how to do this: I should probably split the people giving responses into groups in order to come up with the coefficient.

is

Suppose you have five categories from low to high. Since it is not dichotomous as it is here, do you still use the same approach?
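Unweighted kappa still works for five categories, but for ordered categories a common extension is weighted kappa, which credits near-misses by penalising disagreements in proportion to their distance on the scale (linear weights below; quadratic weights are also common). A sketch with made-up ratings:

```python
# Linearly weighted kappa for k ordered categories coded 0..k-1:
# a disagreement of |i - j| steps costs |i - j| / (k - 1).
from collections import Counter

def weighted_kappa(r1, r2, k):
    n = len(r1)
    w = [[abs(i - j) / (k - 1) for j in range(k)] for i in range(k)]
    obs = Counter(zip(r1, r2))
    m1, m2 = Counter(r1), Counter(r2)
    # observed and chance-expected disagreement, weighted by distance
    do = sum(w[i][j] * obs[(i, j)] for i in range(k) for j in range(k)) / n
    de = sum(w[i][j] * m1[i] * m2[j] for i in range(k) for j in range(k)) / n**2
    return 1 - do / de

r1 = [0, 1, 2, 3, 4, 4]
r2 = [0, 1, 2, 4, 4, 3]
print(round(weighted_kappa(r1, r2, k=5), 3))  # 0.8
```

With weights of all-ones off the diagonal this reduces to ordinary Cohen's kappa, so the dichotomous approach in the video is the special case k = 2.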

Lyn-egid