Determining Inter-Rater Reliability with the Intraclass Correlation Coefficient in SPSS

This video demonstrates how to determine inter-rater reliability with the intraclass correlation coefficient (ICC) in SPSS. Interpretation of the ICC as an estimate of inter-rater reliability is reviewed.
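
For reference, the analysis shown in the video can also be run from SPSS syntax. Below is a minimal sketch assuming three rater variables with the hypothetical names Instructor1, Instructor2, and Instructor3; it requests a two-way mixed-effects ICC with absolute agreement and a 95% confidence interval (the video's exact model settings may differ):

* Two-way mixed-effects ICC with absolute agreement and a 95% CI.
RELIABILITY
  /VARIABLES=Instructor1 Instructor2 Instructor3
  /SCALE('ICC') ALL
  /MODEL=ALPHA
  /ICC=MODEL(MIXED) TYPE(ABSOLUTE) CIN=95 TESTVAL=0.

In the resulting Intraclass Correlation Coefficient table, the Single Measures row estimates the reliability of one rater working alone, while the Average Measures row estimates the reliability of the mean of all raters.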
Comments

My thesis is due in 2 hours and you just saved me so much stress, kind sir.

SierraKyliuk

I have been checking out a couple of videos, but this is by far the most explanatory one. Thank you so much.

amelamel

As an inexperienced instructor, I am always interested in how "strict" or "easy" I am with my grading, so this video on seeing how similarly the three teachers graded was particularly interesting to me. Thanks, Dr. Grande.

bradleyfairchild

Thank you so much. I learned the things that, unfortunately, I couldn't learn from my statistics teacher. You saved my life while I was writing my dissertation.

halealkan

This was extremely helpful. And you used almost the same data I had, so it really helped!

kemowens

Thank you SO much, Dr. Grande! This is more than helpful. Bless you!

normallgirl

Thanks for this video! What is the correct APA way to report this analysis? Thanks!

bmcvinke

Dr. Grande, I wonder whether it is possible to use the Intraclass Correlation Coefficient (ICC) with a two-way mixed model to calculate inter-rater reliability when the raters have rated only a subset of the total subjects. For example, Instructor 1 rates all subjects (n = 30), while Instructor 2 rates the first half (n = 15) and Instructor 3 rates the remaining half (n = 15). Thank you so much for your help!

Dalenatton

I'm so grateful for this video!!! Thank you so much; you are one of the reasons I might pass the defense (if I pass T.T). Thank youuuu!

ryuguji

Thanks for your video. I have a question:
What does this error indicate? "Kappa statistic cannot be computed. It requires a two-way table in which the variables are of the same type."
Even though I used a two-way table in which the variables are of the same type (both nominal), I still get this error from SPSS.
What should I do about that?
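
For anyone who hits the same error: SPSS computes Cohen's kappa through CROSSTABS, and the statistic requires a square table, meaning both variables must contain exactly the same set of category codes. If one rater never used a category that the other rater did, the table is not square and kappa cannot be computed, even though both variables are nominal. A minimal sketch, assuming two hypothetical nominal rater variables named RaterA and RaterB:

* Crosstabulate the two ratings and request Cohen's kappa.
CROSSTABS
  /TABLES=RaterA BY RaterB
  /STATISTICS=KAPPA.

A commonly suggested workaround is to add a dummy case for each missing category and give it a case weight of zero (via WEIGHT BY) so that both variables end up with identical category sets.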

Radiology_Specific

If one observer rates at different times, do we again select the average measures or the single measures for the ICC score? ChatGPT tells me that if there is one observer, you should select single measures. Thank you.

ahtade

Thank you so much! Very useful information!❤

thanhnhanphanthi

Hi Todd! Thank you for a great video. I was wondering whether you can use the ICC to determine reliability both for 1) one rater who does the same measurement twice, and 2) two raters who each do the same measurement once. If so, which numbers in the output show the answers to my two questions? Many thanks!

sofiaskroder

Hello, I just want to ask: I have 4 raters, and they have rated using a rubric with scores from 1-4 (1 = beginning, 2 = developing, 3 = competent, and 4 = accomplished).

Do I need to put labels on their ratings?

nicolecatubigan

Thank you for the video. It is very helpful!!

Alinkas

So if we have multiple raters here and each rater is rating each participant on a scale of 1-5 for each item, how would that work?

ShafinaIVohra

Thank you. This was helpful, but I have a question. What should I do if the ICC coefficient is less than 0.7? Should I delete part of the data, or something else?

manarahmed

What do you do if all the instructors (raters) give the same scores? How do you calculate that? When I tried to do this in SPSS, it gave me nothing.

Chocotreacle

What if the sig. is more than 0.05 but the ICC is above 0.7? Does that still mean excellent agreement?

sitimunirah

Can you also do this with ordinal data? Let's say I have a rubric scored as follows: bad (1), good (2), very good (3), excellent (4).

connormacmillan