About: Inter-rater reliability

An Entity of Type: dbo:University, within Data Space: dbpedia.demo.openlinksw.com, associated with source document(s):
http://dbpedia.demo.openlinksw.com/describe/?url=http%3A%2F%2Fdbpedia.org%2Fresource%2FInter-rater_reliability

In statistics, inter-rater reliability (also called by various similar names, such as inter-rater agreement, inter-rater concordance, inter-observer reliability, inter-coder reliability, and so on) is the degree of agreement among independent observers who rate, code, or assess the same phenomenon. Assessment tools that rely on ratings must exhibit good inter-rater reliability; otherwise they are not valid tests.
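The "degree of agreement" described above can be quantified most simply as the joint probability of agreement: the fraction of items on which the raters give the same rating. A minimal sketch of that idea (the function name and rating data are illustrative, not from the source):

```python
def percent_agreement(rater_a, rater_b):
    """Joint probability of agreement: the fraction of items on which
    two raters assigned the same rating."""
    if len(rater_a) != len(rater_b):
        raise ValueError("both raters must score the same items")
    matches = sum(a == b for a, b in zip(rater_a, rater_b))
    return matches / len(rater_a)

# Two raters scoring five essays on a pass/fail scale (made-up data);
# they agree on 4 of 5 items:
print(percent_agreement(["pass", "pass", "fail", "pass", "fail"],
                        ["pass", "fail", "fail", "pass", "fail"]))  # 0.8
```

Raw agreement like this ignores agreement that would occur by chance, which is why chance-corrected statistics are usually preferred in practice.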

Attributes / Values
rdf:type
rdfs:label
  • Inter-rater reliability (en)
  • Interrater-Reliabilität (de)
  • Adostasun neurri (eu)
  • Concordance inter-juges (fr)
  • Concordância entre avaliadores (pt)
  • 評分者間信度 (zh)
rdfs:comment
  • In empirical social research (psychology, sociology, epidemiology, etc.), inter-rater reliability or rater agreement denotes the extent of agreement (concordance) between the assessment results of different observers ("raters"). It indicates to what degree the results are independent of the observer, which is why, strictly speaking, it is a measure of objectivity. Reliability is a measure of the quality of the method used to measure a given variable. A distinction can be made between inter-rater and intra-rater reliability. (de)
  • Inter-rater agreement (also called inter-rater or inter-observer reliability) is a statistical measure of the homogeneity of the judgments made by several evaluators faced with the same situation, that is, a quantitative measure of their degree of consensus. (fr)
  • In statistics, inter-rater reliability (English: inter-rater reliability; inter-rater agreement; inter-rater concordance; interobserver reliability) refers to the degree to which raters agree with one another about something. Its score indicates how similar the raters' views are and how much consensus exists among them. (zh)
  • In statistics, inter-rater reliability (also called by various similar names, such as inter-rater agreement, inter-rater concordance, inter-observer reliability, inter-coder reliability, and so on) is the degree of agreement among independent observers who rate, code, or assess the same phenomenon. Assessment tools that rely on ratings must exhibit good inter-rater reliability, otherwise they are not valid tests. (en)
  • In statistics, agreement measures are used to quantify the agreement between different raters or rating methods over the ratings given to a set of items or objects. They are common in studies of whether the measurements that different judges or rating methods give of a human trait are precise. They can also be used to examine the precision (though not the validity) of the measurements that can be made of a phenomenon. For example, if two instruments recording wind direction agree poorly, at least one of them must be wrong; but even if agreement is high, it cannot be concluded that the instruments are valid, i.e., that the direction is being measured correctly. (eu)
  • In statistics, inter-rater agreement (also called by several similar names, such as inter-rater reliability, inter-observer reliability, inter-coder reliability, and so on) is the degree of agreement among independent observers who rate, code, or assess the same phenomenon. Assessment tools that rely on ratings must exhibit good inter-rater reliability; otherwise they are not valid tests. (pt)
foaf:depiction
  • http://commons.wikimedia.org/wiki/Special:FilePath/Bland-Altman-Plot.png
  • http://commons.wikimedia.org/wiki/Special:FilePath/Comparison_of_rubrics_for_evaluating_inter-rater_kappa_(and_intra-class_correlation)_coefficients.png
dcterms:subject
Wikipage page ID
Wikipage revision ID
Link from a Wikipage to another Wikipage
Link from a Wikipage to an external page
sameAs
dbp:wikiPageUsesTemplate
thumbnail
has abstract
  • In statistics, agreement measures are used to quantify the agreement between different raters or rating methods over the ratings given to a set of items or objects. They are common in studies of whether the measurements that different judges or rating methods give of a human trait are precise. They can also be used to examine the precision (though not the validity) of the measurements that can be made of a phenomenon. For example, if two instruments recording wind direction agree poorly, at least one of them must be wrong; but even if agreement is high, it cannot be concluded that the instruments are valid, i.e., that the direction is being measured correctly. Different agreement measures must be used depending on whether there are two raters or more than two, and on the scale used for the rating (nominal, ordinal, or interval). Kendall's tau, for example, can only be used for two raters. (eu)
  • In statistics, inter-rater reliability (also called by various similar names, such as inter-rater agreement, inter-rater concordance, inter-observer reliability, inter-coder reliability, and so on) is the degree of agreement among independent observers who rate, code, or assess the same phenomenon. Assessment tools that rely on ratings must exhibit good inter-rater reliability; otherwise they are not valid tests. There are a number of statistics that can be used to determine inter-rater reliability. Different statistics are appropriate for different types of measurement. Some options are the joint probability of agreement, such as Cohen's kappa, Scott's pi, and Fleiss' kappa; or inter-rater correlation, the concordance correlation coefficient, intra-class correlation, and Krippendorff's alpha. (en)
  • In statistics, inter-rater agreement (also called by several similar names, such as inter-rater reliability, inter-observer reliability, inter-coder reliability, and so on) is the degree of agreement among independent observers who rate, code, or assess the same phenomenon. Assessment tools that rely on ratings must exhibit good inter-rater reliability; otherwise they are not valid tests. There are a number of statistics that can be used to determine inter-rater reliability. Different statistics are appropriate for different types of measurement. Some options are the joint probability of agreement, such as Cohen's kappa, Scott's pi, and Fleiss' kappa; or inter-rater correlation, the concordance correlation coefficient, intra-class correlation, and Krippendorff's alpha. (pt)
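Among the chance-corrected statistics named in the abstracts, Cohen's kappa is the simplest: it subtracts the agreement expected by chance (from each rater's marginal label frequencies) from the observed agreement. A minimal sketch of the two-rater, nominal-scale case (the function name and rating data are illustrative, not from the source):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: chance-corrected agreement between two raters
    who each assign a nominal label to the same set of items."""
    if len(rater_a) != len(rater_b):
        raise ValueError("both raters must score the same items")
    n = len(rater_a)
    # Observed agreement: fraction of items both raters labeled identically.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected chance agreement, from each rater's marginal label frequencies.
    freq_a = Counter(rater_a)
    freq_b = Counter(rater_b)
    p_e = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Two raters label four items (made-up data): observed agreement is 0.75,
# chance agreement is 0.5, so kappa = (0.75 - 0.5) / (1 - 0.5) = 0.5.
print(cohens_kappa(["yes", "yes", "no", "no"],
                   ["yes", "yes", "no", "yes"]))  # 0.5
```

Scott's pi and Fleiss' kappa differ mainly in how the expected chance agreement is estimated (pooled marginals, and more than two raters, respectively), while Krippendorff's alpha generalizes further to missing data and non-nominal scales.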
gold:hypernym
prov:wasDerivedFrom
page length (characters) of wiki page
foaf:isPrimaryTopicOf
is differentFrom of
is rdfs:seeAlso of
is Link from a Wikipage to another Wikipage of
Data on this page belongs to its respective rights holders.
Virtuoso Faceted Browser Copyright © 2009-2024 OpenLink Software