Improving participation and interrater agreement in scoring Ambulatory Pediatric Association abstracts. How well have we succeeded?

Arch Pediatr Adolesc Med. 1996 Apr;150(4):380-3. doi: 10.1001/archpedi.1996.02170290046007.

Abstract

Objective: To determine whether increasing the number and types of raters affected interrater agreement in scoring abstracts submitted to the Ambulatory Pediatric Association.

Methods: In 1990, all abstracts were rated by each of the 11 members of the board of directors of the Ambulatory Pediatric Association. In 1995, abstracts were reviewed by four or five raters drawn from a pool of 20 potential reviewers: eight members of the board of directors, two chairpersons of special interest groups, and 10 regional chairpersons. Submissions were divided into the following three categories for review: emergency medicine, behavioral pediatrics, and general pediatrics. Weighted percentage agreement and weighted kappa scores were computed for the 1990 and 1995 abstract scores.
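The abstract does not state which weighting scheme was applied; for reference, a weighted kappa for ordinal scores (the usual choice for graded abstract ratings) takes the general form, with the notation below being standard rather than taken from the study:

\kappa_w = 1 - \frac{\sum_{i,j} w_{ij}\, o_{ij}}{\sum_{i,j} w_{ij}\, e_{ij}}

where o_{ij} is the observed proportion of abstracts placed in score category i by one rater and category j by another, e_{ij} is the proportion expected by chance (the product of the raters' marginal proportions), and w_{ij} is a disagreement weight (for example, linear |i - j| or quadratic (i - j)^2) that penalizes large score discrepancies more heavily than near-misses.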

Results: Between 1990 and 1995, the number of abstracts submitted to the Ambulatory Pediatric Association increased from 246 to 407, the number of reviewers increased from 11 to 20, the weighted percentage agreement between raters remained approximately 79%, and weighted kappa scores remained less than 0.25. Agreement was not significantly better for the emergency medicine and behavioral pediatrics abstracts than for the general pediatrics abstracts, nor was it better for raters who reviewed fewer abstracts than for those who reviewed many.

Conclusions: The number and expertise of those rating abstracts increased from 1990 to 1995. However, interrater agreement did not change and remained low. Further efforts are needed to improve interrater agreement.

MeSH terms

  • Abstracting and Indexing / standards*
  • Ambulatory Care*
  • Confidence Intervals
  • Humans
  • Observer Variation
  • Pediatrics*
  • Peer Review, Research / standards*
  • Random Allocation
  • Societies, Medical*
  • United States