Chem. Engr. Education, 31(1), 32-33 (Winter 1997).


Rebecca Brent
Richard M. Felder
North Carolina State University

Something (maybe the only thing) that most university administrators and educational reformers agree on is that the teaching evaluation methods used on their campuses leave a lot to be desired. The administrators often use inadequacies in the usual procedure (tabulating course-end student ratings) to justify the low weighting generally given to teaching in tenure and promotion decisions. The reformers (who include many administrators) recognize that their efforts will probably be futile unless they can provide hard evidence that alternative instructional methods really work, which will take better measures of teaching effectiveness than the ones commonly used.

In previous columns, we addressed the validity of student ratings and methods of increasing their usefulness [1,2] and discussed benefits and potential pitfalls of teaching portfolios [3]. This column concerns peer review, a teaching assessment technique in which faculty members observe and evaluate classroom instruction. The evaluation may go directly to the instructors to help them improve their teaching, or it may go into a teaching portfolio, a promotion/tenure dossier, or an award nomination package.

Peer reviews can contribute significantly to the evaluation of teaching if they are well designed and conducted, but as a rule they are neither. In most cases, faculty members who have no training and little idea of what to look for--and who might or might not be good teachers themselves--sit in on a lecture and make notes on whatever happens to catch their attention. The validity of this technique is questionable, to say the least, as is its fairness to the observed instructor.

There are better alternatives. Following are some critical questions that should be raised whenever peer review is contemplated and some suggested answers.

Many other statements could be included, some of which might be particularly applicable to laboratory or clinic settings. Weimer, Parrett, and Kerns [5] provide a comprehensive list of teacher behaviors that can be used to develop a customized peer review checklist. Faculty members in a department might collectively select the behaviors to be included on the instrument. The attendant discussion would promote understanding of what constitutes good teaching and would thereby promote good teaching.

This peer review process requires more effort than the usual unstructured procedure, but the questionable validity and potential unfairness of the latter approach are serious concerns. If peer review is to be done at all, making the effort to do it right is in the best interest of the faculty, the department, and the university.


  1. Felder, R. M., "What Do They Know Anyway," Chemical Engineering Education, 26(3), 134 (1992).
  2. Felder, R. M., "What Do They Know Anyway: 2. Making Evaluations Effective," Chemical Engineering Education, 27(1), 28 (1993).
  3. Felder, R. M., "If You've Got It, Flaunt It: Uses and Abuses of Teaching Portfolios," Chemical Engineering Education, 30(3), 188 (1996).
  4. Peer Observation of Classroom Teaching, Center for Teaching & Learning at Chapel Hill, North Carolina, CTL 15 (1994).
  5. Weimer, M., J. L. Parrett, and M. Kerns, How am I teaching? Forms and Activities for Acquiring Instructional Input, Magna Publications, Madison, Wisconsin, 1988. This reference provides a variety of useful resources for assessment of teaching, including forms for student-, peer-, and self-ratings of classroom instruction and course materials.
