Chem. Engr. Education, 31(1), 32-33 (Winter 1997).
IT TAKES ONE TO KNOW ONE
Richard M. Felder
North Carolina State University
Something (maybe the only thing)
that most university administrators and educational reformers
agree on is that the teaching evaluation methods used on their
campuses leave a lot to be desired. The administrators often
use inadequacies in the usual procedure (tabulating course-end
student ratings) to justify the low weighting generally given
to teaching in tenure and promotion decisions. The reformers
(who include many administrators) recognize that their efforts
will probably be futile unless they can provide hard evidence
that alternative instructional methods really work, which will
take better measures of teaching effectiveness than the ones commonly used.
In previous columns, we addressed
the validity of student ratings and methods of increasing their
usefulness1,2 and discussed benefits and potential
pitfalls of teaching portfolios.3 This column concerns
peer review, a teaching assessment technique in which faculty
members observe and evaluate classroom instruction. The evaluation
may go directly to the instructors to help them improve their
teaching, or it may go into a teaching portfolio, a promotion/tenure
dossier, or an award nomination package.
Peer reviews can contribute significantly
to the evaluation of teaching if they are well designed and conducted,
but as a rule they are neither. In most cases, faculty members
who have no training and little idea of what to look for--and who
might or might not be good teachers themselves--sit in on a lecture
and make notes on whatever happens to catch their attention.
The validity of this technique is questionable, to say the least,
as is its fairness to the observed instructor.
There are better alternatives. Following
are some critical questions that should be raised whenever peer
review is contemplated and some suggested answers.
- Is the purpose of the peer
review formative (to improve
teaching) or summative (to provide data to be used in personnel
decisions)? The recommended procedures for formative and summative
evaluation are much different; attempting to do both with a single
review is usually a mistake.
- How should formative peer
reviews be carried out?
Reviews intended to improve teaching may be relatively informal.
Faculty members might participate in a semester-long program
of observation and feedback, or they might simply invite teaching
consultants or colleagues with reputations as outstanding teachers
to observe one or two classes and offer comments and suggestions.
In either case, the feedback goes only to the observed instructor.
- How should summative reviews
be carried out? A much
higher level of structure is needed to make summative reviews
fair, reliable (repeated assessments converge on the same ratings),
and valid (what is rated as good teaching really is good teaching,
and similarly for inadequate teaching). The remainder of this
column concerns this type of review.
- Who should do the reviewing?
Reviewers should be good teachers (see column title) who recognize
that different styles of teaching can be equally effective. They
should have received training from teaching center staff or education
faculty members on what to look for in a classroom. Training
dramatically increases the likelihood that evaluations from different
reviewers will be consistent with one another (reliability) and
with accepted standards for good teaching (validity).
- How should the review be performed?
The following process, adapted from procedures used at several
different institutions, has been found to yield good results.
- Two or more faculty members are selected from a pool of individuals
who have received peer review training.
- The reviewers conduct
at least two class visits during a semester, preceding each visit
with a brief meeting at which the instructor provides pertinent
information about the class to be observed and (optionally) copies
of relevant course materials such as syllabi, instructional objectives,
assignments, and tests. The reviewers observe for the entire
class period and independently complete rating checklists.
Soon afterwards, they have a post-visit conference with the instructor
to discuss their observations and invite responses.
- After all
visits and conferences have been completed, the reviewers compare
and reconcile their checklists to the greatest extent possible.
They then write a summary report which is placed in the instructor's
teaching portfolio or personnel file.
- What should the rating checklist
contain? The checklist
is a collection of statements about the observed classroom instruction.
The reviewers indicate their levels of agreement or disagreement
with each statement, adding explanatory comments where appropriate.
Most such instruments include statements like these.4
- Organization. The
instructor (a) begins class on time, (b) presents goals
or objectives for the period, (c) reviews prior material, (d)
presents material in a logical sequence, (e) periodically relates
new material to previous learning and experience, (f) summarizes
main points at the end of the period, (g) ends class on time.
- Knowledge. The
instructor (a) demonstrates a thorough and up-to-date knowledge
of the subject matter, (b) answers questions clearly and accurately.
- Presentation. The
instructor (a) speaks clearly, (b) holds the students' attention
throughout the period, (c) highlights important points, (d) presents
appropriate examples, (e) encourages questions, (f) seeks active
student involvement beyond simple questioning, (g) attains
active student involvement, (h) explains assignments clearly.
- Rapport. The
instructor (a) listens carefully to student comments, questions,
and answers and responds constructively, (b) checks periodically
for student understanding, (c) treats all students in a courteous
and equitable manner.
Many other statements could be included,
some of which might be particularly applicable to laboratory or
clinic settings. Weimer, Parrett, and Kerns5 provide
a comprehensive list of teacher behaviors that can be used to
develop a customized peer review checklist. Faculty members in
a department might collectively select the behaviors to be included
on the instrument. The attendant discussion would promote understanding
of what constitutes good teaching and would thereby promote good teaching.
This peer review process requires
more effort than the usual unstructured procedure, but the questionable
validity and potential unfairness of the latter approach are serious
concerns. If peer review is to be done at all, making the effort
to do it right is in the best interest of the faculty, the department,
and the university.
- Felder, R. M., "What
Do They Know, Anyway?" Chemical Engineering Education,
26(3), 134 (1992).
- Felder, R. M., "What
Do They Know, Anyway? 2. Making Evaluations Effective," Chemical
Engineering Education, 27(1), 28 (1993).
- Felder, R. M., "If
You've Got It, Flaunt It: Uses and Abuses of Teaching Portfolios,"
Chemical Engineering Education, 30(3), 188 (1996).
- Peer Observation of
Classroom Teaching, Center for Teaching & Learning, Chapel Hill, North Carolina,
CTL 15 (1994).
- Weimer, M., J. L. Parrett,
and M. Kerns, How Am I Teaching? Forms and Activities for
Acquiring Instructional Input, Magna Publications, Madison,
Wisconsin, 1988. This reference provides a variety of useful
resources for assessment of teaching, including forms for student-,
peer-, and self-ratings of classroom instruction and course materials.