Chemical Engineering Education, 32(2), 126-127 (Spring 1998).


ABET CRITERIA 2000: AN EXERCISE IN
ENGINEERING PROBLEM SOLVING

Richard M. Felder

In recent decades, the biggest accreditation hurdle for most of us has been persuading ABET that we were really teaching all the engineering design we claimed to be teaching. Starting in 2001, when "Engineering Criteria 2000" becomes the accreditation standard for all U.S. engineering programs, the hurdle will be a lot higher. Under the new system, for example, we will have to demonstrate that our graduates possess the skills to function on multidisciplinary teams, communicate effectively, and engage in lifelong learning, and that they understand contemporary issues, professional and ethical responsibility, and the impact of engineering solutions in a global/societal context. (Details can be found on the ABET Web site at www.abet.org.) No one has defined exactly what all that really means, but it seems clear that producing students with those characteristics will require some major changes in what we teach and how we teach it.

What makes Criteria 2000 particularly challenging--and either exciting or threatening, depending on your point of view--is its requirement of outcomes assessment. In the past, we could gain full accreditation simply by showing that we were teaching the required amount of mathematics, chemistry, design, etc. We will still have to do that when the new system is in force, but now we will also have to demonstrate how well students are learning the prescribed content and skills. Moreover, we will have to satisfy our ABET visitors that we have in place a process to modify our curricula if any required learning outcomes fail to meet the new criteria. In other words, engineering curricula are now like open-loop process systems, but starting in 2001 they will have to function as closed-loop feedback-controlled systems.
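
To make the analogy concrete, here is a minimal sketch of the two modes of operation, written in Python. Everything in it is a hypothetical placeholder invented for illustration: the function names, the toy assessment model, and the numerical target are assumptions, not features of any real curriculum or of ABET's process.

```python
# A minimal sketch of open-loop vs. closed-loop curriculum operation.
# All names and numbers are hypothetical placeholders for illustration.

TARGET = 0.80      # set point (SP): fraction of students meeting a criterion
TOLERANCE = 0.05   # acceptable deviation from the set point

def measure_outcome(effort):
    """Toy assessment model: outcomes improve with instructional effort."""
    return min(1.0, 0.55 + 0.10 * effort)

def run_open_loop(effort):
    """Pre-2001 mode: deliver the required content and stop."""
    measure_outcome(effort)  # a measurement may exist, but nothing uses it
    # No comparison to a target, no correction -- the loop is never closed.

def run_closed_loop(effort):
    """Criteria 2000 mode: measure the outcome and feed the error back."""
    while True:
        mv = measure_outcome(effort)  # measured variable (MV)
        error = TARGET - mv           # feedback signal, SP - MV
        if abs(error) <= TOLERANCE:
            return effort             # the outcome now meets the criterion
        effort += error               # adjust the control variable; repeat

print(run_closed_loop(effort=1.0))   # converges on an adequate effort level
```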

The difference between these two modes of operation is as profound in education as it is in manufacturing, but the difficulties of designing and implementing an optimal control scheme are far greater in an educational context. Consider the contrasts in Table 1.

Table 1. Feedback control in manufacturing and educational systems.

Measured variables (MV)
  • Manufacturing process: yield, purity, hardness, production rate, number of defects, rate of return (easy to assess)
  • Engineering curriculum: content knowledge (easy to assess); skill levels (difficult to assess)

Assessment techniques
  • Manufacturing process: process variable measurement and calculation (objective)
  • Engineering curriculum: exams (objective?); performance assessment (subjective)

Set point (SP) (target)
  • Manufacturing process: numerical values (objective)
  • Engineering curriculum: exam scores (objective); performance ratings (subjective)

Feedback signal
  • Manufacturing process: |MV - SP| (clear)
  • Engineering curriculum: |MV - SP| (fuzzy)

Control variables
  • Manufacturing process: temperature, pressure, feed rate, PID tuning parameters (all clear)
  • Engineering curriculum: course content (clear); curriculum design (fuzzy); instructional methods (very fuzzy)

Required control variable adjustments
  • Manufacturing process: qualitatively clear; quantitatively determinable by measurement or simulation; easy to implement
  • Engineering curriculum: qualitatively fuzzy; quantitatively difficult to predict or measure; hard to implement (for both technical and human reasons)

Benefits
  • Manufacturing process: easy to demonstrate
  • Engineering curriculum: difficult to demonstrate

This table is not intended to suggest that control of manufacturing systems is easy, but rather that it is much easier than control of educational systems. Deciding what you want a manufacturing process control system to accomplish, designing and implementing the system, and determining how well it works once it is in place are all relatively straightforward exercises. In an educational system, little is straightforward. Desired outcomes tend to be either vague or controversial; the effects of system changes on learning outcomes are difficult to assess unambiguously (there are always several possible causes for any observed effect); and both the costs of the changes and the benefits of the outcomes are endlessly arguable. Furthermore, few industrialists would argue against attempting to improve product quality or rate of return on investment, but any proposed change in curriculum structure or instructional methods faces almost certain opposition from some faculty members and administrators.
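
To see why the manufacturing rows of Table 1 are the tractable ones, consider a textbook discrete-time PID update, sketched below in Python. The gains and process values are illustrative assumptions, not data from any real process; the point is only that once the error |MV - SP| is a number, the required control variable adjustment follows mechanically from three tuning constants.

```python
# A textbook discrete-time PID update. Gains and values are illustrative.

Kp, Ki, Kd = 2.0, 0.5, 0.1   # proportional, integral, derivative gains
dt = 1.0                     # sampling interval, e.g. minutes

def pid_step(setpoint, measured, state):
    """One controller update; `state` carries the integral and last error."""
    error = setpoint - measured
    state["integral"] += error * dt
    derivative = (error - state["last_error"]) / dt
    state["last_error"] = error
    # The control variable adjustment (e.g., a change in valve position):
    return Kp * error + Ki * state["integral"] + Kd * derivative

state = {"integral": 0.0, "last_error": 0.0}
print(pid_step(setpoint=95.0, measured=92.3, state=state))  # product purity, %
```

No comparable formula exists for the curriculum rows: there are no agreed-upon tuning constants relating a change in instructional method to a change in, say, students' communication skills.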

As engineering departments begin to confront these difficulties, they will seek answers to several questions:

  1. What data must be collected to assess the required skills? Results of standardized tests? Videotaped oral presentations? Multi-year student portfolios? Must assessment data be collected for all students, or only a representative sample? If the latter, how big should the sample be, and how should it be chosen? (One standard statistical starting point is sketched after this list.)
  2. Who should evaluate the student products in light of the accreditation criteria? The students' course instructors? One or more additional faculty members? Should training be provided to evaluators to ensure interrater reliability? Who should provide it?
  3. What percentage of students in the sample population must satisfy each criterion? What percentage of the criteria must be satisfied for a department to qualify for full accreditation?
  4. Will it be enough for a department to show that it is doing something--anything--to take assessment results into account in curriculum and instructional planning, or will the effectiveness of corrective measures be evaluated as well? What criteria will be used to evaluate them?
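
One standard statistical starting point for the sampling part of question 1 is the textbook sample-size formula for estimating a population proportion. The confidence level and margin of error below are assumptions chosen for illustration, not ABET-specified values.

```python
# Illustrative sample-size calculation for question 1, using the standard
# formula for estimating a population proportion: n = z^2 * p*(1-p) / e^2.
# The confidence level and margin of error are assumed for illustration.
import math

z = 1.96   # z-score for a 95% confidence level
p = 0.50   # assumed proportion meeting the criterion (0.5 = worst case)
e = 0.05   # desired margin of error (+/- 5 percentage points)

n = z**2 * p * (1 - p) / e**2
print(math.ceil(n))   # 385 students (before any finite-population correction)
```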

All of us will be seeking answers to these questions in the next few years, and answers will certainly be found. Producing graduates with the specified characteristics and proving that we have done it may be an extremely tough optimal control problem, but engineers are used to solving tough problems and we'll eventually solve this one too.

From now until 2001, departments applying for accreditation may choose whether to be evaluated under the old or the new criteria; thereafter the new criteria will be used exclusively. Some departments acknowledge that the change is inevitable and are wisely starting to modify their instructional programs and assess learning outcomes in anticipation. Others are choosing to ignore the whole thing, perhaps hoping that it will go away. It probably won't. In recent years, industry and funding agencies like the NSF have increasingly called for changes along the lines of the new criteria, and departments that discount the new requirements may be in for a rude surprise when their ratings come in.

Or they may not be. Perhaps the most important question about the new system is:

    5. How serious will ABET be about Engineering Criteria 2000?

Several departments have already been evaluated under Criteria 2000 and have received full accreditation, but ABET may not be strictly enforcing the new criteria in this pilot stage. For example, one of these departments argued that its faculty's involvement in multidisciplinary research was sufficient to demonstrate that its students were equipped to work in multidisciplinary teams, and the ABET visitor apparently bought this argument. Granted, it may be reasonable for ABET to go easy on volunteer departments now in exchange for the opportunity to test-drive the new system. If such arguments are accepted after 2001, however, there is little chance that Criteria 2000 will be taken seriously enough to accomplish its intended reform of undergraduate engineering education. On the other hand, if ABET puts teeth into its requirements and one or two prominent departments that do not make serious efforts to meet the new criteria are denied 6-year accreditation, reform will almost surely take place. All of us will be watching attentively for signs of how the drama will play out. It promises to be an interesting three years.

