Milestone Report for the project: Curriculum Design, Production & Delivery of MEA 200 as a Web-Course

Part 5

COURSE VENUE COMPARISONS

Updated on: Thursday, 19-Oct-2017 17:18:04 EDT

This last part of the report presents the final comparative results of student performance for MEA 200 across five different venues.

Directory

Experiences and Initial Comparisons
Return to Project Background, Goals and Objectives

Experiences and Final Comparisons


Cooperative Learning

In my first report I identified group activities and cooperative learning as major limitations for web-based courses, and the results from this study bear out my concern. Except for a few classes, group activities have been less than satisfactory. Some of the problems associated with this ineffectiveness are:

Convincing students that they gain an advantage by working as a group

I do not think the students in this class gave themselves enough opportunity to be convinced that group activities are an important enhancement for learning and understanding. In fact, the problem may be deeper than that -- in each of the summer terms or semesters up until the Fall of 2000, I provided a listserv for all my classes with the intent that they would use it to communicate with me and with each other (a subtle way to provide additional opportunities for group learning). I believe strongly that students must take responsibility for their own learning, so I left it up to each individual student to subscribe (I did not automatically subscribe the entire class). The results were very disappointing. Generally, less than 15% of each class subscribed and, while some students took maximum advantage of this forum for asking questions, on only two occasions did other students on the listserv attempt to answer the questions posted by their classmates -- they waited for me to do it. This behavior seems to be the norm for most of my colleagues' classes as well -- students don't want to commit themselves individually to answering a question. Of course, this is why we use group dynamics in the classroom -- to get students to participate in a small group of peers, agree on an answer, and have someone randomly picked to speak for the entire group rather than for themselves. This leads into the discussion in the next paragraph.

Providing a workable means by which students may participate in cooperative learning

Some web-class groups attempted to communicate by email (but this is asynchronous) and some chose to meet in the library, face to face (but had difficulty finding a mutually agreeable time). The University eventually provided 'chatroom' software that allowed for synchronous communication, but the results were mixed (it was difficult to administer with more than six participants, so I abandoned that venue). Now that everyone is comfortable with IM, that is the means now used. Establishing this type of mechanism for group interactions will be essential if the web course begins to attract students from outside the University who cannot come to campus for group meetings.


Comparisons of Student Performance

Shown below is the final comparison of the performance of students for all academic terms of this study. The criterion used for comparing success in this course across all venues is the mean final total score earned by students (which combines the four exams and 32 written assignments described in Part 3 of this report).
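The report does not specify how the exams and assignments are weighted when combined into a final total score; the following is a minimal sketch, assuming (purely for illustration) equal weighting of all 36 graded items, with hypothetical scores:

```python
# A minimal sketch of assembling a final total score from four exams and
# 32 written assignments. The equal weighting and all scores below are
# illustrative assumptions; Part 3 of the report describes the real items.

def final_total_score(exam_scores, assignment_scores):
    """Average all graded items (each on a 0-100 scale) with equal weight."""
    all_scores = list(exam_scores) + list(assignment_scores)
    return sum(all_scores) / len(all_scores)

exams = [82, 78, 88, 91]        # four hypothetical exam scores
assignments = [85] * 32         # 32 hypothetical assignment scores
print(round(final_total_score(exams, assignments), 1))  # 85.0
```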

To the extent possible, exams with the same degree of difficulty were administered to all students in all venues during the study period, so the comparison of students' success in each venue for each academic period should be valid. In fact, during the Spring 98 semester and the Summer 98 terms, the exams and homework were purposely identical for all the venues. Although exams are returned and discussed in class, students do not retain copies. This should also help ensure the comparability of results across all academic periods, because no one group would have access to a file of old exams.

The results of DELTA Video and Cable courses are considered equivalent and, where both were offered during the same academic term, the scores were combined. Not included in this comparison are results from the written independent study course offered through UNC-CH Continuing Education, in which more than 70 students registered but fewer than 20 completed the course of study; the grading scheme for that course was deemed not comparable with the other venues.

Mean Total Scores by Venue and by Student Classification

The mean final total scores of students in five of the six venues being compared in this project are shown in the Table and the graphs below.

Horizontal headings in the table are: Class/Academic Period; the total number of students registered for each period; separate listings of the number of, and mean scores for, FR, SO, JR, SR, SP and GR students; and the mean GPA for all students in each class at the completion of the term.

Vertical headings list the mean total scores (Totals) and each of the academic periods of the study. In the period labels, V is DELTA video, C is DELTA cable, and W is web (TRACS or DELTA); if no letter prefix is given, the traditional on-campus semester or summer term class is implied. Scores were combined for the Summer 1997 Video and Cable courses. Students who did not complete the course (and received a failing grade) have been excluded.

| Class/Academic Period | # Stud. | Mean Tot | FR # | FR Mean | SO # | SO Mean | JR # | JR Mean | SR # | SR Mean | SP # | SP Mean | GR # | GR Mean | GPA |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Totals | 568 | 83.4 | 100 | 76.3 | 134 | 79.8 | 127 | 81.2 | 148 | 85.9 | 55 | 81.6 | 4 | 96.4 | . |
| Spr 97 | 97 | 81.3 | 31 | 75.1 | 32 | 83.6 | 17 | 83.3 | 15 | 88.4 | 2 | 72.3 | . | . | 2.8 |
| V Spr 97 | 7 | 78.4 | 1 | 72.2 | . | . | 2 | 85.7 | 3 | 78.9 | 1 | 68.8 | . | . | 2.1 |
| 1st Sum 97 | 22 | 83.6 | 3 | 73.0 | 1 | 80.2 | 4 | 83.9 | 10 | 87.8 | 4 | 81.6 | . | . | 2.3 |
| 2nd Sum 97 | 26 | 78.2 | 1 | 81.9 | 3 | 73.9 | 3 | 72.2 | 14 | 82.9 | 5 | 83.0 | . | . | 2.4 |
| V/C Sum 97 | 3 | 80.0 | . | . | 2 | 73.2 | . | . | . | . | 1 | 93.6 | . | . | 2.0 |
| Fall 97 | 74 | 80.2 | 16 | 79.5 | 22 | 79.6 | 15 | 86.9 | 13 | 75.8 | 7 | 74.8 | 1 | 96.0 | 2.7 |
| W Fall 97 | 15 | 85.5 | 1 | 81.4 | 2 | 79.2 | 8 | 87.2 | 3 | 81.6 | 1 | 86.4 | . | . | 2.9 |
| V Fall 97 | 5 | 76.0 | . | . | . | . | 1 | 61.3 | . | . | 4 | 85.9 | . | . | 2.4 |
| Spr 98 | 105 | 81.8 | 18 | 80.7 | 35 | 80.5 | 25 | 84.6 | 22 | 84.0 | 5 | 72.1 | . | . | 2.8 |
| W Spr 98 | 14 | 78.2 | 2 | 65.9 | 4 | 86.8 | 4 | 75.0 | 4 | 78.8 | . | . | . | . | 2.6 |
| DELTA W Spr 98 | 2 | 98.1 | . | . | . | . | . | . | 1 | 99.6 | 1 | 96.5 | . | . | 2.7 |
| 1st Sum 98 | 21 | 88.1 | . | . | 2 | 79.6 | 4 | 87.0 | 12 | 90.6 | 2 | 79.5 | 1 | 96.8 | 2.9 |
| 2nd Sum 98 | 23 | 82.0 | . | . | 3 | 84.9 | 4 | 80.6 | 10 | 85.9 | 6 | 75.2 | . | . | 2.5 |
| DELTA W Sum 98 | 5 | 79.6 | . | . | 1 | 63.7 | 2 | 82.6 | . | . | 2 | 84.5 | . | . | 1.9 |
| C Sum 98 | 1 | 95.2 | . | . | . | . | . | . | . | . | 1 | 95.2 | . | . | 3.6 |
| Fall 98 | 69 | 81.6 | 21 | 75.4 | 15 | 83.7 | 19 | 82.4 | 13 | 85.2 | 1 | 67.1 | . | . | 2.8 |
| W Fall 98** | 20 | 79.6 | 2 | 68.9 | 7 | 79.1 | 4 | 67.4 | 6 | 88.0 | 1 | 97.2 | . | . | 2.2 |
| DELTA W Fall 98 | 3 | 81.5 | . | . | . | . | . | . | 2 | 82.7 | 1 | 79.1 | . | . | . |
| Spr 99* | . | . | . | . | . | . | . | . | . | . | . | . | . | . | . |
| W Spr 99*** | . | . | . | . | . | . | . | . | . | . | . | . | . | . | . |
| V Spr 99 | 2 | 97.4 | . | . | . | . | . | . | 2 | 97.4 | . | . | . | . | 3.3 |
| 1st Sum 99 | 26 | 77.4 | 1 | 71.0 | 3 | 71.8 | 6 | 83.7 | 11 | 82.2 | 4 | 84.9 | 1 | 89.8 | 2.6 |
| 2nd Sum 99 | 20 | 88.4 | 2 | 92.3 | 2 | 86.8 | 7 | 86.6 | 6 | 90.7 | 4 | 85.3 | 2 | 92.2 | 2.5 |
| OIT W Sum 99 | 8 | 81.7 | 1 | 74.0 | . | . | 3 | 90.2 | 2 | 86.1 | 2 | 68.3 | . | . | 3.0 |
| V Sum 99 | 8 | 84.3 | 1 | 90.8 | 1 | 96.2 | 3 | 83.5 | 3 | 85.7 | . | . | . | . | 2.4 |
* Team-taught, so scores not included. ** The PBS student was a very bright, elderly disabled man. *** Data lost.

Enrollments by Student Classification During Study Period

As has been true for all semesters that I have taught this sophomore-level course, and as shown on the right in the bar graph of student enrollment for the study period, sophomores (SO) make up just less than 25% of the students -- there are more seniors (SR) and nearly as many juniors (JR) enrolled, and a fairly large proportion are freshmen (FR).

This course was written with a scientific rigor that challenges students at the SO level. As a result, I discouraged FR (particularly during the first semester they are enrolled) from taking the course and, to keep from punishing SO students by having them compete with the JR and SR students (and occasional graduate [GR] students) that register for this class, I did not (and do not now) curve grades. I have adopted a fixed-grading system for this course so that all students will know exactly where they stand throughout the entire academic period.
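For illustration, a fixed-grading system can be expressed as a simple lookup that never depends on how other students perform. The cutoff values below are hypothetical assumptions, not the actual MEA 200 scale:

```python
# A minimal sketch of a fixed (non-curved) grading scheme. The cutoffs are
# illustrative assumptions; the point is that the mapping is set in advance,
# so each student's total score alone determines the grade.

ASSUMED_CUTOFFS = [(90, "A"), (80, "B"), (70, "C"), (60, "D")]

def letter_grade(total_score):
    for cutoff, grade in ASSUMED_CUTOFFS:
        if total_score >= cutoff:
            return grade
    return "F"

print(letter_grade(83.4))  # "B", regardless of how classmates performed
```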

Finally, as a result of the presence of so many JR and SR in the class, even though FR make up nearly 18% of the population, the mean total scores for all venues were relatively high (as is demonstrated by the mean of 83.4% for the study period).

Mean Total Scores by Student Classification During Study Period

As expected, and as confirmed in the graph of mean total scores by student classification for the study period on the left, there is a direct relationship between student classification and mean total scores, with SR and GR scoring highest.

Not surprisingly, graduate students have the highest scores (but only a total of 4 students enrolled during the study period). SR have the next highest mean score (85.9), nearly 3 points above the mean, and FR have the lowest mean score (75.1), more than 8 points below the mean. Somewhat surprisingly, the third highest mean score (81.6) belongs to the part-time students (the SP category, which includes undesignated students -- UGS -- and post-baccalaureate students -- PBS). The UGS students constitute the majority of the SP students, and many take classes to earn enough credits and/or improve their GPA to enroll as full-time students. Fewer than 10% of the SP students are PBS students, who generally take the course to learn about the ocean; their scores are usually well above the mean and above the scores of UGS students, raising the overall SP mean total score. The mean score for SO is 79.8, almost 4 points less than the mean, and even JR are more than 2 points below the mean.


Comparison of Means by Venue

Two goals of this study were to create a web course that was equivalent to the traditional lecture course and to compare student learning (as measured by the mean total scores earned at the end of each term) for all of the venues in which this course is taught. To help answer that important question, I consolidated all the course mean total scores and FR, SO, JR, SR, SP, GR and GPA mean scores by venue, as shown in the table below.

| Acad. Period | Total E. | Total Mean | FR E. | FR Mean | SO E. | SO Mean | JR E. | JR Mean | SR E. | SR Mean | SP E. | SP Mean | GR E. | GR Mean | GPA |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Reg. Term | 345 | 83.7 | 86 | 77.7 | 104 | 81.9 | 76 | 84.3 | 63 | 83.4 | 15 | 71.6 | 1 | 96.0 | 2.8 |
| Sum. Term | 149 | 82.6 | 7 | 79.6 | 15 | 79.0 | 28 | 82.0 | 63 | 86.4 | 29 | 76.7 | 4 | 92.9 | 2.5 |
| Reg. Web | 49 | 80.7 | 5 | 71.4 | 13 | 82.7 | 15 | 77.1 | 13 | 82.6 | 2 | 89.9 | . | . | 2.7 |
| DELTA Web | 18 | 84.7 | 1 | 74.0 | 1 | 63.7 | 5 | 86.4 | 5 | 87.8 | 6 | 82.4 | . | . | 2.5 |
| V/C | 27 | 86.1 | 2 | 82.0 | 3 | 84.7 | 7 | 78.9 | 7 | 87.7 | 7 | 86.1 | . | . | 2.6 |

The total mean score for this study period is 83.4, and the standard deviation is only plus or minus 3.08, so it varies from the mean by less than 3.7%. We can conclude, therefore, that there is little difference in the mean performance of students in each venue, so the courses can be considered equivalent (see the meta-analytic study below). Note also that the mean GPA of students in all these venues is nearly the same.
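That percentage follows directly from the two figures just quoted; a one-line check:

```python
# Relative variation of the venue means: SD / mean, using the values
# reported above (3.08 and 83.4).
print(round(3.08 / 83.4 * 100, 2))  # 3.69 (percent), i.e. less than 3.7%
```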

It is interesting to note that the lowest mean scores were earned by students taking my regular semester lecture classes (probably because more than a quarter of those students were freshmen, who had the lowest mean scores in that venue), while the highest scores were earned by those who took my Video/Cable or DELTA Web courses (where low enrollments greatly skew the results) -- see more on this below.


Comparison of Means between Traditional and DE Classes

Another interesting analysis of the study data, one that is very consistent with my study conclusions, has been published by Mickey Shachar, Ph.D., Assistant Professor, College of Health Sciences and Education, Touro University International, Anaheim, CA ("Differences Between Traditional and Distance Learning Outcomes: A Meta-Analytic Approach." UMI Dissertation Services, ProQuest, 2002. ISBN 0-493-87403-8). I provided him with all of my raw data and, if you look closely, you will find that the total number of students he included in his analysis was slightly higher than the number I used in my study (I eliminated a few students who did not complete the course, for instance).

His meta-analysis also compared the differences between the academic performance (the final course mean total scores) of students enrolled in distance education courses and that of students enrolled in traditional settings during the same academic period. Dr. Shachar grouped all distance education courses (web; video/cable) into one category called DE for his comparisons and, in addition, calculated the effect size for seven academic periods. The effect size is the difference between the means divided by the pooled standard deviation; more importantly, it shows the EFFECT of a treatment or procedure (in this case, teaching at a distance) on the experimental group (DE students), relative to the untreated control group (Traditional students).
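As a sketch of the calculation just described (the exact pooling formula Dr. Shachar used is my assumption; the one below is the standard Cohen's d pooling), the effect size for one academic period can be computed as follows:

```python
import math

# Effect size (Cohen's d): difference between the DE and Traditional means,
# divided by the pooled standard deviation.

def effect_size(n_trad, mean_trad, sd_trad, n_de, mean_de, sd_de):
    pooled_sd = math.sqrt(((n_trad - 1) * sd_trad**2 + (n_de - 1) * sd_de**2)
                          / (n_trad + n_de - 2))
    return (mean_de - mean_trad) / pooled_sd

# Fall 98 row of the table below: Traditional N=69, mean 81.2, SD 10.26;
# DE N=24, mean 79.8, SD 11.55.
print(round(effect_size(69, 81.2, 10.26, 24, 79.8, 11.55), 2))  # -0.13
```

This comes out within rounding of the -0.14 reported for Fall 98 in the table below.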

The table below shows the effect of this treatment on academic performance for the seven academic periods, where a negative sign indicates Traditional > DE and a positive sign indicates DE > Traditional. Note that there is little difference in the means or effect sizes; the standard deviations for the DE classes are generally, but not exclusively, larger, partly as a result of smaller class sizes and partly because the quality of students is more variable in DE classes. Dr. Shachar expressed his pleasure with my comparative results, noting that the calculated effect sizes fell within the 95% confidence interval.

| Class | Effect Size | Traditional N | Traditional Mean | Traditional SD | DE N | DE Mean | DE SD |
|---|---|---|---|---|---|---|---|
| Fall 98 | -0.14 | 69 | 81.2 | 10.26 | 24 | 79.8 | 11.55 |
| Spring 97 | -0.26 | 97 | 81.3 | 11.13 | 7 | 78.4 | 8.44 |
| Fall 97 | +0.26 | 74 | 80.2 | 10.15 | 21 | 82.8 | 12.98 |
| Spring 98 | -0.12 | 105 | 81.8 | 10.37 | 16 | 80.6 | 13.51 |
| Summer 97 | -0.21 | 48 | 82.0 | 9.60 | 3 | 80.0 | 12.31 |
| Summer 98 | -0.25 | 44 | 84.9 | 10.94 | 6 | 82.0 | 12.47 |
| Summer 99 | +0.11 | 45 | 85.0 | 10.06 | 14 | 86.1 | 9.82 |
| Total: 573 | | 482 | | | 91 | | |

Having concluded that there is little difference in the courses offered by each of these four venues, or between Traditional and DE students, how do we account for the obvious variability in the mean scores shown in the previous section, where we compared success by student classification? As might be expected, the least variability is among SR students, and the most among FR, SO and SP students. What accounts for that variability?

Mean Total Scores by Academic Period

To help answer these questions about variability, the mean total scores for all the courses taught during the study period are plotted in the graph on the right.

Five courses with means markedly above the mean for the study period deserve comment -- three were OIT courses with very small enrollments and very good students: (1) the OIT Web Spring 98 class (#11 on the x-axis) had two very mature students (one SR and one PBS) who worked in computer-based jobs; (2) the OIT Cable Summer 98 class (#15 on the x-axis) had a married woman working full time as an account executive while working toward her degree; and (3) the OIT web course (#19 on the x-axis) had two very good seniors as its only registrants. Clearly the quality of the students and enrollments of two or fewer skewed the results.

The highest mean scores for non-OIT courses were earned by the 21 students in the 1st Summer 98 term (#12 on the x-axis) -- 12 were graduating seniors, one was a graduate student, and no freshmen were enrolled -- and by the students in the 2nd Summer 99 term (#21 on the x-axis), where 12 students were SR, PBS or GR.


Variability in Mean Scores by Academic Period

The mean scores shown above mask the variability of the scores that make up each mean. Overall, seniors have the highest mean total scores among undergraduates but, as can be seen in the graph on the left, which shows the mean scores by student classification for each academic term, seniors do not always have the highest mean scores in each academic period.

Note in particular the high FR scores (square symbol) for 2nd Summer 99 (#21) and Video Summer 99 (#23) -- as can be seen from the table above, these high scores were earned by some very good freshman students.

Also note that in three instances (Video Spring 97, #2; Fall 97, #6; and Web Fall 97, #7), the mean JR scores were higher than the mean SR scores; also in Web Fall 97 (#7), the mean SO scores were just a bit higher than the JR scores and significantly higher than the SR scores; and in the Video Summer 99 course, the mean SO score earned by one student was more than 10 points higher than the mean of the three SR scores.

Adding to this variability consideration, the first web course in the Fall of 1997 had some exceptional students who just happened to be juniors and sophomores, and the best students in the second web course were sophomores. Note, finally, that in the Web Spring 98 class (#10), the mean SO scores were significantly higher than the means of both the JR and SR scores.

Clearly, something more than a student's classification accounts for the variability in the means by academic period. If there is no real difference in the course when taught as a regular lecture, summer lecture, internet, or video/cable independent study class, and student classification does not fully explain the variability in mean total scores, then how else can this variability be explained?

Composite Mean Total Scores Versus Student's Overall GPA

We would expect (and the plot on the left clearly demonstrates) that there is a strong correlation between the total mean score earned by a student and his or her GPA at the time the course was completed -- the correlation coefficient for this composite plot of the 585 students who took courses during this study period (and for whom GPAs could be obtained) is 0.6253.

The suggestion that GPA is one of the best predictors of student success in all these classes is further demonstrated by the plot below of the mean GPA of students versus the total mean scores in each of 22 classes; the correlation coefficient is 0.5872.
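For reference, the correlation coefficient in question is Pearson's r; a minimal sketch of the computation follows, using hypothetical (GPA, total score) pairs rather than the actual study data:

```python
import math
import statistics

# Pearson's r between GPA and final total score. The data below are
# hypothetical stand-ins; the study reported r = 0.6253 for its 585
# student records and r = 0.5872 for the 22 class means.

def pearson_r(xs, ys):
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

gpas = [2.1, 2.5, 2.8, 3.0, 3.4, 3.8]          # hypothetical GPAs
scores = [71.0, 74.5, 82.0, 79.9, 88.1, 90.5]  # hypothetical total scores
print(round(pearson_r(gpas, scores), 4))
```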


This would suggest that much of the variability shown in the graph for the four regular semesters (and as shown above in the tables and other graphs) is due to the GPA of the students who register for the course in any particular academic period. Students self-select enrollment in each of these venues, and clearly the overall quality of the students in any one venue is quite random. Obviously, classes during some academic terms have a higher proportion of these better students, while others do not.


Some Conclusions About Student Performance For the Study


Return to the Directory of this Page