Updated on: Thursday, 19 Oct 2017 17:18:04 EDT
Finally, the last part of this report presents the final comparative results of student performance in MEA 200 across five different venues.


In my first report I identified group activities and cooperative learning as major limitations of web-based courses, and the results of this study bear out my concern. Except in a few classes, group activities have been less than satisfactory. Some of the problems behind this ineffectiveness follow.
I do not think the students in this class gave themselves enough opportunity to be convinced that group activities are an important enhancement for learning and understanding. The problem, however, may run deeper than that. In each of the summer terms and semesters up until the Fall of 2000, I provided a listserv for all my classes with the intent that students would use it to communicate with me and with each other (a subtle way to provide additional opportunities for group learning). Because I believe strongly that students must take responsibility for their own learning, I left it to each individual student to subscribe (I did not automatically subscribe the entire class). The results were very disappointing. Generally, less than 15% of each class subscribed and, while some students took maximum advantage of this forum for asking questions, on only two occasions did other students on the listserv attempt to answer questions posted by their classmates; otherwise they waited for me to do it. This behavior seems to be the norm in most of my colleagues' classes as well: students do not want to commit themselves individually to answering a question. This, of course, is why we use group dynamics in the classroom: students participate in a small group of peers, agree on an answer, and have someone randomly picked to speak for the entire group rather than for themselves. This leads into the discussion in the next paragraph.
Some web-class groups attempted to communicate by email (but this is asynchronous), and some chose to meet face to face in the library (but had difficulty finding a mutually agreeable time). The University eventually provided chat-room software that allowed synchronous communication, but the results were mixed; it was difficult to administer with more than six participants, so I abandoned that venue. Now that everyone is comfortable with instant messaging (IM), that is the means now used. Establishing this type of mechanism for group interaction is essential if the web course begins to attract students from outside the University who cannot come to campus for group meetings.
Shown below is the final comparison of student performance for all academic terms of this study. The criterion used to compare success in this course across all venues is the mean final total score earned by students (which includes four exams and 32 written assignments, described in Part 3 of this report).
To the extent possible, exams of the same degree of difficulty were administered to all students in all venues during the study period, so the comparison of students' success in each venue for each academic period should be valid. In fact, during the Spring 98 semester and the Summer 98 terms, the exams and homework were purposely identical across all venues. Although exams are returned and discussed in class, students do not retain copies. This should also help ensure the comparability of results across academic periods, because no one group would have access to a file of old exams.
The results of the DELTA Video and Cable courses are considered equivalent and, where both were offered during the same academic term, the scores were combined. Not included in this comparison are results from the written independent-study course offered through UNCCH Continuing Education: more than 70 students registered for that course, but fewer than 20 completed it, and its grading scheme was deemed not comparable with the other venues.
The mean final total scores of students in five of the six venues being compared in this project are shown in the Table and the graphs below.
Horizontal headings in the table are: Type of Class/Academic Period; the total number of students registered for each period; separate listings of the number of, and mean scores for, FR, SO, JR, SR, SP, and GR students; and the mean GPA of all students in each class at the completion of the term.
Vertical headings are the class designations for each academic period of the study, together with a Totals row of mean total scores for the study as a whole. V denotes DELTA video, C denotes DELTA cable, and W denotes web (TRACS or DELTA); if no letter prefix is given, the traditional on-campus semester or summer-term class is implied. Scores were combined for the Summer 1997 Video and Cable courses. Students who did not complete the course (and received a failing grade) have been excluded.

[Table: mean final total scores by academic period. Rows (in recovered order): Totals; Spr 97; V Spr 97; 1st Sum 97; 2nd Sum 97; V/C Sum 97; Fall 97; W Fall 97; V Fall 97; Spr 98; W Spr 98; DELTA W Spr 98; 1st Sum 98; 2nd Sum 98; DELTA W Sum 98; C Sum 98; Fall 98; W Fall 98**; DELTA W Fall 98; Spr 99*; W Spr 99***; V Spr 99; 1st Sum 99; 2nd Sum 99; OIT W Sum 99; V Sum 99. Most cell values did not survive conversion to text; surviving fragments include 79.8, 148, and 85.9 in the Totals row, 13 in the Fall 97 row, and 90.7 in the 2nd Sum 99 row.]

As has been true for all semesters in which I have taught this sophomore-level course, and as shown in the bar graph of student enrollment for the study period on the right, sophomores (SO) make up just under 25% of the students; there are more seniors (SR), nearly as many juniors (JR), and a fairly large proportion of freshmen (FR).
This course was written with a scientific rigor that challenges students at the SO level. As a result, I discouraged FR (particularly during their first semester of enrollment) from taking the course and, to keep from punishing SO students by making them compete with the JR and SR students (and occasional graduate [GR] students) who register for this class, I did not (and do not now) curve grades. I have adopted a fixed grading system for this course so that all students know exactly where they stand throughout the entire academic period.
Finally, because of the presence of so many JR and SR in the class, even though FR make up nearly 18% of the population, the mean total scores for all venues were relatively high (as demonstrated by the mean of 83.4% for the study period).
Furthermore, as expected and as confirmed in the graph of mean total scores by student classification for the study period on the left, there is a direct relationship between student classification and mean total score, with SR and GR scoring highest.
Not surprisingly, graduate students have the highest scores (but only four GR students enrolled during the study period). SR have the next-highest mean score (85.9), nearly 3 points above the overall mean, and FR have the lowest mean score (75.1), more than 8 points below it. Somewhat surprisingly, the third-highest mean score (81.6) belongs to the part-time students (the SP category, which includes undesignated students [UGS] and post-baccalaureate students [PBS]). UGS students constitute the majority of the SP category, and many take classes to earn enough credits and/or improve their GPA to enroll as full-time students. Less than 10% of the SP students are PBS students, who generally take the course to learn about the ocean; their scores are well above the mean and above those of the UGS students, raising the overall SP mean total score. The mean score for SO is 79.8, almost 4 points below the overall mean, and even JR are more than 2 points below it.
Two goals of this study were to create a web course equivalent to the traditional lecture course and to compare student learning (as measured by the mean total scores earned at the end of each term) across all of the venues in which this course is taught. To help make that comparison, I consolidated the course mean total scores and the FR, SO, JR, SR, SP, GR, and GPA mean scores by venue, as shown in the table below.

[Table: mean scores by venue. The header row and some cells did not survive conversion to text; the columns, as described above, are the course mean total score and the FR, SO, JR, SR, SP, GR, and GPA means, and the recovered values for each venue are listed in their original order.]

Reg. Term:  83.7, 77.7, 81.9, 84.3, 83.4, 71.6, 96.0
Sum. Term:  82.6, 79.6, 79.0, 82.0, 86.4, 76.7, 92.9
Reg. Web:   80.7, 71.4, 82.7, 77.1, 82.6, 89.9
DELTA Web:  84.7, 74.0, 63.7, 86.4, 87.8, 82.4
V/C:        86.1, 82.0, 84.7, 78.9, 87.7, 86.1
The total mean score for this study period is 83.4, and the standard deviation is only ±3.08, so scores vary from the mean by less than 3.7%. We can conclude, therefore, that there is little difference in the mean performance of students across venues, so the courses can be considered equivalent (see the meta-analytic study below). Note also that the mean GPA of students in all of these venues is nearly the same.
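As a sketch of the dispersion calculation behind this conclusion, the snippet below computes a mean, a sample standard deviation, and the variation relative to the mean (coefficient of variation). The input values are illustrative placeholders, not the study's actual per-term data:

```python
import statistics

# Illustrative mean total scores -- placeholder values, not the study data.
mean_scores = [83.7, 82.6, 80.7, 84.7, 86.1]

mean = statistics.mean(mean_scores)   # grand mean of the group means
sd = statistics.stdev(mean_scores)    # sample standard deviation
cv_percent = 100 * sd / mean          # variation relative to the mean, in %

print(f"mean = {mean:.2f}, sd = {sd:.2f}, variation = {cv_percent:.1f}%")
```

A standard deviation that is only a few percent of the mean, as in the report's figures, is what supports treating the venue means as effectively equivalent.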
It is interesting that the lowest mean scores were earned by students in my regular semester lecture classes (probably because more than a quarter of those students were freshmen, who had the lowest mean scores in that venue), while the highest scores came from those who took my Video/Cable or DELTA Web courses (where low enrollments greatly skewed the results); see more on this below.
Another interesting analysis of the study data, very consistent with my conclusions, has been published by Mickey Shachar, Ph.D., Assistant Professor, College of Health Sciences and Education, Touro University International, Anaheim, CA ("Differences Between Traditional and Distance Learning Outcomes: A Meta-Analytic Approach." UMI Dissertation Services, ProQuest, 2002. ISBN 0493874038). I provided him with all of my raw data and, if you look closely, you will find that the total number of students he included in his analysis is slightly higher than the number I used in my study (I eliminated a few students who did not complete the course, for instance).
His meta-analysis also compared the difference in academic performance (the final course mean total scores) between students enrolled in distance-education courses and those enrolled in traditional settings during the same academic period. Dr. Shachar grouped all distance-education courses (web; video/cable) into one category, DE, for his comparisons and calculated the effect size for seven academic periods. The effect size is the difference between the means divided by the pooled standard deviation; more importantly, it shows the EFFECT of a treatment or procedure (in this case, teaching at a distance) on the experimental group (DE students) relative to the untreated control group (Traditional students).
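The effect-size computation just described can be sketched as follows. The Fall 98 means and standard deviations are taken from the comparison data below, but the per-group enrollments were not recoverable, so the group sizes used here (30 each) are assumed placeholders:

```python
import math

def effect_size(mean_de, sd_de, n_de, mean_trad, sd_trad, n_trad):
    """Difference between the DE and Traditional means, divided by the
    pooled standard deviation (Cohen's d). Negative => Traditional > DE."""
    pooled_var = (((n_de - 1) * sd_de ** 2 + (n_trad - 1) * sd_trad ** 2)
                  / (n_de + n_trad - 2))
    return (mean_de - mean_trad) / math.sqrt(pooled_var)

# Fall 98: DE mean 79.8 (SD 11.55) vs. Traditional mean 81.2 (SD 10.26);
# n = 30 per group is an assumption for illustration only.
d = effect_size(79.8, 11.55, 30, 81.2, 10.26, 30)
print(f"d = {d:.3f}")  # negative: Traditional outscored DE this term
```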
The table below shows the effect of this treatment on academic performance for the seven academic periods, where a negative sign indicates Traditional > DE and a positive sign indicates DE > Traditional. Note that there is little difference in the means or effect sizes; the standard deviations for the DE classes are generally, but not exclusively, larger, partly as a result of smaller class sizes and because the quality of students is more variable in DE classes. Dr. Shachar expressed his pleasure with my comparative results, noting that the calculated effect sizes fell within the 95% confidence interval.

[Table: Traditional vs. DE mean total scores and standard deviations by academic period; the per-period enrollments and effect-size values did not survive conversion to text. Total students: 573.]

Period      Traditional Mean (SD)    DE Mean (SD)
Fall 98     81.2 (10.26)             79.8 (11.55)
Spring 97   81.3 (11.13)             78.4 (8.44)
Fall 97     80.2 (10.15)             82.8 (12.98)
Spring 98   81.8 (10.37)             80.6 (13.51)
Summer 97   82.0 (9.60)              80.0 (12.31)
Summer 98   84.9 (10.94)             82.0 (12.47)
Summer 99   85.0 (10.06)             86.1 (9.82)

Having concluded that there is little difference among the courses offered in these venues, or between Traditional and DE students, how do we account for the obvious variability in the mean scores shown in the previous section, where we compared success by student classification? As might be expected, the least variability is among SR students, and the most among FR, SO, and SP students. What accounts for that variability?
To help answer the variability question, the mean total scores for all courses taught during the study period are plotted in the graph on the right.
Five courses with means markedly above the study-period mean deserve comment. Three were OIT courses with very small enrollments and very good students: (1) the OIT Web Spring 98 class (#11 on the x-axis) had two very mature students (one SR and one PBS) who worked in computer-based jobs; (2) in the OIT Cable Summer 98 class (#15 on the x-axis), a married woman was working full time as an account executive while pursuing her degree; and (3) in the OIT web course (#19 on the x-axis), two very good seniors were the only students registered. Clearly, the quality of the students and enrollments of two or fewer skewed the results.
The highest mean scores for non-OIT courses were earned by the 21 students in the 1st Summer 98 term (#12 on the x-axis), 12 of whom were graduating seniors and one a graduate student, with no freshmen enrolled, and in the 2nd Summer 99 term (#21 on the x-axis), where 12 students were SR, PBS, or GR.
The mean scores shown above mask the variability of the scores that make them up. Overall, seniors have the highest mean total scores among undergraduates but, as can be seen in the graph on the left, which shows mean scores by student classification for each academic term, seniors do not always have the highest mean scores in every academic period.
Note in particular the high FR scores (square symbol) for the 2nd Summer 99 (#21) and the Video Summer 99 (#23) terms; as can be seen from the table above, these high scores were earned by some very good freshmen.
Also note that in three instances (Video Spring 97, #2; Fall 97, #6; and Web Fall 97, #7) the mean JR scores were higher than the mean SR scores. In the Web Fall 97 class (#7), the mean SO score was just a bit higher than the JR score and significantly higher than the SR score, and in the Video Summer 99 course the mean SO score, earned by one student, was more than 10 points higher than the mean of the three SR scores.
Adding to this variability, the first web course in the Fall of 1997 had some exceptional students who happened to be juniors and sophomores, and the best students in the second web course were sophomores. Note, finally, that in the Web Spring 98 classes (#10) the mean SO scores were significantly higher than the means of both the JR and SR scores.
Clearly, something more than a student's classification accounts for the variability in the means by academic period. If there is no real difference in the course whether it is taught as a regular lecture, summer lecture, internet, or video/cable independent-study class, and student classification does not fully explain the variability in mean total scores, how else can this variability be explained?
We would expect (and the plot on the left clearly demonstrates) a strong correlation between the total mean score earned by a student and his or her GPA at the time the course was completed; the correlation coefficient for this composite plot of the 585 students who took courses during the study period (and for whom GPAs could be obtained) is 0.6253.
The suggestion that GPA is one of the best predictors of student success in these classes is further demonstrated by the plot below of the mean GPA of students versus the mean total score in each of 22 classes; the correlation coefficient is 0.5872.
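The correlation coefficients quoted here are ordinary Pearson r values. A minimal sketch of the calculation, using made-up (GPA, total score) pairs rather than the study data:

```python
import math
import statistics

def pearson_r(xs, ys):
    """Pearson correlation coefficient for two equal-length sequences."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical (GPA, final total score) pairs -- illustrative only.
gpas = [2.1, 2.6, 3.0, 3.3, 3.8]
totals = [71.0, 78.0, 80.0, 85.0, 91.0]
print(f"r = {pearson_r(gpas, totals):.4f}")
```

An r around 0.6, as reported for the composite plot, indicates a strong but far from perfect relationship: GPA explains roughly 39% of the variance in total scores (r² ≈ 0.39).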
This suggests that much of the variability shown in the graph for the four regular semesters (and in the tables and other graphs above) is due to the GPAs of the students who register for the course in any particular academic period. Students self-select into each of these venues, and the overall quality of the students in any one venue is quite random: classes in some academic terms have a higher proportion of better students, while others do not.