At the request of the SNHU Faculty Senate, I attended the February 17th meeting to discuss concerns with the online delivery of course evaluations using CoursEval. The invitation gave me the push (and deadline) I needed to complete the comparative analysis of paper and electronic evaluation delivery that I had begun, and to address some of the concerns raised by faculty.
For a brief history of CoursEval at SNHU, please see my December 17th blog post.
Some of the concerns that have been raised by faculty since the implementation of CoursEval include:
Comments taken from December 16, 2009 Faculty Senate Minutes:
- The response rate is low – does not provide enough information regarding the course
- Paper evals were anonymous but this [online] is so anonymous that students will say more and maybe be extra critical
- Respondents may say things online that they would not say face to face, or in a classroom environment
- There are too many highs and lows, not enough middle comments
- Faculty does not mind being evaluated, but these are not valid
- Rate of return is a real issue
- Do we need a “carrot and stick” approach to increase participation? That might really help. Ways of encouraging students to respond to the surveys: do not display student’s grades unless the survey has been completed by the student.
While these are legitimate concerns, I wanted to see if there was anything to them by conducting a comparison between paper evaluation data from Fall 2008 and electronic evaluation data from Fall 2009. Other CoursEval universities, including SUNY Buffalo, Purdue, U. of Miami, U. of Texas – Austin, and BYU, have conducted similar analyses. Click here to see a summary report. How does SNHU compare?
To conduct the analysis, I asked each school at SNHU to provide me with a sample of evaluation data from Fall 2008 to use as my “paper” sample. The only criterion was that the evaluations had to be from faculty who taught the same courses in Fall 2009, which served as my “electronic” sample. I received evaluations from the School of Liberal Arts and the School of Education for a total of 14 course sections. “Paper” evaluation data was not received from the School of Business or the School of Community Economic Development. The College of Online and Continuing Education (COCE) was excluded from the analysis because it recently changed its evaluation instrument, so a legitimate comparison was not possible. Of the course evaluations received, only 12 sections were usable, as 2 were for instructors or classes not also taught in Fall 2009. Data from Fall 2008 was matched to data from Fall 2009 by instructor and course, and an independent-samples t-test was completed on each pairing. The entire 2008 sample was also compared and graphed against the 2009 sample. Comments were also analyzed for quantity and type (positive, negative, neutral).
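For readers unfamiliar with the test used above, the pairing comparison can be sketched as follows. This is a minimal illustration with fabricated Likert ratings, not the actual SNHU data; it implements a pooled-variance independent-samples t statistic with only the standard library, as one might run for a single matched course section.

```python
# Hypothetical sketch (fabricated data, NOT the SNHU sample): comparing one
# course section's paper (Fall 2008) and electronic (Fall 2009) Likert-scale
# responses with a pooled-variance independent-samples t-test.
from statistics import mean, variance

def t_statistic(a, b):
    """Pooled-variance independent-samples t statistic for two groups."""
    na, nb = len(a), len(b)
    pooled = ((na - 1) * variance(a) + (nb - 1) * variance(b)) / (na + nb - 2)
    return (mean(a) - mean(b)) / (pooled * (1 / na + 1 / nb)) ** 0.5

paper_2008 = [4, 5, 3, 4, 4, 5, 3, 4, 5, 4, 4, 3]  # fabricated 1-5 ratings
electronic_2009 = [5, 4, 4, 5, 3, 4, 5]            # fabricated 1-5 ratings

t = t_statistic(paper_2008, electronic_2009)
# With df = 12 + 7 - 2 = 17, the two-tailed 5% critical value is about 2.11;
# |t| below that means no statistically significant difference for this pair.
print(abs(t) < 2.11)
```

A statistics package would normally report the p-value directly; comparing |t| against the critical value for the pooled degrees of freedom gives the same accept/reject decision at the 5% level.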
- Response rates were lower for electronic evaluations using CoursEval than for paper evaluations. For the Fall 2008 paper sample, the average response rate was 89%, with a high of 96% and a low of 71%. For Fall 2009, the average response rate was 64%, with a high of 94% and a low of 18%.
- Answers to the Likert-scale questions (questions 1–19) showed no statistically significant difference between answers given on paper and those given using CoursEval. If anything, responses tended to be slightly more positive on the electronic evaluations than on the paper evaluations.
- Comments (question 20) were slightly lower in quantity on the electronic evaluations but showed no difference in the proportion of positive, negative, and neutral responses.
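The comment-mix comparison in the last bullet can be checked with a chi-square test of homogeneity on the positive/negative/neutral counts. The sketch below uses fabricated counts, not the actual comment data; with a 2×3 table the test has 2 degrees of freedom, for which the chi-square p-value has the closed form exp(-x/2), so no statistics library is needed.

```python
# Hypothetical sketch (fabricated counts, NOT the SNHU data): does the
# positive/negative/neutral mix of comments differ between paper and
# electronic evaluations? Chi-square test of homogeneity on a 2x3 table.
import math

paper = [30, 10, 8]       # fabricated counts: [positive, negative, neutral]
electronic = [24, 9, 7]   # fabricated counts: [positive, negative, neutral]

col_totals = [p + e for p, e in zip(paper, electronic)]
n_paper, n_elec = sum(paper), sum(electronic)
grand = n_paper + n_elec

chi2 = 0.0
for row, n_row in ((paper, n_paper), (electronic, n_elec)):
    for obs, col_total in zip(row, col_totals):
        expected = n_row * col_total / grand
        chi2 += (obs - expected) ** 2 / expected

# For df = (2-1)*(3-1) = 2, the chi-square survival function is exp(-x/2).
p_value = math.exp(-chi2 / 2)
print(p_value > 0.05)  # True -> no evidence the proportions differ
```

A p-value above 0.05 here matches the finding that the proportions of positive, negative, and neutral comments did not differ between delivery methods.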
Response to concerns (see above):
Even though the sample size was small (12 Fall 2008 courses and 12 Fall 2009 courses), the findings are consistent with those found at the other universities mentioned above.
- Response rates are lower but this does not seem to have a negative impact on the quality of evaluation data
- The transition from paper to electronic evaluations has not resulted in lower or more critical ratings of faculty and their courses; if anything, the ratings may have gone up. The comments are not more critical or negative.
- The distribution of “highs”, “lows”, and “middles” remained the same
- I am not really sure what the “these are not valid” concern means, or why the electronic evaluations would be any less valid than paper ones
- Rate of return for electronic evaluations is lower than for paper. While this doesn’t appear to have a negative impact, it would be good to increase response rates. Higher response rates will likely come with familiarity, as has been seen with COCE evaluations, whose response rates have increased.
- We can increase response rates by encouraging student participation. If faculty explain to their students the importance of the evaluations and the value they place on the feedback, then response rates will likely rise.
- We cannot tie access to grades to evaluation completion.
A copy of the slides presented to the SNHU Faculty Senate can be accessed by clicking here.
Overall, the transition from paper to CoursEval appears to be a positive one. There has been no negative impact on results as feared, faculty have quicker access to evaluation results, staff efficiency has increased, and the ability to secure and access evaluation data is better.
You can see more information about CoursEval at SNHU by using the menu above.
If you have any questions or comments, please post them by clicking the comments link at the top of this post.