
Response Rates and the Online ISQ Process

The following information is taken from the University of Oregon online evaluation website: 


Worried about the accuracy or validity of the course evaluations system?

Online evaluations save money, lower staff workload, decrease the margin for error, preserve class time that would otherwise be spent on in-class evaluations, and allow quick data turnaround. Still, going online is a big change from paper evaluations. As a faculty member, you naturally care about the feedback your students have to offer, and want to keep the accuracy, volume, and quality of that feedback as high as possible. Many faculty have questions about online evaluations, and wonder whether they can retain the positives they’ve come to depend on with paper evaluations. Below, some common faculty concerns are detailed, along with some of the literature that speaks to those concerns.

Are online evaluations as accurate as paper evaluations?

Yes. There is a prevalent belief that paper evaluations come closer than online evaluations to the rating that most accurately reflects the quality of a faculty member’s teaching. If this were true, scores from paper and online evaluations would differ substantially. Studies have found, however, that quantitative results do not differ significantly whether an evaluation is delivered in online or paper format (Johnson, 2002; Kasiar et al., 2002; Liegle & McDonald, 2004; Donovan et al., 2006; Hardy, 2003; Heath et al., 2007; Spooner et al., 1999; Matz, 1999).


Access the IDEA Center paper on course evaluations

Won’t allowing “absentee” students to participate lower my evaluation scores?

No. It’s true that when evaluations are given in class, students who attend less often are more likely to be excluded. This is often seen as a positive, since these students are assumed to have less basis for evaluating a class, and are also assumed to be the students who would evaluate a faculty member more negatively. The concern is that giving this population a greater opportunity to evaluate will collect more negative feedback, lowering a faculty member’s overall rating.


Actually, students with a higher GPA (presumably the “better” and more consistently attending students) complete online evaluations at over twice the rate of students with a poor GPA (Thorpe, 2002; Layne et al., 1999). Likewise, students expecting poor grades in a class are no more likely, and generally less likely, to score an instructor below the class mean than students expecting good grades (Layne et al., 1999; Avery et al., 2006; Thorpe, 2002).


While this may allay concerns, there is another reason to capture feedback from students who attend less frequently, or whose attendance grows spottier toward the end of the term: they may be in a unique position to point out things about a course that a faculty member would definitely want to know. While positive reinforcement is always welcome, knowing why a student failed to engage, or became disengaged, can show where to make tweaks and improvements that will benefit all students.

Will students give as much, and as high-quality, qualitative feedback online?

Yes. When faculty members get their evaluations, they generally hope for comments, substantive feedback, and enough detail to know whether a change in their teaching or course content is warranted and, if so, exactly what that change should be.


Contrary to expectation, paper evaluations do not offer a greater benefit than online evaluations in this area. In fact, a higher percentage of students who respond to online evaluations include qualitative feedback (Donovan et al., 2006; Johnson, 2002; Kasiar et al., 2002; Laubsch, 2006; Layne et al., 1999). The amount of online qualitative feedback is also greater than in paper evaluations: in research analyzing word count, studies find that qualitative feedback from online evaluations has, on average, between 4 and 7 times more words than feedback from paper evaluations (Kasiar et al., 2002; Hardy, 2003; Hmieleski & Champagne, 2000). Perhaps most importantly, several studies that examined the quality of comments submitted through both formats found that online comments were more substantive, as measured by more words per comment, more descriptive text, and more detailed feedback (Donovan et al., 2006; Johnson, 2002; Collings & Ballantyne, 2004; Ballantyne, 2003).

Don’t online evaluations have lower return rates than paper?

That is largely up to you. Studies comparing online and paper evaluations find that, absent incentives and interventions (e.g., reminder messages, rewards), online evaluations generally have lower response rates than paper evaluations. How large that gap is, however, remains a matter of debate. In refereed academic papers that listed no incentives or interventions and reported comparable paper rates, paper response rates averaged 13% higher than online rates. Studies in refereed journals, though, are not the same as real-life examples from universities using online systems across their campuses. When data from websites, papers, and correspondence from universities using online course evaluations campus-wide are compared, the gap between paper and online response shrinks to 8%, again with no incentives. Adding incentives can boost response rates by 7-25%, depending on which incentives or interventions are used (Ravenscroft & Enyeart, 2009; Norris & Conn, 2005; Johnson, 2002). University of Oregon uses a grade-hold incentive and reminder notices. Though our average online response rate is already high, it is higher still in courses where faculty make a point of telling students how to find the evaluations, that their comments are valued, and how the data are used. Read more about how you can raise YOUR response rates.


Is that better than our paper response rates? While response rates were not collected when we used paper evaluations, the sheer volume of evaluations collected has skyrocketed since going online. In Winter 2007, only 32,000 Scantron forms were printed for that term’s evaluations; in Winter 2013, 84,960 evaluations were completed online.
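For a back-of-the-envelope sense of that growth, the two figures quoted above work out to roughly a 2.7-fold increase in collected evaluations. This is just the arithmetic on the quoted counts, not data from any cited study:

```python
paper_forms_2007 = 32_000  # Scantron forms printed, Winter 2007
online_2013 = 84_960       # evaluations completed online, Winter 2013

# Ratio of online volume to the old paper volume
growth = online_2013 / paper_forms_2007
print(f"{growth:.1f}x")  # 2.7x
```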


Still, there are always people who insist that, prior to going online, they had a better response rate. But were all of those evaluations legitimate? In detailing its own move to online evaluations, the University of British Columbia made a discovery that most institutions never consider. In an online survey, only students validly enrolled in a course can evaluate that course, and they can evaluate only once. In UBC’s first few “test” terms, several courses had paper response rates higher than 100%. For comparison purposes in their paper, UBC simply reduced those figures to 100%, but it is a useful reminder that response rates may be artificially high for paper evaluations (University of British Columbia, 2010).
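The UBC anomaly above is easy to see in a quick sketch: a response rate is just completed evaluations divided by enrolled students, and duplicate or unenrolled paper forms can push it past 100%. The function name and the enrollment figures below are illustrative, not taken from any cited study:

```python
def response_rate(completed: int, enrolled: int, cap: bool = False) -> float:
    """Response rate as a percentage; optionally capped at 100%, as UBC did."""
    rate = 100.0 * completed / enrolled
    return min(rate, 100.0) if cap else rate

# 130 paper forms turned in for a class of 120 enrolled students:
# duplicates or unenrolled respondents inflate the paper rate past 100%.
print(round(response_rate(130, 120), 1))  # 108.3
print(response_rate(130, 120, cap=True))  # 100.0
```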

How can I make a difference in response rates?

At institutions where evaluation is taken seriously by the administration and the faculty, students can feel that their feedback matters and respond accordingly. The relationship between student and faculty member is highly personal and individual, and it plays the biggest role in a student’s decision whether to evaluate.

Many students surveyed believe that faculty do not take evaluations seriously and do not make changes as a result of students’ reviews (Marlin, 1987; Nasser & Fresco, 2002; Spencer & Schmelkin, 2002). Indeed, when asked, very few instructors report having made changes in direct response to student evaluation input (Beran & Rokosh, 2009). When faculty value course evaluations, educate students on how they are used, and emphasize that their input will be taken seriously, however, there is a positive effect on response rates (Gaillard et al., 2006). Constructive, informative, and encouraging instructor-student engagement around the course evaluation process is essential to maintaining or improving response rates (Norris & Conn, 2005; Johnson, 2002; Anderson et al., 2006; Ballantyne, 2003).


A Brigham Young University study found that improved instructor-student engagement helped response rates rise from 40% to 62% across three pilot projects (Johnson, 2002). The same study also showed a strong correlation between the level of communication and the response rate.

There is a widespread assumption that response rates simply “are what they are,” but nothing could be further from the truth. Faculty, and their attitude toward evaluation in their classrooms, make a huge difference in response rates. Would you like higher response rates? Better feedback? More substantive comments? The power to make that happen is in your hands; no one can make more of a difference in this area than you.


Here are some ideas to help you encourage your students to evaluate. Some may work better than others with your classes or your personal style. Try them, and see which work best for you.

  • Early reminder – 2 to 3 weeks prior: While we already automatically send reminder messages to students during the evaluation period, one study (Norris & Conn, 2005) noted a large increase in student response rates when students were given early notification that evaluations were approaching. A reminder around 2 to 3 weeks before the end of the term was found to be ideal.
  • Reminders into term – check how students are doing: From the Faculty Self-Service module in myWings, you can click ISQ response rates and monitor completion during the evaluation period. If your classes aren’t submitting evaluations at the rates you’d like to see, mention the evaluations in class and let students know how important their feedback is to you. In Johnson’s 2002 study, which followed up with non-responding students, 50% of non-responders reported having no idea the survey was available to be taken, and another 16% forgot.
  • Make it an assignment: Many faculty are against offering credit for completing evaluations. The good news is, you don’t have to! Making the evaluation an assignment, even with no point value attached, raises response rates more than almost any other intervention (Johnson, 2002).
  • Give instructions: While the emails we send to students include instructions for finding the course evaluations, many students simply don’t read them. Mention that course evaluations are found on the Student tab in myWings. If they can’t find the link, they can’t evaluate.
  • Stress the importance of evaluation: Students are more likely to complete course evaluations if they understand how they are being used, and believe their opinions matter.
    • Detail how the University uses evaluation feedback: Many students don’t realize that their evaluations are looked at by all department chairs, and by promotion and tenure committees campus-wide. Let them know that this data is valued, and used, by University administrators.
    • Detail how YOU use evaluation feedback: One of the best ways to let students know that their opinion matters, and that you use it to improve your teaching, is to give them an example of how you’ve done so in the past. Share with the students some feedback that you’ve received in the past, and let them know the changes you made as a result.

That’s great, but what I really want is more detailed written feedback. How do I get that?

Simple. Ask for it! Remember: a higher percentage of students include qualitative feedback when evaluations are given online (Donovan et al., 2006; Johnson, 2002; Kasiar et al., 2002; Laubsch, 2006; Layne et al., 1999); the amount of online qualitative feedback is greater than on paper (Kasiar et al., 2002; Hardy, 2003; Hmieleski & Champagne, 2000); and online comments are more substantive and detailed than paper feedback (Donovan et al., 2006; Johnson, 2002; Collings & Ballantyne, 2004; Ballantyne, 2003). All that’s left is making sure you get the feedback you most want. Are you trying out a new textbook this term, or did you add a new subject area to your lectures? Mention it in class when you talk about the evaluations, and let students know you’d really like to hear how the material worked for them. Invite them to give feedback on exactly what you most want to know about, and then demonstrate what type of feedback is most helpful to you. Let them know that “Great professor” is very nice, but not very helpful. Read them some examples of feedback that IS helpful, and show them what about that feedback was useful to you and how it helped you know what to change.


Still have questions? Contact the ITS Help Desk at 620-4357, and thank you for making our online course evaluation system a success!


Anderson, H. M., Cain, J. & Bird, E. (2005). Online student course evaluations: Review of literature and a pilot study. American Journal of Pharmaceutical Education, 69 (1), 34-43.

Anderson, J., Brown, G. & Spaeth, S. (2006). Online student evaluations and response rates reconsidered. Innovate, 2(6).

Avery, R. J., Bryant, W. K., Mathios, A., Kang, H., & Bell, D. (2006). Electronic Course Evaluations: Does an Online Delivery System Influence Student Evaluations? Journal of Economic Education, 37(1): 21-37.

Ballantyne, C.S. (2003). Online evaluations of teaching: An examination of current practice and considerations for the future. In D. L. Sorenson & T. D. Johnson (Eds.), New Directions for Teaching and Learning #96: Online students ratings of instruction (pp. 103-112). San Francisco, CA: Jossey-Bass.

Beran, T., & Rokosh, J. (2009). Instructors' perspectives on the utility of student ratings of instruction. Instructional Science, 37(2): 171-184.

Collings, D., & Ballantyne, C. (2004). Online student survey comments: A qualitative improvement? Paper presented at the 2004 Evaluation Forum, Melbourne, Australia.

Donovan, J., Mader, C. E., & Shinsky, J. (2006). Constructive student feedback: Online vs. Traditional course evaluations. Journal of Interactive Online Learning. 5(3), 283-295.

Gaillard, F., Mitchell, S, & Kavota, V. (2006). Students, Faculty, And Administrators’ Perception Of Students’ Evaluations Of Faculty In Higher Education Business Schools. Journal of College Teaching & Learning, 3(8): 77-90.

Hardy, N. (2003). Online ratings: Fact and fiction. New Directions for Teaching and Learning, 96, 31-41.

Heath, N. M., Lawyer, S. R., & Rasmussen, E. B. (2007). A comparison of web-based versus pencil-and-paper course evaluations. Teaching of Psychology, 34, 259-261.

Hmieleski, K. & Champagne, M. V. (2000). Plugging in to course evaluation. The Technology Source Archives, Sept./Oct.

Johnson, T. (2002). Online student ratings: Will students respond? Paper presented at the annual meeting of the American Educational Research Association, New Orleans, 2002.

Kasiar, J. B., Schroeder, S. L. , & Holstad, S. G. (2002). Comparison of Traditional and Web-Based Course Evaluation Processes in a Required, Team-Taught Pharmacotherapy Course. American Journal of Pharmaceutical Education, 66: 268-270.

Laubsch, P. (2006). Online and in-person evaluations: A literature review and exploratory comparison. Journal of Online Learning and Teaching, 2(2).

Layne, B. H., DeCristoforo, J. R., & McGinty, D. (1999). Electronic versus traditional student ratings of instruction. Research in Higher Education, 40: 221-232.

Liegle, J. O., & McDonald, D. S. (2004, November 5). Lessons Learned From Online vs. Paper-based Computer Information Students' Evaluation System. Information Systems Education Journal, 3(37).

Marlin, J. (1987). Student Perceptions of End-of-Course Evaluations. The Journal of Higher Education, 58(6): 704-716.

Matz, C. (1999). Administration of web versus paper surveys: Mode effects and response rates. (Masters Research Paper). University of North Carolina at Chapel Hill. (ERIC document ED439694).

Nasser, F., & Fresko, B. (2002). Faculty Views of Student Evaluation of College Teaching. Assessment & Evaluation in Higher Education, 27(2): 187-198.

Norris, J., & Conn, C. (2005). Investigating Strategies for Increasing Student Response Rates to Online-Delivered Course Evaluations. Quarterly Review of Distance Education, 6: 13-29.

Nulty, D. (2008, June). The adequacy of response rates to online and paper surveys: What can be done? Assessment & Evaluation in Higher Education, 33(3), 301-314.

Ravenscroft, M. & Enyeart, C. (2009). Online Student Course Evaluations: Strategies for Increasing Student Participation Rates: Custom Research Brief. Education Advisory Board, Washington, D.C.

Spencer, K. & Pedhazur Schmelkin, L. (2002). Student Perspectives on Teaching and its Evaluation. Assessment & Evaluation in Higher Education, 27(5): 397-409.

Spooner, F., Jordan, L., Algozzine, R., & Spooner, M. (1999). Student rating of instruction in distance learning and on-campus classes. The Journal of Educational Research, 92:132.

Thorpe, S. W. (2002). Online student evaluation of instruction: An investigation of non-response bias. Paper presented at the 42nd annual Forum for the Association for Institutional Research, Toronto, Ontario, Canada.

University of British Columbia, Vancouver. (2010, April 15). Student Evaluations of Teaching: Response Rates.