5 Implementation Best Practices
The recommendations in this chapter were approved unanimously by the committee.
The preceding chapters define what the Student Perceptions of Learning Experience (SPLE) measures — six aspects of the learning environment that students are qualified to report on — and how its results should be scored and reported. This companion chapter addresses a third question: how should the instrument be administered?
Cal Poly’s transition to semesters — from 10-week quarters — is a once-in-a-generation opportunity to design the administration of this instrument from scratch rather than inheriting the practices of a system it replaces. The recommendations below draw on the peer-reviewed literature and on the published practices of peer institutions to propose a concrete implementation model for the SPLE.
5.1 Scope
This chapter addresses the implementation of the SPLE — the summative instrument whose results enter the personnel file under CBA §15.17. It does not address the evaluation of teaching more broadly, nor does it address course design, pedagogy, or the other dimensions of the TEval framework that are assessed through peer review, self-reflection, and other evidence sources.
The broader literature on teaching evaluation recognizes that many institutions complement their end-of-term summative instrument with informal mid-semester formative feedback — brief, anonymous check-ins designed to give instructors actionable information while the course is still in progress. Oregon’s two-survey model, Angelo and Cross’s Classroom Assessment Techniques (1993), and Harvard’s early-feedback recommendations (Bok Center) all exemplify this practice. Developing a formative feedback process at Cal Poly is a separate effort: a sub-committee of this Ad Hoc Committee has prepared a companion document, Formative Learning Feedback: A Companion to the Student Perceptions of Learning Experience Report, which addresses the topic in detail but is not part of this report. This chapter does not address it further.
The sections that follow focus exclusively on the SPLE instrument: when to administer it, how to administer it, how to maximize response rates, and how to frame it to minimize bias.
5.2 Timing
5.2.1 The literature consensus
The peer-reviewed literature is clear on one point: summative course evaluations should be administered during the last one to two weeks of instruction, before final examinations begin. Administering evaluations before students receive final grades avoids contaminating responses with grade-related anxiety or gratitude — a well-documented source of bias (Centra, 2003; Marsh, 2007). Administering them too early misses late-semester developments in the learning environment. The table below summarizes the evaluation windows used at several peer institutions.
| Institution | Evaluation Window | Source |
|---|---|---|
| San José State | ~10 days; last 2 weeks of classes | SJSU Teaching Evaluation; SOTE Interpretation Guide (2022) |
| San Diego State | ~14 days; two-week window before finals | SDSU Student Feedback |
| UC Davis | Last week of each quarter (~7 days) | UC Davis ACE |
| UC Santa Barbara | Week 9 Monday – Week 10 Friday (~10 days) | UCSB Course Evaluations |
| UC San Diego | Week 9 Monday – Week 10 Saturday 8 AM (~6 days) | UCSD SET |
All of these institutions release results only after final grades have been submitted — a universally recommended practice that protects anonymity and ensures that neither students nor instructors face grade-related pressure during the evaluation period.
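As a concrete illustration of this gating rule, the short sketch below uses hypothetical logic (not the behavior of any particular evaluation platform) to withhold instructor access to SPLE reports until the section’s final grades have been submitted.

```python
# Hypothetical sketch of a results-release rule: instructors see SPLE reports
# only after final grades for the section have been submitted. Dates are placeholders.
from datetime import date
from typing import Optional

def results_released(grades_submitted_on: Optional[date], today: date) -> bool:
    """Return True once grades are in; until then, reports stay hidden."""
    return grades_submitted_on is not None and today >= grades_submitted_on

print(results_released(None, date(2026, 12, 18)))                # False: grades not yet submitted
print(results_released(date(2026, 12, 17), date(2026, 12, 18)))  # True: release after submission
```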
5.2.2 Recommendation for Cal Poly’s semester
The SPLE window should be open during the last two weeks of instruction before finals week. This two-week window is consistent with the practice at most peer institutions, provides sufficient time for reminders and in-class completion, and ensures that the evaluation captures students’ experience of nearly the full semester without bleeding into the final examination period.
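For planning purposes, the window can be derived mechanically from the academic calendar. The sketch below uses hypothetical dates — not the actual Cal Poly semester calendar — to compute a two-week window ending on the last day of instruction.

```python
# Sketch: derive the SPLE evaluation window from the last day of instruction.
# The date below is a hypothetical placeholder, not the actual Cal Poly calendar.
from datetime import date, timedelta

last_day_of_instruction = date(2026, 12, 11)     # hypothetical Friday before finals week
window_close = last_day_of_instruction
window_open = window_close - timedelta(days=13)  # 14 calendar days, inclusive

print(f"SPLE window: {window_open:%b %d} through {window_close:%b %d}")
# -> SPLE window: Nov 28 through Dec 11
```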
5.3 Mode of administration
5.3.1 The response-rate problem
The single most important administrative decision is the mode of administration, because it largely determines the response rate.
The evidence is unambiguous: in-class administration produces the highest response rates. Paper-based in-class administration historically achieved 80–90% response rates (Nulty, 2008; Berk, 2013). By contrast, online-only outside-class administration typically produces 30–60% response rates — a range in which self-selection bias is a serious threat to the validity of the data (see Section 5.4).
5.3.2 The hybrid model
A growing number of institutions have adopted a hybrid approach: dedicating class time for students to complete the evaluation online, on their own devices. This combines the response-rate benefits of in-class administration with the logistical efficiency of an online platform. Studies report response rates of 70–80% with this model — comparable to traditional paper-based in-class administration (Berk, 2013; Chapman and Joines, 2017).
The hybrid model is particularly well suited to the SPLE. The instrument is designed to be short and focused — a student can complete it in under ten minutes on a phone. Ten to fifteen minutes of dedicated class time is more than sufficient, even accounting for the time to display the link, wait for students to access it, and allow for thoughtful responses.
5.3.3 Recommendation
The summative SPLE should use a hybrid model: during the evaluation window, each instructor dedicates 10–15 minutes of class time for students to complete the survey online. The instructor displays the survey link (URL or QR code), then leaves the room. A designated student or TA signals the instructor to return when time is up. Students who are absent during the in-class session complete the evaluation outside of class during the remainder of the window.
This is the single most effective step the university can take to ensure that the SPLE produces response rates high enough to yield representative, interpretable data.
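To keep the in-class step low-friction, the survey link can be pre-generated as a QR code slide for each section. The sketch below assumes the third-party qrcode package and a hypothetical survey URL and filename; any QR generator would serve equally well.

```python
# Sketch: generate a QR code image for the in-class SPLE link.
# Assumes the third-party "qrcode" package (pip install qrcode[pil]);
# the URL and filename are hypothetical placeholders.
import qrcode

survey_url = "https://evaluations.calpoly.edu/sple/FALL2026/MATH-141-01"
img = qrcode.make(survey_url)       # build the QR image
img.save("sple_math141_01_qr.png")  # drop onto the projected slide

print(f"QR code written for {survey_url}")
```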
5.4 Maximizing response rates
5.4.1 Why response rates matter
When response rates are low, the students who choose to respond may differ systematically from those who do not — they may be more satisfied, more dissatisfied, higher-performing, or lower-performing than the class as a whole. This self-selection bias is not a theoretical concern; it is well documented. As Stark (2026) emphasizes, students who submit evaluations are a self-selected sample of convenience, not a random sample, and there is no statistical basis for extrapolating from respondents to the class as a whole.
Springer (2015) found that online evaluation respondents differed from non-respondents in academic achievement, satisfaction, and motivation. Holtgraves and colleagues (2023) found that non-respondents were not a random subset of enrolled students and that the resulting bias could not be corrected by statistical adjustment. Springer (2016) further showed that the direction of the bias varied by course context — meaning that the bias cannot be predicted or corrected post hoc.
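To make the self-selection mechanism concrete, the short simulation below uses invented numbers — not data from any actual course — to show how a modest gap in response propensity between more and less satisfied students shifts the observed mean away from the true class mean.

```python
# Illustration of self-selection bias with invented numbers.
# Satisfied students respond at 70%, dissatisfied at 30%; averaging the
# respondents alone does not recover the true class mean.
import random

random.seed(1)

# Hypothetical class of 100: 60 students would rate a 5, 40 would rate a 2.
ratings = [5] * 60 + [2] * 40
true_mean = sum(ratings) / len(ratings)  # 3.8

def responds(rating: int) -> bool:
    """More satisfied students are more likely to submit the evaluation."""
    return random.random() < (0.7 if rating >= 4 else 0.3)

observed = [r for r in ratings if responds(r)]
observed_mean = sum(observed) / len(observed)

print(f"true class mean: {true_mean:.2f}")
print(f"observed mean:   {observed_mean:.2f}  ({len(observed)}/100 responded)")
```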
The literature identifies the following strategies, roughly ranked by their demonstrated impact on response rates:
- Dedicate class time for online completion. This is the single most effective intervention. It converts the evaluation from a task students must remember to do on their own time into one that is built into the structure of the course (Berk, 2013; Chapman and Joines, 2017).
- Multiple automated reminders. Adams and Umbach (2012) found that four reminders spaced at 2–3 day intervals brought response rates to approximately the 70th percentile of course-level rates. Each additional reminder (up to four) produced a statistically significant increase. (A scheduling sketch follows this list.)
- LMS integration. Embedding the evaluation link within Canvas — as a dashboard notification, a pop-up reminder, or a course navigation item — reduces the friction of locating and accessing the survey. Students are already in the LMS daily; the evaluation should meet them there.
- Instructor communication. When instructors discuss the evaluation on Day 1 (e.g., a syllabus note explaining that the SPLE asks about the student’s learning experience and that the data are read and taken seriously), and again when the evaluation window opens, response rates increase modestly. The mechanism is legitimacy: students participate when they believe their feedback matters (Chen and Hoshower, 2003).
- Class-level incentives. Goodman, Anson, and Belcheir (2015) found that a class-level incentive (e.g., a bonus point if the class achieves an 80% response rate) increased response rates by approximately 22 percentage points. Class-level incentives avoid the coercion problem of individual incentives because no individual student’s participation can be identified.
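The scheduling sketch referenced above shows one way to lay out a four-reminder cadence inside a two-week window; the dates and the cadence mechanics are illustrative assumptions, not a prescribed system design.

```python
# Sketch: schedule four automated reminders at 3-day intervals inside a
# two-week evaluation window. Dates are hypothetical placeholders.
from datetime import date, timedelta

window_open = date(2026, 11, 28)   # hypothetical window; see Section 5.2.2
window_close = date(2026, 12, 11)

def reminder_dates(start: date, end: date, count: int = 4, gap_days: int = 3):
    """Return up to `count` reminder dates spaced `gap_days` apart, clipped to the window."""
    dates = [start + timedelta(days=gap_days * (i + 1)) for i in range(count)]
    return [d for d in dates if d <= end]

for i, d in enumerate(reminder_dates(window_open, window_close), start=1):
    print(f"reminder {i}: {d:%a %b %d}")
# -> four reminders: Dec 01, Dec 04, Dec 07, Dec 10
```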
5.4.2 Recommendation
We recommend the following:
- Dedicate class time. Each instructor should set aside 10–15 minutes during the evaluation window for students to complete the SPLE online in class. The instructor displays the survey link or QR code, then leaves the room. This is the single most effective intervention for achieving high response rates.
- Send four automated reminders at 2–3 day intervals during the evaluation window, via email and Canvas notification.
- Integrate with the LMS. Embed the evaluation link within Canvas — as a dashboard notification, pop-up reminder, or course navigation item — so that the survey meets students where they already are.
- Encourage instructor communication. A brief mention on Day 1 (e.g., a syllabus note explaining that the SPLE asks about the student’s learning experience and that the data are taken seriously), repeated when the window opens, increases participation.
5.5 Framing the instrument to minimize bias
5.5.1 The evidence on anti-bias framing
A natural question is whether the instructions presented to students before they complete the evaluation can reduce the biases documented in the literature — particularly gender bias. The answer is nuanced: it depends entirely on what kind of framing is used.
Normative framing — generic appeals to fairness such as “Please evaluate your instructor fairly, regardless of their gender, race, or other characteristics” — has been shown to have no significant effect on evaluation outcomes. Boring and Philippe (2021) tested this directly in a large-scale field experiment at Sciences Po and found that a normative anti-bias warning produced no detectable change in the gender gap.
Informational framing — pairing the warning with institution-specific data showing that previous cohorts had evaluated male and female instructors differently — produced a markedly different result. In the same experiment, Boring and Philippe found that informational framing significantly reduced the gender bias, raising ratings of female instructors without affecting ratings of male instructors. The effect was driven primarily by male students’ evaluations of female instructors; female students’ ratings were not significantly affected by either treatment.
An important caveat: the evidence that informational framing reduces bias applies to structured Likert-scale items. It does not extend to open-ended responses, where the unstructured format gives bias room to operate regardless of how the prompt is framed. Owen, De Bruin, and Wu (2024) found that even directed, structured prompts — while they improved the specificity and constructiveness of open-ended comments — did not reduce gender bias. This is one of the reasons the committee considered removing open-ended questions from the summative instrument, and ultimately voted to retain them only under the structured prompts and guardrails described in Chapter 3.
5.5.2 Recommendation
The SPLE should open with a brief, concrete, data-informed preamble — not a generic “be fair” appeal, which the evidence shows is ineffective, but a factual statement that provides students with context about what the survey measures and what the research shows about evaluation biases. The preamble should:
- Name what the survey measures. Remind students that the SPLE asks about their own experience of the learning environment — not a verdict on the instructor as a person or professional.
- Provide specific information about documented biases. A brief, factual statement — e.g., “Research shows that students’ evaluations of their learning experience can be influenced by characteristics of the instructor unrelated to the learning environment, such as gender and race. Being aware of this tendency helps produce more accurate feedback.”
- Reinforce the survey’s purpose. The data are used to understand the student learning experience and to support faculty development and evaluation. Thoughtful, honest responses improve the quality of the data.
The name Student Perceptions of Learning Experience is itself a framing device. By directing attention to the student’s experience rather than to the instructor’s performance, the instrument’s name reinforces the experiential focus that the bias literature recommends. A sample preamble illustrating these elements follows.
Student Perceptions of Learning Experience
This brief survey asks about your experience in this course — the learning environment, your interactions with the instructor, and how you perceive the course was structured. It does not ask you to evaluate the instructor’s teaching ability or the course content.
Research shows that students’ responses to surveys like this can be influenced by characteristics of the instructor — such as gender, race, and accent — that are unrelated to the learning environment. Being aware of this tendency helps you provide more accurate feedback.
Your responses are anonymous and will not be shared with the instructor until after final grades have been submitted. Please respond thoughtfully and honestly.
5.6 Recommended implementation model
The following table synthesizes the evidence reviewed in this chapter into a concrete recommendation for administering the summative Student Perceptions of Learning Experience under Cal Poly’s semester calendar.
| Element | Recommendation | Rationale |
|---|---|---|
| When | Last two weeks of instruction before finals week | Literature consensus: last 1–2 weeks of instruction, before finals begin (Centra, 2003; Marsh, 2007) |
| Mode | Hybrid: dedicated class time for online completion | Single most effective method for achieving 70%+ response rates (Berk, 2013) |
| Class time | 10–15 min; instructor displays link/QR code, then leaves | The instrument is short and focused — feasible in under ten minutes |
| Framing | Informational preamble (data-informed, not generic) | Boring and Philippe (2021): informational framing reduces gender bias; generic appeals do not |
| Reminders | 4 automated reminders at 2–3 day intervals | Adams and Umbach (2012): each additional reminder, up to four, produces a significant increase in response rates |
| Results release | Only after final grades are submitted | Universally recommended; protects anonymity and reduces grade-anxiety bias |
Administering the SPLE in the final two weeks of instruction, with dedicated class time and an informational preamble, is an achievable model — it requires no new technology, no additional personnel, and minimal class time — and it reflects the best available evidence on how to implement a student survey that is both useful and fair.