§8.3.6. Dimension 3: Class Climate

This dimension addresses:

  • the extent to which the class climate reflects regard for students as persons and is supportive and cooperative;
  • whether it encourages motivation and engagement for all students;
  • whether all students feel included;
  • how student-student and student-instructor dialogue are fostered;
  • what students’ views of their learning experiences are; and
  • how the instructor has sought student feedback and used it to inform their teaching.

Sources of evidence: Syllabi, reflection, class observation, Student Perceptions of Learning Experience instrument, letters from students.

This is the only dimension assessed through the Student Perceptions of Learning Experience (SPLE) instrument, which asks students to report on six aspects of class climate.

§8.3.6.1. Interpreting Student Perceptions of Learning Experience results

Evaluators and candidates should interpret SPLE results with care, following the scoring, reporting, and visualization guidelines established in the “Student Perceptions of Learning Experience: Rationale and Broad Principles of Design” report. Key principles include:

  • Frequency distributions and percentages, not averages. SPLE responses are ordered categorical data. They must not be averaged; evaluators should examine the full distribution of responses rather than any single summary statistic.
  • No cross-comparisons. SPLE results must not be compared across instructors, courses, departments, or disciplines. Differences in scores may reflect demographic biases, course characteristics, or nonresponse patterns rather than differences in the learning environment.
  • No extrapolation. Results from respondents should not be extrapolated to non-respondents. Students who submit evaluations are a self-selected sample of convenience, not a random sample.
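The reporting principle in the first bullet can be sketched in code. The helper below is a hypothetical illustration, not part of the SPLE instrument: it tabulates ordered categorical responses as raw counts and percentages of respondents, and deliberately computes no mean or other summary statistic.

```python
from collections import Counter

# The five-point ordered categorical scale plus Not Applicable,
# as specified for the SPLE instrument.
SCALE = ["Strongly Agree", "Agree", "Neither Agree nor Disagree",
         "Disagree", "Strongly Disagree", "Not Applicable"]

def tabulate(responses):
    """Return {category: (count, percent of respondents)} for each
    scale category. No averaging: the full distribution is reported."""
    counts = Counter(responses)
    n = len(responses)
    return {cat: (counts.get(cat, 0),
                  round(100 * counts.get(cat, 0) / n, 1) if n else 0.0)
            for cat in SCALE}

# Hypothetical sample of 8 respondents. Per the guidelines, these
# figures describe the respondents only and are not extrapolated
# to non-respondents.
sample = ["Agree", "Agree", "Strongly Agree", "Disagree",
          "Agree", "Neither Agree nor Disagree", "Agree", "Not Applicable"]
for category, (count, pct) in tabulate(sample).items():
    print(f"{category:28s} {count:3d} ({pct:.1f}%)")
```

Note that the output lists every category, including those with zero responses, so the shape of the distribution is visible at a glance.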

§8.3.6.2. Inherent limitations of student evaluation data

Even when student survey items are framed as experiential reports about class climate, as in the SPLE, rather than as evaluative judgments about teaching effectiveness, evaluators must be mindful of the inherent limitations of student evaluation data. These limitations include, but are not limited to, the factors documented in the literature (Stark, 2026).

§8.3.6.3. Department-associated questions

Departments are not required to add questions to the Student Perceptions of Learning Experience instrument. The university-wide items are designed to provide a comprehensive assessment of class climate across six aspects, and many departments will find them sufficient.

Departments that wish to add questions should weigh the benefit of additional information against the cost of making the instrument more burdensome for students to complete. A longer survey reduces response rates, and lower response rates weaken the representativeness of the data — undermining the very information the additional questions are meant to provide.

If a department elects to add questions, those questions must meet the same standards that govern the university-wide items. The bar is high:

  • Students must be qualified to answer. The question must concern something students can report on from their own experience, without requiring disciplinary or pedagogical expertise.
  • Students must be able to answer with minimal bias. The question must elicit an experiential report, not an evaluative judgment. Items that ask students to assess teaching effectiveness, course quality, or instructor competence are not permitted, as these are the items the literature identifies as most susceptible to bias.
  • Closed-ended, structured items only. Department-associated questions must be closed-ended items on the five-point Likert scale. The university-wide instrument already includes open-ended questions with structured prompts and guardrails designed to minimize equity bias; there is no need for departments to add open-ended questions of their own.

Department-associated questions must use the same five-point ordered categorical (Likert) response scale as the university-wide items (Strongly Agree, Agree, Neither Agree nor Disagree, Disagree, Strongly Disagree, plus Not Applicable). They must be scored and reported identically to the university-wide questions — as frequency distributions of raw counts and percentages, with no numerical averages, no cross-comparisons, and no extrapolation from respondents to non-respondents. Every guardrail established in the Scoring and Reporting Guidelines of the “Student Perceptions of Learning Experience” report for the university-wide items applies in full force to department-associated questions, lest the protections built into the university-level instrument be undone at the department level.
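A minimal sketch of the required reporting format for a department-associated question follows. The function name, the example question, and the response counts are all hypothetical; the point is that the report states the response rate explicitly, lists raw counts and percentages of respondents for every category, and computes no average.

```python
# Same five-point ordered categorical scale plus Not Applicable
# as the university-wide items.
SCALE = ["Strongly Agree", "Agree", "Neither Agree nor Disagree",
         "Disagree", "Strongly Disagree", "Not Applicable"]

def report(question, counts_by_category, enrolled):
    """Format one question's results as counts and percentages of
    respondents. Percentages describe respondents only; nothing is
    extrapolated to the enrolled students who did not respond."""
    respondents = sum(counts_by_category.get(c, 0) for c in SCALE)
    lines = [f"{question} (respondents: {respondents} of {enrolled} enrolled)"]
    for cat in SCALE:
        count = counts_by_category.get(cat, 0)
        pct = 100 * count / respondents if respondents else 0.0
        lines.append(f"  {cat:28s} {count:3d} ({pct:.1f}%)")
    return "\n".join(lines)

# Hypothetical department-associated item and response tallies.
print(report("The lab sessions were welcoming.",
             {"Strongly Agree": 5, "Agree": 7,
              "Disagree": 2, "Not Applicable": 1},
             enrolled=40))
```

Because the percentages are taken over respondents and the response rate is shown alongside them, a reader cannot mistake the distribution for a statement about the whole class.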