Three years ago, in 2020, the assessment method implemented in the course Introduction to Computer Science (CS1), taught at the Technion's Faculty of Computer Science, changed from paper-based exams to executable exams. As Yael Erez (co-author of this blog) pointed out in her April 19, 2023 BLOG@CACM post How is History Shaping My Ph.D. Research? Unexpectedly!, it is accepted today throughout the Technion that executable exams are here to stay. No lecturer, teaching assistant, or student would even dare suggest a return to paper-based exams.
Each year, more than 1,000 Technion computer science, electrical engineering, and data science undergraduates enroll in the CS1 course. In such populous courses, it is almost impossible to evaluate and grade paper-based exams in a meaningful, consistent, and fair manner. Although this is a well-known observation, most CS1 courses worldwide still administer exams that are both written and assessed manually. Executable exams, in which students develop and submit computer programs in a software development environment during the exam, are, by contrast, consistent with the requirements of the field and enable automatic (and therefore consistent) assessment.
On July 7, 2021, we shared preliminary insights from the change process the Technion underwent in our BLOG@CACM post 10 Tips for Implementing Executable Exams. These insights did not, however, capture the deep cultural change associated with assessment in higher education that is required when the assessment method changes (among other things, in terms of infrastructures and roles). This blog aims to close this gap. Following Kotter's 8-step change model (Kotter, 2012), we analyze the organizational change that took place at the Technion when the assessment method in the CS1 course changed from paper-based exams to executable exams.
Using Kotter's change model for the analysis of the implementation process of executable exams in the Technion's CS1 course.
Table 1 reflects the process the Technion underwent over the last three years, in which it adopted executable exams in the CS1 course. We present this process by semester, according to the main properties of the assessment method implemented in the course during this three-year period, to highlight the stability of the change over the last four semesters.
In what follows, we analyze this process according to Kotter's 8-step model of organizational change. It is important to note that, in general, change processes are long and are susceptible to failure for a variety of reasons: organizational culture, bureaucracy, lack of teamwork, lack of leadership, people's fear of the unknown, and resistance to change, to list a few. In the following analysis, we provide details on how we addressed these challenges throughout the change process.
1) Create a sense of urgency.
During the Covid-19 pandemic, in the Spring 2020 and Winter 2021 semesters, all Technion students took their final exams remotely, online. This change in exam format applied also to the exam in the CS1 course. For the first online exam (2020 Spring semester), we decided to keep the paper-based exam and asked the students to scan their handwritten exams and submit them through the Technion's LMS. During the exam, however, it was easy (and somewhat absurd) to observe that the students were using the computer for everything but programming: they logged into a video call for proctoring purposes, downloaded the exam from the LMS, scanned their handwritten answers, and finally uploaded them back to the LMS. Based on this observation, the second time the students were tested remotely due to the pandemic (2021 Winter semester), we implemented the executable exam format for the first time. In the following semester of the 2021 academic year, however, when the exam was brought back on campus, a paper-based exam was administered once more.
This initial experience with executable exams highlighted the observation that a paper-based exam is not consistent with the main purpose of an introductory programming course, which is to let students gain meaningful programming experience (including technical, professional, and cognitive tools and habits) in preparation for their future studies and professional development.
To emphasize the urgent need to align the exam format with the demands of the field, during the 2021 summer semester, we scheduled a meeting with the Vice Dean of Undergraduate Studies at the Faculty of Computer Science. Following this meeting, a decision was made to gradually transition the assessment approach in the CS1 course to an executable exam format.
2) Build a guiding coalition.
To create a strong coalition to lead the change, we assembled a lead team that included several teaching assistants from the CS1 course and a lecturer from another Technion faculty who teaches an Introduction to CS course and had also recognized the opportunity to lead a change in the exam format.
3) Form a strategic vision.
We facilitated a brainstorming session with the course staff before the onset of the 2021 Winter semester, in which executable exams were applied for the first time. In this session, we formalized guidelines for managing an executable exam, with special focus on pedagogical and technological issues, to ensure that the executable exam would be fair, valid, and reliable. For example, we decided to (a) let the students use the same development environment during the exam that they used in their home assignments during the semester; (b) retain the same question format used in the paper-based exam (at least in the preliminary stages of the transition process); (c) check the exams manually in addition to the assessment of the automatic grader; and (d) establish a support center to operate during the exam to resolve technical problems.
4) Enlist a volunteer army.
In order to get the word out about the change process and its advantages, and to start establishing an "army" of supporters of the transformation process, we took the following measures:
5) Enable actions by removing barriers.
To enable the smooth administration of the executable exams, we removed technological and psychological barriers. Specifically,
6) Generate short-term wins.
Short-term wins in the process of educational change, as described in this blog, can be demonstrated in various ways. For instance, we can show that students acquire more valuable skills with fewer resources (Hazzan, 2020) or that the distribution of grades does not disproportionately favor or disadvantage any specific subgroup of students, thus benefiting the entire student body.
In the 2022 Winter semester, in response to several concerns raised about the validity and equity of executable exams, we designed an experiment to compare the achievements of students in the same course who are assessed using two different formats. Specifically, the 1,000 students who enrolled in the CS1 course that semester were given the opportunity to take an executable end-of-semester exam, and 300 students chose this option. Using stratified sampling by gender, age, and faculty, half of these 300 students were selected to take the executable end-of-semester exam. Thus, we compared the achievements of three groups of students: a) students who took the executable exam; b) students who chose to take the executable exam but were not selected and took the paper-based exam; and c) students who did not choose to take the executable exam and took the paper-based exam.
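To illustrate the selection procedure, the following is a minimal sketch of stratified sampling in Python. The strata, field names, and group sizes are hypothetical and are not the actual data or procedure used in the study; the point is only that half of the volunteers are drawn within each stratum, so the selected and non-selected groups have matching compositions.

```python
import random

def stratified_half(students):
    """Select half of the volunteers within each (gender, faculty) stratum."""
    strata = {}
    for s in students:
        strata.setdefault((s["gender"], s["faculty"]), []).append(s)
    selected = []
    for group in strata.values():
        random.shuffle(group)                  # randomize within the stratum
        selected.extend(group[: len(group) // 2])  # take half of each stratum
    return selected

# Hypothetical pool of 300 volunteers, 50 per (gender, faculty) stratum.
volunteers = [
    {"id": i, "gender": g, "faculty": f}
    for i, (g, f) in enumerate(
        [("F", "CS"), ("M", "CS"), ("F", "EE"),
         ("M", "EE"), ("F", "DS"), ("M", "DS")] * 50
    )
]
chosen = stratified_half(volunteers)
print(len(chosen))  # 150: half of the 300 volunteers
```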
The results of these comparisons revealed that student outcomes in this course were very consistent. As Table 2 shows, the differences between the average grades of the three groups of students were very small and can be explained by the different characteristics of the students who chose to take an executable exam. Due to space limitations, we will not delve into the details of these characteristics here.
Differences in terms of gender were also compared. Results showed that the differences in exam grades between males and females were consistent in the paper-based and executable exam groups. All of these insights were made available to the students and teaching assistants.
7) Sustain acceleration.
The change in the exam format impacts the way we teach and the way students learn. To sustain the change in the exam format, the teaching and learning material should therefore be updated. Thus, as part of our collaboration with the Center for Promotion of Learning and Teaching (presented above in Step 4, Enlist a volunteer army), lecture slides, tutorials, and the formative assessment tasks were all adapted to the new summative assessment format. For example, since in executable exams students perform an authentic software development process, automated tests cannot be neglected, and the art of code testing is now included in the course material.
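As a flavor of the kind of testing habits now taught, consider the following sketch. The function and test cases are illustrative, not actual course material: a student writes a small solution and then checks it with simple assertions before submitting.

```python
def count_vowels(s):
    """Return the number of vowels in s (case-insensitive)."""
    return sum(1 for ch in s.lower() if ch in "aeiou")

# Simple assertion-based tests, as a student might run before submitting.
assert count_vowels("Technion") == 3
assert count_vowels("") == 0
assert count_vowels("BCD") == 0
print("all tests passed")
```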
8) Institute change.
Assimilating the change in the assessment format required several institutional changes in the basic organizational routines. For example, a plugin called VPL (Virtual Programming Lab), which automates the assessment process and aligns students' work during the semester with the end-of-semester exam format, was added to Moodle (our LMS); a low-stakes midterm exam was added as of the 2022 Summer semester, to provide students with an opportunity to experience an executable exam prior to the end-of-semester exam; and classrooms equipped with an electrical outlet for each student are now automatically assigned for the executable exam.
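The core idea behind such automated assessment can be sketched as follows. This is a minimal, generic output-comparison grader in the spirit of VPL-style grading, not the actual Technion or VPL implementation; the test cases and the sample submission are hypothetical.

```python
import os
import subprocess
import sys
import tempfile

# Each test case is a (stdin, expected stdout) pair; these are illustrative.
TEST_CASES = [("3 4\n", "7\n"), ("10 -2\n", "8\n")]

def grade(program):
    """Run the submitted program on each test case and return a 0-100 score."""
    passed = 0
    for stdin_data, expected in TEST_CASES:
        result = subprocess.run(
            [sys.executable, program], input=stdin_data,
            capture_output=True, text=True, timeout=5,
        )
        if result.stdout == expected:
            passed += 1
    return 100 * passed // len(TEST_CASES)

# Demo with a tiny sample "submission" that adds two integers.
with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
    f.write("a, b = map(int, input().split())\nprint(a + b)\n")
    path = f.name
print(grade(path))  # 100
os.remove(path)
```

A real grader would of course add sandboxing, partial credit, and per-test feedback, but the output-comparison loop above is the essential mechanism that makes the grading automatic and consistent.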
To summarize the change process that has taken place up to this point in time, we highlight two important lessons that we have learned so far, the first focusing on the people, the second on the process:
Hazzan, O. Is A (Nearly) Zero-Cost Model Plausible for Science and Engineering Programs?, BLOG@CACM, May 27, 2020.
Kotter, J. P. Leading Change. Harvard Business Review Press, 2012.
Yael Erez is a lecturer at the Technion's Faculty of Computer Science and a staff member at the Department of Electrical Engineering at the Braude College of Engineering in Karmiel. She is currently a doctoral student at the Technion's Department of Education in Science and Technology, under the supervision of Orit Hazzan. Orit Hazzan is a professor at the Technion's Department of Education in Science and Technology. Her research focuses on computer science, software engineering, and data science education. For additional details, see https://orithazzan.net.technion.ac.il/ .