The wonderful Rissa Sorensen-Unruh put a good question on Twitter:
And she and some other folks were nice enough to entertain my ideas, and then she pointed out that I’d basically started a blog post and I should go finish it so she could refer to it without relying on Twitter.
The first thing I noticed was that there are two questions here. Should we be using automated proctoring, and should we be proctoring at all?
The first question is easy for me to answer today. I simply haven’t seen a single lockdown browser or AI proctoring software option which I think delivers more value than it causes in harm. Look at the student experiences collected at https://twitter.com/Procteario – these are products which crash student computers, which distract and distress and insult students, which introduce the cover of “algorithms” where humans should be responsible for their judgment. And they visit these consequences worst on our most vulnerable students. So as of right now, I think colleges and universities shouldn’t be paying for these services.
(Do I believe that forever? Probably not. I can imagine a world where an AI proctor is closer to a teacher than a prison guard. But the language these systems use to sell themselves doesn’t convince me their designers can imagine it.)
The second question is actually really complicated. Should we proctor? Well, I think having a proctor for a driving test is probably a good idea, yes. When we’re actually evaluating process, not just outcome, we have to have an observer. We’re not really talking about those tests when we talk about the kind of proctoring you can outsource, of course, but I think it’s useful to consider the kind of test where the observer adds value.
Similarly, I think it’s possible to agree that there are a set of examinations where the consequences of cheating would be really, really high, and it’s appropriate to impose barriers to success to make sure cheaters don’t get through. Medical boards come to mind; I think it’s pretty important that we not license doctors who cheat. For me, this line is somewhere around “matters of public safety” but more nuanced thought about the definition of “consequences” would be worthwhile.
We’ve started with a troubling assumption: “assume a cheater.” When we start with the assumption that every class includes a student who is, in essence, trying to steal their degree, then all sorts of choices become justifiable, and not only justifiable but necessary. Prioritizing catching cheaters also sends the message that cheating is common, which can’t be a good message to send students.
I’m reminded that a long time ago I did some research into honor codes. One of the findings which stuck with me (though I can’t find it at the moment) was that the presence of an “honor code” per se wasn’t nearly as important as the presence of an active discussion about academic integrity on campus. This is the approach which starts by questioning the assumption – what can we do to have fewer cheaters? How do we inculcate a positive value for academic integrity, instead of a fear of being caught cheating? Could we just make the amount of cheating go down?
This is related to the question of how you build assessments and courses which students don’t want to cheat on, which, you’ll notice, is at the heart of the original debate. There are lots of approaches to this, using language like “authentic assessments” and “nondisposable assignments.” And they’re all really good ideas, since they get at moving students toward practicing more complex skills, hopefully in more motivating environments. That’s what we’re all looking for, right?
Well, sort of. Every discipline does have some body of knowledge which you just need to have in your head to be successful. An “authentic assessment” might be an indirect way to test that knowledge, while a more direct test might be a better measure of exactly what’s known and what isn’t. If you’re looking to identify the specific areas where a student and a teacher need to focus their efforts, less “authentic” measures might make sense.
We’ve shifted into the zone of formative assessment, though. If the point is less to take a measurement than to use that measurement to further learning, then suddenly we’re in the zone of retrieval practice and spaced repetition and automatic re-takes on quizzes… in other words, strategies which don’t particularly require proctors. (At least not if the students actually understand why we’re using these approaches.)
I’m reminded of Jim Lang’s fantastic Cheating Lessons, which looks at the ways in which course and assignment design can incentivize or remove the incentives to cheat. One of the things I really like about this book is the way Lang offers both an extreme example of a course which has been radically redesigned to address a reason students cheat, and some less extreme examples of courses which make smaller changes.
Because these things are more work, right? Even the more liberal approaches to quizzing imply spending more time writing good quiz questions. Authentic assessments might mean searching out authentic but appropriate data sets; they mean spending more time coaching students and eventually more time grading more complicated assignments. This is the work of good teaching, but it is labor that has to come from somewhere. It can’t be by accident that I see lots of good teaching innovations which return in a year, scaled back a little. Faculty members find out which parts work best and trim the less valuable parts away. Much the way their students do.
And more complex assessments are also labor for our students. Many of our students are already stretched to their max between commitments inside and outside the classroom. For students who need to learn about prioritization and letting some things be “good enough,” the “opportunity” to work harder, learn more, and demonstrate their accomplishments more meaningfully may be a double-edged sword. I don’t in any way mean that classes should be dumbed down, but scale is important, and it needs to be considered across the whole student experience.
Speaking of “good enough”, it’s time to not write my cool closing paragraph and just hit publish…
If you read that far, you deserve something, so here’s the clip where I learned that some people call proctors “invigilators”.
The original thread is really good and I encourage you to check it out. Lots of folks had good points which fed into any good points I made.