AI Is Finding Uses in Monitoring Live Testing
AI in live proctoring works like a smoke detector: it alerts humans to a potential problem, and humans, not technology, make the judgments.
Artificial intelligence, or AI, may sound scary. And it’s anything but perfect.
People get nervous when they hear about AI being used in test and exam proctoring as part of a system that deters and detects cheating. That’s understandable. But that reaction is also limited, and it overlooks the breadth of what AI tools can do, and, more importantly, what they don’t do, during remote test monitoring.
Let’s take a quick look.
To begin, it helps to approach questions about AI in test proctoring with the knowledge that the AI tools and approaches used during an exam can differ substantially. They run the gamut from none at all to a wide array of indicators that can monitor and evaluate everything from background noise to keystroke speed and accuracy. The mere presence of a proctoring solution in a remote test does not mean it employs AI.
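To make a signal like “keystroke speed and accuracy” a little more concrete, here is a minimal, purely illustrative Python sketch of the kind of typing-rhythm measurement such a tool might compute. The function names, numbers, and threshold are hypothetical, not any vendor’s actual implementation.

```python
# Illustrative sketch only: names, numbers, and the threshold are
# invented, not any proctoring vendor's actual implementation.
from statistics import mean, stdev

def keystroke_features(key_times):
    """Compute simple typing-rhythm features from keypress timestamps (seconds)."""
    gaps = [b - a for a, b in zip(key_times, key_times[1:])]
    return {
        "mean_gap": mean(gaps),    # average time between keystrokes
        "gap_stdev": stdev(gaps),  # how consistent the rhythm is
    }

# A sudden change in rhythm mid-exam might be surfaced to a human reviewer.
# It is a signal, not a verdict.
baseline = keystroke_features([0.0, 0.31, 0.58, 0.90, 1.24, 1.50])
current = keystroke_features([5.0, 5.05, 5.11, 5.16, 5.22, 5.27])
if current["mean_gap"] < 0.5 * baseline["mean_gap"]:
    print("Typing rhythm changed sharply: queue for human review")
```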
It’s also crucial to remember that where a test falls on that spectrum of review and monitoring is largely determined by what the test provider, the institution, wants. Remote test proctors follow the norms and procedures established by the schools or professors; they do not invent their own or use anti-cheating software the schools have not approved. To put it another way, if a test uses AI, it’s almost certainly because the school requested it, and for good reason.
What Exactly Is AI, and What Does It Do?
Let’s take a quick look at what AI is and what it does, keeping in mind that AI and test proctoring are not the same thing, and that schools, not proctoring companies, make the decisions about whether and how it is used.
At its core, AI is a system that gathers and evaluates data. In that sense, it’s similar to an exam: it collects information and scores it on a scale. What sets AI apart is its ability to “learn” from both its successes and its failures. AI systems become more accurate as they are used.
That brings us to what AI cannot do during remote testing. It does not determine who is cheating, and I cannot stress this enough. AI does not “flag” an act as cheating, and it does not penalize students. There is no specific score separating cheating from not cheating, no rule that says looking away from your screen twice in a minute is fine while three times is failing. That is simply not the case.
The reason is that the AI systems used in our test proctoring simply alert humans. The decisions are made by humans. Perhaps not every proctoring service follows this practice, but they should.
Here Is an Example of AI Monitoring Test Takers
Let me give you an illustration. If a student answers complex engineering questions faster than 99.5 percent of other test-takers, that could be cause for concern. It’s possible they knew the questions and answers ahead of time. It could also mean the person taking the test is a well-prepared engineering genius. In this case, an AI system might notify a test proctor of the unexpected event, but the proctor, and ultimately the professor, will decide whether the test-taker is a genius or a scallywag.
That last sentence is crucial. Even if this hypothetical student sets off the AI alarm because of how rapidly they respond, and even if a reviewing proctor alerts the professor, the professor may determine it’s acceptable. Perhaps they know the student is a top academic performer. Alternatively, they may decide that more investigation is required because the student has never attended a single class. Professors and school staff, not AI, decide what constitutes wrongdoing and what should be done about it.
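For readers who want to see the shape of that logic, here is a minimal sketch of a percentile-style flag. All of the numbers and names are invented for illustration; note that the output is a request for human review, never a verdict.

```python
# Minimal sketch of a percentile-style flag. All numbers and names here
# are invented for illustration; real systems are far more nuanced.

def completion_percentile(my_time, all_times):
    """Fraction of test-takers who took longer than this one."""
    slower = sum(1 for t in all_times if t > my_time)
    return slower / len(all_times)

def review_if_unusual(my_time, all_times, threshold=0.995):
    # The AI's only job here is to surface the event to a person.
    if completion_percentile(my_time, all_times) >= threshold:
        return "Unusually fast completion: send to a proctor for human review"
    return None  # nothing notable: no flag, no penalty

times = [42, 45, 51, 38, 60, 47, 55, 49, 44, 58]  # minutes, invented data
print(review_if_unusual(12, times))  # finishing in 12 minutes gets surfaced
```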
AI Is the Future
In this sense, AI functions much like a smoke detector in your home. Yes, it is smarter, but it performs a comparable function. A smoke detector watches day and night for one thing, and when it finds it, it sounds an alarm. A human must then decide whether the meatloaf stayed in the oven too long or whether it’s time to grab the children and dogs and flee. Like a smoke detector, AI can warn people about situations they might otherwise overlook. People, though, make the decisions.
Furthermore, because AI “learns” from what it “gets right,” the tools make fewer mistakes and raise fewer false alarms the longer they work. Where an older AI system might have flagged an uncommon event when someone sneezed, the system will correct itself after a few “corrections” by humans, and sniffling will no longer be highlighted. That’s a good thing. We want AI systems, indeed, all systems, to be accurate and to improve over time.
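As a loose illustration of that feedback loop (the update rule and numbers below are invented for this sketch, not a description of any specific product), imagine a noise flag whose sensitivity shifts whenever human reviewers mark its alerts as false alarms:

```python
# Loose illustration of human feedback tuning a flag's sensitivity.
# The update rule and all numbers are invented for this sketch.

class NoiseFlag:
    def __init__(self, threshold=0.3):
        self.threshold = threshold  # audio level that triggers an alert

    def alert(self, audio_level):
        return audio_level > self.threshold

    def correct(self, was_real_problem):
        if not was_real_problem:
            # Human says "that was just a sneeze": become less sensitive.
            self.threshold *= 1.1
        else:
            # Human confirms a real issue: become slightly more sensitive.
            self.threshold *= 0.95

flag = NoiseFlag()
print(flag.alert(0.35))   # True: a sneeze trips the alarm at first
for _ in range(3):        # a few sneezes marked harmless by reviewers...
    flag.correct(was_real_problem=False)
print(flag.alert(0.35))   # False: the same sneeze no longer trips it
```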
When actual people are combined with AI, the result can be quite powerful. Humans can correct and improve AI, and AI can warn humans about things they might otherwise overlook. Both improve over time as a result of the relationship. The dirty secret is that AI is used just as often to help our proctors improve as it is to “detect” cheating, especially in online proctoring.
There is no “score” by which an AI system can determine cheating in proctoring and evaluation. And there is no system in place that will use an “AI score” to decide a grade or academic outcome. Schools and instructors should not attempt to use proctoring or AI tools that way, stripped of human decision-making. The systems were not built for it, and they are unlikely to be capable of it.
My Opinion
The bottom line is that the AI systems used for online exam proctoring aren’t everywhere, they’re often highly specialized, and they’re never used in place of humans. Even though these tools can detect things humans can’t, Robocop isn’t watching anyone take a test, and Big Brother isn’t tracking anyone down. AI simply does not work that way.