Professors Who Use AI Get Better Results in the Classroom
When I began teaching data science and artificial intelligence at Duke University's Pratt School of Engineering, I was disappointed by how little insight I had into how effective my teaching was until the final exam grades and student evaluations came in at the end of the semester.
Being new to teaching, I spent time reading about pedagogical best practices and learned that strategies such as mastery learning and personalized one-on-one instruction can significantly improve student performance. Yet even with my relatively small class sizes, I did not feel I had enough insight into each student's learning to offer valuable personalized guidance. In the middle of the semester, if you had asked me exactly what a particular student had learned so far and where he or she was struggling, I could not have given you a clear answer. When students came to me for one-on-one coaching, I had to ask them where they needed guidance and hope they were self-aware enough to know.
Knowing that my colleagues in other programs and universities teach much larger classes than mine, I asked them how aware they were, at any given point, of each student's degree of mastery. Most acknowledged that they too were largely "flying blind" until the final evaluation results came in. Historically, the tradeoff between scale and quality of instruction has been one of the most vexing problems in education: as class sizes grow, a teacher's ability to offer the kind of personalized guidance that learning science research has shown to be most effective diminishes.
But as educators in the new world of online education, we have access to ever-increasing amounts of data that can offer insight into individual student learning: recorded lecture videos, electronically submitted assignments, discussion boards, and online quizzes and assessments. In the summer of 2020, we started a research project at Duke to explore how we could use this data to do our jobs better as instructors. The basic question we set out to answer was: "As an instructor, how can I use the data available to me to provide my students with effective personalized guidance?"
Tracking Student Engagement
What we wanted to know was: for any given student in a class, at any point during the semester, what content have they mastered, and what are they struggling with? Knowledge Space Theory, introduced by Doignon and Falmagne in 1985 and greatly expanded since, holds that a given "domain" of knowledge (such as the subject of a course) consists of a distinct set of topics (or "items") that often depend on one another. The set of topics a student has mastered to date is called their "knowledge state." Understanding each student's knowledge state at any point is essential both for teaching the whole class effectively and for giving individual students personalized guidance.
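To make the idea concrete, here is a minimal sketch of Knowledge Space Theory's core notions in Python. The topic names and prerequisite relations below are purely illustrative, not drawn from my course:

```python
# A domain is a set of topics with prerequisite relations; a student's
# "knowledge state" is the subset of topics they have mastered.
# Topic names and prerequisites here are illustrative only.

PREREQS = {
    "variables": set(),
    "loops": {"variables"},
    "functions": {"variables"},
    "dataframes": {"functions"},
    "visualization": {"dataframes"},
}

def is_feasible(state: set[str]) -> bool:
    """A state is feasible if every mastered topic's prerequisites
    are also mastered (states are 'closed' under prerequisites)."""
    return all(PREREQS[topic] <= state for topic in state)

def outer_fringe(state: set[str]) -> set[str]:
    """Topics the student is ready to learn next: not yet mastered,
    but with all prerequisites already in the state."""
    return {t for t in PREREQS if t not in state and PREREQS[t] <= state}

student = {"variables", "loops", "functions"}
print(is_feasible(student))   # True
print(outer_fringe(student))  # {'dataframes'}
```

The outer fringe is what makes the model useful for guidance: it tells you exactly which topics a student is prepared to tackle next.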
But how does one determine a student's knowledge state? The most common approach is through assessment: homework, quizzes, and tests. Each week, I give my students low-stakes formative quizzes. Each quiz contains around 10 questions, with about half testing knowledge of topics covered in the previous week's lectures and the other half covering topics from earlier in the course. In this way, I continually assess students' mastery of topics from across the entire course. We also assign weekly homework, which measures many of the subjects covered to date.
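As an illustration, a weekly quiz with this half-new, half-review composition could be assembled as follows; the question-bank structure and field names are assumptions for the sketch:

```python
# A hypothetical sketch of the weekly quiz composition described above:
# ~10 questions, half on the most recent week's topics and half
# spiraling back over earlier material.

import random

def build_weekly_quiz(question_bank, current_week, n_questions=10):
    """question_bank: list of dicts like
    {"id": "q17", "topic": "dataframes", "week": 3}."""
    recent = [q for q in question_bank if q["week"] == current_week]
    earlier = [q for q in question_bank if q["week"] < current_week]
    half = n_questions // 2
    # Half the quiz from the latest week's material...
    quiz = random.sample(recent, min(half, len(recent)))
    # ...and the rest sampled from everything taught earlier.
    quiz += random.sample(earlier, min(n_questions - len(quiz), len(earlier)))
    random.shuffle(quiz)
    return quiz
```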
But sorting through dozens or hundreds of quiz and homework question results for tens or hundreds of students, looking for patterns that reveal their knowledge states, is no easy task. Successful teachers already wear many hats: delivering compelling lectures, creating and grading homework and assessments, and more. Most teachers are not trained data scientists, nor should they have to be to do their jobs.
This is where machine learning comes in. Machine learning excels at recognizing patterns in data, and in this case it can be used to infer students' knowledge states from their patterns of performance on quizzes and homework.
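The specific algorithm isn't detailed here, but as one illustration of the pattern-recognition idea, you could cluster students by their per-topic accuracy vectors so that groups with similar knowledge states emerge. This sketch uses scikit-learn's KMeans on toy data as a stand-in:

```python
# Illustrative only: group students with similar knowledge states by
# clustering their per-topic accuracy vectors (toy data).

import numpy as np
from sklearn.cluster import KMeans

# Rows = students, columns = fraction correct on each topic.
topic_accuracy = np.array([
    [0.90, 0.80, 0.20, 0.10],  # strong on early topics, weak on later ones
    [0.95, 0.90, 0.85, 0.80],  # strong throughout
    [0.85, 0.75, 0.30, 0.20],
    [0.90, 0.95, 0.90, 0.70],
])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0)
labels = kmeans.fit_predict(topic_accuracy)
print(labels)  # e.g., [0 1 0 1]: two groups with distinct knowledge states
```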
What Does AI in the Classroom Look Like?
To help improve my teaching and that of my fellow faculty in Duke's AI for Product Innovation master's program, we set out to build a tool that, given a set of class quiz and homework results and a list of the course's learning topics, could identify each student's knowledge state at any time and present that information to both teacher and student. This supports more effective personalized instruction by the teacher and gives students a clearer view of where they need to focus additional study. By aggregating this information across the class, a teacher can also see where the class has learned the content well and where concepts need reinforcement.
The project resulted in a prototype tool we call the Intelligent Classroom Assistant. The tool takes in the results of instructor-provided quizzes or homework, along with the learning topics covered in the course so far. It then uses a machine learning algorithm to analyze the data and present the teacher with three automated views: quiz and homework questions the class has struggled with, learning topics the class has and has not mastered, and each individual student's results.
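Here is a rough sketch of those three roll-ups, assuming quiz results arrive as one row per student-question pair with a mapped topic; the column names and the mastery threshold are illustrative assumptions:

```python
# Sketch of the three views, on a toy long-format results table.

import pandas as pd

results = pd.DataFrame({
    "student":  ["ana", "ana", "ben", "ben", "ana", "ben"],
    "question": ["q1", "q2", "q1", "q2", "q3", "q3"],
    "topic":    ["loops", "loops", "loops", "loops", "functions", "functions"],
    "correct":  [1, 0, 1, 1, 0, 1],
})

# 1. Questions the class struggled with (lowest fraction correct first).
hard_questions = results.groupby("question")["correct"].mean().sort_values()

# 2. Topics the class has and has not mastered (threshold is illustrative).
topic_mastery = results.groupby("topic")["correct"].mean()
not_mastered = topic_mastery[topic_mastery < 0.7]

# 3. Each student's per-topic results, for one-on-one coaching.
per_student = results.pivot_table(index="student", columns="topic",
                                  values="correct", aggfunc="mean")
print(per_student)
```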
One of the main challenges in building the tool was mapping each quiz question and homework problem to the learning topic it primarily assesses. I built a custom algorithm to accomplish this, using natural language processing and drawing on open-source libraries to interpret the meaning of each question and map it to the primary learning topic it was intended to test.
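The specific libraries and method aren't named here, but one simple way to do this kind of mapping is to embed each question and each topic description as TF-IDF vectors and assign the question to its most similar topic. The topic descriptions below are invented for illustration:

```python
# A minimal sketch of question-to-topic mapping via TF-IDF and cosine
# similarity; topic descriptions and the question are made up.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

topics = [
    "writing SQL queries to extract data from relational databases",
    "scraping and parsing data from web pages",
    "data privacy regulation and research ethics",
]
questions = ["Which SQL clause filters rows returned by a SELECT statement?"]

# Fit the vocabulary on both topics and questions, then embed each.
vectorizer = TfidfVectorizer().fit(topics + questions)
topic_vecs = vectorizer.transform(topics)
question_vecs = vectorizer.transform(questions)

# Assign each question to the topic with the highest cosine similarity.
similarity = cosine_similarity(question_vecs, topic_vecs)
best_topic = similarity.argmax(axis=1)
print(topics[best_topic[0]])  # -> the SQL topic
```

In practice, a production mapper would likely use richer sentence embeddings than TF-IDF, but the assign-to-nearest-topic structure is the same.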
Testing the Tool
The Intelligent Classroom Assistant was developed while I was teaching Sourcing Data for Analytics at Duke, an introductory data science course for graduate engineering students that covers technological as well as regulatory and ethical issues. This allowed me to try the tool on my own class as the semester progressed.
One of the main things I wanted to test was how well the algorithm under the hood of the tool could categorize each quiz or homework question into the most relevant of the 20 learning topics covered in the course. On the full set of 85 quiz questions I used during the semester, the algorithm correctly identified the related learning topic about 82 percent of the time. That is not perfect, but it was good enough to make the tool's analysis useful to me.
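For reference, evaluating such a mapping is straightforward: compare the algorithm's predicted topic for each question against a hand-labeled topic and report the fraction that agree. The labels below are made up:

```python
# Toy evaluation of a question-to-topic classifier against hand labels.

predicted = ["sql", "scraping", "ethics", "sql", "apis"]
gold      = ["sql", "scraping", "ethics", "apis", "apis"]

accuracy = sum(p == g for p, g in zip(predicted, gold)) / len(gold)
print(f"{accuracy:.0%}")  # 80%
```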
I used the prototype in two main ways during the course to inform my teaching. When the tool flagged learning topics and related quiz questions on which students had performed poorly, I spent extra time covering them in lecture sessions. And during one-on-one support sessions, I used the tool's individualized student analysis to understand where each student needed extra help, making tutoring sessions more focused.
It's too early to measure whether the tool improved student outcomes, since the course I used it in was new and there is no historical benchmark for comparison. But we are expanding its use this year and working to determine its impact on student engagement and performance. We are trying it out in another engineering class of 25 students and in an undergraduate finance class of more than 200. I also plan to use the prototype in my spring machine learning class to guide my teaching throughout the semester. And since students can benefit from seeing the tool's analysis as much as teachers do, we plan to add a student portal in the spring that lets students view their own results and gives them customized study recommendations based on their identified knowledge state.
Conclusion
Teachers now have access to a wealth of electronic data that can help support their teaching. But educators are not (usually) data scientists, and they need analytics tools to help them extract value from that data. As beneficial as these tools are, however, their value depends directly on how well a teacher defines the course's learning goals and structures the materials and assessments to support and measure those goals.
In addition to helping teachers enhance the quality of their classes (as measured by student learning outcomes), machine learning tools such as the Intelligent Classroom Assistant will enable them to do so at increasing scale, offering the promise of widespread personalized teaching. When teachers can teach more effectively, students learn more, and we all benefit as a community.