Imagine a society where every student has access to a private tutor. A society where students receive instant feedback. A society where every teacher in America is supported by a teaching assistant. In the 21st century, technology offers a pedagogical mechanism that can accomplish these feats: artificial intelligence.
In November 2022, OpenAI launched its free generative artificial intelligence (AI) tool, ChatGPT. By January 2023, less than two months after its launch, ChatGPT reached over 100 million users. To put that into perspective, it took Netflix three and a half years to surpass one million users. ChatGPT, however, will be just a blip in the history of AI. Machine learning technology will disrupt much of our society, human learning included. The invasion of AI is already overturning the educational ecosystem; that much is obvious to most students and teachers. The key issue still unaddressed in most classrooms is how students and teachers will adapt to AI. Is AI doomed to be a parasite, sucking us dry of our intellectual ability and integrity? Or can we carve out a symbiotic niche where AI benefits teachers and students alike?
Unfortunately, as we approach 2024, education is in a critical state. A recent Pew Research Center study revealed that 82 percent of K-12 teachers believe education has worsened over the past five years and are pessimistic about the future. However, AI in education holds the potential to support schools where they struggle. With a wealth of readily available resources and lessons, educational AI tools can help bridge gaps in access. Teachers can save approximately five to ten hours of work each week using generative AI and offer students instant feedback and individualized attention. Moreover, many educational AI tools are free or low-cost, making them accessible to underfunded education systems that could greatly benefit from technological advancements.
Despite this promising potential, AI has not yet made a significant impact. A 2023 study found that only 18 percent of educators were using AI in the classroom. The reluctance to use generative AI is primarily due to fear. Teachers across the country, including many at the high school, see AI as a harbinger of modern education’s collapse. In most classrooms, this has led to a total ban on AI use, both in and out of class. Teachers have seen banning AI as the only solution to plagiarism, and their worry is justified. AI makes cheating a private, easily accessible endeavor. No more peeking over someone’s shoulder to copy their homework; you can just copy and paste into ChatGPT and copy and paste back out. If students use ChatGPT mindlessly, they will extract answers without comprehending or processing the information, and over time that habit erodes the very cognitive skills school is meant to build.
This crisis, however, is overblown, and a total ban does little to address it. According to a Stanford University study, the number of students admitting to cheating in school remained the same before and after the release of ChatGPT. Additionally, Turnitin, an AI-detection service known for false positives, reported that only three out of every 100 assignments it evaluated were generated mostly by AI. Students who want to cheat with AI won’t be discouraged by a school ban; they will simply be more discreet about it, and AI-detection services have proven essentially useless thanks to their inconsistent results and frequent false positives. With the structure of assignments unchanged, students who would cheat will cheat, simply by working around the ban. This is the core of the AI education issue, one we will inevitably have to address as the technology grows: How do we capture the vast potential of AI for teachers and students without enabling or excusing cheating? If a wave of a magic wand could make everyone stop cheating, or better yet shift the culture around education to be more learning-based and less achievement-based, we would wave it. But those changes will not come easily, and if we address AI properly with guardrails, policy and curricular changes, its potential to improve education will outweigh the downsides.
AI must be sculpted into a tool specifically for education if it is to be used. If a teacher wants to allow a student to use AI on an assignment, they might first need to reshape the traditional assignment into something different. For example, rather than having AI generate a thesis for a student, the student first comes up with their own thesis and then works with AI to revise it. Traditionally, each student would brainstorm multiple theses and then meet with the teacher individually, eating up a whole day of class; now, a student can complete that process in minutes. Luckily, the weight of shifting curricula won’t fall on teachers alone. A plethora of AI-based education services are already in development or in beta use that will create unbiased and thoughtful curricula while also learning from student input.
The University of Pennsylvania conducted a study comparing how free-rein use of AI and TutorGPT, OpenAI’s educational version of ChatGPT, affected students. The students without guidelines performed worse than students not using AI at all. Conversely, students using TutorGPT performed better than both students using AI freely and students using traditional study methods. TutorGPT, however, lacks guardrails: a student could easily ask it to write an assignment for them. In the AI in BHS club, we set out to find and test generative AI resources to determine which would be best suited for students at the high school. We found that Khanmigo, Khan Academy’s AI teaching assistant and tutoring software, had the greatest range of capabilities, along with the strongest guardrails against cheating.
With this in mind, we decided to study the potential effects of Khanmigo in classrooms at the high school and to gather student and teacher feedback through a survey. The subject generative AI will affect most immediately is English, so we tested Khanmigo in a multi-grade SWS Writers of Color Honors English class reading “Giovanni’s Room” by James Baldwin. Our study focused on whether teacher materials, in this case discussion prompts, could be generated effectively and efficiently with the assistance of AI. The efficiency was clear: all Khanmigo needed was the book and chapter (though a more specific outline probably would have given better results), and it generated a discussion prompt for students to respond to.
Generally, students thought the prompts were satisfactory. Out of 14 surveyed, 11 said the assignment was worthwhile to do, one was unsure and two said it was not. Students also felt that it promoted deeper engagement with the content; one noted that as they began to write, they found more and more details they hadn’t noticed in the chapter. When asked how this assignment compared to others, a student shared, “It’s pretty similar, which is scary.” Multiple students expressed this sentiment, illustrating that schools need to do a better job of improving students’ understanding of AI and its applications. We also asked students whether they cared if discussion prompts were created by humans or computers. The responses were roughly split: some students had no preference, while others felt that teachers should be putting in the effort themselves. It is also important to recognize the privilege of Brookline students, who expect teachers who truly care about their students and the classes they teach; many students in other districts are not as fortunate and so may care less about where a prompt comes from. All in all, the implementation of Khanmigo was a success. Students were happy, teachers saved time and AI didn’t take over the world.
As AI improves, so will these educational tools, and our policies as a school will have to change with them, both to accommodate new technology and to guard against its misuse. The key is to shift how our school’s culture views this technology, from denial and resentment to curiosity, innovation and honesty. The concerns of teachers, students and researchers alike are completely valid, but addressing them means putting tools and policies in place that actually safeguard against those risks. If we establish robust guardrails, develop thoughtful policies and adapt our curricula, we can create an educational landscape where AI serves not as a threat but as a partner in learning.
In this evolving educational ecosystem, the key will be to shift our focus from merely preventing cheating to fostering a culture of learning. By doing so, we not only prepare our students for the future but also ensure that they emerge as well-rounded, educated citizens, ready to contribute positively to society. The journey may be challenging, but with collective effort and an optimistic outlook, we can turn the tide and harness the power of AI to create a brighter, more equitable future for all students.