A student recently submitted a perfect report on a test. I gave him full marks, even though I suspected he had used OpenAI's ChatGPT to create the answers.
COVID-19 changed the way we teach and assess. Exams and tests had to move online. We also decided to make all exams "open universe": students have full access to their notes and the internet. This is how they work in the "real" world, and it is how we should evaluate them. Of course, this approach has consequences.
It encourages collaboration and plagiarism. You can police verbal communication when students are in the same room, but not online. Learning platforms have embedded checks to detect file sharing, but they cannot monitor email. Educational software companies sell "lockdown" browsers that limit virtual collaboration, but these systems also cut off access to the internet, invalidating my approach to testing.
After 50 years of teaching, I can smell plagiarism. It's like driving down a bumpy country road and suddenly rolling onto a paved highway. Using Google and other search engines, I was able to find the texts my students had copied.
That is becoming harder. Many students use writing aids such as Grammarly, which I recommend. For the growing number of students whose first language is not English, no longer having to spend time correcting spelling and sentence structure is a good thing.
However, AI can not only correct grammar and spelling, but also write in different styles. And AI is evolving rapidly: re-entering the same prompt (the words of an AI query) generates a different answer each time. You can no longer use a Google search to check whether a student has copied AI output.
These developments will force universities to rethink their teaching and evaluation practices. First, many AI products charge for a premium version, so students able and willing to pay can take advantage of more sophisticated systems and produce higher-level work. This widens academic inequality, which has always existed. In earlier times, some students paid for a private tutor; others paid ghostwriters for papers, with the quality of the product determined by the price.
Second, teachers must change how they assess students. Some will stick with the multiple-choice format that is standard in many undergraduate courses; it is much easier, and heavier marking loads get in the way of publishing research and hinder promotion. Possible solutions include banning phones, writing tests in lead-lined rooms, and requiring students to wear swimsuits.
For upper-level courses with essays, topics should be complex, with multiple subtopics. Calculation exercises should pose harder problems that go beyond reproducing the steps used to solve standard ones. Teachers will need to spend more time designing questions and topics, and marking will also take more time. There are no easy answers.
Research will also change. AI systems now perform complex numerical analysis: you can enter data, have the system complete the analysis, produce code in several programming languages, and prepare text explaining and interpreting the results. Other systems analyze text, performing sentiment analysis to detect the "emotional underpinnings" of human testimony. Finally, AI can now write decent literature reviews, complete with citations.
Lest I sound like an AI fan, there are deep concerns. I follow two Substack commentators who discuss AI. Ethan Mollick, the glass-half-full guy, posts the latest developments, celebrating the magical wonders of AI. Gary Marcus, the glass-half-empty guy who then drains the glass, gleefully cites AI's latest misfortunes.
And there are many misfortunes, such as:
• creating an "inclusive" image of Nazi soldiers by depicting women, Asians and Black people as stormtroopers (Google Gemini);
• depicting an ant entering a nest with four legs instead of six (OpenAI's Sora);
• providing fabricated citations for scientific papers (OpenAI's ChatGPT 3.5); and
• recommending harsher sentences for defendants with African-American names in discussions of justice and sentencing (OpenAI).
Despite these grave mistakes, I tend to side with Mollick: AI capabilities are evolving rapidly, and these misfortunes will be resolved.
But AI leaves me with deep misgivings.
People using AI to do calculations they cannot check is a disaster in the making. After 50 years of doing statistics, you develop a "nose" for numbers, just as you develop a nose for plagiarism. But there are not many of us.
Fewer and fewer people will be able, or inclined, to perform the independent analysis needed to confirm AI's results. That may not matter for an undergraduate exam, but it matters rather more when you are managing the flight paths of the thousands of planes in the sky at any given moment.
When writers and researchers prepare material with the help of AI, that material enters the "corpus," the vast body of reference material on the internet that large language models such as ChatGPT use to generate responses.
If people who know little about a subject accept AI's output uncritically, the corpus fills with false information. TikTok, X, Meta and the rest pale in comparison with AI's ability to unmoor us from reality.
China has the world's most extensive network of closed-circuit cameras monitoring its citizens. Beyond facial recognition, AI can now identify people by their gait or iris scans. How far are we from having our brainwaves scanned to interpret our thoughts?
Then techno-fascism will be complete.
Finally, human progress often requires incredible leaps in perspective.
Einstein solved a problem in Newtonian physics by discarding the seemingly obvious assumption that space is flat and time constant.
He created a physics of curved space and dynamic time, producing predictions that have withstood challenge after challenge.
Leaps like this, which defy the prevailing logic, move science forward. For me, that is the deep question about AI and the advancement of knowledge: will AI drive future intellectual revolutions, or suppress them?
Gregory Mason is an associate professor of economics at the University of Manitoba.