Why universities must adapt assessment models to the threat posed by artificial intelligence

Throughout mankind's history, sudden leaps in technology have often disrupted the existing order, sometimes irreversibly.

A memorable example is the famed physicist Albert Einstein's theory of relativity, postulated at the turn of the last century. Despite much resistance from contemporary scientists, relativity rendered the existing understanding of celestial mechanics obsolete, and practically rubbished a large portion of Sir Isaac Newton's discoveries.

Without a doubt, Elon Musk & Co.'s evangelistic promotion of the electric car will sooner or later consign the fossil-fuel vehicle engine to the same fate. Some traditional carmakers, among them Jaguar, Mercedes-Benz, Volvo and Lexus, have reportedly already committed to transitioning fully to the production of electric vehicles (EVs) by 2030.

The most dizzying technological advances have come in the era of computer science, which has within the last few decades completely upended the world's technological order and lifestyles. At last, the world's richest people are no longer pot-bellied old men in grey suits boasting real estate, but rather excitable, code-writing young techies in black T-shirts!

Artificial Intelligence (AI), the revolutionary technology so aptly embodied by Lensa, DALL-E and ChatGPT, the startling apps recently launched by OpenAI and others, is currently the buzzword in tech. According to Wikipedia, the ChatGPT chatbot is able to 'write and debug computer programmes; to compose music, teleplays, fairy tales, and student essays; to answer test questions (sometimes, depending on the test, at a level above the average human test-taker); to write poetry and song lyrics; to play games; and to simulate an ATM.'

Like human teachers, most autonomous creative robots are also configured to 'learn' as they interact with users. Major news outlets have long been known to use AI to generate some of their news automatically, yet ChatGPT, still in its infancy, seems to have burst through the traditional guardrails. To sample modern AI, as I have done, is to discover how imperilled human labour and creativity might soon be. It seems capable of mimicking a remarkable range of human endeavours with startling precision.

Interestingly, new technologies never fail to bring grief even as they improve productivity. Einstein's advanced physics, for instance, ended up becoming the basis on which the atomic bomb dropped on Hiroshima was built. Isaac Newton's famous altercation with Germany's Gottfried Wilhelm Leibniz over the discovery of calculus was intense enough to kill cross-channel collaboration between British and other European scientists for decades.

Similarly, AI is already showing signs of having the potential to put us all, the IT practitioners who develop it included, out of work. I recently gave ChatGPT brief instructions to write a Python program to solve a certain problem. As I watched, it churned out elegant lines of code in seconds, complete with comment blocks! For context, a professional software engineer might have charged me thousands of dollars, and toiled away in seclusion for weeks, to complete the same assignment.
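The column does not say what problem was posed, so as a purely hypothetical illustration, here is the kind of short, commented Python program a one-line prompt to such a chatbot might yield. The task shown (counting the most frequent words in a passage of text) is invented for this sketch:

```python
# Hypothetical illustration: the kind of small, self-documenting
# Python program a chatbot might generate from a one-line prompt.
# The task itself (word-frequency counting) is invented here; the
# column does not specify the actual problem that was posed.
from collections import Counter


def word_frequencies(text: str, top_n: int = 3):
    """Return the top_n most common words in `text`, case-insensitively."""
    words = text.lower().split()
    return Counter(words).most_common(top_n)


if __name__ == "__main__":
    sample = "The quick brown fox jumps over the lazy dog and the cat"
    print(word_frequencies(sample))
```

Trivial as it is, even a toy like this arrives with a docstring and comments, the very 'comment blocks' that make machine-written code look like the work of a diligent human engineer.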

One arena where the exponential proliferation of AI poses a clear and present danger is universities and other citadels of knowledge, whose stock-in-trade is theses, dissertations, continuous assessment and written examinations.

In the AI age, these institutions' biggest headache will be sifting genuine student work from machine-made content. As things stand, any of the cutting-edge language-processing AI models could conjure a fifty-thousand-word thesis on any given topic, complete with citations and a bibliography, in minutes and for free.

One perplexed colleague told me that rather than struggling to write theses, graduate students might instead strive to insert errors and grammatical slips into otherwise perfect computer-generated articles to make them look flawed and human. Let us brace for the death of the thesis! RIP, anti-plagiarism software!

Yet universities must not sit by wringing their hands; they must adapt proactively to the emerging realities. A good starting point is to offer more coursework in graduate programmes and to change testing methods so that oral examinations carry more weight than written ones. This could be achieved by retooling curricula to eschew mere regurgitation in favour of the actual creation and absorption of knowledge.

There is a redeeming feature in some AI companies' pledge to embed sophisticated and indelible watermarks in their chatbots' output so that machine-generated text can be detected, though a determined cheat might still circumvent this. Also, counter-intuitive as it may sound, governments will need to institute regulatory measures to limit the pernicious capabilities of AI within their own territories.

Another silver lining of the AI revolution might be a resurgence of interest among university students in the humanities and the arts. Many commentators have observed that these disciplines are less susceptible to AI encroachment than STEM because, by their nature, they demand a measure of human empathy.

The fact that the livelihoods most threatened by AI advancement are those of IT practitioners is a perfect paradox of the creature eating its creator!