Society is witnessing a growing enthusiasm for artificial intelligence (AI) and its accompanying tools.
Much praise has been given to the benefits of AI, particularly its ability to assist in making timely decisions and processing vast amounts of data quickly.
On the African continent, there is growing inquiry into the potential of AI technology to transform value chains in the public and private sectors.
Ultimately, this could serve as a catalyst for economic and social change on the continent.
In SA, a number of academic researchers have also been caught up in the AI wave, aligning their research with this area.
At best, the aim is to answer the question of how we can leverage AI technology for the betterment of human and organisational capability development and outcomes.
Policymakers have responded by setting up institutes dedicated to prioritising AI research.
Notably, these include the establishment of the AI Institute of SA in 2022 and the Centre for Artificial Intelligence Research under the department of science and innovation.
Yet despite these advances, we need to be cautious. One such caution concerns a phenomenon known as AI hallucinations.
In simple terms, AI hallucinations refer to the false confidence an AI system exhibits when generating information that appears credible but is, in fact, inconsistent, inaccurate, or entirely fabricated.
Others have described this as nonsensical output that demands careful verification before it is presented or disseminated.
A study published in the journal Humanities and Social Sciences examined 243 instances of distorted information generated through AI and systematically classified the common errors.
These included factual errors, inconsistencies, faults of logic and reasoning, inaccuracies and outright fabrication.
The findings underscore the need to apply an extra layer of scrutiny to any information gathered through AI tools.
At play here is a dual veneer of false confidence from both the AI system and the end user.
First, the AI system relies heavily on the queries and prompts given by the user to generate its output.
During this process, much can go awry. Some results may be anchored in facts and events that truly happened, while others may be tenuous, speculative, or outright incorrect.
The danger is that all of it is presented with the same confidence, leaving it to the user to discern what is accurate.
The second layer of false confidence lies with the human user.
Here, factors such as impression management come into play: people use information, verified or not, to advance an agenda or to impress others.
This often occurs when users fail to question or validate AI-generated content before using it to make decisions or support arguments.
Given this backdrop, it is not surprising to see the challenges that have already emerged in sectors of SA’s economy due to uncritical AI use.
Consider the legal profession, for example. Recently, there have been instances where judges have reprimanded lawyers for citing non-existent case law — citations that turned out to be hallucinated outputs from AI tools.
In one notable case, an acting judge explicitly attributed the inclusion of fake legal citations in an argument to the use of AI-generated content.
So, what should we do?
First, we must embrace the genuine benefits of AI tools in making our work and lives more manageable.
With their burgeoning popularity, it is clear we are only beginning to unlock their full potential.
These tools will continue to evolve and become embedded in our daily lives.
Second, the duty of care and responsibility in using AI lies squarely with us, the end users.
Using unverified information is wrong and can place us in precarious, even legally liable situations.
I asked ChatGPT, a popular generative AI tool, for advice on dealing with the challenge of AI hallucinations.
The response: “When using AI, remember to verify its information with trusted external sources, always check the evidence or references it provides, and treat it as an assistant, not as the final authority.”
Third, and most importantly, we must not abdicate our agency and judgment in the face of technological progress.
The confidence we place in AI outputs must be balanced with scepticism and a willingness to question.
We cannot allow ourselves to become slaves to the machine, letting our critical thinking atrophy in the glow of AI-generated text.
Remaining vigilant and exercising our human judgment are essential skills for the present and the future.
We are truly living at the height of a technological moral panic, a time when our ability to exercise our executive functioning skills is being eroded, precisely when we need them the most.
It is a period in which voices of falsehood are legion, spreading at the mere click of a button, often without verification or reflection.
Yet, this is also the very moment when we must be most vigilant and rise to the task of cultivating further the skills and habits that affirm our commitment to truth, discernment, and verification.
This is what makes us human and what defines our exercise of dominion.
AI can be a powerful ally but only if we, as humans, remain in charge of discerning what is true, reliable, and worthy of trust.
Willie Chinyamurindi is a professor in the department of applied management, administration and ethical leadership at the University of Fort Hare. He writes in his personal capacity.
The piece was published by The Herald.