By Lance Myburgh
What happens to knowledge when machines can generate essays, compose music, and answer questions with remarkable confidence? That was the central question at the opening session of the Vice-Chancellor’s Artificial Intelligence (AI) Conversations hosted by Rhodes University on 10 March 2026.
Titled AI: Ethics, Intimacies and Ecologies, the panel discussion brought together researchers, students, and members of the public to reflect on how rapidly evolving AI technologies are reshaping higher education, research, and society. More than a debate about technology, the evening explored a deeper challenge: how universities must rethink their role as creators and guardians of knowledge in an AI-driven world.
Opening the discussion, Vice-Chancellor Professor Sizwe Mabizela framed the event as the beginning of a broader conversation. He emphasised that universities must approach AI neither with blind enthusiasm nor outright rejection. “We are living through a moment of profound technological transformation,” he said, adding that institutions must engage with AI thoughtfully while preparing graduates who can use these tools “responsibly, ethically and productively”.
The event marked the first in a year-long series of conversations exploring how AI is reshaping teaching, research, and society. Facilitated by the Dean of Humanities, Professor Siphokazi Magadla, the discussion brought together scholars from different fields whose research reveals how AI is already influencing how knowledge is produced, interpreted, and applied.
For Sioux McKenna, Professor of Higher Education Research, the question is not simply what AI can do, but what happens when people begin to treat its outputs as authoritative knowledge. Drawing on her research into AI-generated text, she cautioned that the technology’s confident tone can disguise underlying biases embedded in its training data. “It’s not a glitch, it’s a design,” Prof McKenna told the audience. “It is following what it is meant to do.”
Her research shows that patterns in AI outputs often mirror existing social inequalities, from racial and gender stereotypes to global power imbalances. In a university setting, where students increasingly encounter AI-generated information in their studies, this raises pressing questions about how knowledge is evaluated and trusted.
If universities fail to address these challenges, Prof McKenna suggested, they risk allowing algorithmic systems to quietly shape what students read, think and write.
Doctoral researcher Sibusiso Ncanywa highlighted a different form of exclusion: the ways AI systems can overlook entire cultural traditions.
Ncanywa’s research focuses on AI music classification systems. Models that performed with more than 95% accuracy when analysing Western instruments struggled dramatically when applied to recordings of the Uhadi, a traditional southern African musical bow preserved in the archives of the International Library of African Music. “The error is not merely technical,” Ncanywa explained. “It is an erasure.”
Rather than seeking a single definitive model, Ncanywa proposes developing multiple systems that reveal what each one fails to detect. For him, this approach reflects a more inclusive vision of knowledge, one that acknowledges difference rather than forcing it into a single framework.
Senior anthropology lecturer Dr Dominique Santos brought yet another dimension to the discussion by examining the hidden human and environmental costs behind digital technologies.
Dr Santos shared a personal moment that brought this reality into sharp focus. While experimenting with an AI tool to generate a playful caricature image, she later learned that producing the image required a large amount of water to cool the servers processing the request. The irony, she told the audience, was that she was sitting in a house without running water at the time.
“There is nothing artificial about AI,” Dr Santos reflected. “It is utterly human.”
Her research explores the global systems that make digital technologies possible, from water-intensive data centres to the mining of rare earth minerals such as coltan in the Democratic Republic of Congo, a resource found in nearly every electronic device.
By drawing attention to these often-invisible infrastructures, Dr Santos argues that universities have a responsibility to help students understand the broader ecological and social contexts behind the technologies they use.
Professor Willie Chinyamurindi, a business management scholar from the University of Fort Hare, turned the conversation towards how universities themselves are responding. Too often, he suggested, institutions approach AI through what he described as “surveillance pedagogy”, focusing on detecting whether students have used AI tools rather than teaching them how to use them responsibly.
Instead, Professor Chinyamurindi advocates a model of stewardship, where students are treated as partners in the creation of knowledge. Universities, he argued, must move beyond policing technology and instead cultivate the critical skills needed to engage with it thoughtfully and ethically.
The lively discussion that followed reflected the diverse perspectives in the room. Some audience members raised questions about AI’s potential to democratise access to knowledge, while others warned of its capacity to reinforce new forms of inequality and digital colonialism.
As the Vice-Chancellor’s AI Conversations series continues throughout the year, Rhodes University hopes to create a space where scholars, students, and the wider public can grapple with these questions together. In an era when machines can generate answers instantly, the role of universities may be more important than ever, not simply to produce knowledge, but to ensure that it remains thoughtful, ethical, and grounded in the complexities of the human world.
