By Azwidohwi Mamphiswana
The rapid spread of AI-driven disinformation is becoming one of the biggest threats to agriculture, with false narratives undermining innovation, damaging agribusinesses and eroding public trust in food production.
This was the stark warning from Alan Hardacre, a global expert in advocacy and public affairs, during the recent Africa Agri Tech Conference and Expo at the CSIR Convention Centre in Pretoria.
In his keynote address titled ‘Sowing Truth in a Digital Storm: Addressing Technophobia and AI-Driven Disinformation in Agriculture’, Hardacre emphasised that misinformation was not just a nuisance, but a serious risk to the future of farming.
“Misinformation feeds disinformation. Disinformation feeds misinformation. They work together. It has never been more important to separate fact from fiction, but it has also never been more difficult,” Hardacre said.
He cited the example of Arla Foods’ methane-reducing feed additive, aimed at cutting emissions by 35%. Initially praised by experts for its sustainability, it faced false social media claims within 48 hours that it was poisoning cows and making milk unsafe for consumers.
“What they thought was a positive story turned against them. Within two days, misinformation had reached consumers, causing major retailers to reconsider their support for Arla’s products.”
The crisis showed just how quickly false narratives could spiral out of control, turning an innovation into a liability and putting an entire supply chain at risk.
Hardacre said AI-powered tools such as deepfake videos, AI-generated articles and fake social media accounts could manipulate public perception with alarming efficiency.
To demonstrate how vulnerable agriculture was, Hardacre revealed an experiment he conducted using ChatGPT.
“I asked ChatGPT, ‘How easy is it to run a disinformation campaign on agriculture in South Africa?’ The answer was: ‘Relatively easy’.”
He explained that existing political and economic tensions made the sector particularly susceptible to misinformation, which could quickly escalate into a crisis.
One alarming example involved a farm in the Western Cape, where AI-driven precision irrigation was used to optimise water use. Within days of implementing the technology, false claims circulated online that the farm was replacing workers with robots, leading to public outrage and calls for AI bans in agriculture.
“If something gets widely shared, it snowballs. It would take only 15 minutes to build an entire online disinformation campaign against any one of you.”
Hardacre’s warnings resonated with many industry leaders at the conference.
AgriSA COO Jolanda Andrag stressed the importance of industry-wide cooperation to tackle disinformation.
“Declining trust in agriculture is one of our biggest challenges. If we don’t address misinformation together, we risk losing credibility with the very people we feed,” said Andrag.
Z22 chief information officer Martin Jansen warned that overcomplicated technology could sometimes fuel misinformation, making consumers more sceptical.
“Technology should be an exoskeleton of ‘awesome’ around a human being. Automation should enhance human capability rather than replace it,” Jansen said.
Pieter Geldenhuys, director of the Institute for Technology Strategy and Innovation, dismissed the notion that AI was replacing human intelligence.
He called for a smarter integration of AI in decision-making.
“AI should be a tool to eliminate bias, not a competitor to human intelligence,” he said.
Hardacre said there were various proactive steps the agricultural sector could take against disinformation.
These included crisis preparedness strategies; fostering transparency and trust with consumers; collaborating across the industry to combat false narratives; and actively engaging with the public to correct misinformation before it spread.