By the end of this year, the artificial intelligence (AI) market in South Africa is projected to reach $2.4 billion, with an annual growth rate of 21% expected between now and 2030. Despite the potential for AI to mitigate security risks and enhance decision-making, Anna Collard, SVP of Content Strategy & Evangelist at KnowBe4 AFRICA, warns of associated risks that need consideration.
“Generative AI models are trained on data from various sources,” she explains, noting that these sources are often unverified, stripped of context, and unregulated. While AI is useful for administrative tasks, she cautions that relying on it for decisions that could affect people’s lives is concerning.
Because AI is built on human creative work and on data that is often flawed and biased, it poses risks with long-term consequences.
Here are six of the most concerning risks:
1. AI Hallucinations: Instances where AI produces fabricated or nonsensical outputs, particularly when prompted on topics beyond its training data.
2. Deepfakes: Advances in deep neural networks and generative adversarial networks (GANs) have enabled sophisticated manipulation of images, audio, and video, fuelling a rise in convincing fake content.
3. Automated and More Effective Attacks: Generative AI tools aid cybercriminals in impersonation attacks and in crafting more convincing phishing emails at scale.
4. Media Equation Theory: Humans tend to attribute human characteristics to intelligent machines, which makes them more vulnerable to manipulation.
5. The Manipulation Problem: AI’s ability to simulate emotion and respond to sensory input in real time creates opportunities to disseminate predatory content and scams.
6. Ethical Issues: Bias in training data and the absence of regulation around AI development raise ethical concerns, requiring a proactive approach to detecting and managing these risks.
Collard emphasizes the importance of being mindful about the information shared with AI chatbots and virtual assistants, and urges critical thinking and fact-checking when relying on AI.