Artificial Intelligence (AI) and Machine Learning (ML) are transforming industries, from finance and healthcare to cybersecurity and supply chains. However, with these advancements come significant risks — ethical dilemmas, misinformation, data privacy concerns, and cyber threats. Risk managers are at the forefront of identifying, mitigating, and responding to these challenges. This article explores how risk professionals are addressing AI’s biggest vulnerabilities, balancing innovation with security and ethics.
While AI continues to drive efficiency and innovation, it also introduces complex risks that organisations must manage effectively. Risk managers from diverse industries share how they are tackling these emerging challenges and shaping responsible AI practices.
Pressing Risks That Keep Risk Managers on High Alert
Christine Kempeneers, a Governance and Risk Head at Maplecrest Group, shares several AI-related risks that keep her up at night.
“Cybersecurity will always be top of mind,” Christine explains. “Just as AI can enhance our processes, it can equally be leveraged by threat actors to automate and amplify cyberattacks. AI can even be used to develop new kinds of threats. This adds complexity to an already fast-evolving cyber risk landscape. Moreover, AI systems are vulnerable to attacks, which could lead to data breaches or system failures — something we must remain vigilant about as we integrate AI into our systems and operations.”
Christine also points to the risks of AI bias, which can result in discriminatory outcomes. “For instance, some facial recognition systems have demonstrated higher error rates in identifying individuals with darker skin tones, leading to misidentification and potential injustices — particularly in law enforcement and security contexts. We have also seen generative AI models produce stereotypical imagery that reflects biases embedded in their training data.”
Phuong Tran Le, Data & Technology Lead at International SOS, shares her perspective on the emerging risks surrounding AI.
“Risks arise when decisions are made around AI without a clear understanding of its strengths and limitations.”
Building Smarter Defences Against AI Risks
To tackle the complex risks associated with AI, Christine advocates for a layered and integrated approach. From a technical standpoint, organisations should implement robust security protocols for AI systems, including regular vulnerability assessments, penetration testing, and secure data handling practices. Interestingly, AI itself can also be used to defend against AI-driven threats.
“This creates a valuable opportunity for risk teams to collaborate with cyber and information security teams, working together to develop tailored technical controls,” she says.
Education is equally vital. “We need to educate users on AI-related cyber risks and promote best practices for its secure use. It is also essential to stress the importance of human judgement and critical thinking. AI is a powerful tool, but it is not a silver bullet.”
Phuong takes a complementary view, advocating for a problem-first approach to AI adoption.
“Most often, the right solution isn’t AI — but when it is, we make sure the right safeguards are in place.” This includes using high-quality, representative training data and implementing a final layer of human oversight to review AI-generated insights before they inform decision-making.
Assessing Risks in AI-Driven Decision-Making
When evaluating risks associated with AI-driven decision-making, Christine recommends starting with context — mapping out the decision-making process and identifying where and how AI is involved. This includes pinpointing the specific AI models in use, the data they draw upon, and the decisions they help inform.
Next, it is critical to assess both the data and the model:
- Data quality: Is the data complete and, to the greatest extent possible, free from bias and inaccuracies?
- Model reliability: Has it been thoroughly tested to identify vulnerabilities and failure points?
- Explainability: Can the model’s decision-making process be clearly understood and independently audited?
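For teams that want to operationalise these questions, the three checks above can be expressed as a simple pre-deployment checklist. The sketch below is purely illustrative: the class name, the model name, and the 5% missing-data threshold are assumptions for the example, not a formal standard or any framework cited in this article.

```python
# Hypothetical pre-deployment checklist mirroring the three questions above:
# data quality, model reliability, and explainability.
from dataclasses import dataclass, field


@dataclass
class AIDecisionAssessment:
    model_name: str
    findings: list = field(default_factory=list)

    def check_data_quality(self, missing_rate: float, bias_audit_done: bool) -> None:
        # Is the data complete and, as far as possible, free from bias?
        if missing_rate > 0.05:  # illustrative threshold
            self.findings.append(
                f"Data quality: {missing_rate:.0%} of records incomplete"
            )
        if not bias_audit_done:
            self.findings.append("Data quality: no bias audit on record")

    def check_model_reliability(self, stress_tested: bool) -> None:
        # Has the model been tested for vulnerabilities and failure points?
        if not stress_tested:
            self.findings.append("Reliability: model not stress-tested")

    def check_explainability(self, audit_trail: bool) -> None:
        # Can the decision-making process be independently audited?
        if not audit_trail:
            self.findings.append("Explainability: no independent audit trail")

    def report(self) -> str:
        if not self.findings:
            return f"{self.model_name}: no open findings"
        return f"{self.model_name}: " + "; ".join(self.findings)


# Example run against a hypothetical credit-scoring model.
assessment = AIDecisionAssessment("credit-scoring-model")
assessment.check_data_quality(missing_rate=0.12, bias_audit_done=False)
assessment.check_model_reliability(stress_tested=True)
assessment.check_explainability(audit_trail=False)
print(assessment.report())
```

In practice, each boolean input here would be replaced by evidence from real testing and audit processes; the value of the structure is simply that no question can be skipped silently.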
How AI Is Reshaping Risk Management
Looking ahead, Christine believes AI will profoundly transform risk management over the next five to ten years. A major shift will be the increased use of AI in automating risk assessments, anomaly detection, and threat prediction — empowering risk managers to make faster, data-driven decisions.
“As AI evolves, we can expect to see more specialised tools for continuous monitoring and predictive risk analysis.”
Risk Managers: The Gatekeepers
AI brings both transformative potential and significant challenges. For organisations, strong risk management strategies are no longer optional — they are essential. As AI advances, the frameworks that govern its use must evolve in tandem. Risk managers will continue to play a critical role as the gatekeepers of AI governance — ensuring that technology is deployed responsibly, ethically, and in ways that uphold both security and human values.