Rethinking Governance in the Age of Generative AI
From healthcare and science to entertainment and gaming, generative AI is reshaping virtually every industry. Its integration into modern applications brings both promise and peril. As organisations rush to adopt the technology for data-driven decision-making and automation, the security risks that accompany these systems often take a backseat.
We have seen several striking examples of such attacks in recent years. In May 2024, a ransomware attack disrupted the clinical operations of Ascension, a health system with 140 hospitals, after an employee accidentally downloaded a malicious file.
Earlier, in January 2023, T-Mobile disclosed that attackers had abused one of its APIs to harvest personal data from roughly 37 million customer accounts, including billing addresses, email addresses, phone numbers, dates of birth, and T-Mobile account numbers.
All of this points to one key lesson: cybersecurity must be treated as a core pillar of any robust future for AI systems. The responsible deployment of AI requires strong cybersecurity frameworks that address these systems’ unique vulnerabilities – a topic that deserves to be discussed in depth.
The Growing Need for AI Governance
The global AI governance market, valued at $124.3 million in 2022, is projected to grow at a compound annual growth rate (CAGR) of 35.6% between 2023 and 2030. This growth is driven by increasing regulatory requirements, the demand for explainable AI, and heightened awareness of AI’s risks, including algorithmic bias, privacy violations, and cyber threats.
Moreover, Verizon’s 2024 Data Breach Investigations Report (DBIR) found that the human element remains central to security incidents, with 68% of breaches involving a non-malicious human action such as an error or falling for a phishing lure.
Industries such as healthcare, life sciences, and defence are at the forefront of adopting AI governance frameworks to mitigate these risks. As the world seeks to harness AI’s power, responsibility must keep pace. Securing AI is not just a technical requirement but a moral obligation, especially as companies reinvent themselves as providers of digital products and services.
AI’s growing role in enterprise operations makes it both a prime target for cyberattacks and a potent weapon in attackers’ hands:
- Phishing and Malware: Generative AI can craft highly convincing phishing emails by mimicking writing styles and exploiting contextual information. Similarly, ML models can generate adaptive malware that evades detection mechanisms.
- Social Engineering with Deepfakes: AI-powered deepfake technology is often used to create realistic fake audio and video on social media. This enables impersonation and other forms of social engineering attacks.
- Breach Automation: Attackers can use AI to identify vulnerabilities in systems and automate the exploitation process, which makes cyberattacks faster and easier to execute.
- Behavioural Analysis for Targeted Attacks: AI can analyse social media activity and other online behaviour to craft personalised attacks, increasing their chances of success.
AI can be a double-edged sword. While it enhances automation and decision-making, its misuse by attackers highlights the urgent need for comprehensive cybersecurity measures.
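Defenders can apply the same machine-learning techniques in reverse. As a rough illustration, and assuming scikit-learn is available, the sketch below trains a tiny text classifier to flag phishing-style emails; the inline training examples, feature choices, and decision threshold are illustrative assumptions, not a production detector.

```python
# Minimal sketch: flagging suspicious email text with a simple ML classifier.
# The inline training examples are placeholders; a real detector would be
# trained on a large labelled corpus and combined with other signals such as
# sender reputation, URL analysis, and attachment scanning.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_texts = [
    "Your account has been locked, verify your password immediately",
    "Urgent: confirm your billing details to avoid suspension",
    "Invoice attached, please wire the outstanding balance today",
    "Team lunch is moved to 1pm on Friday",
    "Here are the meeting notes from yesterday's review",
    "Quarterly report draft attached for your comments",
]
train_labels = [1, 1, 1, 0, 0, 0]  # 1 = phishing-like, 0 = benign

# Character n-grams capture word fragments, which makes the features less
# sensitive to the small wording variations generative models introduce.
model = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(3, 5)),
    LogisticRegression(max_iter=1000),
)
model.fit(train_texts, train_labels)

incoming = "Please verify your password now to keep your account active"
score = model.predict_proba([incoming])[0][1]
print(f"Phishing likelihood: {score:.2f}")  # route high scores to human review
```

In practice, a classifier like this would be only one layer in a broader defence, sitting alongside authentication controls and anomaly monitoring.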
Building a Robust AI Security Framework
Recently, the CrowdStrike incident showed how a single faulty update can ripple across the globe. For a few hours it felt as though much of the world had been thrown back into the Stone Age, with millions of Windows machines offline and airlines, banks, and hospitals disrupted. That is the scale of repercussion a single point of failure can have.
While AI did not cause that outage directly, it was a case of over-reliance on a third-party tool granted deep access to critical systems – the very pattern many organisations are repeating as they integrate AI.
In my work, I ensure that AI products and solutions are adaptable within the business context, prioritising trust, ethics, and security. To build secure AI systems, organisations must adopt several key strategies:
- Ensure data privacy laws are followed and sensitive information is protected before it ever reaches an external AI service (a minimal redaction sketch follows this list).
- Employ AI to detect and counteract phishing attempts and malware before they cause harm.
- Strengthen authentication protocols and implement continuous monitoring to detect anomalies in system behaviour.
- Periodically assess and address vulnerabilities in AI systems.
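To make the first of these points concrete, here is a minimal sketch of how sensitive identifiers could be masked before a prompt leaves the organisation for an external generative-AI service. The regular expressions and the redact_prompt helper are simplified, hypothetical examples; real deployments typically rely on dedicated PII-detection tooling and formal data-protection reviews.

```python
# Minimal sketch: masking obvious personal identifiers before a prompt is sent
# to an external generative-AI API. The patterns are deliberately simple
# placeholders and will miss many forms of personal data.
import re

PII_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "PHONE": re.compile(r"(?:\+?\d{1,3}[ -]?)?\(?\d{3}\)?[ -]?\d{3}[ -]?\d{4}"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact_prompt(text: str) -> str:
    """Replace matched identifiers with typed placeholders such as [EMAIL]."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

prompt = (
    "Summarise the complaint from jane.doe@example.com, "
    "callback number (555) 123-4567, SSN 123-45-6789."
)
print(redact_prompt(prompt))
# Summarise the complaint from [EMAIL], callback number [PHONE], SSN [SSN].
```

Redacting at the boundary keeps the generative model useful while reducing the amount of regulated data that ever leaves the organisation’s control.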
A Call to Action
The adoption of AI is gaining pace with every passing day. For the last two years, a new AI model has been launched every other week, each claiming to outdo the last. It has therefore become imperative for organisations to balance two objectives: leveraging AI’s potential and mitigating its risks.
This demands an unwavering commitment to building secure, ethical AI systems that put users’ safety first. This is the moment to ensure AI is not just powerful but also responsible.
The way forward must be built on trust and resilience in AI systems, underpinned by strong security frameworks, industry best practices, and a shared commitment among enterprises, policymakers, and practitioners.