How to identify and mitigate AI risks

  • Publication
  • 8 minute read
  • May 21, 2024

Artificial intelligence offers extraordinary opportunities. But organisations will only achieve scale if they put robust governance and risk management strategies in place.

Organisations have more opportunities than ever before to transform their business models using artificial intelligence (AI). The trick, however, is not to stumble at the first hurdle. That is, organisations must update their approach to automation governance and risk management before they get out of the blocks and move beyond prototyping and experimentation. It’s about building and managing trust in AI so your organisation can get past the initial stages and progress to production solutions.

Organisations are right to be cautious about potential risks. To this end, we’ve pinpointed ten emerging trends that leaders need to watch. And we’ve identified two key challenges that must be overcome before AI can be scaled up.

Where are we now? Australia’s AI landscape

Organisations are increasingly reliant on AI systems for automated decision-making, data analysis and content generation. In the past 12 months alone, breakthroughs in generative AI and large language models have set new records for the rate of adoption and reliance on AI.

Moreover, today’s digital landscape is increasingly accessible. The playing field has been levelled by intuitive conversational user interfaces, as well as AI embedded in third-party software and cloud-based automated machine learning (AutoML) tools. Now, the ability to experiment with new AI use cases is available to everyone (and not just data scientists, software engineers, and major technology vendors, as it was several years ago).

Accessibility, however, comes with its own set of challenges for organisations.

Accessible AI: Is your organisation ready?

Breakthroughs in the accessibility of AI are disrupting and challenging the traditional approach to model risk management and automation governance. For instance, leaders should ask:

  • How do we trust a decision system that doesn’t provide consistent and predictable outputs?
  • How do we govern and sign off on models that could be applied to a virtually infinite number of use cases (for example, generative AI and foundation language models)?
  • How do we know whether the AI systems that our organisation relies on are sufficiently precise or have the necessary safeguards to both prevent and detect errors?

To ensure your organisation is ready, we’ve identified the following ten trends. These have come from our research, and from collaborating with some of Australia’s market leaders.

Plus, two key challenges in AI risk management

In addition to these ten trends, when it comes to AI and risk management there are two key challenges keeping leaders awake at night:

How to develop or modernise an organisation’s AI policy framework

To harness AI opportunities safely and responsibly, organisations need to establish a robust, holistic, and accessible AI policy to underpin the development, procurement, implementation, and use of AI. No mean feat.

To support organisations, we’ve put together the following checklist of key considerations:

Dealing with AI incidents and issues

Consider how issues with AI will be identified (i.e. what counts as an issue from an AI perspective?) and reported by stakeholders, including employees and customers. Organisations will also need to consider whether workarounds exist where a system is under investigation or unavailable.

Consider establishing separate AI incident reporting mechanisms within the organisation to address issues with AI.
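
As a rough illustration only, here is a minimal Python sketch of what a dedicated AI incident channel might capture and route to a system owner for triage; the field names, severity scale and example values are assumptions, not a prescribed schema.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum

class Severity(Enum):
    LOW = "low"
    MEDIUM = "medium"
    HIGH = "high"
    CRITICAL = "critical"

@dataclass
class AIIncident:
    """A single reported AI issue, routed to the system owner for triage."""
    system_name: str            # which AI system the issue relates to
    reporter: str               # employee or customer raising the issue
    description: str            # what went wrong, in the reporter's words
    severity: Severity
    workaround_available: bool  # can the business operate while it is investigated?
    reported_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

# Hypothetical example: a chatbot producing incorrect policy information
incident = AIIncident(
    system_name="customer-chat-assistant",
    reporter="service-desk",
    description="Assistant quoted a refund policy that does not exist.",
    severity=Severity.HIGH,
    workaround_available=True,  # fall back to human agents during investigation
)
```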

Criteria for assessment of AI systems

Businesses should consider what criteria or framework will apply to the assessment of AI systems, and what criteria will be applied in assessing whether an AI system presents an appropriate risk/reward balance for the business (for example, the trustworthiness characteristics in NIST AI RMF 1.0).

Clear criteria should be developed for the AI systems and use cases assessed by the organisation.
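
For illustration, the sketch below scores a system against the seven trustworthiness characteristics defined in NIST AI RMF 1.0; the 1–5 scoring scale and pass threshold are assumptions made for the example, not part of the framework itself.

```python
# The seven trustworthiness characteristics from NIST AI RMF 1.0, used here
# as an illustrative assessment rubric.
NIST_TRUSTWORTHINESS = [
    "valid and reliable",
    "safe",
    "secure and resilient",
    "accountable and transparent",
    "explainable and interpretable",
    "privacy-enhanced",
    "fair, with harmful bias managed",
]

def assess_system(scores: dict[str, int], threshold: int = 3) -> list[str]:
    """Return the characteristics scoring below the threshold (1-5 scale, assumed)."""
    return [c for c in NIST_TRUSTWORTHINESS if scores.get(c, 0) < threshold]

gaps = assess_system({
    "valid and reliable": 4,
    "safe": 4,
    "secure and resilient": 2,   # flagged for remediation
    "accountable and transparent": 3,
    "explainable and interpretable": 3,
    "privacy-enhanced": 4,
    "fair, with harmful bias managed": 3,
})
print(gaps)  # ['secure and resilient']
```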

A clear definition of “AI”

AI is a nebulous concept. Organisations need to consider what AI is in order to provide clarity around when the policy does or does not apply. It is therefore critical to establish a functional definition of “AI” that sets clear boundaries for the policy.

When drafting, it is important to ensure that “AI” is defined in a way that both emphasises technical functionality and facilitates clarity around policy implementation.

Other organisational policies

The AI policy should build on, and interoperate with, existing technology and data governance foundations – but organisations should identify any current policies requiring updates, such as privacy, IT security and third-party risk.

Organisations can leverage existing technology and data governance processes for AI systems. Existing third-party risk questionnaires and processes should also be revised to address AI.

Legal and regulatory requirements

Consider Australian and international regulations, principles and guidelines that are relevant, e.g. Australia’s AI Ethics Framework and the NIST AI RMF, as well as other applicable laws (e.g. privacy, intellectual property, surveillance, human rights and business conduct rules).

Consider undertaking a regulatory scan, prior to drafting a policy, to determine which laws apply and how they might impact the organisation’s use of AI.

Approving AI systems and use cases

Thought should be given to how the organisation intends to use, develop or procure AI, and this should be clearly stated in the policy. Organisations should ensure that any AI governance policy is paired with an appropriate approval process for particular AI systems and use cases.

Consider an allow-list approach to AI, leveraging an organisation's existing technology governance structures to approve AI systems and uses.
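
A minimal sketch of how such an allow-list gate might work is below; the system names, use-case labels and register structure are hypothetical, and in practice the register would live in the organisation’s existing technology governance tooling.

```python
# Hypothetical register of approved AI systems and their permitted use cases.
APPROVED_AI_USES: dict[str, set[str]] = {
    "gpt-4-via-azure": {"internal drafting", "code assistance"},
    "automl-churn-model": {"customer retention analytics"},
}

def is_approved(system: str, use_case: str) -> bool:
    """Check a proposed system/use-case pair against the approved register."""
    return use_case in APPROVED_AI_USES.get(system, set())

print(is_approved("gpt-4-via-azure", "internal drafting"))       # True
print(is_approved("gpt-4-via-azure", "customer-facing advice"))  # False: needs approval first
```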

Risk-based classification of AI

An AI policy should take a risk-based approach to classifying AI systems and proposed use cases, based on the organisation’s risk profile. The greater the risk posed, the more robust the controls and governance put in place to address it should be.

Different AI use cases will require different levels of human involvement and carry different risks. The higher the risk, the stricter and more defined the AI policy should be around required controls – including banning certain uses.
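
The sketch below illustrates one way a risk-based policy could map tiers to minimum controls; the tier names, controls and the “banned” tier are illustrative assumptions, not recommended values.

```python
# Illustrative risk tiers mapped to minimum required controls.
RISK_TIERS: dict[str, list[str]] = {
    "low": ["owner assigned", "annual review"],
    "medium": ["owner assigned", "human review of outputs", "quarterly testing"],
    "high": ["owner assigned", "human approval before action",
             "pre-deployment assurance", "continuous monitoring"],
    "banned": [],  # uses the organisation prohibits outright
}

def required_controls(tier: str) -> list[str]:
    """Return the minimum controls for a tier; banned uses are rejected."""
    if tier == "banned":
        raise ValueError("Use case is prohibited under the AI policy.")
    return RISK_TIERS[tier]

print(required_controls("high"))
```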

Ongoing assurance

AI is not set and forget. It is extremely important to set obligations around ongoing assurance, monitoring and testing of AI systems to ensure that they remain aligned with the organisation’s requirements and obligations.

Organisations will need to consider what assurance-related obligations they will place on AI system owners within the organisation to ensure monitoring and ongoing compliance.
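
As one illustration, a recurring assurance check might compare live metrics for an AI system against thresholds agreed with the system owner; the metric names and threshold values below are assumptions for the sketch.

```python
# Hypothetical assurance thresholds agreed with the system owner.
THRESHOLDS = {
    "accuracy": 0.90,        # minimum acceptable
    "complaint_rate": 0.02,  # maximum acceptable
}

def assurance_check(metrics: dict[str, float]) -> list[str]:
    """Return breaches that should be escalated to the system owner."""
    breaches = []
    if metrics["accuracy"] < THRESHOLDS["accuracy"]:
        breaches.append(f"accuracy {metrics['accuracy']:.2f} below target")
    if metrics["complaint_rate"] > THRESHOLDS["complaint_rate"]:
        breaches.append(f"complaint rate {metrics['complaint_rate']:.2%} above limit")
    return breaches

print(assurance_check({"accuracy": 0.87, "complaint_rate": 0.01}))
# ['accuracy 0.87 below target']
```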

Holistic assessment and value alignment

Adoption of AI must occur with an ethical mindset, considering the holistic risk of harm to the organisation, people and society. Ultimately, the approach should be consistent with the organisation’s values and the expectations of its stakeholders considering these wider harms. 

It is important to ensure that the organisation approaches AI holistically, with the same ethical lens it applies to the rest of its business, and with stakeholder expectations in mind.

Transparency and accountability

Trust is a key enabler of digital transformation. The AI policy should clearly outline how AI is being used within the organisation to ensure key stakeholders are well-informed. It should also ensure clear lines of accountability for the use of AI in the organisation.

Identifying who within the business is accountable for particular AI systems and ensuring transparency in relation to how those AI systems operate is integral.

Governance operating model

Organisations should consider their AI governance structure at an organisational level. They can consider establishing an AI Board that supports a centralised governance and decision-making process for AI. This body should oversee education of staff on the policy and guide how to apply it correctly.

An effective AI governance model should be based on existing technology governance structures, the business’s corporate values, ethical principles, and the law.

Keeping up to date with the ever-changing regulatory landscape

Currently, there’s no globally consistent approach to AI regulation, and legislators are grappling with complex privacy, reliability and intellectual property issues. Global standard-setters will continue to support the compatibility and convergence of regulation, and additional protections are in the pipeline.

A recent snapshot of the ever-evolving landscape:

[Table: Artificial Intelligence – Global AI Regulation Summary, by territory]

Here at PwC Australia, our Artificial Intelligence practice is committed to working with our clients to navigate and overcome these AI challenges together. To learn more about PwC Australia’s Artificial Intelligence practice please contact Tom Pagram and David Ma.

Contact us

Tom Pagram

Partner, AI Leader and Global AI Factory Leader, PwC Australia

David Ma

Director - AI Assurance, PwC Australia

Tel: +61 2 8266 3071
