Navigating the opportunities and risks of generative AI and the democratisation of artificial intelligence

The accessibility and capability of artificial intelligence (AI) are evolving faster than at any moment in history, triggering a wave of reinvention across virtually every industry.
For the organisations that apply it wisely, AI has the potential to save time and money, improve products and services, and disrupt industries. But moving quickly on AI research and development only pays dividends if organisations are ready and able to effectively navigate the risks of bleeding-edge technology.
Tom Pagram and Jon Benson - leaders in our Trusted AI practice - led a practical conversation on the opportunities that generative AI presents, as well as the associated risks and key considerations for innovating safely and responsibly.

Questions that Boards should be asking:

  • What visibility do we have over our existing usage of AI?
  • What are our contractual, regulatory, social and ethical obligations related to the use of AI?
  • What is our risk appetite for the use of AI and emerging technologies?
  • What are the opportunities for AI-powered transformation and the threats of disruption from our competitors' use of AI?
  • Do we have the right processes and controls in place for assessing and governing new AI use cases? 

Understanding Generative AI tools

Generative AI tools use large language models to allow users to interact with computers through human-like conversations. However, while these tools are incredibly impressive, the conversational interface is not the core disruptive force at play. More broadly, it is both the accessibility and the capability of AI that are creating new opportunities and risks:

  • The capability of AI is growing exponentially: There have been - and continue to be - breakthroughs in model architectures, computational efficiency and the scale of model parameters, which are unlocking new and emerging capabilities for language models.
  • AI is more accessible than ever before: The technology that powers these generative AI tools isn’t necessarily new, but over the last few months the conversation around AI has moved from data science teams into our family living rooms. By placing this technology in the hands of experts in different domains, we make it easier for them to identify new use cases and applications, as the short sketch below illustrates.
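
To make the accessibility point concrete, the sketch below shows roughly how little code it now takes to query a general-purpose language model. It assumes the OpenAI Python client; the model name, prompt and API key handling are illustrative only, not a recommendation of any particular vendor or configuration.

  # A minimal sketch of how accessible these models have become, assuming the
  # OpenAI Python client (pip install openai) and an OPENAI_API_KEY set in the
  # environment. The model name and prompt are illustrative only.
  from openai import OpenAI

  client = OpenAI()  # reads OPENAI_API_KEY from the environment
  response = client.chat.completions.create(
      model="gpt-3.5-turbo",
      messages=[{"role": "user",
                 "content": "List three questions a board should ask about AI governance."}],
  )
  print(response.choices[0].message.content)

A domain expert with basic scripting skills can now experiment directly, which is exactly why visibility over informal trials matters.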

A snapshot of the opportunities

There’s a broad range of tasks that generative AI can be applied to. Unlike some of their predecessors, newer AI models go well beyond knowledge retrieval and extend to logic and reasoning. Beyond finding information, they’re able to transform it into different formats and levels of detail - for example, identifying key talking points and brainstorming ideas. There’s a common misconception that these models only regurgitate content they’ve been trained on; however, they’re also capable of generating novel and unique ideas.

It also goes beyond text. Some more recent releases have extended to producing entirely unique audio, video and image files, and also offer the ability to modify content based on your instructions. This comes with a large number of opportunities, but also a large number of risks.

To date we’ve seen over 50 use cases for AI across our clients and across industries.

While these models are incredibly powerful, they can also produce inaccurate responses that appear authoritative and factual. For example, when we asked a generative AI tool to describe PwC’s Executive Board, it produced a list of only four names: one was the former CEO, another was a real person who was never a PwC Partner, and the remaining two were entirely invented. Even this basic example demonstrates why these models should be used with careful oversight, controls, human review and validation.

AI risk and the need for proactive governance

There’s been a massive uptake in interest in AI in Australia and globally: 

  • 7 in 10 Australian businesses are already exploring or applying AI (Source: IBM, 2022)
  • 1 in 10 Australian businesses are ready to navigate the risks of using AI (Source: Fifth Quadrant, 2021) 
  • 4 in 10 consumers are willing to trust AI (Source: University of Queensland, 2023)

So what does this tell us? We have very interested and innovative people within organisations trialling this technology without formal oversight and governance from a risk perspective. And while 4 in 10 consumers are willing to trust AI, the remaining 6 in 10 are still concerned about its trustworthiness or other aspects of its use.

The need for trusted AI

Without the appropriate guardrails in place, organisations can build solutions based on inaccurate insights or establish an over-reliance on AI, which without human intervention and oversight can increase exposure to risk. Whether it’s compliance with evolving privacy and data protection legislation, reputational damage, unlawful discrimination, operational disruption or intellectual property disputes, it’s important to be conscious of the risks associated with using AI tools.

For example, we’re currently in conversations with an organisation to understand their rights where these models are trained on their IP: how outputs are monetised, how their data is used and how their IP is protected. It’s a very pertinent subject in the media industry, and something we encourage organisations to go into with eyes wide open and guardrails in place from the outset.

Key considerations for boards and executives 

The first question is: are we aware of what our organisation is doing today? Do we have full visibility and oversight of how AI is being trialled? And have we considered the tools and vendor products that we’re buying and the AI risk exposure associated with them? Whether they’re aware of it or not, many organisations will soon be AI-enabled, and will need a full appreciation of the associated risks and controls.

Organisations also need to have a position on the contractual, regulatory, social and ethical obligations associated with AI. Just because you can use this technology, should you? What controls, checks and balances are in place to ensure that you’re operating within your risk appetite?

Furthermore, what is your organisation’s risk appetite? Do you want to be leading edge innovators and expose your organisation to some of the potential risks discussed? Or do you have a more conservative view of how technology should be implemented and adopted?

On a positive note, there’s tremendous opportunity for transformation and innovation if the technology is used with the appropriate guardrails in place. We’re actively trialling it within our legal function, using HarveyAI to increase the scale, breadth and depth of legal advice. We’re not saying don’t do this. We’re saying proceed with caution and do it in a systematic and structured way to minimise reputational risk and ensure quality of service.

The last consideration is governance processes and controls for AI. Many organisations are still early in their control and governance journey, and in many instances advanced analytics and AI are yet to be considered in a controlled environment. Organisations need to understand where they are on their AI journey, and apply a systematic and formalised approach to implementing governance in their ecosystem.

In many of our conversations with boards and executives, we hear that these risks aren’t front of mind as they’re not using AI at scale. In some cases, this may be true, but our experience tells us that those who invest upfront in risk and governance are able to get more sustained benefit in the end. For example, investing heavily in research and development activities for AI can only pay dividends if you have enough trust and confidence to deploy that solution into production, or rely on it as part of the services you deliver. The conversations we’re having are about proactive risk management and governance. It’s about how you unlock the upside risk. For some organisations, it's about managing the downside risk, because they're already relying on AI that they potentially can't trust. But for the vast majority, this is about upside risk.

Q&A Discussion

NED: Do you have a sense of how boards use policies as a voice to the people in their organisation?  

JB: This is a great topic of conversation more generally, around governance for evolving technologies and the challenges in this space. For a number of years, organisations have attempted to leverage the policy mechanisms around privacy, data governance and security. Policy is a good baseline within an organisation for raising awareness of expectations. Where we see challenges is in the practical interpretation of those policies on the job, and in having them consistently interpreted across the organisation. The leading organisations in this space start with the policy and a clear direction around the organisation's position, then formalise minimum standards or requirements that the organisation needs to meet, and finally ensure there is a level of monitoring and control assurance to confirm these are being followed. It’s when we see that loop being closed that we see a demonstrable shift in how consistently organisations adopt the prescribed way of working.

NED: I think the way we view this is going to be important. It is not about execution, it’s about augmented thinking. There are wonderful use cases. I think we’ll see new job opportunities that work with the organisation to translate the technology. I think this is really exciting and that it’s going to change the workplace.

TP: The way we describe it is ‘human-led, technology powered’. Recently, the latest GPT models scored in the 90th percentile on the bar exam. This achievement alone is a powerful example of the capability of these models - they’re exceptional, but imperfect. In the same way that you wouldn’t rely on information from another person without validating it yourself, you shouldn’t rely on information from a model without understanding and validating it. For most use cases, this means we should be thinking about large language models as a tool that equips a person to do their job, as opposed to a tool that removes humans from a process entirely.

NED: How can we ensure that the models and algorithms that are being used are reliable and can be realistically implemented? 

TP: Firstly, we’re starting to see a trend where some tech multinationals are building trust in AI by open sourcing models and algorithms. For example, Twitter open sourced the models and algorithms behind its personalised recommendations, making that information available to the public to resolve some of the trust issues associated with ambiguity. In my view, we’ll likely start to see a shift towards more transparency in the way that models work.

In the absence of complete model transparency, there are a couple of other mechanisms that come into play. For example, you may not have visibility of how a cloud service provider manages their security controls, but you can get comfort by reviewing a SOC 2 or SOC 3 report. We’re also seeing grassroots conversations around the concept of trust marks and certification for AI. A model may still be a black box to a customer, but a stamp confirming it has been independently validated and complies with best practice standards may give you comfort that it can be relied on for certain types of use cases.

Lastly, for some organisations you can still get a level of comfort by testing inputs and outputs. Even if you don’t know how the model works, using a set of known inputs you can build confidence around where the model performs well and where it may not. At PwC, we take that approach to generate guidance on our AI tools and say, “Here are the scenarios that have been validated, where you can use and rely on this model’s outputs, and here are the scenarios where we don't support the use of this model.”
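
As an illustration of that input and output testing approach, the sketch below runs a set of known inputs through a model and measures where it performs reliably. It assumes a hypothetical model_predict wrapper (text in, text out) around whichever model or API is being assessed; the test cases and pass threshold are purely illustrative.

  # A minimal sketch of input/output validation for a black-box model.
  # `model_predict` is a hypothetical wrapper around whichever model or API is
  # being assessed; the cases and threshold below are examples only.

  TEST_CASES = [
      # (known input, substring expected in the output)
      ("Invoice INV-10234, total $1,200", "INV-10234"),
      ("Reference INV-99871, due in 30 days", "INV-99871"),
  ]

  def evaluate(model_predict, cases=TEST_CASES, threshold=0.95):
      """Run known inputs through the model and summarise where it performs well."""
      passed = sum(
          1 for prompt, expected in cases
          if expected in model_predict(prompt)  # simple containment check
      )
      accuracy = passed / len(cases)
      return {"accuracy": accuracy, "within_supported_scenarios": accuracy >= threshold}

The results of a harness like this are what let you publish guidance of the form “validated for these scenarios, not supported for those”.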

NED: We’ve talked a lot about going external and getting third party audits and accreditations. However, I think fundamentally you want an organisation to be prepared and have a good understanding of its own framework to an extent which can then be audited. How do we govern explainability and what should directors be looking for as companies start to test and build these algorithms?

TP: The best practice frameworks are evolving quickly, but we haven’t yet reached a point of convergence towards specific standards and many of the frameworks are still in their infancy. The depth of information that sits behind them in terms of practitioner guidance and practical implementation tools just isn’t there yet. 

Specifically on your question about explainability - one of many key characteristics of trustworthy AI - the approach we are taking at PwC is to put forward a set of design principles and constraints that we expect all AI models to meet, captured as part of the design documentation when the model is developed. Depending on the nature of a use case, that includes limiting ourselves to data science techniques that are fully explainable, or creating an explainability engine that retrospectively reviews and interprets the black box and explains how the model is likely to have reached its answer. There is a set of practical principles we have come up with that guide a data scientist on what would be acceptable within our risk appetite, and we include or exclude different principles depending on the use case.

As an example, if the use case is finding invoice numbers on an invoice and matching them against another system, we can be more confident in the answer because it is reconciled against that other system. The explainability guardrails can be lighter there, because they are offset by the fact that we are validating the data and that humans are involved in the review. For a use case requiring high precision with limited human review and oversight, explainability is a non-negotiable for us. Our clients have taken a similar approach, outlining a set of principles that need to be adhered to. It’s about aligning people's individual risk appetites to the organisation’s risk appetite through specific design principles.
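
One common way to approximate the ‘explainability engine’ idea described above is a global surrogate: fit a simple, interpretable model to mimic the black box’s decisions and then inspect its rules. The sketch below assumes scikit-learn and uses synthetic data in place of a real black-box model; it illustrates the general technique only, not PwC’s implementation.

  # A sketch of a global surrogate explanation, assuming scikit-learn and NumPy.
  # Synthetic labels stand in for the predictions of a real black-box model.
  import numpy as np
  from sklearn.tree import DecisionTreeClassifier, export_text

  rng = np.random.default_rng(0)
  X = rng.normal(size=(1000, 4))                 # stand-in feature matrix
  y_black_box = (X[:, 0] + 0.5 * X[:, 2] > 0)    # stand-in for black_box.predict(X)

  # Fit an interpretable surrogate to mimic the black box, then read its rules
  # to see how the answers are likely being reached.
  surrogate = DecisionTreeClassifier(max_depth=3).fit(X, y_black_box)
  print("Fidelity to black box:", surrogate.score(X, y_black_box))
  print(export_text(surrogate, feature_names=[f"feature_{i}" for i in range(4)]))

A surrogate with low fidelity is itself a useful signal: it suggests the black box’s behaviour cannot be summarised simply and may warrant tighter guardrails or more human review.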

JB: Australia is trailing other parts of the world, where explainability has been a key component of data protection regulation for a number of years. Black-box approaches, where it’s not possible to explain how a decision is ultimately reached, have been a key element of concern. The risk driver behind that is that data scientists do things in different ways and may not document how their solutions are developed. Without a formalised methodology, we’re more at risk of reaching conclusions that aren’t in the best interests of the consumer or that violate privacy and data protection laws. It’s not just a regulatory question, but also one of the mission, culture and values of an organisation, and how it wants to treat its customers. It’s incumbent on us to ensure that practitioners have a defined way of working, so we can have confidence that those models aren’t biased, are running on trusted data and will continue to deliver the right answers and recommendations to assist our people and deliver the desired outcomes.

NED: Which industries or functions are you seeing generative AI being used in a beneficial way?

TP: One of the big use cases across industries is understanding and interpreting large amounts of data in an efficient way - going beyond knowledge retrieval to draw conclusions and provide logical reasoning over large sets of data. As an example, if you have a policy or piece of legislation, how can you understand the linkages? With GPT models, we’ve run a successful proof of concept with a client in which the model performed an analysis with very limited input and instruction, and provided guidance back, including its rationale for the proposed changes to the policy. That’s one example of a core theme we’re seeing: the ability to understand, as well as simplify, large amounts of data.

Another use case is ideation: providing a way for people to brainstorm ideas, create a first draft or prepare a piece of marketing material for an event.

NED: With APRA and ASIC turning their attention to questioning organisations about their AI governance practices and risk appetite, how can you determine whether you’re doing all of the right things to meet requirements for financial service organisations?

JB: There’s a lot of active interest in regulation and enforcement in this space, but it takes time. One of the key challenges we see in how this legislation is being brought forward is the practical element of having a prescriptive set of expectations on how it’s to be implemented, and then having an effective regulatory oversight function. When we look at organisations like APRA, who are responsible for regulating the whole financial services industry, they are very forthright in their expectations around information security. APRA has published written guidelines on data risk management for over a decade; however, these are yet to be formalised into regulatory standards that are widely enforced. As it stands today, APRA does not have an active role in enforcing adherence to privacy and related AI concerns.

That being said, we also acknowledge that some regulatory frameworks have taken that next-level approach. One example is the Consumer Data Right for open banking, where legislation has embedded the concept of data standards and the need for external assurance over the way data is handled when it is shared under the consumer data framework. The regulatory ecosystem is still coming to grips with how to enforce these laws, and until that is in place, I don't think you can take prescriptive guidance from it. This means we still need to go back to risk appetite as the key driver, along with the desire and willingness to leverage internal control functions to ensure we’re operating within that appetite. This isn’t just a regulatory issue: it can impact reputation, revenue and ultimately trust in an organisation going forward. We cannot rely on the regulatory angle as the key driver.

Contact us

Tom Pagram

Partner - AI, PwC Australia

Tel: +61 451 470 509

Jon Benson

Partner, Assurance - T&R Cyber, PwC Australia

Tel: +61 438 565 299
