From baseline to breakthrough.

Discover your AI potential. Empower your workforce.


For many organisations, AI promises growth, productivity, and new ways of working. Turning that potential into consistent enterprise-wide impact, however, depends on something more fundamental than technology alone—workforce readiness. 

Leaders are energised by the opportunity AI creates, but many organisations are still building a clear picture of how confidently their people are applying it in practice. Without that visibility, investment decisions become harder to prioritise and adoption harder to scale in a coordinated way. PwC’s 29th CEO Survey found fewer than one in five organisations report strong AI foundations. Despite strong optimism, adoption remains uneven: only 14% of employees use AI daily, and just 56% learned new skills last year. 

Closing this ambition–adoption gap represents one of the biggest opportunities organisations have today to accelerate value from AI and support workforce transformation at scale. 

How can leaders get a clear view of AI readiness and fix gaps before they become roadblocks? PwC’s AI Skills Scanner gives you the baseline and the build. This rapid, data-driven diagnostic offers leaders a clear view of workforce AI capability, from practical skills to responsible AI usage. But its real value lies in what comes next. The insights from the AI Skills Scanner enable targeted upskilling, risk mitigation and, critically, informed workforce and operating model transformation, helping organisations activate AI in ways that align to strategy, protect trust, and unlock sustainable value at scale.


“This is why understanding AI capability is a strategic imperative, not a learning exercise. Leaders need a fact base that shows where the organisation is truly ready to scale AI, where it is not, and what must change across skills, governance, operating models and culture to move forward safely and effectively.”

Emma Hardy, Partner, Workforce

Understanding AI workforce readiness: Insights from a real capability assessment

AI workforce readiness refers to an organisation’s ability to apply artificial intelligence safely, effectively and at scale through the skills, behaviours and confidence of its workforce.  

How do you know if your workforce is truly AI-ready? Rather than relying on assumptions or anecdotal evidence, organisations require measurable data on workforce AI capability. 

What is an AI capability assessment?

An AI capability assessment provides a structured evaluation of how well employees can apply AI in real work situations. Rather than relying on self-reported confidence, it evaluates practical skills: how employees apply AI tools, solve problems with AI, and use AI responsibly in everyday work. Unlike many capability assessments that focus only on technical skills, the AI Skills Scanner also evaluates how responsibly employees apply AI in real work scenarios.

In a recent client case study, two-thirds of employees completed the 10–15 minute AI Skills Scanner, producing a clear baseline of AI readiness. The results revealed that:

  • 62% of employees were at a foundational level, able to use basic AI tools 
  • Only 12% demonstrated advanced capability, such as designing AI workflows 
  • Fewer than 40% could complete more complex AI scenarios 

Insights like these transform vague assumptions about ‘AI readiness’ into measurable data. They show how AI skills vary across teams and roles, and provide a factual starting point for workforce transformation. 

What does an AI capability assessment reveal?

A well-designed AI skills assessment does more than measure knowledge. It reveals how AI capability is distributed across an organisation—and where the biggest barriers to adoption exist. 

In the same organisation (a large client service firm), deeper analysis exposed a clear ‘proficiency cliff’. Employees performed strongly on basic AI tasks, but capability dropped sharply when scenarios required more advanced problem-solving.

The assessment highlighted several critical gaps: 

  • Heavy reliance on pre-built AI tools, with limited capability to design or build solutions internally 
  • Responsible use was generally strong in simpler scenarios, but flags increased as scenario complexity rose 
  • A junior–senior skills gap, where mid-level managers had the strongest capability while junior staff lagged behind 

These insights are difficult to uncover through surveys or assumptions alone. By segmenting results by role, team and level, organisations can see exactly where responsible use of AI is strong and where it requires uplift, a crucial step for any workforce AI transformation strategy.
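Purely as an illustration of the segmentation idea described above (the levels, scores and records below are hypothetical, not actual Scanner data or logic), grouping assessment results by organisational level can be sketched as:

```python
from collections import defaultdict

# Hypothetical assessment records: (org_level, score out of 100).
# Level names and scores are illustrative only.
results = [
    ("junior", 42), ("junior", 38), ("manager", 71),
    ("manager", 65), ("senior", 55), ("senior", 49),
]

def average_by_segment(records):
    """Group scores by segment and return the mean score per segment."""
    buckets = defaultdict(list)
    for segment, score in records:
        buckets[segment].append(score)
    return {seg: sum(s) / len(s) for seg, s in buckets.items()}

averages = average_by_segment(results)
# In this toy data, managers average higher than juniors, mirroring
# the junior-senior skills gap described in the case study.
```

The same grouping applied to team or role fields would surface the cohort-level patterns a flat organisation-wide average hides.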

Why does measuring AI workforce capability matter?

For organisations investing in AI, workforce capability is often the biggest barrier to adoption. Without a clear understanding of skills, companies risk over-investing in technology while under-investing in the people needed to use it effectively. 

AI capability assessments provide the insight needed to move from experimentation to real value creation. In this case, the organisation used its assessment results to implement a targeted capability roadmap, including: 

  • Practical AI training programs focused on real client delivery workflows 
  • Clear AI use standards and guardrails to support responsible AI adoption 
  • AI coach roles to help teams apply AI tools in everyday work 
  • Leadership AI literacy programs to drive adoption from the top 

Within months, teams began experimenting with AI more confidently and reliance on a small group of specialists began to decline. For leaders, measuring workforce capability provides a clear baseline, highlights risks early, and creates a practical roadmap for scaling AI across the organisation. 

The AI Skills Scanner: Measuring and building AI workforce capability

So what is the solution? PwC’s AI Skills Scanner is a rapid, robust way to measure the baseline of your organisation’s AI skills and readiness. It provides a comprehensive diagnostic assessment of workforce AI capability and the maturity of responsible use, all in a user-friendly format.

A quick, adaptive survey to measure workforce AI capability

Time is precious, so the AI Skills Scanner is designed to be fast and user-friendly. It takes 10–15 minutes to complete and uses advanced survey logic with real-time adaptation, meaning each participant gets questions suited to their proficiency level. This design keeps users engaged and helps maintain strong participation rates: staff learning the basics of AI are not overwhelmed, while more advanced users are appropriately challenged. This turns vague notions of ‘AI readiness’ into quantified metrics you can track.
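To make the adaptation idea concrete, here is a minimal sketch. The bands, the one-step movement rule and the answer sequence are all hypothetical assumptions for illustration, not the Scanner’s actual survey logic:

```python
# Competency bands ordered from easiest to hardest question pool.
# Band names follow the article; the selection rule is assumed.
BANDS = ["foundational", "emerging", "practicing", "fluent"]

def next_band(current_index, answered_correctly):
    """Move one band up on a correct answer, one band down otherwise,
    clamped to the available range."""
    step = 1 if answered_correctly else -1
    return max(0, min(len(BANDS) - 1, current_index + step))

# A participant starts at the easiest band; each answer adjusts the
# difficulty of the next question they see.
idx = 0
for correct in [True, True, False, True]:
    idx = next_band(idx, correct)
```

Under this toy rule, consistently strong answers climb toward ‘fluent’ questions, while a participant who struggles settles at ‘foundational’ ones, which is the engagement effect the paragraph above describes.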


Understand how employees are using AI and identify potential risks

Knowing skill levels isn’t enough; companies also need confidence that AI is being used safely and responsibly. The AI Skills Scanner has built-in responsible use checks, assessing whether employees understand and apply AI responsibly in their work. You receive feedback on potential risk areas: it can reveal, for example, whether employees trust AI outputs too readily or misuse AI tools in ways that could pose quality or compliance risks. These insights help you address vulnerabilities early by strengthening AI governance and training on responsible use, so you can close compliance and policy gaps proactively. Essentially, the AI Skills Scanner serves as a ‘responsible AI thermometer’, gauging how AI is used across your workforce and giving leaders insight into whether their people are using AI in a safe, appropriate way.
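As a rough sketch of how such flag-based feedback can be aggregated (the flag names echo the report’s Accuracy, Appropriateness and Completeness checks, but the response log and the rate calculation are hypothetical illustrations, not the Scanner’s implementation):

```python
from collections import Counter

# Hypothetical answer log: each entry is the set of risk flags a single
# response triggered (an empty set means no flag was raised).
responses = [
    set(), {"accuracy"}, set(), {"appropriateness", "completeness"},
    set(), set(), {"accuracy"}, set(), set(), set(),
]

def unfavourable_rates(logged):
    """Share of responses that triggered each flag type."""
    counts = Counter(flag for flags in logged for flag in flags)
    total = len(logged)
    return {flag: n / total for flag, n in counts.items()}

rates = unfavourable_rates(responses)
# A high 'accuracy' rate would suggest over-trust in AI outputs;
# a high 'appropriateness' rate would point to governance gaps.
```

Tracking these rates per question or per cohort is what turns individual answer choices into the organisation-level risk picture described above.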

Compare AI maturity and track progress over time

The AI Skills Scanner isn’t just a one-off diagnostic; it’s built for continuous improvement. You can re-run the assessment periodically (for example, every six or twelve months) to track improvements in your workforce’s AI maturity and measure the return on your upskilling and transformation initiatives. The initial assessment establishes a baseline, and the interactive results dashboard lets you compare future assessments against it. Over time, you’ll clearly see growth in capability and can spot new gaps as your AI usage evolves. This ongoing measurement creates a feedback loop, allowing leaders to monitor how well their AI talent strategy is working and to make data-informed choices. In practice, these trends often signal a capability inflection point: AI use may be high, but without targeted intervention the benefits plateau at individual productivity, and many organisations are left wondering why they aren’t seeing a meaningful return on their AI investment in the form of productivity uplift and value creation. With insights generated from the AI Skills Scanner, leaders can turn benchmark data into clear priorities for where to intervene.
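The baseline-versus-re-run comparison can be sketched as follows. The band names follow the article, but the percentages and the delta calculation are hypothetical illustrations of the idea, not Scanner output:

```python
# Hypothetical share of the workforce (in %) at each competency band
# in the initial baseline assessment and a later re-run.
baseline = {"foundational": 62, "emerging": 20, "practicing": 12, "fluent": 6}
rerun    = {"foundational": 48, "emerging": 28, "practicing": 17, "fluent": 7}

def band_deltas(before, after):
    """Percentage-point change per band between two assessments."""
    return {band: after[band] - before[band] for band in before}

deltas = band_deltas(baseline, rerun)
# A shrinking foundational band alongside growth in the higher bands
# suggests upskilling is shifting capability upward between runs.
```

Comparing each re-run against the same baseline in this way is what makes the return on upskilling initiatives measurable rather than anecdotal.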

Interactive dashboard and tailored report to guide targeted upskilling

The AI Skills Scanner delivers actionable intelligence through a dynamic dashboard and a customised report. These provide: 

  • A visual view of capability 'hot spots', highlighting areas of strength and opportunities across different parts of the organisation.  
  • Identification of priority cohorts for training initiatives 

 These insights help organisations identify capability strengths and gaps across workforce cohorts and inform workforce transformation priorities. 

End-to-end support to build an AI-ready workforce

Armed with insights from the AI Skills Scanner, we help leaders turn data into action, building the skills and driving the transformation needed to realise AI’s value.


AI Skills Scanner report: A snapshot

[Report snapshot: division-level executive summaries (Management, Operational, Finance and Governance) showing total responses and the share of respondents at each AI competency band (Foundational, Emerging, Practicing, Fluent) by organisational level; build-stream versus use-stream competency distributions; and Responsible AI results showing per-question flag selections and unfavourable response rates of roughly 14% for each of the Accuracy, Appropriateness and Completeness checks.]

Implementing AI successfully isn’t just about technology; it’s about people. With PwC Australia’s Workforce Transformation and AI experts you get a holistic solution to the AI skills challenge. The AI Skills Scanner is a unique tool that reflects our philosophy: real, data-backed insight is the key to accelerating workforce transformation. We don’t stop at analysis; we bring deep experience to help you transform your workforce.

Discover your AI capability baseline and scale responsibly


Contact us

Meredith Wallace

Director, Workforce, PwC Australia

Emma Hardy

Partner, Workforce, PwC Australia
