How to be an AI leader, not an AI washout

  • Not every AI startup is as artificial as it claims — in reality, many AI companies don’t use AI at all.
  • For those making AI deals, there is a risk of not getting the promised benefits of the company or technology being acquired.
  • Even when a startup has genuine AI at its core, that AI still needs to be vetted for responsible use.

It sure sounded promising: a startup that claimed it had artificial intelligence (AI) technology that could automate the development of mobile apps.

The problem?

It may not be true. According to The Wall Street Journal, the company (which recently raised nearly $30 million from AI-focused VC funds) may hardly have any AI capabilities or expertise.1

It’s a common scenario. According to a study of 2,830 European startups by London-based MMC Ventures, 40 percent of those that claimed to be “AI startups” had barely any AI at all.2

This phenomenon is called ‘AI washing’: companies and people misleading others about their artificial intelligence capabilities. In my work at the intersection of marketing and emerging technologies, I’m seeing it more and more often.

Deals and leaders: At risk

AI washing probably got its name as a variant of ‘greenwashing’ — presenting corporate activities as sustainable when they’re not — itself a variation on whitewashing, whose historical meaning was to conceal or cover something with white paint. For corporate leaders, the risks of AI washing may be even greater than those of false sustainability claims.

One risk is for dealmakers. With skyrocketing valuations for companies that claim AI capabilities, you’d best do your AI due diligence to make sure you’re getting what you pay for.

Beyond deals, there’s another risk for companies that aggressively roll out AI and tell their business partners and clients about it. It could be devastating for your business and brand if anyone within your ranks misled others about your AI maturity.

Misleading promises can happen more easily than you think. AI is hard. Unless you’ve got a doctorate in computer science, how can you be sure that your AI tools are really doing everything that your tech and marketing teams claim they are?

Maybe there’s no conscious intent to mislead at all. Maybe someone on the tech team is just being too optimistic. Maybe someone in marketing is pushing the envelope a little too far in the hope of a catchy campaign, or simply misunderstood the technology.

Being responsible with AI

Whether making AI deals or building AI organically, the imperative is the same: Make sure that you’re always dealing with responsible AI.

Responsible AI means that all your stakeholders — customers, employees or communities — can be confident that your AI really is doing what it’s supposed to, in a way that benefits them, because you’ve got these five pillars right:

  1. Governance: A cross-functional team must supervise AI across the enterprise for the full AI life cycle.
  2. Explainability and interpretability: Even when AI does its work behind the scenes, it should be possible to lift up the hood and see what’s going on.
  3. Bias and fairness: It’s critical to have controls to spot bias and procedures to fix it fast.
  4. Robustness and security: You’ll want to monitor not just for cyber threats, but also for how AI can ‘naturally’ degrade, or become less accurate as its data or models age.
  5. Ethics and regulations: It’s not enough to obey the law — especially since many laws are still evolving. AI must align both with corporate principles and wider ethical issues.

If you have these pillars in place internally, you can be confident that your AI is a source of trust, not risk. And when evaluating potential targets, dealmakers should examine these five pillars in relation to their own needs and plans.

For example, you may find that a small company’s AI is real and robust enough for its own limited needs. But it might not work or be secure if you try to scale it up for your own global operations.

If a startup lacks good governance or says it’s impossible to explain how its AI works, those could be red flags, telling you to walk away.

Yet, if you poke a little deeper and find that the tech is genuine, these flaws could be opportunities for you to acquire it and take it to another level.

So if the bad news here is that a lot of people and companies are talking a better AI game than they can really play, there’s good news too. If you know what to look for, you can avoid the dangers and be an AI leader, not an AI washout.

This is a modified version of an article previously published on Forbes.