‘That’s a disaster’: Only 17% of government leaders have plans to improve their AI skills
“That’s a disaster,” Apolitical’s Robyn Scott said of the low intent to upskill around AI at Fortune’s Brainstorm AI conference.

Since AI exploded onto the public stage with the unveiling of OpenAI’s ChatGPT in late 2022, the question of how regulation can channel its benefits and control its excesses has sparked heated debate.
Central to this debate has been the general lack of AI knowledge among political leaders, and the problems that gap exposes when governments set themselves to the task of writing regulation.
It’s hard to know how, and how much, to regulate an entirely new industry whose own leaders struggle to explain how large language models (LLMs) like ChatGPT arrive at the answers they give.
Robyn Scott, the cofounder and CEO of Apolitical, a learning platform for government employees, told Fortune’s Brainstorm AI conference in London that only 17% of government leaders Apolitical spoke to had plans to improve their AI skills.
“That’s a disaster. We can’t have our governments, our regulators, our implementers that poorly skilled in AI,” she said.
So far, Europe has made a splash with its AI Act, while countries like Singapore have also made strides in creating safeguards. The U.K., for its part, has tasked its sector regulators with overseeing AI.
However different parts of the world approach regulation, being crystal clear about the rules will be instrumental in aligning lawmakers and companies on all things AI, Apolitical’s Scott said.
“Clarity is half the battle,” said Scott at Brainstorm AI on Tuesday. “The burden of understanding is immense … governments are huge organizations — there isn’t clarity on how to interpret the top-level edict.”
Scott’s comments followed a discussion about how companies and public agencies approach regulation. Europe has earned a reputation for focusing on broad rules, which it argues have helped it set the highest standards on everything from competition to food safety and AI, but which critics say stifle innovation and favor large companies.
The U.K. thinks about regulation differently: instead of an overarching framework, its sector-specific regulators oversee AI as they would any other matter. This approach rests on the premise that each sector’s regulator is best placed to gauge responses to new developments.
Laura Gilbert, senior AI director at the nonprofit Tony Blair Institute and former head of an incubator at 10 Downing Street, said a sector-specific approach is better suited to the complex world of AI.
“I have real concerns about regulation that is too specific. That means that you’re then not future-proof, and it’s going to take years to rewrite it,” Gilbert said during the conference.
While AI regulation could hike compliance and training costs for companies, it may also open up opportunities for those who align with it early in the game.
The key still comes back to having clear regulation—governments are “getting better at cascading legislation” with training, Apolitical’s Scott said.
She gave the example of the UAE, which is making an introductory AI course mandatory for all government employees, having recently done the same for schools. The missing piece? Sustaining that upskilling consistently at scale.
“We’ve got very good adoption of pilots, less at-scale work,” Scott said.
It might take years before countries and individual companies fine-tune what works for them in the regulatory realm. Experts at the conference agreed there is not one approach but many, and that the right ones help companies more than they hurt them.
Other highlights
“If a company is global, that’s where the agility comes in. And I think for us, when we innovate, we put regulators, innovators, engineers, clinicians, all in the same room. We talk about it from the beginning, so we don’t end up building in silos without considering all of the nuances that come in.”—Betsabeh Madani-Hermann, global head of research, Philips.
“You can’t have a peanut butter approach. I’m a big believer in context. So my recommendation would be to focus on use case-specific partners and recommendations. And I think the second thing is, there is a massive need for standardization and interoperability, which I have not seen happen at scale. So I would encourage thinking about, how do you bring this very fragmented ecosystem together?”—Navrina Singh, founder and CEO of Credo AI, on lessons the U.K. could learn from the EU’s regulatory rollout.
“There’s a false dichotomy, which has dogged us for decades in pretty much all democracies. This sense that you need to have regulation or innovation.”—Lord Chris Holmes of Richmond, MBE, on the misconceived trade-off when regulating AI.