The Alan Turing Institute has been chosen to pilot the UK AI Standards Hub, which will provide guidance and best practices for UK companies working with AI.
The Alan Turing Institute has been chosen as the organization to pilot a UK government initiative aimed at shaping standards and best practices in artificial intelligence.
The UK AI Standards Hub will be run by the Turing Institute, with support from the British Standards Institution and the National Physical Laboratory. It is also backed by the Department for Digital, Culture, Media and Sport and the Office for AI, both part of the UK government.
“The hub will ensure that industry, regulators, civil society and academic researchers have the tools and knowledge they need to contribute to the development of standards,” said Dr Florian Ostmann, head of AI governance and regulatory innovation at the Alan Turing Institute.
Alongside providing tools, resources, and guidance to interested parties in the UK, the AI Standards Hub will also host events, talks, and community-building activities designed to improve understanding and adoption of artificial intelligence in the UK.
The UK may be dwarfed in AI development and spending by the US and China, but it has a flourishing scene, especially in the startup world. OneTrust, Graphcore, and Signal AI have all received hundreds of millions in funding and billion-dollar valuations. The UK is also home to several autonomous vehicle startups, such as Oxbotica, a spinout from Oxford University.
“AI is not an industry or single product, it is an enabler that drives every aspect of modern life,” said Damian Collins, parliamentary under-secretary of state for tech and the digital economy. “The Hub will work to improve AI standards adoption and contribute to their development internationally, bringing an additional layer of confidence to those using and engaging with the technology.”
The UK government is taking a risk-based approach to AI regulation, rather than outright banning certain technologies because of their presumed risks. This is slightly different from the EU’s planned laws, which are set to ban AI systems deemed an unacceptable risk and heavily regulate high-risk systems.