
ACM Pushes For Safety Culture In Algorithm Design


The ACM has called on developers to foster a safety culture in algorithm development and to promote internal and external testing.

Written By David Curry
Feb 14, 2023

The Association for Computing Machinery (ACM) has called for safety culture improvements in academia, business, and government in relation to algorithm design and operation.

This is a hot-button issue: the White House, Congress, the European Parliament, and the Indian government are all publishing or preparing to publish policy frameworks and regulatory standards for algorithm developers, which aim to provide more transparency into how algorithms work and to expose any intentional or unintentional flaws and biases to the public.

See Also: Algorithmic Destruction: The Ultimate Consequences of Improper Data Use

Some of the legislation also aims to ban certain types of algorithms that interfere with high-risk decisions. The European Union's draft legislation specifically targets algorithms used by law enforcement and prison systems.

According to ACM's report on safer algorithmic systems, the ubiquity of algorithmic systems has created serious risks that, outside of a few notable industries, are not being adequately addressed by stakeholders or governments.

If left unaddressed, public opinion of these algorithms will continue to decline, slowing innovation as the public objects to the further rollout of technologies such as self-driving vehicles and automated medical procedures. A lack of regulation or robust testing can also let unintentional biases leak into the end product, which can have serious consequences if the algorithm is responsible for medium- to high-risk decisions.

“Reducing risks from algorithmic systems will require commitment by all stakeholders to more safety-oriented approaches sensitive to organizational and cultural considerations as well as technological ones,” said lead author of the report, Ben Shneiderman. “Such commitment must inform the development of algorithms and their operating environments from the outset, and of the larger software-driven systems into which algorithms are integrated. Drawing on human-centered social systems scholarship, safety research and policy making must foster adoption of a safety culture within relevant organizations.”

Human-centered research and development is a key theme of the report, which highlights the aviation and medical industries for their adoption of safety cultures: senior management provides clear procedures, and significant investments ensure that metrics are tracked and data is collected when a mistake happens, to inform future development.

The report also suggests borrowing methods from cybersecurity, such as red-team tests, in which experts are brought in to try to break systems, and bug bounties offered to third parties who can penetrate a system and cause major failures. The same approaches could be adopted for algorithm development, with experts or third parties testing an algorithm to ensure it rejects poor-quality data or declines to provide an answer when there isn't enough evidence.
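The idea of declining to answer on thin evidence can be made concrete with a small sketch. The function below is purely illustrative (the report does not prescribe an implementation): a hypothetical wrapper around a classifier's output scores that abstains when the top score falls below a confidence threshold, or when the input is empty or malformed.

```python
# Hypothetical sketch of an "abstaining" prediction wrapper, illustrating
# the report's suggestion that algorithms should reject poor-quality input
# and decline to answer when evidence is insufficient. The scores and
# threshold here are illustrative, not from the ACM report.

def predict_with_abstention(scores, threshold=0.9):
    """Return the top label only if its score clears the confidence
    threshold; otherwise return None so a human can decide instead."""
    if not scores:
        return None  # reject empty or poor-quality input outright
    label, confidence = max(scores.items(), key=lambda kv: kv[1])
    if confidence < threshold:
        return None  # not enough evidence: abstain rather than guess
    return label

# A confident prediction passes; a borderline one abstains.
print(predict_with_abstention({"approve": 0.97, "deny": 0.03}))  # approve
print(predict_with_abstention({"approve": 0.55, "deny": 0.45}))  # None
```

In a red-team exercise, testers would probe exactly these paths: feeding the system ambiguous or adversarial inputs and checking that it abstains rather than emitting a confident but unfounded decision.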

The ACM report concludes by calling on the industry at large to promote safer algorithmic systems by improving testing, audits, monitoring, and governance. Organizations are also called on to foster safety cultures rather than the "move fast and break things" culture that has been prominent in the tech scene over the past 20 years. Internal and external oversight of algorithms is necessary to promote safer algorithmic decisions, and organizations need to do more to put these systems in place.

David Curry

David is a technology writer with several years' experience covering all aspects of IoT, from technology to networks to security.

