ACM Pushes For Safety Culture In Algorithm Design

The ACM has called on developers to foster safety cultures in algorithm development and to promote internal and external testing.

The Association for Computing Machinery (ACM) has called for safety culture improvements in academia, business, and government in relation to algorithm design and operation.

This is a hot-button issue, with the White House, Congress, the European Parliament, and the Indian government all publishing, or preparing to publish, policy frameworks and regulatory standards for algorithm developers. These aim to provide more transparency into how algorithms work and to expose intentional or unintentional flaws and biases to the public.

See Also: Algorithmic Destruction: The Ultimate Consequences of Improper Data Use

Some of the legislation also aims to ban certain types of algorithms that interfere with high-risk decisions. The European Union's draft legislation specifically targets algorithms used by law enforcement and prison systems.

According to ACM's Safer Algorithmic Systems report, the ubiquity of algorithmic systems has created serious risks that, outside of a few notable industries, are not being adequately addressed by stakeholders or governments.

If left unaddressed, public opinion of these algorithms will continue to decline, slowing innovation as the public objects to further rollout of technologies such as self-driving vehicles and automated medical procedures. A lack of regulation or robust testing can also let unintentional biases leak into the end product, which can have serious consequences if the algorithm is responsible for medium- to high-risk decisions.

“Reducing risks from algorithmic systems will require commitment by all stakeholders to more safety-oriented approaches sensitive to organizational and cultural considerations as well as technological ones,” said lead author of the report, Ben Shneiderman. “Such commitment must inform the development of algorithms and their operating environments from the outset, and of the larger software-driven systems into which algorithms are integrated. Drawing on human-centered social systems scholarship, safety research and policy making must foster adoption of a safety culture within relevant organizations.”

Human-centered research and development is a key theme of the report, which highlights the aviation and medical industries for their adoption of safety cultures: senior management provides clarity in procedures, and significant investments ensure that metrics are tracked and data is collected when a mistake happens, informing future development.

The report also suggests borrowing methods from cybersecurity, such as red-team tests, in which experts are brought in to try to break systems, or bug bounties offered to third parties that can penetrate a system and cause major failures. The same approaches could be adopted for algorithm development, with experts or third parties testing whether an algorithm can reject poor-quality data or decline to provide an answer when there isn't enough evidence.
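The report does not prescribe an implementation, but the idea of rejecting poor-quality data and abstaining when evidence is thin can be sketched simply. Below is a minimal, hypothetical Python sketch (all names, checks, and thresholds are illustrative, not from the report): a wrapper that gates any model behind a data-quality check and a confidence threshold, abstaining rather than forcing a decision.

```python
from dataclasses import dataclass
from typing import List, Optional

# Hypothetical cutoff; in practice this would be tuned per application.
CONFIDENCE_THRESHOLD = 0.8


@dataclass
class Decision:
    label: Optional[str]  # None means the system abstained
    reason: str


def input_is_valid(features: List[Optional[float]]) -> bool:
    """Basic data-quality gate: reject missing or wildly out-of-range values."""
    if any(f is None for f in features):
        return False
    return all(-1e3 <= f <= 1e3 for f in features)


def safe_decide(predict, features) -> Decision:
    """Wrap a model so it abstains on bad input or low confidence.

    `predict` is any callable returning (label, confidence); it stands in
    for whatever model an organization actually deploys.
    """
    if not input_is_valid(features):
        return Decision(None, "rejected: poor-quality input")
    label, confidence = predict(features)
    if confidence < CONFIDENCE_THRESHOLD:
        return Decision(None, "abstained: insufficient evidence")
    return Decision(label, f"accepted at confidence {confidence:.2f}")


# Toy stand-in model, for demonstration only.
def toy_model(features):
    score = sum(features) / len(features)
    return ("approve" if score > 0 else "deny", min(abs(score), 1.0))


print(safe_decide(toy_model, [0.9, 0.95, 1.0]))   # confident -> decision
print(safe_decide(toy_model, [0.1, -0.05, 0.0]))  # low confidence -> abstain
print(safe_decide(toy_model, [0.9, None, 1.0]))   # bad data -> reject
```

A red team or bug-bounty participant would then probe exactly these gates: feeding malformed or adversarial inputs to see whether the system abstains as intended or produces a confident wrong answer.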

The ACM report concludes by calling on the industry at large to promote safer algorithmic systems by improving testing, audits, monitoring, and governance. Organizations are also urged to favor safety cultures over the "move fast and break things" mentality that has been prominent in the tech scene over the past 20 years. Internal and external oversight of algorithms is necessary to promote safer algorithmic decisions, and organizations need to do more to put these systems in place.

David Curry

About David Curry

David is a technology writer with several years' experience covering all aspects of IoT, from technology to networks to security.
