Algorithmic Destruction: The Ultimate Consequences of Improper Data Use


The FTC is serious about data being properly collected and used. In several cases, it has ordered what is called the algorithmic destruction of AI systems built on ill-gotten data.

Imagine you’ve built a wonderful new office building on a prime plot of land, only to find out the land was acquired illegally or was not zoned for commercial use. Authorities have the power to fine you and to force you to tear down the structure built on that land. A very similar approach, dubbed algorithmic destruction, is now being applied to AI algorithms built on ill-gotten or improperly used data.

A recent example of this extreme measure occurred earlier this year. The U.S. Federal Trade Commission (FTC) issued a settlement order against WW International (the company previously known as Weight Watchers) and its subsidiary, Kurbo.

The FTC alleged that the company had marketed a weight loss app for use by children as young as eight and then collected their personal information without parental permission. This practice violated the FTC’s Children’s Online Privacy Protection Act Rule (COPPA Rule), which requires that websites, apps, and online services that are child-directed, or that knowingly collect personal information from children, notify parents and obtain their consent before collecting, using, or disclosing personal information from children under 13.
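In engineering terms, the COPPA Rule’s consent requirement amounts to a gate in front of every data-collection path. Here is a minimal sketch of such a gate in Python; the names (User, may_collect_personal_info) are hypothetical and invented for illustration, not drawn from any FTC guidance or from the apps discussed in this article.

```python
from dataclasses import dataclass

COPPA_AGE_THRESHOLD = 13  # The COPPA Rule covers children under 13.

@dataclass
class User:
    user_id: str
    age: int
    has_verified_parental_consent: bool = False

def may_collect_personal_info(user: User) -> bool:
    """Return True only if collecting this user's personal data is permitted."""
    if user.age < COPPA_AGE_THRESHOLD:
        # Under-13 users require verifiable parental consent first.
        return user.has_verified_parental_consent
    return True

# Usage: collection stays blocked until consent is recorded.
child = User(user_id="u42", age=9)
assert not may_collect_personal_info(child)

child.has_verified_parental_consent = True
assert may_collect_personal_info(child)
```

The point of routing every collection path through one check is auditability; an app that scatters ad hoc age checks is exactly the kind of system that ends up collecting data it should not.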

“Our order against these companies requires them to delete their ill-gotten data, destroy any algorithms derived from it, and pay a penalty for their lawbreaking,” said Federal Trade Commission Chair Lina M. Khan.

Specifically, the settlement order required WW International and Kurbo to delete the personal information illegally collected from children under 13, destroy any algorithms derived from that data, and pay a $1.5 million penalty. In short, it was another case of algorithmic destruction.
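Complying with a destruction order like this presupposes that a company can tell which models were derived from which data. The sketch below shows one way that bookkeeping could look; LineageRegistry and all dataset and model names are hypothetical, invented for illustration, and real ML lineage tooling is considerably more involved.

```python
from collections import defaultdict

class LineageRegistry:
    """Track which models were trained on which datasets so that a
    deletion order can cascade from tainted data to derived models."""

    def __init__(self) -> None:
        # dataset id -> set of model ids trained on it
        self._models_by_dataset: defaultdict[str, set[str]] = defaultdict(set)

    def record_training(self, model_id: str, dataset_ids: list[str]) -> None:
        """Log a training run and the datasets that fed it."""
        for dataset_id in dataset_ids:
            self._models_by_dataset[dataset_id].add(model_id)

    def destruction_targets(self, tainted_dataset_id: str) -> set[str]:
        """Return every model derived from the tainted dataset."""
        return set(self._models_by_dataset[tainted_dataset_id])

registry = LineageRegistry()
registry.record_training("recommender-v1", ["child-signups", "adult-signups"])
registry.record_training("churn-model", ["adult-signups"])

# An order against the improperly collected dataset reaches only the
# models actually derived from it.
assert registry.destruction_targets("child-signups") == {"recommender-v1"}
```

Without this kind of provenance record, the only safe way to satisfy a destruction order may be to delete every model that could plausibly have touched the data.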

See also: AI Bias: FTC Cautions Businesses and Offers Guidance

Not an isolated incident of algorithmic destruction

This is not the only time the FTC has exerted such power. In 2021, Everalbum settled Federal Trade Commission allegations that it deceived consumers about its use of facial recognition technology and its retention of photos and videos of users who deactivated their accounts.

The FTC’s order required Everalbum to forfeit “the fruits of its deception,” according to FTC Commissioner Rohit Chopra. Specifically, the company had to delete the facial recognition technologies enhanced by any improperly obtained photos.

One interesting aspect of this settlement is that Commissioner Chopra’s statement noted that the FTC had previously voted to allow data protection law violators to retain algorithms and technologies that derive value from ill-gotten data. In this case, however, the Commission made a “course correction,” according to Chopra.

Another notable case was settled in 2019. In that case, the FTC alleged that app developer Aleksandr Kogan worked with Cambridge Analytica and its former CEO Alexander Nix to enable Kogan’s GSRApp to collect Facebook data from app users and their Facebook friends. The FTC alleged that app users were falsely told the app would not collect users’ names or other identifiable information. The GSRApp, however, collected users’ Facebook User IDs, which connect individuals to their Facebook profiles.

As part of the settlement, the people involved were prohibited from making false or deceptive statements regarding the extent to which they collect, use, share, or sell personal information, as well as the purposes for which they collect, use, share, or sell such information. In addition, they were required to delete or destroy any personal information collected from consumers via the GSRApp and any related work product that originated from the data.

See also: Responsible AI: Balancing AI’s Potential with Trusted Use

EU exploring its options

The European Union has been working on a draft regulation aimed specifically at the development and use of AI. The proposed regulation would apply to any AI system used within the European Union or whose outputs are used there.

The regulation puts AI systems into three categories: unacceptable-risk AI systems, high-risk AI systems, and limited- and minimal-risk AI systems. According to McKinsey, these systems can be defined as follows:

  • Unacceptable-risk AI systems include subliminal, manipulative, or exploitative systems that cause harm; real-time, remote biometric identification systems used in public spaces for law enforcement; and all forms of social scoring, such as AI or technology that evaluates an individual’s trustworthiness based on social behavior or predicted personality traits.
  • High-risk AI systems include those that evaluate consumer creditworthiness, assist with recruiting or managing employees, or use biometric identification, as well as others that are less relevant to business organizations. Under the proposed regulation, the European Union would review and potentially update the list of systems included in this category on an annual basis.
  • Limited- and minimal-risk AI systems include many of the AI applications currently used throughout the business world, such as AI chatbots and AI-powered inventory management. 

The EU regulation is not finalized. However, it appears there will be different levels of fines based on the potential risk and the impact such systems would have on the public. Currently, there does not appear to be an algorithmic destruction provision in the plans.


About Salvatore Salamone

Salvatore Salamone is a physicist by training who has been writing about science and information technology for more than 30 years. During that time, he has been a senior or executive editor at many industry-leading publications including High Technology, Network World, Byte Magazine, Data Communications, LAN Times, InternetWeek, Bio-IT World, and Lightwave, The Journal of Fiber Optics. He also is the author of three business technology books.
