Heads up: New government concerns about data collection, data use, and algorithms could impact organizations of all sizes.
Analytics applications built to optimize customer experiences have driven huge IT investments. Along the way, many companies have become adept at applying predictive analytics to Big Data. But it has become clear that some organizations use collected data to manipulate users in biased or downright unethical ways.
Such behavior continues to draw the attention of lawmakers worldwide. In the U.S., the most recent effort is the proposed Deceptive Experiences to Online Users Reduction (DETOUR) Act. Co-sponsored by U.S. Sens. Mark R. Warner (D-Va.) and Deb Fischer (R-Neb.), the act is aimed at any company with more than 100 million users. The legislation would make it illegal for “Web-scale” companies such as Facebook, Google, and Microsoft to:
• Design, modify, or manipulate a user interface with the purpose or substantial effect of obscuring, subverting, or impairing user autonomy, decision-making, or choice to obtain consent or user data.
• Subdivide or segment consumers of online services into groups for the purposes of behavioral or psychological experiments or studies, except with the informed consent of each user involved.
• Design, modify, or manipulate a user interface on a website or online service, or portion thereof, that is directed to an individual under the age of 13, with the purpose or substantial effect of cultivating compulsive usage, including video auto-play functions initiated without the consent of a user.
From Ads to AI, a Raft of Proposals
The DETOUR Act is only the latest in a series of proposals intended to rein in perceived abuses of data. Other U.S. efforts include:
For the People Act, sponsored by Rep. John Sarbanes (D-Md.), includes provisions that address digital deception. It applies existing advertising disclosure requirements to digital media.
Honest Ads Act, sponsored by Sen. Amy Klobuchar (D-Minn.), Sen. Mark Warner (D-Va.), and the late Sen. John McCain (R-Ariz.). It would require online platforms reaching more than 50 million monthly viewers to disclose who paid for an advertisement and targeting criteria used. It also aims to create a public database of political ads, and tighten restrictions to ensure that ads are not purchased by foreign entities.
Customer Online Notification for Stopping Edge-provider Network Transgressions (CONSENT) Act, sponsored by Sen. Edward Markey (D-Mass.) and Sen. Richard Blumenthal (D-Conn.). The law seeks to give users control over their data to better combat digital deception. It would do so by requiring the Federal Trade Commission (FTC) to put in place privacy protections for individuals.
Bot Disclosure Act sponsored by Sen. Dianne Feinstein (D-Calif.) seeks to increase transparency for bots by requiring the FTC to create regulations ensuring that social media companies require users to publicly disclose all bots. The bill also seeks to make it illegal for candidates and political parties to purchase or use bots to spread their messages.
Algorithmic Accountability Act, sponsored by Sens. Cory Booker (D-N.J.) and Ron Wyden (D-Ore.) and Rep. Yvette Clarke (D-N.Y.), seeks to eliminate discriminatory biases from machine learning-based systems. The bill would require companies to audit their algorithms for bias, including racial bias, and correct any that is found.
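The kind of algorithmic audit envisioned by the Algorithmic Accountability Act could, in its simplest form, measure whether a model's positive-outcome rate differs across demographic groups. The following is a minimal, hypothetical sketch (the function name, data, and tolerance threshold are illustrative, not anything the bill specifies):

```python
def demographic_parity_gap(predictions, groups):
    """Return the largest difference in positive-prediction rates
    between any two groups.

    predictions: list of 0/1 model outputs
    groups: parallel list of group labels
    """
    counts = {}  # group -> (total, positives)
    for pred, group in zip(predictions, groups):
        n, pos = counts.get(group, (0, 0))
        counts[group] = (n + 1, pos + pred)
    rates = [pos / n for n, pos in counts.values()]
    return max(rates) - min(rates)


# Illustrative data: group "a" gets a positive outcome 75% of the
# time, group "b" only 25% of the time.
preds = [1, 1, 1, 0, 1, 0, 0, 0]
grps = ["a", "a", "a", "a", "b", "b", "b", "b"]

gap = demographic_parity_gap(preds, grps)
print(f"parity gap: {gap:.2f}")  # 0.75 - 0.25 = 0.50
if gap > 0.2:  # illustrative tolerance, not a legal standard
    print("audit flag: disparity exceeds tolerance")
```

Real-world audits are more involved (they weigh multiple fairness metrics, confounders, and statistical significance), but even a check this simple makes disparities visible and correctable.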
Well-Meaning but Broad
Like most pieces of legislation, these efforts are well-meaning. But they also tend to be so broad that they are open to interpretation.
In its current form, for example, the proposed DETOUR legislation could easily be interpreted as preventing Web-scale companies from engaging in common A/B testing to determine whether most end users prefer one feature over another, notes Paul Bischoff, privacy advocate with Comparitech.com, a Web site that reviews technologies and services.
“The wording is very vague,” says Bischoff. “That’s always the problem with this kind of legislation.”
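The A/B testing Bischoff refers to is, at its core, a randomized comparison of two variants followed by a significance test. A minimal sketch, using a standard two-proportion z-test on hypothetical conversion numbers (all figures are made up for illustration):

```python
import math


def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for whether variants A and B convert at
    different rates. Returns (z statistic, p-value)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF (via erf).
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value


# Hypothetical experiment: variant A converted 120 of 1,000 users,
# variant B converted 150 of 1,000.
z, p = two_proportion_z(120, 1000, 150, 1000)
print(f"z = {z:.2f}, p = {p:.3f}")
if p < 0.05:
    print("difference is statistically significant")
```

Under a literal reading of the bill's first prohibition, even routine experiments like this could require informed consent from every user in the test, which is exactly the vagueness Bischoff flags.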
Even so, the DETOUR Act and other proposals around the world are gaining support. One big reason is that governments are beginning to fully appreciate how firms such as Cambridge Analytica were reportedly able to unduly influence elections in the U.S. and referendums in the United Kingdom, such as the Brexit vote.
Those concerns are now being extended to how artificial intelligence (AI) models are constructed. Various lobbying organizations are starting to demand greater transparency into AI models to ensure organizations do not inadvertently inject bias into algorithms that disadvantage one group over another.
Such efforts remain a work in progress. The latest revisions to the Privacy Directive from the European Union (EU), for example, now explicitly allow analysis of metadata generated by transactions.
Companies of All Sizes Affected?
Data protection laws will inevitably impact any company trying to operationalize the data it collects. It likely won’t take long for politicians to realize that using vast amounts of data for competitive advantage has little to do with the size of the organization. So all enterprises need to start paying more attention to how legislation is being crafted now.
Courts are also likely to decide that any legislation enacted must be applied equally to all companies, rather than only to a handful whose user bases exceed a specific but arbitrary threshold. Once that happens, the rules will quickly be amended to apply to companies large and small.
In the meantime, some organizations will choose to maximize the value of the data they collect until the government says otherwise. Others will try to turn how they handle data into a trust issue that boosts the value of their brand. Whatever the path forward, how data is processed and analyzed is about to become one of the great political issues of our time.