Economic Questions:
How do we define “fairness” of data? Is it an effects standard or an intent standard?
How do we define “bias” of algorithms? What baseline counts as unbiased, and how do we measure divergence from that baseline? (One illustrative metric is sketched after this list.)
Why should the threshold for firms regulated by the Act be set at $50 million in revenue or 1 million consumers? Is there a reasonable explanation for these thresholds that relates to harms or dangers?
Are small firms less likely to need algorithmic accountability, or more likely?
Can we design experiments that help identify biases and, importantly, their causes?
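A purely illustrative sketch of how “divergence from a baseline” might be quantified in practice: the Python below compares selection rates across groups and reports a demographic parity gap and a disparate impact ratio. The data, function names, and metrics are hypothetical choices for illustration; the Act itself does not prescribe any particular measure. Holding non-group attributes constant or randomizing them before running such a check is one way an experiment could begin to separate correlation from cause.

# Illustrative sketch only: the Act prescribes no specific fairness metric.
# All names and data here are hypothetical.
from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (group, selected) pairs, where selected is 0 or 1."""
    totals = defaultdict(int)
    hits = defaultdict(int)
    for group, selected in decisions:
        totals[group] += 1
        hits[group] += selected
    return {g: hits[g] / totals[g] for g in totals}

def demographic_parity_gap(rates):
    """Largest difference in selection rate between any two groups."""
    return max(rates.values()) - min(rates.values())

def disparate_impact_ratio(rates):
    """Lowest selection rate divided by the highest; the informal
    "four-fifths rule" flags ratios below 0.8."""
    return min(rates.values()) / max(rates.values())

# Hypothetical hiring decisions: (group, 1 if offered an interview, else 0)
sample = [("A", 1), ("A", 1), ("A", 0), ("A", 1),
          ("B", 1), ("B", 0), ("B", 0), ("B", 0)]
rates = selection_rates(sample)
print("selection rates:", rates)                      # A: 0.75, B: 0.25
print("parity gap:", demographic_parity_gap(rates))   # 0.50
print("impact ratio:", disparate_impact_ratio(rates)) # ~0.33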
Summary:
Introduced by Sens. Wyden (D-OR) and Booker (D-NJ) and Rep. Clarke (D-NY). The bill requires companies to study their algorithms and work toward eliminating biased or discriminatory outcomes. It authorizes the FTC to create regulations requiring companies to assess the fairness of their data and systems. It would place three requirements on tech companies: assess their systems for fairness and bias, evaluate how their systems protect privacy and personal information, and correct any issues found during the assessment. These requirements apply to companies regulated by the FTC that earn more than $50 million per year, or that hold data on 1 million or more consumers or consumer devices, regardless of revenue.
Supporters argue that discriminatory algorithms are a civil rights issue that disproportionately affects vulnerable and minority populations: biased systems can limit access to jobs, housing, and more. The bill also aims to improve security and privacy protections for all consumers.
Objections center on who should be held responsible for biased algorithms and how much the required testing will cost.