The AI regulatory toolbox: How governments can discover algorithmic harms

Governments around the world are implementing policies to regulate artificial intelligence (AI) and algorithmic systems more generally. While legislation is advancing, regulators should not wait for legislators to act. Instead, regulators should be actively learning about the algorithmic systems in their regulatory domain and evaluating those systems for compliance under existing statutory authority.

As oversight agencies learn about algorithmic systems—their societal impact, harms, and legal compliance—it is possible to broadly characterize an emerging AI regulatory toolbox for evaluating algorithmic systems, particularly those with greater risk of harm. This AI regulatory toolbox includes expanding transparency, performing algorithmic audits, developing AI sandboxes, leveraging the AI assurance industry, and learning from whistleblowers.

Regulators can also learn from independent academic research, which is especially helpful for understanding algorithms embedded in large online platforms. Some governments, including the EU through the Digital Services Act, are even requiring platforms to provide data access to independent researchers—this research is expected to inform regulatory investigations and enforcement actions. Regulators may turn to existing academic research first, even using it to prioritize which other information-gathering tools to employ.

These interventions have different strengths and weaknesses for governing different types of AI systems, and further, they require different internal expertise and statutory authorities. To better inform AI policymaking, regulators should be aware of these tools and their trade-offs.
