The week of April 19, 2021, was an eventful one for practitioners following the evolution of potential artificial intelligence (AI) enforcement in the United States and abroad. Announcements from the U.S. Federal Trade Commission (FTC) and the European Commission (EC) answered some questions about which regulators intend to take a more active, prospective role in regulating and advising on AI use, and what those roles might look like. Perhaps more importantly for advisers and their clients, the announcements also offered insight into what organizations using AI can do now to mitigate enforcement concerns and prepare for future regulation.

News from the FTC

The week started with an April 19, 2021, post from the FTC titled “Aiming for truth, fairness, and equity in your company’s use of AI.” Note first, however, that the FTC had provided some initial guidance back on April 8, 2020, titled “Using Artificial Intelligence and Algorithms.” The FTC’s 2020 guidance noted that the FTC had “brought many cases alleging violations of the laws [the FTC] enforce[s] involving AI and automated decision-making, and [had] investigated numerous companies in this space.”

The FTC’s 2021 statement reiterated the 2020 guidance and focused more explicitly on the FTC’s enforcement authority under the following three laws:

  • Section 5 of the FTC Act and prohibitions against unfair or deceptive practices. Here, the FTC noted the sale or use of racially biased algorithms as an enforcement priority.
  • The Fair Credit Reporting Act and its application in instances when an algorithm might be used to deny people employment, housing, credit, insurance or other benefits.
  • The Equal Credit Opportunity Act and occasions when a company might use a biased algorithm that results in credit discrimination on the basis of race, color, religion, national origin, sex, marital status or age, or because a person receives public assistance.

The FTC’s 2021 statement emphasized the importance of ensuring data sets are representative of the population, testing algorithmic outcomes to avoid unfair discrimination on the basis of a protected class and other biases, and being transparent with business customers and consumers about what an algorithm can or cannot do. Tellingly, the FTC’s 2021 statement also included this admonition: “Hold yourself accountable – or be ready for the FTC to do it for you.” The FTC’s guidance was therefore summed up as a requirement – and enforcement warning – regarding transparency and algorithm performance accountability. The FTC ended its statement with a note that practices associated with AI should be “grounded in established FTC consumer protection principles,” and that the FTC’s “recommendations for transparency and independence can help [companies] do just that.”
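One common way practitioners operationalize the FTC’s call to “test algorithmic outcomes” is the disparate impact (or “four-fifths”) ratio drawn from employment-selection practice. The sketch below is illustrative only: the 0.8 threshold is the customary rule of thumb, not an FTC-mandated value, and the sample outcome data is hypothetical.

```python
# Minimal sketch: disparate impact ("four-fifths") ratio, one common way to
# test whether an algorithm's outcomes disfavor a protected class.
# The 0.8 threshold and sample data are illustrative assumptions, not
# values mandated by the FTC.

def selection_rate(outcomes):
    """Fraction of favorable (True) outcomes in a group."""
    return sum(outcomes) / len(outcomes)

def disparate_impact_ratio(protected_outcomes, reference_outcomes):
    """Ratio of the protected group's selection rate to the reference group's."""
    return selection_rate(protected_outcomes) / selection_rate(reference_outcomes)

# Hypothetical approval decisions (True = approved) for two groups.
protected = [True, False, True, False, False, False, True, False]  # 3/8 approved
reference = [True, True, True, False, True, False, True, True]     # 6/8 approved

ratio = disparate_impact_ratio(protected, reference)
print(f"Disparate impact ratio: {ratio:.2f}")  # 0.375 / 0.75 = 0.50
if ratio < 0.8:  # the customary "four-fifths" rule of thumb
    print("Potential adverse impact -- investigate before deployment.")
```

A ratio well below 0.8, as in this hypothetical, is the kind of result the FTC’s accountability admonition suggests a company should catch, document and remediate before the FTC does it for them.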

Announcements from the EC

On April 21, 2021, the EC published three new, related documents: a Proposal for a Regulation on a European approach for Artificial Intelligence, an updated Coordinated Plan with Member States and a Proposal for a Regulation on machinery products (collectively, the AI Regulation).

The EC’s proposed AI Regulation is expansive, setting out AI standards whose reach would radiate out from the European Economic Area (EEA) and whose implementation would be overseen by a new European Artificial Intelligence Board. The proposal bans certain AI practices that create an unacceptable risk; outlines strict safeguards for “high-risk” AI systems (listed in Annex III to the proposal); places lesser restrictions on, and creates voluntary “codes of conduct” for, “limited-risk” AI systems; and includes weighty fines for violations (for the most serious noncompliance, up to 30 million euros or 6 percent of worldwide annual turnover, whichever is greater).

Like the EU General Data Protection Regulation (GDPR) before it, the proposed AI Regulation contemplates extraterritoriality, stating that “to prevent the circumvention of this Regulation and to ensure an effective protection of natural persons located in the Union, this Regulation should also apply to providers and users of AI systems that are established in a third country, to the extent the output produced by those systems is used in the Union” (Reg. 10). This is, of course, concerning for AI system providers based outside the EEA, who would ostensibly be subject to the proposed AI Regulation’s requirements if their AI systems are available in the EU. Similarly, and perhaps more significantly, the law would apply to both providers and users of AI systems where the “output” of an AI system is used in the EU, potentially affecting a significant number of additional organizations that would otherwise have no commercial presence in Europe.

Instead of bright-line rules, the EC put forward a proportional and risk-based or “balanced” approach. Specifically, the EC proposed “a robust and flexible legal framework,” one that “is comprehensive and future-proof in its fundamental regulatory choices” while still putting “in place a proportionate regulatory system centred on a well-defined risk-based regulatory approach that does not create unnecessary restrictions to trade, whereby legal intervention is tailored to those concrete situations where there is a justified cause for concern or where such concern can reasonably be anticipated in the near future.” However, the following AI practices are explicitly prohibited in the draft AI Regulation:

  • AI systems that deploy subliminal techniques beyond a person’s consciousness.
  • AI systems that exploit any of the vulnerabilities of a specific group of persons due to their age or physical or mental disability.
  • The use of “social scoring” AI systems – which evaluate or classify the trustworthiness of natural persons based on their social behavior in multiple contexts or known or predicted personal or personality characteristics – by public authorities.
  • The use of “real-time” remote biometric identification systems in publicly accessible spaces for the purposes of law enforcement unless necessary for certain narrow exceptions, including finding victims of kidnapping and responding to terrorist attacks.

In line with the EC’s ongoing focus on the EU’s digital strategy, data is at the heart of the AI Regulation, and data governance – also the subject of a recent draft EU regulation – forms an integral part of the obligations intended to apply to providers of high-risk AI systems. The AI Regulation requires providers to apply a range of data governance practices to the data sets used in the training, validation and testing of machine learning and similar technologies, including identifying potential biases, checking for inaccuracies and assessing the suitability of the data.

Enforcement of the proposed AI Regulation would be accomplished through a governance system at the Member State level, building on existing structures, and a cooperation mechanism at the EU level through the establishment of a European Artificial Intelligence Board. The EC also asserted that the proposed rules complement “the General Data Protection Regulation (Regulation (EU) 2016/679) and the Law Enforcement Directive (Directive (EU) 2016/680) with a set of harmonised rules applicable to the design, development and use of certain high-risk AI systems and restrictions on certain uses of remote biometric identification systems.” The EC would prefer a regulatory framework for high-risk AI systems only, in which such systems would be required to meet “certain mandatory requirements” (Reg. 27), with the possibility that providers of non-high-risk AI systems could otherwise follow an approved code of conduct. The high-risk AI systems identified in the AI Regulation cover a broad range of activities, including AI systems that are:

  • Associated with the “administration of justice and democratic processes” (Reg. 40).
  • Directing or supporting law enforcement actions (Reg. 38).
  • Providing or prohibiting access to private or public services (Reg. 37).
  • Used in migration, asylum and border control management (Reg. 39).
  • Associated with safety components in the management and operation of road traffic and other critical infrastructure (Reg. 34).
  • Employed in education or vocational training, notably for determining access or assigning persons to educational and vocational training institutions (Reg. 35).
  • Used in employment, employee management and access to self-employment, notably for the recruitment and selection of persons, for making decisions on promotion and termination and for task allocation, monitoring or evaluation of persons in work-related contractual relationships (Reg. 36).
  • Used for “real-time” and “post” remote biometric identification (Reg. 33).
  • Involving specific products, such as machinery, toys, elevators, radio equipment, recreational vehicles, cableway installations, appliances burning gaseous fuels, medical devices, in vitro diagnostic medical devices, pressure equipment, and equipment or protective systems intended for use in potentially explosive atmospheres (Reg. 30).

The proposed AI Regulation’s mandatory framework for high-risk AI systems covers data, documentation and traceability; transparency and provision of information; human oversight; and robustness and accuracy of data sets.

The proposed AI Regulation also includes a proposed implementation plan for monitoring, evaluating and reporting use arrangements – specifically, the EC would establish a system for registering stand-alone, high-risk AI systems in a public EU-wide database. This registration would enable competent authorities, users and other interested people to verify whether a given high-risk AI system complies with the requirements laid down in the proposal and to exercise enhanced oversight of those AI systems posing high risks to fundamental rights. As part of the registration process, high-risk AI system providers will need to include meaningful information about their AI systems and conformity assessments carried out on those systems.

AI system providers will be obliged to inform national competent authorities about serious incidents or malfunctions as soon as they become aware of them, and about any recalls or withdrawals of AI systems from the market. National competent authorities will then investigate, collect all the necessary information and regularly transmit it to the EC with adequate metadata. The EC will use this information to provide comprehensive analyses of the overall AI market.

Finally, the proposed AI Regulation puts forward record-keeping requirements, noting that having “information on how high-risk AI systems have been developed and how they perform throughout their lifecycle is essential to verify compliance with the requirements under this Regulation” (Reg. 46) and that such “high-risk AI systems should therefore be accompanied by relevant documentation and instructions of use and include concise and clear information, including in relation to possible risks to fundamental rights and discrimination, where appropriate” (Reg. 47).

Next Steps

The EC is taking public comments on the proposed AI Regulation until June 22, 2021. Before the AI Regulation can be finalized, it must then make its way through the sometimes protracted EU legislative process, including adoption by the European Parliament. We also expect lobbying efforts regarding the lists of prohibited AI practices and high-risk AI systems, and their associated requirements, as the proposal moves forward. We will closely monitor these developments, along with any actions from the FTC, and provide a deeper dive into key elements of the EC’s proposed AI Regulation on this blog.