New York State Adds Health Care Geofencing Prohibition, Taking a More Measured Approach Than Washington’s Similar Ban


As part of the health budget bill signed by Governor Hochul in early May, New York has amended its General Business Law, introducing a prohibition on geofencing of health care facilities that goes into effect on July 2, 2023 – just three weeks before a similar ban in Washington state. This addition to the General Business Law makes it illegal for anyone other than the health care facility itself to establish or use a geofence or similar virtual boundary around any health care facility to (1) deliver digital advertisements to, (2) create profiles of, or (3) make any inferences about the health status, medical condition, or medical treatment of any person at or within the health care facility.

For purposes of the amendment, a health care facility is any public or private entity that provides medical care or related services. This definition explicitly includes hospitals, nursing homes, mental health care facilities, and services provided to individuals with developmental disabilities, but many other types of medical care services may be included in the scope of the definition. The definition also encompasses the building in which any health care facility is located.
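For readers less familiar with the underlying technology, a geofence is typically just a point-in-boundary test run against device location data. Below is a minimal Python sketch of a circular geofence check, offered purely as illustration: the coordinates, radius and function names are our own, and nothing in the statute prescribes or depends on any particular implementation.

```python
import math

# Minimal sketch of a circular geofence check. The coordinates and radius
# below are hypothetical; real ad-tech platforms apply similar
# point-in-boundary tests (circles or polygons) to device location data.

def haversine_meters(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in meters."""
    r = 6_371_000  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def inside_geofence(device_lat, device_lon, fence_lat, fence_lon, radius_m):
    """True if a device's reported location falls within the virtual boundary."""
    return haversine_meters(device_lat, device_lon, fence_lat, fence_lon) <= radius_m

# Hypothetical example: a 100-meter fence centered on a facility.
print(inside_geofence(40.7130, -74.0060, 40.7128, -74.0060, 100))  # True
```

The amendment is indifferent to how the boundary is drawn; what matters is whether anyone other than the facility itself uses such a boundary for the three prohibited purposes.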


Deeper Dive into the Data

Every year, BakerHostetler collects and analyzes various metrics about the incident response matters we handle. In 2022, we handled over 1,160 incidents. The most striking trends we saw across those incidents were an overall increase in the average ransom demands and payments, as well as an increase in recovery times in certain sectors. We also saw a decrease in all the metrics we track related to fraudulent fund transfers.


FTC Issues Warning on Use of Biometric Information

On May 18, 2023, the Federal Trade Commission (FTC) issued a Policy Statement on Biometric Information and Section 5 of the FTC Act (Policy Statement). Section 5 of the FTC Act, 15 U.S.C. § 45, prohibits “unfair or deceptive practices in or affecting commerce” and empowers the FTC to bring civil actions for penalties of not more than $10,000 per violation and issue cease and desist orders.

While the Policy Statement does not confer any rights on any person and does not operate to bind the FTC or the public, companies should be mindful that the FTC is now focusing on the use of “biometric information” and the technology used in relation to such biometric information.


2023 DSIR Report Deeper Dive: Privacy at the FTC – What Are the Hot Topics Almost Two Years Into the Khan Administration?

It has been almost two years since Lina Khan was designated chair of the Federal Trade Commission (FTC), and they have been eventful years. One of the many questions being asked is “Where do things stand at the FTC on privacy?” Congress has yet to pass comprehensive privacy legislation, and the FTC continues to mostly use its decades-old statute to bring privacy cases that are not always well suited to such an antiquated law.

This challenge became readily apparent in a recent FTC case brought against Kochava, a data broker that allegedly, among other things, sold to or shared with third parties precise geolocation data that, when associated with unique persistent identifiers, could reveal consumers’ visits to sensitive locations, such as reproductive health facilities or houses of worship. The FTC had alleged that selling or sharing this data was an unfair practice. Of course, to show that a practice is unfair, the agency must show that it 1) causes or is likely to cause substantial injury to consumers; 2) is not reasonably avoidable by consumers themselves; and 3) is not outweighed by countervailing benefits to consumers or to competition. In addition to the statutory language, there is a 1980 FTC Policy Statement on Unfairness that provides a detailed analysis of each of the three prongs of the unfairness test and how the agency interprets the legal standard. Establishing that something is unfair is not simple; it requires extensive proof and a detailed factual analysis.

And just last week, a federal district court dismissed the FTC’s complaint in Kochava, finding that although the concerns about the practices at issue might be “legitimate,” the FTC had not pled sufficient facts in the complaint to demonstrate that the practices cause or are likely to cause substantial injury to consumers. The FTC also argued that the practices create a significant risk of concrete harm, and that theory, too, was found to be inadequately pled. Among the shortcomings cited by the court is that the data at issue merely constitute information from which sensitive “inferences” can be made, which “lessen[s] the severity of the alleged privacy injury.” But probably more significant to the decision is the finding that the FTC “claims only that third parties could tie the data back to device users; not that they have done so or are likely to do so.”

It remains to be seen whether the FTC will take the court up on its invitation to amend the Kochava complaint. But the case serves as an important reminder that despite all the statements made by commissioners about the broad range of privacy harms that they want to challenge, the very real limits to the agency’s statutory authority can make a lot of this a steep uphill battle, particularly when scrutinized closely by federal court judges. It is, however, unlikely that one decision like this will stop the agency from bringing comparable unfairness claims going forward.

That said, there have been some interesting settlements worth noting. Earlier this year we saw two health-related privacy cases that sent a very strong message that the FTC is deeply concerned about the privacy of health information, particularly if that information is shared with third parties for advertising purposes. These cases relied on both the concept of deception and the concept of unfairness, and the agency has made it very clear that health privacy remains a top priority.

With respect to the Children’s Online Privacy Protection Act (COPPA), Congress continues to consider modifications and expansions to the statute, and the FTC remains eerily silent on the rulemaking that has been pending since 2019. In its recent case against Epic, the FTC obtained its highest COPPA settlement thus far, at $275 million. The case also included an interesting unfairness count in which the FTC alleged that default settings harmed kids and teens by allowing personal interactions with strangers that sometimes led to threats, bullying and sexual harassment.

The potential for harm to kids and teens will continue to be a major issue at the agency, and Commissioner Alvaro Bedoya has been speaking frequently on the subject. In a recent speech delivered at the National Academies of Sciences, Engineering & Medicine, Commissioner Bedoya emphasized the need to make sure that legal tools address how social media and other technology can affect the mental health of teens. Although his concerns go beyond traditional privacy issues, how data is collected and used with respect to teens is certainly a significant part of the discussion.

Interestingly, Commissioner Bedoya has emphasized that the agency needs to explore hiring psychologists to assist agency staff with better understanding the potential harms to children and teens that may be caused by certain online practices or tools. Notably, in recent testimony to Congress, the FTC emphasized the risks that online services can pose to children and teens and stated that:

The FTC is considering steps to deepen this work, including retaining psychologists and youth development experts to allow the agency to analyze conduct, assess harms and remedies, and pursue studies with an interdisciplinary approach, including conduct affecting children.

Teen and child privacy is also one of the many issues being examined in the FTC’s privacy rulemaking, more formally known as the Commercial Surveillance and Data Security rulemaking. The comment period has closed for the first stage of the rulemaking, and it remains to be seen whether or when the agency will come forward with any proposed rules. This rulemaking will continue for quite some time, but it remains an important FTC issue to closely follow.

And finally, we have seen the FTC make a wide variety of statements about artificial intelligence (AI), although thus far the statements have focused more on broad FTC Act compliance issues and less on privacy specifically. In a recent statement issued by the FTC along with several other agencies, the agencies emphasized that existing laws do apply to the use of AI and reiterated their concern that AI be used and developed in ways that do not have discriminatory impacts or otherwise violate federal law. In a series of blog posts, the FTC provided a deeper dive into some of its AI-related concerns. Although we have yet to see FTC enforcement involving the new wave of AI tools that are available to consumers, it is quite clear that such tools are being closely scrutinized by agency staff.

Privacy vs. Perks: Comparing the CCPA’s Notice of Financial Incentive and the Colorado Privacy Act’s Bona Fide Loyalty Program Rules


What is loyalty? Ask a mob boss, a Los Angeles Lakers fan and a Labrador retriever, and you might get three different answers. Ask a retailer, and they’ll likely tell you that a loyalty program can be a great tool for rewarding their best customers and helping drive further purchases. But with these benefits come privacy compliance costs, including under some unique provisions of the California Consumer Privacy Act (CCPA) and the Colorado Privacy Act (CPA).

While both the CCPA and the CPA regulate personal information used for loyalty programs, they go about it in somewhat different ways. The CCPA also appears to have a broader scope in terms of the types of activities it regulates. As this is an area of active enforcement under the CCPA and a likely focus for enforcement under the CPA, understanding when these rules apply and what they require is critical to compliance.

A. Definitions and Comparative Scope

1. Colorado

The CPA Regulations define a “Bona Fide Loyalty Program” as “a loyalty, rewards, premium feature, discount, or club card program established for the genuine purpose of providing Bona Fide Loyalty Program Benefits to Consumers that voluntarily participate in that program, such that the primary purpose of Processing Personal Data through the program is solely to provide Bona Fide Loyalty Program Benefits to Consumers that voluntarily participate.” A “Bona Fide Loyalty Program Benefit” is “an offer of superior price, rate, level, quality, or selection of goods or services provided to a Consumer through a Bona Fide Loyalty Program. Such benefits may be provided directly by a Controller or through a Bona Fide Loyalty Program Partner.” A “Bona Fide Loyalty Program Partner” is a Third Party (as defined by the CPA) that provides Bona Fide Loyalty Program Benefits, either alone or jointly with the business. Putting these terms together, a “Bona Fide Loyalty Program” has four key components:

  1. a loyalty, rewards, premium feature, discount or club card program
  2. established for the purpose of providing a superior price, rate, level, quality or selection of goods or services
  3. to consumers who voluntarily participate
  4. such that the primary purpose for processing the consumer’s personal information is solely to provide loyalty program benefits.

This definition appears targeted at traditional loyalty or rewards programs in which consumers sign up to receive points, discounts or other benefits on an ongoing basis when they make qualifying purchases. Given that a program must meet each of the four criteria described above to qualify as a Bona Fide Loyalty Program, the definition does not appear intended to sweep in all circumstances in which a consumer provides a business with personal information in order to receive a discount or other type of benefit. For example, a general marketing distribution list that may involve sending coupons or information about sales to consumers who provide their email addresses may not meet all the criteria necessary for the offering to be treated as a Bona Fide Loyalty Program. This interpretation is further supported by the examples of Bona Fide Loyalty Programs within the Regulations, which concern a hypothetical grocery store loyalty program, a hotel chain points program, and a retailer offering discounts based on consumers’ purchase histories.

2. California

By contrast, the CCPA Regulations define a “Financial Incentive” as a “program, benefit, or other offering, including payments to consumers, for the collection, retention, or sharing of personal information. Price or service differences are types of financial incentives.” The California attorney general’s enforcement of Financial Incentives has focused on loyalty programs, and the CCPA includes compliant “loyalty, rewards, premium features, discounts, or club card programs” as examples of offerings that can be provided without violating the CCPA’s nondiscrimination provision. A traditional loyalty program, however, is not necessarily the only type of Financial Incentive under the CCPA. According to the attorney general, “[u]nder the CCPA, businesses that offer financial incentives, such as discounts, free items, or other rewards, in exchange for personal information must provide consumers with a notice of financial incentive.” The potential applicability to discounts or free items that require the collection, retention or sharing of personal information implies that the scope of what counts as a Financial Incentive under the CCPA may be broader than that of a Bona Fide Loyalty Program under the CPA.

B. Key Requirements for Compliance

Both the CCPA and the CPA affirmatively authorize businesses to offer Financial Incentives and loyalty programs so long as they satisfy certain rules. However, the CCPA and the CPA Regulations differ in terms of what they require of businesses that offer a Bona Fide Loyalty Program or a Financial Incentive.

1. Notice

The CCPA and the CPA require specific notices covering Financial Incentives and Bona Fide Loyalty Programs, respectively. Although the motivation behind these rules appears to be a shared goal of transparency, the specific requirements differ significantly. The information required to be included in a California Notice of Financial Incentive and a Colorado Bona Fide Loyalty Program disclosure is detailed in a chart here.

As can be seen in the chart, the required disclosures do not directly overlap. Broadly, the CCPA Regulations are far more concerned with the economics of a Financial Incentive (and even include specific rules governing how to calculate the value of personal information), whereas the CPA Regulations focus more on disclosures of personal information to third parties. Due to the differences in disclosure requirements, businesses subject to both laws will need to decide whether to create separate loyalty notices for California and Colorado or combine them in a single, expanded notice directed to consumers in both states. Notably, none of the other state comprehensive privacy laws passed to date include specific disclosure requirements focused on loyalty programs or Financial Incentives.

2. Consent

The CCPA provides that a business “may enter a consumer into a financial incentive program only if the consumer gives the business prior opt-in consent . . . that clearly describes the material terms of the financial incentive program, and which may be revoked by the consumer at any time.” By contrast, the CPA does not include additional consent requirements for enrollment in a Bona Fide Loyalty Program, though by definition, participation in a Bona Fide Loyalty Program must be “voluntary.” A business must obtain consent from Colorado consumers before processing their sensitive data in connection with a Bona Fide Loyalty Program or using personal information for a secondary purpose; however, both of these circumstances involve general rules that also apply outside the context of loyalty programs.

C. Impact on Data Subject Rights

The CPA expressly envisions situations in which a consumer’s decision to exercise a data subject right – in particular, a request to delete personal information, a request to opt out of sales of personal information or a refusal to consent to processing sensitive data – makes it impossible for the business to provide all the benefits associated with the Bona Fide Loyalty Program. Although each right is addressed by a separate rule, as a general principle, when a consumer exercises their data subject rights, the business may stop providing the impacted loyalty program benefits but must continue to provide any benefits that do not require the personal information affected by the consumer’s request. This necessitates a nuanced understanding of which data elements are needed for which types of benefits: it would not be permissible to terminate a consumer’s loyalty account when they exercise their privacy rights unless all the data involved is impacted by their request.

Moreover, the CPA imposes certain timing requirements on businesses when a consumer exercises a right that affects membership in a Bona Fide Loyalty Program. The business must inform the consumer at least 24 hours before discontinuing the loyalty program benefit or membership and must also provide a reference or link to the Bona Fide Loyalty Program Disclosure.

The CCPA is less specific about the impact of data subject requests on loyalty program benefits or other Financial Incentives, but a consumer’s request to delete data needed to provide loyalty program benefits would likely fall within the exception under Cal. Civ. Code § 1798.105(d)(1) for information needed to “provide a good or service requested by the consumer, or reasonably anticipated by the consumer within the context of a business’s ongoing business relationship with the consumer, or otherwise perform a contract between the business and the consumer.” Indeed, the CCPA Regulations include the following example:

A clothing business offers a loyalty program whereby customers receive a $5-off coupon by email after spending $100 with the business. A consumer submits a request to delete all personal information the business has collected about them but also informs the business that they want to continue to participate in the loyalty program. The business may deny their request to delete with regard to their email address and the amount the consumer has spent with the business because that information is necessary for the business to provide the loyalty program requested by the consumer and is reasonably anticipated within the context of the business’s ongoing relationship with them pursuant to Civil Code section 1798.105, subdivision (d)(1).

This outcome (in which the business retains the personal information and the consumer remains enrolled in the program) appears to be the opposite of that intended by the Colorado Regulations (which call for ending the consumer’s enrollment in the program, subject to 24 hours’ notice).

The CCPA Regulations also include this example:

A grocery store offers a loyalty program whereby consumers receive coupons and special discounts when they provide their phone numbers. A consumer submits a request to opt-out of the sale/sharing of their personal information. The retailer complies with their request but no longer allows the consumer to participate in the loyalty program. This practice is discriminatory unless the grocery store can demonstrate that the value of the coupons and special discounts are reasonably related to the value of the consumer’s data to the business.

With these examples in mind, businesses should carefully assess the impact of an individual privacy rights request on a loyalty program or other Financial Incentive in which the consumer participates. This requires examination of whether specific types of data or processing are necessary to continue providing the benefits as well as the CCPA’s requirement that the value of the benefits be reasonably related to the value of the consumer’s personal information. Moreover, the outcome in terms of deleting or retaining relevant data may differ by state.

D. Enforcement Risk

The California attorney general has actively enforced the CCPA’s Financial Incentive rules against many businesses. Although the updated Regulations promulgated by the California Privacy Protection Agency (CPPA) under the California Privacy Rights Act (CPRA) made modest changes to the Financial Incentive rules, it is likely the CPPA and the California attorney general would consider the revised rules fully enforceable, even in advance of the July 1 enforcement date provided for new rules under the CPRA. Alternatively, either California enforcer could bring an action based on an alleged violation of the prior version of the rules. Enforcement under the CPA remains more of an unknown, though given the detail included within the CPA Regulations, it appears Bona Fide Loyalty Programs are an area of interest for the Colorado attorney general. In the event of an enforcement action, the CPA’s right to cure might allow a business to resolve an alleged violation, depending on the nature of the issue.

E. Conclusion

In some ways, the CPA appears to take a more measured approach to Bona Fide Loyalty Programs than do the CCPA’s rules on Financial Incentives. However, both states impose unique requirements that businesses must follow for in-scope activities, raising the overall compliance burden for businesses subject to both laws. Businesses that operate programs for consumers that involve the collection of personal information in exchange for a benefit should assess the applicability of these specialized rules and prepare appropriate notices and procedures to comply with them.

Welcome to our 9th annual Data Security Incident Response Report!


We are now three years post-pandemic, and while a lot has changed, some things remain the same. Last year, I talked about resilience—the uncertainties of the pandemic were still present, the war in Ukraine had just begun, and businesses were addressing new issues caused by technology evolution and work-pattern changes. Resilience in 2022 meant continued effective implementation of security measures, evolving privacy compliance programs beyond just addressing the biggest compliance risk areas, and responding to continued efforts by litigators to exploit various privacy and privacy-adjacent statutes for financial gain.


An Introduction to Washington’s My Health My Data Act

On April 17, the Washington legislature passed the My Health My Data Act (MHMD Act), which includes some of the most restrictive provisions in any U.S. state privacy law. The MHMD Act is the result of Washington state’s multi-year effort to pass comprehensive privacy legislation fueled by new fears about access to reproductive health care services following the U.S. Supreme Court’s decision overturning Roe v. Wade in Dobbs v. Jackson Women’s Health Organization. The act will apply to many organizations that may not immediately realize they are processing data within the law’s scope. And those processing covered data will likely need to adjust existing programs designed to comply with other U.S. state privacy laws to address the act’s unique requirements. This post briefly describes the act’s history and stated purpose. We will follow it with a series of posts addressing the law’s nuances and practical takeaways on how the law will affect organizations’ privacy compliance programs.



Washington State’s New Shield Law, Part of Washington’s ‘Choice-Defending Agenda,’ Modifies Obligations Related to Other States’ Criminal and Civil Process

Entities that receive criminal process (such as subpoenas or search warrants) in Washington state should review Washington’s new Shield Law, which the legislature just passed as part of its post-Dobbs “choice-defending agenda.” The law allows those in Washington who receive out-of-state legal process to ignore the process in certain circumstances. Providers of “electronic communication services” incorporated or having their principal place of business in Washington should take particular note, as the law prohibits them from complying with certain civil and criminal legal process issued in other states. The Shield Law is an important companion to Washington’s nearly enacted privacy law, the My Health, My Data Act, which prohibits most sharing of covered data by regulated entities with any third party without affirmative consent (including sharing with law enforcement, unless the data relates to a criminal violation recognized under Washington state or federal law).


The Potential Risks of ChatGPT and Other Generative AI

“Shall we play a game?” Those innocuous words “spoken” by Matthew Broderick’s computer in John Badham’s sci-fi techno-thriller WarGames stunned audiences at the time. A computer that could “talk” and “think” and engage in conversation?!? This was the height of science fiction. Well, with the recent release of generative artificial intelligence (AI) tools, specifically in the form of ChatGPT and other predictive natural language processing (NLP) algorithms, science fiction has once again become reality.

Companies from Microsoft to Google and Instacart to Kayak have begun to incorporate and build upon this technology, originally developed by OpenAI. These tools can be incredibly beneficial to businesses, but they also carry risks.

Read the full AD-ttorneys blog post.

What About the Robots That Are Already Here? New York City to Begin Enforcement of Artificial Intelligence Applications Related to Applicants and Employees Through the NYC Automated Employment Decision Tools Law on July 5, 2023


On January 1, 2023, New York City became the first jurisdiction in the United States to regulate employers’ use of automated employment decision tools (AEDTs) in the hiring and promotion process. Local Law 144 (the NYC AEDT Law), which requires anyone who uses (or wants to use) an AEDT to first conduct a bias audit and notify job candidates, is set to be enforced starting July 5, 2023. The NYC Department of Consumer and Worker Protection (DCWP) adopted its final rules on April 6, 2023, following two previous draft proposals, the first in September 2022 and the second in December 2022; a significant volume of comments (including from employers, employment agencies, law firms, AEDT developers and advocacy organizations); and two public hearings.

As companies increasingly rely on automated tools to assist in the employment process and sort through large volumes of job applicants, New York City employers will now be forced to determine whether those tools constitute an AEDT and, if so, secure the required bias audit from their AEDT vendors or conduct one themselves.

Key Definitions

The NYC AEDT Law defines an AEDT as:

“any computational process, derived from machine learning, statistical modeling, data analytics, or artificial intelligence, that issues simplified output, including a score, classification, or recommendation, that is used to substantially assist or replace discretionary decision making for making employment decisions that impact natural persons.”

The final rules clarify that the phrase “to substantially assist or replace discretionary decision-making” means (i) to rely solely on a simplified output (score, tag, classification, ranking, etc.) with no other factors considered; (ii) to use a simplified output as one of a set of criteria where the simplified output is weighted more than any other criterion in the set; or (iii) to use a simplified output to overrule conclusions derived from other factors, including human decision-making.
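To illustrate how these three alternative conditions interact, here is a hypothetical Python sketch that encodes them as a simple boolean test. The parameter names are ours, and the snippet is a reading aid for the rule’s structure, not a tool for making legal determinations.

```python
# Hypothetical encoding of the final rules' three-part test for whether a
# simplified output "substantially assists or replaces" discretionary
# decision-making. Parameter names are illustrative, not drawn from the rules.

def substantially_assists_or_replaces(
    relies_solely_on_output: bool,          # (i) output is the only factor considered
    output_weight: float,                   # weight given to the simplified output
    max_other_criterion_weight: float,      # largest weight given to any other criterion
    output_overrules_other_factors: bool,   # (iii) output overrules other conclusions
) -> bool:
    """True if any one of the three alternative conditions is met."""
    return (
        relies_solely_on_output
        or output_weight > max_other_criterion_weight  # (ii) weighted more than any other
        or output_overrules_other_factors
    )

# Example: the output is one of several criteria but carries the largest weight,
# so the use falls within the definition and a bias audit would be required.
print(substantially_assists_or_replaces(False, 0.5, 0.3, False))  # True
```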

The final rules further clarify the definition of “machine learning, statistical modeling, data analytics, or artificial intelligence” as:

“a group of mathematical, computer-based techniques:

  • that generate a prediction, meaning an expected outcome for an observation, such as an assessment of a candidate’s fit or likelihood of success, or that generate a classification, meaning an assignment of an observation to a group, such as categorizations based on skill sets or aptitude; and
  • for which a computer at least in part identified the inputs, the relative importance placed on those inputs, and, if applicable, other parameters for the models in order to improve the accuracy of the prediction or classification.”

The DCWP indicates that these changes both serve to focus the definition of AEDT and expand its scope. However, the example included in the final rules, an AEDT used to screen resumes and schedule interviews for a job posting, highlights the breadth of the law. The DCWP notes that for such use, a bias audit is still necessary even though the employer is not using the AEDT to make the final hiring decision but only to screen at an early point in the application process.

Bias Audit Requirements

The NYC AEDT Law prohibits employers and employment agencies from using an AEDT unless:

(1) The tool has been subject to a bias audit within one year of the use of the tool

(2) A summary of the bias audit has been made publicly available on the employer’s or employment agency’s website

Such bias audits must calculate the selection rate for each race/ethnicity and sex category that is required to be reported to the U.S. Equal Employment Opportunity Commission (EEOC) pursuant to the EEO-1 Component 1 report and compare each category’s selection rate with that of the most selected category to determine an impact ratio. The final rules provide explicit requirements depending on how the tool operates—selecting candidates or employees to move forward in the hiring process or classifying them into groups versus scoring candidates or employees.
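To make the selection-rate and impact-ratio arithmetic concrete, the sketch below computes both figures for a handful of hypothetical categories. The labels and counts are invented for illustration; the DCWP’s final rules, not this snippet, govern how a compliant bias audit must actually be structured.

```python
# Illustrative selection-rate and impact-ratio arithmetic for a tool that
# selects candidates to move forward. Categories and counts are hypothetical.

# category -> (number selected to move forward, number of applicants assessed)
outcomes = {
    "Hispanic or Latino / Male": (25, 100),
    "White / Female": (60, 150),
    "Black or African American / Female": (30, 120),
}

# Selection rate: fraction of assessed applicants in a category who were selected.
selection_rates = {
    category: selected / assessed
    for category, (selected, assessed) in outcomes.items()
}

# Impact ratio: each category's selection rate divided by the rate of the most
# selected category, which by construction has an impact ratio of 1.0.
highest_rate = max(selection_rates.values())
for category, rate in selection_rates.items():
    print(f"{category}: selection rate {rate:.2f}, impact ratio {rate / highest_rate:.2f}")
```

Lower ratios flag categories selected at disproportionately lower rates than the most selected category.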

Bias audits must rely on historical data or on test data if the available historical data is insufficient to conduct a statistically significant bias audit. If more than a year has passed since the most recent bias audit of the AEDT, the employer or employment agency may not continue the use of the tool.

While the artificial intelligence (AI) auditing industry is still nascent, with industry standards and best practices in development, New York City has indicated that the onus is on employers to provide proof of the outcome-focused bias audit but has noted that multiple employers may use the same audit, provided the audit includes the employer’s own historical data.

Several changes regarding bias audit requirements ultimately present in the final rules were oriented toward the following:

  • Clarifying that an “independent auditor” may not be employed by, or have a financial interest in, an employer/employment agency that seeks to use or continue to use an AEDT or a vendor that developed/distributed the AEDT
  • Clarifying that the required “impact ratio” must be calculated separately to compare sex categories, race/ethnicity categories and intersectional categories
  • Adding a requirement that the bias audit indicate the number of individuals the AEDT assessed who are not included in the calculations because they fall within an unknown category, and requiring that number be included in the summary of results
  • Allowing an independent auditor to exclude a category that comprises less than 2 percent of the data being used for the bias audit from the calculations of impact
  • Clarifying that an employer or employment agency may rely on a bias audit of an AEDT that uses historical data of other employers or employment agencies only if it has provided historical data from its own use of the AEDT to the independent auditor conducting the bias audit or if it has never used the AEDT
  • Providing examples of when an employer or employment agency may rely on a bias audit conducted with historical data, test data, or historical data from other employers and employment agencies

Notice Requirements

The NYC AEDT Law also requires that candidates or employees who reside in the city be notified about the use of an AEDT in their assessment or evaluation for hiring or promotion, as well as the job qualifications and characteristics used by the AEDT. Such notice must be provided at least 10 business days before the tool is used and must allow the candidate or employee to request an alternative selection process or accommodation.

Notice can be provided in several ways. For candidates and employees, notice may be provided in a job posting or via U.S. mail or email. For candidates, notice may be provided on the careers or job section of the website; for employees, notice may be provided in a written policy or procedure. The employer or employment agency must also disclose on its website or make available to a candidate or employee within 30 days of receiving a written request (1) information about the type of data collected for the AEDT, (2) the source of the data collection and (3) the employer’s or employment agency’s data retention policy.

While the required notice must include instructions for how an individual can request an alternative selection process or reasonable accommodation under other laws if available, the final rules clarify that an employer or employment agency is not obligated to provide such an alternative selection process, at least under the NYC AEDT Law.

If an AEDT is in use on the law’s enforcement date, employers and employment agencies should be prepared to publish the required notice by June 20, 2023.

Penalties

Violations of the law will result in civil penalties of up to $500 for the first violation and each additional violation occurring on the same day, and between $500 and $1,500 for each subsequent violation. Each day the AEDT is used in violation of the law constitutes a separate violation, and the failure to provide notice constitutes a separate violation.

What’s Next?

The NYC AEDT Law is just the first among several emerging laws and regulations governing automated decision tools in the hiring process. In 2021, the EEOC launched an agencywide Artificial Intelligence and Algorithmic Fairness Initiative to ensure that the use of AI, machine learning and other emerging technologies in hiring and employment decisions complies with federal civil rights laws enforced by the agency. As EEOC Chair Charlotte A. Burrows stated, “the EEOC is keenly aware that these tools may mask and perpetuate bias or create new discriminatory barriers to jobs. We must work to ensure that these new technologies do not become a high-tech pathway to discrimination.”

At the state level, New York and New Jersey have introduced similar bills. Notably, like the NYC AEDT Law, neither bill prohibits the use of these tools even if a bias audit shows a discriminatory effect. The 2022 New Jersey bill does not even require publication of the bias audit results. Rather, New Jersey’s bill A4909 prohibits the sale of AEDTs unless the tool (1) is subject to a bias audit, (2) includes an annual bias audit service at no additional cost and (3) is sold with a notice stating the tool is subject to the law. If passed, the law would further require employers to notify each candidate within 30 days that the tool was used in connection with the candidate’s application for employment and that it assessed the job qualifications or characteristics of the candidate. The bill proposes civil penalties similar to those of the NYC AEDT Law.

New York State’s 2023 bill A00567 imposes requirements similar to those of the NYC AEDT Law but calls the bias audit a “disparate impact analysis.” In addition to making a summary of the disparate impact analysis available on the employer’s or employment agency’s website, the bill requires that the summary be provided to the New York Department of Labor annually. The bill does not include civil penalties but instead permits enforcement by the state attorney general and the commissioner of labor.
