Iowa Becomes the Sixth U.S. State to Pass a Comprehensive Consumer Privacy Law


On March 29, 2023, Iowa became the sixth U.S. state to pass comprehensive consumer privacy legislation. The new law will go into effect on January 1, 2025. Iowa’s privacy law bears substantial similarity to the Virginia, Colorado, Connecticut and Utah privacy laws, which should facilitate compliance for businesses subject to those laws. Likewise, businesses familiar with the California Consumer Privacy Act (CCPA) will find overlap with the CCPA, though Iowa’s law is not as far-reaching and provides a more limited slate of consumer rights.    


Companies are subject to the Iowa privacy law if they do business in Iowa or target products or services to Iowa consumers, and meet one of the following criteria during a calendar year:

  1. Control or process personal data of at least 100,000 Iowa consumers.
  2. Control or process personal data of at least 25,000 Iowa consumers and derive over 50 percent of gross revenue from the sale of personal data.


Artificial Intelligence Competitiveness, Inclusion, and Innovation – the U.S. Chamber of Commerce Commission Considers AI Regulation, Competitiveness, and the Future of AI

On March 9, 2023, the U.S. Chamber of Commerce released both an Executive Summary and Full Report detailing the work done by the Chamber’s Commission on Artificial Intelligence Competitiveness, Inclusion, and Innovation (“AI Commission”) over the course of 2022-2023, calling for a “risk-based regulatory framework that will allow for [AI’s] responsible and ethical deployment.” The AI Commission traveled across the United States and to London to hear from expert witnesses, including industry experts, government leaders, company executives, academics, and researchers; gain a better understanding of technological change and the status of AI implementation; and gauge public opinion on AI regulation.

The Summary and Report both presented six key takeaways aligned with major themes:

  1. The development of AI and the introduction of AI-based systems are growing exponentially. Over the next 10 to 20 years, virtually every business and government agency will use AI. This will have a profound impact on society, the economy, and national security.
  2. Policy leaders must undertake initiatives to develop thoughtful laws and rules for the development of responsible AI and its ethical deployment.
  3. A failure to regulate AI will harm the economy, potentially diminish individual rights, and constrain the development and introduction of beneficial technologies.
  4. The United States, through its technological advantages, well-developed system of individual rights, advanced legal system, and interlocking alliances with democracies, is uniquely situated to lead this effort.
  5. The United States needs to act to ensure future economic growth, provide for a competitive workforce, maintain a competitive position in a global economy, and provide for our future national security needs.
  6. Policies to promote responsible AI must be a top priority for this and future administrations and Congresses.

The Report then suggested five “Pillars for AI Regulation,” which seek to provide a foundation for lawmakers and other policymakers considering how best to address quickly developing AI technologies and the market’s rapid embrace of such tools in a variety of circumstances. The Commission’s determinations outlined the following:

  • Efficiency. Generally, policymakers should consider existing laws and regulations and focus on filling any current gaps to accommodate new challenges created by AI usage.
  • Neutrality. New laws should be technology neutral and focus on applications and outcomes of AI, not on the technologies themselves.
  • Proportionality. When policymakers determine that laws are necessary to fill gaps in existing law, they should attempt to adopt a risk-based approach to AI regulation to ensure a balanced and proportionate approach.
  • Collegiality. Federal interagency collaboration is vital to developing cohesive regulation of AI across the government.
  • Flexibility. Laws and regulations should encourage private sector approaches to risk assessment and innovation to provide the flexibility of keeping up with rapidly changing technology.

Finally, the Report suggested several recommendations according to the following areas:

  • Preparing the Workforce – such preparation would use an evidence-based approach; would focus on educating the future workforce; would include training and reskilling within the private and public sectors; and would include programs directed toward attracting high-skilled talent.
  • Bolstering Global Competitiveness – this goal would include shoring up global partnerships between the U.S. and key partners and allies; advancing IP protections; providing necessary related resources to the U.S. Patent and Trademark Office (among others); and protecting ingenuity through provisional approvals of patent submissions.
  • Protecting National Security – this aim would develop associated laws and codes of conduct promoting human rights and innovation; establish international rules of conduct; invest in systems validation; streamline procurement; and support work with allies to identify investment opportunities for AI-enabled systems among like-minded countries and allies.

Among the materials cited by the Commission were two articles authored by BakerHostetler attorneys: AI-Human Interaction—Soft Law Considerations and Application, 1 JARWA 4, 360–370 (Feb. 18, 2022), which the Commission considered regarding the difference between AI conceptually and simple algorithms, and Archimedes’ Lever and Audience Participation – or – Multifactor Soft-Law Transparency for AI System Process Development, 5 RAIL 1 (2022), which was cited for the Commission’s considerations of challenges in transparency and issues regarding “a one-size-fits-all Transparency mechanism.”

Generative AI Tools Can Present IP Risks, But They’re Manageable


The sudden increase in news coverage of generative artificial intelligence (AI) tools like ChatGPT and Midjourney has employees excited to discover how these accessible tools can make their jobs easier. Employers are concerned about the legal implications of using such tools, and they are exploring different approaches for their own AI usage policies.

However, blanket policies that try to account for the risks that can come from using any AI technology can be overly restrictive or so generic that they do not properly address the legal considerations for AI usage. The rules for using AI technology under an enterprise license are likely more permissive than those for consumer-facing AI tools, as the former will likely have broader confidentiality and indemnity protections than the latter. For this reason, employee guidance on AI usage should specifically identify the AI tools subject to particular guidance and specifically differentiate between AI tools that may be offered under both a personal and an enterprise license, like ChatGPT. Those interested in a more general conversation about current AI frameworks can see our article here.

Read full IP Intelligence blog post here.

With New Enforcement Action, FTC Warns Against Health Information Being Used for Advertising Purposes

If the Federal Trade Commission’s (FTC) recent pursuits did not make clear the agency’s deep concerns about the use of health information for advertising purposes, a new enforcement action brought by the FTC against BetterHelp – to the tune of $7.8 million – should leave no uncertainty.

Factual Allegations

BetterHelp provides online counseling services and also provides specialized services for various constituencies, including people of Christian faith, teens and the LGBTQ+ community. The advertising technology BetterHelp utilizes on its website collects a range of information from and about visitors. When consumers sign up for a BetterHelp account, that information includes email, IP addresses, and certain sensitive health information responsive to intake questions.

The complaint details the many strong privacy promises that BetterHelp made over time to consumers through statements made during the intake process and in various iterations of the privacy policies. For instance, at sign-up, consumers were asked about mental health issues and were told the information was “anonymous,” “strictly private” or “will stay private between you and your counselor.” At times, the BetterHelp website stated that email addresses were “never shared, sold or disclosed to anyone.” Note that in this context, the fact that BetterHelp had a person’s email address would indicate the owner of that email address was seeking mental health treatment. In addition, banner statements at the bottom of the website stated at times that “[We] never sell or rent any information you share with us” and later, “We use BetterHelp and third-party cookies and web beacons to help the site function and measure the effectiveness of our ads.”

As for the privacy policy, it was allegedly “barely visible” at the bottom of the page. At the outset, the privacy policy said that information would be used and disclosed for certain purposes, including to connect users to a therapist and to operate the service. No mention was made of using or disclosing the info for advertising purposes. The policy also said nothing about “permitting third parties to use this information for their own purposes.” Eventually, the policy was changed to indicate that BetterHelp may both use and disclose information for advertising, but it still did not indicate that the third parties could also use the information for their own purposes. The FTC heavily emphasized the issue of third-party uses in its complaint.

The complaint then details the many ways in which the FTC alleges that BetterHelp violated the privacy-related promises it made, including many instances in which BetterHelp allegedly shared data with third parties or used sensitive data in ways that had not been clearly described to consumers at the time of collection. A few examples of the allegations:

  • Information that was collected at times was used to retarget ads to users who had visited the website.
  • Personal information was used to “find and target potential new users” with ads through locating potential new users who “shared traits with current [u]sers.”
  • In order to accomplish all of the above, consumers’ information was shared with various third parties.
  • The company benefited from this sharing, and the FTC alleges that BetterHelp “brought in hundreds of thousands of new [u]sers, resulting in millions of dollars in additional revenue.”

The complaint also details issues that contributed to the alleged violations. The complaint emphasizes that a recent college graduate, with little to no health privacy experience, was given “carte blanche” by the company to decide what user information to upload to a third-party platform and how to use that information. In another instance, although hashed email addresses were provided to a third party, the hashing was done merely to hide the addresses from a potential hacker. BetterHelp allegedly knew that the third party would be able to undo the hashing and reveal the email addresses in order to conduct matching and know who was seeking or in therapy. Moreover, the FTC alleges that BetterHelp did not contractually limit how third parties could use or disclose the data at issue and that some of the third parties indeed used the data for their own purposes, including research and development and improving their advertising products. This was not adequately described to users in the statements that the agency challenges, the FTC alleges.
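To see why hashing alone does not anonymize email addresses, consider the sketch below. It is a generic illustration of deterministic hashing, not a description of BetterHelp’s actual systems or any particular ad platform’s API: because the same input always yields the same digest, a third party that already holds plaintext emails can “undo” the hashing simply by hashing its own list and comparing digests.

```python
import hashlib

def hash_email(email: str) -> str:
    """Deterministically hash a normalized email address with SHA-256."""
    return hashlib.sha256(email.strip().lower().encode()).hexdigest()

# The service uploads hashed addresses rather than plaintext.
uploaded = {hash_email("patient@example.com")}

# A third party holding its own plaintext email list can recover the
# matches by hashing its candidates and comparing digests -- no
# cryptographic break is required.
candidates = ["patient@example.com", "other@example.com"]
matched = [e for e in candidates if hash_email(e) in uploaded]
print(matched)  # ['patient@example.com']
```

This determinism is precisely what makes hashed identifiers useful for ad-platform matching, and it is why the complaint treats the hashing as concealment from outsiders rather than protection from the recipient.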

And finally, the FTC alleges that when news reports emerged about the sharing of health data in 2020, the company provided “false responses” claiming that data was not shared with third parties. (Note: It certainly appears that this investigation went on for quite a long time given the 2020 news reports.)

The Specific Complaint Allegations

Given the broad range of activities discussed in the complaint, it is helpful to parse the specific unfairness and deception counts laid out in the complaint to better assess what the FTC considers violations. For those newer to FTC issues, the FTC can allege that practices are unfair if they meet the three-part unfairness test, which requires showing that the practices (1) cause or are likely to cause substantial injury to consumers, (2) cannot be reasonably avoided by consumers and (3) are not outweighed by countervailing benefits to consumers or to competition. What this means in practice is that, regardless of any statement made on the website or in the privacy policy, the FTC can, after reviewing a practice, declare that practice, in and of itself, to be unfair.

And that is what the FTC did here, alleging that it is an unfair practice to fail to obtain affirmative express consent before collecting, using and disclosing to third parties consumer health information. There is a second broader unfairness count, which is the failure to have reasonable measures in place to protect health information in connection with the collection, use and/or disclosure of that information. Paragraph 72 of the complaint details the practices that allegedly support this count and includes the following alleged activity to support a violation:

  • Lack of written standards regarding collection, use and disclosure of health information, including making sure practices comply with representations.
  • Inadequate training, guidance and supervision.
  • Failure to obtain affirmative express consent to collect, use and disclose health information for advertising as well as for third parties’ own purposes.
  • Failure to contractually limit third parties from using health information for their own purposes.

And then there are a host of practices that the FTC alleges are deceptive and that are all of a similar genre but reflect distinct violations, as described below:

  • Telling consumers that it would disclose health information to third parties for “limited purposes” while not including advertising or third parties’ own uses as a listed purpose.
  • Telling consumers that it would use health information for “limited purposes” while not including “advertising or advertising-related purposes” as a listed purpose.
  • Misrepresenting that it would not disclose health information to any third party for advertising or that third party’s own uses.
  • Misrepresenting that it would not use consumers’ health information for advertising.
  • Misrepresenting that health information would not be disclosed to anyone except the consumer’s licensed therapist.
  • Misrepresenting Health Insurance Portability and Accountability Act (HIPAA) certification. (We haven’t delved into this issue here, but don’t imply that your services have been reviewed and found compliant.)

Monetary Penalty Rationale

And finally, we get to the $7.8 million question: How did the FTC get money here when there is no alleged violation of the Health Breach Notification Rule? (An interesting statement from Commissioner Wilson explains why there was no rule violation in this context – it simply was because the information at issue does not meet the definition of a health record.)

The agency is using Section 19 of the FTC Act, which – in the context of administrative litigation – allows the agency to seek money in circumstances where a reasonable person would have known the practice was dishonest or fraudulent. So the bottom line is that the current commission is stating that the types of practices alleged in this case are dishonest or fraudulent.

A blog we wrote a while back explains the FTC administrative process and legal standard here, and Commissioner Wilson (who left the commission effective March 1) states:

BetterHelp told consumers “Rest assured – your health information will stay private between you and your counselor” but, as alleged, shared this highly sensitive information with third parties for the purpose of monetizing it. I am comfortable that this conduct falls within our authority to seek relief under Section 19 of the FTC Act.

Final Takeaways

We have been saying for quite a while that the FTC will be focused heavily on health privacy issues, and that is precisely what we are seeing. To stay on the FTC’s good side, entities must

  • obtain express informed consent before they share health information with third parties;
  • have reasonable processes and procedures to protect what happens to that information, both internally and externally (assuming they have consent to share that data); and
  • tell consumers with some degree of specificity how that data will be used.

Anyone whose practice touches upon the collection or use of health information needs to read the case in full.

2023: A Generative AI Odyssey

Artificial intelligence (AI) has long existed in the public consciousness through science fiction, doomsday planners, and fears of Ray Kurzweil’s singularity—but it now appears to be an accessible reality. 2023 has begun with a sharp increase in the number of AI tools in the marketplace, such as AI-based bots that understand natural language and generate human-like responses to text-based conversations. These bots are what is known as “generative AI,” or algorithms that receive inputs and output “new” content, such as text, imagery, and audio.

Many businesses have already been using AI tools, whether to make employment decisions or automate simple tasks. Regulation on these tools has already begun at the state and local levels, such as the various state consumer privacy laws that regulate autonomous decision-making and NYC’s Automated Employment Decision Tools Law (NYC AEDT Law) that was originally set to come into effect on January 1, 2023 (currently delayed to April 15, 2023). Now, with the advent of readily accessible tools like these AI-based text bots that can seemingly create work-product through a sentence prompt, regulators like the Biden Administration and key industry players like the U.S. Department of Commerce’s National Institute of Standards & Technology (NIST) are also weighing in on the discussion.

In October 2022, the White House Office of Science and Technology Policy (OSTP) issued its Blueprint for an AI Bill of Rights. This document provided basic principles and guidance on ethical use of AI, namely safe and effective systems, algorithmic discrimination protections, data privacy, notice and transparency, and the right to opt out of AI.

While the OSTP continues to discuss and explore how to operationalize this guidance, NIST also provided its own guidance through the AI Risk Management Framework (AI RMF 1.0), to which OSTP provided “extensive input and insight.” The AI RMF 1.0 is made up of four “core functions”: governing information (as we’ve discussed before), mapping AI risks, measuring AI risks through multifactorial risk assessment methods and independent experts, and managing mapped and measured risks through triage, covered in the AI RMF Playbook, Roadmap, Crosswalks, and Perspectives. Comments on AI RMF 1.0 have closed, and as of this blog’s publication, an updated version is expected in spring 2023.

In the near term, organizations have begun to address issues like popularized use of AI-based text bots (extremely accessible to everyone with an internet connection) generally, as well as purpose-driven applicant tracking system (ATS) initiatives that comb and sift resumes “automagically” before relinquishing decision-making to employers. Approaches, at least initially, seem to focus on two ends of the spectrum:

First, many organizations are utilizing and/or updating internal policies regarding the creation/development, deployment, and ongoing monitoring of the use of such automated processing tools (as we’ve also discussed before regarding frameworks posed by the Federal Trade Commission, among other frameworks) and adopting different checklists and measuring tools in service to responsible development. Some of these efforts are utilizing existing platforms to catalog and map internal data stores and related organizational activities, many of which began in response to greater organizational interests in data security and data privacy.

Second, organizations are beginning to confront regulatory notice requirements. There are laws in addition to the NYC AEDT Law’s ultimate notice requirements that address the topic. In particular, the California Privacy Rights Act (CPRA) has provided the California Privacy Protection Agency (CPPA) a mandate to promulgate regulations governing access and opt-out rights regarding covered entities’ uses of automated decision-making technology. Similarly, the Virginia Consumer Data Protection Act (VCDPA), the Colorado Privacy Act (CPA), and the Connecticut Data Privacy Act (CTDPA) all grant rights to opt out of personal information processing directed toward profiling, and they create additional requirements regarding the use of automated decision-making technology.

While regulations will be promulgated and frameworks will be developed, there is the thought that, at least in the U.S., the government may be moving toward common ground. OSTP and NIST, for example, have continued to communicate with each other so that their sets of guidance are “complementary,” as stated by Alondra Nelson in her former capacity as OSTP’s chief. Similar themes do indeed emerge in both frameworks, such as consumer protection and transparency. As such, businesses should keep both in mind as they develop or continue to develop their AI toolbox and should utilize such pre-work when regulatory disclosures and practices finally “go live”—likely in a very different way than what we’ve seen so far.

The Federal Trade Commission’s New Health Product Compliance Guidance

Join Randy Shaheen and Daniel Kaufman as they discuss the Federal Trade Commission’s new Health Products Compliance Guidance. During this webinar, they will share their insights about the new Guidance, highlight areas of particular importance for anyone involved in marketing or analyzing health claims, and explain how this new document differs from the agency’s two-decade-old Dietary Supplement Guidance. This session is the third event in our monthly 2023 Advertising CLE Webinar Showcase. The program is approved for 1.0 hour of CLE credit.

Read more.

Key Takeaways from the US National Cybersecurity Strategy

Last week, the White House released its much-awaited National Cybersecurity Strategy (the Strategy), which highlights the Administration’s cybersecurity policy development over the past two years and outlines critical objectives that will take years to achieve. The Strategy builds on the President’s May 2021 Executive Order, which committed the government to modernizing its own cybersecurity defenses, increasing the real-time sharing of threat intelligence across the government and with the private sector, improving government-private sector coordination on incident response and resiliency, and using the government’s purchasing power to drive improvements in the broader ecosystem.

Read full alert.

HHS Announces New Divisions to Address Weighty Case Load


On February 27, 2023, the U.S. Department of Health and Human Services (HHS) announced that its law enforcement agency – the Office for Civil Rights (OCR) – will reorganize, adding new divisions to better address the rapid increase in cases it is charged with handling. Tellingly, the title of HHS’s announcement focuses on the need to better address the “growing need” for enforcement. The OCR will now have three new divisions: the Enforcement Division, Policy Division and Strategic Planning Division. OCR Director Melanie Fontes Rainer indicated these new divisions are needed due to the OCR’s caseload having increased 69 percent between 2017 and 2022.


The Podcast You Didn’t Know You Were Missing – AD Nauseam

It’s not every day that we get to announce new content created especially for advertising and marketing enthusiasts who just can’t get enough about legal issues in advertising and the latest from the Federal Trade Commission and the National Advertising Division. With that, we are proud to launch AD Nauseam, a new podcast series from some of the same people who post regularly to this blog.

With AD Nauseam, we will provide deep dives into some of the latest cases being announced as well as important issues of the day. Please join our hosts, Amy Mudge and Daniel Kaufman, as they talk ad nauseam about issues ranging from endorsements and testimonials to green marketing, health claims and their favorite artists from the 1980s. 

A link to the inaugural episode can be found here, and the podcast can also be found on all major streaming services.

Illinois Supreme Court: Sections 15(b) and 15(d) BIPA Claims Accrue with Each Scan or Transmission

Today the Illinois Supreme Court issued a decision in Cothron v. White Castle System, Inc., 2023 IL 128004, in which the court held that claims arising under Sections 15(b) and 15(d) of the Illinois Biometric Information Privacy Act, 740 ILCS 14/1, et seq. (BIPA), accrue with each scan or transmission of biometric identifiers or biometric information. Section 15(b) pertains to notice and written consent before collecting, capturing, purchasing, receiving through trade, or otherwise obtaining biometric data. Section 15(d) pertains to the disclosure, redisclosure, or dissemination of biometric data without consent.

In Cothron, the plaintiff, a manager of a White Castle System Inc. (White Castle) restaurant, filed in the Circuit Court of Cook County a putative BIPA class action, which was subsequently removed to the U.S. District Court for the Northern District of Illinois. The plaintiff alleged that in 2004, White Castle implemented a system that required employees to scan their fingerprints to access their pay stubs and computers and that a third-party vendor then verified each scan to authorize the employee’s access. In 2008, BIPA took effect. The plaintiff alleged that White Castle did not seek her consent to acquire her alleged biometric data until 2018. The plaintiff alleged that White Castle (1) failed to provide notice and to obtain her consent when collecting her alleged biometric data under Section 15(b) and (2) disclosed her biometric data to a third-party vendor without her consent under Section 15(d).
