On Sept. 13, California Gov. Gavin Newsom signed into law AB 587, which requires social media companies to publicly post their content moderation policies and semiannually report data on their enforcement of the policies to the attorney general. The first part of this article will discuss the requirements imposed by AB 587 on social media companies. The second part will discuss other state laws that similarly moderate social media content and how they compare to AB 587. The last part of this article will examine the litigation history of content moderation laws and the potential implications of possible Supreme Court intervention on these state laws.
On Nov. 9, 2022, the New York State Department of Financial Services (NYDFS) published a proposed second amendment to its cybersecurity regulation, following the pre-proposed amendment published on July 29. Our prior analysis of those amendments is available here. In drafting the second amendment, NYDFS considered comments received in response to the pre-proposed amendments, and the revisions clarify and strengthen certain requirements. We highlight some of the key changes below.
Additional Incident-Reporting Requirements
The first pre-proposed amendment requires notification to NYDFS within 72 hours of unauthorized access to privileged accounts or the deployment of ransomware within a material part of a covered entity’s information systems. The amendment also proposed a new 24-hour notification obligation in the event a ransom payment is made and a 30-day requirement to provide a written description of why the payment was necessary, alternatives considered and sanctions diligence conducted. Those stringent timelines are maintained in the second amendment, with additional reporting requirements:
- 90 days – Within 90 days of the notice of the cybersecurity event, each covered entity shall provide the superintendent any information requested regarding the investigation of the cybersecurity event, to be sent electronically in the form set forth on the department’s website. Covered entities shall have a continuing obligation to update and supplement the information provided.
- 72 hours – Each covered entity that is affected by a cybersecurity event at a third-party service provider shall notify the superintendent electronically in the form set forth on the department’s website as promptly as possible, but in no event later than 72 hours from the time the covered entity becomes aware of such cybersecurity event.
Revised Definition of Class A Companies
The original proposed amendment created the new category of “Class A” companies, defined as covered entities with more than 2,000 employees or over $1 billion in gross annual revenue averaged over the past three years from all business operations of the company and its affiliates.
The second amendment revises that definition. Class A companies are now those covered entities with at least $20 million in gross annual revenue in each of the past two fiscal years from business operations of the covered entity and its affiliates in New York, and either:
- More than 2,000 employees averaged over the past two fiscal years, including those of both the covered entity and all of its affiliates, no matter where located; or
- More than $1 billion in gross annual revenue in each of the past two fiscal years from all business operations of the covered entity and all of its affiliates.
The changes in the second amendment may exclude from the Class A definition certain covered entities that have only a small presence in New York, and they also shift the revenue test from gross annual revenue averaged over the past three years to gross annual revenue in each of the past two fiscal years.
Additionally, the second proposed amendment modifies the definition of an “independent audit,” which is to be conducted by an external auditor, not an internal auditor.
CISO Requirements
The pre-proposed amendment required the chief information security officer (CISO) to have adequate independence and authority to appropriately manage cyber risks. The second proposed amendment removes the CISO independence requirement. It does require the CISO to have the ability to direct sufficient resources to implement and maintain a cybersecurity program. The second proposed amendment also:
- Only requires that the CISO’s annual board reports consider (as opposed to expressly address) certain factors (e.g., the confidentiality of nonpublic information and the integrity and security of the covered entity’s information systems, the covered entity’s cybersecurity policies and procedures, and plans for remediating material inadequacies).
- Removes the obligation included in the pre-proposed amendment that the CISO annually review the feasibility of encryption of nonpublic information at rest and the effectiveness of compensating controls.
- Changes the obligation that both the CEO and the CISO sign an annual certification or acknowledgment of noncompliance to a requirement that the “highest-ranking executive” and the CISO sign.
- Clarifies that the role of the board (or its equivalent or the appropriate committee) shall also include exercising oversight of and providing direction to managers on cybersecurity risk management.
Penetration Testing and Vulnerability Assessments
The second proposed amendment makes significant changes to the strengthened technical and written policy requirements detailed in the pre-proposed amendment. Those technical requirements for penetration testing, vulnerability management and access controls include:
- Requiring that user access privileges for privileged accounts be reviewed at least annually and be terminated upon employee departure.
- Having the required penetration testing be conducted by either a qualified internal or external independent party, which must include testing from both inside and outside the information system’s boundaries.
- Replacing the pre-proposed amendment’s exception to multifactor authentication for service accounts with an exception where the CISO approves a reasonably equivalent or more secure control, and otherwise requiring multifactor authentication for (i) remote access to the covered entity’s information systems, (ii) remote access to third-party applications from which nonpublic information is accessible and (iii) all privileged accounts.
- Replacing the pre-proposed amendment requirement for “strong, unique passwords” with a requirement to implement a “written password policy that meets industry standards.”
The pre-proposed amendment expanded the requirements for and definition of “risk assessments,” requiring that covered entities review and update risk assessments annually and conduct impact assessments whenever a change in the business or technology causes a material change in the covered entity’s cyber risk. The second amendment largely maintains these changes but removes the requirement for impact assessments.
Incident Response Plan and BCDR Plan
A covered entity would be required to provide relevant training on its incident response plan and its business continuity and disaster recovery (BCDR) plan to all employees necessary to implement such plans, and to test both plans at least annually. In the second proposed amendment, NYDFS removed the proposed requirement that copies of the incident response and BCDR plans be maintained at one or more accessible off-site locations, and it clarified that both plans must be distributed to and accessible by all employees necessary to implement them. Covered entities are also required to maintain backups adequately protected from unauthorized alteration or destruction (instead of “isolated from network connections,” as the pre-proposed amendment had required).
The second proposed amendment continues to illustrate NYDFS’s leading role in requiring covered entities to strengthen their cybersecurity practices. Covered entities should assess their cybersecurity practices to ensure they have adequate controls in place to comply with these anticipated regulatory changes. The 60-day public comment period to the proposed amended regulation ends on Jan. 9, 2023.
As a Halloween treat for HIPAA-covered entities and business associates, on October 31, the Department of Health and Human Services Office for Civil Rights (OCR) released a new video on its YouTube channel in which senior OCR cybersecurity advisor Nick Heesters addresses recognized security practices, or RSPs. In the video, Heesters answers a handful of questions directed to the OCR in response to OCR’s June 2022 call for input on the implementation of RSPs. While the video should be viewed in its entirety, we discuss here some of the more noteworthy aspects: (1) the OCR’s position on the “voluntary” nature of RSPs, (2) the goal posts around implementation, (3) the importance of robust asset inventory practices and (4) supporting evidence of RSP implementation.
New Software Development Security Attestation and Related False Claims Act Liability for Commercial and Noncommercial Software Developers and Suppliers
Software producers at all levels in the federal supply chain should prepare to attest that their software development practices comply with National Institute of Standards and Technology (NIST) standards supported by artifacts that demonstrate secure software development and by the software bill of materials.
On Sept. 14, 2022, the Office of Management and Budget (OMB) issued guidance establishing time frames for requiring all federal agencies to only use software provided by developers (producers) who can attest in writing to complying with the NIST-specified secure software development framework (NIST SP 800-218) and NIST software supply chain security guidance. OMB’s actions implement President Joe Biden’s May 12, 2021 Executive Order requiring NIST to identify practices that enhance the security of the software supply chain.
The continued growth of the market for nonfungible tokens (NFTs) in 2022 has helped shape the zeitgeist of what has been referenced colloquially by some as the “fourth industrial revolution,” defined largely by network effect (e.g., virality); rapid innovation; social, creative and civic engagement; and evolved perspectives with regard to how rights and obligations between and among parties to automated agreements are defined and enforced.
Commonly used to identify and affix identifiable rights to otherwise fungible digital media files, NFTs, along with other cryptographic assets and blockchain technology generally, compose the infrastructure required to facilitate transactions between and among anonymous or pseudonymous counterparties without involvement by third-party intermediaries, such as banks. As a result, the nonfungible (unique) nature of NFTs has revolutionized conceptions of digital property ownership by demonstrating that digital property is not only real but has intrinsic value, similar to real property.
On Sept. 16, 2022, the White House released a comprehensive framework for responsible digital asset development and, in particular, cryptocurrency. Agencies across the federal government have been working for the past six months to develop frameworks and policy recommendations to advance the six key priorities identified in President Biden’s March 9 executive order on Ensuring Responsible Development of Digital Assets: (1) consumer and investor protection, (2) financial inclusion, (3) promoting financial stability, (4) responsible innovation, (5) U.S. leadership in the global financial system and economic competitiveness, and (6) countering illicit finance. This framework comes weeks after the California Senate unanimously passed the Age-Appropriate Design Code Act on Aug. 29, 2022, reflecting an increased focus on platform accountability, transparency and consumer protection at both the state and federal levels.
In 2019, the U.S. Department of Health & Human Services, Office for Civil Rights (OCR) announced its Right of Access Initiative, promising to prioritize patients’ rights to receive timely copies of their medical records without being overcharged. In the three years since, which saw the transition to a new administration in Washington, OCR has publicized resolutions related to 41 Right of Access claims, including two civil monetary penalties (CMP) and 39 settlements totaling $2,428,650. In BakerHostetler’s 2022 Data Security Incident Response (DSIR) Report, we highlighted OCR’s ongoing commitment to its Right of Access Initiative, fully expecting the trend would continue, and also provided a high-level list of red flags based on the resolution agreements published at the time. In this blog post, we take a deeper dive into OCR’s enforcement actions under this initiative to date, including major themes and shifts in approach.
What’s Trending? (Privacy a la Mode)
Even as privacy laws and privacy enforcement are trending, notable fashion brands have been engaging in a “trial period” of new technologies: for example, exploring the integration of branding into digital assets in video games, virtual reality (VR) and augmented reality (AR) technology, metaverses, and nonfungible tokens (NFTs). Fashion naturally pushes the envelope, taking on risk in the interest of not being left behind and losing relevancy and notoriety. This raises several legal issues, such as trademark infringement by NFT creators and questions around marketing collaborations, as influencers become an essential component of a brand’s commercial success.
Artificial intelligence (AI) refers to the recognition or creation of patterns that simulate human actions or thought. Since the late 1970s, when people began regularly interacting with computers, AI has become increasingly prevalent, and uses of AI technology continue to create greater opportunities for interaction with human norms — those rules that define acceptable behavior. The intersection of those norms and AI processes that seek to replace human actions where efficiency calls for it is also an intersection of expectations and the law ― one that is changing and adapting quickly.
The recent article “AI–human interaction: Soft law considerations and application,” published in the Journal of AI, Robotics & Workplace Automation, discusses these issues. In particular, the article considers several primary concepts:
- The history of AI and how its purpose, usability and interaction with humans have evolved in the past 50 years.
- The “uncanny valley” as a potential challenge to AI: those instances where a robot’s nearly human appearance or digital presence induces mental uneasiness because humanlike appearances create expectations that robots cannot meet.
- The Turing test, a concept anticipated by AI pioneers, as a method of inquiry for determining whether a computer is actually capable of thinking like a human being.
- How chatbots launched by technology corporations have recently demonstrated the risks and ethical challenges that advances in AI present.
- The necessity of soft law to address the risks presented by increased use of AI as the industry progresses, and how legislative bodies have already begun addressing those risks.
Examining these issues, the article begins by tracing the history of AI from personal computing in the 1970s, when software and computer platforms were being developed with the goal of making everyone a computer user. As computers became a part of daily life, the field of cognitive engineering — a scientific field merging the study of how people think with the engineering of products to address human needs — developed with the goal of increased efficiency worldwide. Human-computer interaction then progressed for decades, improving usability and reflecting changes in society. Today, everyone is “plugged in” in some way in nearly every part of their existence, especially given the virtual and remote world designed in response to the COVID-19 pandemic. AI has not only adapted to these changes but continues to evolve, and its use is shaping up to create a new normal.
The rapid maturation of the industry has set off related calls for action in the legal and regulatory communities. The article considers these movements and posits that soft law is the ideal step to address AI innovations, especially when considering how certain legislative bodies (including the California Legislature) have frameworks addressing how AI communicates with the public. As noted in the article, because “this is unlikely to be a situation where AI developers police themselves without any outside demands or influence,” there is a need to continue expanding efforts like this into a soft law approach that works.
The Federal Trade Commission issued a detailed staff report on September 15 addressing Dark Patterns (or what some more descriptively call “manipulative design,” though “Dark Patterns” seems to be sticking). Regulators are focusing increased attention on these manipulative designs, and it is critical for marketing, user experience and design teams to understand this topic.