Latest FTC Health Privacy Case Sheds Light on Agency Health Privacy Approaches

Health privacy has been a Federal Trade Commission (FTC) priority for decades, and indeed, one of its very first privacy cases, in the early 2000s, involved the inadvertent sharing of user health data. Fast-forward two decades, and health privacy remains a major concern. Case in point: The latest FTC privacy enforcement action focuses on the practices of GoodRx and is the first FTC case to allege a violation of the Health Breach Notification Rule (HBNR or Rule). This enforcement action should serve as a warning shot to companies dealing in health information, reminding them that just because they do not fall under the Health Insurance Portability and Accountability Act (HIPAA) does not mean they are free to use the data they collect without potential regulatory consequences.

FTC’s Health Breach Notification Rule Background and Focus on Health Information

As explained in more detail in this post, the HBNR was introduced as part of the American Recovery and Reinvestment Act of 2009. The Rule applies to entities that are not subject to HIPAA but are capable of obtaining health records from multiple sources. The FTC has specifically indicated that the following types of entities that handle health information are subject to the Rule: health apps and wearable devices that track diseases, diagnoses, treatments, medications, fitness, fertility, sleep, mental health, diet, and other vital areas.

In September 2021, the FTC issued a policy statement on the FTC’s HBNR, “reminding” entities that “a ‘breach’ is not limited to cybersecurity intrusions or nefarious behavior. Incidents of unauthorized access, including sharing of covered information without an individual’s authorization, triggers notification obligations under the Rule.”

In the wake of the Dobbs decision, the FTC issued a warning, by way of an open letter from the acting associate director of the Division of Privacy & Identity Protection, that websites sharing health, location and highly sensitive data without adequate disclosures to consumers would “hear from” the FTC.

GoodRx and the Allegations Raised by the FTC

GoodRx is a digital healthcare platform that sells health-related products and services to consumers, including through prescription medication discount products and telehealth services.

Privacy Policy Statements

For many years, the company’s online privacy policy stated, “We never provide advertisers or any other third parties any information that reveals a personal health condition or personal health information.” See Para. 27 of the FTC’s complaint. In spring 2019, GoodRx amended that sentence to remove “or any other third parties,” and a month later it removed the sentence from the privacy policy entirely, without notifying users of the change. The complaint also alleged that various versions of the privacy policies for various GoodRx companies stated that the company would share users’ personal information – name, phone number and email address – with third parties only to provide services to users or to contact them directly, and later that it would share information only as required by law, in a merger, to respond to consumer requests, to securely store or process data, or to reach out to someone if health or safety were a concern. Id. at Paras. 33-34.

In addition to these direct claims about data sharing, the complaint also notes instances where GoodRx or its sister companies stated that they complied with certain standards – like the Digital Advertising Alliance principles, HIPAA and the “same guidelines as any health entity.” Id. pp. 7-9.

Alleged Sharing via Website Tracking Technology

The complaint details many ways in which the FTC believes that the company violated these very promises. At its core, the FTC’s complaint alleges that health information was shared with advertising and social media platforms for advertising purposes. The sharing allegedly occurred in multiple ways, but in short, the FTC focuses on GoodRx’s use of website tracking technology, advertising technology and certain software development kits (SDKs) that it says disclose this information, contrary to GoodRx’s privacy policies. The complaint also alleges that until early 2020, GoodRx did not have “sufficient or formal compliance programs for reviewing and approving all data sharing requests or third-party tracking tool integrations. It also had no policies or procedures for notifying users of breaches of their personal and health information.” It seems fairly likely that these policies and procedures, if implemented, would have helped identify and remediate many of the practices and issues alleged in the complaint.

The Violations

The FTC alleges that GoodRx engaged in both deceptive and unfair practices in connection with the practices described above. Notably, the FTC complaint alleges that it is an unfair practice to have shared consumer health information with third-party advertising platforms without user knowledge and affirmative express consent. Given the prevalence of the use of advertising trackers, entities covered by the HBNR should give immediate attention to how they are obtaining consent to share with website tracking technology providers.[1]

This failure to obtain express consent led to a first-of-its-kind allegation by the FTC: that GoodRx’s sharing with website tracking and advertising technology providers constituted a breach under the HBNR and that GoodRx violated the Rule by failing to provide notification as required. To be clear, the HBNR is at its core a notification rule, and it is the lack of notice to consumers that triggers the FTC’s ability to allege a violation here. Last year, an FTC blog stated that it can be a “de facto” violation of the FTC Act in some cases to not provide notice to consumers, and this appears to be the first FTC case to actually allege that as a specific FTC Act violation. These unfairness counts are also a good reminder that the FTC doesn’t need a deceptive privacy policy to go after you for certain questionable privacy practices; unfairness is also a viable tool, and it doesn’t require any statement in a privacy policy.

The remaining alleged violations are more commonplace:

  • It was an unfair practice to not have adequate policies and procedures to prevent the alleged unauthorized disclosures.
  • It was an unfair practice to fail “to notify users of breaches of that information.”
  • It violated the FTC Act by misrepresenting that its telehealth services were HIPAA-compliant when, in fact, they were not.
  • It misrepresented its compliance with the Digital Advertising Alliance Sensitive Data Principle, which generally prohibits the use of certain health information for online behavioral advertising without consent.


So what does the settlement require? First, there is a $1.5 million civil penalty for the HBNR violation. Much has been written about current limitations on the FTC’s ability to obtain monetary relief, but the HBNR is one area where the FTC can and will seek civil penalties. (Notably, FTC Commissioner Christine Wilson, in a concurring statement, said that she would have preferred that the civil penalty be higher. She also doesn’t mince words in this must-read statement, taking some of her colleagues to task for accepting this penalty amount while having dissented on other high-profile FTC privacy cases that involved much greater penalties.)

But there is a lot more to the settlement than money. It is also the first FTC settlement that bans a party from disclosing health information to third parties for advertising purposes. The order provisions of the ban are carefully crafted to allow certain analytics and contextual advertising, but the agency’s message is quite clear. The order also requires affirmative express consent (which is defined in detail) in many circumstances where data is shared with third parties. And as expected, the order requires the creation of a comprehensive privacy protection program with biennial third-party assessments. And finally, there are robust notice provisions, including direct notice to users and website notice.

We mentioned earlier a separate statement from Wilson, and one other issue from her statement is worth flagging. She states in a footnote that this case reflects a violation of the HBNR “based on a plain reading of the text, setting aside any gloss the Commission sought to add in its September 2021” policy statement.

Actions to Consider

There is a lot to digest about this case, but the following are a few key takeaways for companies that provide health-related services, whether or not they are HIPAA-covered entities.

Policies and Procedures Matter – It is important to regularly review your policies and procedures and to make certain that your practices are in sync with how you describe them to users. Whether such policies would have prevented some of the alleged practices in this case is unknown, but the lack of them certainly did not help the FTC’s view of GoodRx.

The HBNR Should Always Be Considered – Many practitioners forget that the activities that constitute a breach under the HBNR are quite broad and can include unauthorized sharing with a third-party partner. Pay extra-close attention to any health data that is being shared with third parties.

Sharing Can Be Too Simple – Through pixels, SDKs and many other means, it is easy – and, frankly, the norm – to share user data with third-party advertisers. Depending on configurations, the information shared may go beyond what companies intend or what users would expect. Make sure you understand all tracking technologies used on your website and their configurations, and confirm whether they jibe with your privacy policies.

FTC Unfairness Allegations Can Get at Many Types of Activities – Practitioners often focus on state laws and specific federal health laws without adequately assessing whether the practices at issue could also constitute an FTC unfairness violation. The contours of unfairness are not particularly well defined, but it is worth considering whether an FTC unfairness issue could be lurking in any data incident.

[1] Note that HIPAA-regulated entities have been on notice since Dec. 1, 2022, that the Office for Civil Rights (OCR) would be taking a strict view of the use of tracking technology and advertising technology. A blog post explaining those technologies and the OCR’s guidance can be found here.

For Educational Institutions, Post-Ransomware Harassment Requires A+ Messaging


Educational institutions have not been excluded from the ransomware epidemic, and stakeholder communications are critical to an effective response. In a typical double-extortion ransomware attack, threat actors demand that victims pay a ransom to decrypt systems and to prevent publication of stolen data. However, with a decline in the number of victims choosing to pay a ransom, threat actors are trying different approaches. Post-attack harassment of victims’ students, employees, board members, donors or other stakeholders was once an outlier pressure tactic but appears to be on the rise among some ransomware gangs, and it is often directed toward victims in the education sector, such as universities/colleges and school districts. Increasingly, threat actors contact their victims’ students, employees or others to expose details about the attack in an effort to force a ransom payment. For educational institutions, this growing trend highlights the importance of promptly providing clear, consistent messaging to ensure that they can frame, if not control, the narrative about the incident.   

Threat actors may contact students and employees using contact information found in the files stolen from an institution’s servers to try to coerce the educational institution to pay a ransom. In these communications, a threat actor may claim to have taken a large amount of sensitive data from the institution. The threat actor may assert that they will publish the data online because the school does not care enough about its students to protect the information. This tactic is designed to pressure a ransom payment by creating a panic within the institution’s community as recipients begin questioning administrators and other personnel about the incident or posting about these messages on social media.

Educational institutions, like any ransomware victims, cannot control what a threat actor will do. But there are measures they can take to be well positioned to mitigate the impact of this kind of threat actor harassment.

As an initial matter, providing clear, accurate messaging to students, parents, employees and other community members puts an educational institution in the best position to avoid negative fallout. When, for example, a student first learns of an incident through an email from a threat actor claiming their personal information will be published to the dark web, they may perceive their school as lacking transparency and might distrust any reactive messages the school publishes. Educational institutions that communicate promptly and proactively with their communities to share information about what happened in the attack and what they are doing in response may be inoculated against the negative effects of such harassment.  

Of course, the timing and accuracy of messaging are key to maintaining the community’s trust. Rushing out a message that is light on details can undermine confidence if there are questions that cannot be answered. Vague or inaccurate messages downplaying an incident can be similarly damaging. For example, a victim that communicates that student personal information was not impacted would quickly lose its constituents’ faith if a threat actor emails files or images showing that student data was involved. Initial messaging identifying known facts, followed by periodic updates, can help reassure the community, prevent administrators from being overwhelmed by questions, and serve as a buffer against incomplete or incorrect social media narratives.

The growing possibility that a threat actor may act out by harassing members of a college, university or school district community underscores the need to move quickly and decisively when investigating and communicating about an incident. Proactively engaging with the community to provide accurate messaging about the incident and response efforts can help build trust, demonstrate a commitment to transparency and reduce the negative effects of such harassment. Educational institutions should develop their strategies for internal and external messaging and communications early in the incident response process and keep these strategies updated as their efforts progress. Outlines and sample messages can even be incorporated into an incident response plan. These messages can be tested and refined in tabletop and other incident readiness exercises. Educational institutions that are able to quickly determine what the message will be, who will deliver it, and when and how it will be sent are likely to be well positioned to stay ahead of threat actors that try to pressure ransom payments by harassing their communities.

Illinois Supreme Court: 5-Year Statute of Limitations for BIPA Claims


Earlier today, the Illinois Supreme Court issued a decision in Tims v. Black Horse Carriers, Inc., 2023 IL 127801, in which the court held that a five-year statute of limitations applies to all claims arising under the Illinois Biometric Information Privacy Act, 740 ILCS 14/1, et seq. (BIPA). There are five primary sections under BIPA. Section 15(a) pertains to the establishment and maintenance of and adherence to a retention schedule and guidelines for destroying collected biometric information. Section 15(b) pertains to notice and written consent before collecting or storing biometric information. Section 15(c) pertains to selling or otherwise profiting from collected biometric information. Section 15(d) pertains to the disclosure or dissemination of biometric information without consent. Section 15(e) pertains to the proper storage and transmittal of collected biometric information.


Welcome Counsel Andrew Epstein to the DADM Group

We are excited to welcome new Counsel Andrew Epstein to our Digital Assets and Data Management Group. Andrew joins our Digital Risk Advisory and Cybersecurity team and works out of our Seattle office.

Andrew joins us most recently from Ethos Technologies, Inc., where he was Senior Corporate Counsel – Privacy, Cybersecurity and Employment.

As a strategic thought partner to businesses, non-profits and other organizations, Andrew provides risk-based options and operationalizes solutions to clients’ privacy and cybersecurity compliance obligations that are designed to optimize clients’ abilities to leverage a key asset: data.


Pennsylvania’s Data Breach Notification Law Is Changing: What Does It Mean for Entities Doing Business in the Keystone State?

2023 is going to bring big changes to Pennsylvania’s Breach of Personal Information Notification Act. Although the revisions to the law do not go into effect until May 2, 2023, now is the time for Pennsylvania entities to ensure that they are in compliance before the effective date.


OCR Guidance on Use of Tracking Technologies Warrants Review of Website Tech

The U.S. Department of Health and Human Services Office for Civil Rights (OCR) issued guidance regarding covered entities’ and business associates’ use of tracking technologies (the Guidance). As discussed in greater detail below, the Guidance reveals OCR’s position that an IP address is not just an identifier but is itself individually identifiable health information (IIHI) when collected by tracking technology on a healthcare entity’s website. In light of the significant regulatory and class-action activity against covered entities and business associates regarding their use of this technology, this post provides our analysis of how the Guidance impacts how these entities use and assess their usage of tracking technologies. We also provide general recommendations for healthcare entities in light of the Guidance.

Background – Tracking Technologies

Organizations use various tools to make their websites functional, improve visitor experience and analyze website traffic. These tools are often grouped together and referred to as “tracking technologies” and include things like cookies, web beacons or pixel tags, heatmaps, session replay, and recording scripts, all of which can be used to collect information from website visitors as they navigate a website.

The following list includes a general overview of each of these common technologies and their functions.

  • Cookies – Cookies are small text files sent to website visitors’ browsers from the websites they visit. They help that website learn or remember information about the visit – such as the user’s preferences (e.g., language choice, page configuration, shopping cart contents) – to improve the web browsing experience. Cookies can also be used for analytics, advertising and personalization. Depending on the user and browser settings, the browser will store cookies locally on the user’s device.
  • Pixels – Also known as web beacons, trackers or advertising technology (AdTech), a pixel is a piece of code embedded on a website that can be used to track visitor activity on that website. By default, pixels will collect information about URLs visited, buttons clicked and other actions taken by a website visitor on a webpage where the pixel is present. Many pixels interact with cookies to track users’ activity and preferences.
  • Heatmaps – Heatmaps collect user behavior data – such as button clicks and scrolling – to provide the website owner with a color-coded representation of the website elements that are the most (hot) and least (cold) interacted with.
  • Session recording – Also known as session replays, user recordings and user/visitor replay tools, session recordings are renderings of real actions taken by visitors as they browse a website. The recordings capture mouse movement, clicks/taps, keyboard strokes and scrolling during the visitor’s website session to help website owners improve site functionality by understanding how users navigate their site, how they interact with elements, where they hesitate and where they get stuck. By default, the session recording tools we have seen (including HotJar and Crazy Egg) automatically anonymize keyboard strokes (i.e., the data a user inputs in a form) and can be configured to suppress specific elements.
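
As an illustrative sketch of the kind of data a pixel transmits, the snippet below builds a hypothetical pixel request URL. The endpoint and parameter names (`dl`, `ev`, `uid`) are invented for the example and do not correspond to any real vendor’s pixel; the point is simply that the visited URL, the action taken, and any cookie-based identifier travel to the tracking vendor in the request itself:

```python
from urllib.parse import urlencode
from typing import Optional

# Hypothetical sketch only: endpoint and parameter names are invented,
# not those of any real pixel vendor. A pixel is typically a tiny image
# or script request whose query string carries details of the page visit.
def build_pixel_url(endpoint: str, page_url: str, event: str,
                    user_id: Optional[str] = None) -> str:
    params = {
        "dl": page_url,   # the URL the visitor is on
        "ev": event,      # the action taken (e.g., a button click)
    }
    if user_id is not None:
        params["uid"] = user_id  # a cookie-based identifier, if one is set
    return f"{endpoint}?{urlencode(params)}"

url = build_pixel_url(
    "https://tracker.example.com/px",
    "https://hospital.example.com/schedule-appointment",
    "click",
)
print(url)
```

Note that even this minimal example sends the full page URL – here, an appointment-scheduling page – to the third party, which is precisely the kind of disclosure at issue in the enforcement activity discussed below.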

Separately, all websites also collect a set of data from website visitors in order for the website to function, known as HTTP headers or “header information.” Without getting too technical, header information is how a website communicates with a device and is a component necessary for the Internet to work. Header information includes data about a visitor’s computer, mobile device and Internet connection, such as the IP address, operating system, browser type and app version. This information tells a website how to present information to the visitor (for example, the website might be presented differently when the visitor is on a computer versus on a mobile device) and how to get it there (i.e., the IP address).
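
To make this concrete, the sketch below shows the kind of header information a web server receives with every request, independent of any tracking technology; all values are invented for the example:

```python
# Illustrative sketch: typical request metadata a web server sees.
# All values are made up for the example.
request_metadata = {
    "remote_addr": "203.0.113.42",  # the visitor's IP address
    "User-Agent": "Mozilla/5.0 (iPhone; CPU iPhone OS 16_3 like Mac OS X)",
    "Accept-Language": "en-US,en;q=0.9",
    "Referer": "https://www.example.com/search?q=knee+pain",
}

# A server might branch on this metadata to present a mobile layout:
is_mobile = ("iPhone" in request_metadata["User-Agent"]
             or "Android" in request_metadata["User-Agent"])
print(is_mobile)
```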

Background – Regulatory Action and Litigation Related to Tracking Technology

Regulatory scrutiny of and class-action litigation based on healthcare providers’ use of tracking technology increased significantly after the June 2022 online publication of an article about healthcare providers’ use of Meta Pixel. Since 2016, there has been ongoing class-action litigation against a small group of entities and tracking technology providers. After June 2022, however, the litigation net was cast much wider, with new cases filed against many of the hospitals named in the article. Additionally, many of our clients (not all of whom were named in the article) began receiving regulatory inquiries from OCR, state attorneys general and departments of justice, and federal congressional committees. While the inquiries were triggered by interest in the use of tracking technology, the OCR inquiries have taken deep dives into general compliance with the Health Insurance Portability and Accountability Act (HIPAA) Privacy, Security and Breach Notification Rules. Several investigations have also revealed an interest in the intersection of tracking technology and its use on webpages related to women’s reproductive health following the Dobbs decision.

The Guidance – OCR’s Position on What Constitutes PHI When Collected from a Covered Entity’s Website

Below we highlight the significant points OCR makes in the Guidance in support of its position that an IP address is itself IIHI when collected by tracking technology on a HIPAA covered entity’s (CE) website. Those points are followed by OCR’s recommendations for using tracking technology in a HIPAA-compliant manner.

First, OCR’s rationale:

  • OCR asserts that an IP address alone, collected by a CE’s website, is IIHI. In explaining how the HIPAA rules apply to CEs’ use of tracking technologies, OCR begins by asserting that (1) a website user’s IP address or geographic location, or any unique identifying code, is individually identifiable health information (IIHI); and (2) all IIHI, including IP addresses and geographic locations, that a website visitor provides when using a CE’s website “generally is PHI [protected health information],” even if the individual does not have an existing relationship with the CE and even if the IIHI, such as an IP address or geographic location, does not include specific treatment or billing information like dates and types of healthcare services.
  • According to OCR, “[t]his is because, when a regulated entity collects the individual’s IIHI through its website or mobile app, the information connects the individual to the regulated entity (i.e., it is indicative that the individual has received or will receive health care services or benefits from the covered entity), and thus relates to the individual’s past, present, or future health or health care or payment for care.”
  • A business associate agreement (BAA) is required for use of tracking technologies on a CE’s user-authenticated websites. Regarding tracking technologies on a CE’s user-authenticated websites (e.g., a patient portal), OCR states such technologies generally have access to PHI, and therefore a BAA with the technology vendor is required.
  • A BAA is required for use of tracking technologies on certain unauthenticated webpages. Regarding tracking technologies on a CE’s unauthenticated websites (e.g., any publicly available pages not requiring a login), OCR states such technologies generally do not have access to PHI and the HIPAA Rules do not apply. However, OCR outlines certain cases where it says tracking technologies on unauthenticated webpages may have access to PHI and the HIPAA Rules do apply, including (1) the login page of the CE’s patient portal or a user registration webpage where the user creates a login for the patient portal and (2) webpages that address specific symptoms or health conditions, such as pregnancy or miscarriage, or that allow a visitor to search for doctors or schedule appointments.
  • OCR provides the following as an example of when tracking technologies on unauthenticated pages have access to PHI: “[T]racking technologies could collect an individual’s email address and/or IP address when the individual visits a regulated entity’s webpage to search for available appointments with a health care provider. In this example, the regulated entity is disclosing PHI to the tracking technology vendor, and thus the HIPAA Rules apply.”
  • Information collected from the user or the user’s device by a CE’s mobile app is PHI. Regarding CEs’ mobile apps, OCR notes that such apps collect information provided by the user (i.e., information typed or uploaded into the app) and by the user’s device (i.e., fingerprints, network location, geolocation, device ID or advertising ID) and states that such information is PHI. Thus, CEs must comply with the HIPAA Rules for any PHI that the mobile app uses or discloses, including any subsequent disclosures to mobile app vendors, tracking technology vendors or any other third party that receives such information.

OCR also offers examples of the HIPAA Privacy, Security and Breach Notification Rules’ requirements that CEs must meet when using tracking technologies with access to PHI. The OCR’s requirements are as follows:

Privacy Rule:

  • CEs must ensure that if PHI is provided to a tracking technology vendor, the disclosure is permissible under HIPAA or subject to an exemption, and that only the minimum necessary PHI to achieve the intended purpose is disclosed.
  • OCR clarifies that a website or mobile app’s privacy policy, terms and conditions, and/or privacy notice are not sufficient to permit disclosures of PHI to tracking technology vendors if the disclosure is not otherwise a permissible disclosure under HIPAA or pursuant to a valid BAA.
  • OCR states that tracking technology vendors that receive PHI must sign a BAA, which must include a description of the vendor’s permissible uses and a guarantee of safeguarding PHI. OCR warns CEs that the vendor must meet the definition of a business associate in order for a BAA to permit the disclosure. “Signing an agreement containing the elements of a BAA does not make a tracking technology vendor a business associate if the tracking technology vendor does not meet the business associate definition.”
  • If there is not a HIPAA-permitted disclosure or BAA, then CEs must obtain a HIPAA-compliant authorization prior to the disclosure of PHI to a tracking technology vendor. Website banners that ask users to accept or reject a website’s use of tracking technologies – such as cookies – do not constitute a valid HIPAA authorization.

Security Rule:

  • CEs must address the use of tracking technologies in their risk analysis and risk management processes and implement other administrative, physical and technical safeguards (e.g., encrypting PHI transmitted to a technology vendor) to protect the PHI.

Breach Notification Rule:

  • CEs must notify affected individuals, OCR and the media, as applicable, of an impermissible disclosure of PHI to a tracking technology vendor that compromises the security or privacy of PHI where there is no Privacy Rule permission to disclose PHI and there is no BAA with the vendor, unless the CE can demonstrate that there is a low probability that the PHI has been compromised.

BakerHostetler’s Assessment – Impact of the Guidance

The Guidance appears to conflate the statutory definition of IIHI with the identifiers listed in 45 CFR § 164.514(b)(2), which relates to de-identification of established PHI/IIHI. Under HIPAA:

  • IIHI is defined as “information that is a subset of health information, including demographic information collected from an individual, and: (1) Is created or received by a [CE]; and (2) relates to the past, present, or future [(PPF)] physical or mental health or condition of an individual; the provision of health care to an individual; or the [PPF] payment for the provision of health care to an individual; and (i) That identifies the individual; or (ii) With respect to which there is a reasonable basis to believe the information can be used to identify the individual.” 45 CFR § 160.103 (our emphasis).
  • Health information (Health Information) is defined as “any information, including genetic information, whether oral or recorded in any form or medium, that: (1) Is created or received by a [CE]; and (2) Relates to the [PPF] physical or mental health or condition of an individual; the provision of health care to an individual; or the [PPF] payment for the provision of health care to an individual.” Id. (our emphasis).
  • PHI is IIHI that is: “i) Transmitted by electronic media; (ii) Maintained in electronic media; or (iii) Transmitted or maintained in any other form or medium.” Id.

In other words, IIHI creates the threshold for when personal information is considered PHI subject to the Privacy Rule. As such, it must include some Health Information about an individual accompanied by sufficient identifiers such that the individual is/could reasonably be identified.

45 CFR 164.514(b)(2), on the other hand, only applies once a determination has been made that the data at issue is PHI, as it instructs entities on which data elements to remove from PHI in order to render it de-identified. It is not a list of data elements that are, standing alone, individually identifiable.

The Guidance does not acknowledge any of the myriad situations in which the information that can be collected by tracking technologies never even meets the threshold definition of Health Information. Additionally, the Guidance states that something is IIHI if it “connects” a person with a CE, even if the person never becomes a patient. This is not consistent with the statutory definitions of IIHI and PHI. As a result of these two definitional issues, the Guidance could be ripe for challenge by both targets of OCR investigation and industry groups, including with respect to the scope of the OCR’s regulatory authority under HIPAA.

In practice, even if the definitional issues above were not present, the OCR may have a problem sufficiently proving a violation. Namely, the Guidance fails to acknowledge that, while some visitors on a CE’s website are also the CE’s patients, the pervasive use of “Dr. Google” to diagnose oneself or one’s friends/family members means that it is very likely that a significant amount of the data collected is not about the visitors themselves. With that reality, parsing out when such circumstances arise is impossible. For instance, a person may go to a hospital’s website after googling “face rash” because someone else – a friend, relative, co-worker – was experiencing that symptom. That user’s IP address bears no relationship to the person with the condition being searched and thus this is not IIHI. An attorney at a law firm may visit a hospital’s website from his or her office, using the firm’s IP address, to determine whether the notice of privacy practices (NPP) is up to date. The IP address is the firm’s, not the attorney’s, and the perusal of the NPP is not related to a health condition. OCR opts for a sledgehammer over a scalpel here, and in doing so creates guidance so flawed that we believe OCR will find it difficult to sufficiently prove a wholesale violation.

The Guidance does acknowledge the ability of CEs and their business associates to conduct a risk assessment to determine whether the use of a tracking technology resulted in a compromise of PHI. In undertaking that analysis, the basic question of “Was PHI involved?” is crucial, and CEs can defensively continue to use HIPAA’s definition of PHI, rather than the Guidance, to make that determination.


This Guidance should not be retroactively effective, meaning it should only apply on a going-forward basis. However, the going-forward application of this Guidance warrants analysis on whether the benefits of CEs continuing the use of tracking technologies are worth the risk. Specifically, it is possible that OCR could use the Guidance as a basis to find willful noncompliance for entities that continue to use tracking technologies after its publication date – resulting in higher penalty amounts levied.

We do not believe that the use of tracking technologies is a per se violation, and we do believe that the Guidance can be successfully defended against. Nevertheless, because of the increased potential for high fines after the Guidance’s publication, in an abundance of caution, we recommend the following:

  • If, as a CE, you’ve not already done so, determine whether any tracking technology is utilized on your websites, appointment forms and/or patient portal. It is important to understand which specific technology is being utilized and what information may be transmitted with this technology. Common technology products we have examined in our investigations include Meta Pixel, Google Analytics, Google Maps, Yelp, HotJar, Microsoft Clarity and Crazy Egg, to name a few.
  • To the extent that discussions about continuation/discontinuation of tracking technologies have been tabled, we recommend reprioritizing the assessment and, if discontinuation is planned, implementing it quickly.
  • Implement a website governance plan so that legal/compliance/privacy professionals are part of any website technology change management process. This plan should be a documented policy and procedure, and training the marketing department and all advertising and marketing vendors on the process is highly recommended.
  • To the extent you will not discontinue all tracking technology use, ensure that each tracking product will be considered in your regular HIPAA risk analyses.
  • To the extent you will not discontinue all tracking technology use, the decision as to whether a BAA is appropriate should be documented as to each vendor. Although many vendors refuse to sign BAAs, in light of the Guidance, they may be more willing to do so.

Congratulations to Katherine Lowry and the IncuBaker Team

BakerHostetler is proud to announce that Financial Times recently recognized the firm’s IncuBaker team, along with incoming CIO Katherine Lowry, in its annual Innovative Lawyers North America 2022 Awards. The IncuBaker team won in the Innovation in Client Delivery category, and Lowry was named Most Innovative Intrapreneur.

The awards, presented on Dec. 5 in New York, celebrate the best in innovation from law firms and in-house legal teams in the North America region.

“I am thrilled to see Financial Times recognize both IncuBaker and Katherine,” said Bob Craig, BakerHostetler’s current CIO and co-creator of IncuBaker. “BakerHostetler’s collaborative culture and focus on innovation are key components in not only firm achievements, including awards and recognitions like these, but also the overall success of our people and the excellent service we provide our clients.”

Link to press release

CCPA/CPRA Rulemaking Update: What to Expect

The California Privacy Protection Agency (“CPPA” or the “Agency”) published on November 3, 2022, a Public Notice of Proposed Modifications and Additional Materials Relied Upon, which starts what we hope is the last round of rulemaking to finalize the regulations for the California Consumer Privacy Act (“CCPA”), as amended by the California Privacy Rights Act (“CPRA”). The CPRA amendments to the CCPA go into effect on January 1, 2023. Those new provisions will become enforceable starting July 1, 2023, and the Agency will be able to bring enforcement actions for violations that occurred on or after July 1. This article summarizes the changes in the Proposed Regulations, what businesses can do now to comply with the January 1 deadline, and what to expect in terms of forthcoming regulations and enforcement of the new California requirements in 2023.

Key Takeaways

  • SPI and Opt-Out Preference Signal: There was significant discussion by the Agency on two topics, and therefore businesses should continue to monitor updated regulations in these areas: (1) the use and disclosure of sensitive personal information (“SPI”) and (2) opt-out preference signals.
  • DPA and Notice Requirements: No material changes were made in the November 3 modified draft of the regulations for requirements relating to data protection agreements (“DPA”), notices and privacy policies. The Agency did discuss creating in the future a DPA template that businesses could incorporate by reference, similar to the standard contractual clauses that businesses can use to comply with the EU General Data Protection Regulation. For businesses that updated their DPAs and prepared notices and privacy policies to go live on January 1 based on the regulations proposed this past July, it is our assessment that no material changes should be needed, at least for the January 1 deadline. Businesses that did not update their service provider and third-party contract terms, or review the adequacy of their notices and privacy policies, in the past year should now review them against the November 3 draft regulations.

Continue Reading

California’s AB 587: What You Need to Know About Social Media Content Moderation


On Sept. 13, California Gov. Gavin Newsom signed into law AB 587, which requires social media companies to publicly post their content moderation policies and semiannually report data on their enforcement of the policies to the attorney general. The first part of this article will discuss the requirements imposed by AB 587 on social media companies. The second part will discuss other state laws that similarly moderate social media content and how they compare to AB 587. The last part of this article will examine the litigation history of content moderation laws and the potential implications of possible Supreme Court intervention on these state laws.

Continue Reading

New York Department of Financial Services Publishes Proposed Second Amendment to Its Cybersecurity Regulation


On Nov. 9, 2022, the New York State Department of Financial Services (NYDFS) published a proposed second amendment to its cybersecurity regulation. This follows its pre-proposed amendment, which was published on July 29. Our prior analysis of those amendments is available here. NYDFS considered the comments received in response to the pre-proposed amendments, and the second amendment clarifies and strengthens certain requirements. We highlight some of the key changes below.

Continue Reading