Texas Passes Bill Allowing Public Listing of Data Breaches, Effective Sept. 1, 2021

On May 31, 2021, the Texas Legislature approved House Bill 3746, which amends Section 521.053 of the Texas Business and Commerce Code, relating to certain notifications required following a data breach involving Texas residents.

The bill retains the existing requirement that any business or entity notify the attorney general of a data breach within 60 days of its occurrence if the breach involves at least 250 Texas residents. The notice must include the nature and circumstances of the breach, the number of residents involved, the number of residents who were sent a notice letter, the measures taken in response to the breach and whether law enforcement is engaged in investigating the breach. In our discussions with the Texas attorney general’s office, it has encouraged reporting entities to use its online reporting portal.

Notably, the bill allows the attorney general to post on its website a public listing of the data breach notifications received, excluding any sensitive personal information, which will be updated monthly. After one year, the attorney general will remove the posted notification if the entity has not reported any additional breaches during that period.

Once the bill is signed by Texas Gov. Greg Abbott, it will take effect Sept. 1, 2021.

Updated EU Standard Contractual Clauses Are Finally Here

On June 4, 2021, the European Union’s (EU) executive branch, the European Commission (EC), released its new Standard Contractual Clauses (SCCs) for compliant cross-border data transfers under the EU’s General Data Protection Regulation (GDPR), ending a long wait for revised SCCs. The new SCCs resolve certain practical issues companies faced when using the older versions but simultaneously introduce new obligations for businesses that transfer personal data out of the EU. The EC also released a set of SCCs to address GDPR Article 28 requirements for controller-to-processor personal data transfers within the European Economic Area (EEA). This blog post focuses on the SCCs developed for cross-border personal data transfers.

SCCs are one of the most commonly used mechanisms for transferring personal data out of the EEA to countries that, like the United States, are not considered to provide “adequate” data protection under the GDPR. The EC’s existing sets of SCCs (adopted in 2001, 2004 and 2010) have been in need of an update for some time. Issues with the old SCCs range from basic inaccuracies (for example, they still reference the now-defunct 1995 EU Data Protection Directive) to substantive problems affecting applicability, as they cannot readily be applied in many common transfer scenarios.

Dramatic Increase in the Number of Third-Party Vendor Incidents Emphasizes the Need for Better Vendor Due Diligence Processes

As reflected in our 2021 Data Security Incident Response Report, 2020 saw a sharp spike in the number of incidents involving vendors, which accounted for over 25 percent of the total incidents handled that year, and the trend is continuing well into 2021. This spike resulted from companies’ increased reliance on vendors to carry out tasks involving personal information, along with an increased focus by threat actors on such high-value targets. Vendor incidents involved entities in all industry segments but were particularly pronounced in the education and healthcare sectors.

Vendor incidents involved phishing schemes and inadvertent disclosures but primarily resulted from ransomware attacks. These attacks typically involve either malware that spreads from the vendor’s environment into the customer’s environment using the vendor’s own credentials, or the theft of customer data from the vendor’s environment – a continuation of the general rise in ransomware extortion that BakerHostetler saw throughout the year. In some ransomware attacks, if a vendor did not pay the ransom, the threat actor would contact the vendor’s clients to disclose what data was taken.

Congratulations to Sara Goldstein – a 2021 Law360 Rising Star!

Sara Goldstein has been named to Law360’s 2021 list of “Top Attorneys Under 40” for her career accomplishments in the Cybersecurity & Privacy practice area. Only 180 attorneys nationwide were recognized for what Law360 describes as “legal accomplishments that transcend their age.” Sara focuses her practice on legal issues related to data privacy and security and has advised clients on responding to hundreds of cybersecurity and data privacy incidents, including several of the largest data breaches to date.

Read the article (registration required).

Welcome to the Digital Transformation and Data Economy Newsletter – May 2021 Issue

At the heart of digital transformation is disruption. That doesn’t always mean the kind of “disruption” that is the darling of VCs and technology blogs. Sometimes, it just means chaos. The sudden influx of software and data issues to a company or organization that is accustomed to a highly regularized way of doing business can set companies scrambling.

Read more.

Everywhere Commerce: Top Strategies for Mitigating Risk

Recent changes to our way of living have made clear just how important it is for marketers and retailers to be thinking about the convergence of brand experience and commerce and about redefining how consumers shop and interact with brands online. Join us virtually from 11 a.m. to 3:30 p.m. ET on June 15-16 for an opportunity to connect with peers and learn from business leaders and regulators as our advertising, digital risk advisory and cybersecurity, and class action defense teams host a mix of engaging panels and breakout sessions covering the latest developments, enforcement trends and risk mitigation strategies. Approved for 8.0 hours of CLE credit.

Read more.

The Not-So-Hidden FTC Guidance on Organizational Use of Artificial Intelligence (AI), from Data Gathering Through Model Audits

Our last AI post on this blog, The New (if Decidedly Not ‘Final’) Frontier of Artificial Intelligence Regulation, touched on both the Federal Trade Commission’s (FTC) April 19, 2021, AI guidance and the European Commission’s proposed AI Regulation. The FTC’s 2021 guidance referenced, in large part, the FTC’s April 2020 post “Using Artificial Intelligence and Algorithms.” The recent FTC guidance also relied on older FTC work on AI, including a January 2016 report, “Big Data: A Tool for Inclusion or Exclusion?,” which in turn followed a September 15, 2014, workshop on the same topic. The Big Data workshop addressed data modeling, data mining and analytics, and gave us a prospective look at what would become an FTC strategy on AI.

The FTC’s guidance begins with the data, which the 2016 guidance on big data and subsequent AI development addresses most directly. The 2020 guidance then highlights important principles such as transparency, explainability, fairness, accuracy and accountability for organizations to consider. And the 2021 guidance elaborates on how consent, or opt-in, mechanisms work when an organization is gathering the data used for model development.

Taken together, the three sets of FTC guidance (2021, 2020 and 2016) provide insight into the FTC’s approach to organizational use of AI, which spans a vast portion of the data life cycle, including the creation, refinement, use and back-end auditing of AI. As a whole, the various pieces of FTC guidance also provide a multistep process for what the FTC appears to view as responsible AI use. In this post, we summarize our takeaways from the FTC’s AI guidance across the data life cycle to provide a practical approach to responsible AI deployment.

Data Gathering

– Evaluation of a data set should assess the quality of the data (including accuracy, completeness and representativeness), and if the data set is missing certain population data, the organization must take appropriate steps to address and remedy that issue (2016). A minimal representativeness check is sketched after this list.

– An organization must honor promises made to consumers and provide consumers with substantive information about the organization’s data practices when gathering information for AI purposes (2016). Any related opt-in mechanisms for such data gathering must operate as disclosed to consumers (2021).
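The FTC does not prescribe tooling for this kind of data set review, but the representativeness assessment described above can be approximated in a few lines of code. The following is a minimal sketch in Python using pandas; the benchmark proportions, column name and tolerance are hypothetical values chosen for illustration, not anything drawn from the FTC guidance.

import pandas as pd

# Hypothetical population benchmarks (e.g., census proportions) -- illustrative only.
BENCHMARK = {"group_a": 0.60, "group_b": 0.25, "group_c": 0.15}

def underrepresented_groups(df, column, tolerance=0.05):
    """Return groups whose share of the sample falls materially below the benchmark."""
    sample_shares = df[column].value_counts(normalize=True)
    flagged = {}
    for group, expected in BENCHMARK.items():
        observed = float(sample_shares.get(group, 0.0))
        if observed < expected - tolerance:
            flagged[group] = {"expected": expected, "observed": round(observed, 3)}
    return flagged

# Toy sample: group_c falls far below its benchmark share.
sample = pd.DataFrame({"demographic": ["group_a"] * 76 + ["group_b"] * 22 + ["group_c"] * 2})
print(underrepresented_groups(sample, "demographic"))
# {'group_c': {'expected': 0.15, 'observed': 0.02}}

A check like this only surfaces the gap; the remediation the FTC describes (collecting additional data or limiting the model’s use) remains a judgment call for the organization.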

Data Compilation

– An organization should recognize the data compilation step as a “descriptive activity,” which the FTC defines as a process aimed at uncovering and summarizing “patterns or features that exist in data sets” — a reference to data mining scholarship (2016) (note that the FTC’s referenced materials originally at mmds.org are now redirected).

– Compilation efforts should be organized around a life cycle model that provides for compilation and consolidation before moving on to data mining, analytics and use (2016).

– An organization must recognize that there may be uncorrected biases in underlying consumer data that will surface in a compilation; therefore, an organization should review data sets to ensure hidden biases are not creating unintended discriminatory impacts (2016).

– An organization should maintain reasonable security over consumer data (2016).

– If data are collected from individuals in a deceitful or otherwise inappropriate manner, the organization may need to delete the data (2021).

Model and AI Application Selection

– An organization should recognize the model and AI application selection step as a predictive activity, where an organization is using “statistical models to generate new data” — a reference to predictive analytics scholarship (2016).

– An organization must determine if a proposed data model or application properly accounts for biases (2016). Where there are shortcomings in the data model, the model’s use must be accordingly limited (2021).

– Organizations that build AI models may “not sell their big data analytics products to customers if they know or have reason to know that those customers will use the products for fraudulent or discriminatory purposes.” An organization must, therefore, evaluate potential limitations on the provision or use of AI applications to ensure there is a “permissible purpose” for the use of the application (2016).

– Finally, as a general rule, the FTC asserts that under the FTC Act, a practice is patently unfair if it causes more harm than good (2021).

Model Development

– Organizations must design models to account for data gaps (2021).

– Organizations must consider whether their reliance on particular AI models raises ethical or fairness concerns (2016).

– Organizations must consider the end uses of the models and cannot create, market or sell “insights” used for fraudulent or discriminatory purposes (2016).

Model Testing and Refinement

– Organizations must test the algorithm before use (2021). This testing should include an evaluation of AI outcomes (2020).

– Organizations must consider prediction accuracy when using “big data” (2016).

– Model evaluation must focus on both inputs and outcomes, and AI models may not discriminate against a protected class (2020).

– Input evaluation should include consideration of ethnically based factors or proxies for such factors.

– Outcome evaluation is critical for all models, including facially neutral models; see the four-fifths sketch after this list.

– Model evaluation should consider alternative models, as the FTC can challenge models if a less discriminatory alternative would achieve the same results (2020).

– If data are collected from individuals in a deceptive, unfair, or illegal manner, deletion of any AI models or algorithms developed from the data may also be required (2021).
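Neither the FTC guidance nor this summary dictates a specific outcome-evaluation method, but one common screen, borrowed from employment-law practice, is the four-fifths rule: flag any group whose favorable-outcome rate falls below 80 percent of the best-treated group’s rate. The sketch below is a minimal, hypothetical Python illustration; the group labels, predictions and threshold are assumptions, not FTC requirements.

from collections import defaultdict

def selection_rates(groups, predictions):
    """Favorable-outcome rate per group, where a prediction of 1 is favorable."""
    counts = defaultdict(lambda: [0, 0])  # group -> [favorable, total]
    for group, pred in zip(groups, predictions):
        counts[group][0] += pred
        counts[group][1] += 1
    return {g: favorable / total for g, (favorable, total) in counts.items()}

def four_fifths_flags(rates, threshold=0.8):
    """Flag groups whose rate falls below `threshold` of the highest group's rate."""
    best = max(rates.values())
    return {g: round(rate / best, 2) for g, rate in rates.items() if rate / best < threshold}

# Toy example: approvals for two (hypothetical) demographic groups.
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
predictions = [1, 1, 1, 0, 1, 0, 0, 0]
rates = selection_rates(groups, predictions)  # {'a': 0.75, 'b': 0.25}
print(four_fifths_flags(rates))  # {'b': 0.33} -- group b is flagged for review

Passing such a screen does not establish that a model is fair, and failing it does not prove discrimination; it simply identifies outcomes that warrant the closer evaluation the FTC describes.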

Front-End Consumer and User Disclosures

– Organizations must be transparent, must not mislead consumers “about the nature of the interaction” and must not use fake “engager profiles” as part of their AI services (2020).

– Organizations cannot exaggerate an AI model’s efficacy or misinform consumers about whether AI results are fair or unbiased. According to the FTC, deceptive AI statements are actionable (2021).

– If algorithms are used to assign scores to consumers, an organization must disclose key factors that affect the score, rank-ordered according to importance (2020); a minimal ranking sketch follows this list.

– Organizations providing certain types of reports through AI services must also provide notices to the users of such reports (2016).

– Organizations building AI models based on consumer data must, at least in some circumstances, allow consumers access to the information supporting the AI models (2016).
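The guidance summarized above requires a rank-ordered disclosure of key factors but does not specify how to compute the ranking. Permutation importance is one common, model-agnostic approach; the sketch below uses scikit-learn on a synthetic data set, and the feature names and model choice are purely hypothetical.

from sklearn.datasets import make_classification
from sklearn.inspection import permutation_importance
from sklearn.linear_model import LogisticRegression

# Hypothetical scoring model trained on synthetic consumer features.
feature_names = ["payment_history", "credit_utilization", "account_age", "recent_inquiries"]
X, y = make_classification(n_samples=500, n_features=4, n_informative=3,
                           n_redundant=1, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X, y)

# Permutation importance: how much accuracy is lost when each feature is shuffled?
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

# Key factors affecting the score, rank-ordered by importance (most important first).
for name, importance in sorted(zip(feature_names, result.importances_mean),
                               key=lambda item: item[1], reverse=True):
    print(f"{name}: {importance:.3f}")

Whatever method is used, the disclosed ranking should reflect the model actually deployed; a ranking computed on a stale or substitute model would arguably raise the same deception concerns the FTC describes.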

Back-End Consumer and User Disclosures

– Automated decisions based on third-party data may require the organization using the third-party data to provide the consumer with an “adverse action” notice (for example, under the Fair Credit Reporting Act, 15 U.S.C. § 1681 (rev. Sept. 2018), if such decisions deny an applicant an apartment or charge them a higher rent) (2020).

– General “you don’t meet our criteria” disclosures are not sufficient. The FTC expects end users to know what specific data are used in the AI model and how the data are used by the AI model to make a decision (2020).

– Organizations that change specific terms of deals based on automated systems must disclose the changes and reasoning to consumers (2020).

– Organizations should provide consumers with an opportunity to amend or supplement information used to make decisions about them (2020) and allow consumers to correct errors or inaccuracies in their personal information (2016).

Model Deployment

– When deploying models, organizations must confirm that the AI models have been validated to ensure they work as intended and do not illegally discriminate (2020).

– Organizations must carefully evaluate and select an appropriate AI accountability mechanism, transparency framework and/or independent standard, and implement as applicable (2020).

– An organization should determine the fairness of an AI model by examining whether the particular model causes, or is likely to cause, substantial harm to consumers that is not reasonably avoidable and not outweighed by countervailing benefits (2021).

Model Audit

– Organizations must test AI models periodically to revalidate that they function as intended (2020) and to ensure a lack of discriminatory effects (2021); a minimal revalidation sketch appears at the end of this section.

– Organizations must account for compliance, ethics, fairness and equality when using AI models, taking into account four key questions (2016; 2020):

– How representative is the data set?
– Does the AI model account for biases?
– How accurate are the AI predictions?
– Does the reliance on the data set raise ethical or fairness concerns?

– Organizations must embrace transparency and independence, which can be achieved in part through the following (2021):

– Using independent third-party audit processes and auditors that are not influenced by the aims of the AI model’s developers.
– Ensuring data sets and AI source code are open to external inspection.
– Applying appropriate recognized AI transparency frameworks, accountability mechanisms and independent standards.
– Publishing the results of third-party AI audits.

– Organizations remain accountable throughout the AI data life cycle under the FTC’s recommendations for AI transparency and independence (2021).
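Pulling the audit themes together, the periodic revalidation described in this section can be reduced to a scheduled check that recomputes accuracy and an outcome-rate screen against the values recorded at deployment. The following minimal Python sketch assumes a scikit-learn-style model with a predict method; the thresholds and the choice of metrics are hypothetical illustrations, not FTC-prescribed tests.

from collections import defaultdict

def revalidate(model, X, y, groups, baseline_accuracy,
               max_accuracy_drop=0.05, min_rate_ratio=0.8):
    """Periodic audit: has accuracy drifted, and do outcome rates still pass a fairness screen?"""
    predictions = model.predict(X)
    accuracy = sum(int(p == t) for p, t in zip(predictions, y)) / len(y)

    # Favorable-outcome rate per group (a prediction of 1 is favorable).
    counts = defaultdict(lambda: [0, 0])  # group -> [favorable, total]
    for group, pred in zip(groups, predictions):
        counts[group][0] += int(pred)
        counts[group][1] += 1
    rates = [favorable / total for favorable, total in counts.values()]
    ratio = min(rates) / max(rates) if max(rates) > 0 else 1.0

    findings = []
    if accuracy < baseline_accuracy - max_accuracy_drop:
        findings.append(f"accuracy drifted: {accuracy:.3f} vs. baseline {baseline_accuracy:.3f}")
    if ratio < min_rate_ratio:
        findings.append(f"fairness screen failed: outcome-rate ratio {ratio:.2f}")
    return findings  # an empty list means the model revalidated cleanly

Running a check like this on a schedule, and retaining the resulting findings, also produces the kind of audit trail that supports the transparency and accountability measures listed above.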

Podcast: AD-ttorneys@law: The Future of Consumer Redress after Supremes Rule in AMG Capital Management v. FTC

In the highly anticipated Supreme Court decision in AMG Capital Management v. FTC, the court put the brakes on consumer redress, limiting the commission’s ability to protect consumers from unfair or deceptive practices in the marketplace. BakerHostetler partner Randy Shaheen discusses the ramifications.

Questions and Comments: rshaheen@bakerlaw.com

Listen to the episode.


Incident Response and Forensic Challenges in a Work-from-Home World

Our 2021 Data Security Incident Response Report discussed the challenges that organizations are facing with forensic investigations and ransomware recovery in the work-from-home world. Some of the greatest difficulties our clients encountered in the past year involved key components of incident response — communicating with employees, resetting remote user passwords, and deploying endpoint detection and response (EDR) tools at scale. There are steps that organizations can (and should) take to put themselves in a position to respond to incidents efficiently and effectively in a remote-work paradigm.

First, as part of your incident response plan (IRP), identify the partners you are going to engage to help you respond to an incident. The top three are your legal counsel, forensic firm, and recovery support or “helping hands” provider. Your carrier likely has an approved panel of providers for some or all of these services. Research who they are, include their contact information in your IRP, and maintain a copy of your IRP off your network.

Executive Order on Improving the Nation’s Cybersecurity: What Does It Mean for Business?

In response to recent highly publicized cybersecurity incidents, President Biden signed an Executive Order on May 12, 2021, that contains eight key initiatives aimed at modernizing the federal government’s response to cyberattacks.

Although the initiatives outlined in the Executive Order apply only to federal contractors (many of which already comply with agency-specific cybersecurity rules), all companies and organizations should pay attention to them, as they could be used as models for other laws and as the “baseline” for what security measures businesses will be expected to implement.

Removing Barriers to Sharing Threat Information / National Security Systems

Overview: The Executive Order calls for updates to federal information technology and operational technology service contract terms to allow federal contractors to share threat intelligence and information about cybersecurity incidents with different federal agencies. Specifically, the Executive Order is asking that such contracts be designed to ensure that these contractors “collect and preserve data, information, and reporting relevant to cybersecurity event prevention, detection, response, and investigation on all information systems over which they have control, including systems operated on behalf of agencies.”
