The government’s announcement of renewed emphasis on cybersecurity enforcement has spawned recent million-dollar enforcement actions. Continued government attention on cybersecurity promises a treacherous enforcement environment in 2023 and beyond.

Several recent government initiatives have focused on cybersecurity enforcement. In October 2021, the Department of Justice announced a Civil Cyber-Fraud Initiative to use the False Claims Act (“FCA”) to hold companies and individuals accountable for: 1) deficient cybersecurity; 2) misrepresentations of cybersecurity; and/or 3) insufficient monitoring or reporting of cybersecurity incidents. The Cyber Incident Reporting for Critical Infrastructure Act of 2022 (“CIRCIA”) now requires the Cybersecurity and Infrastructure Security Agency to develop and implement regulations requiring covered entities to report covered cybersecurity incidents. The FTC Safeguards Rule requires non-banking financial institutions, including mortgage brokers and automobile dealerships, to develop, implement, and maintain a comprehensive cybersecurity program to protect customer information. Notably, the deadline for compliance with the FTC Safeguards Rule is now June 2023.

In July of 2022, as part of the Civil Cyber-Fraud Initiative, the Department of Justice announced a $9 million Aerojet settlement to resolve cybersecurity fraud claims brought pursuant to the FCA by a whistleblower who was the former Senior Director of Cyber Security, Compliance, and Controls for Aerojet. The whistleblower claimed that Aerojet’s contracts with the government mandated specific cybersecurity standards, and despite knowing that its systems did not meet these standards, Aerojet pursued and fraudulently obtained the contracts.

The Aerojet qui tam and resulting settlement forecast how this enforcement mechanism might play out in the cybersecurity space. The government has now specifically promised that when contractual cybersecurity standards are not satisfied, it will use the FCA to pursue cybersecurity fraud claims. And, as the deadline for compliance with the FTC Safeguards Rule quickly approaches, companies must be prepared for certification requests to incorporate various cybersecurity requirements, including compliance with the FTC Safeguards Rule. To avoid potential FCA liability, companies and individuals need to be fully aware of any cybersecurity requirements in government contracts, including how compliance is certified and how to monitor and report any cybersecurity incidents. Often, organizations are not aware of what they have agreed to contractually regarding cybersecurity or privacy. A company employee may receive an email link from a customer and merely click boxes certifying compliance in order to earn the work, without ever reading the terms to which they’re binding the company.

Companies may not be prepared for the consequences of cybersecurity requirements and certifications in contracts—but they should be.  This year promises to be an even more active year for cybersecurity enforcement. 

For more information and other updates and alerts regarding privacy law developments, subscribe to Bradley’s privacy blog Online and On Point.

While the federal government has relied on the False Claims Act (FCA) to combat fraud across a range of sectors since 1986, in 2022, the Department of Justice set its sights on new enforcement priorities, including fraud in the cybersecurity realm. In October 2021, DOJ announced its Civil Cyber-Fraud Initiative, dedicated to using the FCA to combat new and emerging cyber threats. The initiative utilizes the tools contained in the FCA to hold government contractors liable for providing deficient cybersecurity products or services. Additionally, the initiative seeks to root out misrepresentations of cybersecurity practices or intentional failures to report cyber incidents. The government obtained its first settlement under this initiative in 2022, collecting $930,000. Companies and individuals should expect that this is just the beginning for FCA cases related to cybersecurity. To keep you apprised of the current enforcement trends and the status of the law, Bradley’s Government Enforcement and Investigations Practice Group is pleased to present the False Claims Act: 2022 Year in Review, our 11th annual review of significant FCA cases, developments and trends.

Under the European Union’s General Data Protection Regulation (GDPR), individual data subjects have the right to request that the data controller share information regarding the data subject’s personal information. This includes the right to know the “recipients or categories of recipients” to whom the data subject’s personal data has been disclosed. To date, data controllers have defaulted to disclosing the categories of recipients only, rather than the specific recipients by name. But that’s about to change.

On January 12, 2023, the Court of Justice of the European Union (CJEU) ruled that data controllers must specifically identify the recipients, rather than solely the categories of recipients, in response to a data subject access request. Although the ruling specifically addressed data subject access requests pursuant to Article 15 (data subject access rights) of GDPR, the decision also has significant implications for required disclosures at the point of collection under Article 13.  

Background

This case began when an Austrian individual, RW, submitted a data subject access request to Österreichische Post AG (OP), an Austrian postal service provider, seeking the identity of any recipients of his data. Per Article 15 of GDPR, data subjects may request “the recipients or categories of recipient to whom the personal data have been or will be disclosed.” In its response to RW, OP stated that it shares RW’s personal information with trading partners for marketing purposes but refused to identify the specific recipients. RW filed suit, seeking the identity of the recipients, but the case was initially dismissed on the basis that GDPR “gives the controller the option of informing the data subject only of the categories of recipient, without having to identify by name the specific recipients to whom personal data are transferred.” RW appealed the decision to the Austrian Supreme Court (Oberster Gerichtshof), which referred the question to the CJEU for a preliminary ruling.

CJEU Adopts Expansive Interpretation of GDPR

In a decision with widespread ramifications, the CJEU ruled that controllers must reveal the specific identities of data recipients to the data subject in response to a data subject access request. Revealing the categories of recipients alone is only sufficient if revealing the specific identity of recipients is impossible. In support of its decision, the CJEU emphasized that, in light of the GDPR’s overall goals, the right of access requires transparency in all personal data processing. The CJEU noted that access to the identity of recipients is necessary in order for the data subject to exercise data subject rights under GDPR (such as the rights to rectification, erasure, and restriction of processing).
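To make the distinction concrete, consider how a controller might structure the recipient portion of an Article 15 response under this ruling: recipients named by default, with categories used only as a documented fallback. This is a minimal sketch; the type and field names are illustrative assumptions, not drawn from the GDPR or the decision.

```typescript
// Illustrative sketch of an Article 15 recipient disclosure after the
// CJEU ruling. All names here are hypothetical, not regulatory terms.

interface RecipientDisclosure {
  namedRecipients: string[]; // specific identities: the default after the ruling
  categoriesOnly?: {
    categories: string[]; // e.g., "advertising partners"
    impossibilityJustification: string; // documented reason identities are unknown
  };
}

function buildRecipientDisclosure(
  knownRecipients: string[],
  futureCategories: string[],
  justification?: string
): RecipientDisclosure {
  const disclosure: RecipientDisclosure = { namedRecipients: knownRecipients };
  // Categories alone are permissible only where naming recipients is
  // impossible, e.g., recipients of a future disclosure are not yet known.
  if (futureCategories.length > 0 && justification) {
    disclosure.categoriesOnly = {
      categories: futureCategories,
      impossibilityJustification: justification,
    };
  }
  return disclosure;
}
```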

Takeaways

This ruling has significant implications when it comes to both data subject access requests and point-of-collection disclosures:

  • Data subject access requests – At a minimum, the CJEU decision now requires controllers to disclose the specific identities of data recipients in response to data subject access requests. If a controller determines that it is impossible to share specific identities (specifically because it does not yet know the identities of all recipients), the controller should clearly document the reasoning by which it made the impossibility determination – and it had better actually be impossible. In the context of GDPR’s Article 14.5(b) (providing disclosures where data was collected from a third party), the European Data Protection Board (EDPB) stated that “something is either impossible or it is not; there are no degrees of impossibility.” And it is likely the EDPB will adopt a similar attitude with regard to data subject access requests.
  • Disclosures at point of collection – In addition to disclosing specific recipient identities in response to data subject access requests, controllers should consider revisiting point-of-collection disclosures to EU data subjects in general. Like GDPR’s Article 15 (data subject access rights), Article 13 (point-of-collection disclosures) has language requiring the disclosure of “the recipients or categories of recipients of the personal data, if any.” While the CJEU has yet to extend its reasoning to point-of-collection disclosures, if controllers already know the identities of data recipients, they should consider disclosing those identities at the point of collection as well.

As a reminder, data subject access requests are subject to the threshold inquiry of whether the request is manifestly unfounded or excessive, so the transparency required under the GDPR is not without its limits. Also, controllers are not required to disclose the identity of specific recipients if it would be impossible — specifically, in the CJEU’s view, if the identity is not yet known. An impossibility determination should be used sparingly (and documented thoroughly, if used) in light of the EDPB’s findings on impossibility. At the end of the day, companies would be well-advised to err on the side of honoring reasonable data subject access requests where possible, as the costs of defending litigation and/or a regulatory investigation would dwarf the administrative costs of responding to these requests.

For more information and other updates and alerts regarding privacy law developments, subscribe to Bradley’s privacy blog Online and On Point.

On January 26, 2023, the U.S. National Institute of Standards and Technology (NIST) released the Artificial Intelligence (AI) Risk Management Framework (AI Risk Management Framework 1.0), a voluntary guidance document for managing and mitigating the risks of designing, developing, deploying, and using AI products and services. NIST also released a companion playbook for navigating the framework, a roadmap for future work, and mapping of the framework to other standards and principles, both at home and abroad. This guidance, developed in a consensus-based approach across a broad cross section of stakeholders, offers an essential foundation and important building block toward responsible AI governance.

The AI Framework

We stand at a crossroads as case law and regulatory law struggle to keep up with technology. As regulators consider policy solutions and levers to regulate AI risks and trustworthiness, many technology companies have adopted self-governing ethical principles and standards surrounding the development and use of artificial and augmented intelligence technologies. In the absence of clear legal rules, these internal expectations guide organizational actions and serve to reduce the risk of legal liability and negative reputational impact.

Over the past 18 months, NIST developed the AI Risk Management Framework with input from and in collaboration with the private and public sector. The framework takes a major step toward public-private collaboration and consensus through a structured yet flexible approach allowing organizations to anticipate and introduce accountability structures. The first half of the AI Risk Management Framework outlines principles for trustworthy AI, and the remainder describes how organizations can address these in practice by applying the core functions of creating a culture of risk management (governance), identifying risks and context (map), assessing and tracking risks (measure), and prioritizing risk based on impact (manage). NIST plans to work with the AI community to update the framework periodically.
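As an illustration only, an organization might mirror the four core functions in an internal AI risk register along the following lines. The structure and names are this sketch’s assumptions, not part of NIST’s guidance.

```typescript
// Hypothetical AI risk register entry organized around the framework's
// four core functions; all field names are illustrative.

type CoreFunction = "govern" | "map" | "measure" | "manage";

interface AiRiskEntry {
  system: string;      // the AI system under review
  context: string;     // map: deployment context and affected parties
  metric: string;      // measure: how the risk is assessed and tracked
  mitigation: string;  // manage: prioritized response based on impact
  owner: string;       // govern: accountable role within the risk culture
  functions: CoreFunction[];
}

const exampleEntry: AiRiskEntry = {
  system: "resume-screening model",
  context: "hiring recommendations affecting job applicants",
  metric: "quarterly disparate-impact testing across applicant groups",
  mitigation: "human review of all adverse recommendations",
  owner: "AI governance committee",
  functions: ["govern", "map", "measure", "manage"],
};

console.log(exampleEntry.functions.join(", "));
```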

Specifically, the framework offers noteworthy contributions on the pathway toward governable and accountable AI systems: 

  • Moves beyond technical standards to consider social and professional responsibilities in making AI risk determinations
  • Establishes trust principles, namely that responsible AI systems are valid and reliable; safe, secure and resilient; accountable and transparent; explainable and interpretable; privacy-enhanced; and fair, with harmful bias managed
  • Emphasizes context (e.g., industry, sector, business purposes, technology assessments) in critically analyzing the risks and potential impacts of particular use cases
  • Provides steps for managing risks via governance functions; mapping broad perspectives and interdependencies to testing, evaluation, verification, and validation within a defined business case; measuring AI risks and impacts; and managing resources to mitigate negative impacts
  • Rationalizes the field so that organizations of all sizes can adopt recognized practices and scale as AI technology and regulations develop

The Playbook

This companion tool provides actionable strategies for the activities in the core framework. As with NIST’s Cybersecurity and Privacy Frameworks, the AI Risk Management Framework is expected to evolve with stakeholder input. NIST expects the AI community will build out these strategies for a dynamic playbook and will update the playbook in Spring 2023 with any comments received by the end of February.

The Roadmap

The roadmap for the NIST AI Risk Management Framework identifies the priorities and key activities that NIST and other organizations could undertake to advance the state of AI trustworthiness. Importantly, NIST intends to grapple with one of the more complex issues in implementing AI frameworks, namely balancing the trade-offs among the trust principles in light of the use cases and values at play. NIST seeks to showcase profiles and case studies that highlight particular use cases and organizational challenges. NIST also will work across the federal government and on the international stage to identify and align standards development.

Mapping to Other Standards

The AI Risk Management Framework includes a map that crosswalks AI principles to global standards, such as the proposed European Union Artificial Intelligence Act, the Organisation for Economic Co-operation and Development (OECD) Recommendation on AI, Executive Order 13960, and the Biden administration’s Blueprint for an AI Bill of Rights. The crosswalk enables organizations to readily leverage existing frameworks and principles.

Conclusion

AI is a rapidly developing field that offers many potential benefits but poses novel challenges and risks. With the launch of the framework, NIST also published supportive stakeholder perspectives from business and professional associations, technology companies, and think tanks such as the U.S. Chamber of Commerce, the Bipartisan Policy Center, and the Federation of American Scientists. The NIST AI Risk Management Framework’s foundational approach, designed to evolve as our understanding of the technology and its impact evolves, provides flexibility and a starting point to help regulators improve policy options while avoiding a more prescriptive approach that may stifle innovation. The AI Risk Management Framework and its accompanying resources articulate expectations and will help AI stakeholders implement best practices for managing the opportunities, responsibilities, and challenges of artificial intelligence technologies.

For more information and other updates and alerts regarding privacy law developments, subscribe to Bradley’s privacy blog Online and On Point.

Data Privacy Day, annually celebrated on January 28, is the new year nudge we need to prioritize the safety of our personal information. The digital world will continue to evolve, and the line between our online and offline lives will continue to blur. As we continue to rely on digital technology to manage our personal and professional lives, we must rethink what we share, when we share, how we share, and who we share it with.

Grab your coffee and join us for a morning Q&A with our Bradley Cybersecurity and Privacy team to celebrate Data Privacy Day (a day early!). We will be available from 10:00 to 10:50 a.m. ET on Friday, January 27 for you to drop in and ask your toughest privacy questions. Please register here and we hope to see you there!

Over the past few decades, technology has taken a fascinating turn. One can use a retinal scan to expedite the airport security process. Need to clock in for work? This can be done with the scan of a finger. We even have the convenience of unlocking our iPhones with a simple, quick gaze into the phone’s front camera. While the use of this technology has certainly made things easier, such use across various industries has led to concerns about individual privacy.

In response to these concerns, the Mississippi Legislature, on January 12, 2023, proposed House Bill 467, the Biometric Identifiers Privacy Act. The proposed legislation, among other things, seeks to require private entities (1) to be forthcoming about their collection and storage of individuals’ biometric identifiers, and (2) to develop a policy that establishes a retention schedule and guidelines for destroying the biometric identifiers of individuals.

What are biometric identifiers?

Inquiring minds may be wondering, what are biometric identifiers? Simply put, and pursuant to the act, biometric identifiers are defined as “the data of an individual generated by the individual’s unique biological characteristics.” Biometric identifiers may include, but are not limited to:

  • Faceprints
  • Fingerprints
  • Voiceprints
  • Retina or iris images

The act excludes the following from the definition of biometric identifiers:

  • A writing sample or written signature
  • A photograph or video, except for data collected from the biological characteristics of a person depicted in the photograph or video
  • A human biological sample used for valid scientific testing or screening
  • Demographic data
  • A physical description, including height, weight, hair color, eye color, or a tattoo description
  • Donated body parts that have been obtained or stored by a federal agency
  • Information collected, used, or stored for purposes under the federal Health Insurance Portability and Accountability Act of 1996 (HIPAA)
  • Images or film of the human anatomy used to diagnose and treat medical conditions or to further validate scientific testing
  • Information collected, used, or disclosed for human subject research that is conducted in accordance with the federal policy for the protection of human subjects

If passed, who will the act apply to?

The act will apply to private entities only. The act defines a private entity as “any individual acting in a commercial context, partnership, corporation, limited liability company, association, or other group, however organized.” The act will not apply to state or local government agencies or entities.

What will a Mississippi private entity need to do to ensure it is in compliance with the act?

If enacted, Mississippi private entities in possession of biometric identifiers will be required to, among other things:

  • Inform the individuals in question (or their legal representatives), in writing, that the entity is collecting or storing their biometric identifier(s)
  • Inform the individual, in writing, of the purpose of the collection, storage, and/or use of their biometric identifier(s) and the length of time for which the identifier(s) will be collected, stored, and/or used
  • Obtain a written release executed by the subject (or legal representative) of the biometric identifier
  • Develop a publicly accessible written policy that establishes a retention schedule and guidelines for permanently destroying a biometric identifier
    • The entity is not required to make its policy publicly accessible if the policy (1) applies only to employees of that private entity, and (2) is used solely within the private entity for operation of the private entity.
    • Additionally, the entity must destroy any biometric identifier of an individual in its possession on the earliest of (1) the date on which the purpose of collecting or obtaining the biometric identifier has been satisfied; (2) one year after the individual’s last interaction with the private entity; or (3) 30 days after receiving an individual’s (or legal representative’s) request to delete the biometric identifiers (a computation sketched below).
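To make the “earliest of” rule concrete, the destruction deadline can be computed as the minimum of the applicable dates. This is a minimal sketch; the function and parameter names are hypothetical illustrations, not language from the act.

```typescript
// Sketch of the act's "earliest of" destruction deadline; names are
// illustrative. Dates that do not apply are passed as null.

function destructionDeadline(
  purposeSatisfied: Date | null, // (1) purpose of collection satisfied
  lastInteraction: Date,         // (2) individual's last interaction
  deletionRequest: Date | null   // (3) deletion request received
): Date {
  const MS_PER_DAY = 24 * 60 * 60 * 1000;
  const candidates: Date[] = [];

  if (purposeSatisfied) candidates.push(purposeSatisfied);

  // One year after the individual's last interaction with the entity.
  const oneYearAfter = new Date(lastInteraction);
  oneYearAfter.setFullYear(oneYearAfter.getFullYear() + 1);
  candidates.push(oneYearAfter);

  // 30 days after a deletion request from the individual or representative.
  if (deletionRequest) {
    candidates.push(new Date(deletionRequest.getTime() + 30 * MS_PER_DAY));
  }

  // The earliest applicable date controls.
  return candidates.reduce((a, b) => (a < b ? a : b));
}
```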

Furthermore, if an individual (or legal representative) requests that the private entity disclose any biometric identifiers that the private entity collected, the private entity must do so free of charge.

Of course, nothing in life is free. Such “free” disclosure is specific to entities that (1) do business in Mississippi; (2) are for profit; (3) collect consumers’ biometric identifiers or have such identifiers collected on their behalf; and (4) obtained revenue exceeding $10 million in the preceding calendar year.

What does this mean for Mississippi private entities?

Let’s face it, most people are sick and tired of having to remember passwords and verification questions for every system or database they must access on a regular basis. Because of this, people may prefer the collection, storage, and/or use of their biometric identifiers in exchange for convenience and easy access. However, use of such biometric identifiers will require entities to comply with applicable state and federal laws. To avoid any civil liability for the failure to protect an individual’s biometric identifiers under Mississippi law, Mississippi private entities should:

  • Prepare policies that are in compliance with the act, and make such policies available to individuals whose biometric data is being obtained. Specifically, draft a policy that details the entity’s retention plan for the collection and storage of biometric identifiers, as well as guidelines for destroying the biometric identifiers. Compliance with such policies is key.
  • Inform individuals, in writing, that you are collecting their biometric data. A private entity should also inform the individual, in writing, of the specific purpose and length of term for collecting the biometric data.
  • Obtain written releases from individuals whose biometric identifiers are being collected, stored, and/or used.
  • Use strong cybersecurity software and processes that meet a reasonable standard of care within the private entity’s industry to protect the biometric identifiers of individuals.
  • Destroy the biometric identifiers upon request by the individual.
  • Train management on the policies and the importance of protecting biometric identifiers so they can answer individuals’ questions and alleviate concerns regarding the collection of their biometric identifiers.

A failure to comply with the act will have consequences. The act creates a private right of action against an offending entity. If successful in proving their claims, individuals may recover the greater of $1,000 or actual damages for a negligent violation of the act, or the greater of $5,000 or actual damages for an intentional or reckless violation, plus reasonable attorneys’ fees and costs and such other relief as a court deems appropriate.

If passed, the act will take effect on July 1, 2023. For more information and other updates and alerts regarding privacy law developments, subscribe to Bradley’s privacy blog Online and On Point.

The case of Popa v. Harriet Carter Gifts, Inc. “began with a quest for pet stairs.” Plaintiff Ashley Popa searched Harriet Carter Gifts’ website, added pet stairs to her cart, but never completed the purchase. During her “quest,” Popa’s information was collected not only by Harriet Carter Gifts, but also by a third-party marketing company, NaviStone, using cookie technology. In an opinion with potentially far-reaching ramifications, the Third Circuit held that NaviStone’s collection of Popa’s information could constitute an unlawful interception under Pennsylvania’s Wiretapping and Electronic Surveillance Control Act (WESCA). This decision follows the Ninth Circuit’s lead in reviving similar claims brought pursuant to the California Invasion of Privacy Act that were initially dismissed on the basis that retroactive consent was not valid. In both cases, however, there remains a question as to whether the consumers impliedly consented to the collection of their browsing information as a result of disclosures in the website operators’ privacy policies.

Background of Litigation

Like many other states’ wiretapping laws, WESCA “prohibits the interception of wire, electronic, or oral communications, which means it is unlawful to acquire those communications using a device.” It also provides a private right of action for individuals to bring suit against parties for such unlawful interception.

While Popa was on the Harriet Carter website browsing for pet stairs, Popa’s browser communicated with servers operated by Harriet Carter Gifts. And, as part of the online marketing services NaviStone provides to Harriet Carter Gifts, Popa’s browser also communicated with servers operated by NaviStone. This interaction allowed NaviStone to place a tracking cookie on Popa’s device. The cookie, in turn, allowed NaviStone to collect information about how Popa interacted with the Harriet Carter website and to show Popa personalized advertisements across the web.
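For context, third-party tracking of this kind generally works by having the retailer’s page load the marketer’s script, which then sets a cookie identifying the visitor and reports browsing events from the visitor’s browser to the marketer’s own servers. The simplified sketch below illustrates the mechanism; the endpoint, cookie name, and payload are invented for illustration and are not NaviStone’s actual code.

```typescript
// Simplified illustration of third-party tracking via an embedded script.
// All identifiers here (cookie name, endpoint, event names) are hypothetical.

function getOrSetVisitorId(): string {
  const match = document.cookie.match(/(?:^|; )visitor_id=([^;]+)/);
  if (match) return match[1];
  const id = crypto.randomUUID();
  // Cookie set on the visitor's device, readable on subsequent visits.
  document.cookie = `visitor_id=${id}; max-age=31536000; path=/`;
  return id;
}

function reportEvent(eventType: string): void {
  // The visitor's browser transmits browsing activity to the marketer's
  // own server: the "communication" at issue in the interception analysis.
  void fetch("https://tracker.example.com/collect", {
    method: "POST",
    body: JSON.stringify({
      visitorId: getOrSetVisitorId(),
      eventType, // e.g., "product_view", "add_to_cart"
      page: location.href,
      timestamp: Date.now(),
    }),
  });
}

document.addEventListener("click", () => reportEvent("click"));
```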

Popa filed a class action lawsuit against Harriet Carter and NaviStone, claiming that Harriet Carter and NaviStone used tracking technology without her knowledge or consent in violation of WESCA. The district court granted summary judgment in favor of Harriet Carter and NaviStone on the WESCA violation claim, holding that NaviStone could not have “intercepted” Popa’s communications because NaviStone was a “party” to the “electronic conversation,” or alternatively, that if any interception did occur, such interception occurred outside Pennsylvania’s borders, and thus WESCA did not apply.

On appeal, the U.S. Court of Appeals for the Third Circuit ruled Harriet Carter and NaviStone could be held liable for violating WESCA if they deployed software and tracking cookies to collect data about a website visitor’s behavior without the visitor’s consent. While questions remain regarding whether Harriet Carter’s website had a posted privacy policy during Popa’s visit, and whether that privacy policy was sufficient to imply consent, numerous class actions have already been filed under similar theories. Because the district court did not address the implied consent argument in its summary judgment order, the Third Circuit declined to address it in the first instance and instead remanded the case to the trial court for further proceedings. The question will now become whether, under Pennsylvania law, Popa “knew or should have known[] that the conversation was being recorded” as a result of the website’s privacy policy such that she impliedly consented to the recording.

Takeaways

This ruling highlights the importance of obtaining consent from website visitors before collecting their data. It also underscores the need for retailers and digital marketers to be aware of and comply with state and federal laws related to electronic communications and data collection.

The decision also has practical implications for companies and digital marketing service providers engaged in the “passive collection” of consumer data in which background technologies collect a consumer’s information without the consumer affirmatively providing that information. The Third Circuit’s broad interpretation of the “interception” of a communication and narrow interpretation of the exceptions to liability under WESCA may increase the risks to companies and service providers that use these tracking technologies in Pennsylvania or states with similar wiretapping or privacy laws.

To mitigate these risks, companies should carefully review their online marketing practices, website operations, privacy disclosures, and consent mechanisms to ensure compliance with state and federal laws related to electronic communications, data privacy, and data collection. Providing clear and transparent privacy notices that disclose how these background communications work and who receives them may help to establish an implied consent defense to WESCA claims. However, the exact elements or standards required to establish implied consent are currently unclear. By contrast, prior express consent from all parties is a clear defense to WESCA and other state wiretap claims.

President Biden issued Executive Order (EO) 14083 on September 15, 2022, establishing five factors for reviews by the Committee on Foreign Investment in the U.S. (CFIUS), and areas of heightened scrutiny for transactions impacting the U.S. supply chain, cybersecurity, sensitive personal data, agricultural production, and Section 1758 technologies.

Driven by eroding economic and geopolitical conditions, the U.S. and its primary trading partners have continued to expand the regulation of foreign direct investment. EO 14083 and an earlier EO in May both invoked the Defense Production Act (DPA) with resulting foreign direct investment implications.

As background, businesses involved in the U.S. defense industrial base have long been protected from adverse foreign direct investment through CFIUS review – but changes to U.S. laws and regulations on foreign direct investment have expanded the protections beyond the traditional U.S. defense industry. The Foreign Investment Risk Review Modernization Act (FIRRMA) expanded CFIUS to protect businesses engaged in critical technologies, critical infrastructure, and sensitive personal data. FIRRMA was intended to close gaps in national security review and resulted in expanded CFIUS coverage and powers. Subsequent changes to U.S. foreign direct investment regulations have further impacted U.S. businesses engaged in critical technologies, critical infrastructure, and sensitive personal data.

Factors for Review

EO 14083 further advances U.S. foreign direct investment protections by requiring that CFIUS specifically consider five factors in its national security reviews, namely impacts to U.S.:

  • Supply chains, including but not limited to the defense industrial base, derived in part from EO 14017 regarding America’s Supply Chains
  • Cybersecurity defenses and protections, both commercial and governmental
  • Sensitive personal data of U.S. citizens, including access by foreign actors
  • Industry segments from cumulative foreign investments or investment trends
  • Technological leadership in microelectronics, artificial intelligence, biotechnology and biomanufacturing, quantum computing, advanced clean energy, climate adaptation technologies, critical rare earth materials, and – significantly – “elements of the agriculture industrial base that have implications for food security,” based on Export Control Reform Act (ECRA) / FIRRMA Section 1758 covered technologies

New Areas of Impact

Foreign investment trends in U.S. industry segments

EO 14083 references industries and industry segments that “are fundamental to U.S. technological leadership and therefore national security.” Based on guidance in the EO, CFIUS will now be required to assess a covered transaction in the context of other investments in the relevant industry or industry segment. In doing so, CFIUS will likely review proposed transactions in the context of previous cleared and proposed transactions in the same industry segment, in order to determine if collectively the transactions could cumulatively result in the transfer of Section 1758 technologies in key industries or otherwise harm national security. As a result, parties considering a transaction, like CFIUS, will need to take industry trends and transactions into account – not just the specific proposed transaction.

Cybersecurity defenses and protections

The White House has previously emphasized the importance of cybersecurity. And FIRRMA identified “cybersecurity vulnerabilities” as a relevant factor for CFIUS. Now EO 14083 more specifically identifies the nature of vulnerabilities that CFIUS should guard against. Some of these are familiar themes: critical infrastructure (already a prong for CFIUS jurisdiction); the defense industrial base; national security priorities (from EO 14028); and critical energy infrastructure, such as smart grids (similar to the Department of Energy’s “100-day plan”). But the order specifies two new types of intrusions, which may echo news items from recent years. First, CFIUS should consider transactions’ effects in giving a foreign person capability to affect the “confidentiality, integrity, or availability of United States communications.” Second, it should try to foresee activity designed to “interfere with United States elections.” It remains to be seen how broadly those factors could reach. Still, because cybersecurity was already a factor under FIRRMA, it is likely that this specific development represents a refinement, not a sharp change of direction.

Sensitive personal data of U.S. citizens

Under FIRRMA, CFIUS should consider exposure of “personally identifiable information.” But EO 14083 recognizes that “personally identifiable” is a moving target. New technology and more data allow previously anonymous datasets to be de-anonymized. The order also broadens the historical focus on individuals. Instead, it talks about exploiting data to target “individuals or groups” — it even loosens the kind of data to include “data on sub-populations.” If your company keeps data on U.S. individuals or “sub-populations” — however well anonymized — then expect that CFIUS will consider whether your data could be used (including in combination with other data) to undermine national security. Combined with the refined specification of cybersecurity vulnerabilities, this could lead to some previously unexpected decisions by CFIUS.

Agriculture Industrial Base

White House guidance notes the EO does not expand CFIUS jurisdiction and should be read in the context of existing authority. However, the EO expressly included “elements of the agriculture industrial base that have implications for food security” – a sector not otherwise expressly addressed by CFIUS regulation or FIRRMA. Given that CFIUS has already been focused on most of the other factors highlighted in the EO, perhaps the most significant impact of EO 14083 is its implications for the U.S. agriculture industry. It is not surprising that there are national security implications to U.S. food production and supply, particularly based upon recent past shortages and projections of further shortages in the future. What is surprising is that FIRRMA provided for the application of CFIUS to food production via the DPA – as invoked by the recent EO. Nonetheless, the EO’s specific reference to the “agriculture industrial base” is likely best assessed in the context of pending legislation proposing to address foreign investment in U.S. agriculture.

The proposed Foreign Adversary Risk Management Act (the “FARM” Act) would expand the CFIUS definition of “critical infrastructure” to include agricultural production facilities and real estate, i.e., the U.S. agricultural supply chain. Similar bills, such as the Food is National Security Act, have been proposed to include U.S. agriculture under CFIUS. The inclusion of “agriculture industrial base” in EO 14083 may foreshadow the expansion of CFIUS reviews to foreign investment in U.S. agricultural production or products via the FARM Act or otherwise.

Statutory authority for the coverage of the U.S. agriculture industrial base can be derived from CFIUS jurisdiction over “critical infrastructure” created by FIRRMA. Appendix A to the FIRRMA regulations defines “critical infrastructure” facilities and functions that give rise to “covered transactions” under CFIUS. Additionally, Title III of the DPA “allows the President to provide economic incentives to secure domestic industrial capabilities essential to meet national defense and homeland security requirements.” The DPA was invoked by President Biden in May 2022 to address the U.S. infant formula shortage, and in EO 14083 to address national security threats to the U.S. supply chain, cybersecurity, sensitive personal data, Section 1758 technologies, and the “agriculture industrial base.” What is not widely known is that U.S. companies can be subject to CFIUS review for a period of 60 months following a presidential invocation of the DPA. FIRRMA Appendix A provides in part “… that has been funded, in whole or in part, by […] (a) Defense Production Act of 1950 Title III program ….” The FIRRMA definition of “covered transactions” also specifically includes “(d) Any other transaction, transfer, agreement, or arrangement, the structure of which is designed or intended to evade or circumvent the application of section 721.”

Concluding Observations

U.S. companies covered by the Defense Production Act are subject to CFIUS review and can remain subject to CFIUS review for a period of 60 months following the receipt of any DPA funding.

EO 14083 reinforces the need for U.S. businesses to be mindful of changes in U.S. regulations applicable to foreign ownership, control, or influence; to conduct early diligence on any transaction involving international investment; and to carefully assess the implications of accepting funding under the DPA and the reach of CFIUS jurisdiction beyond the U.S. defense industry.

For more information and other updates and alerts regarding privacy law developments, subscribe to Bradley’s privacy blog Online and On Point.

David Vance Lucas is a member of Bradley’s Intellectual Property and Cybersecurity and Privacy practice groups and leads the firm’s International and Cross Border team. Much of David’s experience was accumulated as general counsel for a multinational technology company. He now advises both U.S. and foreign clients on the harmonized application of U.S., UK, and European laws, as well as CFIUStication™ of FDI diligence and disclosures.

Andrew Tuggle’s practice focuses on intellectual property, cybersecurity, and international trade. He guides clients through government regulations of cybersecurity, exports, and cross-border transactions. He also helps clients protect their innovations through patents, trademarks, and trade secrets.

California’s Attorney General Rob Bonta has made clear that California Consumer Privacy Act (CCPA) enforcement is going to be a priority for the AG’s office. On Friday, the California AG’s office announced a $1.2 million settlement of an enforcement action against Sephora, Inc. for allegedly insufficient disclosures as required by the CCPA. The biggest takeaways from this enforcement action are that (1) California will focus on clear and accurate disclosures made to consumers; (2) California is taking a liberal approach to the definition of what constitutes the “sale” of consumer data; and (3) this is a further reminder that user-enabled global privacy controls — where users can set a default “do not sell” signal through their browser — have the same effect as an affirmative request to opt out of data sharing. Bonta further indicated that a number of non-compliance notices are on their way to various other businesses purportedly violating the CCPA, and companies should take prompt action to respond and correct any deficiencies, lest they become the next Sephora.

Enforcement Action Background

The allegations against Sephora included a combination of disclosure and opt-out request failures, including:

  • Failing to disclose in its privacy policy that it was selling users’ personal data and that consumers have the right to opt out of the sale;
  • Failing to include a “Do Not Sell My Personal Information” link on its webpage and mobile app, and two or more methods for users to opt out of the sale of their data;
  • Failing to process global privacy control requests by users to opt out of the sale of their personal information;
  • Failing to execute valid service-provider contracts with each third party, which is one exception to a “sale” under the CCPA; and
  • Failing to cure these alleged deficiencies within 30 days of notice.

Sephora allegedly permitted third-party companies to install tracking software on Sephora’s website and app to track users’ activity to better market to those individuals. The complaint alleged that “Sephora gave companies access to consumer personal information in exchange for free or discounted analytics and advertising benefits,” which the State of California considered to be a “sale” of personal information for purposes of the CCPA. Thus, according to the State of California, “[b]oth the trade of personal information for analytics and the trade of personal information for an advertising option constituted sales under the CCPA.” Because these transactions were viewed as “sales” of users’ personal information, the CCPA disclosure and opt-out requirements were triggered.

Takeaways

The CCPA is not going away anytime soon, and companies should take note that the California Attorney General’s Office is keeping a close eye on CCPA compliance. If your business is one of the (un)lucky winners of the non-compliance notices referenced in Bonta’s announcement, the 30-day cure period should be treated as a hard deadline to remedy any alleged compliance issues.  Moreover, in light of the impending California Privacy Rights Act (CPRA) amendments set to take effect on January 1, 2023, with a look-back period to January 1, 2022, companies should take these steps for proactive CPRA compliance:

  • Assess if your business meets new thresholds;
  • Determine if your business collects sensitive personal information;
  • Amend service provider agreements and update templates;
  • Update your data retention policy;
  • Analyze how new privacy rights affect your business;
  • Determine if your business is a “high risk data processor”;
  • Ensure you are adequately disclosing data sales and opt-out rights on your website;
  • Ensure you have adequate processes to comply with both user opt-out requests and global privacy control requests (a detection sketch follows this list); and
  • If you receive a non-compliance notice from the California Attorney General’s Office, retain counsel immediately — or at the very least, don’t ignore it. 
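As a practical matter, the global privacy control signal referenced above reaches a business in two forms: participating browsers send a Sec-GPC: 1 request header and expose a navigator.globalPrivacyControl property to scripts. Below is a minimal server-side detection sketch, assuming an Express-style application; the opt-out helper is a stub invented for illustration.

```typescript
import express from "express";

const app = express();

// Hypothetical helper, stubbed for illustration: persist an opt-out of
// sale/sharing for the requesting visitor.
function recordOptOut(source: string, visitorKey: string): void {
  console.log(`opt-out recorded via ${source} for ${visitorKey}`);
}

app.use((req, res, next) => {
  // Under the GPC specification, the browser signal arrives server-side
  // as the "Sec-GPC: 1" request header.
  if (req.header("Sec-GPC") === "1") {
    res.locals.doNotSell = true; // suppress sales/sharing for this request
    recordOptOut("gpc-header", req.ip ?? "unknown");
  }
  next();
});

app.listen(3000);
```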

As companies look towards their CPRA compliance plans between now and the first of next year, these enforcement actions (and the issues addressed in them) provide clear insight into regulatory expectations and interpretation. The best offense isn’t always a good defense, but in this case, the platitude proves true.

Criminal cyber attacks that deprive access to vital digital information and hold it for ransom are a constant and ever-increasing threat. No organization is immune. 

Due to the exponential rise in ransomware attacks, cyber insurance coverage for ransom payments – one of the tools for mitigating cyber risk – now requires steeper premiums for much less coverage. Some argue that insurers’ payments have contributed to the increase in attacks.  Meanwhile, the FBI continues to warn that paying a ransom is never a guarantee that encrypted data will be recovered. 

Whether to pay a ransom has now become a matter of state public policy. In an effort to deter ransomware attacks on state agencies, North Carolina became the first state to enact laws prohibiting the use of tax dollars to pay ransoms (N.C.G.S. 143‑800). Pennsylvania is considering following suit. A proposed ban on ransom payments in New York would extend to private companies (see New York State Senate Bill S6806A). Whether these efforts will successfully deter cybercrime remains to be seen.  

These developments serve as a reminder to focus on cybersecurity fundamentals.  Organizations should review their cybersecurity measures on a regular basis as a matter of good governance. Simple security measures such as multifactor authentication and providing regular employee training on phishing and other social engineering scams can make all the difference.

Whether paying ransoms causes an increase in ransomware attacks by emboldening criminals will continue to be debated. But any such increase likely pales in comparison to the risks associated with the failure to institute appropriate cybersecurity measures. Too many organizations remain easy pickings. 

For more information and other updates and alerts regarding privacy law developments, subscribe to Bradley’s privacy blog, Online and On Point.