On January 26, 2023, the U.S. National Institute of Standards and Technology (NIST) released the Artificial Intelligence Risk Management Framework (AI RMF 1.0), a voluntary guidance document for managing and mitigating the risks of designing, developing, deploying, and using AI products and services. NIST also released a companion playbook for navigating the framework, a roadmap for future work, and a mapping of the framework to other standards and principles, both at home and abroad. This guidance, developed through a consensus-based approach across a broad cross section of stakeholders, offers an essential foundation and an important building block toward responsible AI governance.

The AI Framework

We stand at a crossroads as case law and regulatory law struggle to keep up with technology. As regulators consider policy solutions and levers to regulate AI risks and trustworthiness, many technology companies have adopted self-governing ethical principles and standards surrounding the development and use of artificial and augmented intelligence technologies. In the absence of clear legal rules, these internal expectations guide organizational actions and serve to reduce the risk of legal liability and negative reputational impact.

Over the past 18 months, NIST developed the AI Risk Management Framework with input from and in collaboration with the private and public sectors. The framework takes a major step toward public-private collaboration and consensus through a structured yet flexible approach allowing organizations to anticipate and introduce accountability structures. The first half of the AI Risk Management Framework outlines principles for trustworthy AI, and the remainder describes how organizations can address these principles in practice by applying the core functions of creating a culture of risk management (govern), identifying risks and context (map), assessing and tracking risks (measure), and prioritizing risk based on impact (manage). NIST plans to work with the AI community to update the framework periodically.

Specifically, the framework offers noteworthy contributions on the pathway toward governable and accountable AI systems: 

  • Moves beyond technical standards to consider social and professional responsibilities in making AI risk determinations
  • Establishes trust principles, namely that responsible AI systems are valid and reliable; safe, secure and resilient; accountable and transparent; explainable and interpretable; privacy-enhanced; and fair, with harmful bias managed
  • Emphasizes context (e.g., industry, sector, business purposes, technology assessments) in critically analyzing the risks and potential impacts of particular use cases
  • Provides steps for managing risks via governance functions; mapping broad perspectives and interdependencies to testing, evaluation, verification, and validation within a defined business case; measuring AI risks and impacts; and managing resources to mitigate negative impacts
  • Rationalizes the field so that organizations of all sizes can adopt recognized practices and scale as AI technology and regulations develop

The Playbook

This companion tool provides actionable strategies for the activities in the core framework. As with NIST’s Cybersecurity and Privacy Frameworks, the AI Risk Management Framework is expected to evolve with stakeholder input. NIST expects the AI community to build out these strategies, keeping the playbook dynamic, and will update the playbook in spring 2023 with any comments received by the end of February.

The Roadmap

The roadmap for the NIST AI Risk Management Framework identifies the priorities and key activities that NIST and other organizations could undertake to advance the state of AI trustworthiness. Importantly, NIST intends to grapple with one of the more complex issues in implementing AI frameworks, namely balancing the trade-offs among the trust principles in light of the use cases and values at play. NIST seeks to showcase profiles and case studies that highlight particular use cases and organizational challenges. NIST also will work across the federal government and on the international stage to identify and align standards development.

Mapping to Other Standards

The AI Risk Management Framework includes a map that crosswalks AI principles to global standards, such as the proposed European Union Artificial Intelligence Act, the Organisation for Economic Co-operation and Development (OECD) Recommendation on AI, Executive Order 13960 on trustworthy AI in the federal government, and the Biden administration’s Blueprint for an AI Bill of Rights. The crosswalk enables organizations to readily leverage existing frameworks and principles.


AI is a rapidly developing field that offers many potential benefits but poses novel challenges and risks. With the launch of the framework, NIST also published supportive stakeholder perspectives from business and professional associations, technology companies, and think tanks such as the U.S. Chamber of Commerce, the Bipartisan Policy Center, and the Federation of American Scientists. Because the NIST AI Risk Management Framework’s foundational approach evolves as our understanding of the technology and its impact evolves, it gives regulators flexibility and a starting point for improving policy options while avoiding a more prescriptive approach that could stifle innovation. The AI Risk Management Framework and its accompanying resources articulate expectations and will help AI stakeholders implement best practices for managing the opportunities, responsibilities, and challenges of artificial intelligence technologies.

For more information and other updates and alerts regarding privacy law developments, subscribe to Bradley’s privacy blog Online and On Point.

Data Privacy Day, annually celebrated on January 28, is the new year nudge we need to prioritize the safety of our personal information. The digital world will continue to evolve, and the line between our online and offline lives will continue to blur. As we continue to rely on digital technology to manage our personal and professional lives, we must rethink what we share, when we share, how we share, and who we share it with.

Grab your coffee and join us for a morning Q&A with our Bradley Cybersecurity and Privacy team to celebrate Data Privacy Day (a day early!). We will be available from 10:00 to 10:50 a.m. ET on Friday, January 27 for you to drop in and ask your toughest privacy questions. Please register here, and we hope to see you there!

Over the past few decades, technology has taken a fascinating turn. One can use a retinal scan to expedite the airport security process. Need to clock in for work? This can be done with the scan of a finger. We even have the convenience of unlocking our iPhones with a simple, quick gaze into the phone’s front camera. While the use of this technology has certainly made things easier, such use across various industries has led to concerns about individual privacy.

In response to these concerns, the Mississippi Legislature, on January 12, 2023, proposed House Bill 467, the Biometric Identifiers Privacy Act. The proposed legislation, among other things, seeks to require private entities (1) to be forthcoming about their collection and storage of individuals’ biometric identifiers, and (2) to develop a policy that establishes a retention schedule and guidelines for destroying the biometric identifiers of individuals.

What are biometric identifiers?

Inquiring minds may be wondering, what are biometric identifiers? Simply put, and pursuant to the act, biometric identifiers are defined as “the data of an individual generated by the individual’s unique biological characteristics.” Biometric identifiers may include, but are not limited to:

  • Faceprints
  • Fingerprints
  • Voiceprints
  • Retina or iris images

The act defines biometric identifiers to not include:

  • A writing sample or written signature
  • A photograph or video, except for data collected from the biological characteristics of a person depicted in the photograph or video
  • A human biological sample used for valid scientific testing or screening
  • Demographic data
  • A physical description, including height, weight, hair color, eye color, or a tattoo description
  • Donated body parts that have been obtained or stored by a federal agency
  • Information collected, used, or stored for purposes under the federal Health Insurance Portability and Accountability Act of 1996 (HIPAA)
  • Images or film of the human anatomy used to diagnose and treat medical conditions or to further validate scientific testing
  • Information collected, used, or disclosed for human subject research that is conducted in accordance with the federal policy for the protection of human subjects

If passed, who will the act apply to?

The act will apply to private entities only. The act defines a private entity as “any individual acting in a commercial context, partnership, corporation, limited liability company, association, or other group, however organized.” The act will not apply to state or local government agencies or entities.

What will a Mississippi private entity need to do to ensure it is in compliance with the act?

If enacted, Mississippi private entities in possession of biometric identifiers will be required to, among other things:

  • Inform subjected individuals (or their legal representative), in writing, that they are collecting or storing that individual’s biometric identifier(s)
  • Inform the individual, in writing, of the purpose of the collection, storage, and/or use of their biometric identifier(s) and the length of time for which the identifier(s) will be collected, stored, and/or used
  • Obtain a written release executed by the subject (or legal representative) of the biometric identifier
  • Develop a publicly accessible written policy that establishes a retention schedule and guidelines for permanently destroying a biometric identifier
    • The entity is not required to make its policy publicly accessible if the policy (1) applies only to employees of that private entity, and (2) is used solely within the private entity for operation of the private entity.
    • Additionally, the entity must destroy any biometric identifier in its possession on the earliest of (1) the date on which the purpose of collecting or obtaining the biometric identifier has been satisfied; (2) one year after the individual’s last interaction with the private entity; or (3) 30 days after receiving an individual’s (or legal representative’s) request to delete the biometric identifier.
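The “earliest of three triggers” destruction rule above reduces to a small date calculation. The sketch below is a hypothetical helper based on the summary of the bill, not language from the act itself; the function name and the one-year/30-day arithmetic are my own framing of the three triggers.

```python
from datetime import date, timedelta
from typing import Optional

def destruction_deadline(purpose_satisfied: Optional[date],
                         last_interaction: date,
                         deletion_request: Optional[date]) -> date:
    """Return the earliest of the act's three destruction triggers:
    (1) the date the collection purpose was satisfied,
    (2) one year after the individual's last interaction, and
    (3) 30 days after a deletion request (if one was received)."""
    candidates = [last_interaction + timedelta(days=365)]      # trigger (2)
    if purpose_satisfied is not None:
        candidates.append(purpose_satisfied)                   # trigger (1)
    if deletion_request is not None:
        candidates.append(deletion_request + timedelta(days=30))  # trigger (3)
    return min(candidates)
```

For example, a deletion request received on July 2, 2023, would move the deadline to August 1, 2023, even if the individual interacted with the entity recently.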

Furthermore, if an individual (or legal representative) requests that the private entity disclose any biometric identifiers that the private entity collected, the private entity must do so free of charge.

Of course, nothing in life is free. Such “free” disclosure is required only of entities that (1) do business in Mississippi; (2) are for profit; (3) collect consumers’ biometric identifiers or have such identifiers collected on their behalf; and (4) had revenue exceeding $10 million in the preceding calendar year.
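The four-part applicability test reads naturally as a single conjunction. This is a hypothetical illustration of the logic only; the parameter names are mine, not the bill’s.

```python
def free_disclosure_required(does_business_in_ms: bool,
                             is_for_profit: bool,
                             collects_biometric_identifiers: bool,
                             prior_year_revenue: float) -> bool:
    """All four statutory conditions must hold simultaneously."""
    return (does_business_in_ms
            and is_for_profit
            and collects_biometric_identifiers
            and prior_year_revenue > 10_000_000)
```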

What does this mean for Mississippi private entities?

Let’s face it, most people are sick and tired of having to remember passwords and verification questions for every system or database they must access on a regular basis. Because of this, people may prefer the collection, storage, and/or use of their biometric identifiers in exchange for convenience and easy access. However, use of such biometric identifiers will require entities to comply with applicable state and federal laws. To avoid any civil liability for the failure to protect an individual’s biometric identifiers under Mississippi law, Mississippi private entities should:

  • Prepare policies that are in compliance with the act, and make such policies available to individuals whose biometric data is being obtained. Specifically, draft a policy that details the entity’s retention plan for the collection and storage of biometric identifiers, as well as guidelines for destroying the biometric identifiers. Compliance with such policies is key.
  • Inform individuals, in writing, that you are collecting their biometric data. A private entity should also inform the individual, in writing, of the specific purpose and length of term for collecting the biometric data.
  • Obtain written releases from individuals whose biometric identifiers are being collected, stored, and/or used.
  • Use strong cybersecurity software and processes using a reasonable standard of care within the private entity’s industry to protect the biometric identifiers of individuals.
  • Destroy the biometric identifiers upon request by the individual.
  • Train management on the policies and the importance of protecting biometric identifiers so they can answer individuals’ questions and alleviate any concerns regarding the collection of their biometric identifiers.

A failure to comply with the act will have consequences. The act creates a private right of action against an offending entity. Individuals who prove their claims may recover the greater of $1,000 or actual damages for a negligent violation, or the greater of $5,000 or actual damages for an intentional or reckless violation, plus reasonable attorneys’ fees and costs and any other relief the court deems appropriate.
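The “greater of” damages rule amounts to a simple maximum. The sketch below covers the statutory recovery floor only (a hypothetical helper; attorneys’ fees, costs, and other relief are separate and not modeled).

```python
def statutory_recovery(actual_damages: float,
                       intentional_or_reckless: bool) -> float:
    """Greater of the statutory floor ($1,000 for a negligent violation,
    $5,000 for an intentional or reckless one) or actual damages."""
    floor = 5_000 if intentional_or_reckless else 1_000
    return max(floor, actual_damages)
```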

If passed, the act will take effect on July 1, 2023. For more information and other updates and alerts regarding privacy law developments, subscribe to Bradley’s privacy blog Online and On Point.

The case of Popa v. Harriet Carter Gifts, Inc. “began with a quest for pet stairs.” Plaintiff Ashley Popa searched Harriet Carter Gifts’ website, added pet stairs to her cart, but never completed the purchase. During her “quest,” Popa’s information was collected not only by Harriet Carter Gifts, but also by a third-party marketing company, NaviStone, using cookie technology. In an opinion with potentially far-reaching ramifications, the Third Circuit held that NaviStone’s collection of Popa’s information violated Pennsylvania’s Wiretapping and Electronic Surveillance Control Act (WESCA). This decision follows the Ninth Circuit’s lead in reviving similar claims brought pursuant to the California Invasion of Privacy Act that were initially dismissed on the basis that retroactive consent was not valid. In both cases, however, there remains a question as to whether the consumers impliedly consented to the collection of their browsing information as a result of disclosures in the website operators’ privacy policies.

Background of Litigation

Like many other states’ wiretapping laws, WESCA “prohibits the interception of wire, electronic, or oral communications, which means it is unlawful to acquire those communications using a device.” It also provides a private right of action for individuals to bring suit against parties for such unlawful interception.

While Popa was browsing the Harriet Carter website for pet stairs, her browser communicated with servers operated by Harriet Carter Gifts. And, as part of the online marketing services NaviStone provides to Harriet Carter Gifts, Popa’s browser also communicated with servers operated by NaviStone. This interaction allowed NaviStone to place a tracking cookie on Popa’s device. The cookie, in turn, allowed NaviStone to collect information about how Popa interacted with the Harriet Carter website so that NaviStone could show Popa personalized advertisements across the web.

Popa filed a class action lawsuit against Harriet Carter and NaviStone, claiming that Harriet Carter and NaviStone used tracking technology without her knowledge or consent in violation of WESCA. The district court granted summary judgment in favor of Harriet Carter and NaviStone on the WESCA violation claim, holding that NaviStone could not have “intercepted” Popa’s communications because NaviStone was a “party” to the “electronic conversation,” or alternatively, that if any interception did occur, such interception occurred outside Pennsylvania’s borders, and thus WESCA did not apply.

On appeal, the U.S. Court of Appeals for the Third Circuit ruled Harriet Carter and NaviStone could be held liable for violating WESCA if they deployed software and tracking cookies to collect data about a website visitor’s behavior without the visitor’s consent. While questions remain regarding whether Harriet Carter’s website had a posted privacy policy during Popa’s visit, and whether that privacy policy was sufficient to imply consent, numerous class actions have already been filed under similar theories. Because the district court did not address the implied consent argument in its summary judgment order, the Third Circuit declined to address it in the first instance and instead remanded the case to the trial court for further proceedings. The question will now become whether, under Pennsylvania law, Popa “knew or should have known[] that the conversation was being recorded” as a result of the website’s privacy policy such that she impliedly consented to the recording.


This ruling highlights the importance of obtaining consent from website visitors before collecting their data. It also underscores the need for retailers and digital marketers to be aware of and comply with state and federal laws related to electronic communications and data collection.

The decision also has practical implications for companies and digital marketing service providers engaged in the “passive collection” of consumer data in which background technologies collect a consumer’s information without the consumer affirmatively providing that information. The Third Circuit’s broad interpretation of the “interception” of a communication and narrow interpretation of the exceptions to liability under WESCA may increase the risks to companies and service providers that use these tracking technologies in Pennsylvania or states with similar wiretapping or privacy laws.

To mitigate these risks, companies should carefully review their online marketing practices, website operations, privacy disclosures, and consent mechanisms to ensure compliance with state and federal laws related to electronic communications, data privacy, and data collection. Providing clear and transparent privacy notices that disclose how these background communications work and who receives them may help to establish an implied consent defense to WESCA claims. However, the exact elements or standards required to obtain implied consent are currently unclear. Nonetheless, prior express consent from all parties is another clear defense to WESCA and other state wiretap claims.

President Biden issued Executive Order (EO) 14083 on September 15, 2022, establishing five factors for reviews by the Committee on Foreign Investment in the U.S. (CFIUS), and areas of heightened scrutiny for transactions impacting the U.S. supply chain, cybersecurity, sensitive personal data, agricultural production, and Section 1758 technologies.

Driven by eroding economic and geopolitical conditions, the U.S. and its primary trading partners have continued to expand the regulation of foreign direct investment. EO 14083 and an earlier EO in May both invoked the Defense Production Act (DPA) with resulting foreign direct investment implications.

As background, businesses involved in the U.S. defense industrial base have long been protected from the risks of foreign direct investment through CFIUS review – but changes to U.S. laws and regulations on foreign direct investment have expanded the protections beyond the traditional U.S. defense industry. The Foreign Investment Risk Review Modernization Act (FIRRMA) expanded CFIUS to protect businesses engaged in critical technologies, critical infrastructure, and sensitive personal data. FIRRMA was intended to close gaps in national security review and resulted in expanded CFIUS coverage and powers. Subsequent changes to U.S. foreign direct investment regulations have further impacted U.S. businesses engaged in critical technologies, critical infrastructure, and sensitive personal data.

Factors for Review

EO 14083 further advances U.S. foreign direct investment protections by requiring that CFIUS specifically consider five factors in its national security reviews, namely impacts to U.S.:

  • Supply chains, including but not limited to the defense industrial base, derived in part from EO 14017 regarding America’s Supply Chains
  • Cybersecurity defenses and protections, both commercial and governmental
  • Sensitive personal data of U.S. citizens, including access by foreign actors
  • Industry segments from cumulative foreign investments or investment trends
  • Technological leadership in microelectronics, artificial intelligence, biotechnology and biomanufacturing, quantum computing, advanced clean energy, climate adaptation technologies, critical rare earth materials, and, significantly, “elements of the agriculture industrial base that have implications for food security” – based on Export Control Reform Act (ECRA)/FIRRMA Section 1758 covered technologies

New Areas of Impact

Foreign investment trends in U.S. industry segments

EO 14083 references industries and industry segments that “are fundamental to U.S. technological leadership and therefore national security.” Based on guidance in the EO, CFIUS will now be required to assess a covered transaction in the context of other investments in the relevant industry or industry segment. In doing so, CFIUS will likely review proposed transactions in the context of previous cleared and proposed transactions in the same industry segment, in order to determine if collectively the transactions could cumulatively result in the transfer of Section 1758 technologies in key industries or otherwise harm national security. As a result, parties considering a transaction, like CFIUS, will need to take industry trends and transactions into account – not just the specific proposed transaction.

Cybersecurity defenses and protections

The White House has previously emphasized the importance of cybersecurity. And FIRRMA identified “cybersecurity vulnerabilities” as a relevant factor for CFIUS. Now EO 14083 more specifically identifies the nature of vulnerabilities that CFIUS should guard against. Some of these are familiar themes: critical infrastructure (already a prong for CFIUS jurisdiction); the defense industrial base; national security priorities (from EO 14028); and critical energy infrastructure, such as smart grids (similar to the Department of Energy’s “100-day plan”). But the order specifies two new types of intrusions, which may echo news items from recent years. First, CFIUS should consider transactions’ effects in giving a foreign person capability to affect the “confidentiality, integrity, or availability of United States communications.” Second, it should try to foresee activity designed to “interfere with United States elections.” It remains to be seen how broadly those factors could reach. Still, because cybersecurity was already a factor under FIRRMA, it is likely that this specific development represents a refinement, not a sharp change of direction.

Sensitive personal data of U.S. citizens

Under FIRRMA, CFIUS should consider exposure of “personally identifiable information.” But EO 14083 recognizes that “personally identifiable” is a moving target: new technology and more data allow previously anonymous datasets to be de-anonymized. The order also broadens the historical focus on individuals, speaking instead of exploiting data to target “individuals or groups,” and it expands the covered data to include “data on sub-populations.” If your company keeps data on U.S. individuals or “sub-populations” — however well anonymized — then expect that CFIUS will consider whether your data could be used (including in combination with other data) to undermine national security. Combined with the refined specification of cybersecurity vulnerabilities, this could lead to some previously unexpected decisions by CFIUS.

Agriculture Industrial Base

White House guidance notes that the EO does not expand CFIUS jurisdiction and should be read in the context of existing authority. However, the EO expressly includes “elements of the agriculture industrial base that have implications for food security” – a sector not otherwise expressly addressed by CFIUS regulation or FIRRMA. Given that CFIUS has already been focused on most of the other factors highlighted in the EO, perhaps the most significant impact of EO 14083 is its implication for the U.S. agriculture industry. It is not surprising that there are national security implications to U.S. food production and supply, particularly given recent shortages and projections of further shortages. What is surprising is that FIRRMA already provided for the application of CFIUS to food production via the DPA – as invoked by the recent EO. Nonetheless, the EO’s specific reference to the “agriculture industrial base” is likely best assessed in the context of pending legislation proposing to address foreign investment in U.S. agriculture.

The proposed Foreign Adversary Risk Management Act (the “FARM” Act) would expand the CFIUS definition of “critical infrastructure” to include agricultural production facilities and real estate, i.e., the U.S. agricultural supply chain. Similar bills, such as the Food is National Security Act, have been proposed to include U.S. agriculture under CFIUS. The inclusion of “…agriculture industrial base…” in EO 14083 may foreshadow the expansion of CFIUS reviews to foreign investment in U.S. agricultural production or products via the FARM Act or otherwise.

Statutory authority for the coverage of the U.S. agriculture industrial base can be derived from CFIUS jurisdiction over “critical infrastructure” created by FIRRMA. Appendix A to the FIRRMA regulations defines “Critical Infrastructure” facilities and functions to be “Covered Transactions” under CFIUS. Additionally, Title III of the DPA “allows the President to provide economic incentives to secure domestic industrial capabilities essential to meet national defense and homeland security requirements.” The DPA was invoked by President Biden in May 2022 to address the U.S. infant formula shortage, and in EO 14083 to address national security threats to the U.S. supply chain, cybersecurity, sensitive personal data, Section 1758 technologies, and the “agriculture industrial base.” What is not widely known is that U.S. companies can be subject to CFIUS review for a period of 60 months following a presidential invocation of the DPA. FIRRMA Appendix A provides in part “… that has been funded, in whole or in part, by […] (a) Defense Production Act of 1950 Title III program …..” The FIRRMA definition of “covered transactions” also specifically includes “(d) Any other transaction, transfer, agreement, or arrangement, the structure of which is designed or intended to evade or circumvent the application of section 721.”

Concluding Observations

U.S. companies covered by the Defense Production Act are subject to CFIUS review and can remain subject to CFIUS review for a period of 60 months following the receipt of any DPA funding.

EO 14083 reinforces the need for U.S. businesses to be mindful of changes in U.S. regulations applicable to foreign ownership, control, or influence – the need for early diligence of any transaction involving international investment – and to carefully assess the implications of accepting funding under the DPA and jurisdiction of CFIUS beyond the U.S. defense industry.

For more information and other updates and alerts regarding privacy law developments, subscribe to Bradley’s privacy blog Online and On Point.

David Vance Lucas is a member of Bradley’s Intellectual Property and Cybersecurity and Privacy practice groups and leads the firm’s International and Cross Border team. Much of David’s experience was accumulated as general counsel for a multinational technology company. He now advises both U.S. and foreign clients on the harmonized application of U.S., UK, and European laws, as well as CFIUStication™ of FDI diligence and disclosures.

Andrew Tuggle’s practice focuses on intellectual property, cybersecurity, and international trade. He guides clients through government regulations of cybersecurity, exports, and cross-border transactions. He also helps clients protect their innovations through patents, trademarks, and trade secrets.

California’s Attorney General Rob Bonta has made clear that California Consumer Privacy Act (CCPA) enforcement is going to be a priority for the AG’s office. On Friday, the California AG’s office announced a $1.2 million settlement of an enforcement action against Sephora, Inc. for allegedly insufficient disclosures as required by the CCPA. The biggest takeaways from this enforcement action are that (1) California will focus on clear and accurate disclosures made to consumers; (2) California is taking a liberal approach to the definition of what constitutes the “sale” of consumer data; and (3) this is a further reminder that user-enabled global privacy controls — where users can set a default “do not sell” signal through their browser — have the same effect as an affirmative request to opt out of data sharing. Bonta further indicated that a number of non-compliance notices are on their way to various other businesses purportedly violating the CCPA, and companies should take prompt action to respond and correct any deficiencies, lest they become the next Sephora.

Enforcement Action Background

The allegations against Sephora included a combination of disclosure and opt-out request failures, including:

  • Failing to disclose in its privacy policy that it was selling users’ personal data and that consumers have the right to opt out of the sale;
  • Failing to include a “Do Not Sell My Personal Information” link on its webpage and mobile app, and two or more methods for users to opt out of the sale of their data;
  • Failing to process global privacy control requests by users to opt out of the sale of their personal information;
  • Failing to execute valid service-provider contracts with each third party, which is one exception to a “sale” under the CCPA; and
  • Failing to cure these alleged deficiencies within 30 days of notice.

Sephora was allegedly permitting third-party companies to install tracking software on Sephora’s website and app to track users’ activity to better market to those individuals. The complaint alleged that “Sephora gave companies access to consumer personal information in exchange for free or discounted analytics and advertising benefits,” which the State of California considered to be a “sale” of personal information for purposes of the CCPA. Thus, according to the State of California, “[b]oth the trade of personal information for analytics and the trade of personal information for an advertising option constituted sales under the CCPA.” Because these transactions were viewed as “sales” of users’ personal information, the CCPA disclosure and opt-out requirements were triggered. 


The CCPA is not going away anytime soon, and companies should take note that the California Attorney General’s Office is keeping a close eye on CCPA compliance. If your business is one of the (un)lucky winners of the non-compliance notices referenced in Bonta’s announcement, the 30-day cure period should be treated as a hard deadline to remedy any alleged compliance issues. Moreover, in light of the impending California Privacy Rights Act (CPRA) amendments set to take effect on January 1, 2023, with a look-back period to January 1, 2022, companies should take these steps for proactive CPRA compliance:

  • Assess if your business meets new thresholds;
  • Determine if your business collects sensitive personal information;
  • Amend service provider agreements and update templates;
  • Update your data retention policy;
  • Analyze how new privacy rights affect your business;
  • Determine if your business is a “high risk data processor”;
  • Ensure you are adequately disclosing data sales and opt-out rights on your website;
  • Ensure you have adequate processes to comply with both user opt-out requests and global privacy control requests; and
  • If you receive a non-compliance notice from the California Attorney General’s Office, retain counsel immediately — or at the very least, don’t ignore it. 
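The “global privacy control” item above refers to the Global Privacy Control (GPC) signal, which participating browsers and extensions transmit as the `Sec-GPC: 1` HTTP request header. As a rough illustration of what an automated opt-out process might look for, here is a minimal, framework-agnostic sketch; the function names are illustrative and not drawn from any statute or library:

```python
def carries_gpc_signal(headers):
    """Return True if the request carries a Global Privacy Control signal.

    Under the GPC proposal, a browser with the signal enabled sends the
    header `Sec-GPC: 1` with each request.
    """
    # HTTP header names are case-insensitive, so normalize keys first.
    normalized = {k.lower(): v.strip() for k, v in headers.items()}
    return normalized.get("sec-gpc") == "1"


def may_sell_data(headers, user_prefs):
    """Combine an explicit user opt-out with the browser-level GPC signal.

    `user_prefs` is a hypothetical per-user record; a real system would
    pull this from its consent-management store.
    """
    opted_out = user_prefs.get("opted_out", False) or carries_gpc_signal(headers)
    return not opted_out
```

A business that treats either an explicit opt-out request or the GPC header as a valid opt-out would call `may_sell_data` before sharing data with third-party analytics or advertising partners.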

As companies look toward their CPRA compliance plans between now and the first of next year, these enforcement actions (and the issues addressed in them) provide clear insight into regulators’ expectations and interpretations. The saying goes that the best defense is a good offense; in the CCPA context, the platitude proves true.

Criminal cyber attacks that deprive organizations of access to vital digital information and hold it for ransom are a constant and ever-increasing threat. No organization is immune.

Due to the exponential rise in ransomware attacks, cyber insurance coverage for ransom payments – one of the tools for mitigating cyber risk – now requires steeper premiums for much less coverage. Some argue that insurers’ payments have contributed to the increase in attacks.  Meanwhile, the FBI continues to warn that paying a ransom is never a guarantee that encrypted data will be recovered. 

Whether to pay a ransom has now become a matter of state public policy. In an effort to deter ransomware attacks on state agencies, North Carolina became the first state to enact laws prohibiting the use of tax dollars to pay ransoms (N.C.G.S. 143‑800). Pennsylvania is considering following suit. A proposed ban on ransom payments in New York would extend to private companies (see New York State Senate Bill S6806A). Whether these efforts will successfully deter cybercrime remains to be seen.  

These developments serve as a reminder to focus on cybersecurity fundamentals. Organizations should review their cybersecurity measures on a regular basis as a matter of good governance. Simple measures such as requiring multifactor authentication and providing regular employee training on phishing and other social engineering scams can make all the difference.

Whether paying ransoms causes an increase in ransomware attacks by emboldening criminals will continue to be debated. But any such increase likely pales in comparison to the risks associated with the failure to institute appropriate cybersecurity measures. Too many organizations remain easy pickings. 

For more information and other updates and alerts regarding privacy law developments, subscribe to Bradley’s privacy blog, Online and On Point.

Following a near-unanimous vote in the Connecticut House, Connecticut is set to become the fifth state to pass comprehensive privacy legislation. With the addition of the Connecticut Data Privacy Act (CTDPA), Connecticut joins California, Virginia, Colorado, and Utah in regulating businesses that possess, store, and/or sell consumers’ personal data. The CTDPA comes on the heels of the Utah Consumer Privacy Act (UCPA), passed in March 2022. You can read the full text of the CTDPA here.

Who’s affected?

Like the Colorado Privacy Act (CPA) and Virginia’s Consumer Data Protection Act (VCDPA), the CTDPA will place similar personal data security and disclosure requirements on businesses that meet prescribed thresholds.

The CTDPA will regulate all businesses that conduct business in the state or produce products or services targeted to consumers in the state and that, in the preceding calendar year, satisfied either of two thresholds:

  1. Processed personal data of at least 100,000 consumers (excluding personal data processed solely for completing a payment transaction), or
  2. Processed personal data of at least 25,000 consumers and derived at least 25% gross revenue from the sale of personal data.

The CTDPA’s applicability requirements are in line with the CPA and VCDPA, rather than the more business-friendly UCPA.
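Because the two CTDPA thresholds combine a headcount test (with a carve-out for payment-only processing) and a revenue-share test, the applicability analysis reduces to a short piece of conditional logic. A minimal sketch, with illustrative parameter names and figures taken only from the thresholds described above:

```python
def ctdpa_applies(consumers_processed, payment_only_consumers,
                  sale_revenue_share, in_scope=True):
    """Rough, illustrative check of the CTDPA's two applicability thresholds.

    consumers_processed: Connecticut consumers whose personal data the
        business processed in the preceding calendar year.
    payment_only_consumers: the subset processed solely to complete a
        payment transaction (excluded from the first threshold).
    sale_revenue_share: fraction of gross revenue derived from selling
        personal data (0.0 to 1.0).
    in_scope: whether the business operates in or targets Connecticut.
    """
    if not in_scope:
        return False
    # Threshold 1: 100,000+ consumers, excluding payment-only processing.
    threshold_one = (consumers_processed - payment_only_consumers) >= 100_000
    # Threshold 2: 25,000+ consumers AND 25%+ of revenue from data sales.
    threshold_two = consumers_processed >= 25_000 and sale_revenue_share >= 0.25
    return threshold_one or threshold_two
```

This is an analytical aid, not legal advice; the statute's definitions of “consumer,” “sale,” and “processing” control in any real assessment.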

What obligations are placed on businesses?

Akin to the other state laws, businesses have several similar obligations to ensure compliance with the CTDPA:

  • Sensitive Data: Unlike the UCPA, the CTDPA requires a business to obtain consumer consent before collecting or using “sensitive data.” The CTDPA defines “sensitive data” to include racial or ethnic origin, religious beliefs, health condition or diagnosis, sexual orientation, genetic or biometric data for the purposes of identifying an individual, children’s data, and precise geolocation data.
  • Protective Measures: Establish, implement, and maintain reasonable administrative, technical, and physical data security practices.
  • Children’s Data: Must receive consent before selling the personal data of or conducting targeted advertising relating to consumers between ages 13 and 16 years old, if the business has actual knowledge of the consumer’s age.
  • Data Protection Assessments: Conduct and document a data protection assessment for each of the business’s processing activities that present a heightened risk of harm to consumers. A “heightened risk of harm” includes (1) processing for targeted advertising, (2) the sale of personal data, (3) processing for profiling, and (4) processing sensitive data. The Connecticut attorney general may request these assessments; however, they are not subject to FOIA requests.
  • Opt-Out Preference Signal: Similar to the California Privacy Rights Act (CPRA) and the CPA, the CTDPA obligates businesses to honor an opt-out preference signal for consumers. The opt-out signal would apply to all businesses that conduct targeted advertising or sell personal data. This requirement will take effect January 1, 2025, six months after the CPA’s opt-out preference signal requirement goes into effect.

Other Notable CTDPA Provisions

  • Consumer Rights: Consumers will have the right to (1) access their personal data held by businesses; (2) delete that personal data; (3) correct any inaccuracies in that personal data; (4) transmit their personal data to another business (but only where the collection was conducted by automated means); and (5) opt out of targeted advertising, the sale of their personal data, and/or profiling.
  • Exemptions: Exemptions include (1) nonprofit organizations; (2) financial institutions and information regulated under GLBA; (3) covered entities, business associates, and information regulated under HIPAA; and (4) employee and business-to-business (B2B) information.
  • Task Force: Like the VCDPA, the CTDPA does not grant any rulemaking authority. By September 1, 2022, the CTDPA instructs the General Assembly’s joint standing committee to convene a privacy task force to study various privacy issues – in particular, algorithmic decision-making and the possibility of expanding certain protections and rights under the CTDPA, as well as issues stemming from data colocation. By January 1, 2023, the task force will submit a report presenting its findings and recommendations to the committee.

Enforcement power and the opportunity to cure

State residents will not have a private right of action under the CTDPA. Sole enforcement authority will be vested in the Connecticut Attorney General’s Office. The attorney general will also have the power to review and evaluate businesses’ data protection assessments for compliance in relation to an investigation. Violations of the CTDPA will constitute an unfair trade practice under Connecticut law.

The CTDPA contains a right-to-cure provision comparable to those in the California Consumer Privacy Act (CCPA) and the CPA. Between July 1, 2023, and December 31, 2024, businesses will have a 60-day right to cure deficiencies upon written notice from the attorney general. After that window closes, any opportunity to cure an alleged violation is left to the attorney general’s discretion. The CCPA’s right-to-cure period sunsets on January 1, 2023, and Colorado’s sunsets on January 1, 2025.


Once signed into law, the CTDPA will take effect on July 1, 2023, the same date as the CPA. There are still numerous proposed privacy laws pending throughout the United States. Given the recent enactment of the UCPA and now the CTDPA, the 2022 wave of state privacy laws could just be getting started. More than ever, it will be critical for businesses to have a working understanding of which state privacy laws apply to them, how they can comply, and how they can avoid regulatory inquiry. Each new state law also puts further pressure on the federal government to pass a nationwide privacy law.

For more information and other updates and alerts regarding privacy law developments, subscribe to Bradley’s privacy blog Online and On Point.

On March 18, 2022, President Biden issued a letter to California Gov. Gavin Newsom (the “March 18th letter”) requesting that he secure California’s computer systems and critical infrastructure in light of recent Russian cyberattacks against Ukraine. President Biden advised Newsom to gather his leadership team to discuss California’s cybersecurity and address several fundamental questions, including whether California’s Public Utility Commissions (or other California agencies) set minimum cybersecurity standards for California’s critical infrastructure.

President Biden further encouraged Newsom to promulgate the standards set forth in his May 2021 Executive Order, Improving the Nation’s Cybersecurity (the “May 2021 Executive Order”), to secure California’s computer systems and critical infrastructure.

Three days later, on March 21, 2022, the president issued a statement informing U.S. citizens that now is “a critical moment to accelerate our work to improve domestic cybersecurity and bolster our national resilience” (the “March 21st statement”). He averred that although the administration has made great efforts to strengthen U.S. national cyber defenses, it cannot achieve such an imperative goal alone. President Biden wrote that most of America’s critical infrastructure is owned and operated by the private sector and urged those companies to fortify their cyber defenses immediately.

The March 21st statement was accompanied by a Fact Sheet in which the administration encouraged private companies to take specific actions to help protect U.S. critical services. Some of the suggested actions were included in the May 2021 Executive Order and the March 18th letter. The most vital actions included:

  • Mandating multi-factor authentication on computer systems;
  • Deploying modern security tools on computers and devices;
  • Consulting cybersecurity professionals to ensure that systems are patched and protected against all known vulnerabilities;
  • Backing up data and ensuring that companies have offline backups;
  • Conducting exercises and drills of emergency plans;
  • Encrypting data;
  • Educating employees on how to detect cybersecurity events; and
  • Engaging proactively with a local FBI field office or a Cybersecurity and Infrastructure Security Agency’s (CISA) Regional Office to establish relationships in advance of cybersecurity events.
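The backup recommendation above pairs naturally with integrity checks: an offline copy is only useful if it has not been silently altered, whether by ransomware or by bit rot. As a small illustrative sketch (not a substitute for a real backup program), the following records a SHA-256 checksum for each file in a backup and later flags any file whose contents no longer match:

```python
import hashlib
from pathlib import Path


def build_manifest(backup_dir):
    """Record a SHA-256 checksum for every file under a backup directory."""
    manifest = {}
    for path in sorted(Path(backup_dir).rglob("*")):
        if path.is_file():
            digest = hashlib.sha256(path.read_bytes()).hexdigest()
            manifest[str(path.relative_to(backup_dir))] = digest
    return manifest


def verify_backup(backup_dir, manifest):
    """Return the names of files whose contents no longer match the manifest."""
    current = build_manifest(backup_dir)
    return [name for name, digest in manifest.items()
            if current.get(name) != digest]
```

The manifest itself should be stored separately from the backup (ideally offline), so an attacker who tampers with the backup cannot also rewrite the checksums.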

As emphasized in the March 18th letter and March 21st statement, state governments and private companies are currently at high risk for cyberattacks and should govern themselves accordingly. Taking this into consideration, companies operating in and around U.S. critical services and infrastructure should be aware of the administration’s comments and suggestions and should review their current cyber-defense protocols and procedures to ensure that the appropriate protections are in place. The CISA website provides helpful insight as to how private companies can help counter Russian cyberattacks.

For more information and other updates and alerts regarding privacy law developments, subscribe to Bradley’s privacy blog Online and On Point.

Preparing for the Tidal Wave and Bracing for the Tsunami: Utah Becomes the Fourth State to Pass Privacy Legislation

At last count, at least 39 states have introduced (or passed) comprehensive privacy legislation. After what was previously a watch-and-wait game of legislative whack-a-mole, we are now seeing this legislation passed and implemented more regularly and with greater speed.

Case in point: within two months of the new year, Senate Bill 227, titled the Utah Consumer Privacy Act (UCPA), passed both houses of the Utah Legislature and, as of March 3, is awaiting the signature of Utah Gov. Spencer J. Cox. Once the bill is on his desk, Gov. Cox has 20 days to sign or veto it before it becomes law automatically. If the UCPA is enacted, Utah will become the fourth state to enact general data privacy legislation in the United States, following California, Colorado, and Virginia. The UCPA would take effect on December 31, 2023.

The UCPA closely resembles the Virginia Consumer Data Privacy Act (VCDPA) and the Colorado Privacy Act (CPA), but also shares provisions with the California Consumer Privacy Act (CCPA) and the California Privacy Rights Act (CPRA).

You can read the full text of the UCPA here.

What does this mean for your business? We highlight key aspects of the UCPA below:

What Businesses Are Affected?

The UCPA would apply to all for-profit controllers and processors that (a) conduct business in the state or produce products or services targeted to state residents, (b) generate annual revenue of at least $25 million, and (c) satisfy one of two thresholds:

  1. In a calendar year, processes personal data of at least 100,000 state residents, or
  2. Derives over 50% of its gross revenue from the sale of personal data, and processes the personal data of at least 25,000 state residents.

The UCPA’s $25 million revenue floor adds an additional component to consider (an annual revenue requirement layered on top of a processing requirement), unlike the single-layer applicability tests of the CCPA/CPRA, VCDPA, and CPA.
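The layered structure described above can be made concrete in a few lines of conditional logic. As with any such sketch, the parameter names are illustrative and the statutory definitions control:

```python
def ucpa_applies(annual_revenue, residents_processed, sale_revenue_share,
                 in_scope=True):
    """Rough, illustrative check of the UCPA's layered applicability test.

    Unlike the CCPA/CPRA, VCDPA, and CPA, the UCPA requires a $25 million
    annual revenue floor in addition to a processing threshold.

    annual_revenue: the business's annual revenue in dollars.
    residents_processed: Utah residents whose personal data is processed
        in a calendar year.
    sale_revenue_share: fraction of gross revenue derived from selling
        personal data (0.0 to 1.0).
    in_scope: whether the business operates in or targets Utah.
    """
    # Gating requirements: Utah nexus plus the $25 million revenue floor.
    if not in_scope or annual_revenue < 25_000_000:
        return False
    # Threshold 1: personal data of 100,000+ Utah residents.
    threshold_one = residents_processed >= 100_000
    # Threshold 2: over 50% of revenue from data sales AND 25,000+ residents.
    threshold_two = sale_revenue_share > 0.50 and residents_processed >= 25_000
    return threshold_one or threshold_two
```

Note how a high-volume processor with $10 million in revenue falls outside the UCPA entirely, while the same business could still be covered by the CPA or VCDPA.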

Personal Data vs. Sensitive Data

Like the CCPA/CPRA, VCDPA, and CPA, the UCPA differentiates between “personal data” and “sensitive data.” The UCPA defines “sensitive data” as personal data revealing racial or ethnic origins, religious beliefs, sexual orientation, citizenship or immigration status, medical history or health information, biometric data, and specific geolocation data. However, the UCPA exempts the collection of personal data revealing racial or ethnic origins when processed by a “video communication service,” an undefined term. This carve-out has been in the UCPA since the Utah Legislature’s 2021 proposed bill.

Unlike the CPA and VCDPA, the UCPA does not require consent before a controller may lawfully process sensitive data, only that “clear notice” and an “opportunity to opt out” be provided beforehand.

Consumer Rights

The UCPA provides similar rights to existing state privacy laws:

  1. Right to Know/Access: Consumers may request whether a controller is processing their personal data and get access to the personal data.
  2. Right to Delete: Consumers can direct the controller to delete the personal data the consumer provided.
  3. Right to Transmit/Port: Similar to the VCDPA, a consumer can have the controller transfer their personal data to another controller where the processing is carried out by automated means.
  4. Right to Opt-Out: Consumers can opt out of the processing of their personal data for the purposes of targeted advertising and the sale of their personal data. Additionally, while not listed under the right to opt out, consumers also have the right to opt out of any processing of their sensitive data, barring any exemptions, as mentioned above.

Notably absent from the UCPA is the right to correction, in contrast to the other three states that all granted consumers the right to correct inaccuracies in their personal data processed by the controller.

No Data Protection Assessment Obligations

The UCPA does not require any risk or data protection assessment before processing consumer personal data. The CPA and VCDPA both require completion of data protection assessments where any processing presents a “heightened risk of harm to a consumer.” Similarly, the CCPA/CPRA directs the implementation of regulations for businesses to conduct “risk assessments” on a regular basis and a “cybersecurity audit” where processing “presents significant risk to consumers’ privacy or security.”

Penalties, Investigations and Amendment Procedures

In what is largely a point of contention for states seeking to enact privacy legislation, the UCPA does not grant a private right of action for any UCPA violation. Only the Utah attorney general may enforce the UCPA. Violating entities have a 30-day cure period before the Utah AG may initiate an action. In such an action, the Utah AG may recover actual damages to the consumer and up to $7,500 for each UCPA violation. If multiple controllers or processors are involved in the same violation, each may be liable for the percentage of its respective fault.

Similar to the VCDPA, the UCPA does not grant any rulemaking authority to the Utah AG. However, the UCPA directs the Utah AG to compile a report that (a) evaluates the liability and enforcement provisions of the UCPA, and (b) summarizes the data protected and not protected under the UCPA. The Utah AG must then deliver this report to the Utah Legislature’s Business and Labor Interim Committee by July 1, 2025. This report will inform the Legislature whether any amendments are warranted.


Exemptions

The UCPA has a multitude of exemptions. Below is a list of noteworthy entities and information exempt from the UCPA:

  1. Employee and Business-to-Business (B2B) Exemption: The UCPA applies only to personal data concerning state residents acting in an individual or household context. This is in contrast to the CCPA, whose employee and B2B exemptions are slated to expire when the CPRA takes effect on January 1, 2023.
  2. Financial institutions, affiliates of financial institutions, and information regulated under GLBA
  3. Covered entities, business associates, and protected health information regulated under HIPAA
  4. Information regulated under FERPA
  5. Non-profit businesses


Given how swiftly another state data privacy law passed in 2022, the UCPA is certainly not the last piece of privacy legislation we will see this year. To date, Florida, Indiana, Oklahoma, and Wisconsin already have proposed privacy bills moving through their respective legislatures. It is likely only a matter of time before we are inundated with the complex patchwork of state laws that privacy experts have predicted for years.