Criminal cyber attacks that cut off access to vital digital information and hold it for ransom are a constant and ever-increasing threat. No organization is immune.

Due to the exponential rise in ransomware attacks, cyber insurance coverage for ransom payments – one of the tools for mitigating cyber risk – now requires steeper premiums for much less coverage. Some argue that insurers’ payments have contributed to the increase in attacks.  Meanwhile, the FBI continues to warn that paying a ransom is never a guarantee that encrypted data will be recovered. 

Whether to pay a ransom has now become a matter of state public policy. In an effort to deter ransomware attacks on state agencies, North Carolina became the first state to enact laws prohibiting the use of tax dollars to pay ransoms (N.C.G.S. 143‑800). Pennsylvania is considering following suit. A proposed ban on ransom payments in New York would extend to private companies (see New York State Senate Bill S6806A). Whether these efforts will successfully deter cybercrime remains to be seen.  

These developments serve as a reminder to focus on cybersecurity fundamentals.  Organizations should review their cybersecurity measures on a regular basis as a matter of good governance. Simple security measures such as multifactor authentication and providing regular employee training on phishing and other social engineering scams can make all the difference.

Whether paying ransoms causes an increase in ransomware attacks by emboldening criminals will continue to be debated. But any such increase likely pales in comparison to the risks associated with the failure to institute appropriate cybersecurity measures. Too many organizations remain easy pickings. 

For more information and other updates and alerts regarding privacy law developments, subscribe to Bradley’s privacy blog, Online and On Point.

Following a near-unanimous vote in the Connecticut House, Connecticut is set to become the fifth state to pass comprehensive privacy legislation. With the addition of the Connecticut Data Privacy Act (CTDPA), Connecticut joins California, Virginia, Colorado, and Utah in regulating businesses that possess, store, and/or sell consumers’ personal data. The CTDPA comes on the heels of the Utah Consumer Privacy Act (UCPA), passed in March 2022. You can read the full text of the CTDPA here.

Who’s affected?

Like the Colorado Privacy Act (CPA) and Virginia’s Consumer Data Protection Act (VCDPA), the CTDPA will place personal data security and disclosure requirements on businesses that meet prescribed thresholds.

The CTDPA will regulate businesses that conduct business in the state or produce products or services targeted to consumers in the state, and that during the preceding calendar year met either of the following thresholds:

  1. Processed personal data of at least 100,000 consumers (excluding personal data processed solely for completing a payment transaction), or
  2. Processed personal data of at least 25,000 consumers and derived at least 25% of their gross revenue from the sale of personal data.

The CTDPA’s applicability thresholds are in line with those of the CPA and VCDPA, rather than the more business-friendly UCPA.

What obligations are placed on businesses?

Akin to the other state laws, businesses have several similar obligations to ensure compliance with the CTDPA:

  • Sensitive Data: Unlike the UCPA, the CTDPA requires a business to obtain the consumer’s consent before collecting or using “sensitive data.” The CTDPA defines “sensitive data” to include racial or ethnic origin, religious beliefs, health condition or diagnosis, sexual orientation, genetic or biometric data processed for the purpose of identifying an individual, children’s data, and precise geolocation data.
  • Protective Measures: Establish, implement, and maintain reasonable administrative, technical, and physical data security practices.
  • Children’s Data: Businesses must obtain consent before selling the personal data of, or conducting targeted advertising directed at, consumers between the ages of 13 and 16, if the business has actual knowledge of the consumer’s age.
  • Data Protection Assessments: Conduct and document a data protection assessment for each of the business’s processing activities that presents a heightened risk of harm to consumers. A “heightened risk of harm” includes (1) processing for targeted advertising, (2) the sale of personal data, (3) processing for profiling, and (4) processing sensitive data. The Connecticut attorney general may request these assessments; however, they are not subject to FOIA requests.
  • Opt-Out Preference Signal: Similar to the California Privacy Rights Act (CPRA) and the CPA, the CTDPA obligates businesses to recognize an opt-out preference signal for consumers. The opt-out signal requirement would apply to all businesses that conduct targeted advertising or sell personal data. This requirement will take effect January 1, 2025, six months after the CPA’s opt-out preference signal requirement goes into effect.
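In practice, the most widely adopted universal opt-out mechanism is the Global Privacy Control (GPC), which participating browsers transmit as the `Sec-GPC` HTTP header. As a rough sketch of how a business might honor such a signal (the handler and policy names are illustrative, not drawn from the statute):

```python
def honor_opt_out_signal(request_headers: dict) -> bool:
    """Return True if the request carries a universal opt-out signal.

    The Global Privacy Control specification transmits the signal as
    the "Sec-GPC" header with a value of "1".
    """
    # HTTP header names are case-insensitive; normalize keys first.
    normalized = {k.lower(): v for k, v in request_headers.items()}
    return normalized.get("sec-gpc") == "1"


def build_ad_context(request_headers: dict) -> dict:
    # Hypothetical policy hook: suppress targeted advertising and the
    # sale of personal data when the opt-out signal is present.
    opted_out = honor_opt_out_signal(request_headers)
    return {
        "targeted_advertising": not opted_out,
        "sell_personal_data": not opted_out,
    }
```

A real deployment would also need to persist the opt-out against the consumer's profile and propagate it to downstream ad-tech partners; this sketch only shows the signal detection step.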

Other Notable CTDPA Provisions

  • Consumer Rights: Consumers will have the right to (1) access their personal data held by businesses; (2) delete that personal data; (3) correct any inaccuracies in that personal data; (4) transmit their personal data to another business (but only where the collection was conducted by automated means); and (5) opt out of targeted advertising, the sale of their personal data, and/or profiling.
  • Exemptions: Exemptions include (1) nonprofit organizations; (2) financial institutions and information regulated under GLBA; (3) covered entities, business associates, and information regulated under HIPAA; and (4) employee and business-to-business (B2B) information.
  • Task Force: Like the VCDPA, the CTDPA does not grant any rulemaking authority. By September 1, 2022, the CTDPA instructs the General Assembly’s joint standing committee to convene a privacy task force to study various privacy issues – in particular, algorithmic decision-making and the possibility of expanding certain protections and rights under the CTDPA, as well as issues stemming from data colocation. By January 1, 2023, the task force will submit a report of its findings and recommendations to the committee.

Enforcement power and the opportunity to cure

State residents will not have a private right of action under the CTDPA. Sole enforcement authority will be vested in the Connecticut Attorney General’s Office. The attorney general will also have the power to review and evaluate businesses’ data protection assessments for compliance in relation to an investigation. Violations of the CTDPA will constitute an unfair trade practice under Connecticut law.

The CTDPA contains a right-to-cure provision comparable to those in the California Consumer Privacy Act (CCPA) and the CPA. Between July 1, 2023, and December 31, 2024, businesses will have a 60-day right to cure deficiencies upon written notice from the attorney general. After that time, opportunities to cure an alleged violation will be at the attorney general’s discretion. The CCPA’s right-to-cure period sunsets on January 1, 2023, and Colorado’s sunsets on January 1, 2025.


Once enacted, the CTDPA will take effect on July 1, 2023, the same date as the CPA. There are still numerous proposed privacy laws pending throughout the United States. Given the recent enactment of the UCPA and now the CTDPA, the 2022 wave of state privacy laws could just be getting started. More than ever, it will be critical for businesses to have a working understanding of which state privacy laws apply to them, how they can comply, and how they can avoid regulatory inquiry. As more states pass privacy laws, the pressure on the federal government to enact a nationwide law will only grow.

For more information and other updates and alerts regarding privacy law developments, subscribe to Bradley’s privacy blog Online and On Point.

On March 18, 2022, President Biden issued a letter to California Gov. Gavin Newsom (the “March 18th letter”) requesting that he secure California’s computer systems and critical infrastructure in light of recent Russian cyberattacks against Ukraine. President Biden advised Newsom to gather his leadership team to discuss California’s cybersecurity and address several fundamental questions, including whether the California Public Utilities Commission (or other California agencies) sets minimum cybersecurity standards for California’s critical infrastructure.

President Biden further encouraged Newsom to adopt the standards set forth in the president’s May 2021 Executive Order, Improving the Nation’s Cybersecurity (the “May 2021 Executive Order”), to secure California’s computer systems and critical infrastructure.

Three days later, on March 21, 2022, the president issued a statement informing U.S. citizens that now is “a critical moment to accelerate our work to improve domestic cybersecurity and bolster our national resilience” (the “March 21st statement”). He averred that although the administration has made great efforts to strengthen U.S. national cyber defenses, it cannot achieve such an imperative goal alone. President Biden wrote that most of America’s critical infrastructure is owned and operated by the private sector and urged those owners and operators to fortify their cyber defenses immediately.

The March 21st statement was accompanied by a Fact Sheet in which the administration encouraged private companies to take specific actions to help protect U.S. critical services. Some of the suggested actions were included in the May 2021 Executive Order and March 18th letter. The most vital actions included:

  • Mandating multi-factor authentication on computer systems;
  • Deploying modern security tools on computers and devices;
  • Consulting cybersecurity professionals to ensure that systems are patched and protected against all known vulnerabilities;
  • Backing up data and ensuring that companies have offline backups;
  • Conducting exercises and drills of emergency plans;
  • Encrypting data;
  • Educating employees on how to detect cybersecurity events; and
  • Engaging proactively with a local FBI field office or a Cybersecurity and Infrastructure Security Agency’s (CISA) Regional Office to establish relationships in advance of cybersecurity events.
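Several of these steps can be partially automated. As a minimal illustration of the backup guidance (not an official CISA tool; the file layout is hypothetical), a script can record a cryptographic checksum when a backup is created and later confirm that the offline copy is still intact:

```python
import hashlib
from pathlib import Path


def checksum(path: Path) -> str:
    """SHA-256 digest of a backup file, read in chunks to handle large files."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()


def verify_backup(path: Path, recorded_digest: str) -> bool:
    """Confirm an offline backup still matches the digest recorded at creation.

    A mismatch indicates corruption or tampering (e.g., by ransomware),
    so the copy should not be trusted for restoration.
    """
    return checksum(path) == recorded_digest
```

Routinely verifying backups in this way is what turns "backing up data" into a defense that actually holds up during a ransomware incident.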

As emphasized in the March 18th letter and March 21st statement, state governments and private companies are currently at high risk for cyberattacks and should govern themselves accordingly. Taking this into consideration, companies operating in and around U.S. critical services and infrastructure should be aware of the administration’s comments and suggestions and should review their current cyber-defense protocols and procedures to ensure that the appropriate protections are in place. The CISA website provides helpful insight as to how private companies can help counter Russian cyberattacks.

For more information and other updates and alerts regarding privacy law developments, subscribe to Bradley’s privacy blog Online and On Point.

Preparing for the Tidal Wave and Bracing for the Tsunami: Utah Becomes the Fourth State to Pass Privacy Legislation

At last count, at least 39 states have introduced (or passed) comprehensive privacy legislation. After what was previously a watch-and-wait game of legislative whack-a-mole, we are now seeing this legislation passed and implemented more regularly and with greater speed.

Case in point: within the first two months of the new year, Senate Bill 227, titled the Utah Consumer Privacy Act (UCPA), passed both houses of the Utah Legislature and, as of March 3, awaits signature from Utah Gov. Spencer J. Cox. Once the bill is on his desk, Gov. Cox has 20 days to sign or veto it before it becomes law automatically. If enacted, the UCPA will make Utah the fourth state to enact general data privacy legislation in the United States, following California, Colorado, and Virginia. The UCPA would take effect on December 31, 2023.

The UCPA closely resembles the Virginia Consumer Data Privacy Act (VCDPA) and the Colorado Privacy Act (CPA), but also shares provisions with the California Consumer Privacy Act (CCPA) and the California Privacy Rights Act (CPRA).

You can read the full text of the UCPA here.

What does this mean for your business? We highlight key aspects of the UCPA below:

What Businesses Are Affected?

The UCPA would apply to all for-profit controllers and processors that generate annual revenue of at least $25 million, either (a) conduct business in the state or (b) produce products or services targeted to state residents, and satisfy one of the following two thresholds:

  1. In a calendar year, processes personal data of at least 100,000 state residents, or
  2. Derives over 50% of its gross revenue from the sale of personal data, and processes the personal data of at least 25,000 state residents.

The UCPA’s $25 million revenue floor adds a component to consider that the CCPA/CPRA, VCDPA, and CPA lack: a business must satisfy an annual revenue requirement in addition to a processing threshold.

Personal Data vs. Sensitive Data

Like the CCPA/CPRA, VCDPA, and CPA, the UCPA differentiates between “personal data” and “sensitive data.” The UCPA defines “sensitive data” as personal data revealing racial or ethnic origins, religious beliefs, sexual orientation, citizenship or immigration status, medical history or health information, biometric data, and specific geolocation data. However, the UCPA exempts the collection of personal data revealing racial or ethnic origins when processed by a “video communication service,” an undefined term. This carve-out has been in the UCPA since the Utah Legislature’s 2021 proposed bill.

Unlike the CPA and VCDPA, the UCPA does not require consent before a controller may lawfully process sensitive data, only that “clear notice” and an “opportunity to opt out” be provided beforehand.

Consumer Rights

The UCPA provides similar rights to existing state privacy laws:

  1. Right to Know/Access: Consumers may request whether a controller is processing their personal data and get access to the personal data.
  2. Right to Delete: Consumers can direct the controller to delete the personal data the consumer provided.
  3. Right to Transmit/Port: Similar to the VCDPA, a consumer can have the controller transfer their personal data to another controller where the processing is carried out by automated means.
  4. Right to Opt-Out: Consumers can opt out of the processing of their personal data for the purposes of targeted advertising and the sale of their personal data. Additionally, while not listed under the right to opt out, consumers also have the right to opt out of any processing of their sensitive data, barring any exemptions, as mentioned above.

Notably absent from the UCPA is a right to correction, in contrast to the other three states, all of which grant consumers the right to correct inaccuracies in the personal data processed by the controller.

No Data Protection Assessment Obligations

The UCPA does not require any risk or data protection assessment before processing consumer personal data. The CPA and VCDPA both require completion of data protection assessments where any processing presents a “heightened risk of harm to a consumer.” Similarly, the CCPA/CPRA directs the implementation of regulations for businesses to conduct “risk assessments” on a regular basis and a “cybersecurity audit” where processing “presents significant risk to consumers’ privacy or security.”

Penalties, Investigations and Amendment Procedures

In what is largely a point of contention for states seeking to enact privacy legislation, the UCPA does not grant a private right of action for any UCPA violation. Only the Utah attorney general may enforce the UCPA. Violating entities have a 30-day cure period before the Utah AG may initiate an action. In an enforcement action, the Utah AG may recover actual damages to the consumer and up to $7,500 for each UCPA violation. If multiple controllers or processors are involved in the same violation, each may be liable for the percentage of its respective fault.

Similar to the VCDPA, the UCPA does not grant any rulemaking authority to the Utah AG. However, the UCPA directs the Utah AG to compile a report that (a) evaluates the liability and enforcement provisions of UCPA, and (b) summarizes the data protected and not protected from UCPA. The Utah AG must then deliver this report to the Utah Legislature’s Business and Labor Interim Committee by July 1, 2025. This report will inform the Legislature if any amendments are warranted.


The UCPA has a multitude of exemptions. Below is a list of noteworthy entities and information exempt from the UCPA:

  1. Employee and Business-to-Business (B2B) Exemption: The UCPA applies only to personal data concerning state residents acting in an individual or household context. This contrasts with the CCPA, whose employee and B2B exemptions are slated to expire when the CPRA takes effect on January 1, 2023.
  2. Financial institutions, affiliates of financial institutions, and information regulated under GLBA
  3. Covered entities, business associates, and protected health information regulated under HIPAA
  4. Information regulated under FERPA
  5. Non-profit businesses


Given that another state data privacy law passed so swiftly in 2022, the UCPA is certainly not the last piece of legislation we will see this year. To date, Florida, Indiana, Oklahoma, and Wisconsin already have proposed privacy bills moving through their respective legislatures. It is likely only a matter of time before we are inundated with the complex patchwork of state laws that privacy experts have long predicted.

Defense Contractor Denied FCA Summary Judgment in First Test of DOJ’s New Civil Cyber-Fraud Initiative

On February 1, 2022, the United States District Court for the Eastern District of California ruled that a False Claims Act (FCA) case against defense contractor Aerojet Rocketdyne Holdings and Aerojet Rocketdyne Inc. (collectively “Aerojet”) could go forward on triable issues of fact as to whether noncompliance with government cybersecurity requirements is material to the government’s decisions to approve contracts. The federal court denied Aerojet’s motion for summary judgment and issued the first major ruling in an FCA case testing the Department of Justice’s new Civil Cyber-Fraud Initiative.

Announced in October 2021, the purpose of the government’s Civil Cyber-Fraud Initiative is to utilize the FCA to pursue cybersecurity-related fraud by government contractors and grant recipients. DOJ announced plans to focus on entities that knowingly misrepresent their cybersecurity practices or protocols, knowingly violate obligations to monitor and report cybersecurity incidents and breaches, or knowingly provide deficient cybersecurity products or services. In this case, the relator — the defendant’s former senior director of Cybersecurity, Compliance & Controls — alleged that Aerojet knew its cybersecurity programs fell short of Department of Defense and NASA acquisition regulations, which were part of contracts between Aerojet and the agencies.

Despite declining to intervene in the Aerojet case in June 2018, the government filed a statement of interest two weeks after it announced the Civil Cyber-Fraud Initiative, assailing Aerojet’s arguments that it was entitled to summary judgment. Notably, the government argued that the contractual deficiencies were a source of damages even if Aerojet otherwise complied with the contracts because “the government did not just contract for rocket engines, but also contracted with [Aerojet] to store the government’s technical data on a computer system that met certain cybersecurity requirements.” The government also argued that assertions that the entire defense industry is not compliant with cybersecurity requirements have no bearing on whether such compliance is material to the government’s payment decision in any particular case.

The court commented on how the relevant regulations required government contractors to implement specific safeguards to protect unclassified technical information from cybersecurity threats. Although the court acknowledged that Aerojet may have disclosed certain cybersecurity shortcomings to the government, the court questioned whether Aerojet failed to disclose key events and the results of audits showing gaps in Aerojet’s cybersecurity. The court also expressed concern as to whether Aerojet knowingly misrepresented its intention to comply with the cybersecurity provisions of its contracts in the first place. Given the new initiative, the filing of the statement of interest in this case, and this recent federal ruling, government contractors and grant recipients would be wise to review the cybersecurity requirements in their contracts, grants, and licenses to ensure compliance and avoid being caught in the snare of the government’s new focus on cybersecurity.

For more information on this developing case and other updates and alerts regarding privacy law developments, subscribe to Bradley’s privacy blog Online and On Point.

The Lloyd’s Market Association (the “LMA”) recently released four model clauses to exclude coverage for “war” from cyber insurance policies. The exclusions align with the requirement that all insurance policies written at Lloyd’s must exclude losses caused by war. Given the insurance industry’s weakening appetite for cyber risks, the issue for insureds is the extent to which the broad definition of “war” in these exclusions could give insurers wide latitude for denial of coverage beyond the traditional concept of “war” between sovereign states.

Standardizing definitions: war

The four exclusions together create four levels of coverage based on a consistent set of definitions for key terms. All four exclude cyber losses caused by “war,” defined broadly to mean:

the use of physical force by a state against another state or as part of a civil war, rebellion, revolution, insurrection, and/or

military or usurped power or confiscation or nationalization or requisition or destruction of or damage to property by or under the order of any government or public or local authority.

The definition emphasizes action directed by a “[sovereign] state [or] any government or public or local authority.” The full scope of “local authority” is unclear but is potentially far-reaching. For example, the Ninth Circuit once held that a war exclusion had not been triggered by actions of Hamas, because the “foreign terrorist organization” was not a sovereign state. But Hamas could well have been considered a “local authority [with] military or usurped power” — and losses due to actions of Hamas might therefore be excluded from coverage under the new LMA exclusions. Similarly, inclusion of terms such as “revolution” and “insurrection” has the potential to extend the scope of this exclusion beyond the traditional understanding of what constitutes “war.”

Standardizing definitions: cyber operations (and attribution)

Further, the exclusions each exclude losses caused by (some) “cyber operations,” the definition of which also focuses on state-to-state activity:

Cyber operation means the use of a computer system by or on behalf of a state to disrupt, deny, degrade, manipulate or destroy information in a computer system of or in another state.

Attribution of a cyber operation as being “by or on behalf of a state” is tricky. The Office of the Director of National Intelligence explained in a 2018 document that attribution is “painstaking” and “difficult” and that there is “[n]o simple technical process or automated solution.”

The exclusions prescribe that attribution be determined first by “the government of the state in which the computer system affected … is physically located.” Among other problems with this procedure is that such a state could itself be the perpetrator of the cyber operation. In the absence of the state’s attribution, “it shall be for the insurer to prove attribution.”

Four degrees of exclusion of cyber operations

The four clauses each use the same definitions, exclude war losses, and prescribe the same criteria for attribution of cyber operations. But the clauses differ in the degree to which each excludes losses from cyber operations.

  • Exclusion No. 1 (LMA5564) is the strictest. It excludes losses from all cyber operations.
  • Exclusion No. 2 (LMA5565) does cover — with specified coverage limits — losses that are not due to cyber operations that either: (1) are retaliatory between China, France, Germany, Japan, Russia, UK, or USA; or (2) have a “major detrimental impact” on a state’s security, defense, or “essential services.” The exclusion does not define either “retaliatory” or “major detrimental impact.”
  • Exclusion No. 3 (LMA5566) provides for the same losses as does Exclusion No. 2, but without specifying coverage limits.
  • Exclusion No. 4 (LMA5567) is the most generous (but is still restrictive). In addition to the coverage of Exclusion No. 3, it also covers effects on “bystanding cyber assets,” defined as:

a computer system used by the insured or its third-party service providers that is not physically located in an impacted state but is affected by a cyber operation.

These four levels would give insurers some flexibility to customize policies for customers. Still, none is very friendly to insureds, except through the background principle that an exclusion’s applicability must be proved by the insurer. We have presented before about the impacts of war exclusions (particularly on defense contractors). Such exclusions impact all insureds when cyber threats respect no borders.


We have written before about the insurance industry facing silent and systemic cyber risks. As insurers better map the risk landscape, we expect to see more variety and maturity in such exclusions. But the LMA war exclusion clauses suggest that insurers are — for now — taking a very cautious approach. Consequently — and as premiums for cyber insurance continue to rise — insureds should carefully determine whether their operations are sufficiently insured from foreseeable risks.

Contact Heather Wright or Andrew Tuggle with any questions or to discuss the new provisions’ potential impact on your business today. For updates and alerts regarding privacy law developments, subscribe to Bradley’s privacy blog Online and On Point.

ALERT: New State Privacy Requirements for Mortgages Funded After December 1, 2021

As of yesterday, any new Freddie Mac mortgage funded will need to comply with state Address Confidentiality Program (ACP) requirements. ACPs are state-sponsored programs designed to protect victims of crimes such as domestic abuse, sexual assault, stalking, or human trafficking from further harm. Recently, ACPs have been extended to other individuals, such as healthcare workers and public health officials. Although ACPs have been in effect since at least 1991, with Washington state being the first to adopt such a law, they have largely flown under the radar within many privacy compliance programs. However, these lesser-known statutes are now gaining recognition in the world of corporate compliance.

By keeping a victim’s home, work, and/or school address confidential, ACPs act as a shield to prevent perpetrators from finding – and continuing to harm – their victims. ACPs operate by providing a “designated address” for victims to use instead of their physical (or actual) address. When used properly, the designated address diverts a victim’s mail to a confidential third-party location (often a P.O. Box and/or a “lot number”), after which a state agency forwards the mail to the victim’s actual address. Additionally – and perhaps most importantly – ACPs prohibit those with knowledge of a victim’s location information from disclosing it to other parties. In this way, ACPs seek to protect the physical location and safety of victims.
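For a company building ACP handling into its systems, the core rule is simple: once a customer is flagged as an ACP participant, every mailing and disclosure must use the state-issued designated address, never the actual one. A minimal sketch of that rule (the field and function names are hypothetical, not drawn from any statute):

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class Customer:
    name: str
    actual_address: str
    # Set only for ACP participants: the state-issued designated address.
    acp_designated_address: Optional[str] = None


def mailing_address(customer: Customer) -> str:
    """Return the only address a company may use for correspondence.

    For ACP participants, the designated address must be used and the
    actual address must stay confidential and never be disclosed.
    """
    if customer.acp_designated_address:
        return customer.acp_designated_address
    return customer.actual_address
```

In a production system, the actual address of an ACP participant would also need restricted access controls, since the statutes prohibit disclosure, not just misuse in mailings.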

While the obligation to accept and use ACP “designated addresses” (and the corollary designation to keep actual addresses confidential) only applies to government entities in many states, there are a handful of states that apply these obligations to private entities as well.

Some private companies, however, have chosen to expand state ACP law protections to all customers who identify as victims, regardless of whether the underlying state law requires these obligations. Likewise, Freddie Mac, opting to broaden the scope of these obligations, released a bulletin on September 1, 2021, requiring all sellers to inform Freddie Mac of a borrower’s substitute ACP mailing address. Additionally, within five business days after the funding date, the seller must email Freddie Mac with the following information:

  • Freddie Mac loan number
  • Borrower name
  • Borrower ACP mailing address (including, when applicable, any lot number or required uniquely identifiable number)

Previously, Freddie Mac did not have a process for identifying borrowers participating in an ACP. Freddie Mac stated in its bulletin that the new guidance was in response to questions regarding its process regarding victim borrowers.

Freddie Mac also updated its delivery instructions for ULDD Data Point Borrower Mail To Address Same As Property Indicator (Sort ID 572) to specify that “false” should be selected when the mailing address is not the same as the mortgaged premises and to add a reference to the notification requirement (see guide impacts: Sections 1301.2 and 6302.9).
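In practical terms, the indicator comes down to a comparison between the borrower's mailing address and the mortgaged premises. A rough sketch of that logic (the function name and the deliberately naive address normalization are illustrative, not Freddie Mac's specification):

```python
def borrower_mail_to_same_as_property(mailing_address: str,
                                      property_address: str) -> str:
    """Compute the value for ULDD Sort ID 572 (Borrower Mail To Address
    Same As Property Indicator).

    Per the bulletin guidance, "false" is selected when the mailing
    address (e.g., an ACP designated address) is not the same as the
    mortgaged premises. Real systems would use proper address
    standardization rather than this simple case/whitespace comparison.
    """
    same = mailing_address.strip().lower() == property_address.strip().lower()
    return "true" if same else "false"
```

An ACP borrower using a designated P.O. Box would therefore always produce "false" here, which is what flags the loan for the additional notification requirement.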

CMMC 2.0 – Simplification and Flexibility of DoD Cybersecurity Requirements

Continuing Effort to Protect National Security Data and Networks

Evolving and increasing threats to U.S. defense data and national security networks have necessitated changes and refinements to the U.S. regulatory requirements intended to protect them.

In 2016, the U.S. Department of Defense (DoD) issued a Defense Federal Acquisition Regulation Supplement (DFARS) clause intended to better protect defense data and networks. In 2017, DoD began issuing a series of memoranda to further enhance protection of defense data and networks via the Cybersecurity Maturity Model Certification (CMMC). In December 2019, the Department of State, Directorate of Defense Trade Controls (DDTC) issued long-awaited guidance governing, in part, the minimum encryption requirements for storage, transport, and/or transmission of controlled unclassified information (CUI) and technical defense information (TDI) otherwise restricted by ITAR.

DFARS initiated the government’s efforts to protect national security data and networks by imposing specific NIST cybersecurity requirements on all DoD contractors with access to CUI, TDI, or a DoD network. Compliance with DFARS was based on contractor self-assessment.

CMMC provided a broad framework to enhance cybersecurity protection for the Defense Industrial Base (DIB). CMMC proposed a verification program to ensure that NIST-compliant cybersecurity protections were in place to protect CUI and TDI residing on DoD and DoD contractors’ networks. Unlike DFARS, CMMC initially required certification of compliance by an independent cybersecurity expert.

The DoD has announced an updated cybersecurity framework, referred to as CMMC 2.0. The announcement comes after a months-long internal review of the proposed CMMC framework. It still could take nine to 24 months for the final rule to take shape. But for now, CMMC 2.0 promises to be simpler to understand and easier to comply with.

Three Goals of CMMC 2.0

Broadly, CMMC 2.0 is similar to the earlier-proposed framework. Familiar elements include a tiered model, required assessments, and contractual implementation. But the new framework is intended to facilitate three goals identified by DoD’s internal review.

  • Simplify the CMMC standard and provide additional clarity on cybersecurity regulations, policy, and contracting requirements.
  • Focus on the most advanced cybersecurity standards and third-party assessment requirements for companies supporting the highest priority programs.
  • Increase DoD oversight of professional and ethical standards in the assessment ecosystem.

Key Changes under CMMC 2.0

The most impactful changes under CMMC 2.0 are:

  • A reduction from five to three security levels.
  • Reduced requirements for third-party certifications.
  • Allowances for Plans of Action and Milestones (POA&Ms).

CMMC 2.0 has only three levels of cybersecurity

An innovative feature of CMMC 1.0 had been the five-tiered model that tailored a contractor’s cybersecurity requirements according to the type and sensitivity of the information it would handle. CMMC 2.0 keeps this model, but eliminates the two “transitional” levels in order to reduce the total number of security levels to three. This change also makes it easier to predict which level will apply to a given contractor. At this time, it appears that:

  • Level 1 (Foundational) will apply to federal contract information (FCI) and will be similar to the old first level;
  • Level 2 (Advanced) will apply to controlled unclassified information (CUI) and will mirror NIST SP 800-171 (similar to, but simpler than, the old third level); and
  • Level 3 (Expert) will apply to more sensitive CUI and will be partly based on NIST SP 800-172 (possibly similar to the old fifth level).

Significantly, CMMC 2.0 focuses on cybersecurity practices, eliminating the few so-called “maturity processes” that had baffled many DoD contractors.

CMMC 2.0 relieves many certification requirements

Another feature of CMMC 1.0 had been the requirement that all DoD contractors undergo third-party assessment and certification. CMMC 2.0 is much less ambitious and allows Level 1 contractors — and even a subset of Level 2 contractors — to conduct only an annual self-assessment. It is worth noting that a subset of Level 2 contractors — those having “critical national security information” — will still be required to seek triennial third-party certification.
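To keep the tiers and their assessment requirements straight, the scheme described above can be summarized in a small lookup table. This is an illustrative sketch only; the level names and assessment types are drawn from the text above and DoD’s CMMC 2.0 announcement, and the details will not be fixed until the final rule takes shape:

```python
# Illustrative summary of the CMMC 2.0 tiers and assessments described above.
# A sketch for quick reference only; the final rule will control the details.
CMMC_2_LEVELS = {
    1: {
        "name": "Foundational",
        "applies_to": "Federal contract information (FCI)",
        "assessment": "Annual self-assessment",
    },
    2: {
        "name": "Advanced",
        "applies_to": "Controlled unclassified information (CUI); mirrors NIST SP 800-171",
        # A subset of Level 2 contractors handling critical national security
        # information must instead obtain triennial third-party certification.
        "assessment": "Annual self-assessment or triennial third-party certification",
    },
    3: {
        "name": "Expert",
        "applies_to": "More sensitive CUI; partly based on NIST SP 800-172",
        # Per DoD's CMMC 2.0 announcement (not detailed in the text above).
        "assessment": "Triennial government-led assessment",
    },
}

def required_assessment(level: int) -> str:
    """Look up the assessment regime for a given CMMC 2.0 level."""
    return CMMC_2_LEVELS[level]["assessment"]
```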

CMMC 2.0 reinstitutes POA&Ms

An initial objective of CMMC 1.0 had been that — by October 2025 — contractual requirements would be fully implemented by DoD contractors. There was no option for partial compliance. CMMC 2.0 reinstitutes a regime that will be familiar to many, by allowing for submission of Plans of Action and Milestones (POA&Ms). The DoD still intends to specify a baseline number of non-negotiable requirements. But a remaining subset will be addressable by a POA&M with clearly defined timelines. The announced framework even contemplates waivers “to exclude CMMC requirements from acquisitions for select mission-critical requirements.”

Operational takeaways for the defense industrial base

For many DoD contractors, CMMC 2.0 will not significantly impact their required cybersecurity practices — for FCI, focus on basic cyber hygiene; and for CUI, focus on NIST SP 800-171. But the new CMMC 2.0 framework dramatically reduces the number of DoD contractors that will need third-party assessments. It could also allow contractors to delay full compliance through the use of POA&Ms beyond 2025.

Increased Risk of Enforcement

Regardless of the proposed simplicity and flexibility of CMMC 2.0, DoD contractors need to remain vigilant in meeting the cybersecurity obligations of their respective CMMC 2.0 levels.

Immediately preceding the CMMC 2.0 announcement, the U.S. Department of Justice (DOJ) announced a new Civil Cyber-Fraud Initiative on October 6 to combat emerging cyber threats to the security of sensitive information and critical systems. In its announcement, the DOJ advised that it would pursue government contractors who fail to follow required cybersecurity standards.

As Bradley has previously reported in more detail, the DOJ plans to utilize the False Claims Act to pursue cybersecurity-related fraud by government contractors, or involving government programs, where entities or individuals put U.S. information or systems at risk by knowingly:

  • Providing deficient cybersecurity products or services;
  • Misrepresenting their cybersecurity practices or protocols; or
  • Violating obligations to monitor and report cybersecurity incidents and breaches.

The DOJ also expressed its intent to work closely on the initiative with other federal agencies, subject matter experts, and law enforcement partners throughout the government.

As a result, while CMMC 2.0 will provide some simplicity and flexibility in implementation and operations, U.S. government contractors need to be mindful of their cybersecurity obligations to avoid heightened enforcement risks.

Contact Andrew Tuggle or David Vance Lucas with any questions about the impact of CMMC 2.0 on your business.

FTC Finalizes Updated Safeguards Rule Under GLBA to Dramatically Expand Data Security Requirements and Scope of Rule

Until now, companies primarily regulated by the Federal Trade Commission (FTC) were given only vague directives to implement systems sufficient to safeguard customer data, coupled with FTC “recommendations” as to best practices. That is about to change with the FTC’s finalization of its proposed amendments to the Standards for Safeguarding Customer Information (Safeguards Rule) on October 27. The new requirements will become effective one year after the rule is published in the Federal Register, so companies should start planning for compliance now to avoid fire drills down the road.

The new Safeguards Rule is more aligned with the requirements imposed by the Federal Financial Institutions Examination Council (FFIEC) for banking and depository institutions and, in some respects, imposes more burdensome requirements. Companies subject to the FTC’s authority should start prepping now to ensure that their current data security practices and infrastructure — and those of their service providers — will survive FTC scrutiny.

Who Is Covered by the Amended Safeguards Rule?

The FTC’s jurisdiction applies to a surprisingly broad range of businesses. This updated rule applies to entities traditionally within the FTC’s jurisdiction for rulemaking and enforcement, which include non-banking (non-depository) institutions such as mortgage brokers, mortgage servicers, payday lenders, and other similar entities.

But the FTC’s jurisdiction does not end there, and in fact, the rule’s definition now encompasses companies that never traditionally would be considered “financial institutions.” For example, the scope of the new rule now broadly applies to businesses that bring together buyers and sellers of a product or service, potentially drawing in companies of all shapes and sizes, such as marketing companies. Furthermore, the FTC has previously determined that higher education institutions also fall within the definition of “financial institutions,” and thus are subject to the rule’s requirements, because higher education institutions participate in financial activities, such as making federal student loans.

Businesses operating in any of these industries should take note of these important changes and determine whether sweeping changes to existing information security programs are necessary.

What Are the Biggest Takeaways?

There are a number of changes contained in the final Safeguards Rule, but the biggest takeaways are:

  1. The definition of “financial institution” has been dramatically expanded to include “finders,” or entities that bring together buyers and sellers of any product or service for transactions;
  2. More specific requirements for information security programs were imposed, including encryption for data both in transit and at rest;
  3. A new small business exemption from the Safeguards Rule’s requirements was added;
  4. Mandatory requirements for risk assessments (which now must be set forth in writing) were established; and
  5. Required periodic reporting on information security programs to boards of directors or governing bodies was implemented.

We will explore some of these noteworthy changes in greater detail.

Expanded Definition of “Financial Institution”

The definition of “financial institution” has now been expanded to include a “finder,” which is defined as a company that “bring[s] together one or more buyers and sellers of any product or service for transactions that the parties themselves negotiate and consummate.” To support this change, the FTC reasoned that finders often possess sensitive consumer financial information and should be subject to the requirements of the Safeguards Rule in protecting it from unauthorized disclosure.

Commenters on the proposed rule expressed grave concerns that the term “finder” is overly broad and would inappropriately sweep large swaths of companies into the definition of a “financial institution.” The Association of National Advertisers also expressed concern that advertisers could constitute “finders” under this definition due to their role in connecting buyers and sellers. However, the FTC noted that although the definition is broad, its scope is significantly limited because the Safeguards Rule applies only to the information of customers and “it will not apply to finders that have only isolated interactions with consumers and that do not receive information from other financial institutions about those institutions’ customers.” It remains to be seen how broadly the term “finder” will be construed, and this could prove to be one of the biggest question marks about the scope of the new rule.

Specific Data Protection Requirements

Whereas the FTC previously left specific aspects of satisfactory information security systems up to the discretion of the business, the FTC now requires that financial institutions address the following:

  • Access controls;
  • Data inventory and classification;
  • Encryption;
  • Secure development practices;
  • Authentication;
  • Information disposal procedures;
  • Change management;
  • Testing; and
  • Incident response.

The biggest takeaway here is that the FTC is imposing more specific requirements, such as encryption, for the protection of sensitive customer information, whereas the previous Safeguards Rule allowed financial institutions to exercise discretion by referring to data protection in generalities. In addition, the rule’s encryption requirements, which include encrypting data both in transit and at rest, are more burdensome than the FFIEC’s proposed guidelines, which do not require banks to encrypt data at rest unless the institution’s risk assessment determines that such encryption is necessary.

Mandatory Periodic Reporting

The new rule requires that a financial institution’s chief information security officer report in writing, at least annually, to the financial institution’s board of directors or governing body regarding the following:

  • The overall status of the information security program and financial institution’s compliance with the Safeguards Rule; and
  • Material matters related to the information security program, addressing issues such as risk assessment, risk management and control decisions, service provider arrangements, results of testing, security events or violations and management’s responses thereto, and recommendations for changes in the information security program.

If the company does not have a board of directors or equivalent governing body, the chief information security officer must “make the report to a senior officer responsible for the financial institution’s information security program.”

Small Business Exemption

The new rule adds a “small business” exemption, which excludes businesses that “maintain customer information concerning” fewer than 5,000 consumers from the following requirements of the new rule:

  • Requiring a written risk assessment (314.4(b)(1));
  • Requiring continuous monitoring or annual penetration testing and biannual vulnerability assessment (314.4(d)(2));
  • Requiring a written incident response plan (314.4(h)); and
  • Requiring an annual written report by the chief information security officer (314.4(i)).

It remains to be seen, however, how many businesses will be able to take advantage of this exemption as a practical matter, especially businesses that are required to maintain consumer records for a certain number of years.
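As a rough illustration of how the 5,000-consumer threshold operates, the exemption check can be modeled as follows. This is a hypothetical sketch: the 5,000 figure and the section citations come from the rule itself, while the function and variable names are our own illustrative labels, not the rule’s terminology:

```python
# Hypothetical sketch of the Safeguards Rule small-business exemption.
# The 5,000-consumer threshold and section citations come from the rule;
# the function and field names below are illustrative only.
SMALL_BUSINESS_THRESHOLD = 5000

EXEMPTED_REQUIREMENTS = {
    "written risk assessment (314.4(b)(1))",
    "continuous monitoring / penetration testing (314.4(d)(2))",
    "written incident response plan (314.4(h))",
    "annual written CISO report (314.4(i))",
}

def exempted_requirements(consumer_count: int) -> set[str]:
    """Return the requirements an institution may skip under the exemption.

    Institutions maintaining customer information concerning fewer than
    5,000 consumers qualify; all others must satisfy every requirement.
    """
    if consumer_count < SMALL_BUSINESS_THRESHOLD:
        return set(EXEMPTED_REQUIREMENTS)
    return set()
```

Note that the exemption is all-or-nothing with respect to these four items; an institution at or above the threshold gets no relief from any of them.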

Other Noteworthy Changes

In addition to the important changes outlined above, there are several others to note. Although not intended to be exhaustive, the list of other changes includes:

  • The addition of a definition for “authorized user,” which means “any employee, contractor, agent, customer, or other person that is authorized to access any of your information systems or data.” This term was added in conjunction with specific data access restriction requirements and more specific requirements for monitoring anomalous patterns of usage by “authorized users.”
  • The definition of “security event” now includes the compromise of customer information in physical form, as opposed to only electronic form.
  • The new rule imposes mandatory requirements for risk assessments (which now must be set forth in writing). Risk assessments were already required, but requirements are now more explicit.


In light of these updates, financial institutions should review their policies and procedures, as well as their contracts with service providers, to ensure that all information security systems comply with the new, detailed security requirements of the amended Safeguards Rule. As always, an ounce of prevention on the front end will greatly reduce the risk of an FTC enforcement action or consumer litigation down the line.

For more information on this issue and other updates and alerts regarding privacy law developments, subscribe to Bradley’s privacy blog, Online and On Point.

A Fintech Leader’s Thoughts on the North Carolina Regulatory Sandbox Act

As part of Bradley’s continuing coverage of the North Carolina Sandbox Act, we wanted to know what community members and NC fintech aficionados thought about this proposed legislation. We posed six questions to Tariq Bokhari, an influential leader in the financial technology (fintech) industry, who serves as the executive director of the Carolina Fintech Hub (CFH). Read more of our conversation below on how this regulatory sandbox will impact North Carolina’s fintech industry.

Bradley: How will the NC Regulatory Sandbox Act affect fintech companies generally?

Bokhari: The premise behind the NC Regulatory Sandbox Act (Innovation Sandbox) is that innovators and startups in tech 1) have difficulty piloting new ideas in a fail-fast manner due to a regulatory system not designed for that, and 2) are viewed and set up as disruptive forces to incumbent stakeholders, rather than opportunities to partner with those incumbents in a win-win scenario. The Innovation Sandbox is designed to create tools that decrease both of those headwinds that are pervasive across the country, and in doing so create a competitive advantage for our region and all that reside within it. The Carolina Fintech Hub has championed this effort for several years now, and found like-minded partners like the NC Blockchain Initiative, because we strongly believe being the most entrepreneurial and nimble of the 50 states will position us as global leaders in technology and innovation.

Bradley: What products and services are applicable for this program?

Bokhari: Its scope can truly be anything that touches technology, although the initial focus will be on fintech, insurtech and blockchain. I envision this program being expanded after one or two years to include other areas, like possibly securities, thus making the program more comprehensive.

Bradley: With regulatory sandboxes already being set up for other states’ finance and insurance economies, do you see a possible playbook for North Carolina’s fintech industry?

Bokhari: There are a few differences that make NC’s Innovation Sandbox unique, including our focus on promoting partnership between our incumbents and startups rather than disruptive friction between them.

Bradley: Can you elaborate on what makes NC’s Innovation Sandbox unique?

Bokhari: With its unique sandbox approach, North Carolina decided to start simple while allowing for natural evolution, namely by embedding formalized accountability around innovation across NC via an Innovation Commission.

This Innovation Commission is designed to be centralized (not embedded in any one state agency), cross-representative (to maximize collaboration), lightweight in its design (very simple in its mandate) and serve as a clearing house of innovation requests and ideas (sourced from the industry with the help of established non-governmental organizations (NGOs)).

The Innovation Commission is really envisioned to have only two major tasks: 1) review the requests of those who apply to participate in any of the sandbox’s available tools, and if it deems the requests to have merit, route them to the appropriate regulating agency or agencies for ultimate decisioning; and 2) review requests to create new tools that enable further innovation, and if it deems the requests to have merit, route them to the appropriate regulating agency or agencies for ultimate decisioning.

Bradley: How will this proposed legislation change economic development in North Carolina?

Bokhari: In its simplest form, this legislation will create an Innovation Commission that will give North Carolina a significant advantage over every other state in recruiting and retaining tech companies. These companies will be able to perform certain activities with reduced governmental red tape here.

Bradley: How does Carolina Fintech Hub plan to help their fintech partners strike that balance between protecting consumers while promoting emerging fintech technologies and innovations in the field?

Bokhari: Defining the tools is the most challenging task for any state to address, so the “secret sauce” in our approach is not trying to assume what set of tools is needed up front. Instead of assuming, we use a platform that can react to the market demands for tools as they are recognized in this formal Innovation Commission structure, while still operating within the confines of the existing regulatory agency construct to avoid unneeded or complicated friction.

The proposed legislation envisions a tool that has also been incorporated in other states’ sandbox efforts to date. This tool would enable small-scale piloting of innovations without having to apply for what may otherwise be cumbersome licenses or having to build out large-scale compliance programs for certain regulatory frameworks.

In addition to the tool described above, two additional tools are in the hopper for near-term exploration once the Innovation Commission is established: 1) after successful completion of the centralized sandbox program, a startup receives a limited-scope “stamp of recognition” that can provide additional confidence to incumbent banks and institutions and their vendor risk management processes when they contemplate engaging the startup; and 2) a blockchain Innovation Sandbox use case, which still requires significant design and vetting. The goals here are aspirational: As the model matures, I envision not only freestanding startups taking advantage of the Innovation Sandbox, but also startups acquired or established as affiliates or subsidiaries of big financial institutions.

Bradley: Do you anticipate the Regulatory Sandbox Act will slow the pace of companies integrating emerging technology, including blockchain technology, in North Carolina’s fintech space? And will the act attract companies to relocate to NC?

Bokhari: I am highly confident this legislation will multiply the pace of innovative, emerging technology across NC, as well as our ability to recruit nationally and internationally, for a simple reason: Companies will be able to operate with less friction, and capitalize on more partnerships with our incumbents in NC, more so than in any other state. I am most interested in the blockchain aspects of the regulation. I am seeing a spike in smart contract and crypto activity lately, but most places across the country don’t even know this activity is happening, let alone have a sophisticated system to champion it statewide.

Bradley is closely monitoring this legislation and will provide continuing coverage of the proposed bill in the coming weeks and months. If passed, the legislation could become effective October 1, 2021.