The Lloyd’s Market Association (the “LMA”) recently released four model clauses to exclude coverage for “war” from cyber insurance policies. The exclusions align with the requirement that all insurance policies written at Lloyd’s must exclude losses caused by war. Given the insurance industry’s weakening appetite for cyber risks, the issue for insureds is the extent to which the broad definition of “war” in these exclusions could give insurers wide latitude for denial of coverage beyond the traditional concept of “war” between sovereign states.

Standardizing definitions: war

The four exclusions together create four levels of coverage based on a consistent set of definitions for key terms. All four exclude cyber losses caused by “war,” defined broadly to mean:

the use of physical force by a state against another state or as part of a civil war, rebellion, revolution, insurrection, and/or

military or usurped power or confiscation or nationalization or requisition or destruction of or damage to property by or under the order of any government or public or local authority.

The definition emphasizes action directed by a “[sovereign] state [or] any government or public or local authority.” The full scope of “local authority” is unclear but potentially far-reaching. For example, the Ninth Circuit once held that a war exclusion had not been triggered by actions of Hamas, because the “foreign terrorist organization” was not a sovereign state. But Hamas could well have been considered a “local authority [with] military or usurped power” — and losses due to actions of Hamas might therefore be excluded from coverage under the new LMA exclusions. Similarly, the inclusion of terms such as “revolution” and “insurrection” has the potential to extend the scope of the exclusion beyond the traditional understanding of what constitutes “war.”

Standardizing definitions: cyber operations (and attribution)

Further, each of the exclusions bars losses caused by at least some “cyber operations,” a term whose definition also focuses on state-to-state activity:

Cyber operation means the use of a computer system by or on behalf of a state to disrupt, deny, degrade, manipulate or destroy information in a computer system of or in another state.

Attribution of a cyber operation as being “by or on behalf of a state” is tricky. The Office of the Director of National Intelligence explained in a 2018 document that attribution is “painstaking” and “difficult” and that there is “[n]o simple technical process or automated solution.”

The exclusions prescribe that attribution be determined first by “the government of the state in which the computer system affected … is physically located.” Among other problems with this procedure is that such a state could itself be the perpetrator of the cyber operation. In the absence of the state’s attribution, “it shall be for the insurer to prove attribution.”

Four degrees of exclusion of cyber operations

The four clauses each use the same definitions, exclude war losses, and prescribe the same criteria for attribution of cyber operations. But the clauses differ in the degree to which each excludes losses from cyber operations.

  • Exclusion No. 1 (LMA5564) is the strictest. It excludes losses from all cyber operations.
  • Exclusion No. 2 (LMA5565) covers — subject to specified coverage limits — losses from cyber operations, except those that either: (1) are retaliatory between China, France, Germany, Japan, Russia, the UK, or the USA; or (2) have a “major detrimental impact” on a state’s security, defense, or “essential services.” The exclusion defines neither “retaliatory” nor “major detrimental impact.”
  • Exclusion No. 3 (LMA5566) covers the same losses as Exclusion No. 2, but without specifying coverage limits.
  • Exclusion No. 4 (LMA5567) is the most generous (but is still restrictive). In addition to the coverage of Exclusion No. 3, it also covers effects on “bystanding cyber assets,” defined as:

a computer system used by the insured or its third-party service providers that is not physically located in an impacted state but is affected by a cyber operation.

These four levels would give insurers some flexibility to customize policies for customers. Still, none is very friendly to insureds, except through the background principle that the insurer must prove an exclusion applies. We have presented before on the impact of war exclusions (particularly on defense contractors). Because cyber threats respect no borders, such exclusions affect all insureds.

Outlook

We have written before about the insurance industry facing silent and systemic cyber risks. As insurers better map the risk landscape, we expect to see more variety and maturity in such exclusions. But the LMA war exclusion clauses suggest that insurers are — for now — taking a very cautious approach. Consequently — and as premiums for cyber insurance continue to rise — insureds should carefully determine whether their operations are sufficiently insured from foreseeable risks.

Contact Heather Wright or Andrew Tuggle with any questions or to discuss the new provisions’ potential impact on your business today. For updates and alerts regarding privacy law developments, subscribe to Bradley’s privacy blog Online and On Point.

ALERT: New State Privacy Requirements for Mortgages Funded After December 1, 2021

As of yesterday, any newly funded Freddie Mac mortgage will need to comply with state Address Confidentiality Program (ACP) requirements. ACPs are state-sponsored programs designed to protect victims of crimes such as domestic abuse, sexual assault, stalking, or human trafficking from further harm. Recently, ACPs have been extended to other individuals, such as healthcare workers and public health officials. Although ACPs have been in effect since at least 1991, when Washington state became the first to adopt such a law, they have largely flown under the radar within many privacy compliance programs. However, these lesser-known statutes are now gaining recognition in the world of corporate compliance.

By keeping a victim’s home, work, and/or school address confidential, ACPs act as a shield to prevent perpetrators from finding – and continuing to harm – their victims. ACPs operate by providing a “designated address” for victims to use instead of their physical (or actual) address. When used properly, the designated address diverts a victim’s mail to a confidential third-party location (often a P.O. Box and/or a “lot number”), after which a state agency forwards the mail to the victim’s actual address. Additionally – and perhaps most importantly – ACPs prohibit those with knowledge of a victim’s location information from disclosing it to other parties. In this way, ACPs seek to protect the physical location and safety of victims.

While the obligation to accept and use ACP “designated addresses” (and the corollary obligation to keep actual addresses confidential) applies only to government entities in many states, a handful of states extend these obligations to private entities as well.

Some private companies, however, have chosen to extend state ACP law protections to all customers who identify as victims, regardless of whether the underlying state law requires it. Likewise, Freddie Mac, opting to broaden the scope of these obligations, released a bulletin on September 1, 2021, requiring all sellers to inform Freddie Mac of a borrower’s substitute ACP mailing address. Additionally, within five business days after the funding date, the seller must email Freddie Mac with the following information:

  • Freddie Mac loan number
  • Borrower name
  • Borrower ACP mailing address (including, when applicable, any lot number or required uniquely identifiable number)

Previously, Freddie Mac did not have a process for identifying borrowers participating in an ACP. Freddie Mac stated in its bulletin that the new guidance was issued in response to questions about its process for victim borrowers.

Freddie Mac also updated its delivery instructions for ULDD Data Point Borrower Mail To Address Same As Property Indicator (Sort ID 572) to specify that “false” should be selected when the mailing address is not the same as the mortgaged premises and to add a reference to the notification requirement (see guide impacts: Sections 1301.2 and 6302.9).

CMMC 2.0 – Simplification and Flexibility of DoD Cybersecurity Requirements

Continuing Effort to Protect National Security Data and Networks

Evolving and increasing threats to U.S. defense data and national security networks have necessitated changes and refinements to the U.S. regulatory requirements intended to protect them.

In 2016, the U.S. Department of Defense (DoD) issued a Defense Federal Acquisition Regulation Supplement (DFARS) intended to better protect defense data and networks. In 2017, DoD began issuing a series of memoranda to further enhance protection of defense data and networks via the Cybersecurity Maturity Model Certification (CMMC). In December 2019, the Department of State, Directorate of Defense Trade Controls (DDTC) issued long-awaited guidance governing, in part, the minimum encryption requirements for storage, transport, and/or transmission of controlled unclassified information (CUI) and technical defense information (TDI) otherwise restricted by ITAR.

DFARS initiated the government’s efforts to protect national security data and networks by imposing specific NIST cybersecurity requirements on all DoD contractors with access to CUI, TDI, or a DoD network. Compliance with DFARS was based on contractor self-assessment.

CMMC provided a broad framework to enhance cybersecurity protection for the Defense Industrial Base (DIB). CMMC proposed a verification program to ensure that NIST-compliant cybersecurity protections were in place to protect CUI and TDI that reside on DoD and DoD contractors’ networks. Unlike DFARS, CMMC initially required certification of compliance by an independent cybersecurity expert.

The DoD has announced an updated cybersecurity framework, referred to as CMMC 2.0. The announcement comes after a months-long internal review of the proposed CMMC framework. It still could take nine to 24 months for the final rule to take shape. But for now, CMMC 2.0 promises to be simpler to understand and easier to comply with.

Three Goals of CMMC 2.0

Broadly, CMMC 2.0 is similar to the earlier-proposed framework. Familiar elements include a tiered model, required assessments, and contractual implementation. But the new framework is intended to facilitate three goals identified by DoD’s internal review.

  • Simplify the CMMC standard and provide additional clarity on cybersecurity regulations, policy, and contracting requirements.
  • Focus on the most advanced cybersecurity standards and third-party assessment requirements for companies supporting the highest priority programs.
  • Increase DoD oversight of professional and ethical standards in the assessment ecosystem.

Key Changes under CMMC 2.0

The most impactful changes of CMMC 2.0 are

  • A reduction from five to three security levels.
  • Reduced requirements for third-party certifications.
  • Allowances for plans of actions and milestones (POA&Ms).

CMMC 2.0 has only three levels of cybersecurity

An innovative feature of CMMC 1.0 had been the five-tiered model that tailored a contractor’s cybersecurity requirements according to the type and sensitivity of the information it would handle. CMMC 2.0 keeps this model, but eliminates the two “transitional” levels in order to reduce the total number of security levels to three. This change also makes it easier to predict which level will apply to a given contractor. At this time, it appears that:

  • Level 1 (Foundational) will apply to federal contract information (FCI) and will be similar to the old first level;
  • Level 2 (Advanced) will apply to controlled unclassified information (CUI) and will mirror NIST SP 800-171 (similar to, but simpler than, the old third level); and
  • Level 3 (Expert) will apply to more sensitive CUI and will be partly based on NIST SP 800-172 (possibly similar to the old fifth level).

Significantly, CMMC 2.0 focuses on cybersecurity practices, eliminating the few so-called “maturity processes” that had baffled many DoD contractors.

CMMC 2.0 relieves many certification requirements

Another feature of CMMC 1.0 had been the requirement that all DoD contractors undergo third-party assessment and certification. CMMC 2.0 is much less ambitious and allows Level 1 contractors — and even a subset of Level 2 contractors — to conduct only an annual self-assessment. It is worth noting that a subset of Level 2 contractors — those having “critical national security information” — will still be required to seek triennial third-party certification.

CMMC 2.0 reinstitutes POA&Ms

An initial objective of CMMC 1.0 had been that — by October 2025 — contractual requirements would be fully implemented by DoD contractors. There was no option for partial compliance. CMMC 2.0 reinstitutes a regime that will be familiar to many, by allowing for submission of Plans of Actions and Milestones (POA&Ms). The DoD still intends to specify a baseline number of non-negotiable requirements. But a remaining subset will be addressable by a POA&M with clearly defined timelines. The announced framework even contemplates waivers “to exclude CMMC requirements from acquisitions for select mission-critical requirements.”

Operational takeaways for the defense industrial base

For many DoD contractors, CMMC 2.0 will not significantly impact their required cybersecurity practices — for FCI, focus on basic cyber hygiene; and for CUI, focus on NIST SP 800-171. But the new CMMC 2.0 framework dramatically reduces the number of DoD contractors that will need third-party assessments. It could also allow contractors to delay full compliance through the use of POA&Ms beyond 2025.

Increased Risk of Enforcement

Regardless of the proposed simplicity and flexibility of CMMC 2.0, DoD contractors need to remain vigilant to meet their respective CMMC 2.0 level cybersecurity obligations.

Immediately preceding the CMMC 2.0 announcement, the U.S. Department of Justice (DOJ) announced a new Civil Cyber-Fraud Initiative on October 6 to combat emerging cyber threats to the security of sensitive information and critical systems. In its announcement, the DOJ advised that it would pursue government contractors who fail to follow required cybersecurity standards.

As Bradley has previously reported in more detail, the DOJ plans to utilize the False Claims Act to pursue cybersecurity-related fraud by government contractors or involving government programs, where entities or individuals put U.S. information or systems at risk by knowingly:

  • Providing deficient cybersecurity products or services
  • Misrepresenting their cybersecurity practices or protocols, or
  • Violating obligations to monitor and report cybersecurity incidents and breaches.

The DOJ also expressed its intent to work closely on the initiative with other federal agencies, subject matter experts, and law enforcement partners throughout the government.

As a result, while CMMC 2.0 will provide some simplicity and flexibility in implementation and operations, U.S. government contractors need to be mindful of their cybersecurity obligations to avoid heightened enforcement risks.

Contact Andrew Tuggle or David Vance Lucas with any questions about the impact of CMMC 2.0 on your business.

FTC Finalizes Updated Safeguards Rule Under GLBA to Dramatically Expand Data Security Requirements and Scope of Rule

Until now, companies primarily regulated by the Federal Trade Commission (FTC) were given only vague directives to implement systems sufficient to safeguard customer data, coupled with FTC “recommendations” as to best practices. That is about to change with the FTC’s finalization of its proposed amendments to the Standards for Safeguarding Customer Information (Safeguards Rule) on October 27. The new requirements will become effective one year after the rule is published in the Federal Register, so companies should start planning for compliance now to avoid fire drills down the road.

The new Safeguards Rule is more aligned with the requirements imposed by the Federal Financial Institutions Examination Council (FFIEC) for banking and depository institutions and, in some respects, imposes more burdensome requirements. Companies subject to the FTC’s authority should start prepping now to ensure that their current data security practices and infrastructure — and those of their service providers — will survive FTC scrutiny.

Who Is Covered by the Amended Safeguards Rule?

The FTC’s jurisdiction applies to a surprisingly broad range of businesses. This updated rule applies to entities traditionally within the FTC’s jurisdiction for rulemaking and enforcement, which include non-banking (non-depository) institutions such as mortgage brokers, mortgage servicers, payday lenders, and other similar entities.

But the FTC’s jurisdiction does not end there, and in fact, the rule’s definition now encompasses companies that never traditionally would be considered “financial institutions.” For example, the scope of the new rule now broadly applies to businesses that bring together buyers and sellers of a product or service, potentially drawing in companies of all shapes and sizes, such as marketing companies. Furthermore, the FTC has previously determined that higher education institutions also fall within the definition of “financial institutions,” and thus are subject to the rule’s requirements, because higher education institutions participate in financial activities, such as making federal student loans.

Businesses operating in any of these industries should take note of these important changes and determine whether sweeping changes to existing information security programs are necessary.

What Are the Biggest Takeaways?

There are a number of changes contained in the final Safeguards Rule, but the biggest takeaways are:

  1. The definition of “financial institution” has been dramatically expanded to include “finders,” or entities that bring together buyers and sellers of any product or service for transactions;
  2. More specific requirements for information security programs were imposed, including encryption for data both in transit and at rest;
  3. A new small business exception to the Safeguards Rule’s requirements was added;
  4. Mandatory requirements for risk assessments (which now must be set forth in writing) were established; and
  5. Required periodic reporting on information security programs to boards of directors or governing bodies was implemented.

We will explore some of these noteworthy changes in greater detail.

Expanded Definition of “Financial Institution”

The definition of “financial institution” has now been expanded to include a “finder,” which is defined as a company that “bring[s] together one or more buyers and sellers of any product or service for transactions that the parties themselves negotiate and consummate.” To support this change, the FTC reasoned that finders often possess sensitive consumer financial information and should be subject to the requirements of the Safeguards Rule in protecting it from unauthorized disclosure.

Commenters on the proposed rule expressed grave concerns that the term “finder” is overly broad and would inappropriately sweep large swaths of companies into the definition of a “financial institution.” The Association of National Advertisers also expressed concern that advertisers could constitute “finders” under this definition due to their role in connecting buyers and sellers. However, the FTC noted that although the definition is broad, its scope is significantly limited because the Safeguards Rule applies only to the information of customers, and “it will not apply to finders that have only isolated interactions with consumers and that do not receive information from other financial institutions about those institutions’ customers.” It remains to be seen how broadly the term “finder” will be construed, and this could prove to be one of the biggest question marks in the scope of the new rule.

Specific Data Protection Requirements

Whereas the FTC previously left specific aspects of satisfactory information security systems up to the discretion of the business, the FTC now requires that financial institutions address the following:

  • Access controls;
  • Data inventory and classification;
  • Encryption;
  • Secure development practices;
  • Authentication;
  • Information disposal procedures;
  • Change management;
  • Testing; and
  • Incident response.

The biggest takeaway here is that the FTC is imposing more specific requirements, such as encryption, for the protection of sensitive customer information, whereas the previous Safeguards Rule allowed financial institutions to exercise discretion by referring to data protection in generalities. In addition, the rule’s encryption requirements, which include encrypting data both in transit and at rest, are more burdensome than the FFIEC’s proposed guidelines, which do not require banks to encrypt data at rest unless the institution’s risk assessment determines that such encryption is necessary.
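As a purely technical aside (illustration, not legal guidance): “encryption at rest” simply means that stored data exists only in ciphertext form, with decryption keys held separately. A minimal Python sketch of the idea, assuming the widely used third-party `cryptography` library; the record fields and key-handling shown are hypothetical placeholders:

```python
# Illustrative sketch of at-rest encryption using the third-party
# "cryptography" library's Fernet recipe (authenticated symmetric
# encryption). Field names and key handling are hypothetical; in
# practice the key would come from a key-management service.
from cryptography.fernet import Fernet

key = Fernet.generate_key()
cipher = Fernet(key)

record = b'{"customer_id": "12345", "ssn": "000-00-0000"}'

# Encrypt before writing to disk or a database (data at rest).
token = cipher.encrypt(record)

# Decrypt only when an authorized process needs the plaintext.
assert cipher.decrypt(token) == record
```

Transport-layer protection (TLS) would separately cover the “in transit” half of the requirement; the point of the sketch is only that the two protections are distinct controls.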

Mandatory Periodic Reporting

The new rule requires that a financial institution’s chief information security officer report in writing, at least annually, to the financial institution’s board of directors or governing body regarding the following:

  • The overall status of the information security program and financial institution’s compliance with the Safeguards Rule; and
  • Material matters related to the information security program, addressing issues such as risk assessment, risk management and control decisions, service provider arrangements, results of testing, security events or violations and management’s responses thereto, and recommendations for changes in the information security program.

If the company does not have a board of directors or equivalent governing body, the chief information security officer must “make the report to a senior officer responsible for the financial institution’s information security program.”

Small Business Exemption

The new rule adds a “small business” exemption, which excludes businesses that “maintain customer information concerning” fewer than 5,000 consumers from the following requirements of the new rule:

  • Requiring a written risk assessment (314.4(b)(1));
  • Requiring continuous monitoring or annual penetration testing and biannual vulnerability assessment (314.4(d)(2));
  • Requiring a written incident response plan (314.4(h)); and
  • Requiring an annual written report by the chief information security officer (314.4(i)).

It remains to be seen, however, how many businesses will be able to take advantage of this exemption as a practical matter, especially businesses that are required to maintain consumer records for a certain number of years.

Other Noteworthy Changes

In addition to the important changes outlined above, there are several other changes to note. Although not intended to be exhaustive, the list of other changes includes:

  • The addition of a definition for “authorized user,” which means “any employee, contractor, agent, customer, or other person that is authorized to access any of your information systems or data.” This term was added in conjunction with specific data access restriction requirements and more specific requirements for monitoring anomalous patterns of usage by “authorized users.”
  • The definition of “security event” now includes the compromise of customer information in physical form, as opposed to only electronic form.
  • The new rule imposes mandatory requirements for risk assessments (which now must be set forth in writing). Risk assessments were already required, but requirements are now more explicit.

Takeaways

In light of these updates, financial institutions should review their policies and procedures, as well as their contracts with service providers, to ensure that all information security systems comply with the new, detailed security requirements of the amended Safeguards Rule. As always, an ounce of prevention on the front end will greatly reduce the risk of an FTC enforcement action or consumer litigation down the line.

For more information on this issue and other updates and alerts regarding privacy law developments, subscribe to Bradley’s privacy blog, Online and On Point.

A Fintech Leader’s Thoughts on the North Carolina Regulatory Sandbox Act

As part of Bradley’s continuing coverage of the North Carolina Sandbox Act, we wanted to know what community members and NC fintech aficionados thought about this proposed legislation. We posed six questions to Tariq Bokhari, an influential leader in the financial technology (fintech) industry, who serves as the executive director of the Carolina Fintech Hub (CFH). Read more of our conversation below on how this regulatory sandbox will impact North Carolina’s fintech industry.

Bradley: How will the NC Regulatory Sandbox Act affect fintech companies generally?

Bokhari: The premise behind the NC Regulatory Sandbox Act (Innovation Sandbox) is that innovators and startups in tech 1) have difficulty piloting new ideas in a fail-fast manner due to a regulatory system not designed for that, and 2) are viewed and set up as disruptive forces to incumbent stakeholders, rather than opportunities to partner with those incumbents in a win-win scenario. The Innovation Sandbox is designed to create tools that decrease both of those headwinds that are pervasive across the country, and in doing so create a competitive advantage for our region and all that reside within it. The Carolina Fintech Hub has championed this effort for several years now, and found like-minded partners like the NC Blockchain Initiative, because we strongly believe being the most entrepreneurial and nimble of the 50 states will position us as global leaders in technology and innovation.

Bradley: What products and services are applicable for this program?

Bokhari: Its scope can truly be anything that touches technology, although the initial focus will be on fintech, insurtech and blockchain. I envision this program being expanded after one or two years to include other areas, like possibly securities, thus making the program more comprehensive.

Bradley: With regulatory sandboxes already being set up for other states’ finance and insurance economies, do you see a possible playbook for North Carolina’s fintech industry?

Bokhari: There are a few unique differences with NC’s Innovation Sandbox, including our unique focus on promoting the partnership between our incumbents and startups rather than disruptive friction between them.

Bradley: Can you elaborate on what makes NC’s Innovation Sandbox unique?

Bokhari: With its unique sandbox approach, North Carolina decided to start simple while allowing for natural evolution, namely by embedding formalized accountability around innovation across NC via an Innovation Commission.

This Innovation Commission is designed to be centralized (not embedded in any one state agency), cross-representative (to maximize collaboration), lightweight in its design (very simple in its mandate) and serve as a clearing house of innovation requests and ideas (sourced from the industry with the help of established non-governmental organizations (NGOs)).

The Innovation Commission is really envisioned to have only two major tasks: 1) review the requests of those who apply to participate in any of the sandbox’s available tools, and if it deems the requests to have merit, route them to the appropriate regulating agency or agencies for ultimate decisioning; and 2) review requests to create new tools that enable further innovation, and if it deems the requests to have merit, route them to the appropriate regulating agency or agencies for ultimate decisioning.

Bradley: How will this proposed legislation change economic development in North Carolina?

Bokhari: In its simplest form, this legislation will create an Innovation Commission that will give North Carolina a significant advantage over every other state in recruiting and retaining tech companies. These companies will be able to perform certain activities with reduced governmental red tape here.

Bradley: How does Carolina Fintech Hub plan to help its fintech partners strike the balance between protecting consumers and promoting emerging fintech technologies and innovations in the field?

Bokhari: Defining the tools is the most challenging task for any state to address, so the “secret sauce” in our approach is not trying to assume what set of tools is needed up front. Instead of assuming, we use a platform that can react to the market demands for tools as they are recognized in this formal Innovation Commission structure, while still operating within the confines of the existing regulatory agency construct to avoid unneeded or complicated friction.

The proposed legislation envisions a tool that has also been incorporated in other states’ sandbox efforts to date. This tool would enable small-scale piloting of innovations without having to apply for what may otherwise be cumbersome licenses or having to build out large-scale compliance programs for certain regulatory frameworks.

In addition to the tool described above, two additional tools are in the hopper for near-term exploration once the Innovation Commission is established: 1) after successful completion of the centralized sandbox program, a startup receives a limited-scope “stamp of recognition” that can provide additional confidence to incumbent banks and institutions and their vendor risk management processes when they contemplate engaging the startup; and 2) a blockchain Innovation Sandbox use case, which still requires significant design and vetting. The goals here are aspirational: I envision not only freestanding startups taking advantage of the Innovation Sandbox, but also, as the model matures, startups acquired or established as affiliates or subsidiaries of big financial institutions utilizing it as well.

Bradley: Do you anticipate the Regulatory Sandbox Act will slow the pace of companies integrating emerging technology, including blockchain technology, in North Carolina’s fintech space? And will the act attract companies to relocate to NC?

Bokhari: I am highly confident this legislation will multiply the pace of innovative, emerging technology across NC, as well as our ability to recruit nationally and internationally, for a simple reason: Companies will be able to operate with less friction, and capitalize on more partnerships with our incumbents in NC, more so than in any other state. I am most interested in the blockchain aspects of the regulation. I am seeing a spike in smart contract and crypto activity lately, but most places across the country don’t even know this activity is happening, let alone have a sophisticated system to champion it statewide.

Bradley is closely monitoring this legislation and will provide continuing coverage of the proposed bill in the coming weeks and months. If passed, the legislation could become effective October 1, 2021.

Another Data Privacy Law? Colorado Enacts the Colorado Privacy Act

Colorado became the third state to enact comprehensive data privacy legislation when Gov. Jared Polis signed the Colorado Privacy Act (CPA) on July 8, 2021. The CPA shares similarities with its stateside predecessors, the California Consumer Privacy Act (CCPA), the California Privacy Rights Act (CPRA), and the Virginia Consumer Data Protection Act (VCDPA), as well as the European Union’s General Data Protection Regulation (GDPR). But the CPA’s nuances must be considered as companies subject to these statutes craft holistic compliance programs.

The CPA goes into effect on July 1, 2023. But, given its complexity, the time for companies to start preparing is now. Here are some answers to questions about the scope of the new law, the consumer rights it provides, the obligations it imposes on businesses, and its enforcement methods.

Does the CPA apply to my business?

The CPA’s jurisdictional scope is most like the VCDPA’s. The CPA applies to any “controller” – defined as an entity that “determines the purposes for and means of processing personal data” – that “conducts business in Colorado” or produces or delivers products or services “intentionally targeted” to Colorado residents and either (1) controls or processes the personal data of 100,000+ “consumers” each calendar year; or (2) controls or processes the personal data of 25,000+ consumers and derives revenue or receives discounts from selling personal data.

“Personal data” is defined as “information that is linked or reasonably linkable to an identified or identifiable individual” other than “publicly available information” or “de-identified data.” The CPA defines a “consumer” as a “Colorado resident acting only in an individual or household context.”
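The CPA carves “de-identified data” out of the definition of personal data. Whether a dataset qualifies as de-identified is ultimately a legal question, but one common engineering building block is pseudonymization of direct identifiers with a keyed hash. A minimal sketch, purely for illustration (the key and field names are hypothetical, and keyed hashing alone does not guarantee legal de-identification):

```python
import hashlib
import hmac

def pseudonymize(identifier: str, secret_key: bytes) -> str:
    """Replace a direct identifier with a keyed hash (HMAC-SHA256).

    The output is stable for a given key, so records can still be
    joined across systems, but the original value cannot be read
    back out without the key.
    """
    return hmac.new(secret_key, identifier.encode("utf-8"), hashlib.sha256).hexdigest()

# Hypothetical record and key; in practice the key would be stored
# separately from the data and rotated per policy.
key = b"store-this-key-separately"
record = {"email": "jane@example.com", "zip": "80202"}
record["email"] = pseudonymize(record["email"], key)
```

Because the hash is deterministic per key, analytics that only need to count or join on an identifier can run without ever handling the raw value.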

The CPA provides entity-level exemptions to air carriers and national securities associations, among others. Unlike the CCPA, CPRA, and VCDPA, the CPA does not provide an entity-level exemption to non-profit organizations.

How do I handle consumer requests regarding their personal data?

The CPA provides consumers with the right to submit authenticated requests to a controller to (1) opt-out of certain processing of their personal data; (2) access their personal data and confirm if it is being processed; (3) correct inaccuracies in their personal data; (4) delete their personal data; and (5) obtain their personal data in a portable format. A controller must inform the consumer of any actions taken or not taken in response within certain timelines.

Like the VCDPA and unlike the CCPA and CPRA, the CPA provides consumers with the right to appeal a controller’s decision concerning an authenticated request. Controllers must set up internal processes for handling such appeals.

What are a consumer’s opt-out rights?

The CPA provides consumers with the right to opt-out of the processing of personal data for: (1) sale; (2) targeted advertising; or (3) profiling. The final two opt-out rights are also found in the VCDPA and CPRA, but not the CCPA.

Like the CCPA’s definition, the CPA’s definition of the “sale” of personal data is broad: “the exchange of personal data for monetary or other valuable consideration by a controller to a third party.” But the CPA’s exceptions to this definition are much broader. Under the CPA, a controller does not sell personal data by disclosing personal data (1) to an affiliate; (2) to a “processor” that processes the personal data on the controller’s behalf; (3) to a third party for “purposes of providing a product or service” that the consumer requests; (4) that a consumer “directs the controller to disclose or intentionally discloses by using the controller to interact” with a third party; or (5) that the consumer “intentionally made available … to the general public via a channel of mass media.”

When do I have to obtain opt-in consent from a consumer?

The CPA requires that a controller obtain opt-in consent before processing (1) “sensitive” data; (2) the personal data of a “known” child; or (3) personal data “for purposes that are not reasonably necessary to or compatible with” the processing purposes that the controller previously specified to the consumer. To provide the requisite “consent,” a consumer must make a “clear, affirmative act” that signifies their “freely given, specific, informed, and unambiguous agreement” to the processing.

Importantly, a consumer accepting broad terms of use or a “similar document that contains descriptions of personal data processing along with other, unrelated information” does not constitute consent. Nor does an agreement obtained through “dark patterns,” which the CPA defines as user interfaces “designed or manipulated with the substantial effect of subverting or impairing user autonomy, decision-making, or choice.”

What does my privacy notice have to say?

A controller must provide consumers with a privacy notice that is “reasonably accessible, clear, and meaningful.” All privacy notices must include the following information: (1) the categories of personal data collected or processed; (2) the purposes for which personal data are processed; (3) the categories of personal data shared with third parties; (4) the categories of third parties with whom the controller shares personal data; and (5) how a consumer can submit authenticated requests and appeals regarding such requests.

If a controller sells personal data or processes such data for targeted advertising, the privacy notice must “clearly and conspicuously disclose” that fact and how consumers can opt-out. The “opt-out method” also must be provided in a separate location that is “clear, conspicuous, and readily accessible.”

Do I have to perform data protection assessments?

Similar to the VCDPA and GDPR, the CPA requires that controllers conduct and document a “data protection assessment” regarding each of their processing activities that: (1) involves personal data acquired on or after July 1, 2023; and (2) presents a “heightened risk of harm” to a consumer. Processing that presents such a heightened risk includes (1) selling personal data; (2) processing sensitive data; (3) processing personal data for targeted advertising; or (4) processing personal data for profiling that presents a “reasonably foreseeable risk” of certain consumer harms.

Among other requirements, a data protection assessment must “identify and weigh” the benefits of the processing to the “controller, the consumer, other stakeholders, and the public” against “the potential risks to the rights of the consumer,” as “mitigated by safeguards that the controller can employ to reduce the risks.”

What are my data minimization and security requirements?

A controller’s “collection of personal data must be adequate, relevant, and limited to what is reasonably necessary” for the purposes of the processing that have been disclosed to the consumer. As noted above, a controller cannot process personal data for another purpose without the consumer’s consent.
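The minimization principle lends itself to a simple engineering pattern: keep only the fields tied to a disclosed processing purpose and drop everything else at the point of collection. A minimal sketch, with hypothetical purposes and field names (what is “reasonably necessary” for a given purpose remains a legal judgment):

```python
# Hypothetical map of disclosed processing purposes to the fields
# reasonably necessary for each purpose.
DISCLOSED_FIELDS = {
    "order_fulfillment": {"name", "shipping_address", "email"},
}

def minimize(record: dict, purpose: str) -> dict:
    """Keep only the fields necessary for the stated purpose."""
    allowed = DISCLOSED_FIELDS.get(purpose, set())
    return {k: v for k, v in record.items() if k in allowed}

raw = {
    "name": "Jane",
    "email": "jane@example.com",
    "shipping_address": "123 Main St",
    "birthdate": "1990-01-01",  # not needed to fulfill an order
}
minimized = minimize(raw, "order_fulfillment")
# "birthdate" is not tied to the disclosed purpose, so it is dropped
```

Enforcing the allow-list in code, rather than relying on downstream deletion, keeps undisclosed data from entering storage in the first place.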

A controller must take “reasonable measures” to secure personal data from “unauthorized acquisition” during storage and use. These data security practices must be appropriate for the “nature” of the controller’s business and the “volume, scope, and nature of the personal data processed.”

How is the CPA enforced?

Unlike the CCPA, the CPA does not provide a private right of action. It is enforceable only by Colorado’s attorney general and district attorneys. CPA violations constitute a deceptive trade practice and are thus subject to civil penalties of up to $20,000 per violation.

Until January 1, 2025, the attorney general or district attorney must provide notice of a violation, which triggers a 60-day cure period. If the controller fails to cure the violation within this period, the attorney general or district attorney may initiate an enforcement action.

Conclusion

While the CPA’s similarities to predecessor privacy statutes will allow companies to leverage their current compliance efforts to achieve CPA compliance, the statute’s enactment nonetheless adds another layer to already onerous data privacy obligations.

Bradley’s Cybersecurity and Privacy team is here to help. Stay tuned for further updates and alerts on privacy-law developments by subscribing to the Online and OnPoint blog.

Technology Boom in NC? What You Should Know About the Proposed Regulatory Sandbox in the Tar Heel State

Technology is evolving and advancing at a dizzying pace across the globe. Emerging technologies are reimagining everything from how we interact with each other to how we interact with businesses and institutions. Given the upward trajectory of technology, it seems that the “innovation” business is ripe for opportunity — an opportunity that appears poised to take off in North Carolina.

In 2021 alone, North Carolina has been the target for some very high-profile technology announcements, including Google’s plans to open a cloud engineering hub in Durham and Apple’s new campus in Research Triangle Park. These exciting developments are now coupled with recently proposed legislation that would create a “regulatory sandbox,” further incentivizing technological economic development and expanding North Carolina citizens’ access to products, services, and unique business models not currently widely available.

A regulatory sandbox allows companies and entrepreneurs to test emerging technologies, products, services, or business models at the leading edge of (or even outside of) an established regulatory framework. Sandboxes have popped up across the country — from Arizona in 2018 to Kentucky, Nevada, Utah, Vermont, and Wyoming in 2019 to Florida and West Virginia in 2020 — as a way of spurring economic growth and breaking down the barriers to market access often faced by creative business models and startups. North Carolina is one of the most recent states to investigate the potential economic benefits of becoming an innovation hub. Although North Carolina tried and failed to implement a sandbox in 2019, the 2021 iteration seems more likely to succeed given the growing number of peer states that have since adopted, or are currently working on, comparable sandbox-creating legislation.

North Carolina’s Regulatory Sandbox Act of 2021 (the “NC Sandbox Act”) seeks to establish a more flexible regulatory environment for the financial services and insurance industries within the state. Here is what you should know about the proposed NC Sandbox Act, which is currently pending before the Committee on Commerce:

Purpose and Applicability

The NC Sandbox Act would permit an applicant to temporarily test an innovative financial product or service, making such product or service available to consumers on a limited basis without subjecting the applicant company to certain licensing or other regulatory obligations otherwise imposed under applicable state law.

The NC Sandbox Act would apply to entities regulated by the Office of Commissioner of Banks or the Department of Insurance and offering a product or service that falls within the definition of an “innovative product or service,” i.e., the entities are using a new or emerging technology, or are providing products, services, business models, or delivery mechanisms not currently widely accessible to the public.

Establishment of North Carolina Innovation Council

To govern the program, the NC Sandbox Act proposes to create an “Innovation Council,” which would be tasked with supporting innovation, investment, and job creation within North Carolina by encouraging participation in the regulatory sandbox. The 11-person council would set standards, principles, guidelines, and policy priorities for the types of innovations that the regulatory sandbox program would support. Interestingly, early analysis of the bill expressly mentions authorizing the Innovation Council to focus on blockchain initiatives (here’s a legislative analysis of SB470). The Innovation Council would also be responsible for approving admission into the regulatory sandbox program.

Innovation Waiver Applications

For $50, an innovator can apply for admission into the regulatory sandbox program. In determining whether to admit an applicant, the Innovation Council will consider:

  1. The nature of the innovative product or service proposed to be made available to consumers, including the potential risk to consumers;
  2. The methods that will be used to protect consumers and resolve complaints during the sandbox period;
  3. The entity’s business plan, including availability of capital;
  4. Whether the entity’s management has the necessary expertise to conduct a pilot of the innovative product or service during the sandbox period;
  5. Whether any person substantially involved in the development, operation, or management of the innovative product or service has been convicted of or is currently under investigation for fraud or state or federal securities violations; and
  6. Any other factor that the Innovation Council or the applicable state agency determines to be relevant.

By tasking the Innovation Council with the responsibility of considering consumer protection in addition to economic growth when evaluating applicant entities, proponents of the legislation seemingly attempt to avoid some of the criticisms that surrounded the 2019 sandbox proposal.

Applicants must also have a physical presence in North Carolina. A waiver of specified requirements imposed by statute or rule may be granted as part of entry into the program and would be valid for the duration of participation in the regulatory sandbox, but typically not to exceed 24 months.

More to Come

The proposed legislation also addresses sandbox program requirements, consumer protections, record requirements, privacy, and other initiatives and obligations in more detail. As mentioned above, the legislation is currently pending before the committee; Bradley is closely monitoring it and will provide continuing coverage of the proposed bill in the coming weeks and months. If passed, the legislation could become effective October 1, 2021.

If you have any questions, please reach out to the authors, Erin Illman or Lyndsay Medlin.

Energy and Infrastructure Companies Need to Know about the DOE’s and Other Agencies’ Focus on Cybersecurity

On March 18, 2021, the Department of Energy’s (DOE) Office of Cybersecurity, Energy Security, and Emergency Response (CESER) announced three new research programs that are “designed to safeguard and protect the U.S. energy system” from potential cyberattacks. The DOE also announced a 100-day plan to address cybersecurity risks to the U.S. electric system. Not to be left behind, the Transportation Security Administration (TSA) issued a new security directive in light of the Colonial Pipeline cyberattack. Together, these agency actions demonstrate the scale and intensity of the threat to the energy industry and the government’s focus on curbing the threat to our national infrastructure systems. Energy companies should monitor these developments and assess their internal controls to ensure they are cyber-resilient.

The Colonial Pipeline cyberattack surfaced on May 7, 2021, and confronted residents of many Southern states with a real possibility of running out of gas. But, in the days leading up to the ransomware attack, the DOE and the Biden administration were already turning their attention to cyberthreats to the energy industry. The electric system was of special concern as another piece of critical infrastructure vulnerable to attack; extensive power interruptions could have devastating consequences. The Colonial Pipeline cyberattack vividly demonstrates that the post-9/11 sensitivity to terrorists’ physical threats must now include cyber threats.

Less than a week after the pipeline restarted, the DOE revealed its three-prong research plan. The research programs will focus on: (1) securing against vulnerabilities in globally sourced technologies; (2) developing solutions to electromagnetic and geomagnetic interference; and (3) cultivating both research on cybersecurity solutions and the new talent needed to deploy them. The emphasis on the supply chain echoes anxieties in the Executive Order on Improving the Nation’s Cybersecurity, with its goals for the security of commercial software.

Importantly, the DOE is attempting to work with the industry. It kicked off its implementation of a 100-day plan — a plan formed by the Biden administration “to enhance the cybersecurity of electric utilities’ industrial control systems (ICS) and secure the energy sector supply chain” — by soliciting input from stakeholders. Through a Request for Information (RFI), the Office of Electricity sought comments from the public on various aspects of the electric infrastructure. When the public-comment period closed on June 7, 2021, nearly 100 entities had submitted comments. The energy industry is clearly as interested in these issues as the government is.

Directly responding to the Colonial Pipeline cyberattack, the Department of Homeland Security (DHS) — through the TSA — issued Security Directive Pipeline-2021-01, aimed at tightening its control of pipelines’ security. The directive requires that critical pipeline operators (1) report cyber incidents; (2) designate a Cybersecurity Coordinator; and (3) assess, remediate, and report their cybersecurity measures. Failures to correct deficiencies or to comply with the new rules could result in substantial fines under the TSA’s enabling statute.

Federal agencies and the Biden administration are giving strong, coordinated signals that — as a result of cyber threats and attacks — lax standards, minimal enforcement, and carrots for compliance are things of the past. However, the large number of agencies and divisions with enforcement powers could make compliance confusing and difficult — especially if different critical infrastructure industries are subject to different standards. As a result, infrastructure and energy companies should take action now to harden their security measures. Best practices will help mitigate not only government scrutiny, but also the threat of an attack.

Executive Order on Cybersecurity Sets Aggressive Timeline

The Colonial Pipeline cyberattack prompted the issuance of a long-awaited executive order (EO) on improving U.S. cybersecurity. The EO mandates that, within six months, all federal agencies implement multi-factor authentication (MFA) and both at-rest and in-transit encryption. It also calls for agencies to comprehensively log, share, and analyze information about cyber incidents and creates a Cyber Safety Review Board to that end. The EO sets deadlines for agencies to write guidelines for securing software and detecting threats.
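The EO does not prescribe a particular MFA technology. Purely as an illustration, one widely used second factor, the time-based one-time password (TOTP, RFC 6238), can be computed with nothing beyond the Python standard library:

```python
import hashlib
import hmac
import struct
import time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """HMAC-based one-time password (RFC 4226)."""
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation per the RFC
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret: bytes, at_time=None, step: int = 30, digits: int = 6) -> str:
    """Time-based OTP (RFC 6238): HOTP keyed to a 30-second time counter."""
    t = time.time() if at_time is None else at_time
    return hotp(secret, int(t) // step, digits)

# RFC 6238 Appendix B test vector (SHA-1 mode)
assert totp(b"12345678901234567890", at_time=59, digits=8) == "94287082"
```

In practice agencies would deploy vetted authenticators rather than hand-rolled code; the sketch only shows why TOTP codes expire every 30 seconds, which is what makes a stolen password alone insufficient.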

Bradley has authored prior articles and alerts regarding the U.S. government’s increasing attention to cybersecurity — including at the Department of Defense, the federal government as a whole, and even at the state level. With its focus on timelines and deadlines, this EO emphasizes the urgency of improving cybersecurity across industries.

Three goals, with a focus on timing

In a press call, the White House highlighted three goals of the EO:

  • Protect federal networks with specific tools, such as encryption, MFA, endpoint detection and response (EDR), logging, and Zero Trust Architecture.
  • Improve the security of commercial software by establishing security requirements; by using the power of the purse to prime the market for secure software; and by labeling consumer products with a cybersecurity grade.
  • Pool agencies’ information about incidents and enhance incident responses, including through a Cyber Safety Review Board (modeled on the national board that investigates plane crashes).

Reflecting the urgency of better cybersecurity, the EO sets clear, tight deadlines — more than 40 of them. The earliest deadline is set only 14 days after the EO’s release. More than 15 agencies — including the Office of Management and Budget, the Attorney General, the DoD, CISA, and NIST — are tasked with specific responsibilities to write, implement, or enforce the new measures.

Outline of the executive order

The Biden administration’s stated policy is that cybersecurity is a “top priority and essential to national and economic security.” To that end, the provisions of the EO apply to “all Federal Information Systems.”

The EO specifically addresses the following issues:

  • Removing barriers to sharing threat information. The White House’s fact sheet uses the phrase “sharing between government and the private sector.” This section aims to expand the requirements on the private sector to provide incident information to the government. To that end, the EO calls for revision of both the FAR and DFARS reporting requirements. Defense contractors are already familiar with the DFARS requirement to “rapidly report” cyber incidents within 72 hours. The new rules may permit less rapid reporting for less sensitive incidents.
  • Modernizing federal government cybersecurity. This section mandates specific security requirements. Before November 8, 2021, all federal agencies must implement MFA and encryption. Additionally, the EO sets a timeline for adoption of more secure cloud services and for government-wide Zero Trust Architecture. Importantly, this section repeats the administration’s commitment to “protecting privacy and civil liberties” even as it mandates modern cybersecurity.
  • Enhancing software supply chain security. As chartered, NIST will shoulder the burden for establishing baseline security standards for software, including defining “critical software” and secure procedures for software development. One important component will be providing a Software Bill of Materials (SBOM), which is a record of the details and supply-chain relationships of components used to build software. The SBOM is similar to a list of ingredients on food packaging. It will allow tracking of open-source and other third-party components through a supply chain so that risks can be more easily evaluated — and patched. A second important component is a “consumer labeling program” similar to Singapore’s, for grading the cybersecurity of IoT devices.
  • Establishing a Cyber Safety Review Board. When a plane crashes, the National Transportation Safety Board investigates and makes recommendations to improve the safety of air transportation. There is no similar body for reviewing cyber incidents. The EO mandates that the Department of Homeland Security (DHS) establish just such a board, with both government and private-sector representatives having seats at the table. A senior administration official explained that the board’s first task will be to review the SolarWinds incident.
  • Standardizing the federal government’s playbook. The EO calls for creation of a “playbook” for agencies to use in responding to cybersecurity vulnerabilities and incidents. Recognizing that some such guidance has been in place for many years, the EO expressly requires that the guidance “incorporate all appropriate NIST standards.”
  • Improving detection of vulnerabilities and incidents. Agencies are called to actively hunt for threats and vulnerabilities. Each agency must submit its plan to CISA for a Continuous Diagnostics and Mitigation Program. This program has been around since 2012. The EO seeks to enhance threat-hunting activities and deployment of other Endpoint Detection and Response (EDR) initiatives.
  • Improving investigative and remediation capabilities. The very earliest deadline set by the EO is May 26 for DHS to recommend requirements for logging events and retaining other relevant incident data. The EO invites the FAR Council to consider the recommendations in its revision of the FAR and the DFARS reporting requirements.
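The EO leaves the specific logging requirements to DHS and OMB recommendations. One common pattern for making event logs retainable and analyzable is structured (JSON) logging, sketched here with the standard library; the event and field names are hypothetical:

```python
import json
import logging
import sys

class JsonFormatter(logging.Formatter):
    """Emit each log record as one JSON object per line, so incident
    events can be retained, shared, and queried later."""

    def format(self, record):
        event = {
            "ts": self.formatTime(record),
            "level": record.levelname,
            "event": record.getMessage(),
        }
        # Extra structured fields ride along on the record
        event.update(getattr(record, "fields", {}))
        return json.dumps(event)

handler = logging.StreamHandler(sys.stdout)
handler.setFormatter(JsonFormatter())
log = logging.getLogger("incident")
log.addHandler(handler)
log.setLevel(logging.INFO)

# Hypothetical incident event with structured context
log.info("failed_login", extra={"fields": {"user": "jdoe", "src_ip": "203.0.113.7"}})
```

One JSON object per line is easy to ship to long-term retention and to query after an incident, which is the point of the EO's logging provisions.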

What this means for industry

Much of the EO mandates actions by government agencies. But it does create action items for private entities. Above all, government contractors should watch for impactful changes to FAR and DFARS cybersecurity clauses. These have been revised multiple times recently, and we expect the Biden administration to revise them again — especially amid ongoing delays of the CMMC rollout. Software developers should begin inventorying their products and preparing SBOMs, especially for those in use by government agencies. Manufacturers of IoT devices should also expect that their devices must soon bear a label that marks their security level. Market forces may encourage production of higher-security devices.
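An SBOM is, at bottom, a machine-readable parts list for software. As a rough sketch of what developers would be preparing, a minimal CycloneDX-style document might look like the following; the component names, versions, and package URLs are hypothetical:

```python
import json

# Hypothetical minimal SBOM in a CycloneDX-like shape: a machine-readable
# list of the third-party components a piece of software is built from.
sbom = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.4",
    "components": [
        {
            "type": "library",
            "name": "requests",          # hypothetical dependency
            "version": "2.25.1",
            "purl": "pkg:pypi/requests@2.25.1",
        },
        {
            "type": "library",
            "name": "urllib3",
            "version": "1.26.4",
            "purl": "pkg:pypi/urllib3@1.26.4",
        },
    ],
}

# When a vulnerability is announced in a component, the SBOM answers
# "are we affected?" by simple lookup rather than archaeology.
affected = [c for c in sbom["components"] if c["name"] == "urllib3"]
print(json.dumps(affected, indent=2))
```

This is the food-label analogy in the EO made concrete: the value is not the format itself but being able to trace a named component through the supply chain the moment a flaw is disclosed.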

Contact Andrew Tuggle, David Vance Lucas, or Sarah Sutton Osborne with any questions about the order’s impact on your business.

Circuit Split No More: 2nd Circuit Clarifies Article III Standing in Data Breach Cases

While more states push forward on new privacy legislation statutorily granting consumers the right to litigate control of their personal information, federal courts continue to ponder how data breach injury fits traditional standing requirements. Prior to McMorris v. Carlos Lopez & Assocs., LLC, many argued there was a circuit split over whether an increased risk of identity theft resulting from a data breach is sufficient to establish Article III standing. In McMorris, however, the Second Circuit denied any confusion among its sister courts. Rather, the Second Circuit interestingly held that all courts have technically allowed for the possibility that an increased risk of identity theft could establish standing, but that no plaintiff has yet hit the mark. Despite implying that standing could hypothetically exist in certain cases, the Second Circuit found that McMorris fell short.

Devonne McMorris was an employee at a veterans’ health services clinic, Carlos Lopez & Associates, LLC (CLA). In 2018, a CLA employee mistakenly sent an email to other CLA employees containing a spreadsheet with sensitive personally identifiable information (PII), including, but not limited to, Social Security numbers, home addresses, and dates of birth of McMorris and over 100 other CLA employees. McMorris and other class-action plaintiffs filed suit, claiming that this purported breach caused them to cancel credit cards, purchase credit monitoring and identity theft protection services, and assess whether to apply for new Social Security numbers. The class-action plaintiffs reached a settlement with CLA, but the United States District Court for the Southern District of New York rejected the parties’ agreement for lack of Article III standing. Only McMorris appealed to the Second Circuit.

The Holding

After reviewing recent decisions delivered by other circuits regarding standing and an increased risk for identity theft, the Second Circuit denied the existence of a circuit split, stating “[i]n actuality, no court of appeals has explicitly foreclosed plaintiffs from establishing standing based on a risk of future identity theft – even those courts that have declined to find standing on the facts of a particular case.”

In deciding the present case, as a case of first impression, the Second Circuit unequivocally held that an increased risk of identity theft could be enough to establish standing, but only under the right circumstances. The Second Circuit set forth a non-exhaustive list of factors to consider:

  1. Whether the plaintiff’s data has been exposed as the result of a targeted attempt to obtain that data (which would make future harm more likely);
  2. Whether any portion of the dataset has already been misused, even if the plaintiffs themselves have not yet experienced identity theft or fraud; and
  3. Whether the type of data that has been exposed is of such a sensitive nature that the risk of identity theft or fraud is heightened.

Despite the foregoing encouragement to would-be plaintiffs, the Second Circuit then struck a blow, holding that self-created damages, such as purchasing credit monitoring or taking other proactive steps to guard against future harm after a data breach, do not establish an injury in fact. Because there was no evidence of further dissemination of the PII, and McMorris’s data was not exposed as a result of a targeted hacking attempt, any future harm was merely hypothetical and McMorris lacked Article III standing. Although the data was sensitive, the court stated “[t]he sensitive nature of McMorris’s internally disclosed PII, by itself, does not demonstrate that she is at substantial risk of future identity theft or fraud.”

McMorris has large implications for both companies and victims of data breaches because the Second Circuit made sweeping proclamations about the national state of the law of standing for data breach victims. Although the refusal to recognize credit monitoring as indicia of future harm may make it difficult for would-be plaintiffs to prove heightened risk and establish standing, the Second Circuit has nonetheless created a hypothetical roadmap for doing so in an area of the law that has been analogized to the Wild West. Notably, the roadmap enumerated by the court seems to encompass the “risk of harm” analysis used by several states, namely, that if data is accessed or acquired by an unauthorized party, it is still not a data breach if there is no risk of harm to the data subject. With this in mind, companies should review their policies and procedures regarding the prevention of and reaction to data breaches. With appropriate prevention and monitoring tools, the chance of a successful “targeted attempt to obtain data,” which could result in lawsuits, is decreased. Moreover, procedures, such as encryption of sensitive data, lower the likelihood that stolen data has “a high risk for identity theft or fraud.”

Contact Lissette Payne or Lyndsay Medlin with any questions or to discuss the impact of this case. For other updates and alerts regarding data breach liability, subscribe to Bradley’s privacy blog, Online and On Point.