Executive Order on Cybersecurity Sets Aggressive Timeline

The Colonial Pipeline cyberattack prompted the issuance of a long-awaited executive order (EO) on improving U.S. cybersecurity. The EO mandates that, within six months, all federal agencies implement multi-factor authentication (MFA) and both at-rest and in-transit encryption. It also calls for agencies to comprehensively log, share, and analyze information about cyber incidents and creates a Cyber Safety Review Board to that end. The EO sets deadlines for agencies to write guidelines for securing software and detecting threats.

Bradley has authored prior articles and alerts regarding the U.S. government’s increasing attention to cybersecurity, including at the Department of Defense, across the federal government as a whole, and even at the state level. With its focus on timelines and deadlines, this EO emphasizes the urgency of improving cybersecurity across industries.

Three goals, with a focus on timing

In a press call, the White House highlighted three goals of the EO:

  • Protect federal networks with specific tools, such as encryption, MFA, endpoint detection and response (EDR), logging, and Zero Trust Architecture.
  • Improve the security of commercial software by establishing security requirements; by using the power of the purse to prime the market for secure software; and by labeling consumer products with a cybersecurity grade.
  • Pool agencies’ information about incidents and enhance incident responses, including through a Cyber Safety Review Board (modeled on the national board that investigates plane crashes).

Reflecting the urgency of better cybersecurity, the EO sets clear, tight deadlines — more than 40 of them. The earliest deadline is set only 14 days after the EO’s release. More than 15 agencies — including the Office of Management and Budget, the Attorney General, the DoD, CISA, and NIST — are tasked with specific responsibilities to write, implement, or enforce the new measures.

Outline of the executive order

The Biden administration’s stated policy is that cybersecurity is a “top priority and essential to national and economic security.” To that end, the provisions of the EO apply to “all Federal Information Systems.”

The EO specifically addresses the following issues:

  • Removing barriers to sharing threat information. The White House’s fact sheet uses the phrase “sharing between government and the private sector.” This section aims to expand the requirements on the private sector to provide incident information to the government. To that end, the EO calls for revision of both the FAR and DFARS reporting requirements. Defense contractors are already familiar with the DFARS requirement to “rapidly report” cyber incidents within 72 hours. New requirements may permit less rapid reporting for less sensitive incidents.


  • Modernizing federal government cybersecurity. This section mandates specific security requirements. Before November 8, 2021, all federal agencies must implement MFA and encryption. Additionally, the EO sets a timeline for adoption of more secure cloud services and for government-wide Zero Trust Architecture. Importantly, this section reiterates the administration’s policy that modernizing cybersecurity must be balanced with “protecting privacy and civil liberties.”


  • Enhancing software supply chain security. Under the EO, NIST will shoulder the burden of establishing baseline security standards for software, including defining “critical software” and secure procedures for software development. One important component will be providing a Software Bill of Materials (SBOM), which is a record of the details and supply-chain relationships of the components used to build software. The SBOM is similar to a list of ingredients on food packaging. It will allow tracking of open-source and other third-party components through a supply chain so that risks can be more easily evaluated and patched. A second important component is a “consumer labeling program,” similar to Singapore’s, for grading the cybersecurity of IoT devices.
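To illustrate the concept (the EO itself does not prescribe a format, and the component shown here is hypothetical), a minimal SBOM entry in the CycloneDX JSON format might look like this:

```json
{
  "bomFormat": "CycloneDX",
  "specVersion": "1.4",
  "version": 1,
  "components": [
    {
      "type": "library",
      "name": "log4j-core",
      "version": "2.14.1",
      "purl": "pkg:maven/org.apache.logging.log4j/log4j-core@2.14.1"
    }
  ]
}
```

Formats such as CycloneDX and SPDX record each third-party component and its exact version, so that when a vulnerability is announced in a listed component, the software that incorporates it can be identified and patched quickly.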


  • Establishing a Cyber Safety Review Board. When a plane crashes, the National Transportation Safety Board investigates and makes recommendations to improve the safety of air transportation. There is no similar body for reviewing cyber incidents. The EO mandates that the Department of Homeland Security (DHS) establish just such a board, with both government and private-sector representatives having seats at the table. A senior administration official explained that the board’s first task will be to review the SolarWinds incident.


  • Standardizing the federal government’s playbook. The EO calls for creation of a “playbook” for agencies to use in responding to cybersecurity vulnerabilities and incidents. Recognizing that some such guidance has been in place for many years, the EO expressly requires that the guidance “incorporate all appropriate NIST standards.”


  • Improving detection of vulnerabilities and incidents. The EO directs agencies to actively hunt for threats and vulnerabilities. Each agency must submit to CISA its plan for a Continuous Diagnostics and Mitigation Program, a program that has been in place since 2012. The EO seeks to enhance threat-hunting activities and the deployment of other Endpoint Detection and Response (EDR) initiatives.


  • Improving investigative and remediation capabilities. The very earliest deadline set by the EO is May 26 for DHS to recommend requirements for logging events and retaining other relevant incident data. The EO invites the FAR Council to consider the recommendations in its revision of the FAR and the DFARS reporting requirements.

What this means for industry

Much of the EO mandates actions by government agencies. But it does create action items for private entities. Above all, government contractors should watch for impactful changes to FAR and DFARS cybersecurity clauses. These have been revised multiple times recently, and we expect the Biden administration to revise them again — especially amid ongoing delays of the CMMC rollout. Software developers should begin inventorying their products and preparing SBOMs, especially for those in use by government agencies. Manufacturers of IoT devices should also expect that their devices must soon bear a label that marks their security level. Market forces may encourage production of higher-security devices.

Contact Andrew Tuggle, David Vance Lucas, or Sarah Sutton Osborne with any questions about the order’s impact on your business.

Circuit Split No More: 2nd Circuit Clarifies Article III Standing in Data Breach Cases

While more states push forward on new privacy legislation statutorily granting consumers the right to litigate control of their personal information, federal courts continue to ponder how data breach injury fits traditional standing requirements. Before McMorris v. Carlos Lopez & Assocs., LLC, many argued that there was a circuit split regarding whether an increased risk of identity theft resulting from a data breach is sufficient to establish Article III standing. In McMorris, however, the Second Circuit denied any confusion among its sister courts. Rather, the Second Circuit held that every circuit has technically allowed for the possibility that an increased risk of identity theft could establish standing, but no plaintiff has yet hit the mark. Despite implying that standing could exist in certain cases, the Second Circuit nonetheless found that McMorris fell short.

Devonne McMorris was an employee of Carlos Lopez & Associates, LLC (CLA), a veterans’ health services provider. In 2018, a CLA employee mistakenly sent an email to other CLA employees containing a spreadsheet with sensitive personally identifiable information (PII), including, but not limited to, Social Security numbers, home addresses, and dates of birth of McMorris and over 100 other CLA employees. McMorris and other class-action plaintiffs filed suit claiming that this purported breach caused them to cancel credit cards, purchase credit monitoring and identity theft protection services, and assess whether to apply for new Social Security numbers. The class-action plaintiffs reached a settlement with CLA, but when the parties sought approval, the United States District Court for the Southern District of New York rejected their agreement for lack of Article III standing. Only McMorris appealed to the Second Circuit.

The Holding

After reviewing recent decisions delivered by other circuits regarding standing and an increased risk for identity theft, the Second Circuit denied the existence of a circuit split, stating “[i]n actuality, no court of appeals has explicitly foreclosed plaintiffs from establishing standing based on a risk of future identity theft – even those courts that have declined to find standing on the facts of a particular case.”

In deciding the present case, as a case of first impression, the Second Circuit unequivocally held that an increased risk of identity theft could be enough to establish standing, but only under the right circumstances. The Second Circuit set forth a non-exhaustive list of factors to consider:

  1. Whether the plaintiff’s data has been exposed as the result of a targeted attempt to obtain that data (which would make future harm more likely);
  2. Whether any portion of the dataset has already been misused, even if the plaintiffs themselves have not yet experienced identity theft or fraud; and
  3. Whether the type of data that has been exposed is of such a sensitive nature that the risk of identity theft or fraud is heightened.

Despite the foregoing encouragement to would-be plaintiffs, the Second Circuit then struck a blow, holding that self-created damages, in the form of proactive steps taken to guard against future harm after a data breach, such as purchasing credit monitoring, do not establish an injury in fact. Because there was no evidence of further dissemination of the PII, and because McMorris’ data was not exposed as a result of a targeted hacking attempt, any future harm was merely hypothetical, and McMorris lacked Article III standing. Although the data was sensitive, the court stated “[t]he sensitive nature of McMorris’s internally disclosed PII, by itself, does not demonstrate that she is at substantial risk of future identity theft or fraud.”

McMorris has significant implications for both companies and victims of data breaches because the Second Circuit made sweeping proclamations about the national state of the law of standing for data breach victims. Although the refusal to recognize credit monitoring as indicia of future harm may make it difficult for would-be plaintiffs to prove heightened risk and establish standing, the Second Circuit has nonetheless created a hypothetical roadmap for doing so in an area of the law that has been analogized to the Wild West. Notably, the roadmap enumerated by the court seems to encompass the “risk of harm” analysis used by several states, namely, that if data is accessed or acquired by an unauthorized party, it is still not a data breach if there is no risk of harm to the data subject. With this in mind, companies should review their policies and procedures regarding the prevention of and reaction to data breaches. With appropriate prevention and monitoring tools, the chance of a successful “targeted attempt to obtain data,” which could result in lawsuits, is decreased. Moreover, procedures such as encryption of sensitive data lower the likelihood that stolen data carries “a high risk for identity theft or fraud.”

Contact Lissette Payne or Lyndsay Medlin with any questions or to discuss the impact of this case. For other updates and alerts regarding data breach liability, subscribe to Bradley’s privacy blog, Online and On Point.

Florida Legislature Considers Sweeping Data-Privacy Legislation Supported by Governor

Florida has joined the wave of states considering new comprehensive data privacy legislation. On February 15, 2021, Rep. Fiona McFarland introduced HB 969, modeled after the California Consumer Privacy Act (CCPA). The bill is supported by Gov. Ron DeSantis and the speaker of the Florida House. As introduced, HB 969 would apply to for-profit businesses that meet any of three thresholds: annual gross revenues exceeding $25 million; annually buying, selling, or receiving the personal information of at least 50,000 consumers; or deriving at least 50% of annual global revenues from selling or sharing consumers’ personal information. A Senate version of a similar bill (SB 1734) introduced by Republican Sen. Jennifer Bradley passed through its first committee earlier this week.

Both bills impose a number of requirements on covered entities relating to consumers’ personal information. For example, entities must maintain an online privacy policy and update it annually, provide notice at the point of collection, and respond to consumers’ requests for copies of their personal information or, under certain circumstances, to correct or delete such information. Covered entities also must provide consumers with the right to opt out of sharing personal information, and they are prohibited from discriminating against those who choose to do so. The bills also go a step further than the CCPA and include additional business obligations, such as data retention and limited-use requirements.

The companion bills also provide consumers with numerous rights regarding their collected personal information, including the right to request that a business provide a copy of their personal information collected, the right to have their personal information be deleted by covered entities, and the right to have inaccurate personal data corrected.

Like the CCPA, the Florida bills provide a private cause of action against a business if there is a data breach. Similarly, the private right of action is limited to only certain data breaches. A consumer could sue a business if their nonencrypted and nonredacted personal information was stolen in a data breach as a result of the business’s failure to maintain reasonable security procedures and practices to protect it. If this happens, the consumer can sue for the amount of monetary damages actually suffered from the breach or up to $750 per incident.

For all other violations, only the Florida Department of Legal Affairs can file an action. If the department has reason to believe that any business is in violation and that proceedings would be in the public interest, the department may bring an action against such business and may seek a civil penalty of not more than $2,500 for each unintentional violation or $7,500 for each intentional violation. Such fines may be tripled if the violation involves a consumer who is sixteen years of age or younger. A business may be found to be in violation if it fails to cure any alleged violation within 30 days after being notified in writing by the department of the alleged noncompliance.

In their current form, if passed, both bills have an effective date of January 1, 2022. The legislation has been assigned to the Commerce Committee and the Civil Justice and Property Rights subcommittees. The bill has already received a favorable recommendation from the Regulatory Reform subcommittee. The companion Senate bill is also pending in committee. With the support of the governor and the speaker of the house, there is a strong possibility that some form of legislation will pass. Stay tuned for further updates and alerts from Bradley on state privacy law developments and obligations by subscribing to Bradley’s privacy blog, Online and On Point.

Privacy Litigation Updates for the Financial Services Sector: Claims Against Yodlee Survive and Limited Discovery of Envestnet Allowed

In November 2020, Yodlee and its parent company Envestnet filed separate motions to dismiss the class action lawsuit brought over Yodlee’s alleged data collection and use practices. Yodlee’s motion to dismiss argued that plaintiffs failed to state a claim under Federal Rule of Civil Procedure 12(b)(6), while Envestnet argued that its status as the parent company to Yodlee was not enough for the court to establish personal jurisdiction over Envestnet under Federal Rule of Civil Procedure 12(b)(2).

On February 16, 2021, Federal Magistrate Judge Sallie Kim partially granted and partially denied Yodlee’s motion to dismiss and reserved ruling on Envestnet’s motion to dismiss. The court allowed plaintiffs to cure deficiencies and file an amended complaint. On March 15, 2021, plaintiffs filed a Second Amended Complaint.

Yodlee’s Motion to Dismiss

Claims 1 and 10 – Invasion of Privacy:

The court held that plaintiffs have a reasonable expectation of privacy in their individual financial accounts. Yodlee is alleged to have improperly accessed and retained data from these personal accounts. Furthermore, Yodlee is alleged to have sold aggregated financial data from which it “would only take a few steps to identify the individual.”

The court denied Yodlee’s motion to dismiss Claims 1 and 10.

Claim 2 – Stored Communications Act:

The court held that plaintiffs failed to allege facts sufficient to satisfy the element of “electronic storage” because plaintiffs only alleged Yodlee “stores the information for its own misuse of the data.”

The court granted Yodlee’s motion to dismiss Claim 2 with leave to amend.

Claim 3 – Unjust Enrichment:

The court held that plaintiffs’ allegations that Yodlee acquired their data through a fraudulent scheme and sold that data were pled with enough particularity to put Yodlee on notice of the substance of the alleged fraudulent scheme.

The court denied Yodlee’s motion to dismiss Claim 3.

Claim 4 – California Civil Code § 1709:

The court found that plaintiffs sufficiently alleged a fraudulent scheme by Yodlee to deceive them.

The court denied Yodlee’s motion to dismiss Claim 4.

Claim 5 – California Unfair Competition Law – Business and Professions Code § 17200:

The court held that plaintiffs did not allege “a transaction or contract with Yodlee,” only the “Loss of Benefit of the Bargain,” and as such, it is unclear how plaintiffs “lost money or property as a result of Yodlee’s alleged conduct.” Furthermore, although plaintiffs allege the inability to seek indemnification and a heightened risk of identity theft, the court held that because neither has occurred yet, both are merely potential and hypothetical and not enough to confer standing to bring this cause of action.

The court granted Yodlee’s motion to dismiss Claim 5 with leave to amend.

Claims 7 and 9 – Computer Fraud and Abuse Act and California Comprehensive Computer Data Access and Fraud Act:

The court held that plaintiffs’ damage claims of “the costs of conducting damage assessments, restoring the data to its condition prior to the offense, and consequential damages they incurred by, inter alia, spending time conducting research to ensure that their identity had not been compromised and accounts reflect the proper balances” were conclusory and insufficient to show damage or loss.

The court granted Yodlee’s motion to dismiss Claims 7 and 9 with leave to amend.

Claim 8 – California Anti-Phishing Act of 2005:

The court held that plaintiffs stated a claim under the California Anti-Phishing Act by alleging that Yodlee represented itself to be plaintiffs’ financial institutions, an allegedly fraudulent and deceitful impersonation of those institutions, and thereby induced plaintiffs to provide their login credentials to defendants.

The court denied Yodlee’s motion to dismiss Claim 8.

Envestnet’s Motion to Dismiss for Lack of Personal Jurisdiction

The court held that plaintiffs have not alleged sufficient facts to bring an alter ego claim against Envestnet. The court noted that alter ego is a rare remedy: To invoke it, a plaintiff must show (1) unity of interest and (2) that an inequitable result will occur if the remedy is not invoked. To show unity of interest, plaintiffs should plead facts supporting at least two or three of the following factors: “commingling of funds, identification of the equitable owners with domination and control of the two entities, instrumentality or conduit for a single venture or the business of an individual, failure to maintain minutes or adequate corporate records, use of the same office or business locations, identical equitable ownership of the two entities, use of a corporation as a mere shell, and the failure to adequately capitalize a corporation.” Furthermore, in some jurisdictions, including this one, a showing of bad faith is required.

The court noted that, as it stands, plaintiffs have not alleged sufficient facts to support their alter ego claim. However, the court reserved ruling on Envestnet’s motion to dismiss until plaintiffs have an opportunity to conduct discovery on the issue. The court provided plaintiffs the opportunity to issue five document requests, five interrogatories, and five requests for admissions, as well as take one deposition of Envestnet. Plaintiffs must then file a supplemental brief no later than May 28, 2021, and Envestnet may file a response by June 11, 2021.


Many of plaintiffs’ claims have survived the motion to dismiss, bringing to light the legal and reputational risks from these data-sharing practices. Considering this pending case, businesses should review their privacy policies and procedures to ensure their data privacy compliance programs are up to date, accurately disclose their sharing practices, and protect consumer data. Based on this order, there are two significant areas to watch: anonymized, aggregated data and application programming interface (API) interactions.

Anonymized, Aggregated Data

The court found that plaintiffs have a reasonable expectation of privacy in their personal financial accounts at an individual level. Though Yodlee argued that plaintiffs do not have a reasonable expectation of privacy in anonymized, aggregated data, the court credited plaintiffs’ allegation that it “would only take a few steps to identify the individual Plaintiffs from the transactions.”

All businesses should review their contracts with third-party service providers, including those that provide APIs, to ensure that contractual language defining anonymized, aggregated data complies with relevant privacy laws and provides required protections, as well as defines whether and to what extent the business grants the third party permission to use and further disclose such anonymized, aggregated data.

API Interactions

Many of plaintiffs’ claims were based on absent or unclear disclosure of Yodlee’s interactions with their financial institutions. While plaintiffs allege that Yodlee does not have authority or approval from each financial institution, the use of a login screen that appears to be the financial institution’s is likely part of the API software agreement that the financial institutions pay to use. Businesses should ensure that any interaction with third-party processors on their websites or applications clearly and explicitly states the role of the third party and that such role is properly reflected in the businesses’ privacy policies.

If you have any questions or would like to discuss your company’s data sharing practices, contact Courtney Achee, Lissette Payne, or Kelley Hails. For more information on this developing case and other updates and alerts regarding privacy law developments, subscribe to Bradley’s privacy blog, Online and On Point.

Critical Changes for U.S. Cleared Facilities

Codification of the NISPOM and replacement of JPAS

Two significant changes are underway at the Defense Counterintelligence and Security Agency (DCSA), both of which require the immediate attention of businesses that hold a U.S. security clearance or are in the process of applying for one.

The first change is the codification of the National Industrial Security Program Operating Manual (NISPOM). As background, the NISPOM has been the key guidance for protecting classified and certain other controlled information in accordance with the National Industrial Security Program (NISP) as currently overseen by the DCSA.

The Department of Defense (DoD) published a Final Rule codifying the NISPOM on December 21, 2020, which became effective February 24, 2021. The Final Rule requires contractors to implement its changes no later than six months after the publication date (August 2021). The NISPOM is now codified at 32 CFR Part 117. Further guidance on the Final Rule’s implementation will be published in an Industrial Security Letter (ISL).

The Final Rule establishes requirements, policies and procedures in accordance with the NISP – which outlines the protection of classified information that is disclosed to, or developed by, contractors, licensees, grantees, or certificate holders in order to prevent unauthorized disclosure.

Key changes include:

  • Requirements for cleared contractors to submit reports pursuant to Security Executive Agent Directive (SEAD) 3 and cognizant security agency (CSA) guidance.
  • An additional facility clearance tool for DCSA and government contracting activities (GCAs) as a limited entity eligibility specific to the requesting GCA’s classified information, and to a single, narrowly defined contract, agreement, or circumstance.
  • Elimination of the requirement for National Interest Determinations (NIDs) for certain covered contractors operating under a special security agreement with ownership by countries designated as part of the National Technology and Industrial Base (United Kingdom, Canada or Australia).
  • Determinations by a CSA with respect to requirements for top secret accountability.
  • Permitting Intrusion Detection System (IDS) installation and UL-2050 certification by an Occupational Safety and Health Administration (OSHA) Nationally Recognized Testing Laboratory (NRTL).
  • Directing cleared contractors to refer to 32 CFR Part 2001 for guidance on requirements for the protection of classified national security information (CNSI) to ensure consistency with national policy.
  • Clarification of responsibilities for Senior Management Officials (SMO).
  • Clarification that upon completion of a classified contract, the contractor must return all government provided or deliverable information to the custody of the government.

Significantly, for non-U.S. entities, the Final Rule also eliminates the requirement that entities under Foreign Ownership, Control, or Influence (FOCI) operating under a Special Security Agreement (SSA) obtain a NID. Under the Final Rule, SSA entities covered by the NTIB will be permitted to begin contract performance without first obtaining a NID.

The DCSA is currently reviewing existing ISLs to determine those that will be retained, re-issued, and/or rescinded. DCSA is also working on revisions to NISP-related forms, including the SF-328 – “Certificate Pertaining to Foreign Interests,” DD Form 441 – “Security Agreement,” and DD Form 441-1 – “Security Agreement Appendage.”

The second significant change by the DCSA is the retirement of the Joint Personnel Adjudications System (JPAS). JPAS is being replaced by the Defense Information Security System (DISS) as the next step toward deployment of the National Background Investigation Services (NBIS) and implementation of the Trusted Workforce 2.0 continuous vetting policy.

JPAS transitioned to a read-only mode on March 15, 2021, and will be fully retired on March 31, 2021. All updates to eligibility, access, and visit data must be completed in DISS prior to March 15.

Cleared contractors should work with their SMO, DCSA representative and counsel to assure their understanding and compliance with these significant changes. For other updates and alerts regarding national security law developments, subscribe to Bradley’s privacy blog Online and On Point.

A Second Chance for the Public Health Emergency Privacy Act

This is the seventh alert in a series of Bradley installments on privacy and cybersecurity developments arising from the COVID-19 pandemic. Click to read the first, second, third, fourth, fifth, and sixth installments.

Sen. Mark Warner (D-Va.) has re-introduced a bill to create the Public Health Emergency Privacy Act (PHEPA). First introduced in May 2020, the bill died in committee. This time, Warner is joined by 11 cosponsors in the Senate and by 32 sponsors of a related bill in the House of Representatives.

This newly introduced bill is identical to the earlier version, which we reported on at the time. PHEPA would have the usual notice-and-consent backbone, requiring affirmative consent from a consumer before a covered organization could collect, use, or disclose his or her emergency health data. Organizations collecting the data would need to protect it with reasonable security and not use the data for any purposes beyond those expressly identified in a privacy policy.

No preemption

Two controversial aspects of PHEPA bear repeating. First, PHEPA would expressly not preempt state laws. That would effectively make PHEPA a floor that states could raise either by existing legislation or with new legislation. For organizations doing business in multiple states, this could result in having to comply with higher standards than created by the federal bill, at least in some states.

Private right of action

Second, PHEPA would provide a private right of action to consumers. In addition to enforcement by the FTC and by states’ attorneys general, under PHEPA, affected consumers could sue directly for statutory damages of up to $5,000 per violation. Consumers could also recover attorneys’ fees and litigation costs.

Work in progress

When first introduced last year, the bill competed with a bill from Sen. Roger Wicker (R-Miss.) and others to create the “COVID-19 Consumer Data Protection Act of 2020.” The competing bill had fewer protections, express preemption, and no private right of action. Both bills died in committee.

The Wicker bill has not yet been reintroduced, and the Warner bill does not yet have bipartisan support. So, it remains to be seen when, how, and even if the federal government will create data privacy protections — either related to the COVID-19 pandemic or more generally. We will continue to update you as we learn more.

Biometric Privacy Law Expansions and Private Rights of Action

The days of only seeing biometric techniques in spy films are well behind us. A simple thumbprint can open a phone. Systems like Alexa can recognize your voice and play your favorite music. Some banks even allow customers to make payments by using voice command and fingerprint recognition.

In 2008, Illinois became the first state to enact a comprehensive law concerning biometric information. The Illinois Biometric Information Privacy Act regulates the collection, use, safeguarding, handling, storage, retention, and destruction of biometric information. The legislation requires a private entity in possession of biometric identifiers or biometric information to develop a written policy and make it available to the public. The legislation further prohibits the sale, lease, or trade of biometric identifiers or biometric information. The legislation sent shockwaves through the privacy world by creating a private right of action. Under the Illinois law, if a violating entity acts negligently, the prevailing party can recover liquidated damages of $1,000 or actual damages, whichever is greater. If a violating entity acts intentionally or recklessly, the prevailing party can recover liquidated damages of $5,000 or actual damages, whichever is greater. The prevailing party is also entitled to reasonable attorneys’ fees and costs, other litigation expenses, and an injunction.

On the heels of the groundbreaking Illinois law, lawmakers in Maryland, South Carolina, Virginia, and New York have proposed legislation seeking to regulate how companies collect and handle biometric information. All of these states have followed in the footsteps of Illinois by proposing a private right of action. But, as summarized below, while the proposed penalties in some states mirror or are similar to those in Illinois, other states have significantly increased the potential penalties.


Maryland

The proposed Maryland legislation allows for a private right of action. As under the Illinois law, the prevailing party can recover $1,000 or actual damages, whichever is greater, against a negligent entity, and $5,000 or actual damages, whichever is greater, against an entity that acted intentionally or recklessly. The prevailing party can also recover attorneys’ fees and costs, including expert witness fees and other litigation expenses. The party may also obtain an injunction.

Unlike the Illinois law, the Maryland legislation does not create an absolute requirement for a private entity to make a publicly available written policy. An exemption may apply if the policy only relates to employees of the private entity and is used solely for internal company operations.

South Carolina

The proposed legislation in South Carolina is substantially broader than the Illinois law. As in Illinois, the prevailing party can recover $1,000 or actual damages, whichever is greater, against a negligent entity. The party is also entitled to attorneys’ fees and costs and an injunction. However, if a business intentionally or recklessly violates a provision of the statute, then the floor for recovery increases to $10,000. Moreover, a business that fails to notify consumers of a breach of security within 72 hours is subject to a fine of up to $5,000 for each consumer who was not notified.

Unlike in Illinois, consumers would have a right to know the categories and specific pieces of information collected. The proposed South Carolina legislation also allows businesses to sell biometric information, a substantial departure from the Illinois law. However, consumers may request that a business delete the biometric information or discontinue selling it. The proposed legislation resembles the California Consumer Privacy Act in two respects: it requires a business to provide a link on its website through which consumers can easily opt out of the sale of their biometric information, and it allows businesses to offer financial incentives for the collection, sale, or deletion of biometric information.


Virginia

The proposed legislation in Virginia also imposes higher damages. If the legislation passes, an employer that violates the statute would be subject to a civil penalty of up to $25,000 for each violation. However, the statute applies only to employers and employees.

New York

The proposed New York legislation is similar to the Illinois act. The legislation would allow a prevailing party to recover liquidated damages of $1,000 or actual damages against an entity that is found negligent. If an entity acts intentionally or recklessly, the baseline award increases to $5,000. The prevailing party can also recover attorneys’ fees and costs and obtain an injunction.

To recap, a business found in violation of the biometric laws described above could face minimum awards or penalties ranging from $1,000 to $25,000 per violation in a private action. Given the magnitude of this risk, businesses should continue to stay apprised of new legislative developments in this area of the law.
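The liquidated-damages provisions above all follow the same "greater of" formula, with only the dollar floors varying by state and by the violator's culpability. A purely illustrative sketch (the function name is our own; the dollar floors come from the statutes and proposals described above):

```python
def recoverable_damages(actual: float, floor: float) -> float:
    """Illinois-style formula: the greater of the statutory
    liquidated-damages floor or the plaintiff's actual damages."""
    return max(floor, actual)

# Illinois, Maryland, and New York: $1,000 floor for a negligent
# violation, $5,000 for an intentional or reckless one.
print(recoverable_damages(actual=300, floor=1_000))     # 1000
print(recoverable_damages(actual=7_500, floor=5_000))   # 7500

# South Carolina's proposal raises the intentional/reckless floor to $10,000.
print(recoverable_damages(actual=7_500, floor=10_000))  # 10000
```

Note that attorneys’ fees, costs, and injunctive relief are recoverable on top of these amounts, so the figures above understate total exposure.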

For more information on this issue and other updates and alerts regarding privacy law developments, subscribe to Bradley’s privacy blog, Online and On Point.

NYDFS Publishes Cyber Insurance Risk Framework, Warns of Silent CyberThe New York Department of Financial Services (DFS) has issued a Cyber Insurance Risk Framework (the “Framework”) of best practices for carriers. The first of its kind, the Framework tells carriers to establish formal strategies for measuring and managing cyber risks. It applies to all insurance carriers — not only those who write cyber policies, but also those who may be exposed to silent cyber risks — referring to potential cyber-related losses under traditional, as opposed to cyber-specific, property and liability policies.

Risks for Carriers

The introduction to the Framework cites the COVID-19 pandemic, the SolarWinds hack, and a rise in ransomware attacks as examples of increased cyber risk for all organizations. Cyber insurance helps businesses manage these risks and can also lead to better cybersecurity with premium pricing incentives for good cyber hygiene. DFS warns, however, that unless carriers assess risks accurately, the availability of cyber insurance could allow policyholders to rely on insurance rather than on strong security.

The Framework

So that the cyber insurance market can best protect economic interests, the Framework lists six best practices that carriers “should employ.” Specifically, carriers should establish a “formal cyber insurance risk strategy” incorporating each of the following practices:

  • Manage and eliminate exposure to silent cyber insurance risk;
  • Evaluate systemic risk;
  • Rigorously measure policyholder risk;
  • Educate policyholders and insurance producers;
  • Obtain cybersecurity expertise; and
  • Require notice to law enforcement.

While the Framework is not a step-by-step guide or a mandate with the force of law, it does explain how a carrier can “take an approach that is proportionate to its risk.”

Measuring Risks

The Framework emphasizes the importance of measuring risk and notes at the outset that current cyber exposure may be vastly underestimated in comparison to the premiums being charged. Systemic risk — such as vulnerabilities in software common across policyholders or attacks coordinated by state-sponsored groups — can lead to large, correlated losses. Additionally, silent cyber risks — losses from cyber incidents in policies that do not affirmatively grant cyber coverage — create uncertainty and represent cyber risks that might not have been measured as such before now.

Though the Framework identifies significant risk of loss, it is short on guidance for exactly how to “rigorously measure” risks other than suggestions about the application process and the importance of obtaining clear information about the policyholder’s third-party vendors and open-source software components. (DFS has emphasized the importance of third parties before, identifying them as a consistent weak link in cybersecurity efforts, as has the Office of the Comptroller of the Currency.) As the cyber insurance market matures, we can expect to see more standardized assessments of cyber hygiene, such as the Cybersecurity Maturity Model Certification (CMMC) and the Basic Assessment currently being implemented by the Department of Defense for contractors in its supply chain.

Managing Risks

Other aspects of the Framework focus on managing risks. Carriers should educate their policyholders about cybersecurity and good security practices. This is consistent with the cyber insurance market’s role in strengthening security, and it lowers the overall cyber insurance risk in the system. Additionally, carriers should keep their own expertise current by recruiting and training cybersecurity experts and by engaging sophisticated vendors.

The Framework also recommends that policies require victims to notify law enforcement as a condition of coverage. Many businesses hesitate to call law enforcement, even when they are the victims of cybercrime. Because some attacks can be prevented by good cyber hygiene, there is a degree of “victim blaming,” which keeps businesses from reporting to law enforcement. In addition, some have expressed concern that involving law enforcement may limit options for responding to attacks — because of official stances against paying ransoms, for example.

Against these potential downsides, DFS emphasizes that law enforcement agencies are a pool of knowledge across incidents. On top of helping a victim now, what is learned in an incident can be used to help the next potential victim or even to prevent attacks.


DFS has never been afraid to move first on new issues facing policyholders and carriers. Notably, DFS has led the way on cybersecurity regulation at least since its cybersecurity regulation, 23 NYCRR Part 500, took effect in 2017. We expect DFS will continue its dialogue with the industry, leading to more comprehensive and specific guidance. Bradley will report on new developments.

Contact Heather Wright or Andrew Tuggle with any questions or to discuss the new framework’s impact on your business today. For more information on this issue and other updates and alerts regarding privacy law developments, subscribe to Bradley’s privacy blog Online and On Point.

Privacy Requirements under COVID-19 Emergency Rental Assistance ProgramMany relief programs have been implemented over the past year in response to COVID-19, and keeping up with their changing requirements can be daunting. One new twist is the mandate to implement privacy protections under the Emergency Rental Assistance Program. Here are some details about the program and how to ensure compliance with its privacy requirements.

What is the Emergency Rental Assistance Program?

On December 27, 2020, the Consolidated Appropriations Act, 2021, which contained provisions for coronavirus response and relief, was enacted. The act called for $25 billion in rental assistance to be distributed by states, U.S. territories, certain local governments, Indian tribes, and other governing bodies (grantees) that apply for funds through the Emergency Rental Assistance Program. The funds are specifically allocated for rent and utility assistance and for housing expenses incurred due to COVID-19. Eligible households meeting certain requirements can receive up to 15 months of assistance for rent and other expenses covered under the program. The list of eligible grantees and additional information can be found on the Emergency Rental Assistance Program website of the U.S. Department of the Treasury.

What are the privacy requirements?

Each grantee is required to (1) establish data privacy and security requirements with appropriate measures to protect the privacy of the individuals and households served, (2) provide that the information collected, including any personally identifiable information, is collected and used only for submitting reports to the federal government, and (3) provide confidentiality protections for data collected about any individuals who are survivors of intimate partner violence, sexual assault, or stalking.

How are landlords affected?

Provided certain requirements are followed, landlords or owners of residential dwellings can apply for rental assistance from the grantees on behalf of their renters or can assist renters in applying for assistance. Landlords should be aware that grantees will most likely impose specific privacy requirements that landlords must follow when handling their renters’ information. Because the Emergency Rental Assistance Program privacy requirements will be implemented by many different government entities, the requirements will likely vary, so vigilance is needed to ensure compliance.

The Emergency Rental Assistance Program is currently being rolled out. We will update you in the upcoming weeks and months as to additional guidance on the implementation of these privacy requirements.

The Man Behind the Curtain: College Admissions and FERPA RequestsAspiring college students spend enormous amounts of time trying to unlock the magic formula that leads to those magic words: Congratulations, you’ve been accepted! But, for many students, the focus on admissions does not stop once they matriculate.

Starting in 2015, schools such as Harvard, Yale, Penn, and Stanford saw a dramatic uptick in students requesting to view their admissions records under the Family Educational Rights and Privacy Act (FERPA). Pursuant to FERPA, a student may request, and an educational institution must disclose, any of that student’s “educational records.” Attempting to peer into the black box of elite college admissions, students requested their admissions records by the hundreds. Colleges and universities were not only inundated by these requests but also had to figure out: What, exactly, do we have to disclose?

When is disclosure required?

Under FERPA, a student’s admission record only becomes an “educational record” that requires disclosure if the student matriculates at the university. Students who are not accepted to the university, or who are accepted and do not enroll, are not covered by FERPA. This means that students hoping to get a glimpse into why they were rejected by their top school will be unable to gain access to those records. However, if a student matriculates and then later requests access to their admission records, the college or university must comply within 45 days.

Some universities have gone so far as to change their retention policies, deleting all admission records once they have “served their purpose,” in order to avoid the headache of complying with hundreds of FERPA admissions requests every month and to avoid the potential of releasing their admissions formula to the public. While deleting records that are no longer needed or required is generally good policy for privacy and data security reasons, it is not necessary to stay FERPA compliant. Instead, schools should have a well-written policy regarding retention of admission records as part of an overall privacy strategy. However, if schools choose to retain admission records, students may access those records via FERPA requests.

What needs to be disclosed?

Understandably, professors and teachers are nervous about the idea of their private comments regarding a student’s suitability for admission being revealed to that student. Some administrators have even wondered if they could redact these records so as not to disclose teacher names. Thus, a question that has emerged from these situations is: Do schools have to disclose the names of educators who have made recommendations or otherwise “scored” students’ applications?

As is normally the case in the legal world, it depends. However, we’ve provided some guidance below to help you navigate some of these questions.

Letters of Recommendation

First, there are the typical letters of recommendation from teachers and professors that accompany any application (undergraduate or beyond) to a university program. Students are given the option during the application process to waive their right to view these recommendations – and most do. This means that any recommendation or letter that accompanied the student’s application will not be disclosed under a later FERPA request. However, if the student does not waive that right, those recommendations would be disclosed so long as they are maintained as part of the “educational records” on file.

Application Scores and Notes

More important for universities and colleges, there are often notes from prospective professors and educators who regularly evaluate applications. This means that a student’s favorite English professor may have, unbeknownst to her, indicated on her admissions application that her “test scores are low” or that her writing is “average.” Understandably, teachers are worried about students requesting and viewing these types of notes. This has led to questions about whether a school can “redact” a teacher’s identity in this scenario. The short answer is probably not. FERPA does not explicitly touch on this point, but it does provide for redaction in certain scenarios. Those scenarios are mostly limited to instances where the “educational records” include aggregate student information. In that situation, a school must redact the information of other students included in the record before disclosing it to the requesting student. It can be inferred from FERPA’s reasoning here that a redaction, if any, should be made to protect the privacy rights of students, not faculty. As tempting as it may be to redact teacher names, doing so is likely not FERPA compliant.


If a school is nervous about the amount and type of admissions information that could be accessed by students making FERPA requests, that school should make sure it is intentional about its admission records retention policy. Which records are retained? For how long? For what use? If retaining teacher “scores” is important or desirable, the school should be prepared to disclose those scores and the names of the teachers making them in the event of a FERPA request. If those scores aren’t valuable for the school’s records, or the school is worried about releasing its admissions “formula,” the school should consider deleting them altogether as part of its retention policy. One note of warning, though: If a school chooses to delete or not retain these types of records, it must do so across the board. Deleting records in response to FERPA requests would be a clear violation of the law.

Subscribe to Bradley’s privacy blog Online and On Point for additional updates and alerts regarding privacy law developments.