Privacy Requirements under COVID-19 Emergency Rental Assistance Program

Many relief programs have been implemented over the past year in response to COVID-19, and keeping up with the changing requirements for these programs can be daunting. A new twist is the mandate to implement privacy requirements under the Emergency Rental Assistance Program. Here are some details about the Emergency Rental Assistance Program and how to ensure compliance with its privacy requirements.

What is the Emergency Rental Assistance Program?

On December 27, 2020, the Consolidated Appropriations Act, 2021 was enacted, containing provisions for coronavirus response and relief. The act called for $25 billion in rental assistance to be distributed by states, U.S. territories, certain local governments, Indian tribes, and other governing bodies (grantees) that apply for funds through the Emergency Rental Assistance Program. The funds are specifically allocated for rent and utility assistance and housing expenses incurred due to COVID-19. Eligible households meeting certain requirements can receive up to 15 months of assistance for rent and other expenses covered under the program. The list of eligible grantees and additional information can be found on the Emergency Rental Assistance Program website of the U.S. Department of the Treasury.

What are the privacy requirements?

Each grantee is required to (1) establish data privacy and security requirements with appropriate measures to ensure the protection of the privacy of the individuals and households, (2) provide that the information collected, including any personally identifiable information, is collected and used only for submitting reports to the federal government, and (3) provide confidentiality protections for data collected about any individuals who are survivors of intimate partner violence, sexual assault or stalking.

How are landlords affected?

Provided certain requirements are followed, landlords or owners of residential dwellings can apply for rental assistance from the grantees on behalf of their renters or can assist renters in applying for assistance. Landlords need to be aware that the grantees will most likely have specific privacy requirements that landlords must abide by when handling their renters’ information. Because the Emergency Rental Assistance Program privacy requirements will be implemented by many different government entities, there will likely be variations in those requirements, so vigilance is needed to ensure compliance.

The Emergency Rental Assistance Program is currently being rolled out. We will update you in the upcoming weeks and months as to additional guidance on the implementation of these privacy requirements.

The Man Behind the Curtain: College Admissions and FERPA Requests

Aspiring college students spend enormous amounts of time trying to unlock the secret formula that leads to those magic words: Congratulations, you’ve been accepted! But for many students, the focus on admissions does not stop once they matriculate.

Starting in 2015, schools such as Harvard, Yale, Penn, and Stanford saw a dramatic uptick in students requesting to view their admissions records under the Family Educational Rights and Privacy Act (FERPA). Pursuant to FERPA, a student may request, and an educational institution must disclose, any of that student’s “educational records.” Hoping to peer into the black box of elite college admissions, students requested to see their admissions records by the hundreds. Colleges and universities were not only inundated by these requests but also had to figure out what, exactly, they had to disclose.

When is disclosure required?

Under FERPA, a student’s admission record only becomes an “educational record” that requires disclosure if the student matriculates at the university. Students who are not accepted to the university, or who are accepted and do not enroll, are not covered by FERPA. This means that students hoping to get a glimpse into why they were rejected by their top school will be unable to gain access to those records. However, if a student matriculates and then later requests access to their admission records, the college or university must comply within 45 days.

Some universities have gone so far as to change their retention policies, deleting all admission records once they have “served their purpose” in order to avoid the headache of complying with hundreds of FERPA admissions requests every month and to avoid the potential of releasing their admissions formula to the public. While the deletion of records no longer needed or required is generally good policy for privacy and data security reasons, it is not necessary to stay FERPA compliant. Instead, schools should have a well-written policy regarding retention of admission records as a part of an overall privacy strategy. However, if schools choose to retain admission records, students may access those records via FERPA requests.

What needs to be disclosed?

Understandably, professors and teachers are nervous about the idea of their private comments regarding a student’s suitability for admission being revealed to that student. Some administrators have even wondered if they could redact these records so as not to disclose teacher names. Thus, a question that has emerged from these situations is whether schools have to disclose the names of educators who have made recommendations or otherwise “scored” students’ applications.

As is normally the case in the legal world, it depends. However, we’ve provided some guidance below to help you navigate some of these questions.

Letters of Recommendation

There are the typical letters of recommendation from teachers and professors that accompany any application (undergrad or beyond) to a university program. Students are given the option during the application process to waive their rights to view these recommendations – and most do. This means that any recommendation or letter that accompanied that student’s application will not be disclosed under a later FERPA request. However, if the student does not waive that right, those recommendations would be disclosed so long as they are maintained as part of the “educational records” on file.

Application Scores and Notes

More importantly for universities and colleges, there are often notes from prospective professors and educators who regularly evaluate applications. This means that a student’s favorite English professor may have, unbeknownst to her, indicated on her admissions application that her “test scores are low” or that her writing is “average.” Understandably, teachers are worried about students requesting and viewing these types of notes. This has led to questions about whether a school can “redact” a teacher’s identity in this scenario. The short answer is probably not. FERPA does not explicitly touch on this point, but it does provide for redaction in certain scenarios, mostly limited to instances where the “educational records” include aggregate student information. In that situation, a school must redact the information of other students included in the record before disclosing it to the requesting student. It can be inferred from FERPA’s reasoning here that a redaction, if any, should be made to protect the privacy rights of the student, not the faculty. As tempting as it may be to redact teacher names, it is likely not FERPA compliant.

Conclusion

If a school is nervous about the amount and type of admissions information that could be accessed by students making FERPA requests, that school should make sure it is intentional about its admission records retention policy. Which records are retained? For how long? For what use? If retaining teacher “scores” is important or desirable, the school should be prepared to disclose those scores and the names of the teachers making them in the event of a FERPA request. If those scores aren’t valuable for the school’s records, or the school is worried about releasing its admissions “formula,” the school should consider deleting them altogether as a part of its retention policy. One note of warning, though: If a school chooses to delete or not retain these types of records, it must do so across the board. Deleting records in response to FERPA requests would be a clear violation of the law. Subscribe to Bradley’s privacy blog Online and OnPoint for additional updates and alerts regarding privacy law developments.

Privacy Moves to the East Coast: Virginia Set to Enact Comprehensive Consumer Data Protection Law

Virginia is primed to become the next U.S. state to pass comprehensive data-privacy legislation, with striking similarities to the California Consumer Privacy Act (CCPA), the California Privacy Rights Act (CPRA), and the E.U.’s General Data Protection Regulation (GDPR).

The legislation, known as the Consumer Data Protection Act, passed the Virginia House of Delegates on January 29 by a vote of 89-9. On February 3, the Virginia Senate unanimously approved an identical bill 39-0. All that is left now is for Gov. Ralph Northam to sign the bill into law. If signed, the law will become effective alongside the CPRA on January 1, 2023.

Key Provisions of the Consumer Data Protection Bill

Applicability

This legislation is applicable to businesses that either conduct business in Virginia or “produce products or services that are targeted to” Virginia and “during a calendar year, (1) control or process personal data of at least 100,000” Virginians or that (2) “control or process personal data of at least 25,000 [Virginians] and derive over 50 percent of gross revenue from the sale of personal data.”

Interestingly, “consumer” is defined more narrowly than under the CCPA or CPRA, and includes only a natural person acting in an individual or household context. The definition affirmatively excludes any natural person acting in a commercial or employment context.

Additionally, there are broad exemptions for financial institutions subject to the federal Gramm-Leach-Bliley Act and covered entities and business associates governed by HIPAA or HITECH. Non-profit organizations and institutions of higher education are also exempt under the proposed legislation.

Personal Data

The legislation broadly defines “personal data” to mean “any information that is linked or reasonably linkable to an identified or identifiable natural person.”

Privacy Rights

The legislation gives consumers an opt-out right regarding “the processing of the personal data for purposes of targeted advertising, the sale of personal data, or profiling in furtherance of decisions that produce legal or similarly significant effects concerning the consumer.” It also provides consumers with the rights to confirm whether their data is being processed, to correct inaccuracies, to have their data deleted, and to data portability. Like the newly enacted CPRA, the legislation extends an explicit opt-out right to targeted advertising and profiling.

Data Protection Assessments

The legislation imposes new obligations not currently required under any U.S. privacy law, including a requirement for data controllers to conduct data protection assessments of processing activities involving any of the following: (a) targeted advertising, (b) the sale of personal data, (c) profiling, (d) sensitive data, and (e) processing that presents a heightened risk of harm to consumers.

The Virginia attorney general can request that a controller disclose data protection assessments, and the attorney general is specifically tasked with evaluating data protection assessments for compliance with the responsibilities set out in the proposed legislation. There is also a specific provision that prevents the waiver of attorney-client privilege or work product protection when the assessment is requested or turned over to the attorney general for review.

Consent

The legislation defines consent as “a clear affirmative act signifying a consumer’s freely given, specific, informed, and unambiguous agreement to process personal data relating to the consumer.” This is a very high standard and similar to the consent standard established by the GDPR.

Enforcement

Markedly, the legislation does not provide for a private right of action; rather, the attorney general will have the exclusive right to enforce the law. The attorney general may seek up to $7,500 per violation.

Conclusion

It is anticipated that the law will continue to move quickly through the legislative process and could be signed into law by the governor by the end of February. With what looks to be at least two new comprehensive state laws on the horizon, first in California with CPRA and likely in Virginia, companies need to start planning now for implementation of these laws in 2023. Bradley’s Cybersecurity and Privacy team is here to help. Stay tuned for further updates and alerts from Bradley on state privacy law developments, including Virginia’s privacy rights and obligations by subscribing to Bradley’s privacy blog, Online and OnPoint.

Why It Matters Whether Hashed Passwords Are Personal Information Under U.S. Law

On January 22, 2021, Bleeping Computer reported about yet another data dump by the hacker group Shiny Hunters, this time for a clothing retailer. Shiny Hunters is known for exfiltrating large databases of customer information, often through misconfigured or otherwise compromised databases. These databases typically contain credential information for customers, as was the case here. What made this report a bit unique was that Bleeping Computer also reported that: “The passwords stored in the database are hashed using SHA-256 or SHA-512 according to threat actors who have started to analyze the database. One threat actor claims to have already cracked the passwords for 158,000 SHA-256 passwords but has been unable to crack the SHA-512 passwords.” This revelation highlights what is increasingly becoming an important legal question: Are hashed passwords secure? Or, perhaps more importantly from a legal perspective, does an unauthorized person having access to a username/email address and an accompanying hashed password “permit access to an online account?”

We’ve known for some time that the EU considers hashed passwords to be personal information under GDPR and has specifically advised against using well-known hashing algorithms such as MD5 and SHA-1. Similarly, NIST has recommended that federal agencies stop using SHA-1 for generating digital signatures, generating time stamps, and other applications. Yet hashing continues to be touted as a secure, deidentified data point under U.S. law.

The classification of hashed values is critically important because whether or not the information permits access to an online account can be determinative of whether it is personal information for the purposes of some breach notification statutes, as well as the private right of action in CPRA. This argument has already been advanced in California, in Atkinson v. Minted, Inc., 3:20-cv-03869-JS (N.D. Cal. June 2020) (see First Amended Complaint at Par. 13 “Because passwords that are merely ‘hashed’ and ‘salted’ are not encrypted, they ‘can be accessed and used even while […] redacted with different levels of utility based on how much manipulating of the data is done to protect privacy.’ [Citation omitted]. Therefore, at a minimum, the PII disclosed in the Data Breach included user passwords that would permit sophisticated hackers like the Shiny Hunters to access to an online account.”) Understanding the implications of these hacks and the potential impact on litigation requires a bit of technical understanding of hashing and why it is used.

A “hash” of a password is the result of a hashing function applied to the password, and it is used to avoid storing a password in plain text while also allowing a quick and easy evaluation of credentials for a site. The hashing function takes the password and scrambles it up with a large number of simple rote operations, with the intent of making it impossible to determine the password from the hash even if the hashing function used is known. As an example, consider a simple computer model of a pool table with a perfectly uniform friction surface, the balls racked precisely at one spot on one end and the cue ball placed precisely on the spot at the other. Only a few inputs, such as the angle and force of the cue stick hitting the cue ball, and a few hard-coded laws of physics will determine the final position of all the balls after they come to rest following a break. However, even if the laws of physics are simple to apply and mechanical, the complexity of the interaction of all the balls means that it would be impossible to discern the input values by looking at the result — in this case the final position of all the balls. For non-trivial inputs where the balls moved significantly, the resulting position of the balls would give absolutely no information about the inputs, even for someone well versed in the laws of physics. Another feature of this example is that when given a precise set of final positions and inputs that purportedly generate them, it would be trivial to confirm by plugging in the inputs and running the model.

These features — being effectively impossible to determine the input from the output alone, even knowing the rules used to generate the output, and being relatively easy to confirm whether an input and output match — are the characteristics of what are referred to in mathematics as one-way functions. As the name suggests, this is because they are easy to compute one way and effectively impossible to compute the other. Hashing is a one-way function with the password being the input and the “hashed password” being the output, which means that possession of a password and hashed password pair makes it trivial to check if they match, but possession of just the hashed password makes it effectively impossible to determine the plain text password. Unfortunately, even if a hashing algorithm is effective in not allowing reverse calculation of the password, that does not mean it is entirely secure, as there are other mechanisms that can effectively reveal the plain text password.
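
To make this asymmetry concrete, here is a minimal, illustrative Python sketch (hypothetical, not drawn from any system discussed in this post) that hashes a password with SHA-256 and verifies a candidate password against the stored hash. Computing and checking the hash is trivial; recovering the password from the hash alone is not.

```python
import hashlib

def hash_password(password: str) -> str:
    # Applying the one-way function is fast and deterministic.
    return hashlib.sha256(password.encode("utf-8")).hexdigest()

def verify(candidate: str, stored_hash: str) -> bool:
    # Checking a candidate is trivial: re-run the same function and compare.
    return hash_password(candidate) == stored_hash

stored = hash_password("correct horse battery staple")
print(verify("correct horse battery staple", stored))  # True
print(verify("wrong guess", stored))                   # False
# There is no way to "run SHA-256 in reverse" on `stored` to recover the
# password; an attacker can only guess candidates and re-hash them.
```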

While all industry standard hashing algorithms may make it effectively impossible to work backward from a hashed password to determine the password under current computational limits, there are other techniques that can make hashed passwords insecure. If a hacker gets a set of hashed passwords, one simple attack vector is to generate a table of possible passwords, run the hashing algorithm to produce their corresponding hash values, and then compare the hashed values for matches. Given the processing power available, hackers have generated, and continue to generate, enormous tables that contain anything from every possible combination of values for shorter passwords to lists including variants of common and known passwords. These are called rainbow tables, and if the password used is included in one of these tables, then the cleartext password is known to the hacker. Two factors impact the effectiveness of these attacks. The first is password complexity: Hackers continue to generate complete dictionaries of all combinations of shorter and simpler passwords, such as those containing only letters or numbers. The longer and more complex a password is, the less likely it is to be part of a complete dictionary of possible values, because the sheer number of combinations is too great to compute.
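
As a rough, hypothetical illustration of how such a lookup attack works, the sketch below precomputes hashes for a tiny list of common passwords and checks a set of leaked hashes against it. Real precomputed tables are vastly larger, but the underlying idea is the same: if the password is in the table, the hash gives it away.

```python
import hashlib

def sha256_hex(value: str) -> str:
    return hashlib.sha256(value.encode("utf-8")).hexdigest()

# A tiny stand-in for a precomputed table of common or short passwords.
common_passwords = ["password", "123456", "qwerty", "letmein"]
lookup_table = {sha256_hex(p): p for p in common_passwords}

# Hashes supposedly exfiltrated from a (hypothetical) breached database.
leaked_hashes = [sha256_hex("password"), sha256_hex("kV9#w!qR27zLp^4u")]

for h in leaked_hashes:
    cracked = lookup_table.get(h)
    if cracked:
        print(f"{h[:12]}... cracked: {cracked}")
    else:
        print(f"{h[:12]}... not in table (longer, more complex password)")
```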

Another defense to these rainbow attacks is called “salting,” which involves adding a value to the password prior to running the hash. This means the resulting hash will not match the rainbow tables. For example, if the user uses “password” as their password, the resultant hash is undoubtedly already in a rainbow table, as it is arguably the most common password used. But if the site adds “2@3” to the front and back prior to hashing, the resultant hash is for “2@3password2@3.” This value is highly unlikely to be in a rainbow table even though the user used one of the single most common passwords. The effect of adding a salt, even if the value is known to an attacker, is that pre-existing rainbow tables are ineffective and a hacker would have to generate a new one, which is time consuming and costly. An even more secure use of salting is applying a salt value that is kept secret, as recommended in NIST Special Publication 800-63B, “Digital Identity Guidelines” (June 2017). With a secret salt, the site can always check a password entered by adding the secret value, running the hash, and comparing to its stored hash. As long as the salt value remains secret, this is a very effective method against rainbow attacks and other current methods of attack. Unfortunately, many sites do not use secret salt values or complex passwords, and therefore the relative security of hashed passwords varies tremendously.
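
Continuing the simplified “2@3” example above, the hypothetical sketch below shows how adding a salt before hashing changes the stored value so that a table built for unsalted hashes no longer matches, while the site can still verify a submitted password by re-applying the same salt. (The fixed salt value here simply mirrors the example; real systems often use per-user salt values rather than a single string.)

```python
import hashlib

def sha256_hex(value: str) -> str:
    return hashlib.sha256(value.encode("utf-8")).hexdigest()

SALT = "2@3"  # the illustrative value from the example above

password = "password"
unsalted_hash = sha256_hex(password)              # almost certainly in a rainbow table
salted_hash = sha256_hex(SALT + password + SALT)  # hash of "2@3password2@3"

print(unsalted_hash == salted_hash)  # False: the salt changes the output entirely

def verify(submitted: str, stored_salted_hash: str) -> bool:
    # The site re-applies the same salt, hashes, and compares to the stored value.
    return sha256_hex(SALT + submitted + SALT) == stored_salted_hash

print(verify("password", salted_hash))   # True
print(verify("password1", salted_hash))  # False
```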

The question of whether a hashed password “permits access” to an online account is a complex one that has not been fully addressed from a legal standpoint. On one end of the spectrum, a hashed password could be considered to never permit access to an online account because, even if the actual password might be determined from the hashed value, the hash itself does not permit access, as it always requires some additional hacking effort to determine the plain text password. On the other end of the spectrum is an argument that if a password could be determined for one or more hashed passwords, the whole set “permits” access. Most likely, if this issue is fully litigated, courts will end up somewhere in the middle of this spectrum, but that remains to be seen. Another complicating factor is that technical advances in computing power will continue to move the needle on the effectiveness of existing attacks, perhaps establishing practices such as hashing without a salt value as per se insecure.

As cyberattacks continue to grow at a dramatic rate, coupled with what appears to be renewed interest by cybercriminals to “crack” hashed passwords, it seems likely that hashing will come under increased scrutiny in both the courts and in the minds of the public at large. It will be important for companies to not only review their hashing techniques and employ best practices, but to pay attention to technical advances available to both the company and the hackers as well.

A Year Later: CCPA Compliance Reminder to Review Your Privacy Policy

Has it been a year already? Many businesses diligently did their best to hit the moving CCPA target as they welcomed 2020 and the effective date of the statute last year. A year ago, all we had were draft regulations and a statute, and businesses had to do their best to comply. So, it is not surprising that many CCPA disclosures had an effective date of January 1, 2020. But if that is the last time your disclosures were reviewed, your business is signaling non-compliance with CCPA.

Section 1798.130 sets out the various lists required in a CCPA online privacy policy, including the categories of personal information collected and shared “in the preceding 12 months” (see, e.g., 1798.130(a)(5)(B): “a list of the categories of personal information it has collected about consumers in the preceding 12 months…”). In view of this time frame for the disclosure, the statute includes the affirmative requirement that a business must make these disclosures and “update that information at least once every 12 months…” (see 1798.130(a)(5)).

If you have not reviewed your CCPA disclosures since January 1, 2020, your business likely needs to do an additional review of your privacy policy based on the new regulations promulgated in the last 12 months. On January 1, 2020, businesses had to choose between trying to comply with the original draft regulations released on October 10, 2019, or the high-level statutory requirements. In either case, the “final regulations” and subsequent modifications to them have added requirements or, in some cases, removed requirements set out in the original proposed regulations. Therefore, when doing the required substantive review, it is recommended that the latest version of the regulations, released on August 14, 2020 (the “final regulations”), be consulted. Note that some slight modifications even to the final regulations have been proposed but not yet approved. Here are a few notable differences between the original draft regulations and the final regulations:

  • Section 999.305(a)(3) of the original draft regulations required a business to get “explicit consent” if it intended to use the personal information for a materially different purpose than what was explicitly disclosed at collection. This language contemplating “explicit consent” has been removed entirely in the final regulations.
  • The original draft regulations gave businesses the option to use the link title “Do Not Sell My Personal Information” or “Do Not Sell My Info.” While many chose the latter, this option was removed in subsequent revisions and only “Do Not Sell My Personal Information” remains as the specified link title (see, e.g., Section 999.305(b)(3)).
  • The original draft regulations did not exempt employees from the various disclosures. The final regulations clarified that employee disclosures are limited to a notice at collection, which does not need to include a “Do Not Sell” link or a link to the privacy policy.
  • The original draft regulations had a process whereby a business that did not directly collect information from a consumer could only sell it by obtaining a signed attestation that the source provided a “Do Not Sell” link. This option was removed entirely and replaced with an implied requirement that a business that does not directly collect information must still provide a notice at collection if they sell the consumer’s personal information (see 999.305(d)).
  • The original draft regulations required an “interactive webform” as a method of submitting verified consumer requests but this affirmative requirement has been removed (see final regulations Section 999.312(a) that requires a toll-free number and one other method).

Although CCPA makes this type of annual review a regulatory requirement, privacy is an evolving area of law, and privacy reviews should be built into the organizational compliance process of any business. Staying on top of privacy laws, regulations, guidance, and requirements is becoming even more important as we move into 2021, which is likely to be another landmark year for privacy. Stay tuned in to these developments through Bradley’s Online and OnPoint blog by subscribing here.

Privacy Litigation Updates for the Financial Services Sector: Yodlee and Envestnet Sued for Data Disclosure and Processing Practices

Consumers are more aware than ever of data privacy and security issues. As technology develops, vast quantities of data are collected on individuals every minute of every day. Customers trust their institutions to keep the troves of financial data on them private and secure.

Wesch v. Yodlee, Inc. and Envestnet, Inc.

A recent class action lawsuit filed against Yodlee and its parent company, Envestnet, puts a spotlight on the data-sharing practices surrounding consumer financial information. The plaintiff, Deborah Wesch, alleges in the complaint that Yodlee’s business practice of collecting, extracting, and selling personal data violates several privacy laws, including the California Consumer Privacy Act (CCPA), California’s Financial Information Privacy Act (CalFIPA), the California Online Privacy Protection Act (CalOPPA), and the Gramm-Leach-Bliley Act (GLBA) Privacy Rule. The complaint brings to light an issue that many financial institutions have grappled with and continue to grapple with under a dual state and federal privacy regime — namely, whether Yodlee’s data aggregation practices are subject to CCPA or whether Yodlee qualifies as a “financial institution” under GLBA and CalFIPA.

To further complicate the issue, even if Yodlee does qualify as a financial institution, the information would need to have been collected pursuant to GLBA and CalFIPA in order to fit within the narrow exception provided under CCPA.

Based on the complaint, Yodlee is a data aggregation and data analytics company. It obtains access to individuals’ financial data through its Application Programming Interface (API) software, which is used to connect financial apps and software to third-party service providers. This software is integrated into the platforms of some of the largest financial institutions in the country.

The plaintiff alleges that the software is “silently integrate[d] into its clients’ existing platforms to provide various financial services.” This silent integration is significant because “the customer believes that it is interacting with its home institution (e.g., its bank) and has no idea it is logging into or using a Yodlee product.” But when the customer enters their bank login information, Yodlee “stores a copy of each individual’s bank login information (i.e., her username and password) on its own system after the connection is made between that individual’s bank account and any other third-party service (e.g., PayPal).”

Once Yodlee has access to the individual’s account, the plaintiff alleges that Yodlee routinely extracts data from the user’s account, even after the individual has severed the connection between their bank account and the third-party service. After access is revoked, Yodlee allegedly accesses the account by relying on its own stored copy of the individual’s credentials. Then, Yodlee allegedly aggregates the data, including bank account balances and transaction histories, and sells it to third parties for a fee.

The plaintiff alleges that when she connected her bank account to PayPal through Yodlee’s account verification API, she did not receive the appropriate disclosures about Yodlee’s business practices of storing her account log-in information. Yodlee then continuously accessed and extracted information from her account and sold her personal data to third parties without her knowledge or consent. Even though “PayPal discloses to individuals that Yodlee is involved in connecting their bank account to PayPal’s service for the limited purpose of confirming the individual’s bank details, checking their balance, and transactions, as needed,” the plaintiff alleges that “Yodlee’s involvement with the individual’s data goes well beyond the limited consent provided to facilitate a connection between their bank account and PayPal.”

Takeaway

Banks and other businesses that deal with financial information have unique privacy considerations that should be evaluated in light of this pending case. First and foremost, businesses should re-evaluate their data-sharing practices with third parties that are known data aggregators by reviewing their contracts with such third-party service providers. Businesses often rely on legal bases to share information with third parties and should review those bases to ensure compliance with applicable privacy laws. More specifically, businesses that qualify as “financial institutions” under GLBA and/or CalFIPA should evaluate what legal basis they are relying on when sharing customer information with non-affiliate third parties.

  • If the service provider exception is the legal basis relied upon to share data, businesses should confirm their contracts properly impose the limitations required by applicable privacy laws, such as those required by GLBA or CalFIPA.
  • Moreover, if the consent or authorization exception is the legal basis relied upon to share data, businesses should ensure that they have received consent as defined by, and in accordance with, applicable privacy laws.
  • However, if the business has taken the position that the information being shared does not qualify as “nonpublic personal information” or “personally identifiable financial information,” the business should review the relevant definitions under applicable privacy laws to ensure that such information does not fall within the scope of nonpublic personal information.
  • Alternatively, if the business relies on the information being extracted by the third party as being de-identified data, the business should take steps to ensure that the data truly meets the applicable standard for de-identification under all applicable privacy laws.

Even if the sharing is permitted under applicable privacy laws, businesses should consider potential claims brought under California’s Unfair Competition Law when designing the interaction of their application with a third-party processor’s application. Particularly when a consumer enters their credentials to link an app or to verify a bank account, if the screen displays the bank’s logo, it may cause consumers to believe they are entering their information on the bank’s secure portal rather than providing their credentials to a third party. Banks should make it clear to consumers that they are interacting with an outside third party.

Lastly, aside from the legal implications of sharing customer information, businesses should also consider the reputational risk to certain data-sharing practices. Stay tuned for a follow-up blog discussing the case decision and its impact on the privacy and financial services sector.

Hanna Andersson and Salesforce Receive Preliminary Approval for Settlement of CCPA-Based Class Action Litigation

In 2019, Hanna Andersson, a children’s apparel store, suffered a data breach while using a Salesforce e-commerce platform. As a result of the breach, customers filed a class action lawsuit, alleging customer data was stolen and asking that both Hanna Andersson and Salesforce be held liable under the California Consumer Privacy Act (CCPA).

Background

Barnes v. Hanna Andersson and Salesforce (4:20-cv-00812-DMR) was one of the first cases filed under the newly effective CCPA, and it has garnered much attention from privacy experts and attorneys alike. According to the complaint, the data breach allegedly occurred from September 16, 2019, to November 11, 2019, during which time hackers collected sensitive consumer information, such as customer names, billing and shipping addresses, payment card numbers, CVV codes, and credit card expiration dates. On December 5, 2019, law enforcement found this information on the dark web and alerted Hanna Andersson, which then investigated the incident and confirmed that Salesforce’s platform was “infected with malware.” Hanna Andersson reported this breach to customers and the California attorney general on January 15, 2020.

Plaintiffs allege that the breach was caused by Hanna Andersson’s and Salesforce’s “negligent and/or careless acts and omissions and failure to protect customer’s data … [and failure] to detect the breach.” Plaintiffs further allege that, as a result of the breach, Hanna Andersson’s customers “face a lifetime risk of identity theft.”

Moreover, Bernadette Barnes, the named plaintiff in the case, alleges that she now experiences anxiety as a result of time spent reviewing the “account compromised by the breach, contacting her credit card company, exploring credit monitoring options, and self-monitoring her accounts.” Barnes also claims to now feel hesitation about shopping on other online websites.

Settlement

In December 2020, the court preliminarily approved the class action settlement filed by the plaintiffs. The settlement includes both monetary and non-monetary requirements. First, it establishes a $400,000 settlement fund that will provide cash payments of up to $500 per class member, with expense awards of up to $5,000 available to class members with extraordinary circumstances, such as rampant identity theft. The actual payment to the average class member is not ascertainable now since it will vary depending on the ultimate size of the class; however, it is expected to be approximately $38 per class member. Second, the settlement requires Hanna Andersson to improve its cybersecurity through, but not limited to, the following measures: hiring a director of cybersecurity, implementing multi-factor authentication for cloud services accounts, and conducting a risk assessment consistent with the NIST Risk Management Framework.

Takeaway

At this point, it’s unclear whether these requirements should be viewed as setting an industry standard for compliance or as setting minimum practices. For example, multi-factor authentication has long been considered an industry standard, and an order for its implementation seems more like an indictment of Hanna Andersson’s practices than the creation of a new and more robust standard. Additionally, it is noteworthy that the court ordered Hanna Andersson to hire a director of cybersecurity and not a chief information security officer (CISO). While at first glance this seems like a simple difference in title, a CISO is an executive-level position that typically plays an enterprise-wide role in developing and implementing privacy and cybersecurity policies, while also responding to any incidents that may occur. Many consider a CISO role that reports directly to the CEO to be an industry best practice. In comparison, a director of cybersecurity is not an executive-level position; rather, it is a role that may report to a CISO or be siloed within an information technology department. Typically, the director of cybersecurity has less authority and power to shape policies enterprise-wide.

Regardless, this settlement will set the stage for any upcoming CCPA-related privacy and cybersecurity disputes. Furthermore, this settlement will provide insight into who may be sued under CCPA, specifically whether third-party processors may be brought into litigation going forward. In light of this decision, businesses should compare their privacy and cybersecurity practices to the settlement requirements, while bearing in mind that these represent the minimum for compliance, not necessarily the industry standard.

Continue to look for further updates and alerts from Bradley on state privacy rights and obligations.

FTC Eyes Vendor Oversight in Safeguards Rule Settlement

On December 15, 2020, the FTC announced a proposed settlement with Ascension Data & Analytics, LLC, a mortgage industry analytics company, related to alleged violations of the Gramm-Leach-Bliley Act’s (GLBA) Safeguards Rule. In particular, the FTC claimed that Ascension Data & Analytics’ vendor, OpticsML, left “tens of thousands of consumers[’]” sensitive personal information exposed “to anyone on the internet for a year” due to an error configuring the server and the storage location. The FTC contended that Ascension Data & Analytics failed to properly oversee OpticsML. The FTC voted 3-1-1 to issue the administrative complaint and to accept the consent agreement, with the full consent agreement package to be published soon in the Federal Register for the 30-day comment period.

As detailed in the FTC’s complaint, Ascension Data & Analytics contracted with OpticsML to perform OCR on mortgage documents. OpticsML stored the information on a cloud-based server and in a separate cloud-based storage location. Due to a configuration issue, the database was publicly exposed, meaning anyone could access the personal information without the need for credentials. Although Ascension Data & Analytics’ “Third Party Vendor Risk Management” policy, in place since September 2016, required vetting of its vendors’ security measures, the FTC claimed that Ascension Data & Analytics never vetted OpticsML. Additionally, the FTC asserted that since at least September 2016, Ascension Data & Analytics never required its service providers to implement privacy measures to protect personal information.

The proposed settlement contains multiple action items for Ascension Data & Analytics to complete. Ascension Data & Analytics must establish and implement a data security program, engage an independent third-party professional to assess the procedures on an initial and biennial basis, and certify annually to the FTC its compliance with the settlement. As part of the mandated data security program, Ascension must not only conduct initial due diligence on any vendor with access to consumer data, but it must also conduct an annual written assessment of each vendor “commensurate with the risk it poses to the security of” personal information.

Takeaways

There are three big takeaways from the complaint and settlement.

  • First, the FTC is ramping up enforcement of the Safeguards Rule. This is not much of a surprise given the FTC’s focus on the Safeguards Rule, as evidenced by the virtual workshop it hosted this summer to discuss the proposed changes to the rule.
  • Second, the FTC appears to see vendor oversight as a key component of implementation of the Safeguards Rule. While other agencies, such as the CFPB, have indicated a specific interest in vendor oversight, this is now on the FTC’s radar.
  • Finally, this settlement underscores that regulated entities need to actively operationalize written policies and procedures, particularly around third-party risk. Financial institutions should ensure that they are in compliance with the Safeguards Rule generally and also engage in initial due diligence and continuous oversight of their vendors in order to avoid enforcement based on their vendors’ conduct.

Continue to look for further updates and alerts from Bradley relating to privacy rights and obligations.

Massachusetts Voters Approve Measure for Expanded Access to Vehicle Data

In a roller coaster of an election week, it was easy for smaller ballot measures to become overshadowed. One ballot measure that you may have missed is Massachusetts’s Ballot Question 1 regarding the “right to repair” motor vehicles. The ballot measure expands access to a driver’s motor vehicle data. Vehicles are becoming increasingly computerized due to the use of “telematics systems,” which collect and wirelessly transmit mechanical data to a remote server. The types of mechanical data captured can include location, speed, braking, fuel consumption, vehicle errors, and more. Ballot Question 1 is a measure to expand access to such mechanical data.

Beginning with model year 2022, manufacturers of motor vehicles sold in Massachusetts that use telematics systems must equip the vehicles with a standardized open access data platform. This will allow owners and independent vehicle repair facilities to access the data. Owners will be able to easily view mechanical data on their cellphones. Ballot Question 1 will also give owners more options for car repair services. Local vehicle repair shops will now have access to data that is usually restricted to dealers.

Opponents of Ballot Question 1 are concerned about the widened net of access to a driver’s data, especially location data. There are also concerns about the increased risk for security breaches. Privacy advocates have long voiced concerns about large-scale collection of location data. In fact, the New York Times Privacy Project has published a series of articles about the dangers of unregulated collection of location data.

With the influx of cars with telematics systems hitting the market, ballot measures surrounding access to vehicle data will likely increase in the next few years. However, with countervailing privacy pushes to regulate or limit the use of location data, we may also see a tug of war between laws that seek to provide access to location data and those that seek to regulate the collection and use of that information.

Continue to look for further updates and alerts from Bradley on state privacy rights and obligations.

No Unreasonable Searches or Seizures of Electronic Data in Michigan

The most intimate information can be found in the data on our cellphones and laptops, from geo-location data to search history. The level of privacy protection afforded to electronic data and communications has been unclear and ambiguous for years, but after this election, Michigan now has some clarity.

On November 3, 2020, Proposal 2 was passed by an overwhelming majority, amending Article 1, Section 11 of Michigan’s Constitution. Proposal 2 will provide personal protections by prohibiting unreasonable searches or seizures of electronic data and communications. It will do so by requiring law enforcement to obtain a search warrant to access an individual’s electronic data and communications under the same conditions needed to search an individual’s home or seize an individual’s papers or physical items.

The amendment will fill a gap in the Michigan Constitution. Before the passing of Proposal 2, the Michigan Constitution provided protections against unreasonable searches and seizures for physical items but did not explicitly mention electronic data or communications. With the amendment to the Michigan Constitution, there will be no more ambiguity over whether data — such as cell phone data, communications with Alexa and Google Home, or Apple Watch health metrics — is private and requires a warrant to be searched or seized by law enforcement in Michigan.

Michigan is not the first state (and likely won’t be the last) to provide some level of protection for electronic data and communications. Utah has enacted a law banning warrantless searches and seizures of electronic data and communications. Moreover, the Florida Constitution mentions “the right of the people to be secure … against the unreasonable interception of private communications.” However, neither Utah’s statute nor Florida’s constitutional provision is as strong as Michigan’s explicit constitutional amendment. Going forward, it is likely that more states will follow Michigan’s lead in protecting electronic data and communications through constitutional amendments.

Continue to look for further updates and alerts from Bradley on state privacy rights and obligations.