On May 16, 2023, the U.S. Senate Judiciary Committee conducted a significant oversight hearing on the regulation of artificial intelligence (AI) technology, specifically focusing on newer models of generative AI that create new content from large datasets. The hearing was chaired by Sen. Richard Blumenthal for the Subcommittee on Privacy, Technology, and the Law. He opened the hearing with a voice-cloned statement whose text was generated by ChatGPT, demonstrating the risks associated with social engineering and identity theft. Notable witnesses included Samuel Altman, CEO of OpenAI; Christina Montgomery, chief privacy and trust officer at IBM; and Gary Marcus, professor emeritus of psychology and neural science at New York University – each of whom advocated for the regulation of AI in different ways.

Altman advocated for the establishment of a new federal agency responsible for licensing AI models according to specific safety standards and monitoring certain AI capabilities. He emphasized the need for global AI standards and controls and described the safeguards implemented by OpenAI in the design, development, and deployment of their ChatGPT product. He explained that before deployment and throughout its continued use, ChatGPT undergoes independent audits, as well as ongoing safety monitoring and testing. He also discussed how OpenAI forgoes the use of personal data in ChatGPT to reduce privacy risks, while noting that how end users deploy an AI product shapes the risks and challenges it presents.

Montgomery from IBM supported a “precision regulation” approach that focuses on specific use cases and addresses risks rather than broadly regulating the technology itself, citing the approach taken in the proposed EU AI Act as an example. While Montgomery highlighted the need for clear regulatory guidance for AI developers, she stopped short of advocating for a federal agency or commission. Instead, she described the AI licensure process as obtaining a “license from society” and stressed the importance of transparency so users know when they are interacting with AI, while noting that IBM’s models are more business-to-business than consumer facing. She advocated for a “reasonable care” standard to hold AI systems accountable. Montgomery also discussed IBM’s internal governance framework, which includes a lead AI officer and an ethics board, as well as impact assessments, transparency of data sources, and user notification when interacting with AI.

Marcus argued that the current court system is insufficient for regulating AI and expressed the need for new laws governing AI technologies and strong agency oversight. He proposed an agency similar to the Food and Drug Administration (FDA) with the authority to monitor AI and conduct safety reviews, including the ability to recall AI products. Marcus also recommended increased funding for AI safety research, both in the short term and long term.

The senators seemed poised to regulate AI in this Congress, whether through an agency or via the courts, and expressed bipartisan concern about deployments and uses of AI that pose significant dangers requiring intervention. They also underscored the importance of technology and organizational governance rules, recommending that the U.S. take cues from the EU AI Act by assuming a strong leadership position and adopting a risk-based approach. During the hearing, there were suggestions to incorporate constitutional AI by building values into AI models up front, rather than solely training them to avoid harmful content.

The senators debated the necessity of a comprehensive national privacy law to provide essential data protections for AI, with proponents of such a bill on both sides of the aisle. They also discussed the potential regulation of social media platforms that currently enjoy exemptions under Section 230 of the Communications Decency Act of 1996, specifically addressing the issue of harms to children. The United States finds itself at a critical juncture where the evolution of technology has outpaced the development of both regulatory frameworks and case law. As Congress grapples with the task of addressing the risks and ensuring the trustworthiness of AI, technology companies and AI users are taking the initiative to establish internal ethical principles and standards governing the creation and deployment of artificial and augmented intelligence technologies. These internal guidelines serve as a compass for organizational conduct, mitigating the potential for legal repercussions and safeguarding against negative reputational consequences in the absence of clear legal rules.

Effective July 1, 2023, a new Florida law will limit certain health care providers from storing patient information offshore. CS/CS/SB 264 (Chapter 2023-33, Laws of Florida) amends the Florida Electronic Health Records Exchange Act to require health care providers who use certified electronic health record technology to ensure that patient information is physically maintained in the continental United States or its territories or Canada.

The law broadly applies to “all patient information stored in an offsite physical or virtual environment,” including patient information stored through third-party or subcontracted computing facilities or cloud computing service providers. Further, it applies to all qualified electronic health records that are stored using any technology that can allow information to be electronically retrieved, accessed, or transmitted.
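To make the storage-location requirement operational, a compliance team might start with a first-pass inventory screen. Below is a minimal illustrative sketch in Python; the system names, region codes, and country fields are hypothetical assumptions, not drawn from the statute, and the statute’s “continental United States or its territories” wording may require finer-grained review than a country code alone.

```python
# Hypothetical first-pass screen for CS/CS/SB 264's storage-location rule.
# The inventory below is invented for illustration; a real review must also
# distinguish the continental U.S. and U.S. territories (e.g., Hawaii is
# neither), which a bare country code cannot do.

ALLOWED_COUNTRIES = {"US", "CA"}  # simplified: U.S. locations and Canada

systems = [
    {"name": "EHR primary", "region": "us-east-1", "country": "US"},
    {"name": "Imaging archive", "region": "ca-central-1", "country": "CA"},
    {"name": "Backup vault", "region": "eu-west-1", "country": "IE"},
]

def flag_offshore(inventory):
    """Return systems whose storage country falls outside the permitted set."""
    return [s for s in inventory if s["country"] not in ALLOWED_COUNTRIES]

for system in flag_offshore(systems):
    print(f"Review required: {system['name']} is stored in {system['country']}")
# Prints: Review required: Backup vault is stored in IE
```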

The new law is limited to health care providers listed below who use “certified electronic health record technology” or CEHRT – a term of art applicable to technology certified to the certification criteria adopted by the U.S. Department of Health and Human Services (HHS):

  • Certain entities licensed by the Florida Agency for Health Care Administration (AHCA), including hospitals, healthcare clinics, ambulatory surgical centers, home health agencies, hospices, home medical equipment providers, nursing homes, assisted living facilities, intermediate care facilities for persons with developmental disabilities, laboratories authorized to perform testing under the Drug-Free Workplace Act, birth centers, abortion clinics, crisis stabilization units, short-term residential treatment facilities, residential treatment facilities, residential treatment centers for children and adolescents, nurse registries, companion services or homemaker services providers, adult day care centers, adult family-care homes, homes for special services, transitional living facilities, prescribed pediatric extended care centers, healthcare services pools, and organ, tissue, and eye procurement organizations;
  • Certain licensed health care practitioners, including physicians, physician assistants, anesthesiologist assistants, pharmacists, dentists, chiropractors, podiatrists, naturopathic physicians, nursing home administrators, optometrists, registered nurses, advanced practice registered nurses, psychologists, clinical social workers, marriage and family therapists, mental health counselors, physical therapists, speech language pathologists, audiologists, occupational therapists, respiratory therapists, dieticians, orthotists, prosthetists, electrologists, massage therapists, licensed clinical laboratory personnel, medical physicists, genetic counselors, opticians, certified radiologic personnel, and acupuncturists;
  • Licensed pharmacies;
  • Certain mental health and substance abuse service providers and their clinical and nonclinical staff who provide inpatient or outpatient services;
  • Licensed continuing care facilities; and
  • Home health aides.

At this time, the HHS certification program includes inpatient EHRs for hospitals and ambulatory EHRs for eligible health care providers, the only provider types eligible to participate in the Centers for Medicare and Medicaid Services (CMS) payment programs requiring CEHRT. While other health care providers such as ambulatory surgery centers, pharmacies, long-term post-acute care providers, home health and hospice are not eligible to participate in those CMS payment programs, they arguably fall within the scope of the Florida offshoring prohibition if they “utilize” CEHRT. Further, given its broad language, the statute could technically be read as covering all patient information stored by a health care provider utilizing CEHRT, even if that patient information is stored in an application that is not so certified.

The new law also amends Florida’s Health Care License Procedures Act to require entities submitting an initial or renewal licensure application to AHCA to sign an affidavit attesting under the penalty of perjury that the entity is in compliance with the new requirement that patient information be stored in the continental United States or its territories or Canada. Entities licensed by AHCA must remain in compliance with the data storage requirement or face possible disciplinary action by AHCA.

Furthermore, the new law requires an entity licensed by AHCA to ensure that a person or entity who possesses a controlling interest in the licensed entity does not hold, either directly or indirectly, an interest in an entity that has a business relationship with a “foreign country of concern” or that is subject to section 287.135, Florida Statutes, which prohibits local governments from contracting with certain scrutinized companies. “Foreign country of concern” is defined by the new law as “the People’s Republic of China, the Russian Federation, the Islamic Republic of Iran, the Democratic People’s Republic of Korea, the Republic of Cuba, the Venezuelan regime of Nicolás Maduro, or the Syrian Arab Republic, including any agency of or any other entity of significant control of such foreign country of concern.”

Tennessee has joined the growing number of states that have enacted comprehensive data privacy laws. On the final day of this year’s legislative session, the Tennessee legislature passed the Tennessee Information Protection Act (TIPA), and Governor Bill Lee signed TIPA into law on May 11, 2023.  

TIPA marks a significant development in data privacy for businesses operating in the state. This comprehensive legislation grants consumers enhanced control over their personal information while establishing stringent responsibilities for businesses and service providers. Navigating TIPA’s extensive requirements is crucial for maintaining your company’s compliance and reputation.

Here are key takeaways from the bill passed by the legislature:

  • Entities Affected: The law applies to entities that conduct business in Tennessee or provide products or services to Tennessee residents, exceed $25 million in revenue, and meet one of these criteria (for a rough illustration, see the sketch following this list):
    • Control or process information of 25,000 or more Tennessee consumers per year and derive more than 50% of gross revenue from the sale of personal information; or
    • Control or process information of at least 175,000 Tennessee consumers.
  • Consumer Rights: TIPA creates consumer rights to confirm, access, correct, delete, or obtain a copy of their personal information, or to opt out of specific uses of their data. Controllers must respond to authenticated consumer requests within 45 days, with a possible 45-day extension, and must establish an appeal process for refusals to act on requests. If the controller cannot authenticate a consumer’s request, it may ask for additional information to do so.
  • Data Controller Responsibilities: Controllers must limit data collection and processing to what is necessary, maintain data security practices, avoid discrimination, and obtain consent for processing sensitive data. Controllers must provide a clear and accessible privacy notice detailing their practices, and, if selling personal information or using it for targeted advertising, disclose these practices and provide an opt-out option. Controllers must also offer a secure and reliable means for consumers to exercise their rights without requiring consumers to create a new account.
  • Controller–Processor Requirements: Processors must adhere to controllers’ instructions and assist them in meeting their obligations, including responding to consumer rights requests and providing necessary information for data protection assessments. Contracts between controllers and processors must outline data processing procedures, including confidentiality, data deletion or return, compliance demonstration, assessments, and subcontractor engagement. The determination of whether a person is acting as a controller or processor depends on the context and specific processing of personal information.
  • Data Protection Assessments: Controllers must conduct and document data protection assessments for specific data processing activities involving personal information. These assessments must weigh the benefits and risks of processing, with certain factors considered. Assessments are confidential, exempt from public disclosure, and not retroactive.
  • De-Identified Data Exemptions: Controllers must take measures to ensure that de-identified data cannot be associated with a natural person, publicly commit to not reidentifying data, and contractually obligate recipients to comply with the law. Consumer rights do not apply to pseudonymous data under certain conditions, and controllers must exercise oversight of disclosed pseudonymous or de-identified data.
  • Major Similarities to CCPA: TIPA shares many similarities with the California Consumer Privacy Act (CCPA), including (but not limited to):
    • Granting consumers the right to access, delete, and opt out of the sale of their personal information, and requiring businesses to provide notice of their data collection and usage practices;
    • Requiring controllers and processors to enter into contracts outlining the terms and conditions of data processing and obligating subcontractors to meet the obligations of the processor; and
    • Requiring data protection assessments for certain processing activities, weighing the benefits and risks associated with the processing.
  • Affirmative Defense: TIPA provides an “affirmative defense” to alleged violations of the law for entities that adhere to a written privacy program conforming to the NIST Privacy Framework or comparable standards. The privacy program’s scale and scope must be appropriate based on factors such as business size, activities, personal information sensitivity, available tools, and compliance with other laws. In addition, certifications from the Asia-Pacific Economic Cooperation’s Cross-Border Privacy Rules and Privacy Recognition for Processors systems may be considered in evaluating the program.
  • Enforcement: The Tennessee Attorney General retains exclusive enforcement authority for TIPA; the law expressly states that there is no private right of action. The attorney general must provide 60 days’ written notice and an opportunity to cure before initiating an enforcement action. If the alleged violations are not cured, the attorney general may file an action and seek declaratory and/or injunctive relief, civil penalties of up to $7,500 for each violation, reasonable attorney’s fees and investigative costs, and treble damages in the case of a willful or knowing violation.
  • Dates and Deadlines: TIPA becomes effective on July 1, 2025.
  • Exemptions: The law includes numerous exemptions, including (but not limited to):
    • Government entities;
    • Financial institutions, their affiliates, and data subject to the Gramm-Leach-Bliley Act (GLBA);
    • Insurance companies;
    • Covered entities, business associates, and protected health information governed by the Health Insurance Portability and Accountability Act (HIPAA) and/or the Health Information Technology for Economic and Clinical Health Act (HITECH);
    • Nonprofit organizations;
    • Higher education institutions; and
    • Personal information that is subject to other laws such as the Children’s Online Privacy Protection Act (COPPA), the Family Educational Rights and Privacy Act (FERPA), and the Fair Credit Reporting Act (FCRA).
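Returning to the applicability thresholds at the top of this list, the test reduces to a simple conjunction and disjunction. Here is a minimal sketch in Python; the function and parameter names are illustrative assumptions, not statutory language, and edge cases belong with counsel.

```python
# Illustrative sketch of TIPA's applicability test as summarized above:
# over $25M in revenue AND (25k+ TN consumers with >50% of gross revenue
# from selling personal information, OR 175k+ TN consumers).

def tipa_applies(annual_revenue: float,
                 tn_consumers: int,
                 share_of_revenue_from_sales: float) -> bool:
    """Rough first-pass applicability check; not legal advice."""
    if annual_revenue <= 25_000_000:
        return False
    sells_at_scale = tn_consumers >= 25_000 and share_of_revenue_from_sales > 0.50
    processes_at_scale = tn_consumers >= 175_000
    return sells_at_scale or processes_at_scale

# Example: $30M revenue, 40,000 Tennessee consumers, 60% of revenue from data sales
print(tipa_applies(30_000_000, 40_000, 0.60))  # True
```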

Despite its extensive carve-outs, TIPA grants consumers broad rights over their personal information and places stringent compliance obligations on businesses (controllers) and service providers (processors). Businesses should start planning for compliance now to avoid costly enforcement actions down the road.

For more information and other updates and alerts regarding privacy law developments, subscribe to Bradley’s privacy blog, Online and On Point.

Federally insured credit unions are now required to report a cyber incident to the National Credit Union Administration (NCUA) Board within 72 hours. This final rule was unanimously approved by the NCUA on February 17, 2023, and will take effect September 1, 2023 – giving credit unions just over six months to update their data incident response teams, policies, and procedures accordingly.

The new rule states that a “reportable” cyber incident is an incident that leads to at least one of the following outcomes:

  • A “substantial loss” of the confidentiality, integrity, or availability of a network or member information system that (i) causes the unauthorized access to or exposure of “sensitive data,” (ii) disrupts vital member services, or (iii) seriously impacts the “safety and resiliency” of operational systems and processes;
  • A disruption of business operations, vital member services, or a member information system resulting from a cyberattack or exploitation of vulnerabilities; or
  • A disruption of business operations or unauthorized access to sensitive data facilitated through, or caused by, a compromise of a credit union service organization, cloud service provider, or other third-party data hosting provider or by a supply chain compromise.

If a credit union experiences any of these outcomes, it must notify the NCUA “as soon as possible but no later than 72 hours” from the time it reasonably believes that it has experienced a reportable cyber incident. Disruption to business operations appears to be the central consideration in whether a cyber incident is reportable, which mirrors the considerations in the banking regulators’ final rule governing federally insured banks. The NCUA has indicated that it will issue additional guidance before the rule takes effect on September 1, 2023, including examples of both reportable and non-reportable incidents and the proper method for providing notice to the NCUA via email, telephone, or other similar prescribed methods. This initial notification is merely an “early alert” to the NCUA and does not require a detailed incident assessment within the 72-hour time frame.
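As a rough illustration of the timing obligation, the sketch below (Python; the timestamps are invented) computes the outside notification deadline from the moment of reasonable belief, which is what starts the clock, not completion of a full assessment.

```python
# Hypothetical sketch: the 72-hour NCUA notification clock runs from the
# time the credit union reasonably believes a reportable incident occurred.

from datetime import datetime, timedelta, timezone

def ncua_notice_deadline(reasonable_belief_at: datetime) -> datetime:
    """Latest time the early-alert notice is due: 72 hours after reasonable belief."""
    return reasonable_belief_at + timedelta(hours=72)

belief = datetime(2023, 9, 5, 14, 30, tzinfo=timezone.utc)  # example timestamp
print(ncua_notice_deadline(belief))  # 2023-09-08 14:30:00+00:00
```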

In response to public comments, the NCUA clarified that this reporting requirement is distinct from the existing five-day period to report “catastrophic acts,” which are defined as “any disaster, natural or otherwise, resulting in physical destruction or damage to the credit union or causing an interruption in vital member services” that is projected to last more than two consecutive business days. The NCUA dismissed concerns that it may be difficult for credit unions to differentiate between a “catastrophic act” and a “reportable cyber incident,” and rejected requests to apply the longer five-day reporting period to events that may fall within both definitions. The NCUA also noted that “catastrophic acts” include non-natural disasters such as a power grid failure or physical attack and indicated that it may provide additional clarification at a later date if needed. As currently drafted, a reportable cyber incident may well fall within both definitions, and if so, credit unions should err on the side of reporting within the shorter 72-hour window. To provide some clarity on the scope of the new rule, the NCUA stated that it would retain the non-exhaustive examples of reportable cyber incidents set forth in the proposed rule, which include:

  • If a credit union becomes aware that a substantial level of sensitive data is unlawfully accessed, modified, or destroyed, or if the integrity of a network or member information system is compromised;
  • If a credit union becomes aware that a member information system has been unlawfully modified and/or sensitive data has been left exposed to an unauthorized person, process, or device, regardless of intent;
  • A DDoS attack that disrupts member account access;
  • A computer hacking incident that disables a credit union’s operations;
  • A ransom malware attack that encrypts a core banking system or backup data;
  • Notification from a third party that it has experienced a breach of a credit union employee’s personally identifiable information;
  • A detected, unauthorized intrusion into a network information system;
  • Discovery or identification of zero-day malware (malware that exploits a previously unknown hardware, firmware, or software vulnerability) in a network or information system;
  • Internal breach or data theft by an insider;
  • Member information compromised as a result of card skimming at a credit union’s ATM; or
  • Sensitive data exfiltrated outside of the credit union or a contracted third party in an unauthorized manner, such as through a flash drive or online storage account.

On the other hand, blocked phishing attempts, failed attempts to gain access to systems, and unsuccessful malware attempts would not trigger a reporting requirement.

Notably, the NCUA’s reporting timeline is longer than the 36-hour timeline that applies to banks. The NCUA chose the 72-hour timeline to align the rule with reporting requirements for critical infrastructure, and specifically with the Cyber Incident Reporting for Critical Infrastructure Act of 2022 (CIRCIA), which requires certain entities in critical infrastructure sectors—such as financial services, telecommunications, information technology, healthcare, and energy—to report certain cyber incidents to the Cybersecurity and Infrastructure Security Agency. This timeframe also aligns with the GDPR and the UK Data Protection Act 2018, which require notification to the supervisory authority “without undue delay” and, where feasible, within 72 hours of becoming aware of a reportable breach. The NCUA decided to roll out its final reporting rule even though the final rule implementing CIRCIA is not required to be published until 2025.

Although the upcoming NCUA guidance will provide additional detail, credit unions should not delay putting systems in place to detect and report cyber incidents where appropriate. Such preparations could include conducting training to ensure that employees are aware of the new reporting requirements, establishing a chain of command for reporting suspected cyber incidents for review, updating the credit union’s incident response plan, and assigning relevant task owners for the various phases of that plan. Some aspects of the incident response plan will likely need to be supplemented once the NCUA issues additional guidance closer to the implementation date; however, credit unions should not wait to revisit their data security monitoring and incident response procedures given the short notification timeframe.

The government’s announcement of renewed emphasis on cybersecurity enforcement has spawned recent million-dollar enforcement actions. Continued government attention on cybersecurity promises a treacherous enforcement environment in 2023 and beyond.

Several recent government initiatives have focused on cybersecurity enforcement. Toward the end of 2021, the Department of Justice announced a Civil Cyber-Fraud Initiative to use the False Claims Act (“FCA”) to hold companies and individuals accountable for (1) deficient cybersecurity, (2) misrepresentations of cybersecurity, and/or (3) insufficient monitoring or reporting of cybersecurity incidents. The Cyber Incident Reporting for Critical Infrastructure Act of 2022 (“CIRCIA”) now requires the Cybersecurity and Infrastructure Security Agency to develop and implement regulations requiring covered entities to report covered cybersecurity incidents. The FTC Safeguards Rule requires non-banking financial institutions, including mortgage brokers and automobile dealerships, to develop, implement, and maintain a comprehensive cybersecurity program to protect customer information. Most pressing, the deadline for compliance with the FTC Safeguards Rule is now June 2023.

In July of 2022, as part of the Civil Cyber-Fraud Initiative, the Department of Justice announced a $9 million Aerojet settlement to resolve cybersecurity fraud claims brought pursuant to the FCA by a whistleblower who was the former Senior Director of Cyber Security, Compliance, and Controls for Aerojet. The whistleblower claimed that Aerojet’s contracts with the government mandated specific cybersecurity standards, and despite knowing that its systems did not meet these standards, Aerojet pursued and fraudulently obtained the contracts.

The Aerojet qui tam and resulting settlement forecast how this enforcement mechanism might play out in the cybersecurity space. The government has now expressly promised that when contractual cybersecurity standards are not satisfied, it will attempt to use the FCA to pursue cybersecurity fraud claims. And, as the deadline for compliance with the FTC Safeguards Rule quickly approaches, companies must be prepared for certification requests to incorporate various cybersecurity requirements, including compliance with the FTC Safeguards Rule. To avoid potential FCA liability, companies and individuals need to be acutely aware of any cybersecurity requirements in government contracts, including how compliance is certified and how to monitor and report any cybersecurity incidents. Often, organizations are not aware of what they have agreed to contractually regarding cybersecurity or privacy. A company employee may receive an email link from a customer and merely click boxes certifying compliance in order to win the work, without ever reading the terms to which they are binding the company.

Companies may not be prepared for the consequences of cybersecurity requirements and certifications in contracts—but they should be.  This year promises to be an even more active year for cybersecurity enforcement. 

For more information and other updates and alerts regarding privacy law developments, subscribe to Bradley’s privacy blog Online and On Point.

While the federal government has relied on the False Claims Act (FCA) to combat fraud across a range of sectors since 1986, in 2022 the Department of Justice set its sights on new enforcement priorities, including fraud in the cybersecurity realm. In October 2021, DOJ announced its Civil Cyber-Fraud Initiative, dedicated to using the FCA to combat new and emerging cyber threats. The initiative utilizes the tools contained in the FCA to hold government contractors liable for providing deficient cybersecurity products or services. Additionally, the initiative seeks to root out misrepresentations of cybersecurity practices and intentional failures to report cyber incidents. The government obtained its first settlement under this initiative in 2022, collecting $930,000. Companies and individuals should expect that this is just the beginning of FCA cases related to cybersecurity. To keep you apprised of current enforcement trends and the status of the law, Bradley’s Government Enforcement and Investigations Practice Group is pleased to present the False Claims Act: 2022 Year in Review, our 11th annual review of significant FCA cases, developments, and trends.

Under the European Union’s General Data Protection Regulation (GDPR), individual data subjects have the right to request information from the data controller about the processing of their personal data. This includes the right to know the “recipients or categories of recipients” to whom the data subject’s personal data has been disclosed. To date, data controllers have defaulted to disclosing only the categories of recipients, rather than the specific recipients by name. But that’s about to change.

On January 12, 2023, the Court of Justice of the European Union (CJEU) ruled that data controllers must specifically identify the recipients, rather than solely the categories of recipients, in response to a data subject access request. Although the ruling specifically addressed data subject access requests pursuant to Article 15 (data subject access rights) of GDPR, the decision also has significant implications for required disclosures at the point of collection under Article 13.  

Background

This case began when an Austrian individual, RW, submitted a data subject access request to Österreichische Post AG (OP), an Austrian postal service provider, seeking the identity of any recipients of his data. Per Article 15 of GDPR, data subjects may request “the recipients or categories of recipient to whom the personal data have been or will be disclosed.” In its response to RW, OP stated that it shares RW’s personal information with trading partners for marketing purposes but refused to identify the specific recipients. RW filed suit, seeking the identity of the recipients, but the case was initially dismissed on the basis that GDPR “gives the controller the option of informing the data subject only of the categories of recipient, without having to identify by name the specific recipients to whom personal data are transferred.” RW appealed the decision to the Austrian Supreme Court (Oberster Gerichtshof), which referred the question to the CJEU for a preliminary ruling.

CJEU Adopts Expansive Interpretation of GDPR

In a decision with widespread ramifications, the CJEU ruled that controllers must reveal the specific identities of data recipients to the data subject in response to a data subject access request. Revealing the categories of recipients alone is only sufficient if revealing the specific identity of recipients is impossible. In support of its decision, the CJEU emphasized that, in light of the GDPR’s overall goals, the right of access requires transparency in all personal data processing. The CJEU noted that access to the identity of recipients is necessary in order for the data subject to exercise data subject rights under GDPR (such as the rights to rectification, erasure, and restriction of processing).

Takeaways

This ruling has significant implications when it comes to both data subject access requests and point-of-collection disclosures:

  • Data subject access requests – At a minimum, the CJEU decision now requires controllers to disclose the specific identities of data recipients in response to data subject access requests. If a controller determines that it is impossible to share specific identities (specifically because it does not yet know the identities of all recipients), the controller should clearly document the reasoning by which it made the impossibility determination – and it had better actually be impossible. In the context of GDPR’s Article 14.5(b) (providing disclosures where data was collected from a third party), the European Data Protection Board (EDPB) stated that “something is either impossible or it is not; there are no degrees of impossibility.” And it is likely the EDPB will adopt a similar attitude with regard to data subject access requests.
  • Disclosures at point of collection – In addition to disclosing specific recipient identities in response to data subject access requests, controllers should consider revisiting point-of-collection disclosures to EU data subjects in general. Like GDPR’s Article 15 (data subject access rights), Article 13 (point-of-collection disclosures) requires the disclosure of “the recipients or categories of recipients of the personal data, if any.” While the CJEU has yet to extend its reasoning to point-of-collection disclosures, if controllers already know the identities of data recipients, they should consider disclosing those identities at the point of collection as well.

As a reminder, data subject access requests are subject to the threshold inquiry of whether the request is manifestly unfounded or excessive, so the transparency required under the GDPR is not without its limits. Also, controllers are not required to disclose the identity of specific recipients if it would be impossible — specifically, in the CJEU’s view, if the identity is not yet known. An impossibility determination should be used sparingly (and documented thoroughly, if used) in light of the EDPB’s findings on impossibility. At the end of the day, companies would be well-advised to err on the side of honoring reasonable data subject access requests where possible, as the costs of defending litigation and/or a regulatory investigation would dwarf the administrative costs of responding to these requests.

For more information and other updates and alerts regarding privacy law developments, subscribe to Bradley’s privacy blog Online and On Point.

On January 26, 2023, the U.S. National Institute of Standards and Technology (NIST) released the Artificial Intelligence (AI) Risk Management Framework (AI Risk Management Framework 1.0), a voluntary guidance document for managing and mitigating the risks of designing, developing, deploying, and using AI products and services. NIST also released a companion playbook for navigating the framework, a roadmap for future work, and mapping of the framework to other standards and principles, both at home and abroad. This guidance, developed in a consensus-based approach across a broad cross section of stakeholders, offers an essential foundation and important building block toward responsible AI governance.

The AI Framework

We stand at a crossroads as case law and regulatory law struggle to keep up with technology. As regulators consider policy solutions and levers to regulate AI risks and trustworthiness, many technology companies have adopted self-governing ethical principles and standards for the development and use of artificial and augmented intelligence technologies. In the absence of clear legal rules, these internal expectations guide organizational actions and serve to reduce the risk of legal liability and negative reputational impact.

Over the past 18 months, NIST developed the AI Risk Management Framework with input from and in collaboration with the private and public sector. The framework takes a major step toward public-private collaboration and consensus through a structured yet flexible approach allowing organizations to anticipate and introduce accountability structures. The first half of the AI Risk Management Framework outlines principles for trustworthy AI, and the remainder describes how organizations can address these in practice by applying the core functions of creating a culture of risk management (governance), identifying risks and context (map), assessing and tracking risks (measure), and prioritizing risk based on impact (manage). NIST plans to work with the AI community to update the framework periodically.

Specifically, the framework offers noteworthy contributions on the pathway toward governable and accountable AI systems: 

  • Moves beyond technical standards to consider social and professional responsibilities in making AI risk determinations
  • Establishes trust principles, namely that responsible AI systems are valid and reliable; safe, secure and resilient; accountable and transparent; explainable and interpretable; privacy-enhanced; and fair, with harmful bias managed
  • Emphasizes context (e.g., industry, sector, business purposes, technology assessments) in critically analyzing the risks and potential impacts of particular use cases
  • Provides steps for managing risks via governance functions; mapping broad perspectives and interdependencies to testing, evaluation, verification, and validation within a defined business case; measuring AI risks and impacts; and managing resources to mitigate negative impacts
  • Rationalizes the field so that organizations of all sizes can adopt recognized practices and scale as AI technology and regulations develop

The Playbook

This companion tool provides actionable strategies for the activities in the core framework. As with NIST’s Cybersecurity and Privacy Frameworks, the AI Risk Management Framework is expected to evolve with stakeholder input. NIST expects the AI community will build out these strategies for a dynamic playbook and will update the playbook in Spring 2023 with any comments received by the end of February.

The Roadmap

The roadmap for the NIST AI Risk Management Framework identifies the priorities and key activities that NIST and other organizations could undertake to advance the state of AI trustworthiness. Importantly, NIST intends to grapple with one of the more complex issues in implementing AI frameworks: balancing the trade-offs among the trust principles in light of the use cases and values at play. NIST seeks to showcase profiles and case studies that highlight particular use cases and organizational challenges. NIST also will work across the federal government and on the international stage to identify and align standards development.

Mapping to Other Standards

The AI Risk Management Framework includes a map that crosswalks AI principles to global standards, such as the proposed European Union Artificial Intelligence Act, the Organisation for Economic Co-operation and Development (OECD) Recommendation on AI, Executive Order 13960, and the Biden administration’s Blueprint for an AI Bill of Rights. The crosswalk enables organizations to readily leverage existing frameworks and principles.

Conclusion

AI is a rapidly developing field that offers many potential benefits but also poses novel challenges and risks. With the launch of the framework, NIST also published supportive stakeholder perspectives from business and professional associations, technology companies, and think tanks such as the U.S. Chamber of Commerce, the Bipartisan Policy Center, and the Federation of American Scientists. The NIST AI Risk Management Framework’s foundational approach, which evolves as our understanding of the technology and its impact evolves, provides flexibility and a starting point to help regulators improve policy options while avoiding a more prescriptive approach that might stifle innovation. The AI Risk Management Framework and its accompanying resources articulate expectations and will help AI stakeholders implement best practices for managing the opportunities, responsibilities, and challenges of artificial intelligence technologies.

For more information and other updates and alerts regarding privacy law developments, subscribe to Bradley’s privacy blog Online and On Point.

Data Privacy Day, annually celebrated on January 28, is the new year nudge we need to prioritize the safety of our personal information. The digital world will continue to evolve, and the line between our online and offline lives will continue to blur. As we continue to rely on digital technology to manage our personal and professional lives, we must rethink what we share, when we share, how we share, and who we share it with.

Grab your coffee and join us for a morning Q&A with our Bradley Cybersecurity and Privacy team to celebrate Data Privacy Day (a day early!). We will be available between 10 and 10:50 a.m. ET on Friday, January 27, for you to drop in and ask your toughest privacy questions. Please register here, and we hope to see you there!

Over the past few decades, technology has taken a fascinating turn. One can use a retinal scan to expedite the airport security process. Need to clock in for work? This can be done with the scan of a finger. We even have the convenience of unlocking our iPhones with a simple, quick gaze into the phone’s front camera. While the use of this technology has certainly made things easier, such use across various industries has led to concerns about individual privacy.

In response to these concerns, the Mississippi Legislature, on January 12, 2023, proposed House Bill 467, the Biometric Identifiers Privacy Act. The proposed legislation, among other things, seeks to require private entities (1) to be forthcoming about their collection and storage of individuals’ biometric identifiers, and (2) to develop a policy that establishes a retention schedule and guidelines for destroying the biometric identifiers of individuals.

What are biometric identifiers?

Inquiring minds may be wondering, what are biometric identifiers? Simply put, and pursuant to the act, biometric identifiers are defined as “the data of an individual generated by the individual’s unique biological characteristics.” Biometric identifiers may include, but are not limited to:

  • Faceprints
  • Fingerprints
  • Voiceprints
  • Retina or iris images

The act excludes the following from the definition of biometric identifiers:

  • A writing sample or written signature
  • A photograph or video, except for data collected from the biological characteristics of a person depicted in the photograph or video
  • A human biological sample used for valid scientific testing or screening
  • Demographic data
  • A physical description, including height, weight, hair color, eye color, or a tattoo description
  • Donated body parts that have been obtained or stored by a federal agency
  • Information collected, used, or stored for purposes under the federal Health Insurance Portability and Accountability Act of 1996 (HIPAA)
  • Images or film of the human anatomy used to diagnose and treat medical conditions or to further validate scientific testing
  • Information collected, used, or disclosed for human subject research that is conducted in accordance with the federal policy for the protection of human subjects

If passed, who will the act apply to?

The act will apply to private entities only. The act defines a private entity as “any individual acting in a commercial context, partnership, corporation, limited liability company, association, or other group, however organized.” The act will not apply to state or local government agencies or entities.

What will a Mississippi private entity need to do to ensure it is in compliance with the act?

If enacted, Mississippi private entities in possession of biometric identifiers will be required to, among other things:

  • Inform subjected individuals (or their legal representative), in writing, that they are collecting or storing that individual’s biometric identifier(s)
  • Inform the individual, in writing, of the purpose of the collection, storage, and/or use of their biometric identifier(s) and the length of time for which the identifiers will be collected, stored, and/or used
  • Obtain a written release executed by the subject (or legal representative) of the biometric identifier
  • Develop a publicly accessible written policy that establishes a retention schedule and guidelines for permanently destroying a biometric identifier
    • The entity is not required to make its policy publicly accessible if the policy (1) applies only to employees of that private entity, and (2) is used solely within the private entity for operation of the private entity.
    • Additionally, the entity must destroy any biometric identifier in its possession on the earliest of (1) the date on which the purpose of collecting or obtaining the biometric identifier has been satisfied; (2) one year after the individual’s last interaction with the private entity; or (3) 30 days after receiving a request from the individual (or legal representative) to delete the biometric identifier.
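As a concrete illustration of this “earliest of” rule, the sketch below (Python; the dates and parameter names are hypothetical) computes a destruction deadline from whichever triggers apply.

```python
# Hypothetical sketch of the proposed act's "earliest of" destruction rule.
# Any trigger may be absent (None); the deadline is the earliest that applies.

from datetime import date, timedelta
from typing import Optional

def destruction_deadline(purpose_satisfied: Optional[date],
                         last_interaction: Optional[date],
                         deletion_request: Optional[date]) -> Optional[date]:
    """Earliest of: purpose satisfied, one year after last interaction,
    or 30 days after a deletion request."""
    candidates = []
    if purpose_satisfied:
        candidates.append(purpose_satisfied)
    if last_interaction:
        candidates.append(last_interaction + timedelta(days=365))  # ~one year
    if deletion_request:
        candidates.append(deletion_request + timedelta(days=30))
    return min(candidates) if candidates else None

# Example: a June 1 deletion request beats the one-year anniversary
print(destruction_deadline(None, date(2023, 1, 15), date(2023, 6, 1)))  # 2023-07-01
```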

Furthermore, if an individual (or legal representative) requests that the private entity disclose any biometric identifiers that the private entity collected, the private entity must do so free of charge.

Of course, nothing in life is free. Such “free” disclosure is specific to entities that (1) do business in Mississippi; (2) are for profit; (3) collect consumers’ biometric identifiers or have such identifiers collected on their behalf; and (4) obtained revenue exceeding $10 million in the preceding calendar year.

What does this mean for Mississippi private entities?

Let’s face it, most people are sick and tired of having to remember passwords and verification questions for every system or database they must access on a regular basis. Because of this, people may prefer the collection, storage, and/or use of their biometric identifiers in exchange for convenience and easy access. However, use of such biometric identifiers will require entities to comply with applicable state and federal laws. To avoid any civil liability for the failure to protect an individual’s biometric identifiers under Mississippi law, Mississippi private entities should:

  • Prepare policies that are in compliance with the act, and make such policies available to individuals whose biometric data is being obtained. Specifically, draft a policy that details the entity’s retention plan for the collection and storage of biometric identifiers, as well as guidelines for destroying the biometric identifiers. Compliance with such policies is key.
  • Inform individuals, in writing, that you are collecting their biometric data. A private entity should also inform the individual, in writing, of the specific purpose and length of term for collecting the biometric data.
  • Obtain written releases from individuals whose biometric identifiers are being collected, stored, and/or used.
  • Use strong cybersecurity software and processes using a reasonable standard of care within the private entity’s industry to protect the biometric identifiers of individuals.
  • Destroy the biometric identifiers upon request by the individual.
  • Train management on the policies and the importance of protecting biometric identifiers so they can answer individuals’ questions and alleviate concerns regarding the collection of their biometric identifiers.

A failure to comply with the act will have consequences. The act creates a private right of action against an offending entity. If successful in proving their claims, individuals may recover the greater of $1,000 or actual damages for a negligent violation, or the greater of $5,000 or actual damages for an intentional or reckless violation, plus reasonable attorneys’ fees and costs and any other relief the court deems appropriate.
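In arithmetic terms, the statutory floors operate as a simple maximum, as in this hypothetical sketch (Python):

```python
# Hypothetical sketch of the proposed act's damages floors: the greater of
# $1,000 (negligent) or $5,000 (intentional/reckless) and actual damages,
# before attorneys' fees, costs, and any other relief a court awards.

def statutory_recovery(actual_damages: float, intentional_or_reckless: bool) -> float:
    floor = 5_000 if intentional_or_reckless else 1_000
    return max(floor, actual_damages)

print(statutory_recovery(750, False))    # 1000 (floor controls)
print(statutory_recovery(12_000, True))  # 12000 (actual damages control)
```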

If passed, the act will take effect on July 1, 2023. For more information and other updates and alerts regarding privacy law developments, subscribe to Bradley’s privacy blog Online and On Point.