
While Andrew Ferguson advocates for a restrained regulatory approach at the FTC, his statements and voting record reveal clear priority areas where businesses can expect continued vigorous enforcement. Two areas stand out in particular: children’s privacy and location data. This is the second post in our series on what to expect from the FTC under Ferguson as chair.

Our previous post examined Ferguson’s broad regulatory philosophy centered on “Staying in Our Lane.” This post focuses specifically on the two areas where Ferguson has shown the strongest commitment to vigorous enforcement, explaining how these areas are exceptions to his generally cautious approach to extending FTC authority.

Prioritizing Children’s Privacy

Ferguson has demonstrated strong support for protecting children’s online privacy. In his January 2025 concurrence on COPPA Rule amendments, he supported the amendments as “the culmination of a bipartisan effort initiated when President Trump was last in office.” However, he also identified specific problems with the final rule, including:

  • Provisions that might inadvertently lock companies into existing third-party vendors, potentially harming competition;
  • A new requirement prohibiting indefinite data retention that could have unintended consequences, such as deleting childhood digital records that adults might value; and
  • Missed opportunities to clarify that the rule doesn’t obstruct the use of children’s personal information solely for age verification.

Ferguson’s enforcement record as commissioner reveals his belief that children’s privacy represents a “settled consensus” area where the commission should exercise its full enforcement authority. In the Cognosphere (Genshin Impact) settlement from January 2025, Ferguson made clear that COPPA violations alone were sufficient to justify his support for the case, writing that “these alleged violations of COPPA are severe enough to justify my voting to file the complaint and settlement even though I dissent from three of the remaining four counts.”

In his statement on the Social Media and Video Streaming Services Report from September 2024, Ferguson argued for empowering parents:

“Congress should empower parents to assert direct control over their children’s online activities and the personal data those activities generate… Parents should have the right to see what their children are sending and receiving on a service, as well as to prohibit their children from using it altogether.”

The FTC’s long history of COPPA enforcement across multiple administrations means businesses should expect continued aggressive action in this area under Ferguson. His statements suggest he sees children’s privacy as uniquely important, perhaps because children cannot meaningfully consent to data collection and because Congress has provided explicit statutory authority through COPPA, aligning with his preference for clear legislative mandates.

Location Data: A Clear Focus Area

Ferguson has shown particular concern about precise location data, which he views as inherently revealing of private details about people’s lives. In his December 2024 concurrence on the Mobilewalla case, he supported holding companies accountable for:

“The sale of precise location data linked to individuals without adequate consent or anonymization,” noting that “this type of data—records of a person’s precise physical locations—is inherently intrusive and revealing of people’s most private affairs.”

The FTC’s actions against location data companies signal that this will remain a priority enforcement area. Although Ferguson concurred in the complaints in the Mobilewalla case, he took a nuanced position. He supported charges related to selling precise location data without sufficient anonymization and without verifying consumer consent. However, he dissented from counts alleging unfair practices in categorizing consumers based on sensitive characteristics, arguing that “the FTC Act imposes consent requirements in certain circumstances. It does not limit how someone who lawfully acquired those data might choose to analyze those data.”

What This Means for Businesses

Companies should pay special attention to these two priority areas in their compliance efforts:

For Children’s Privacy:

  • Revisit COPPA compliance if your service may attract children
  • Review age verification mechanisms and parental consent processes
  • Implement data minimization practices for child users
  • Consider broader parental control features

For Location Data:

  • Implement clear consent mechanisms specifically for location tracking
  • Consider anonymization techniques for location information (a sketch of one approach follows this list)
  • Document processes for verifying consumer consent for location data
  • Be cautious about tying location data to individual identifiers
  • Implement and document reasonable retention periods for location data
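
On the anonymization point above, a minimal sketch of one common technique, coarsening coordinate precision and stripping individual identifiers before location records are stored or shared, appears below. The function names, field names, and rounding threshold are illustrative assumptions, not a regulatory standard for what counts as anonymized.

```python
# Illustrative sketch only: coarsening precise location data before
# storage or sale. The rounding threshold is an assumption, not a
# regulatory standard; counsel should confirm what qualifies as
# "anonymized" for a given use.


def coarsen_coordinates(lat: float, lon: float, decimals: int = 2) -> tuple[float, float]:
    """Round coordinates so they identify an area, not an address.

    Two decimal places is roughly a ~1 km grid; the appropriate
    precision depends on context and legal guidance.
    """
    return round(lat, decimals), round(lon, decimals)


def strip_identifiers(record: dict) -> dict:
    """Drop fields that tie a location record to an individual."""
    disallowed = {"device_id", "advertising_id", "email", "name"}
    return {k: v for k, v in record.items() if k not in disallowed}


if __name__ == "__main__":
    raw = {
        "device_id": "abc-123",  # hypothetical identifier to be removed
        "lat": 33.52065,
        "lon": -86.80249,
        "timestamp": "2025-04-01T14:32:00Z",
    }
    cleaned = strip_identifiers(raw)
    cleaned["lat"], cleaned["lon"] = coarsen_coordinates(raw["lat"], raw["lon"])
    print(cleaned)  # location is now a coarse area with no device identifier
```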

While Ferguson may be more cautious about expanding the FTC’s regulatory reach in new directions, these established priority areas will likely see continued robust enforcement under his leadership. Companies should ensure their practices in these sensitive domains align with existing legal requirements.


The Telephone Consumer Protection Act (TCPA) continues to be a major source of litigation risk for businesses engaged in outbound marketing. In the first quarter of 2025, litigation under the TCPA surged dramatically, with 507 class action lawsuits filed — more than double the volume compared to the same period in 2024. This steep rise reflects shifting enforcement patterns and a growing emphasis on consumer communications practices. Companies should be aware of several emerging trends and evolving interpretations that are shaping the compliance environment.

TCPA Class Action Trends

In the first quarter of 2025, 507 TCPA class actions were filed, representing a 112% increase compared to the same period in 2024. April filings also reflected continued growth, indicating a sustained trend.

Key statistics:

  • Approximately 80% of current TCPA lawsuits are class actions.
  • By contrast, only 2%-5% of lawsuits under other consumer protection statutes, such as the Fair Debt Collection Practices Act (FDCPA) or the Fair Credit Reporting Act (FCRA), are filed as class actions.

This trend highlights the unique procedural and financial exposure associated with TCPA compliance.

Time-of-Day Allegations on the Rise

There has been an uptick in lawsuits alleging that companies are contacting consumers outside of the TCPA’s permitted calling hours — before 8 a.m. or after 9 p.m. local time. In March 2025 alone, a South Florida firm filed over 100 lawsuits alleging violations of these timing restrictions, many of which involved text messages.

Under the TCPA, telephone solicitations are not permitted during restricted hours, unless:

  • The consumer has given prior express permission;
  • There is an established business relationship; or
  • The call is made by or on behalf of a tax-exempt nonprofit organization.

It is currently unclear whether these exemptions definitively apply to time-of-day violations. A petition filed with the FCC in March 2025 seeks clarification on whether prior express consent precludes liability for messages sent during restricted hours. The FCC accepted the petition and opened a public comment period that closed in April.
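
As a practical illustration of the timing restriction, a sender-side check can resolve the recipient's local hour before any message goes out. The sketch below assumes the recipient's IANA time zone is already known; mapping a phone number to a time zone, and handling the exemptions above, requires more care in practice.

```python
# Minimal sketch of a TCPA quiet-hours check. Assumes the recipient's
# IANA time zone is already known; in practice, mapping a phone number
# to a time zone (and handling exemptions such as prior express
# consent) requires careful legal and engineering review.
from datetime import datetime
from zoneinfo import ZoneInfo

QUIET_START_HOUR = 21  # no solicitations at or after 9 p.m. local time
QUIET_END_HOUR = 8     # no solicitations before 8 a.m. local time


def within_permitted_hours(recipient_tz: str, now_utc: datetime | None = None) -> bool:
    """Return True if a solicitation may be sent right now."""
    now_utc = now_utc or datetime.now(ZoneInfo("UTC"))
    local = now_utc.astimezone(ZoneInfo(recipient_tz))
    return QUIET_END_HOUR <= local.hour < QUIET_START_HOUR


if __name__ == "__main__":
    for tz in ("America/New_York", "America/Los_Angeles"):
        print(tz, within_permitted_hours(tz))
```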

Drivers of Increased Litigation

Several factors appear to be contributing to the rise in TCPA filings:

  • An increase in plaintiff firm activity and case volume;
  • Ongoing confusion regarding the interpretation of revocation rules; and
  • Continued complaints regarding telemarketing practices, including unwanted robocalls and text messages.

These dynamics reflect a broader trend of regulatory and private enforcement in the consumer protection space.

Compliance Considerations

Businesses should take steps to ensure their outbound communication practices are aligned with current TCPA requirements. This includes:

  • Documenting consumer consent clearly at the point of lead capture (a sketch of one such record follows this list);
  • Ensuring systems adhere to permissible calling and texting times;
  • Reviewing policies and procedures for revocation of consent; and
  • Seeking guidance from counsel with experience in consumer protection laws.
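
On the first point, the sketch below illustrates the kind of consent record a business might preserve at lead capture so it can be produced later in litigation. The field names are illustrative assumptions, not prescribed elements.

```python
# Illustrative consent record captured at the point of lead capture.
# Field names are assumptions; the key idea is preserving who consented,
# to what, when, how, and whether consent was later revoked.
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class ConsentRecord:
    phone_number: str
    consent_scope: str            # e.g., "marketing texts"
    capture_method: str           # e.g., "web form", "IVR", "paper"
    disclosure_text: str          # the exact language shown to the consumer
    captured_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    revoked_at: datetime | None = None

    def is_active(self) -> bool:
        return self.revoked_at is None


record = ConsentRecord(
    phone_number="+12055550100",  # hypothetical number
    consent_scope="marketing texts",
    capture_method="web form",
    disclosure_text="I agree to receive marketing texts...",
)
print(record.is_active())  # True until a revocation timestamp is recorded
```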

Conclusion

The volume and nature of TCPA litigation in 2025 underscore the need for proactive compliance. Companies should treat consumer communication compliance as a core operational issue. Regular policy reviews, up-to-date systems, and informed legal support are essential to mitigating risk in this evolving area of law.


During the 2024 legislative session, the Colorado General Assembly passed Senate Bill 24-205, known as the Colorado Artificial Intelligence Act (CAIA). The law takes effect on February 1, 2026, and requires developers and deployers of high-risk AI systems to protect Colorado residents (“consumers”) from risks of algorithmic discrimination. Notably, the Act also requires developers or deployers to disclose to consumers that they are interacting with an AI system. Colorado Gov. Jared Polis, however, expressed reservations in 2024 and expected legislators to refine key definitions and update the compliance structure before the February 2026 effective date.

As Colorado moves toward implementation, the Colorado AI Impact Task Force issued recommendations for updates in its February 1, 2025, Report. These updates — along with a description of the Act — are covered below.

Background

A “high-risk” AI system is defined to include any machine-based system that infers outputs from data inputs and has a material legal or similarly significant effect on the provision, denial, cost, or terms of a product or service. The statute identifies various sectors that involve consequential decisions, such as decisions related to healthcare, employment, financial or credit services, housing, insurance, or legal services. Additionally, CAIA has numerous carve-outs for technologies that perform narrow tasks or certain functions, such as cybersecurity, data storage, and chatbots.

Beyond these use cases, CAIA also imposes on developers of AI systems a duty to prevent algorithmic discrimination and to protect consumers from any known or foreseeable risks arising from use of the AI system. A developer is one that develops or intentionally and substantially modifies an AI system used in the state of Colorado. Among other things, a developer must make documentation available regarding the intended uses and potential harmful uses of the high-risk AI system.

Similarly, CAIA regulates a person doing business in Colorado that deploys a high-risk AI system for Colorado residents to use (the “deployer”). Deployers face stricter obligations and must inform consumers when AI is involved in a consequential decision. The Act requires deployers to implement a risk management policy and program to govern use of the AI system. Further, deployers must report any identified discrimination to the Attorney General’s Office within 90 days and must allow consumers to appeal AI-based decisions or request human review of the decision when possible.

Data Privacy and Consumer Rights

Consumers have the right to opt out of data processing related to AI-based decisions and may appeal those decisions. The opt-out may affect further automated decision-making about the Colorado resident and the processing of personal data for profiling of that consumer. The deployer must also disclose to the consumer when a high-risk AI system was used in a decision-making process that resulted in an adverse decision.

Exemptions

The CAIA contains various exemptions, including for entities operating under other regulatory regimes (e.g., insurers, banks, and HIPAA-covered entities) and for the use of certain approved technologies (e.g., technology cleared, approved, or certified by a federal agency, such as the FAA or FDA). There are some caveats, however. For example, HIPAA-covered entities are exempt to the extent they provide healthcare recommendations that are generated by an AI system, require the covered entity to take action to implement the recommendation, and are not considered “high risk.” Small businesses are exempt to the extent they employ fewer than 50 full-time employees and do not train the system with their own data. Thus, deployers should closely analyze the available exemptions to ensure their activities fall squarely within an exemption.

Updates

The recent Colorado AI Impact Task Force Report encourages additional changes to CAIA before enforcement begins in February 2026. Current concerns center on ambiguities, compliance burdens, and various stakeholder objections. The governor is concerned with whether the guardrails inhibit innovation and AI progress in the state.

The Colorado AI Impact Task Force notes that there is consensus to refine documentation and notification requirements. However, there is less consensus on how to adjust the definition of “consequential decisions.” Reworking the exemptions to the definition of covered systems is also a change desired by both industry and the public. 

Other potential changes to the CAIA depend on how interconnected sections are revised in relation to one another. For example, changes to the definition of “algorithmic discrimination” depend on issues related to developers’ and deployers’ obligations to prevent algorithmic discrimination and related enforcement. Similarly, intervals for impact assessments may be greatly affected by changes to the definition of “intentional and substantial modification” to high-risk AI systems. Those impact assessments are, in turn, interrelated with developers’ risk management programs, so proposed changes to either will likely implicate the other.

Lastly, there remains firm disagreement on amendments related to several definitions. “Substantial factor” is one debated definition that will require a creative approach to define the scope of AI technologies subject to the CAIA. Similarly, the “duty of care” for developers and deployers is hotly contested, including whether to remove the concept or replace it with more stringent obligations. Other debated topics subject to change include the small-business exemption, the opportunity to cure incidents of noncompliance, trade secret exemptions, the consumer right to appeal, and the scope of attorney general rulemaking.

Guidance

Given that most stakeholders recognize changes are needed, any business impacted by the CAIA should continue to watch developments in the legislative process for potential changes that could drastically alter the scope and requirements of the Act.

Takeaways

Businesses should assess whether they, or their vendors, use any AI system that could be considered high risk under the CAIA. Some recommendations include:

  • Assess AI usage and consider whether that use is within the definition of the CAIA, including whether any exemptions are available
  • Conduct an AI risk assessment consistent with the Colorado AI Act
  • Develop an AI compliance plan that is consistent with the CAIA consumer protections regarding notification and appeal processes
  • Continue to monitor the updates to the CAIA
  • Evaluate contracts with AI vendors to ensure that necessary documentation is provided by the developer or deployer

Colorado has taken the lead as one of the first states in the nation to enact sweeping AI laws. Other states will likely look to the progress of Colorado and enact similar legislation or make improvements where needed. Therefore, watching the CAIA and its implementation is of great importance in the burgeoning field of consumer-focused AI systems that impact consequential decisions in the consumer’s healthcare, financial well-being, education, housing, or employment.


Introduction

On May 7, 2025, the Utah Artificial Intelligence Policy Act (UAIP) amendments will go into effect. These amendments provide significant updates to Utah’s 2024 artificial intelligence (AI) laws. In particular, the amendments focus on regulation of AI in the realm of consumer protection (S.B. 226 and S.B. 332), mental health applications (H.B. 452), and unauthorized impersonations (aka “deep fakes”) (S.B. 271).

Background (S.B. 149)

In March 2024, Utah became one of the first states to enact comprehensive artificial intelligence legislation with the passage of the Utah Artificial Intelligence Policy Act (UAIP, S.B. 149). Commonly referred to as the “Utah AI Act,” these provisions create important obligations for businesses that use AI systems to interact with Utah consumers. The UAIP went into effect on May 1, 2024.

If your business provides or uses AI-powered software or services that Utah residents access, you need to understand these requirements — even if your company isn’t based in Utah. This post will help break down these key amendments and what they mean for your business operations.

GenAI Defined

The Utah Act defines generative AI (GenAI) as a system that is (a) trained on data, (b) interacts with a person in Utah, and (c) generates outputs similar to outputs created by a human (S.B. 149, 13-2-12(1)(a)).

Transparency and Disclosure Requirements

If a company provides services in a regulated occupation (that is, an occupation requiring a person to obtain a state certification or license to practice), the company must disclose that the person is interacting with GenAI in the delivery of regulated services if the interaction is defined as “high risk” by the statute. The disclosure regarding regulated services must be provided at the beginning of the interaction, orally for a verbal interaction or in writing for a written interaction. If the GenAI supplier wants the benefit of the Safe Harbor, use of the AI system must be disclosed at the beginning of any interaction and throughout the exchange of information (S.B. 226).

If a company uses GenAI to interact with a person in Utah in “non-regulated” occupations, the company must disclose that the person is interacting with a GenAI system and not a human when asked by the Utah consumer. 

S.B. 226 further added mandatory requirements for high-risk interactions related to health, financial, and biometric data, or providing personalized advice in areas like finance, law, or healthcare. Additionally, S.B. 226 granted authority to the Division of Consumer Protection to make rules to specify the form and methods of the required disclosures.
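
As a rough illustration of how these disclosure rules might translate into product behavior, consider the following sketch of a hypothetical chat wrapper. The disclosure wording, trigger phrases, and regulated/non-regulated flag are all assumptions that counsel would need to refine; this is not statutory language.

```python
# Hypothetical sketch of Utah-style GenAI disclosure behavior.
# The wording, trigger phrases, and "regulated" flag are illustrative
# assumptions, not statutory language.
DISCLOSURE = "You are interacting with generative AI, not a human."

# Simple phrases suggesting the consumer is asking whether they are
# talking to an AI; a real system would need far more robust detection.
AI_QUESTION_TRIGGERS = ("are you a bot", "are you human", "are you ai")


def generate_answer(user_message: str) -> str:
    # Placeholder for the actual GenAI call.
    return f"(model response to: {user_message!r})"


def reply(user_message: str, regulated_service: bool, first_turn: bool) -> str:
    parts = []
    # Regulated occupations (and safe-harbor seekers) disclose up front;
    # non-regulated services disclose when asked.
    if regulated_service and first_turn:
        parts.append(DISCLOSURE)
    elif any(t in user_message.lower() for t in AI_QUESTION_TRIGGERS):
        parts.append(DISCLOSURE)
    parts.append(generate_answer(user_message))
    return "\n".join(parts)


print(reply("Can you help with my account?", regulated_service=True, first_turn=True))
print(reply("Are you a bot?", regulated_service=False, first_turn=False))
```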

Enforcement and Penalties

The Utah Division of Consumer Protection may impose an administrative fine of up to $2,500 for each violation of the UAIP. The courts or the Utah attorney general may also impose a civil penalty of up to $5,000 for each violation of a court or administrative order. As made clear by S.B. 226, violations of the Act are also subject to injunctive relief and disgorgement of profits, and violators may be required to pay the Division’s attorney fees and costs.

Key Takeaways

The 2024 Act requires companies to clearly and conspicuously disclose that a person is interacting with GenAI if asked by the person interacting with the AI system. The restrictions are tighter when GenAI is used in a regulated occupation involving sensitive personal information or significant personal decisions in the high-risk categories added in 2025 under S.B. 226. In those instances, the company must disclose the use of GenAI. If the supplier wants the benefit of the 2025 Safe Harbor under S.B. 226, the AI system must disclose its use at the beginning of an interaction and throughout the interaction.

Conclusion

Utah, along with several other states, has taken the lead in enacting AI-related laws. It is likely that states will continue to regulate AI technology ahead of the federal government.

Stay tuned for subsequent blog posts that will provide updates on mental health applications (H.B. 452) and unauthorized impersonations (aka “deep fakes”) (S.B. 271).


Since Andrew Ferguson assumed the role of FTC chair in January 2025, following his year-long tenure as a commissioner, businesses have been watching closely for signals of how the agency might redirect its focus on privacy enforcement. Ferguson’s public statements, concurrences, and dissents provide valuable insight into his regulatory philosophy and what companies can expect under his leadership. This is the first in our series on what to expect from the FTC on privacy enforcement during Ferguson’s tenure as chair.

Philosophical Approach: “Staying in Our Lane”

The cornerstone of Ferguson’s regulatory philosophy is a commitment to enforcing existing laws without extending the FTC’s reach beyond what he views as its congressional mandate. In his September 2024 remarks at the International Consumer Protection and Enforcement Network, Ferguson emphasized:

“We must be mindful not to stretch the scope of consumer-protection laws beyond their rightful purpose. We must stay in our lane.”

He cautioned against treating consumer protection law as a “panacea for social ills,” arguing that doing so undermines the rule of law, creates legal uncertainty, and can have a chilling effect on innovation.

His statements reflect a concern that regulatory overreach not only exceeds statutory authority but can actively harm the innovation economy. In his June 2024 Taiwan remarks, Ferguson noted that “competition law will not get us the privacy standards we seek, nor is it intended to,” revealing his preference for domain-specific approaches to different regulatory problems.

The FTC as “Cop on the Beat” Rather Than Rulemaker

Ferguson has expressed skepticism about extensive rulemaking. In his December 2024 dissent to the FTC’s Regulatory Plan and Agenda, he stated bluntly:

“The Commission under President Trump will focus primarily on our traditional role as a cop on the beat. We will vigorously and faithfully enforce the laws that Congress has passed, rather than writing them.”

This suggests businesses may expect:

  • Fewer new privacy rules and more case-by-case enforcement actions;
  • Stricter textual interpretation of existing statutes such as COPPA, GLBA, and Section 5 of the FTC Act; and
  • Less reliance on policy statements and sub-regulatory guidance.

Ferguson’s preference for enforcement over rulemaking represents both a philosophical position on separation of powers and a practical assessment of the commission’s strengths. In his dissent on the Non-Compete Clause Rule, he argued that “the difficulty of legislating in Congress is a feature of the Constitution’s design, not a fault,” suggesting he views the constraints of the legislative process as purposeful rather than obstacles to be circumvented.

Section 5 Enforcement: Clear Standards for “Unfairness”

Ferguson favors a more restrained interpretation of the FTC’s unfairness authority under Section 5. While he acknowledges the three-part test established by Congress (substantial injury, not reasonably avoidable, and not outweighed by countervailing benefits), he tends to apply this test more narrowly than his predecessors.

In multiple statements, Ferguson has emphasized that:

  • Clear harm is required – The “substantial injury” prong requires demonstrable harm, not speculative or theoretical injuries.
  • Consent as central – Ferguson views proper notice and consent as often sufficient to render injuries “reasonably avoidable” by consumers.
  • Business practices vs. outcomes – He distinguishes between business practices that directly cause harm and those that may enable harmful outcomes but aren’t inherently harmful themselves.

For example, in his Mobilewalla concurrence, Ferguson supported unfairness claims related to the unconsented collection of sensitive data but rejected unfairness theories based on how lawfully collected data was subsequently categorized or analyzed.

Under Ferguson’s leadership, the FTC likely will invoke its unfairness authority in cases involving clear, substantial injury that consumers could not reasonably avoid, rather than expanding the doctrine to address emerging technologies or novel business practices.

What It Means for Businesses

Based on Ferguson’s statements and positions, businesses should:

  • Focus on meaningful consent – Ensure that privacy notices are clear and that you have documented consent for collecting and using sensitive data.
  • Document data practices – Maintain clear records of data collection, use, and sharing practices.
  • Monitor case-by-case enforcement – Watch for enforcement actions rather than new rulemakings to understand the agency’s evolving priorities.
  • Engage with legislative processes – With Ferguson’s FTC less likely to set policy through rulemaking, businesses should increase their engagement with congressional privacy initiatives, as statutory changes may be the primary vehicle for major privacy policy developments.

While Ferguson’s approach represents a shift from the previous administration, it does not signal an abandonment of privacy enforcement. Instead, we can expect the FTC to take a more traditional approach focused on clear statutory mandates and established legal theories, with vigorous enforcement of existing privacy laws while being more cautious about expanding the FTC’s authority through creative interpretation.


In this final blog post in the Bradley series on the HIPAA Security Rule notice of proposed rulemaking (NPRM), we examine how the U.S. Department of Health and Human Services (HHS) Office for Civil Rights interprets the application of the HIPAA Security Rule to artificial intelligence (AI) and other emerging technologies. While the HIPAA Security Rule has traditionally been technology agnostic, HHS now explicitly addresses security measures for these evolving technologies. The NPRM provides guidance for incorporating AI considerations into compliance strategies and risk assessments.

AI Risk Assessments

In the NPRM, HHS would require a comprehensive, up-to-date inventory of all technology assets that identifies AI technologies interacting with ePHI. HHS clarifies that the Security Rule governs ePHI used in both AI training data and the algorithms developed or used by regulated entities. As such, HHS emphasizes that regulated entities must incorporate AI into their risk analysis and management processes and regularly update their analyses to address changes in technology or operations. Entities must assess how the AI system interacts with ePHI, considering the type and amount of data accessed, how the AI uses or discloses ePHI, and who receives AI-generated outputs.

HHS expects entities to identify, track, and assess reasonably anticipated risks associated with AI models, including risks related to data access, processing, and output. Flowing from the proposed data mapping safeguards discussed in previous blog posts, regulated entities would document where and how the AI software interacts with or processes ePHI to support risk assessments. HHS would also require regulated entities to monitor authoritative sources for known vulnerabilities to the AI system and promptly remediate them according to their patch management program. This lifecycle approach to risk analysis aims to ensure the confidentiality, integrity, and availability of ePHI as technology evolves.
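
To make the inventory concept concrete, the sketch below shows the kind of record an entity might keep for each AI asset that touches ePHI. The fields are our illustrative reading of the NPRM's themes, not prescribed elements.

```python
# Illustrative AI asset inventory record for ePHI risk analysis.
# Field names reflect the NPRM's themes (data accessed, uses and
# disclosures, output recipients, review cadence); they are not
# prescribed regulatory elements.
from dataclasses import dataclass, field
from datetime import date


@dataclass
class AIAssetRecord:
    name: str
    vendor: str
    ephi_categories: list[str]        # types of ePHI the system touches
    uses_ephi_for_training: bool
    output_recipients: list[str]      # who receives AI-generated outputs
    known_vulnerabilities: list[str] = field(default_factory=list)
    last_risk_review: date | None = None

    def review_overdue(self, today: date, max_days: int = 365) -> bool:
        """Flag assets whose risk analysis has not been refreshed."""
        if self.last_risk_review is None:
            return True
        return (today - self.last_risk_review).days > max_days


asset = AIAssetRecord(
    name="discharge-summary-drafter",       # hypothetical system
    vendor="ExampleAI (hypothetical)",
    ephi_categories=["clinical notes", "medication lists"],
    uses_ephi_for_training=False,
    output_recipients=["attending clinicians"],
    last_risk_review=date(2024, 6, 1),
)
print(asset.review_overdue(date(2025, 7, 1)))  # True: over 12 months old
```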

Integration of AI Developers into the Security Risk Analysis

More mature entities typically have built out third-party vendor risk management diligence. If finalized, the NPRM would require all regulated entities contracting with AI developers to formally incorporate Business Associate Agreement (BAA) risk assessments into their security risk analyses. Entities also would need to evaluate business associates based on written security verification that the AI vendor has documented security controls. Regulated entities should collaborate with their AI vendors to review technology assets, including AI software that interacts with ePHI. This partnership will allow entities to identify and track reasonably anticipated threats and vulnerabilities, evaluate their likelihood and potential impact, and document security measures and risk management.

Getting Started with Current Requirements

Clinicians are increasingly integrating AI into clinical workflows to analyze health records, identify risk factors, assist in disease detection, and draft real-time patient summaries for review as the “human in the loop.” According to the most recent HIMSS cybersecurity survey, most healthcare organizations permit the use of generative AI with varied approaches to AI governance and risk management. Nearly half the organizations surveyed did not have an approval process for AI, and only 31% report that they are actively monitoring AI systems. As a result, the majority of respondents are concerned about data breaches and bias in AI systems.

The NPRM enhances specificity in the risk analysis process by incorporating informal HHS guidance, security assessment tools, and frameworks for more detailed specifications. Entities need to update their procurement process to confirm that their AI vendors align with the Security Rule and industry best practices, such as the NIST AI Risk Management Framework, for managing AI-related risks, including privacy, security, unfair bias, and ethical use of ePHI.

The proposed HHS requirements are not the only concerns clinicians must consider when evaluating AI vendors. HHS also has finalized a rule under Section 1557 of the Affordable Care Act requiring covered healthcare providers to identify and mitigate discrimination risks from patient care decision support tools. Regulated entities must mitigate AI-related security risks and strengthen vendor oversight in contracts involving AI software that processes ePHI to meet these new demands.

Thank you for tuning into this series analyzing the Security Rule updates. Please contact us if you have any questions or if we can assist with any next steps.

Please visit the HIPAA Security Rule NPRM and the HHS Fact Sheet for additional resources.


In this week’s installment of our blog series on the U.S. Department of Health and Human Services’ (HHS) HIPAA Security Rule updates in its January 6 Notice of Proposed Rulemaking (NPRM), we are exploring the justifications for the proposed updates to the Security Rule. Last week’s post on the updates related to Vulnerability Management, Incident Response & Contingency Plans can be found here.

Background

Throughout this series, we have discussed updates to various aspects of the Security Rule and explored how HHS seeks to implement new security requirements and implementation specifications for regulated entities. This week, we discuss the justifications behind HHS’s move and the challenges entities face in complying with the existing rule.

Justifications

HHS offered multiple reasons for this Security Rule update, a few of which are discussed below:

  • Importance of Strong Security Posture of Regulated Entities – The preamble to the NPRM posits that the increase in use of certified electronic health records (80% of physicians’ offices and 96% of hospitals as of 2021) fundamentally shifted the landscape of healthcare delivery. As a result, the security posture of regulated entities must be updated to accommodate such advancement. As treatment is increasingly provided electronically, the additional volume of sensitive patient information to protect continues to grow.
  • Increased Cybersecurity Incident Risks – HHS cites the heightened risk to patient safety during cybersecurity incidents and ransomware attacks as a key reason for these updates. The current state of the healthcare delivery system is propelled by deep digital connectivity, as prompted by the HITECH Act and the 21st Century Cures Act. If this system is connected but insecure, the connectivity could compromise patient safety, subjecting patients to unnecessary risk and forcing them to bear unaffordable personal costs. During a cybersecurity incident, patients’ health, and potentially their lives, may be at risk where the incident creates impediments to the provision of healthcare. Serious consequences can result from interference with the operations of a critical medical device or obstructions to the administrative or clinical operations of a regulated entity, such as preventing the scheduling of appointments or viewing of an individual’s health history.
  • The Healthcare Industry Could Benefit from Centralized Security Standards Due to Inconsistent Implementation of Current Voluntary Standards – Despite the proliferation of voluntary cybersecurity standards, industry guidelines, and best practices, HHS found that many regulated entities have been slow to strengthen their security measures to protect ePHI and their information systems. HHS also noted that recent case law, including University of Texas M.D. Anderson Cancer Center v. HHS, has not accurately set forth the steps regulated entities must take to adequately protect the confidentiality, integrity, and availability of ePHI, as required by the statute. In that case, the Fifth Circuit vacated HIPAA penalties against MD Anderson, ruling that HHS acted arbitrarily and capriciously under the Administrative Procedure Act. The court found that MD Anderson met its obligations by implementing an encryption mechanism for ePHI. HHS disagreed that the encryption mechanism was sufficient and asserted its authority under HIPAA to mandate strengthened security standards for ePHI. This ruling, together with regulated entities’ limited adoption of voluntary cybersecurity standards, has led to inconsistent implementation of the Security Rule; providing clearer, mandatory standards was a noted justification for these revisions.

Takeaways

In 2021, Congress amended the HITECH Act, requiring HHS to assess whether an entity followed recognized cybersecurity practices in line with HHS guidance over the prior 12 months to qualify for HIPAA penalty reductions. In response, HHS could have acknowledged recognized frameworks that offer robust safeguards to clarify expectations, enhance the overall security posture of covered entities, and reduce compliance gaps. While HHS refers to NIST frameworks in discussions on security, it has not formally recognized any specific frameworks to qualify for this so-called “safe harbor” incentive. Instead, HHS uses this NPRM to embark on a more prescriptive approach to the substantive rule based on its evaluation of various frameworks.

HHS maintains that these Security Rule updates still allow for flexibility and scalability in its implementation. However, the revisions would limit the flexibility and raise the standards for protection beyond what was deemed acceptable in the past Security Rule iterations. Given that the Security Rule’s standard of “reasonable and appropriate” safeguards must account for cost, size, complexity, and capabilities, the more prescriptive proposals in the NPRM and lack of addressable requirements present a heavy burden — especially on smaller providers.

Whether these Security Rule revisions are finalized in their current form, in a revised form, or at all remains an open question for the healthcare industry. Notably, the NPRM was published while Xavier Becerra led HHS and prior to the confirmation of Robert F. Kennedy, Jr., as the new secretary of HHS. The current administration has not commented on its plans for the NPRM, but we will continue to watch as the March 7, 2025, deadline for public comment approaches.

Stay tuned to this series as our next and final blog post on the NPRM will consider how HHS views the application of artificial intelligence and other emerging technologies under the HIPAA Security Rule.

Please visit the HIPAA Security Rule NPRM and the HHS Fact Sheet for additional resources.


In this week’s installment of our blog series on the U.S. Department of Health and Human Services’ (HHS) HIPAA Security Rule updates in its January 6 Notice of Proposed Rulemaking (NPRM), we discuss HHS’s proposed rules for vulnerability management, incident response, and contingency plans (45 C.F.R. §§ 164.308, 164.312). Last week’s post on the updated administrative safeguards is available here.

Existing Requirements

HIPAA currently requires regulated entities to implement policies and procedures to (1) plan for contingencies and (2) respond to security incidents. A contingency plan governs responses to emergencies and other major occurrences, such as system failures and natural disasters. The plan must include a data backup plan, a disaster recovery plan, and an emergency mode operation plan to account for the continuation of critical business processes. A security incident plan must be implemented to ensure the regulated entity can identify and respond to known or suspected incidents, as well as mitigate and resolve them.

Regulated entities — especially those that have unfortunately experienced a security incident — are familiar with the above requirements and their implementation specifications, some of which are “required” and others only “addressable.” As discussed throughout this series, HHS is proposing to remove the “addressability” distinction, making all implementation specifications that support the security standards mandatory.

What Are the New Requirements?

The NPRM substantially modifies how a regulated entity should implement a contingency plan and respond to security incidents. HHS proposes a new “vulnerability management” standard that would require regulated entities to establish technical controls to identify and address certain vulnerabilities in their respective relevant electronic information systems. We summarize these new standards and protocols below:

Contingency Plan – The NPRM would add implementation specifications for contingency plans. HHS is proposing a new “criticality analysis” implementation specification, requiring regulated entities to analyze their relevant electronic information systems and technology assets to determine restoration priority. The NPRM also adds new or clarifying language to the existing implementation standards, such as requiring entities to (1) ensure that procedures are in place to create and maintain “exact” backup copies of electronic protected health information (ePHI) during an applicable event; (2) restore critical relevant electronic information systems and data within 72 hours of an event; and (3) require business associates to notify covered entities within 24 hours of activating their contingency plans.

Incident Response Procedures – The NPRM would require written security incident response plans and procedures documenting how workforce members are to report suspected or known security incidents, as well as how the regulated entity should identify, mitigate, remediate, and eradicate any suspected or known security incidents.

Vulnerability Management – HHS explained in the NPRM that its proposal to add a new “vulnerability management” standard addresses the potential for bad actors to exploit publicly known vulnerabilities. With that in mind, this standard would require a regulated entity to deploy technical controls to identify and address technical vulnerabilities in its relevant electronic information systems, including (1) conducting automated vulnerability scanning at least every six months; (2) monitoring “authoritative sources” (e.g., CISA’s Known Exploited Vulnerabilities Catalog) for known vulnerabilities on an ongoing basis and remediating where applicable; (3) conducting penetration testing every 12 months; and (4) ensuring timely installation of reasonable software patches and critical updates.
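
For the “authoritative sources” element, the sketch below illustrates an automated check of a software inventory against CISA’s Known Exploited Vulnerabilities catalog. The feed URL reflects our understanding of where CISA publishes the JSON version of the catalog, and the inventory format is an illustrative assumption.

```python
# Sketch of monitoring an authoritative source (CISA's KEV catalog) for
# known vulnerabilities affecting inventoried software. The feed URL is
# an assumption about where CISA publishes the JSON catalog, and the
# inventory format is illustrative.
import json
import urllib.request

KEV_FEED = "https://www.cisa.gov/sites/default/files/feeds/known_exploited_vulnerabilities.json"

# Hypothetical inventory of (vendor, product) pairs in scope.
INVENTORY = {("microsoft", "exchange server"), ("fortinet", "fortios")}


def exploited_vulns_in_inventory() -> list[dict]:
    """Return KEV entries matching products in the local inventory."""
    with urllib.request.urlopen(KEV_FEED) as resp:
        catalog = json.load(resp)
    hits = []
    for vuln in catalog.get("vulnerabilities", []):
        key = (vuln.get("vendorProject", "").lower(), vuln.get("product", "").lower())
        if key in INVENTORY:
            hits.append(vuln)
    return hits


if __name__ == "__main__":
    for vuln in exploited_vulns_in_inventory():
        # Feed remediation into the entity's patch management workflow.
        print(vuln["cveID"], vuln.get("dueDate"))
```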

Stay Tuned

Next week, we will continue Bradley’s weekly NPRM series by analyzing justifications for HHS’s proposed Security Rule updates, how the proposals may change, and areas where HHS offers its perspective on new technologies. The NPRM public comment period ends on March 7, 2025.

Please visit the HIPAA Security Rule NPRM and the HHS Fact Sheet for additional resources.


In this week’s installment of our blog series on the U.S. Department of Health and Human Services’ (HHS) HIPAA Security Rule updates in its January 6 Notice of Proposed Rulemaking (NPRM), we are exploring the proposed updates to the HIPAA Security Rule’s administrative safeguards requirement (45 C.F.R. § 164.308). Last week’s post on the updated technical safeguards is available here.

Background

Currently, HIPAA regulated entities must generally implement nine standards for administrative safeguards protecting electronic protected health information (ePHI):

  1. Security Management Process
  2. Assigned Security Responsibility
  3. Workforce Security
  4. Information Access Management
  5. Security Awareness and Training
  6. Security Incident Procedures
  7. Contingency Plan
  8. Evaluation
  9. Business Associate Contracts and Other Arrangements

Entities are already familiar with these requirements and their implementation specifications. The existing requirements either do not identify the specific control methods or technologies to implement or are otherwise “addressable” as opposed to “required” in some circumstances for regulated entities. As noted throughout this series, HHS has proposed removing the distinction between “required” and “addressable” implementation specifications, providing for specific guidelines for implementation with limited exceptions for certain safeguards, as well as introducing new safeguards.

New Administrative Safeguard Requirements

The NPRM proposes updates to the following administrative safeguards: risk analyses, workforce security, and information access management. HHS also introduced a new administrative safeguard, technology inventory management and mapping. These updated or new administrative requirements are summarized here:

  • Asset Inventory Management – The HIPAA Security Rule does not explicitly mandate a formal asset inventory, but HHS informal guidance and audits suggest that inventorying assets that create, receive, maintain, or transmit ePHI is a critical step in evaluating security risks. The NPRM proposes a new administrative safeguard provision requiring regulated entities to conduct and maintain written inventories of any technological assets (e.g., hardware, software, electronic media, data, etc.) capable of creating, receiving, maintaining, or transmitting ePHI, and to create a network map illustrating the movement of ePHI throughout the organization. HHS would require these inventories and maps to be reviewed and updated at least once every 12 months and when certain events prompt changes in how regulated entities protect ePHI, such as new, or updates to, technological assets; new threats to ePHI; transactions that impact all or part of regulated entities; security incidents; or changes in laws.
  • Risk Analysis – While conducting a risk analysis has always been a required administrative safeguard, the NPRM proposes more detailed content specifications for the written risk assessment, including reviewing the technology asset inventory; identifying reasonably anticipated threats and vulnerabilities to ePHI systems; documenting security measures, policies, and procedures; and making documented “reasonable determinations” of the likelihood and potential impact of each threat and vulnerability identified.
  • Workforce Security and Information Access Management – The NPRM proposes that, with respect to their ePHI or relevant electronic information systems, regulated entities would need to establish and implement written procedures that (1) determine whether access is appropriate based on a workforce member’s role; (2) authorize access consistent with the Minimum Necessary Rule; and (3) grant and revise access consistent with role-based access policies (a sketch of such a role-based check follows this list). Under the NPRM, these administrative safeguard specifications would no longer be “addressable,” as previously classified, meaning these policies and procedures would now be mandatory for regulated entities. In addition, the NPRM develops specific standards for the content and timing of workforce training on Security Rule compliance beyond the previous general requirements.
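
A minimal sketch of the role-based access check referenced above might look like the following; the roles and ePHI categories are illustrative assumptions.

```python
# Minimal sketch of role-based access checks consistent with written
# workforce-security procedures. Roles and permissions are illustrative
# assumptions, not regulatory categories.
ROLE_PERMISSIONS = {
    "billing_clerk": {"claims", "demographics"},
    "nurse": {"clinical_notes", "medication_lists", "demographics"},
    "it_admin": set(),  # system access without routine ePHI access
}


def may_access(role: str, ephi_category: str) -> bool:
    """Allow access only where the role's documented duties require it."""
    return ephi_category in ROLE_PERMISSIONS.get(role, set())


assert may_access("nurse", "clinical_notes")
assert not may_access("billing_clerk", "clinical_notes")
```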

Next Time

Up next in our weekly NPRM series, we will dive into the HIPAA Security Rule’s updates to the Vulnerability Management, Incident Response, and Contingency Plans standards.

Please visit the HIPAA Security Rule NPRM and the HHS Fact Sheet for additional resources.


In this week’s installment of our blog series on the U.S. Department of Health and Human Services’ (HHS) HIPAA Security Rule updates in its January 6 Notice of Proposed Rulemaking (NPRM), we are tackling the proposed updates to the HIPAA Security Rule’s technical safeguard requirements (45 C.F.R. § 164.312). Last week’s post on group health plan and sponsor practices is available here.

Existing Requirements

Under the existing regulations, HIPAA-covered entities and business associates must generally implement the following five standard technical safeguards for electronic protected health information (ePHI):

  1. Access Controls – Implementing technical policies and procedures for its electronic information systems that maintain ePHI to allow only authorized persons to access ePHI.
  2. Audit Controls – Implementing hardware, software, and/or procedural mechanisms to record and examine activity in information systems that contain or use ePHI.
  3. Integrity – Implementing policies and procedures to ensure that ePHI is not improperly altered or destroyed.
  4. Authentication – Implementing procedures to verify that a person seeking access to ePHI is who they say they are.
  5. Transmission Security – Implementing technical security measures to guard against unauthorized access to ePHI that is being transmitted over an electronic network.

The existing requirements either do not identify the specific control methods or technologies to implement or are otherwise “addressable” as opposed to “required” in some circumstances for regulated entities — until now.

What Are the New Technical Safeguard Requirements?

The NPRM substantially modifies and specifies the particular technical safeguards needed for compliance. In particular, the NPRM restructures and recategorizes existing requirements and adds stringent standards and implementation specifications. HHS also has proposed removing the distinction between “required” and “addressable” implementation specifications, making all implementation specifications required, with specific, limited exceptions.

A handful of the new or updated standards are summarized below:

  • Access Controls – New implementation specifications would require technical controls to ensure access is limited to individuals and technology assets that need it. Two of the controls that would be required are network segmentation and account suspension/disabling capabilities after multiple failed log-in attempts (a sketch of the latter control follows this list).
  • Encryption and Decryption – Formerly an addressable implementation specification, the NPRM would make encryption of ePHI at rest and in transit mandatory, with a handful of limited exceptions, such as when the individual requests to receive their ePHI in an unencrypted manner.
  • Configuration Management – This new standard would require a regulated entity to establish and deploy technical controls for securing relevant electronic information systems and the technology assets in its relevant electronic information systems, including workstations, in a consistent manner. A regulated entity also would be required to establish and maintain a minimum level of security for its information systems and technology assets.
  • Audit Trail and System Log Controls – Identified as “crucial” in the NPRM, this reorganized standard, formerly identified as the “audit controls” standard, would require covered entities to monitor in real time all activity in their electronic information systems for indications of unauthorized access and activity. This standard also would require the entity to perform and document an audit at least once every 12 months.
  • Authentication – This standard enhances the implementation specifications needed to verify that persons or entities seeking access to ePHI are who they claim to be. Of note, the NPRM would require all regulated entities to deploy multi-factor authentication (MFA) on all technology assets, subject to limited exceptions with compensating controls, such as during an emergency when MFA is infeasible. Another exception applies where the regulated entity’s existing technology does not support MFA; in that case, the entity would need to implement a transition plan to move the ePHI to another technology asset that supports MFA within a reasonable time. Medical devices authorized for marketing by the FDA before March 2023 would be exempt from MFA if the entity has deployed all recommended updates; devices authorized after that date would be exempt if the manufacturer still supports the device or the entity has deployed any manufacturer-recommended updates or patches.
  • Other Notable Standards – In addition to the above, the NPRM would add standards for integrity, transmission security, vulnerability management, data backup and recovery, and information systems backup and recovery. These new standards would prescribe new or updated implementation specifications, such as conducting vulnerability scanning for technical vulnerabilities, including annual penetration testing and implementing a patch management program.
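
As one concrete example of the new access controls, the sketch below suspends an account after repeated failed log-ins. The failure threshold and lockout window are illustrative assumptions, not figures from the NPRM.

```python
# Sketch of suspending accounts after repeated log-in failures, one of
# the technical controls the NPRM would require. The threshold and
# lockout duration are illustrative assumptions.
from datetime import datetime, timedelta, timezone

MAX_FAILURES = 5
LOCKOUT = timedelta(minutes=30)

_failures: dict[str, int] = {}
_locked_until: dict[str, datetime] = {}


def record_login_attempt(user: str, success: bool) -> bool:
    """Return True if the account is currently allowed to log in."""
    now = datetime.now(timezone.utc)
    if _locked_until.get(user, now) > now:
        return False  # still locked out
    if success:
        _failures.pop(user, None)
        return True
    _failures[user] = _failures.get(user, 0) + 1
    if _failures[user] >= MAX_FAILURES:
        _locked_until[user] = now + LOCKOUT  # suspend the account
        _failures.pop(user, None)
        return False
    return True
```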

Next Time

Up next in our weekly NPRM series, we will dive into the HIPAA Security Rule’s updates to the Administrative Safeguards requirements.

Please visit the HIPAA Security Rule NPRM and the HHS Fact Sheet for additional resources.