If you run an e-commerce brand or handle marketing compliance, you’ve probably heard about Texas Senate Bill 140 (SB 140) and its potential impact on text message marketing. Earlier this month, a group of plaintiffs, including an industry association and two e-commerce companies, asked a federal court in Austin to block the state from enforcing parts of the law. The state recently filed its brief opposing that request. This post examines the state’s position and its practical implications for text message marketing programs.

Texas’ SB 140

Texas Senate Bill 140 (SB 140), signed on June 20, 2025, by Gov. Greg Abbott and effective September 1, 2025, expands the Texas Business & Commerce Code’s definition of a “telephone solicitation” to include text messages and other sales solicitation methods. Significantly, SB 140 requires businesses that conduct telemarketing through text messages to register with the Texas secretary of state, pay a $200 fee, post a $10,000 security bond, and submit quarterly reports before making any telemarketing solicitations to prospective customers.

However, a key carve-out may be significant for many brands: the “customer” exemption. Under Section 302.058, Chapter 302’s requirements don’t apply when a business solicits a current or former customer and has operated under the same name for at least two years. The statute defines “purchaser” but not “customer,” leaving the ordinary meaning of “customer” to control.

Industry analysis noted that “customer” in everyday usage often includes people who patronize or otherwise deal with a business — not only those who have already completed a purchase. That reading supports treating opt-in subscribers (people who visited your store and consented to receive your promotional texts) as “customers,” even if they haven’t bought a product yet. If that interpretation holds, many established brands that text only to opted-in subscribers could fall within the exemption.

The plaintiffs’ motion to block enforcement of SB 140 argued that the law violates federal law: specifically, that it is unconstitutional and places unreasonable burdens on businesses that send text messages to individuals who have consented to receive them.

The State’s Filing

According to the state’s filing opposing the plaintiffs’ motion for a preliminary injunction, SB 140 is about stopping unwanted, deceptive solicitations — especially spam texts sent without permission.

Two definitions matter here:

  • “Telephone call” – SB 140 tells us to use the meaning in Chapter 304. That chapter excludes transmissions that a mobile customer has agreed to receive as part of an ad-based service. In short, permission matters.
  • “Telephone solicitation” – Chapter 302 now says it’s “a call or other transmission,” and it expressly mentions texts and images.

The state argued to the court that when Chapter 302 uses the word “call,” it still refers to a “telephone call” — the same concept Chapter 304 defines. In other words, the change from “telephone call” to “call” in the solicitation definition did not expand the law to capture everything; it still ties back to a definition that carves out communications a customer has consented to.

Why does that matter? Because the state also emphasizes that Chapter 302’s purpose is to protect people from false, misleading, or deceptive telephone solicitations — not to punish businesses that send consented messages consumers want. In fact, the state’s brief points out that the plaintiffs’ business model is consent-based texting — and says that is not a “deceptive practice.”

Put simply: If your program sends texts only to people who opted in, the state argues SB 140 is not aimed at you.

The state’s filing also offers interesting tidbits about who is (or, in this case, is not) enforcing the registration requirements.

  • Secretary of State (SoS) – The SoS administers registrations (accepts filings, keeps certificates, helps the public) but, per the state’s filing, does not investigate or enforce violations of SB 140 or Chapter 302. The office states it has not taken — and does not plan to take — enforcement actions.
  • Attorney General (AG) – The AG states it has discretionary authority to bring civil enforcement actions (for example, to seek an injunction) and to pursue civil penalties for violating an injunction. The AG also states it understands “call” in the statute to mean a “telephone call” as defined in Chapters 302 and 304.

“Discretionary” is significant here: The AG is telling the court there’s no mandatory duty to bring actions in every situation.

So, Do I Need to Register and Post a $10,000 Bond?

SB 140 folds texts into the “telephone solicitation” framework, and Chapter 302 generally requires registration and a $10,000 security (bond, letter of credit, or CD).

Taking the state’s filing at face value, three practical takeaways emerge:

  1. Consent is the bright line. The state is focused on stopping unwanted or deceptive text solicitations. Programs built on clear, documented opt-in consent do not appear to be the target the state describes to the court.
  2. “Call” tracks phone-based communications. The state reads “call” as a telephone call (which, under Chapter 304, includes certain text transmissions), with an express exclusion for ad-based transmissions the consumer agreed to receive. That reinforces the centrality of permission and how your messages are delivered.
  3. Registration may not apply to everyone. If your audience qualifies as current or former customers and you’ve been operating under the same name for two years, the Chapter 302 registration and bond requirements may not apply — particularly for consent-only programs.

Action Items for Brands and Platforms

Even with the state’s positions, SB 140 is still a live issue — and the court hasn’t ruled on the preliminary injunction yet. Here’s how to stay ready:

  • Audit your consent. Confirm your opt-in flows are clear and documented (who opted in, when, how, what disclosures they saw); a minimal sketch of such a consent record follows this list. This supports both the consent-based exclusion and any “customer” argument. (The state’s brief itself acknowledges the plaintiffs send texts only to consumers who want them.)
  • Check the two-year rule. If you’ve operated under the same name for two years, the “customer” exemption may help. If you’re newer or recently rebranded, consult with counsel about whether registration is advisable while the case proceeds.
  • Map your sending framework. Determine whether your delivery path fits within Chapter 304’s definition and exclusions (e.g., transmissions the user agreed to receive as part of a service), as highlighted by the state.
  • Monitor enforcement signals. The SoS says it isn’t enforcing. The AG says enforcement is discretionary and frames consent-based messaging as non-deceptive. Still, maintain robust compliance practices.
  • Private right of action. It remains an open question whether courts will recognize a consumer’s ability to sue a business directly under SB 140 for failure to register; no court has squarely decided the issue. However, the state’s brief narrows the statute’s focus to deceptive, non-consented solicitations and emphasizes discretionary public enforcement, a framing that makes it harder for plaintiffs to argue for a broad private cause of action against consent-based text programs.
  • Stay tuned for the ruling. The court will decide whether to issue preliminary relief; until then, keep compliance tight and documentation complete.
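To make the consent-audit recommendation concrete, here is a minimal sketch of what a documented opt-in record might capture. The field names and structure are illustrative assumptions on our part, not anything SB 140 or the state’s brief prescribes:

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

@dataclass
class ConsentRecord:
    """Illustrative record of an SMS opt-in: who opted in, when, how, and what they saw."""
    phone_number: str        # recipient in E.164 format, e.g. "+15125550123"
    opted_in_at: datetime    # timestamp of the opt-in event (store in UTC)
    opt_in_method: str       # e.g. "web_form", "keyword_text", "checkout_checkbox"
    disclosure_text: str     # the exact disclosure language shown at sign-up
    source: str              # page URL or campaign where consent was captured
    confirmed_at: Optional[datetime] = None  # double opt-in confirmation, if used
    revoked_at: Optional[datetime] = None    # set when the subscriber opts out

    def is_active(self) -> bool:
        """Consent is usable only if it has not been revoked."""
        return self.revoked_at is None

# Example: a subscriber who opted in through a website form
record = ConsentRecord(
    phone_number="+15125550123",
    opted_in_at=datetime(2025, 6, 1, 14, 30, tzinfo=timezone.utc),
    opt_in_method="web_form",
    disclosure_text="By signing up, you agree to receive recurring promotional texts...",
    source="https://example.com/signup",
)
print(record.is_active())  # True
```

Records in this shape document both the consent-based exclusion and, if the subscriber later makes a purchase, the timeline for any “customer” exemption argument.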

As always, facts matter. If you’re unsure whether your program qualifies for exemptions, or whether your consent path fits within the state’s framing, seek tailored advice. But for now, if you’re only texting people who asked to hear from you — and you can prove it — you’re much closer to “compliant” than a “cautionary tale.”

A new Mississippi law, known as the Walker Montgomery Protecting Children Online Act, has prompted several companies to block Mississippi IP addresses from accessing their platforms. In fact, social media company Bluesky posted a response to the enactment of the law on its website. Bluesky explained its decision to make its app unavailable to Mississippi residents, stating:

Mississippi’s HB1126 requires platforms to implement age verification for all users before they can access services like Bluesky. That means, under the law, we would need to verify every user’s age and obtain parental consent for anyone under 18. The potential penalties for non-compliance are substantial — up to $10,000 per user. Building the required verification systems, parental consent workflows, and compliance infrastructure would require significant resources that our small team is currently unable to spare as we invest in developing safety tools and features for our global community, particularly given the law’s broad scope and privacy implications.

Bluesky’s decision to block certain users came after the U.S. Supreme Court permitted the Mississippi act to take effect while First Amendment challenges to the law are pending. The Mississippi act requires any website or app that allows users to post content, create a profile, and interact with other users to verify the age of each user, no matter the type of content. The law provides hefty penalties up to $10,000 per user for failure to comply and permits parents and guardians to bring legal actions.

In its company statement explaining its decision, Bluesky highlighted the substantial financial burdens required to comply with broad age-verification laws. Bluesky specifically cited the “substantial infrastructure and developer time investments, complex privacy protections and ongoing compliance monitoring” required by such laws, noting that these costs can easily push smaller providers out of the market. The same is true for small companies that never intended to be social media platforms but nonetheless fall within the coverage of the statute.

The uncertainty about what verification efforts qualify as “commercially reasonable” further complicates compliance. The statute specifically requires the platform or application provider to verify the age of every user “with a level of certainty appropriate to the risks that arise from the information management practices” of the provider. Are companies required to store state-issued IDs or use fingerprint or facial recognition? Is use of AI permissible to determine age? What is clear is that whichever method is utilized, the costs and risks associated with storing users’ private information will increase.

While the Mississippi act continues to face legal challenges, the push to protect children online, including more stringent parental consent and age-gating requirements, is not going away. State legislatures around the country are enacting similar laws, and the U.S. Supreme Court has signaled that it will not stand in the way of enforcement while legal challenges are pending.

For more information and other updates regarding privacy law developments, subscribe to Bradley’s privacy blog, Online and On Point, or reach out to one of our authors.

July 1 marked the official enforcement date of the Tennessee Information Protection Act (TIPA), the state’s comprehensive consumer privacy law. Signed into law in 2023, TIPA grants consumers specific rights concerning their personal information and regulates covered businesses and service providers that collect, use, share, or otherwise process consumers’ personal information. With all TIPA provisions now enforceable, it is important for regulated companies to understand the law’s comprehensive requirements.

Covered Businesses and Organizations

TIPA regulates entities that conduct business in Tennessee or produce products or services targeted to Tennessee residents, exceed $25 million in revenue, and meet at least one of the following criteria (a rough applicability sketch follows the list):

  • Control or process information of 25,000 or more Tennessee consumers per year and derive more than 50% of gross revenue from the sale of personal information; or
  • Control or process information of at least 175,000 Tennessee consumers during a calendar year.
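As a rough sketch of those numeric thresholds (illustrative only; it ignores the “conduct business in Tennessee or target Tennessee residents” element and the statute’s many exemptions):

```python
def tipa_covered(revenue_usd: float,
                 tn_consumers: int,
                 pct_revenue_from_data_sales: float) -> bool:
    """Rough check of TIPA's numeric applicability thresholds (illustrative only)."""
    if revenue_usd <= 25_000_000:       # must exceed $25 million in revenue
        return False
    sale_prong = tn_consumers >= 25_000 and pct_revenue_from_data_sales > 50.0
    volume_prong = tn_consumers >= 175_000
    return sale_prong or volume_prong

# Example: $30M revenue, 100,000 Tennessee consumers, 60% of revenue from data sales
print(tipa_covered(30e6, 100_000, 60.0))  # True, via the sale-of-data prong
```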

Consumer Rights

TIPA grants consumers (Tennessee residents acting in a personal context only) the rights to confirm, access, correct, delete, or obtain a copy of their personal information, or opt out of specific uses of their data (such as selling data to third parties, using data for targeted advertising, or profiling consumers in certain instances). Companies must respond to authenticated consumer requests within 45 days, with a possible 45-day extension, and they must establish an appeal process for request denials. Controllers, which TIPA defines as companies that (alone or jointly) determine the purpose and means of processing personal information, must also offer a secure and reliable means for consumers to exercise their rights without requiring consumers to create a new account.

Company Responsibilities

Companies must limit data collection and processing to what is necessary, maintain appropriate data security practices, and avoid discrimination. Companies must provide a clear and accessible privacy notice detailing their practices, and, if selling personal information or using it for targeted advertising, disclose these practices and provide an opt-out option.

Opt-In for Sensitive Personal Information

TIPA prohibits processing sensitive personal information without first obtaining informed consent. Sensitive personal information is defined broadly and includes any personal information that reveals a consumer’s racial or ethnic origin, religious beliefs, mental or physical health diagnosis, sexual orientation, or citizenship or immigration status. Sensitive information also includes any data collected from a known child younger than age 13, precise geolocation data (i.e., within a 1,750-foot radius), and the processing of genetic or biometric data for the purposes of identifying an individual.

Controller-Processor Requirements

Processors must adhere to companies’ instructions and assist them in meeting their obligations, including responding to consumer rights requests and providing necessary information for data protection assessments. Contracts between companies and processors must outline data processing procedures, including confidentiality, data deletion or return, compliance demonstration, assessments, and subcontractor engagement. The determination of whether a person is acting as a company or processor depends on the context and specific processing of personal information.

Data Protection Assessments

Companies must conduct and document data protection assessments for specific data processing activities involving personal information. These assessments must weigh the benefits and risks of processing, with certain factors considered. Assessments apply to processing of personal information created or generated on or after July 1, 2024. In investigations by the Tennessee attorney general, assessments are treated as confidential and exempt from public disclosure, and producing them does not waive attorney-client privilege or work product protection.

Major Similarities to CCPA

TIPA shares many similarities with the California Consumer Privacy Act (CCPA), including:

  • Similar consumer rights;
  • Contractual requirements between controllers and processors; and
  • Requiring data protection assessments for certain processing activities.

Affirmative Defense

TIPA provides an “affirmative defense” to alleged violations of the law for companies that adhere to a written privacy program conforming to the NIST Privacy Framework or comparable standards. The privacy program’s scale and scope must be appropriate based on factors such as business size, activities, personal information sensitivity, available tools, and compliance with other laws. In addition, certifications from the Asia-Pacific Economic Cooperation’s Cross-Border Privacy Rules and Privacy Recognition for Processors systems may be considered in evaluating the program.

Enforcement

The Tennessee attorney general retains exclusive enforcement authority, and TIPA expressly states that there is no private right of action. The Tennessee attorney general must provide 60 days’ written notice and an opportunity to cure before initiating enforcement action. If the alleged violations are not cured, the Tennessee attorney general may file an action and seek declaratory and/or injunctive relief, civil penalties up to $7,500 for each violation, reasonable attorneys’ fees and investigative costs, and treble damages in the case of a willful or knowing violation.

Exemptions

The law includes numerous exemptions, including:

  • Government entities;
  • Financial institutions, their affiliates, and data subject to the Gramm-Leach-Bliley Act (GLBA);
  • Insurance companies;
  • Covered entities, business associates, and protected health information governed by the Health Insurance Portability and Accountability Act (HIPAA) and/or the Health Information Technology for Economic and Clinical Health Act (HITECH);
  • Nonprofit organizations;
  • Higher education institutions; and
  • Personal information that is subject to other laws, such as the Children’s Online Privacy Protection Act (COPPA), the Family Educational Rights and Privacy Act (FERPA), and the Fair Credit Reporting Act (FCRA).

TIPA is just one of seven state comprehensive privacy laws slated to go into effect this year. With three more going into effect next year, companies should review and determine whether laws such as TIPA apply to them and take steps to comply now that the law is in effect.

As FTC Chair Andrew Ferguson establishes his enforcement priorities, his positions on data categorization and surveillance pricing reveal a consistent philosophy that balances privacy protection with innovation. This is the third post in our series on what to expect from the FTC under Ferguson as chair.

Our previous posts examined Ferguson’s broad regulatory philosophy of “staying in our lane” and his priority enforcement areas in children’s privacy and location data. This post explores Ferguson’s approach to emerging privacy issues that don’t fit neatly into established legal frameworks.

Skepticism of “Sensitive Categories” Designation

Ferguson has expressed significant skepticism about the FTC designating certain categories of data as inherently “sensitive” without clear statutory basis. In his September 2024 statement on the Social Media and Video Streaming Services Report, Ferguson criticized this approach:

“I am skeptical that this is the kind of injury the law should try to address… I doubt it could. Any such line would tend toward arbitrariness and is not a stable system on which to decide whether advertisements are illegal.”

Ferguson’s critique reflects his broader concern that creating subjective lists of “sensitive” data categories raises several problems:

  1. Arbitrary line-drawing – Determining which categories qualify as “sensitive” is inherently subjective and potentially politicized.
  2. Lack of statutory basis – Section 5 does not provide clear guidance on which categories of data should receive special protection.
  3. Inconsistent application – When regulators decide which categories deserve protection, the resulting lists may reflect the decision-makers’ preferences rather than objective criteria.

Ferguson’s December 2024 concurrence in the Mobilewalla case provides the clearest view of his position on sensitive data categorization, where he wrote: “The FTC Act does not limit how someone who lawfully acquired those data might choose to analyze those data, or the conclusions that one might draw from them.” This reveals a fundamental distinction in his approach: While he believes the initial collection of sensitive data without consent may violate Section 5, he is skeptical that the FTC can regulate how lawfully obtained data is subsequently categorized or analyzed.

Ferguson’s analogy to private investigators is particularly telling: Just as investigators may legally observe someone entering a church and conclude they practice that religion, Ferguson believes that drawing conclusions from lawfully collected data is not, in itself, a Section 5 violation.

Surveillance Pricing: Fact-Finding Over Speculation

Ferguson has demonstrated a measured approach to emerging data practices like surveillance pricing — the use of consumer data to set personalized prices. In July 2024, he supported the FTC’s 6(b) study into these practices, explaining:

“One of the most important duties with which Congress has entrusted us is studying markets and industries and reporting to the public and Congress what we learn… These studies may inform future Commission enforcement actions, but they need not.”

His statement emphasized the importance of thorough fact-finding before developing policy positions, noting:

“Congress and the American people should be made aware of whether and how consumers’ private data may be used to affect their pocketbooks.”

However, in January 2025, Ferguson joined Commissioner Melissa Holyoak in dissenting from the release of preliminary “research summaries” on surveillance pricing. His dissent criticized the rushed release of early findings:

“Issuing these research summaries degrades the Commission’s Section 6(b) process. The Commission should not be releasing staff’s early impressions that ‘can be outdated with new information’ because the fact gathering process on the very issues being presented to the public is still underway.”

This suggests a commitment by Ferguson to thorough investigation of privacy issues before regulation, particularly with emerging practices that implicate consumer data.

Balancing Evidence and Action

Ferguson’s approach to both sensitive data categories and surveillance pricing illustrates his broader privacy philosophy:

  1. Demand robust evidence – Before taking regulatory action on privacy practices, Ferguson wants complete factual records that demonstrate actual harm.
  2. Favor established laws over novel theories – His skepticism of “sensitive categories” shows preference for established legal frameworks rather than expanding statutory interpretations.
  3. Emphasize procedural integrity – His objection to preliminary research summaries reveals concern with fair, thorough processes before reaching conclusions about data practices.

Ferguson appears to maintain a genuine openness to evidence that might show consumer benefits from practices such as data categorization or personalized pricing. His insistence on completing thorough market studies reflects not just procedural formalism but a substantive commitment to evidence-based regulation that considers both potential harms and benefits.

What This Means for Businesses

Based on Ferguson’s positions, here are some considerations for businesses:

For Data Categorization:

  • Focus on consent mechanisms for data collection rather than worrying about how lawfully collected data is analyzed.
  • Document legitimate business purposes for data analysis.
  • Keep watch for potential future legislation that might specifically designate certain data categories for special protection.
  • Distinguish clearly between initial data collection practices (which face greater scrutiny) and subsequent analysis of lawfully collected data (which faces less scrutiny).

For Surveillance Pricing and Similar Practices:

  • Expect continued scrutiny of personalized pricing practices, but through careful study rather than immediate regulation.
  • Maintain transparency about how customer data influences pricing.
  • Document how pricing algorithms use personal data.
  • Consider implementing clear opt-out mechanisms for data-based pricing.
  • Document instances where personalized pricing benefits consumers through lower prices or increased access, as Ferguson’s evidence-based approach may be receptive to such benefits.

Evolution Rather Than Revolution

Ferguson’s approach suggests the FTC under his leadership will maintain strong privacy enforcement but with a focus on clear statutory violations rather than expanding interpretations of unfairness. For data categorization and surveillance pricing, this means:

  1. Continued fact-finding – The commission will likely invest in thorough market studies before developing policy positions.
  2. Focus on deception over unfairness – Companies making false or misleading claims about data practices will face scrutiny, while novel “unfairness” theories will receive more skepticism.
  3. Emphasis on consent and transparency – Proper notice, consent, and transparency will remain central to the FTC’s privacy enforcement.

This approach represents evolution rather than revolution in the commission’s privacy work, with a measured path that balances consumer protection with business certainty and technological innovation.

While Andrew Ferguson advocates for a restrained regulatory approach at the FTC, his statements and voting record reveal clear priority areas where businesses can expect continued vigorous enforcement. Two areas stand out in particular: children’s privacy and location data. This is the second post in our series on what to expect from the FTC under Ferguson as chair.

Our previous post examined Ferguson’s broad regulatory philosophy centered on “Staying in Our Lane.” This post focuses specifically on the two areas where Ferguson has shown the strongest commitment to vigorous enforcement, explaining how these areas are exceptions to his generally cautious approach to extending FTC authority.

Prioritizing Children’s Privacy

Ferguson has demonstrated strong support for protecting children’s online privacy. In his January 2025 concurrence on COPPA Rule amendments, he supported the amendments as “the culmination of a bipartisan effort initiated when President Trump was last in office.” However, he also identified specific problems with the final rule, including:

  • Provisions that might inadvertently lock companies into existing third-party vendors, potentially harming competition;
  • A new requirement prohibiting indefinite data retention that could have unintended consequences, such as deleting childhood digital records that adults might value; and
  • Missed opportunities to clarify that the rule doesn’t obstruct the use of children’s personal information solely for age verification.

Ferguson’s enforcement record as commissioner reveals his belief that children’s privacy represents a “settled consensus” area where the commission should exercise its full enforcement authority. In the Cognosphere (Genshin Impact) settlement from January 2025, Ferguson made clear that COPPA violations alone were sufficient to justify his support for the case, writing that “these alleged violations of COPPA are severe enough to justify my voting to file the complaint and settlement even though I dissent from three of the remaining four counts.”

In his statement on the Social Media and Video Streaming Services Report from September 2024, Ferguson argued for empowering parents:

“Congress should empower parents to assert direct control over their children’s online activities and the personal data those activities generate… Parents should have the right to see what their children are sending and receiving on a service, as well as to prohibit their children from using it altogether.”

The FTC’s long history of COPPA enforcement across multiple administrations means businesses should expect continued aggressive action in this area under Ferguson. His statements suggest he sees children’s privacy as uniquely important, perhaps because children cannot meaningfully consent to data collection and because Congress has provided explicit statutory authority through COPPA, aligning with his preference for clear legislative mandates.

Location Data: A Clear Focus Area

Ferguson has shown particular concern about precise location data, which he views as inherently revealing of private details about people’s lives. In his December 2024 concurrence on the Mobilewalla case, he supported holding companies accountable for:

“The sale of precise location data linked to individuals without adequate consent or anonymization,” noting that “this type of data—records of a person’s precise physical locations—is inherently intrusive and revealing of people’s most private affairs.”

The FTC’s actions against location data companies signal that this will remain a priority enforcement area. Although Ferguson concurred in the complaints in the Mobilewalla case, he took a nuanced position. He supported charges related to selling precise location data without sufficient anonymization and without verifying consumer consent. However, he dissented from counts alleging unfair practices in categorizing consumers based on sensitive characteristics, arguing that “the FTC Act imposes consent requirements in certain circumstances. It does not limit how someone who lawfully acquired those data might choose to analyze those data.”

What This Means for Businesses

Companies should pay special attention to these two priority areas in their compliance efforts:

For Children’s Privacy:

  • Revisit COPPA compliance if your service may attract children
  • Review age verification mechanisms and parental consent processes
  • Implement data minimization practices for child users
  • Consider broader parental control features

For Location Data:

  • Implement clear consent mechanisms specifically for location tracking
  • Consider anonymization techniques for location information
  • Document processes for verifying consumer consent for location data
  • Be cautious about tying location data to individual identifiers
  • Implement and document reasonable retention periods for location data

While Ferguson may be more cautious about expanding the FTC’s regulatory reach in new directions, these established priority areas will likely see continued robust enforcement under his leadership. Companies should ensure their practices in these sensitive domains align with existing legal requirements.

The Telephone Consumer Protection Act (TCPA) continues to be a major source of litigation risk for businesses engaged in outbound marketing. In the first quarter of 2025, litigation under the TCPA surged dramatically, with 507 class action lawsuits filed — more than double the volume compared to the same period in 2024. This steep rise reflects shifting enforcement patterns and a growing emphasis on consumer communications practices. Companies should be aware of several emerging trends and evolving interpretations that are shaping the compliance environment.

TCPA Class Action Trends

In the first quarter of 2025, 507 TCPA class actions were filed, representing a 112% increase compared to the same period in 2024. April filings also reflected continued growth, indicating a sustained trend.

Key statistics:

  • Approximately 80% of current TCPA lawsuits are class actions.
  • By contrast, only 2%-5% of lawsuits under other consumer protection statutes, such as the Fair Debt Collection Practices Act (FDCPA) or the Fair Credit Reporting Act (FCRA), are filed as class actions.

This trend highlights the unique procedural and financial exposure associated with TCPA compliance.

Time-of-Day Allegations on the Rise

There has been an uptick in lawsuits alleging that companies are contacting consumers outside of the TCPA’s permitted calling hours — before 8 a.m. or after 9 p.m. local time. In March 2025 alone, a South Florida firm filed over 100 lawsuits alleging violations of these timing restrictions, many of which involved text messages.

Under the TCPA, telephone solicitations are not permitted during restricted hours, unless:

  • The consumer has given prior express permission;
  • There is an established business relationship; or
  • The call is made by or on behalf of a tax-exempt nonprofit organization.

It is currently unclear whether these exemptions definitively apply to time-of-day violations. A petition filed with the FCC in March 2025 seeks clarification on whether prior express consent precludes liability for messages sent during restricted hours. The FCC accepted the petition and opened a public comment period that closed in April.
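For sending pipelines, a quiet-hours guardrail is straightforward to implement. Below is a minimal sketch; it assumes you can map each recipient to an IANA time zone (a nontrivial assumption in practice, since area codes are unreliable proxies for location) and leaves consent and exemption questions out of scope:

```python
from datetime import datetime, time
from zoneinfo import ZoneInfo

# TCPA quiet hours: no telephone solicitations before 8 a.m. or after 9 p.m.
# in the recipient's local time. The boundary check here is conservative.
PERMITTED_START = time(8, 0)   # 8:00 a.m. local
PERMITTED_END = time(21, 0)    # 9:00 p.m. local

def within_permitted_hours(send_time_utc: datetime, recipient_tz: str) -> bool:
    """Return True if a message sent at send_time_utc lands inside the
    permitted calling window in the recipient's local time zone."""
    local = send_time_utc.astimezone(ZoneInfo(recipient_tz))
    return PERMITTED_START <= local.time() < PERMITTED_END

# Example: 1:30 a.m. UTC on March 15 is 8:30 p.m. the prior evening in Chicago
send_at = datetime(2025, 3, 15, 1, 30, tzinfo=ZoneInfo("UTC"))
print(within_permitted_hours(send_at, "America/Chicago"))  # True
```

Logging every suppressed message alongside a check like this also creates a record of compliance efforts if timing allegations arise.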

Drivers of Increased Litigation

Several factors appear to be contributing to the rise in TCPA filings:

  • An increase in plaintiff firm activity and case volume;
  • Ongoing confusion regarding the interpretation of revocation rules; and
  • Continued complaints regarding telemarketing practices, including unwanted robocalls and text messages.

These dynamics reflect a broader trend of regulatory and private enforcement in the consumer protection space.

Compliance Considerations

Businesses should take steps to ensure their outbound communication practices are aligned with current TCPA requirements. This includes:

  • Documenting consumer consent clearly at the point of lead capture;
  • Ensuring systems adhere to permissible calling and texting times;
  • Reviewing policies and procedures for revocation of consent; and
  • Seeking guidance from counsel with experience in consumer protection laws.

Conclusion

The volume and nature of TCPA litigation in 2025 underscore the need for proactive compliance. Companies should treat consumer communication compliance as a core operational issue. Regular policy reviews, up-to-date systems, and informed legal support are essential to mitigating risk in this evolving area of law.

During the 2024 legislative session, the Colorado General Assembly passed Senate Bill 24-205, known as the Colorado Artificial Intelligence Act (CAIA). The law takes effect on February 1, 2026, and requires developers and deployers of high-risk AI systems to protect Colorado residents (“consumers”) from risks of algorithmic discrimination. Notably, the Act also requires that developers or deployers disclose to consumers that they are interacting with an AI system. Colorado Gov. Jared Polis, however, expressed concerns in 2024 and expected legislators to refine key definitions and update the compliance structure before the effective date in February 2026.

As Colorado moves forward toward implementation, the Colorado AI Impact Task Force issued its recommendations for updates in its February 1, 2025 Report. These updates — along with the description of the Act — are covered below.

Background

A “high-risk” AI system is defined to include any machine-based system that infers outputs from data inputs and makes, or is a substantial factor in making, a decision with a material legal or similarly significant effect on the provision, denial, cost, or terms of a product or service. The statute identifies various sectors that involve consequential decisions, such as decisions related to healthcare, employment, financial or credit, housing, insurance, or legal services. Additionally, CAIA has numerous carve-outs for technologies that perform narrow tasks or certain functions, such as cybersecurity, data storage, and chatbots.

Outside of use case scenarios, CAIA also imposes on developers of AI systems the duty to prevent algorithmic discrimination and protect consumers from any known or reasonably foreseeable risks arising from the use of the AI system. A developer is one that develops, or intentionally and substantially modifies, an AI system used in the state of Colorado. Among other things, a developer must make documentation available describing the intended uses and potential harmful uses of the high-risk AI system.

Similarly, CAIA also regulates a person doing business in Colorado that deploys a high-risk AI system for Colorado residents to use (the “deployer”). Deployers face stricter regulations and must inform consumers when AI is involved in a consequential decision. The Act requires deployers to implement a risk management policy and program to govern their use of the AI system. Further, deployers must report any identified algorithmic discrimination to the Attorney General’s Office within 90 days and must allow consumers to appeal AI-based decisions or request human review of the decision when possible.

Data Privacy and Consumer Rights

Consumers have the right to opt out of data processing related to AI-based decisions and may appeal any AI-based decision. This opt-out provision may affect further automated decision-making related to the Colorado resident and the processing of personal data for profiling that consumer. The deployer must also disclose to the consumer when a high-risk AI system has been used in a decision-making process that results in an adverse decision for the consumer.

Exemptions

The CAIA contains various exemptions, including for entities operating under other regulatory regimes (e.g., insurers, banks, and HIPAA-covered entities) and for the use of certain approved technologies (e.g., technology cleared, approved, or certified by a federal agency, such as the FAA or FDA). There are some caveats, however. For example, HIPAA-covered entities are exempt only to the extent they provide healthcare recommendations that are generated by an AI system, require the covered entity to take action to implement the recommendation, and are not considered “high risk.” Small businesses are exempt to the extent that they employ fewer than 50 full-time employees and do not train the system with their own data. Thus, deployers should closely analyze the available exemptions to ensure their activities fall squarely within an exemption.

Updates

The recent Colorado AI Impact Task Force Report encourages additional changes to CAIA before the law is enforced in February 2026. The current concerns center on ambiguities, compliance burdens, and various stakeholder objections. The governor is concerned with whether the guardrails will inhibit innovation and AI progress in the state.

The Colorado AI Impact Task Force notes that there is consensus to refine documentation and notification requirements. However, there is less consensus on how to adjust the definition of “consequential decisions.” Reworking the exemptions to the definition of covered systems is also a change desired by both industry and the public. 

Other potential changes to the CAIA depend on how interconnected sections are revised in relation to one another. For example, changes to the definition of “algorithmic discrimination” depend on issues related to the obligations of developers and deployers to prevent algorithmic discrimination and on related enforcement. Similarly, intervals for impact assessments may be greatly affected by changes to the definition of “intentional and substantial modification” to high-risk AI systems. Further, impact assessments are interrelated with deployers’ risk management programs, so proposed changes to either will likely implicate the other.

Lastly, there remains firm disagreement on amendments related to several definitions. “Substantial factor” is one debated definition; how it is drawn will determine the scope of AI technologies subject to the CAIA. Similarly, the “duty of care” for developers and deployers is hotly contested, including whether to remove that concept or replace it with more stringent obligations. Other debated topics that are subject to change include the small business exemption, the opportunity to cure incidents of non-compliance, trade secret exemptions, the consumer right to appeal, and the scope of attorney general rulemaking.

Guidance

Given that most stakeholders recognize that changes are needed, any business impacted by the CAIA should continue to watch the legislative process for amendments that could drastically alter the scope and requirements of the Colorado AI Act.

Takeaways

Businesses should assess whether they, or their vendors, use any AI system that could be considered high risk under the CAIA. Some recommendations include:

  • Assess AI usage and consider whether that use is within the definition of the CAIA, including whether any exemptions are available
  • Conduct an AI risk assessment consistent with the Colorado AI Act
  • Develop an AI compliance plan that is consistent with the CAIA consumer protections regarding notification and appeal processes
  • Continue to monitor the updates to the CAIA
  • Evaluate contracts with AI vendors to ensure that necessary documentation is provided by the developer or deployer

Colorado has taken the lead as one of the first states in the nation to enact sweeping AI laws. Other states will likely look to the progress of Colorado and enact similar legislation or make improvements where needed. Therefore, watching the CAIA and its implementation is of great importance in the burgeoning field of consumer-focused AI systems that impact consequential decisions in the consumer’s healthcare, financial well-being, education, housing, or employment.

Introduction

On May 7, 2025, the Utah Artificial Intelligence Policy Act (UAIP) amendments will go into effect. These amendments provide significant updates to Utah’s 2024 artificial intelligence (AI) laws. In particular, the amendments focus on regulation of AI in the realms of consumer protection (S.B. 226 and S.B. 332), mental health applications (H.B. 452), and unauthorized impersonations (aka “deep fakes”) (S.B. 271).

Background (SB 149)

In March 2024, Utah became one of the first states to enact comprehensive legislation specifically addressing artificial intelligence with the passage of the Utah Artificial Intelligence Policy Act (UAIP, S.B. 149). Commonly referred to as the “Utah AI Act,” these provisions create important obligations for businesses that use AI systems to interact with Utah consumers. The UAIP went into effect on May 1, 2024.

If your business provides or uses AI-powered software or services that Utah residents access, you need to understand these requirements — even if your company isn’t based in Utah. This post will help break down these key amendments and what they mean for your business operations.

GenAI Defined

The Utah Act defines generative AI (GenAI) as a system that is (a) trained on data, (b) interacts with a person in Utah, and (c) generates outputs similar to outputs created by a human. (SB 149, 13-2-12(1)(a)). 

Transparency and Disclosure Requirements

If a company provides services in a regulated occupation (that is, an occupation that requires a person to obtain a state certification or license to practice), the company must disclose that a person is interacting with GenAI in the delivery of regulated services if the interaction is defined as “high risk” by the statute. The disclosure must be provided at the beginning of the interaction: orally for a verbal interaction, or in writing for a written interaction. If the GenAI supplier wants the benefit of the safe harbor, use of the AI system must be disclosed at the beginning of any interaction and throughout the exchange of information. (S.B. 226)

If a company uses GenAI to interact with a person in Utah in “non-regulated” occupations, the company must disclose that the person is interacting with a GenAI system and not a human when asked by the Utah consumer. 

S.B. 226 further added mandatory requirements for high-risk interactions related to health, financial, and biometric data, or providing personalized advice in areas like finance, law, or healthcare. Additionally, S.B. 226 granted authority to the Division of Consumer Protection to make rules to specify the form and methods of the required disclosures.
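Read together, these provisions create a tiered set of disclosure obligations. The sketch below reflects our rough reading of those tiers, not the statute’s text; the statutory definitions (and any Division of Consumer Protection rules) control:

```python
from enum import Enum, auto

class Disclosure(Enum):
    AT_START_AND_THROUGHOUT = auto()  # safe-harbor posture under S.B. 226
    AT_START = auto()                 # regulated occupation, "high risk" interaction
    UPON_REQUEST = auto()             # non-regulated: disclose when the consumer asks

def required_disclosure(regulated_occupation: bool,
                        high_risk: bool,
                        wants_safe_harbor: bool) -> Disclosure:
    """Rough mapping of the UAIP disclosure tiers as amended by S.B. 226 (illustrative)."""
    if wants_safe_harbor:
        return Disclosure.AT_START_AND_THROUGHOUT
    if regulated_occupation and high_risk:
        return Disclosure.AT_START
    return Disclosure.UPON_REQUEST

# Example: a licensed-profession chatbot handling a high-risk interaction
print(required_disclosure(True, True, False))  # Disclosure.AT_START
```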

Enforcement and Penalties

The Utah Division of Consumer Protection may impose an administrative fine of up to $2,500 for each violation of the UAIP. The courts or the Utah attorney general may also impose a civil penalty of no more than $5,000 for each violation of a court order or administrative order. As made clear by S.B. 226, violations of the Act are also subject to injunctive relief, disgorgement of profits, and payment of the Division’s attorney fees and costs.

Key Takeaways

The 2024 Act requires that companies clearly and conspicuously disclose that a person is interacting with GenAI if asked by the person interacting with the AI system. The restrictions tighten when GenAI is used in a regulated occupation involving sensitive personal information or significant personal decisions in the high-risk categories, as amended in 2025 under S.B. 226. In those instances, the company must disclose the use of GenAI proactively. If the supplier wants the benefit of the 2025 safe harbor under S.B. 226, the AI system must disclose its use at the beginning of an interaction and throughout the interaction.

Conclusion

Utah, along with several other states, took the lead to enact AI-related laws. It is likely that states will continue to regulate AI technology ahead of the federal government.

Stay tuned for subsequent blog posts that will provide updates on mental health applications (H.B. 452) and unauthorized impersonations (aka “deep fakes”) (S.B. 271).

Since Andrew Ferguson assumed the role of FTC chair in January 2025, following his year-long tenure as a commissioner, businesses have been watching closely for signals of how the agency might redirect its focus on privacy enforcement. Ferguson’s public statements, concurrences, and dissents provide valuable insight into his regulatory philosophy and what companies can expect under his leadership. This is the first in our series on what to expect from the FTC on privacy enforcement during Ferguson’s tenure as chair.

Philosophical Approach: “Staying in Our Lane”

The cornerstone of Ferguson’s regulatory philosophy is a commitment to enforcing existing laws without extending the FTC’s reach beyond what he views as its congressional mandate. In his September 2024 remarks at the International Consumer Protection and Enforcement Network, Ferguson emphasized:

“We must be mindful not to stretch the scope of consumer-protection laws beyond their rightful purpose. We must stay in our lane.”

He cautioned against treating consumer protection law as a “panacea for social ills,” arguing that doing so undermines the rule of law, creates legal uncertainty, and can have a chilling effect on innovation.

His statements reflect a concern that regulatory overreach not only exceeds statutory authority but can actively harm the innovation economy. In his June 2024 Taiwan remarks, Ferguson noted that “competition law will not get us the privacy standards we seek, nor is it intended to,” revealing his preference for domain-specific approaches to different regulatory problems.

The FTC as “Cop on the Beat” Rather Than Rulemaker

Ferguson has expressed skepticism about extensive rulemaking. In his December 2024 dissent to the FTC’s Regulatory Plan and Agenda, he stated bluntly:

“The Commission under President Trump will focus primarily on our traditional role as a cop on the beat. We will vigorously and faithfully enforce the laws that Congress has passed, rather than writing them.”

This suggests businesses may expect:

  • Fewer new privacy rules and more case-by-case enforcement actions;
  • Stricter textual interpretation of existing statutes such as COPPA, GLBA, and Section 5 of the FTC Act; and
  • Less reliance on policy statements and sub-regulatory guidance.

Ferguson’s preference for enforcement over rulemaking represents both a philosophical position on separation of powers and a practical assessment of the commission’s strengths. In his dissent on the Non-Compete Clause Rule, he argued that “the difficulty of legislating in Congress is a feature of the Constitution’s design, not a fault,” suggesting he views the constraints of the legislative process as purposeful rather than obstacles to be circumvented.

Section 5 Enforcement: Clear Standards for “Unfairness”

Ferguson favors a more restrained interpretation of the FTC’s unfairness authority under Section 5. While he acknowledges the three-part test established by Congress (substantial injury, not reasonably avoidable, and not outweighed by countervailing benefits), he tends to apply this test more narrowly than his predecessors.

In multiple statements, Ferguson has emphasized that:

  • Clear harm is required – The “substantial injury” prong requires demonstrable harm, not speculative or theoretical injuries.
  • Consent as central – Ferguson views proper notice and consent as often sufficient to render injuries “reasonably avoidable” by consumers.
  • Business practices vs. outcomes – He distinguishes between business practices that directly cause harm and those that may enable harmful outcomes but aren’t inherently harmful themselves.

For example, in his Mobilewalla concurrence, Ferguson supported unfairness claims related to the unconsented collection of sensitive data but rejected unfairness theories based on how lawfully collected data was subsequently categorized or analyzed.

Under Ferguson’s leadership, the FTC will likely invoke its unfairness authority in cases involving clear, substantial injury that consumers could not reasonably avoid, rather than expanding the doctrine to address emerging technologies or novel business practices.

What It Means for Businesses

Based on Ferguson’s statements and positions, businesses should:

  • Focus on meaningful consent – Ensure that privacy notices are clear and that you have documented consent for collecting and using sensitive data.
  • Document data practices – Maintain clear records of data collection, use, and sharing practices.
  • Monitor case-by-case enforcement – Watch for enforcement actions rather than new rulemakings to understand the agency’s evolving priorities.
  • Engage with legislative processes – With Ferguson’s FTC less likely to set policy through rulemaking, businesses should increase their engagement with congressional privacy initiatives, as statutory changes may be the primary vehicle for major privacy policy developments.

While Ferguson’s approach represents a shift from the previous administration, it does not signal an abandonment of privacy enforcement. Instead, we can expect the FTC to take a more traditional approach focused on clear statutory mandates and established legal theories, with vigorous enforcement of existing privacy laws while being more cautious about expanding the FTC’s authority through creative interpretation.

In this final blog post in the Bradley series on the HIPAA Security Rule notice of proposed rulemaking (NPRM), we examine how the U.S. Department of Health and Human Services (HHS) Office for Civil Rights interprets the application of the HIPAA Security Rule to artificial intelligence (AI) and other emerging technologies. While the HIPAA Security Rule has traditionally been technology agnostic, HHS explicitly addresses security measures for these evolving technology advances. The NPRM provides guidance to incorporate AI considerations into compliance strategies and risk assessments.

AI Risk Assessments

In the NPRM, HHS would require a comprehensive, up-to-date inventory of all technology assets that identifies AI technologies interacting with ePHI. HHS clarifies that the Security Rule governs ePHI used in both AI training data and the algorithms developed or used by regulated entities. As such, HHS emphasizes that regulated entities must incorporate AI into their risk analysis and management processes and regularly update their analyses to address changes in technology or operations. Entities must assess how an AI system interacts with ePHI, considering the type and amount of data accessed, how the AI uses or discloses ePHI, and who receives AI-generated outputs.

HHS expects entities to identify, track, and assess reasonably anticipated risks associated with AI models, including risks related to data access, processing, and output. Flowing from the proposed data mapping safeguards discussed in previous blog posts, regulated entities would document where and how the AI software interacts with or processes ePHI to support risk assessments. HHS would also require regulated entities to monitor authoritative sources for known vulnerabilities to the AI system and promptly remediate them according to their patch management program. This lifecycle approach to risk analysis aims to ensure the confidentiality, integrity, and availability of ePHI as technology evolves.
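As a purely illustrative sketch (the NPRM does not prescribe any particular format, and every field name below is our assumption), an inventory entry for an AI asset that touches ePHI might record:

```python
from dataclasses import dataclass, field
from datetime import date
from typing import Optional

@dataclass
class AIAssetInventoryEntry:
    """Illustrative inventory record for an AI technology that interacts with ePHI."""
    asset_name: str    # e.g. "clinical-summary-assistant"
    vendor: str        # business associate responsible for the system
    accesses_ephi: bool  # does the system touch ePHI at all?
    ephi_uses: list[str] = field(default_factory=list)          # e.g. "inference input", "model training"
    data_categories: list[str] = field(default_factory=list)    # types of ePHI accessed
    output_recipients: list[str] = field(default_factory=list)  # who receives AI-generated outputs
    known_vulnerabilities: list[str] = field(default_factory=list)  # tracked from authoritative sources
    last_risk_review: Optional[date] = None  # most recent risk-analysis update

# Example entry for a hypothetical note-drafting tool
entry = AIAssetInventoryEntry(
    asset_name="clinical-summary-assistant",
    vendor="ExampleAI, Inc.",  # hypothetical vendor
    accesses_ephi=True,
    ephi_uses=["inference input"],
    data_categories=["visit notes", "lab results"],
    output_recipients=["treating clinician"],
    last_risk_review=date(2025, 6, 1),
)
```

An inventory in this shape supports the lifecycle approach HHS describes: each entry can be re-reviewed whenever the model, its data access, or its vendor changes.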

Integration of AI Developers into the Security Risk Analysis

More mature entities typically have built out third-party vendor risk management diligence. If finalized, the NPRM would require all regulated entities contracting with AI developers to formally incorporate Business Associate Agreement (BAA) risk assessments into their security risk analyses. Entities also would need to evaluate business associates based on written security verifications confirming that the AI vendor has documented security controls. Regulated entities should collaborate with their AI vendors to review technology assets, including AI software that interacts with ePHI. This partnership will allow entities to identify and track reasonably anticipated threats and vulnerabilities, evaluate their likelihood and potential impact, and document security measures and risk management.

Getting Started with Current Requirements

Clinicians are increasingly integrating AI into clinical workflows to analyze health records, identify risk factors, assist in disease detection, and draft real-time patient summaries for review as the “human in the loop.” According to the most recent HIMSS cybersecurity survey, most health care organizations permit the use of generative AI with varied approaches to AI governance and risk management. Nearly half the organizations surveyed did not have an approval process for AI, and only 31% report that they are actively monitoring AI systems. As a result, the majority of respondents are concerned about data breaches and bias in AI systems. 

The NPRM enhances specificity in the risk analysis process by incorporating informal HHS guidance, security assessment tools, and frameworks for more detailed specifications. Entities need to update their procurement process to confirm that their AI vendors align with the Security Rule and industry best practices, such as the NIST AI Risk Management Framework, for managing AI-related risks, including privacy, security, unfair bias, and ethical use of ePHI.

The proposed HHS requirements are not the only concerns clinicians must consider when evaluating AI vendors. HHS also has finalized a rule under Section 1557 of the Affordable Care Act requiring covered healthcare providers to identify and mitigate discrimination risks from patient care decision support tools. Regulated entities must mitigate AI-related security risks and strengthen vendor oversight in contracts involving AI software that processes ePHI to meet these new demands.

Thank you for tuning into this series analyzing the Security Rule updates. Please contact us if you have any questions or if we can assist with any next steps.

Please visit the HIPAA Security Rule NPRM and the HHS Fact Sheet for additional resources.