
Age Verification Compliance Obligations

Age verification requirements have rapidly moved from a niche policy concept to a central feature of the U.S. regulatory landscape. Over the last two years, and accelerating sharply in 2025, states have increasingly adopted laws that require online services to confirm a user’s age before allowing access to certain types of content or platform features. What began as a targeted effort to restrict minors’ access to sexually explicit material is now expanding into broader regulation of social media platforms, account creation, and even algorithm-driven feeds.

For companies operating online, this shift creates a familiar modern compliance challenge: a fast-growing patchwork of state laws with inconsistent requirements, evolving definitions, and uncertain enforcement outcomes driven by ongoing constitutional litigation.

Age Verification Goes Mainstream

By 2025, age verification had crossed a critical threshold. Roughly half of U.S. states now mandate some form of age gating for adult content or social media access, and additional laws are expected to take effect in 2026. The practical result is that companies can no longer treat age verification as a niche issue limited to a handful of jurisdictions. For many businesses, it has become an operational reality with immediate implications.

This trend reflects broader legislative momentum around children’s privacy and online safety. State lawmakers have increasingly focused on perceived harms to minors online, including exposure to inappropriate content, excessive social media use, and the design features that encourage prolonged engagement.

The Texas Model and the First Amendment Framework

A key development driving legislative confidence in this space stems from litigation over Texas H.B. 1181, enacted in 2023. That law requires certain commercial websites that publish sexually explicit content deemed obscene to minors to verify that visitors are at least 18 years old. Industry representatives challenged the statute as unconstitutional under the First Amendment, arguing that adults have a right to access lawful content without being forced to identify themselves or pass through an age gate. The U.S. Supreme Court upheld the statute in Free Speech Coalition v. Paxton (June 2025), concluding that Texas may require websites featuring substantial sexually explicit content to verify that users are adults before granting access. Applying intermediate scrutiny, the Court determined the law permissibly advances the state’s interest in shielding minors from inappropriate material and does not create a First Amendment entitlement for adults to access that content without confirming age.

Practical and Privacy Consequences for Users

Age gates often require users to provide personal information, interact with third-party verification tools, or submit identification credentials. This can deter lawful adult access, reduce anonymity, and create friction that changes consumer behavior. Some individuals may be blocked entirely, including those who lack government-issued identification or are incorrectly flagged by automated systems.

Age verification systems can create significant privacy and cybersecurity risk because they often require users to submit sensitive personal information — such as date of birth, government ID credentials, or biometric verification — frequently through third-party tools. That process reduces user anonymity and can generate records linking individuals to particular categories of online activity, which may be highly sensitive. From a security standpoint, collecting and transmitting identity data expands the company’s attack surface and increases exposure if systems or vendors are compromised, since ID images and related verification data are high-value targets for fraud, identity theft, and extortion. Even when implemented with good intentions, age verification can therefore introduce new compliance and incident-response risks if businesses do not tightly control vendor access, data minimization, retention, and security safeguards.
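For engineering teams implementing an age gate, one concrete mitigation suggested by the paragraph above is to minimize what the verification flow retains. Below is a minimal sketch of that idea; it is illustrative only, and names such as `VerificationResult`, the `method` values, and the 90-day retention window are assumptions rather than requirements drawn from any particular statute or vendor.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

# Illustrative retention window; the appropriate period depends on the
# applicable statute, vendor contract, and the company's own risk analysis.
RETENTION_PERIOD = timedelta(days=90)

@dataclass
class VerificationResult:
    """Minimal record kept after an age check (hypothetical schema).

    Note what is *not* here: no date of birth, no ID image, no biometric
    template. Only the boolean outcome and housekeeping fields survive.
    """
    user_id: str       # internal pseudonymous identifier
    is_adult: bool     # outcome of the check, nothing more granular
    method: str        # e.g., "third_party_vendor", "credit_card_check"
    verified_at: datetime
    expires_at: datetime

def record_verification(user_id: str, is_adult: bool, method: str) -> VerificationResult:
    """Persist only the outcome of a verification and schedule its deletion."""
    now = datetime.now(timezone.utc)
    return VerificationResult(
        user_id=user_id,
        is_adult=is_adult,
        method=method,
        verified_at=now,
        expires_at=now + RETENTION_PERIOD,
    )

def purge_expired(records: list[VerificationResult]) -> list[VerificationResult]:
    """Drop records past their retention window (data minimization in practice)."""
    now = datetime.now(timezone.utc)
    return [r for r in records if r.expires_at > now]
```

The key design choice is that the underlying ID artifact or birth date never reaches long-term storage; only the yes/no outcome, which is far less attractive to an attacker, is kept and eventually purged.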

The Expansion to Social Media: Parental Consent and Account Restrictions

In 2025, lawmakers increasingly moved beyond adult content and began targeting minors’ social media usage. Multiple states passed laws that require platforms to verify age and obtain parental consent before allowing minors to create accounts or access certain services. Some proposals focus on younger children, while others extend restrictions to anyone under age 18.

In total, eight states have enacted laws that ban minors from obtaining social media accounts outright, require parental consent for certain minors to open accounts, or both. During the 2025 legislative session alone, nearly 30 bills were introduced across 18 states attempting to impose similar restrictions.

These laws vary widely in scope, age thresholds, definitions of covered platforms, and enforcement mechanisms. For companies operating nationwide, that inconsistency is a major compliance obstacle. A product design and onboarding process that works in one state may be noncompliant in another.

Regulating Design and “Addictive” Features

A related legislative trend involves targeting features viewed as designed to increase usage among minors. In 2024, California and New York pioneered efforts to regulate “addictive algorithms” by restricting the delivery of certain feeds or engagement-driven features to minor accounts without parental consent. Some states followed with their own versions of “addictive feed” legislation in 2025, with varying levels of success.

These efforts represent an important shift. Rather than regulating only what content minors can access, lawmakers are increasingly attempting to regulate how platforms deliver content and how digital products are designed to keep users engaged.

For companies, these proposals can implicate product design decisions, internal testing, algorithmic ranking systems, and even liability theories tied to youth mental health. Even where laws are challenged or delayed, the legislative intent is clear: Design-based regulation is likely to remain on the table.

Practical Takeaways

For companies navigating this landscape, the challenge is not only meeting current legal requirements but also building a program that can adapt quickly as laws change and court decisions reshape what is enforceable.

Key steps include:

  • Inventory applicable state requirements. Companies should map where their users are located and which laws may apply based on access and operations (a minimal jurisdiction-mapping sketch follows this list).
  • Assess verification tools with privacy and security in mind. Age verification solutions can create significant data risk if they involve collection, storage, or third-party processing of identity information.
  • Prepare for rapid legal shifts. Many laws are being challenged, modified, or replaced, meaning compliance strategies must be designed for change rather than permanence.
  • Coordinate across legal, privacy, cybersecurity, and product teams. Age verification impacts product architecture, user experience, and security posture — not just legal risk.
  • Document decisions and risk tradeoffs. Given uncertainty and evolving standards, maintaining a record of compliance reasoning and design choices can be valuable for regulatory inquiries and litigation defense.
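One way to operationalize the first bullet above is a jurisdiction matrix that legal and product teams maintain together and that onboarding code can query. The sketch below is purely illustrative: the `StateRequirement` fields, the placeholder state code, and every value in it are assumptions, not summaries of any actual state law.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class StateRequirement:
    """Placeholder entry in a jurisdiction matrix (all values illustrative)."""
    age_gate_required: bool = False
    parental_consent_under: Optional[int] = None   # e.g., 18 = consent needed under 18
    covered_features: list[str] = field(default_factory=list)
    effective_date: Optional[str] = None
    notes: str = ""

# The actual matrix must be populated and kept current by counsel;
# the entry below is a placeholder, not a real statutory summary.
STATE_MATRIX: dict[str, StateRequirement] = {
    "XX": StateRequirement(
        age_gate_required=True,
        parental_consent_under=18,
        covered_features=["account_creation"],
        effective_date="2026-01-01",
        notes="placeholder entry",
    ),
}

def requirements_for_user(state_code: str) -> StateRequirement:
    """Look up obligations triggered by a user's state; default to none found."""
    return STATE_MATRIX.get(state_code, StateRequirement())
```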

Companies should expect continued state activity in 2026 and beyond and should plan now for a future in which age verification, youth safety compliance, and platform design restrictions remain central issues for online business.


In 2026, a wide range of California laws regulating the development, marketing, and use of artificial intelligence (AI) go into effect. Together, these bills impose new requirements on generative AI developers, frontier-model companies, healthcare-related AI tools, platforms distributing AI-generated content, and businesses that rely on algorithmic pricing. With the deadline to comply coming up quickly, companies operating in California (or offering AI-enabled products or services to California residents) should assess how these laws apply to their technologies and update their governance and disclosure practices accordingly.

Most of the new obligations take effect in 2026, with some requirements extending into 2027 and 2028. Below is an overview of the key components of enacted bills AB 316, AB 325, AB 489, AB 621, and AB 2013.

AB 316: Liability for AI-Related Harms (Effective: January 1, 2026)

AB 316 applies broadly to any civil action where AI involvement is alleged to have caused damage. The bill limits affirmative defenses for civil liability, prohibiting defendants (which could include developers, modifiers, or users of AI) from raising an “autonomous-harm defense” in lawsuits alleging harm caused by AI-generated or AI-modified content. This defense, which might otherwise allow parties to shift blame to the technology’s independent decision-making, is explicitly barred to ensure that human responsibility remains in place as AI models are deployed.

AB 325: Algorithmic Pricing and Antitrust (Effective: January 1, 2026)

AB 325 amends the Cartwright Act, a California antitrust statute, to prohibit anticompetitive use or distribution of “common pricing algorithms.” A common pricing algorithm includes any methodology (computer-based or otherwise) that uses competitor data to recommend, align, stabilize, or influence prices or commercial terms. The statute creates two categories of liability: (i) use or distribution of a common pricing algorithm as part of a contract, combination, or conspiracy to restrain trade; and (ii) coercion to adopt an algorithm-recommended price or commercial term.

AB 489: Misleading Statements on Health Care Professional Oversight of Artificial Intelligence (Effective: January 1, 2026)

AB 489 prohibits developers and deployers of AI and generative AI technologies from using titles, terms, icons, post-nominal letters, or design elements that could falsely suggest the system is providing services from a licensed healthcare professional. Any implication — direct or subtle — that a licensed professional oversees the output is barred unless such oversight exists. This prohibition applies both to advertising and to in-product functionality for both AI and generative AI systems. Each misleading representation may constitute a separate offense, and state licensing boards are authorized to investigate and enforce violations, including through civil penalties.

AB 621: Expanded Protections Against Digitized Sexually Explicit Deepfakes (Effective: January 1, 2026)

AB 621 strengthens legal protections against non-consensual, sexually explicit “deepfakes.” The law broadens the definition of “digitized sexually explicit material,” clarifies that minors cannot consent to its creation or distribution, increases damages (up to $250,000 for malicious violations), and grants public prosecutors civil enforcement authority. These expanded remedies meaningfully increase the risks for individuals and entities involved in creating or distributing such material.

AB 2013: Mandatory Dataset Disclosure for Generative AI Developers (Compliance by: January 1, 2026)

AB 2013 requires developers of generative AI systems to publicly disclose detailed information about the datasets used to train their models. Disclosures must be posted on the developer’s website and updated when substantial system modifications occur. The breadth of the required disclosures has raised concerns among developers about protection of intellectual property, confidentiality, and potential litigation exposure.

Takeaways to Prepare for Compliance

With multiple California AI-related statutes becoming enforceable in 2026 (and additional obligations arriving in 2027 and 2028 that will be covered in a subsequent post), companies need to assess the applicability of these laws now, which may include:

  • Mapping and documenting training datasets now to ensure readiness for AB 2013.
  • Evaluating the use of shared or third-party algorithmic pricing tools under AB 325.
  • Reviewing their risk exposure related to the enabling of deepfake pornography under AB 621.
  • Considering the allocation of liability in agreements given the prohibition on autonomous harm defenses by AB 316.
  • Ensuring that their use of AI does not imply that their services are provided by a licensed healthcare professional unless overseen by a healthcare provider to comply with AB 489.

California’s AI regulatory framework is expanding rapidly. These new statutes are in addition to California’s regulations regarding Automated Decision-Making Technology, which were promulgated under the California Consumer Privacy Act and go into effect on January 1, 2026. Early preparation will help your company navigate the state’s emerging compliance landscape. If you have questions about these new California laws or their impact on your operations, please contact one of the authors.


A New Approach to Data Regulation

With the U.S. Department of Justice’s Data Security Program (DSP) now in full effect, companies that handle sensitive personal data, operate across borders, or rely on global vendor ecosystems face an increasingly complex compliance environment. The DSP restricts certain data transactions involving individuals and countries of concern, imposes new contractual compliance obligations, and signals a clear national-security approach to data governance.

The DSP marks a new era in the federal government’s regulation of data transactions, applying concepts traditionally used in U.S. export control law to bulk or sensitive data exchanges. The DSP is designed to address what the DOJ has described as “the extraordinary national security threat” posed by U.S. adversaries acquiring Americans’ most sensitive data through commercial means to “commit espionage and economic espionage, conduct surveillance and counterintelligence activities, develop AI and military capabilities, and otherwise undermine our national security.”

Key Compliance Practices

U.S. businesses should work on the following:

  • Identifying where covered data resides, who has access, and which external parties interact with the information. Businesses should focus on (i) bulk datasets or datasets relating to sensitive government activities and (ii) any technical, contractual, or operational links to foreign jurisdictions that could implicate the DSP’s restrictions.
  • Evaluating existing vendor, service-provider, and data-processing agreements in light of the DSP. This includes updating provisions to address data-access controls, obligations, audit rights, and restrictions on subcontracting or data relocation. A business that holds or anticipates holding DSP-covered data should incorporate DSP-aligned terms by default in all new agreements.
  • Aligning their data compliance program to ensure effective data classification and accountability, documented vendor due-diligence procedures, and demonstrable oversight over the data they hold.
  • Training key personnel—including legal, procurement, IT, HR, and executives—so they can recognize when a transaction, vendor, or relationship may fall within the DSP’s scope.
  • Conducting thorough due diligence before engaging in a merger or acquisition transaction involving covered data or a transaction with parties that own or have access to such data.

U.S. businesses engaged in restricted transactions must also:

  • Maintain a sufficient data compliance program that verifies data flows and confirms vendor identities.
  • Develop a written description of their compliance program and implement the Cybersecurity and Infrastructure Security Agency (CISA) security requirements, which must be certified annually.
  • Conduct independent audits of their compliance program.
  • Retain all relevant records for at least ten years.

Persons and Countries of Concern

The DSP grants the federal government broad authority to designate “countries of concern” and “covered persons.” Currently designated countries are China (including Hong Kong and Macau), Cuba, Iran, North Korea, Russia, and Venezuela. A covered person generally includes any entity, contractor, or employee affiliated with one of these countries. The Attorney General may designate additional countries or individuals.

What Data is Covered?

The DSP regulates two primary categories of information: bulk U.S. sensitive personal data and government-related data. Sensitive personal data includes precise geolocation information, biometric identifiers, genomic and other “’omic” data, personal health data, personal financial data, and certain personal identifiers—each subject to specific volume thresholds. Government-related data includes precise location data associated with designated sensitive government sites and any sensitive personal data marketed as linked or linkable to U.S. government employees, contractors, or officials, which is covered regardless of volume. These categories are broad enough to encompass data controlled or processed by banking, healthcare, and technology firms. Importantly, DSP requirements apply even to de-identified, pseudonymized, or encrypted data.
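Because most DSP categories are volume-sensitive, many compliance programs begin with a data inventory that tags datasets by category and compares U.S.-person record counts against the applicable bulk thresholds. The sketch below illustrates that screening step only; the category names are simplified and the threshold numbers are placeholders that must be replaced with the values in the DSP regulations themselves.

```python
from dataclasses import dataclass

# Placeholder thresholds -- the real bulk thresholds are set by the DSP
# regulations and differ by category; confirm against the current rule.
BULK_THRESHOLDS = {
    "precise_geolocation": 1_000,
    "biometric_identifiers": 1_000,
    "human_genomic": 100,
    "personal_health": 10_000,
    "personal_financial": 10_000,
    "covered_identifiers": 100_000,
}

@dataclass
class Dataset:
    name: str
    category: str            # one of the keys above, or government-related data
    us_person_records: int
    government_related: bool = False

def is_potentially_covered(ds: Dataset) -> bool:
    """Rough screen of whether a dataset could implicate the DSP.

    Government-related data is covered regardless of volume; other
    categories are covered only above the bulk thresholds. This is a
    screening aid for counsel review, not a legal determination.
    """
    if ds.government_related:
        return True
    threshold = BULK_THRESHOLDS.get(ds.category)
    return threshold is not None and ds.us_person_records >= threshold
```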

Enforcement

The DOJ’s Foreign Investment Review Section (FIRS) is responsible for enforcing the DSP. Although enforcement activity has been limited thus far, the penalties for non-compliance are significant. Failing to comply with the DSP can result in civil and criminal penalties under the International Emergency Economic Powers Act, including prison sentences of up to twenty years and fines as high as $1,000,000. The DOJ’s Whistleblower Award program also provides financial incentives for individuals to report violations, increasing the likelihood that prohibited or non-compliant activities will come to the government’s attention.

Compliance Resources

For organizations preparing to comply with the DSP, additional resources are invaluable. The DOJ has published an FAQ, a compliance guide, and offers periodic enforcement updates, while CISA has released the required security controls for restricted transactions. Bradley is ready to assist businesses in interpreting these complex rules, weighing possible exceptions, assessing their exposure, and building effective compliance programs. If you have questions about the DSP or its impact on your operations, please contact one of the authors.


On May 12, 2025, the Federal Trade Commission’s Rule on Unfair or Deceptive Fees took effect, thereby thrusting the United States’ primary regulator of unfair or deceptive practices once more into the spotlight. But the spotlight on the FTC rule, which applies only to the short-term lodging and live-event ticketing industries, has obscured the simultaneous expansion of broader-reaching state legislation. These state laws, which are either currently in effect or pending in nearly half of all states, pose far greater liability for most businesses. If you’ve been comforted by the limited scope of the FTC rule and ignored the rise of these state laws, it’s not too late — this article will quickly catch you up to speed on your potential liability and the next steps your business should take.

What Are These New State Laws?

Some of the confusion around the growing body of state fee legislation stems from differing naming conventions. The FTC rule refers to “unfair or deceptive” fees, but the types of activities that states seek to regulate are also known as junk fees, hidden fees, bait-and-switch pricing, or drip pricing, among others. This article will use the phrase “fee legislation” to encompass the wide variety of conventions in use. Regardless of what they’re ultimately named, these laws and regulations target fees that are added to the total price a consumer must pay but were not disclosed before the consumer committed to pay. In fact, in some cases, the price advertised must include all of the fees the consumer will pay.

Under both federal and state law, a business will incur liability under fee legislation when it asks the consumer to pay a fee that was not disclosed before final payment was expected. But state fee legislation varies, and some states have independent requirements for how and when a fee must be disclosed. A state usually requires a “clear and conspicuous” disclosure ahead of final payment, but how that standard is defined depends on the state. For states that have only adopted regulations bolstering their unfair or deceptive acts and practices (UDAP) statutes, the path to compliance is typically even less clear.
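Whatever the statutory wording, most of these laws converge on the same engineering requirement: the first price a consumer sees must already include every mandatory fee, with truly optional add-ons priced separately before final payment. A minimal sketch of that pricing logic, with hypothetical fee names, might look like this:

```python
from dataclasses import dataclass

@dataclass
class Fee:
    name: str
    amount: float
    mandatory: bool   # mandatory fees must be folded into the advertised price

def advertised_price(base_price: float, fees: list[Fee]) -> float:
    """Total price to display up front: base plus every mandatory fee."""
    return base_price + sum(f.amount for f in fees if f.mandatory)

def optional_add_ons(fees: list[Fee]) -> list[Fee]:
    """Optional fees can be itemized separately, before the consumer commits to pay."""
    return [f for f in fees if not f.mandatory]

# Hypothetical example: a $100 listing with a mandatory service fee and optional insurance.
fees = [Fee("service_fee", 18.00, mandatory=True),
        Fee("damage_insurance", 12.00, mandatory=False)]
print(advertised_price(100.00, fees))   # 118.0 -- what the consumer should see first
```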

Liability Imposed by Fee Legislation

Just how severe is the liability incurred by these new state laws? Using its powerful statutory authority, the FTC can seek upwards of $53,000 per violation of its own fee rule. In a similar fashion, many states have passed regulations to expand their preexisting UDAP statutes in lieu of formal fee legislation. This means businesses can expect to incur similar “per violation” penalties from states themselves. The FTC has cautioned its rule only establishes a baseline, and nothing forecloses the ability of a state to bring its own action.

Even where a state has not passed formal fee legislation or amended its UDAP statute to address hidden fees, a business can still incur liability. Take Texas, for example. The state recently brought an action against Booking.com alleging that the site’s failure to disclose mandatory fees violated Texas’ UDAP statute — even though Texas, unlike other states, had not adopted formal regulations enumerating hidden fees under that statute. Nonetheless, the Texas attorney general was able to settle the case against Booking.com for a total of $9.5 million.

What if a state’s attorney general chooses not to pursue legal action on behalf of the state? There is still potential liability since many states have built-in private rights of action within their UDAP statutes. Depending on the state, the private right of action may exist under the statute’s “catch-all” unfair or deceptive provision or remain limited to only the statute’s enumerated unfair or deceptive business practices. Since fee legislation is an evolving area of law and many state UDAP statutes look to the FTC’s guidance on what is unfair or deceptive, it is likely that state UDAP liability could evolve to include hidden fees, even in the absence of specific state fee legislation. As such, the private rights of action within state UDAP statutes will always remain a consistent and threatening source of liability, regardless of whether formal fee legislation or expansive regulations have been passed in the state.

Finally, the most concerning challenge posed by this rise in fee legislation is the inclusion of private rights of action separate from a state’s UDAP statute. At least eight states currently have private rights of action in their fee legislation, with multiple states allowing for the recovery of punitive damages as well. Failure to properly comply with these new state laws can therefore lead to a sea of lawsuits with millions of dollars at stake. And given that this fee legislation is typically triggered based on the status of the consumer’s state residency, rather than the residency of the business, one small mistake can mean defending multiple suits across different states — including class action lawsuits.

What Steps Should You Take Now?

If your business has been watching from the sidelines under an expectation that the FTC rule doesn’t apply, it’s not too late to start ensuring compliance with the broader wave of state fee legislation. For now, consider the following:

  • Evaluate your ecommerce flows. It is vital to understand when mandatory fees are being presented to the consumer to ensure your business is in compliance.
  • Map where you conduct your business. If you’re selling to consumers in a state that has fee legislation, you are vulnerable. Knowing where your business conducts its sales ensures you know where a lawsuit may originate.
  • Keep a watchful eye on your primary state’s attorney general. Because some states have added regulations to support their existing UDAP statutes, monitoring the state attorney general’s office or related UDAP enforcer is key to managing expectations of liability.
  • Don’t wait for your industry to change. Liability for hidden fees is now a case of “when” not “if.” Private rights of action make this new legislation unpredictable. Be on the forefront of your industry by taking steps to ensure compliance today, without waiting for someone else to take the lead.

These are only the first steps your business should take. Given the threat of costly litigation from private rights of action and “per violation” liability, the national interest surrounding fee enforcement, and the FTC rule overshadowing the wider net of state legislation, staying proactive on your business’s compliance is crucial. With new fee legislation springing up, the narrative on compliance is rapidly changing.

For more information on fee legislation and other updates regarding privacy law developments, subscribe to Bradley’s privacy blog, Online and On Point, or reach out to one of our authors.


If you run an ecommerce brand or handle marketing compliance, you’ve probably heard about Texas Senate Bill 140 (SB 140) and its potential impact on text message marketing. Earlier this month, a group of plaintiffs, including an industry association and two e-commerce companies, asked a federal court in Austin to block the state from enforcing parts of the law. The state recently filed its brief opposing that request. This post examines the state’s position and its practical implications for text message marketing programs.

Texas’ SB 140

Texas Senate Bill 140 (SB 140), signed on June 20, 2025, by Gov. Greg Abbott and effective September 1, 2025, expands the Texas Business & Commerce Code’s regulation of a “telephone solicitation” to include text messages and other sales solicitation methods. Significantly, SB 140 requires businesses using telemarketing through text messages to register with the Texas secretary of state, pay a $200 fee, post a $10,000 security bond, and submit quarterly reports prior to making any telemarketing solicitations to prospective customers.

However, a key carve-out may be significant for many brands: the “customer” exemption. Under Section 302.058, Chapter 302’s requirements don’t apply when a business solicits a current or former customer and has operated under the same name for at least two years. The statute defines “purchaser” but not “customer,” leaving the ordinary meaning of “customer” to control.

Industry analysis noted that “customer” in everyday usage often includes people who patronize or otherwise deal with a business — not only those who have already completed a purchase. That reading supports treating opt-in subscribers (people who visited your store and consented to receive your promotional texts) as “customers,” even if they haven’t bought a product yet. If that interpretation holds, many established brands that text only to opted-in subscribers could fall within the exemption.

The motion seeking to block enforcement of SB 140 argued that it violates federal law. Specifically, the plaintiffs argued that the law is unconstitutional and places unreasonable burdens on businesses that send text messages to individuals who have consented to receive them.

The State’s Filing

According to the state’s filing opposing the plaintiffs’ motion for a preliminary injunction, SB 140 is about stopping unwanted, deceptive solicitations — especially spam texts sent without permission.

Two definitions matter here:

  • “Telephone call” – SB 140 tells us to use the meaning in Chapter 304. That chapter excludes transmissions that a mobile customer has agreed to receive as part of an ad-based service. In short, permission matters.
  • “Telephone solicitation” – Chapter 302 now says it’s “a call or other transmission,” and it expressly mentions texts and images.

The state argued to the court that when Chapter 302 uses the word “call,” it still refers to a “telephone call” — the same concept Chapter 304 defines. In other words, the change from “telephone call” to “call” in the solicitation definition did not expand the law to capture everything; it still ties back to a definition that carves out communications a customer has consented to.

Why does that matter? Because the state also emphasizes that Chapter 302’s purpose is to protect people from false, misleading, or deceptive telephone solicitations — not to punish businesses who send consented messages that consumers want. In fact, the state’s brief points out that the plaintiffs’ business model is consent-based texting — and says that is not a “deceptive practice.”

Put simply: If your program sends texts only to people who opted in, the state argues SB 140 is not aimed at you.

The state’s filing also offers interesting detail about who is, or in this case is not, enforcing the registration requirements.

  • Secretary of State (SoS) – The SoS administers registrations (accepts filings, keeps certificates, helps the public) but, per the state’s filing, does not investigate or enforce violations of SB 140 or Chapter 302. The office states it has not taken — and does not plan to take — enforcement actions.
  • Attorney General (AG) – The AG states it has discretionary authority to bring civil enforcement actions (for example, to seek an injunction) and to pursue civil penalties for violating an injunction. The AG also states it understands “call” in the statute to mean a “telephone call” as defined in Chapters 302 and 304.

“Discretionary” is significant here: The AG is telling the court there’s no mandatory duty to bring actions in every situation.

So, Do I Need to Register and Post a $10,000 Bond?

SB 140 folds texts into the “telephone solicitation” framework, and Chapter 302 generally requires registration and a $10,000 security (bond, letter of credit, or CD).

Taking the state’s filing at face value, three practical takeaways emerge:

  1. Consent is the bright line. The state is focused on stopping unwanted or deceptive text solicitations. Programs built on clear, documented opt-in consent do not appear to be the target the state describes to the court.
  2. “Call” tracks phone-based communications. The state reads “call” as a telephone call (which, under Chapter 304, includes certain text transmissions), with an express exclusion for ad-based transmissions the consumer agreed to receive. That reinforces the centrality of permission and how your messages are delivered.
  3. Registration may not apply to everyone. If your audience qualifies as current or former customers and you’ve been operating under the same name for two years, the Chapter 302 registration and bond requirements may not apply — particularly for consent-only programs (a rough screening sketch follows this list).
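The factors in these takeaways can be captured in a short screening check for internal triage. The sketch below is an illustration of the analysis described above, not legal advice; the `TextProgram` fields and the conservative “when in doubt, flag it” logic are assumptions.

```python
from dataclasses import dataclass

@dataclass
class TextProgram:
    consent_only: bool              # every recipient opted in, with documentation
    recipients_are_customers: bool  # current or former customers in the ordinary sense
    same_name_years: float          # years operating under the same business name

def registration_likely_required(p: TextProgram) -> bool:
    """Rough screen of whether Chapter 302 registration/bond may still be in play.

    Mirrors the factors in the takeaways above: the Section 302.058 customer
    exemption (customers plus two years under the same name) and the state's
    consent-focused reading. A True result means "talk to counsel," not that
    registration is definitively required.
    """
    customer_exemption = p.recipients_are_customers and p.same_name_years >= 2
    if customer_exemption:
        return False
    # Outside the exemption, a purely consent-based program may still fall
    # outside the state's stated enforcement focus, but that is an argument,
    # not a safe harbor -- flag it for review.
    return True
```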

Action Items for Brands and Platforms

Even with the state’s positions, SB 140 is still a live issue — and the court hasn’t ruled on the preliminary injunction yet. Here’s how to stay ready:

  • Audit your consent. Confirm your opt-in flows are clear and documented (who opted in, when, how, what disclosures they saw). This supports both the consent-based exclusion and any “customer” argument. (The state’s brief itself acknowledges the plaintiffs send texts only to consumers who want them.)
  • Check the two-year rule. If you’ve operated under the same name for two years, the “customer” exemption may help. If you’re newer or recently rebranded, consult with counsel about whether registration is advisable while the case proceeds.
  • Map your sending framework. Determine whether your delivery path fits within Chapter 304’s definition and exclusions (e.g., transmissions the user agreed to receive as part of a service), as highlighted by the state.
  • Monitor enforcement signals. The SoS says it isn’t enforcing. The AG says enforcement is discretionary and frames consent-based messaging as non-deceptive. Still, maintain robust compliance practices.
  • Private right of action. It remains an open question whether courts will recognize a consumer’s ability to sue a business directly under SB 140 for failure to register. No court has squarely decided that issue yet. However, the state’s brief narrows the statute’s focus to deceptive, non-consented solicitations and emphasizes discretionary public enforcement. That framing makes it harder for plaintiffs to argue for a broad private cause of action against consent-based text programs. Until a court rules, the question is unresolved, but the state’s position pushes against the expansive reading plaintiffs would need.
  • Stay tuned for the ruling. The court will decide whether to issue preliminary relief; until then, keep compliance tight and documentation complete.

As always, facts matter. If you’re unsure whether your program qualifies for exemptions, or whether your consent path fits within the state’s framing, seek tailored advice. But for now, if you’re only texting people who asked to hear from you — and you can prove it — you’re much closer to “compliant” than a “cautionary tale.”


A new Mississippi law, known as the Walker Montgomery Protecting Children Online Act, has prompted several companies to block Mississippi IP addresses from accessing their platforms. In fact, social media company Bluesky posted a response to the enactment of the law on its website. Bluesky explained the decision to make their app unavailable to Mississippi residents, stating:

Mississippi’s HB1126 requires platforms to implement age verification for all users before they can access services like Bluesky. That means, under the law, we would need to verify every user’s age and obtain parental consent for anyone under 18. The potential penalties for non-compliance are substantial — up to $10,000 per user. Building the required verification systems, parental consent workflows, and compliance infrastructure would require significant resources that our small team is currently unable to spare as we invest in developing safety tools and features for our global community, particularly given the law’s broad scope and privacy implications.

Bluesky’s decision to block certain users came after the U.S. Supreme Court permitted the Mississippi act to take effect while First Amendment challenges to the law are pending. The Mississippi act requires any website or app that allows users to post content, create a profile, and interact with other users to verify the age of each user, no matter the type of content. The law provides hefty penalties up to $10,000 per user for failure to comply and permits parents and guardians to bring legal actions.

In its company statement explaining its decision, Bluesky highlighted the substantial financial burdens required to comply with broad age-verification laws. Bluesky specifically cited the “substantial infrastructure and developer time investments, complex privacy protections and ongoing compliance monitoring” required by such laws, noting that these costs can easily push smaller providers out of the market. The same is true for small companies that never intended to be social media platforms but nonetheless fall within the coverage of the statute.

The uncertainty about what verification efforts qualify as “commercially reasonable” further complicates compliance. The statute specifically requires the platform or application provider to verify the age of every user “with a level of certainty appropriate to the risks that arise from the information management practices” of the provider. Are companies required to store state-issued IDs or use fingerprint or facial recognition? Is use of AI permissible to determine age? What is clear is that whichever method is utilized, the costs and risks associated with storing users’ private information will increase.

While the Mississippi act continues to face legal challenges, the push to protect children online, including more stringent parental consent and age-gating requirements, is not going away. State legislatures around the country are enacting similar laws, and the U.S. Supreme Court has signaled that it will not stand in the way of enforcement while legal challenges are pending.

For more information and other updates regarding privacy law developments, subscribe to Bradley’s privacy blog, Online and On Point, or reach out to one of our authors.


July 1 marked the official enforcement date of the Tennessee Information Protection Act (TIPA), the state’s comprehensive consumer privacy law. Signed into law in 2023, TIPA grants consumers specific rights concerning their personal information and regulates covered businesses and service providers that collect, use, share, or otherwise process consumers’ personal information. With all TIPA provisions now enforceable, it is important for regulated companies to understand the law’s comprehensive requirements.

Covered Businesses and Organizations

TIPA regulates entities that conduct business in Tennessee or produce products or services targeted to Tennessee residents, exceed $25 million in revenue, and meet at least one of the following criteria:

  • Control or process information of 25,000 or more Tennessee consumers per year and derive more than 50% of gross revenue from the sale of personal information; or
  • Control or process information of at least 175,000 Tennessee consumers during a calendar year.

Consumer Rights

TIPA grants consumers (Tennessee residents acting in a personal context only) the rights to confirm, access, correct, delete, or obtain a copy of their personal information, or opt out of specific uses of their data (such as selling data to third parties, using data for targeted advertising, or profiling consumers in certain instances). Companies must respond to authenticated consumer requests within 45 days, with a possible 45-day extension, and they must establish an appeal process for request denials. Controllers, which TIPA defines as companies that (alone or jointly) determine the purpose and means of processing personal information, must also offer a secure and reliable means for consumers to exercise their rights without requiring consumers to create a new account.
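Operationally, the 45-day response clock and single 45-day extension translate into straightforward deadline tracking. A minimal sketch, with hypothetical field names, might look like this:

```python
from dataclasses import dataclass
from datetime import date, timedelta

RESPONSE_WINDOW = timedelta(days=45)   # initial response period under TIPA
EXTENSION_WINDOW = timedelta(days=45)  # one permissible extension

@dataclass
class ConsumerRequest:
    request_type: str       # e.g., "access", "delete", "correct", "opt_out"
    received: date
    extended: bool = False  # whether the company invoked the extension

    def due_date(self) -> date:
        """Deadline for responding to an authenticated consumer request."""
        deadline = self.received + RESPONSE_WINDOW
        if self.extended:
            deadline += EXTENSION_WINDOW
        return deadline

# Example: a deletion request received July 1 is due August 15,
# or September 29 if the extension is invoked and the consumer is notified.
req = ConsumerRequest("delete", date(2025, 7, 1))
print(req.due_date())                                                          # 2025-08-15
print(ConsumerRequest("delete", date(2025, 7, 1), extended=True).due_date())   # 2025-09-29
```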

Company Responsibilities

Companies must limit data collection and processing to what is necessary, maintain appropriate data security practices, and avoid discrimination. Companies must provide a clear and accessible privacy notice detailing their practices, and, if selling personal information or using it for targeted advertising, disclose these practices and provide an opt-out option.

Opt-In for Sensitive Personal Information

TIPA prohibits processing sensitive personal information without first obtaining informed consent. Sensitive personal information is defined broadly and includes any personal information that reveals a consumer’s racial or ethnic origin, religious beliefs, mental or physical health diagnosis, sexual orientation, or citizenship or immigration status. Sensitive information also includes any data collected from a known child younger than age 13, precise geolocation data (i.e., within a 1,750-foot radius), and the processing of genetic or biometric data for the purposes of identifying an individual.

Controller-Processor Requirements

Processors must adhere to companies’ instructions and assist them in meeting their obligations, including responding to consumer rights requests and providing necessary information for data protection assessments. Contracts between companies and processors must outline data processing procedures, including confidentiality, data deletion or return, compliance demonstration, assessments, and subcontractor engagement. The determination of whether a person is acting as a company or processor depends on the context and specific processing of personal information.

Data Protection Assessments

Companies must conduct and document data protection assessments for specific data processing activities involving personal information. These assessments must weigh the benefits and risks of processing, with certain factors considered. Assessments apply to processing of personal information created or generated on or after July 1, 2024. If obtained by the Tennessee attorney general in an investigation, assessments are treated as confidential and exempt from public disclosure, and producing them does not constitute a waiver of attorney-client privilege or work product protection.

Major Similarities to CCPA

TIPA shares many similarities with the California Consumer Privacy Act (CCPA), including:

  • Similar consumer rights;
  • Contractual requirements between controllers and processors; and
  • Requiring data protection assessments for certain processing activities.

Affirmative Defense

TIPA provides an “affirmative defense” against alleged violations for companies that maintain and comply with a written privacy program that reasonably conforms to the NIST Privacy Framework or comparable standards. The privacy program’s scale and scope must be appropriate based on factors such as business size, activities, personal information sensitivity, available tools, and compliance with other laws. In addition, certifications under the Asia-Pacific Economic Cooperation’s Cross-Border Privacy Rules and Privacy Recognition for Processors systems may be considered in evaluating the program.

Enforcement

The Tennessee attorney general retains exclusive enforcement authority, and TIPA expressly states that there is no private right of action. The Tennessee attorney general must provide 60 days’ written notice and an opportunity to cure before initiating enforcement action. If the alleged violations are not cured, the Tennessee attorney general may file an action and seek declaratory and/or injunctive relief, civil penalties up to $7,500 for each violation, reasonable attorneys’ fees and investigative costs, and treble damages in the case of a willful or knowing violation.

Exemptions

The law includes numerous exemptions, including:

  • Government entities;
  • Financial institutions, their affiliates, and data subject to the Gramm-Leach-Bliley Act (GLBA);
  • Insurance companies;
  • Covered entities, business associates, and protected health information governed by the Health Insurance Portability and Accountability Act (HIPAA) and/or the Health Information Technology for Economic and Clinical Health Act (HITECH);
  • Nonprofit organizations;
  • Higher education institutions; and
  • Personal information that is subject to other laws, such as the Children’s Online Privacy Protection Act (COPPA), the Family Educational Rights and Privacy Act (FERPA), and the Fair Credit Reporting Act (FCRA).

TIPA is just one of seven state privacy laws slated to go into effect this year. With three more laws taking effect next year, companies should determine whether laws such as TIPA apply to them and take steps to comply now that the law is enforceable.


As FTC Chair Andrew Ferguson establishes his enforcement priorities, his positions on data categorization and surveillance pricing reveal a consistent philosophy that balances privacy protection with innovation. This is the third post in our series on what to expect from the FTC under Ferguson as chair.

Our previous posts examined Ferguson’s broad regulatory philosophy of “staying in our lane” and his priority enforcement areas in children’s privacy and location data. This post explores Ferguson’s approach to emerging privacy issues that don’t fit neatly into established legal frameworks.

Skepticism of “Sensitive Categories” Designation

Ferguson has expressed significant skepticism about the FTC designating certain categories of data as inherently “sensitive” without clear statutory basis. In his September 2024 statement on the Social Media and Video Streaming Services Report, Ferguson criticized this approach:

“I am skeptical that this is the kind of injury the law should try to address… I doubt it could. Any such line would tend toward arbitrariness and is not a stable system on which to decide whether advertisements are illegal.”

Ferguson’s critique reflects his broader concern that creating subjective lists of “sensitive” data categories raises several problems:

  1. Arbitrary line-drawing – Determining which categories qualify as “sensitive” is inherently subjective and potentially politicized.
  2. Lack of statutory basis – Section 5 does not provide clear guidance on which categories of data should receive special protection.
  3. Inconsistent application – When regulators decide which categories deserve protection, the resulting lists may reflect the decision-makers’ preferences rather than objective criteria.

Ferguson’s December 2024 concurrence in the Mobilewalla case provides the clearest view of his position on sensitive data categorization, where he wrote: “The FTC Act does not limit how someone who lawfully acquired those data might choose to analyze those data, or the conclusions that one might draw from them.” This reveals a fundamental distinction in his approach: While he believes the initial collection of sensitive data without consent may violate Section 5, he is skeptical that the FTC can regulate how lawfully obtained data is subsequently categorized or analyzed.

Ferguson’s analogy to private investigators is particularly telling: Just as investigators may legally observe someone entering a church and conclude they practice that religion, Ferguson believes that drawing conclusions from lawfully collected data is not, in itself, a Section 5 violation.

Surveillance Pricing: Fact-Finding Over Speculation

Ferguson has demonstrated a measured approach to emerging data practices like surveillance pricing — the use of consumer data to set personalized prices. In July 2024, he supported the FTC’s 6(b) study into these practices, explaining:

“One of the most important duties with which Congress has entrusted us is studying markets and industries and reporting to the public and Congress what we learn… These studies may inform future Commission enforcement actions, but they need not.”

His statement emphasized the importance of thorough fact-finding before developing policy positions, noting:

“Congress and the American people should be made aware of whether and how consumers’ private data may be used to affect their pocketbooks.”

However, in January 2025, Ferguson joined Commissioner Melissa Holyoak in dissenting from the release of preliminary “research summaries” on surveillance pricing. His dissent criticized the rushed release of early findings:

“Issuing these research summaries degrades the Commission’s Section 6(b) process. The Commission should not be releasing staff’s early impressions that ‘can be outdated with new information’ because the fact gathering process on the very issues being presented to the public is still underway.”

This suggests a commitment by Ferguson to thorough investigation of privacy issues before regulation, particularly with emerging practices that implicate consumer data.

Balancing Evidence and Action

Ferguson’s approach to both sensitive data categories and surveillance pricing illustrates his broader privacy philosophy:

  1. Demand robust evidence – Before taking regulatory action on privacy practices, Ferguson wants complete factual records that demonstrate actual harm.
  2. Favor established laws over novel theories – His skepticism of “sensitive categories” shows preference for established legal frameworks rather than expanding statutory interpretations.
  3. Emphasize procedural integrity – His objection to preliminary research summaries reveals concern with fair, thorough processes before reaching conclusions about data practices.

Ferguson appears to maintain a genuine openness to evidence that might show consumer benefits from practices such as data categorization or personalized pricing. His insistence on completing thorough market studies reflects not just procedural formalism but a substantive commitment to evidence-based regulation that considers both potential harms and benefits.

What This Means for Businesses

Based on Ferguson’s positions, here are some considerations for businesses:

For Data Categorization:

  • Focus on consent mechanisms for data collection rather than worrying about how lawfully collected data is analyzed.
  • Document legitimate business purposes for data analysis.
  • Keep watch for potential future legislation that might specifically designate certain data categories for special protection.
  • Distinguish clearly between initial data collection practices (which face greater scrutiny) and subsequent analysis of lawfully collected data (which faces less scrutiny).

For Surveillance Pricing and Similar Practices:

  • Expect continued scrutiny of personalized pricing practices, but through careful study rather than immediate regulation.
  • Maintain transparency about how customer data influences pricing.
  • Document how pricing algorithms use personal data.
  • Consider implementing clear opt-out mechanisms for data-based pricing.
  • Document instances where personalized pricing benefits consumers through lower prices or increased access, as Ferguson’s evidence-based approach may be receptive to such benefits.

Evolution Rather Than Revolution

Ferguson’s approach suggests the FTC under his leadership will maintain strong privacy enforcement but with a focus on clear statutory violations rather than expanding interpretations of unfairness. For data categorization and surveillance pricing, this means:

  1. Continued fact-finding – The commission will likely invest in thorough market studies before developing policy positions.
  2. Focus on deception over unfairness – Companies making false or misleading claims about data practices will face scrutiny, while novel “unfairness” theories will receive more skepticism.
  3. Emphasis on consent and transparency – Proper notice, consent, and transparency will remain central to the FTC’s privacy enforcement.

This approach represents evolution rather than revolution in the commission’s privacy work, with a measured path that balances consumer protection with business certainty and technological innovation.


While Andrew Ferguson advocates for a restrained regulatory approach at the FTC, his statements and voting record reveal clear priority areas where businesses can expect continued vigorous enforcement. Two areas stand out in particular: children’s privacy and location data. This is the second post in our series on what to expect from the FTC under Ferguson as chair.

Our previous post examined Ferguson’s broad regulatory philosophy centered on “Staying in Our Lane.” This post focuses specifically on the two areas where Ferguson has shown the strongest commitment to vigorous enforcement, explaining how these areas are exceptions to his generally cautious approach to extending FTC authority.

Prioritizing Children’s Privacy

Ferguson has demonstrated strong support for protecting children’s online privacy. In his January 2025 concurrence on COPPA Rule amendments, he supported the amendments as “the culmination of a bipartisan effort initiated when President Trump was last in office.” However, he also identified specific problems with the final rule, including:

  • Provisions that might inadvertently lock companies into existing third-party vendors, potentially harming competition;
  • A new requirement prohibiting indefinite data retention that could have unintended consequences, such as deleting childhood digital records that adults might value; and
  • Missed opportunities to clarify that the rule doesn’t obstruct the use of children’s personal information solely for age verification.

Ferguson’s enforcement record as commissioner reveals his belief that children’s privacy represents a “settled consensus” area where the commission should exercise its full enforcement authority. In the Cognosphere (Genshin Impact) settlement from January 2025, Ferguson made clear that COPPA violations alone were sufficient to justify his support for the case, writing that “these alleged violations of COPPA are severe enough to justify my voting to file the complaint and settlement even though I dissent from three of the remaining four counts.”

In his statement on the Social Media and Video Streaming Services Report from September 2024, Ferguson argued for empowering parents:

“Congress should empower parents to assert direct control over their children’s online activities and the personal data those activities generate… Parents should have the right to see what their children are sending and receiving on a service, as well as to prohibit their children from using it altogether.”

The FTC’s long history of COPPA enforcement across multiple administrations means businesses should expect continued aggressive action in this area under Ferguson. His statements suggest he sees children’s privacy as uniquely important, perhaps because children cannot meaningfully consent to data collection and because Congress has provided explicit statutory authority through COPPA, aligning with his preference for clear legislative mandates.

Location Data: A Clear Focus Area

Ferguson has shown particular concern about precise location data, which he views as inherently revealing of private details about people’s lives. In his December 2024 concurrence on the Mobilewalla case, he supported holding companies accountable for:

“The sale of precise location data linked to individuals without adequate consent or anonymization,” noting that “this type of data—records of a person’s precise physical locations—is inherently intrusive and revealing of people’s most private affairs.”

The FTC’s actions against location data companies signal that this will remain a priority enforcement area. Although Ferguson concurred in the complaints in the Mobilewalla case, he took a nuanced position. He supported charges related to selling precise location data without sufficient anonymization and without verifying consumer consent. However, he dissented from counts alleging unfair practices in categorizing consumers based on sensitive characteristics, arguing that “the FTC Act imposes consent requirements in certain circumstances. It does not limit how someone who lawfully acquired those data might choose to analyze those data.”

What This Means for Businesses

Companies should pay special attention to these two priority areas in their compliance efforts:

For Children’s Privacy:

  • Revisit COPPA compliance if your service may attract children
  • Review age verification mechanisms and parental consent processes
  • Implement data minimization practices for child users
  • Consider broader parental control features

For Location Data:

  • Implement clear consent mechanisms specifically for location tracking
  • Consider anonymization techniques for location information (see the coarsening sketch after this list)
  • Document processes for verifying consumer consent for location data
  • Be cautious about tying location data to individual identifiers
  • Implement and document reasonable retention periods for location data
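As a concrete example of the anonymization bullet referenced above, one common technique is coarsening coordinates before storage so that retained records no longer pinpoint an individual’s movements. The sketch below simply truncates latitude/longitude precision; the two-decimal choice (roughly 1 km) is an assumption, and whether coarsened data satisfies a given consent or anonymization standard is a legal question rather than a purely technical one.

```python
def coarsen(lat: float, lon: float, decimals: int = 2) -> tuple[float, float]:
    """Reduce coordinate precision before storing or sharing location data.

    Two decimal places is roughly neighborhood-level (~1 km); the right
    granularity depends on the use case and the applicable legal standard.
    """
    return round(lat, decimals), round(lon, decimals)

# Example: a precise fix becomes a coarse area before it is retained.
precise = (33.5205991, -86.8024326)
print(coarsen(*precise))   # (33.52, -86.8)
```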

While Ferguson may be more cautious about expanding the FTC’s regulatory reach in new directions, these established priority areas will likely see continued robust enforcement under his leadership. Companies should ensure their practices in these sensitive domains align with existing legal requirements.


The Telephone Consumer Protection Act (TCPA) continues to be a major source of litigation risk for businesses engaged in outbound marketing. In the first quarter of 2025, litigation under the TCPA surged dramatically, with 507 class action lawsuits filed — more than double the volume compared to the same period in 2024. This steep rise reflects shifting enforcement patterns and a growing emphasis on consumer communications practices. Companies should be aware of several emerging trends and evolving interpretations that are shaping the compliance environment.

TCPA Class Action Trends

In the first quarter of 2025, 507 TCPA class actions were filed, representing a 112% increase compared to the same period in 2024. April filings also reflected continued growth, indicating a sustained trend.

Key statistics:

  • Approximately 80% of current TCPA lawsuits are class actions.
  • By contrast, only 2%-5% of lawsuits under other consumer protection statutes, such as the Fair Debt Collection Practices Act (FDCPA) or the Fair Credit Reporting Act (FCRA), are filed as class actions.

This trend highlights the unique procedural and financial exposure associated with TCPA compliance.

Time-of-Day Allegations on the Rise

There has been an uptick in lawsuits alleging that companies are contacting consumers outside of the TCPA’s permitted calling hours — before 8 a.m. or after 9 p.m. local time. In March 2025 alone, a South Florida firm filed over 100 lawsuits alleging violations of these timing restrictions, many of which involved text messages.
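For texting platforms, the timing restriction is ultimately an engineering check: evaluate the send time in the recipient’s local time zone, not the sender’s. A minimal sketch is below; it assumes the recipient’s time zone is already known (in practice it often must be inferred from area code or address, which carries its own compliance risk), and it treats 9:00 p.m. itself as out of bounds to stay conservative.

```python
from datetime import datetime, time, timezone
from zoneinfo import ZoneInfo

PERMITTED_START = time(8, 0)   # no solicitations before 8 a.m. recipient local time
PERMITTED_END = time(21, 0)    # or after 9 p.m. recipient local time

def within_permitted_hours(send_time_utc: datetime, recipient_tz: str) -> bool:
    """Return True if a solicitation would land inside the 8 a.m.-9 p.m. local window."""
    local = send_time_utc.astimezone(ZoneInfo(recipient_tz))
    # Conservative reading: exactly 9:00 p.m. is treated as outside the window.
    return PERMITTED_START <= local.time() < PERMITTED_END

# Example: the same UTC send time is too late for a Chicago recipient
# but still permitted for one in Los Angeles.
stamp = datetime(2025, 7, 2, 2, 30, tzinfo=timezone.utc)
print(within_permitted_hours(stamp, "America/Chicago"))      # False (9:30 p.m. local)
print(within_permitted_hours(stamp, "America/Los_Angeles"))  # True  (7:30 p.m. local)
```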

Under the TCPA, telephone solicitations are not permitted during restricted hours, unless:

  • The consumer has given prior express permission;
  • There is an established business relationship; or
  • The call is made by or on behalf of a tax-exempt nonprofit organization.

It is currently unclear whether these exemptions definitively apply to time-of-day violations. A petition filed with the FCC in March 2025 seeks clarification on whether prior express consent precludes liability for messages sent during restricted hours. The FCC accepted the petition and opened a public comment period that closed in April.

Drivers of Increased Litigation

Several factors appear to be contributing to the rise in TCPA filings:

  • An increase in plaintiff firm activity and case volume;
  • Ongoing confusion regarding the interpretation of revocation rules; and
  • Continued complaints regarding telemarketing practices, including unwanted robocalls and text messages.

These dynamics reflect a broader trend of regulatory and private enforcement in the consumer protection space.

Compliance Considerations

Businesses should take steps to ensure their outbound communication practices are aligned with current TCPA requirements. This includes:

  • Documenting consumer consent clearly at the point of lead capture;
  • Ensuring systems adhere to permissible calling and texting times;
  • Reviewing policies and procedures for revocation of consent; and
  • Seeking guidance from counsel with experience in consumer protection laws.

Conclusion

The volume and nature of TCPA litigation in 2025 underscore the need for proactive compliance. Companies should treat consumer communication compliance as a core operational issue. Regular policy reviews, up-to-date systems, and informed legal support are essential to mitigating risk in this evolving area of law.