This summer, a proposed amendment to the Controlled Substances Act known as the Cooper Davis Act (the “act”) is making its way through Congress, causing growing dissension between parents, consumer safety advocates, and anti-drug coalitions on one hand, and the DEA, privacy experts, and constitutional scholars on the other.

As currently written, the act would require certain social media, email, and other electronic platforms and remote computing companies (the “service providers”) to report suspected violations of the Controlled Substances Act to the United States attorney general.

The act is named for Cooper Davis, a Kansas teen who died after ingesting half of a counterfeit prescription pain pill that he had allegedly purchased through Snapchat. Subsequent testing revealed that the pill contained a lethal dose of fentanyl. The act, introduced with bipartisan support, proposes to bolster the federal government’s ability to detect and prosecute illegal internet drug trafficking by holding social media, email, and other internet companies accountable for the activity conducted on their platforms.

The act’s main function is to impose a reporting obligation on the electronic service providers with respect to activity occurring on their platforms, if and to the extent they have knowledge of the activity. The act applies to any service that provides users with the ability to send or receive wire or electronic communications, and/or computer storage or processing services (18 USC § 2258E). These definitions seem to sweep every internet-based company into the act’s purview. However, the impact of the act hinges not on who the act captures, but rather, what duty these companies have and how this duty will be exercised.

The act, as proposed, targets “the unlawful sale or distribution of fentanyl, methamphetamine, or the unlawful sale, distribution, or manufacture of a counterfeit controlled substance” by imposing reporting requirements on service providers. A service provider must report unlawful sales when: (1) it obtains actual knowledge of any facts or circumstances of an unlawful sale as defined above; or (2) a user of the service provider alleges an unlawful sale and the service provider, upon review, reasonably believes that the alleged facts or circumstances constituting an unlawful sale exist. A service provider also may report unlawful circumstances: (1) after obtaining actual knowledge of any facts or circumstances indicating that unlawful activity may be imminent; or (2) if the service provider reasonably believes that any facts or circumstances of unlawful activity exist.

A service provider’s actual knowledge of the unlawful activities allows (and in some situations requires) the service provider to report information about the individual using the internet platform for unlawful purposes, including the individual’s geographic location, information relating to how and when the unlawful activity was discovered by the service provider, data relating to the violation, and the complete communication containing the intent to commit a violation of the act. There are penalties for a service provider’s failure to report: a service provider that knowingly and willfully fails to make a required report will be fined no more than $190,000 in the case of an initial knowing and willful failure to make a report, and no more than $380,000 in the case of any second or subsequent knowing and willful failure to make a report.

In this way, the act captures the companies and conduct necessary to provide greater protection to consumers, including minors like Cooper Davis. However, by creating the duty to report, the act requires service providers to serve as surveillance agents for the U.S. Department of Justice. Without further clarification or rulemaking, service providers will be left to determine, on their own and without a consistent industry standard, what constitutes actual knowledge of unlawful activity, and in what instance (if ever) knowledge will be imputed to a service provider based on evidence contained on its platform. The structure of the act was heavily debated in the Full Committee Executive Business Meeting that took place on July 13, 2023, and for good reason. At its worst, the act was described as “deputizing” tech companies to serve as law enforcement, without warrants or other procedures in place to protect citizens or prevent unnecessary disclosure of a user’s private information. Alternatively, consumer safety advocates may argue that the act does not go far enough, and is unnecessarily favorable to service providers at almost every turn. For example, the trigger for a mandatory report is actual knowledge on the part of the service provider, not strict liability or the mere occurrence of unlawful activity on the platform.

Further, the monetary amount of any penalty for failing to report is minimal compared to the earnings reported by many of the tech industry giants who fall within the definition of a service provider.

From a compliance perspective, companies that fall within the definition of electronic communication service providers and remote computing services should be aware that the Cooper Davis Act could become law and impose additional reporting requirements. Practically, however, companies maintain substantial autonomy in crafting the policies to both identify and provide adequate reports of unlawful activity under the act. As with other amendments to the Controlled Substances Act, the language as written leaves its application unpredictable, and enforcement action is often the most practical way to discern the contours of the amendment. So, the impact of the act, and how companies can prepare for it, remain to be seen. The act’s good intentions but unsteady enforcement mechanisms are reminiscent of the Ryan Haight Act, another law enacted to keep teens safe from controlled substances on the internet. The Ryan Haight Act likewise has yet to be applied in a predictable manner following the COVID-19 public health emergency.

The act is a significant step toward protecting the public from controlled substance distribution via the internet. However, much is left to be worked out regarding the means, scope, and constitutionality of law enforcement’s surveillance of online activity in our increasingly digital world.

Machine learning (ML) models are a cornerstone of modern technology, enabling systems to learn from and make predictions based on vast amounts of data. These models have become integral to various industries in an era of rapid technological innovation, driving unprecedented advancements in automation, decision-making, and predictive analysis. The reliance on large amounts of data, however, raises significant concerns about privacy and data security. While the benefits of ML are manifold, they are not without accompanying challenges, particularly in relation to privacy risks. The intersection of ML with privacy laws and ethical considerations forms a complex legal landscape ripe for exploration and scrutiny. This article will explore privacy risks associated with ML, privacy in the context of California’s privacy legislation, and countermeasures to these risks.

Privacy Attacks on ML Models

There are several distinct types of attacks on ML models, four of which target the privacy of protected information.

  1. Model Inversion Attacks constitute a sophisticated privacy intrusion where an attacker endeavors to reconstruct original input data by reverse-engineering a model’s output. A practical illustration might include an online service recommending films based on previous viewing habits. Through this method, an attacker could deduce an individual’s past movie choices, uncovering private information such as race, religion, nationality, and gender. This type of information can be used to perpetrate social engineering schemes, i.e., the use of known information to build sham trust and ultimately extract sensitive data from an individual. In other contexts, such an attack on more sensitive targets can lead to substantial privacy breaches, exposing information such as medical records, financial details, or personal preferences. This exposure underscores the importance of robust safeguards and understanding the underlying ML mechanisms.
  2. Membership Inference Attacks involve attackers discerning whether an individual’s personal information was utilized in training a specific algorithm, such as a recommendation system or health diagnostic tool. An analogy might be drawn to an online shopping platform, where an attacker infers that a person was part of a customer group based on recommended products, thereby learning about shopping habits or even more intimate details. These types of attacks harbor significant privacy risks, extending across various domains like healthcare, finance, and social networks. The accessibility of Membership Inference Attacks, often not requiring intricate knowledge of the target model’s architecture or original training data, amplifies their threat. This reach reinforces the necessity for interdisciplinary collaboration and strategic legal planning to mitigate these risks.
  3. Reconstruction Attacks aim to retrieve the original training data by exploiting the model’s parameters. Imagine a machine learning model as a complex, adjustable machine that takes in data (like measurements, images, or text) and produces predictions or decisions. The parameters are the adjustable parts of this machine that are fine-tuned to make it work accurately. During training, the machine learning model adjusts these parameters so that it gets better at making predictions based on the data it is trained on. These parameters hold specific information about the data and the relationships within the data. A Reconstruction Attack exploits these parameters by analyzing them to work backward and figure out the original training data. Essentially, the attacker studies the settings of the machine (parameters) and uses them to reverse-engineer the data that was used to set those parameters in the first place.
    For instance, in healthcare, ML models are trained on sensitive patient data, including medical histories and diagnoses. These models fine-tune internal settings or parameters, creating a condensed data representation. A Reconstruction Attack occurs when an attacker gains unauthorized access to these parameters and reverse-engineers them to deduce the original training data. If successful, this could expose highly sensitive information, such as confidential medical conditions.
  4. Attribute Inference Attacks constitute attempts to guess or deduce specific private attributes, such as age, income, or health conditions, by analyzing related information. Consider, for example, a fitness application that monitors exercise and diet. An attacker employing this method might infer private health information by analyzing this data. Such attacks have the potential to unearth personal details that many would prefer to remain confidential. The ramifications extend beyond privacy, with potential consequences including discrimination or bias. The potential impact on individual rights and the associated legal complexities emphasizes the need for comprehensive legal frameworks and technological safeguards.
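
To make the intuition behind these attacks concrete, the toy sketch below simulates the core of a Membership Inference Attack: it trains a deliberately simple classifier on synthetic data and compares the model’s confidence on training members versus unseen outsiders, which is the statistical gap such attacks exploit. All data and model details are hypothetical; real attacks target large, overfit models where the member/non-member gap is far wider.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "private" dataset: two overlapping 2-D clusters.
def make_data(n):
    X = np.vstack([rng.normal(-1, 1.2, (n, 2)), rng.normal(1, 1.2, (n, 2))])
    y = np.concatenate([np.zeros(n), np.ones(n)])
    return X, y

X_members, y_members = make_data(200)      # used to train the model
X_outsiders, y_outsiders = make_data(200)  # never seen by the model

# Train a tiny logistic-regression model by gradient descent.
w, b = np.zeros(2), 0.0
for _ in range(2000):
    p = 1 / (1 + np.exp(-(X_members @ w + b)))
    grad = p - y_members
    w -= 0.1 * (X_members.T @ grad) / len(y_members)
    b -= 0.1 * grad.mean()

def confidence(X, y):
    """Probability the model assigns to each point's true label."""
    p = 1 / (1 + np.exp(-(X @ w + b)))
    return np.where(y == 1, p, 1 - p)

# The attacker's signal: training members tend to receive higher
# confidence than outsiders. With this deliberately simple model the
# gap is small; large, overfit models leak far more.
print(confidence(X_members, y_members).mean())
print(confidence(X_outsiders, y_outsiders).mean())
```

An attacker observing only these confidence scores, with no access to the training data itself, can guess whether a given individual was in the training set, which is precisely the privacy harm described above.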

ML Privacy under California Privacy Laws

Organizations hit by attacks targeting ML models, like the ones described, could find themselves directly violating California laws concerning consumer data privacy. The California Consumer Privacy Act (CCPA) enshrines the right of consumers to request and obtain detailed information regarding the personal data collected and processed by a business entity. This fundamental right, however, is not without potential vulnerabilities. Particularly, Model Inversion Attacks, which reverse-engineer personal data, pose a tangible risk. By enabling unauthorized access to such information, these attacks may impede or compromise the exercise of this essential right. The CCPA further affords consumers the right to request the deletion of personal information, mandating businesses to comply with such requests. Membership Inference Attacks can reveal the inclusion of specific data within training sets, potentially undermining this right. The exposure of previously deleted data could conflict with the statutory obligations under the CCPA. To safeguard consumers’ personal information, the CCPA also obligates businesses to implement reasonable security measures. Successful attacks on ML models, such as those previously described, might be construed as a failure to fulfill this obligation. Such breaches could precipitate non-compliance, attracting potential legal liabilities.

The California Privacy Rights Act (CPRA) amends the CCPA and introduces rigorous protections for Sensitive Personal Information (SPI). This category encompasses specific personal attributes, including, but not limited to, financial data, health information, and precise geolocation. Attribute Inference Attacks, through the unauthorized disclosure of sensitive attributes, may constitute a direct contravention of these provisions, signifying a significant legal breach. Focusing on transparency, the CPRA sheds light on automated decision-making processes, insisting on clarity and openness. Unauthorized inferences stemming from various attacks could undermine this transparency, thereby impacting consumers’ legal rights to comprehend the underlying logic and implications of decisions that bear upon them. Emphasizing responsible data stewardship, the CPRA enforces data minimization and purpose limitation principles. Attacks that reveal or infer personal information can transgress these principles, manifesting potential excesses in data collection and utilization beyond the clearly stated purposes by exposing data that is not relevant for the intended purposes of the models. For example, an attacker could use a model inversion attack to reconstruct the face image of a user from their name, which is not needed for the facial recognition model to function. Moreover, an attacker could use an attribute inference attack to infer the political orientation or sexual preference of a user from their movie ratings, information the user never stated or agreed to share when using the movie recommendation model.

Mitigating ML Privacy Risk

Considering California privacy laws, as well as other state privacy laws, legal departments within organizations must develop comprehensive and adaptable strategies. These must encompass clear and enforceable agreements with third-party vendors, establish internal policies reflecting state law mandates, and include data protection impact assessments and actionable incident response plans to mitigate potential breaches. Continuous monitoring of evolving legal landscapes at the state and federal level ensures alignment with existing obligations and prepares organizations for future legal developments.

The criticality of technological defenses cannot be overstated. Implementing safeguards such as advanced encryption, stringent access controls, and other measures forms a robust shield against privacy attacks and legal liabilities. More broadly, the intricacies of complying with the CCPA and CPRA require an in-depth understanding of technological functionalities and legal stipulations. A cohesive collaboration among legal and technical experts and other stakeholders, such as business leaders, data scientists, privacy officers, and consumers, is essential to marry legal wisdom to technological and practical acumen. Interdisciplinary dialogue ensures that legal professionals comprehend the technological foundations and practical use case of ML while technologists grasp the legal parameters and ethical considerations embedded in the CCPA and CPRA.

Staying ahead of technological advancements and legal amendments requires constant vigilance. The CPRA’s emphasis on transparency and consumer rights underscores the importance of effective collaboration, adherence to industry best practices, regular risk assessments, and transparent engagement with regulators and stakeholders, as well as other principles that govern artificial intelligence, such as accountability, fairness, accuracy, and security. Organizations should adopt privacy-by-design and privacy-by-default approaches that embed privacy protections into the design and operation of ML models.
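
One concrete privacy-by-design safeguard is differential privacy, which adds calibrated noise to released statistics so that no individual record can be reliably inferred from the output. The sketch below is a minimal illustration with hypothetical data (the function name and parameter choices are our own); it implements the standard Laplace mechanism for a bounded mean:

```python
import numpy as np

rng = np.random.default_rng(42)

def laplace_mean(values, lower, upper, epsilon):
    """Release a differentially private mean of bounded values.

    Any single record can shift the mean by at most
    (upper - lower) / n, so Laplace noise with scale
    sensitivity / epsilon masks each individual's contribution.
    """
    values = np.clip(values, lower, upper)
    sensitivity = (upper - lower) / len(values)
    noise = rng.laplace(0.0, sensitivity / epsilon)
    return values.mean() + noise

# Hypothetical sensitive attribute: ages of 1,000 users.
ages = rng.integers(18, 90, size=1000)

true_mean = ages.mean()
dp_mean = laplace_mean(ages, 18, 90, epsilon=1.0)
print(round(true_mean, 2), round(dp_mean, 2))
```

The released value stays close to the true mean for aggregate purposes, while a smaller `epsilon` trades accuracy for stronger privacy, a knob that maps directly onto the data minimization principles discussed above.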

The Future of ML Privacy Risks

The intersection of technology and law, as encapsulated by privacy attacks on ML models, presents a vibrant and complex challenge. Navigating this terrain in the era of the CCPA and CPRA demands an integrated, meticulous approach, weaving together legal strategies, technological safeguards, and cross-disciplinary collaboration.

Organizations stand at the forefront of this evolving landscape, bearing the responsibility to safeguard individual privacy and uphold legal integrity. The path forward becomes navigable and principled by fostering a culture that embraces compliance, vigilance, and innovation and by aligning with the specific requirements of the CCPA and CPRA. The challenges are numerous and the stakes significant, yet with prudent judgment, persistent effort, and a steadfast dedication to moral values, triumph is not merely attainable; it becomes a collective duty and a communal achievement.

For more information and other updates and alerts regarding privacy law developments, subscribe to Bradley’s privacy blog Online and On Point.

There is a great YouTube video that has been circulating the internet for half a decade that reimagines a heuristically programmed algorithmic computer “HAL-9000” as Amazon Alexa in an iconic scene from “2001: A Space Odyssey.” In the clip, “Dave” asks Alexa to “open the pod bay doors,” to which Alexa responds with a series of misunderstood responses, from looking for “cod” recipes to playing The Doors on Spotify. The mock clip adds a bit of levity to an otherwise terrifying picture of AI depicted in the original scene.

The field of artificial intelligence has rapidly evolved since the 1968 film that depicted HAL. Today, artificial intelligence is embedded in many of our day-to-day activities (such as Alexa). As artificial intelligence continues to grow, more companies are looking to create policies and procedures to govern the use of this technology.

As we embark on this new “odyssey,” what makes a smart, thoughtful, and well-reasoned artificial intelligence policy? With new AI terms such as “bootstrap aggregating” and “greedy algorithms,” how do companies ensure that policies are useful and understandable to all employees who need to be apprised of them?

Why Do Organizations Need an AI Policy?

As artificial intelligence seeps into our workday, the first step for organizations is to determine what legal and regulatory standards should be considered. For example, using sensitive and proprietary data can implicate data privacy and information security concerns or increase certain risks if AI is used in a particular way. In the same vein, the use of artificial intelligence can introduce bias, discrimination, or ethical considerations that should be taken into account in any comprehensive AI policy.

A thoughtful AI policy seeks to mitigate these risks while allowing the business to innovate and incorporate the latest technologies to maximize efficiency. A robust AI policy aims to stay abreast of the rapid pace of AI evolution while reducing the potential for regulatory or legal risks. Thus, an AI policy is no longer a futuristic concept but a strategic imperative, given the challenges associated with integrating AI into business operations.

What Type of AI Policy Do You Need?

Typically, we see three distinct types of AI policies: enterprise-level (corporate) policy, third-party or vendor-level policy, and product-level policy.

  • The enterprise-level (corporate) policy, or a set of guidelines and regulations, ensures an organization’s ethical, legal, and responsible use of AI technologies. An enterprise-level policy is essential when an organization heavily relies on AI across its operations and business units.
  • The third-party or vendor-level policy provides a framework for vetting and onboarding AI vendors. A vendor-level policy becomes increasingly crucial when an organization outsources AI solutions or integrates third-party AI into its workflow.
  • The product-level policy outlines applicable use criteria for specific types of AI or specific AI products. A product-level policy is essential when an organization offers distinct AI-powered products or services or uses specific AI tools with unique capabilities and risks.

Components of an Effective AI Policy

Creating an accessible and effective AI policy requires a multifaceted approach. This includes translating complex AI terms into plain language, developing user-friendly guides with visual aids, and offering tailored training and education for all staff levels.

Encouraging interdepartmental feedback and collaboration ensures the policy’s relevance and alignment with technological advances. Utilizing established frameworks such as the U.S. National Institute of Standards and Technology (NIST) AI Risk Management Framework (AI RMF) further aids in defining and communicating policies, making complex AI concepts approachable, and fostering a policy that resonates across the organization.

A Case Study: Creating an AI Policy Using AI RMF 1.0

NIST recently unveiled the AI RMF, a vital guide for responsibly designing, implementing, utilizing, and assessing AI systems. The core of the AI RMF fosters conversation, comprehension, and procedures to oversee AI risks and cultivate dependable and ethical AI systems. It is structured around four functions: GOVERN, MAP, MEASURE, and MANAGE. Utilizing these core functions in creating an AI policy can enable organizations to build a foundation for trustworthy AI systems that are valid, reliable, safe, secure, resilient, accountable, transparent, explainable, interpretable, privacy-enhanced, and fair with managed biases.

  • Govern: The AI policy must clearly define roles and responsibilities related to AI risk management, establish guiding principles, procedures, and practices, and foster a culture centered on risk awareness. This includes outlining processes for consistent monitoring and reporting on AI risk management performance, ensuring the policy remains relevant and practical.
    • Ensure Strong Governance: Establish clear roles, responsibilities, and principles for AI risk management, fostering a culture that values risk awareness and ethical integrity.
  • Map: This foundational stage defines the AI system’s purpose, scope, and intended use. Organizations must identify all relevant stakeholders and analyze the system’s environment, lifecycle stages, and potential impacts. The AI policy should reflect this mapping, detailing the scope of AI’s application, including geographical and temporal deployment, and those responsible for its operation.
    • Start with Clear Mapping: Outline the scope, purpose, stakeholders, and potential impacts of AI systems, ensuring that the policy reflects the detailed context of AI’s deployment.
  • Measure: The policy must specify the organization’s approach to quantifying various aspects of AI systems, such as inputs, outputs, performance metrics, risks, and trustworthiness. This includes defining the metrics, techniques, tools, and frequency of assessments. The policy should articulate principles and processes for evaluating AI’s potential impact on financial, operational, reputational, and legal aspects, ensuring alignment with organizational risk tolerance.
    • Implement Robust Measurement Procedures: Define clear metrics and methodologies to evaluate AI systems, considering various dimensions, including risks, performance, and ethical considerations.
  • Manage: This phase requires the organization to implement measures to mitigate identified risks. The policy should outline strategies for managing AI-related risks, incorporating control mechanisms, protective measures, and incident response protocols. These strategies must be harmonized with the organization’s overall risk management approach, providing precise AI system monitoring and maintenance guidelines.
    • Build Effective Management Strategies: Develop strategies for managing AI risks, ensuring alignment with broader organizational objectives, and integrating control mechanisms and responsive protocols.
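
One way to operationalize the four core functions is to mirror them in the structure of the policy document itself. The skeleton below is purely illustrative; the section names, owners, and objectives are our own assumptions, not NIST-prescribed text:

```python
from dataclasses import dataclass

@dataclass
class PolicySection:
    function: str          # AI RMF core function this section implements
    objectives: list[str]  # commitments the section makes
    owner: str             # role accountable for the section (hypothetical)

ai_policy = [
    PolicySection("GOVERN", ["Define AI risk roles and escalation paths",
                             "Schedule periodic policy review"], "Chief Risk Officer"),
    PolicySection("MAP", ["Inventory AI systems, stakeholders, and intended uses",
                          "Document deployment context and lifecycle stage"], "Product Owner"),
    PolicySection("MEASURE", ["Set metrics for performance, bias, and robustness",
                              "Fix assessment tools and cadence"], "Data Science Lead"),
    PolicySection("MANAGE", ["Prioritize and mitigate identified risks",
                             "Maintain incident response and monitoring plans"], "Security Lead"),
]

for section in ai_policy:
    print(f"{section.function}: {section.owner} ({len(section.objectives)} objectives)")
```

Keeping the policy machine-readable in this way also makes it easier to audit coverage: a missing function or unowned section is immediately visible.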


Creating a solid AI policy is a systematic process that requires thoughtful integration of ethical principles, transparent practices, risk management, and governance. Implementing the AI RMF can enable a holistic approach to AI policy creation, aligning with ethical integrity and robust security. By balancing the multifaceted characteristics of trustworthy AI, organizations can navigate complex tradeoffs, making transparent and justifiable decisions that reflect the values and contexts relevant to their operations. The AI RMF serves as a vital guide to managing AI risks responsibly and developing AI systems that are socially and organizationally coherent, enhancing the overall effectiveness and impact of AI operations.

For more information and other updates and alerts regarding privacy law developments, subscribe to Bradley’s privacy blog Online and On Point.

California, home to the highest number of registered vehicles in the U.S., is at the forefront of a critical issue – the privacy practices of automobile manufacturers and vehicle technology firms.

The California Privacy Protection Agency (CPPA), the state’s privacy enforcement authority, has messaged that it is launching an enforcement initiative. This initiative seeks to scrutinize the burgeoning pool of data accumulated by connected vehicles and assess whether the commercial practices of the firms gathering this data align with state regulations. This announcement signifies a crucial priority in privacy enforcement, highlighting the escalating focus on personal data management within the automotive industry.

Connected vehicles can accumulate a plethora of data through built-in apps, sensors, and cameras. As Ashkan Soltani, the executive director of CPPA, aptly describes, “Modern vehicles are effectively connected computers on wheels.” These vehicles monitor not only the occupants but also individuals in proximity. Location data, personal preferences, and information about daily routines are readily available. The implications are wide-ranging; data can facilitate extensive consumer profiling, anticipate driving behavior, influence insurance premiums, and even assist urban planning and traffic studies.

While the commercial value of this data is undeniable, concerns about its management are growing. California’s enforcement announcement aims to probe this area, demanding transparency and compliance from automobile manufacturers. The CPPA will investigate whether these companies provide adequate transparency to consumers and honor their rights, including the right to know what data is being collected, the right to prohibit its dissemination, and the right to request its deletion. This type of regulatory scrutiny could also trickle down to the vast commercial network of supply, logistics, trucking, construction, and other industries that use tracking technologies in vehicles.

This concern extends beyond U.S. borders. European regulators have urged automobile manufacturers to modify their software to restrict data collection and safeguard consumer privacy. For instance, Porsche has introduced a feature on their European vehicles’ dashboards that allows drivers to grant or withdraw consent for the company to collect personal data or distribute it to third-party suppliers. Furthermore, European regulators have launched investigations into the automotive industry’s use of personal data from vehicles, including location information.

In the wake of an investigation by the Dutch privacy regulator, Tesla has amended the default settings of its vehicles’ external security cameras to remain inactive until a driver enables the outside recording function. Moreover, the camera settings now store only the last 10 minutes of recorded footage, in lieu of the hour of data previously collected. The Dutch regulatory body also stated that it infringes on privacy for the cameras to record individuals outside the vehicles without their consent. In response, Tesla’s new update includes features that alert passengers and bystanders when the external cameras are operating by blinking the vehicle’s headlights and displaying a notification on the car’s internal touchscreen. Such European investigations may indeed inform California’s regulatory approach.

However, the privacy landscape of connected cars is intricate. Automobile manufacturers, satellite radio companies, providers of in-car navigation or infotainment systems, and insurance firms are part of this complex ecosystem. For example, Stellantis, the parent company of Chrysler, recently established Mobilisights to license data to various clients, including competitor car manufacturers, under strict privacy safeguards and customer data usage consent.

As the CPPA conducts its first investigation, it marks a critical juncture, potentially shaping the future of privacy regulations and practices in the automotive industry, as well as the broader concept of mobile technologies. California’s initiative is not just a state issue — it could indicate a broader trend toward stricter regulation and enforcement in the sector. As connected cars become more common, regulators, the industry, and consumers must all navigate this complex landscape with a sharp focus on privacy.

For more information and other updates and alerts regarding privacy law developments, subscribe to Bradley’s privacy blog Online and On Point.

The construction sector is known for its perennial pursuit of efficiency, quality, and safety. In recent years, one of the tools the sector has started leveraging to achieve these goals is predictive maintenance (PM), specifically the implementation of artificial intelligence (AI) within this practice. This approach, combined with continuous advancements in AI, is revolutionizing the construction industry, promising substantial gains in operational efficiency and cost savings. However, with these developments come an array of cybersecurity threats and complex legal and regulatory challenges that must be addressed. Part 1 of this two-part series discusses the role of PM in the construction sector, and Part 2 goes deeper into the cybersecurity threats and vulnerabilities relating to PM’s use in the sector.

The Role of PM in the Construction Sector

At its core, PM in the construction industry relies on data-driven insights to anticipate potential equipment failures, allowing proactive measures to be taken that prevent significant downtime or exorbitant repair costs. This principle is applied across a diverse array of machinery and equipment, from colossal cranes and bulldozers to intricate electrical systems and HVAC units.

Critical to this innovative process is AI technology, which is employed to scrutinize vast volumes of data gathered from Internet of Things (IoT) sensors integrated into the machinery. Such an approach starkly contrasts with conventional maintenance practices, which tend to be reactive rather than proactive. The advent of AI-enabled PM can revolutionize this paradigm, enabling construction companies to minimize downtime, enhance safety standards, and effectuate considerable cost savings.

For instance, the integration of worker-generated data from wearable devices introduces another layer of complexity and sophistication, significantly expanding the scope of data being analyzed. These wearable devices precisely record a variety of parameters, including physical exertion levels, heart rate, and environmental exposure information, which directly pertain to an individual’s private health and personal details. Alongside machinery-related data, the physiological and environmental metrics gathered by these wearables are continuously fed into the AI system, bolstering its predictive capabilities. This intricate data, when collected and analyzed, yields invaluable insights into the conditions under which machinery operates. In certain instances, these observations can even serve as an early warning system for potential equipment issues. For example, consistently high stress levels indicated by a worker’s wearable device while operating a specific piece of equipment could suggest an underlying machine problem that needs to be addressed.

In another use case, consider an AI-driven PM system processing vibration data from a crane’s motor. By applying machine learning to historical patterns, the system can deduce that a specific bearing is likely to malfunction within a certain timeframe. The alert generated by this prediction isn’t based solely on machinery data; it can also incorporate data from the crane operator’s wearable device, revealing elevated stress levels as the bearing begins to fail. This timely alert empowers the maintenance team to rectify the issue before it escalates into a significant breakdown or, even more detrimentally, a safety incident.
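A real system would learn these failure patterns with machine learning models trained on historical data, but the signal-fusion logic described above can be sketched with simple fixed thresholds. The function below is a minimal, hypothetical illustration only; the baseline, drift limit, and heart-rate threshold are invented for the example and are not engineering limits:

```python
from statistics import mean

def maintenance_alert(vibration_mm_s, heart_rates_bpm,
                      vib_baseline=2.0, vib_drift_limit=1.0,
                      hr_stress_bpm=110):
    """Toy fusion of machinery and wearable signals for a PM alert.

    vibration_mm_s  -- recent vibration readings from the motor sensor
    heart_rates_bpm -- recent heart-rate samples from the operator's wearable
    All thresholds here are illustrative placeholders.
    """
    drift = mean(vibration_mm_s[-10:]) - vib_baseline  # deviation of the
    vib_flag = drift > vib_drift_limit                 # last 10 samples
    hr_flag = mean(heart_rates_bpm) > hr_stress_bpm    # elevated operator stress?
    if vib_flag and hr_flag:
        return "alert"    # both signals agree: schedule maintenance now
    if vib_flag:
        return "warning"  # machinery data alone suggests a developing fault
    return "ok"
```

Called with steadily rising vibration readings and an operator heart rate above the stress threshold, `maintenance_alert([2.0]*10 + [3.5]*10, [120]*5)` returns `"alert"`; the same vibration data with a normal heart rate yields only `"warning"`, mirroring how corroborating wearable data can raise the confidence of a prediction.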

Risks of Predictive Maintenance

The rise in PM adoption simultaneously escalates the potential cybersecurity threats. The high volume of data transferred and stored, coupled with an increasing risk of data breaches and cyber-attacks, brings about grave concerns. Hackers could infiltrate PM systems, steal sensitive data, cause disruption, or manipulate the data fed into AI systems to yield incorrect predictions, causing substantial harm. IoT devices, which act as the primary data sources for AI-driven maintenance systems, also present considerable cybersecurity vulnerabilities if not appropriately secured. Despite being invaluable for PM, these devices, ranging from simple machinery sensors to sophisticated wearables, have several weak points due to their inherent design and function.

PM users also face complicated new questions of privacy, liability, and compliance with industry-specific regulations. Ownership of the data that AI systems train on is a site of intense legal debate; regulations such as the EU’s General Data Protection Regulation (GDPR) and the California Privacy Rights Act (CPRA) impose penalties for failing to properly anonymize and manage data. The question of liability in the case of an accident, and of compliance with construction-specific regulations, will also be key.

The Future of PM in Construction

Looking ahead, the use of AI in PM is expected to become even more sophisticated and widespread. The continuing development of AI technologies, coupled with the growth of IoT devices and the rollout of high-speed 5G and 6G networks, will facilitate the collection and analysis of even larger data volumes, leading to even more accurate and timely predictions.

Furthermore, as AI systems become more capable of learning and adapting, they will increasingly be able to optimize their predictions and recommendations over time based on feedback and additional data. We can also expect to see increased integration between PM systems and other technological trends in construction, such as digital twins and augmented reality. For instance, a digital twin of a construction site could include real-time data on the status of various pieces of equipment, and AR devices could be used to visualize potential issues and guide maintenance work.

PM, powered by AI, holds immense promise for the construction industry. It has the potential to greatly increase efficiency, reduce costs, and improve safety. However, it also brings with it significant cybersecurity threats and legal and regulatory challenges. As the industry continues to embrace this technology, it will be crucial to address these issues, striking a balance between innovation and security, compliance, and liability.

For more information and other updates and alerts regarding privacy law developments, subscribe to Bradley’s privacy blog Online and On Point.


As cyber threats have evolved and expanded, cybersecurity has emerged as a critical concern for organizations across sectors, and there is more urgency than ever for companies to remain vigilant and prepared. Cybersecurity incidents can come with legal implications and lead to substantial financial losses, and members of the board must increasingly be involved and knowledgeable on cybersecurity to safeguard the company’s reputation – and their own. Tabletop exercises are a potent tool to help identify and address gaps, increase cooperation on cybersecurity goals, and build organizational “muscle memory” for responding to threats.

Risks for Companies and Boards

An indispensable component of cyber preparedness is the active engagement of organizational leadership, especially the board of directors. Insufficient cyber preparedness can result in serious legal implications for both the company and the board, including shareholder actions and derivative lawsuits. These mistakes can not only threaten the organization’s reputation and lead to substantial financial losses, but also affect the reputations of individual board members. This is especially significant for board members who serve on multiple boards, as their professional reputation and credibility are at stake. A derivative action against them could harm their standing across all the boards they serve.

An engaged and well-informed board is vital to building a resilient cyber defense and plays a critical role in mitigating the risk of legal actions. By actively participating in the cyber readiness process, the board can demonstrate its commitment to protecting the company and its stakeholders from cyber threats. When properly documented, this display of due diligence becomes a powerful defense against potential shareholder litigation or derivative lawsuits. It protects not just the company’s assets and reputation but also the board members’ personal reputations, reinforcing the importance of their roles in an increasingly interconnected corporate landscape.

Using Tabletops for Organizational Insights

Tabletop exercises offer a powerful platform to practice and evaluate response strategies to hypothetical cyber incidents. These simulated scenarios serve as a systematic, interactive, and low-risk method for teams to pinpoint vulnerabilities in existing protocols, improve coordination, and critically assess the decision-making process during crises. A recent study by the National Association of Corporate Directors underscores this imperative: 48% of company boards reported conducting a cyber-centric exercise in the year leading up to the survey.

These exercises generate valuable metrics such as response times, decision accuracy, coordination efficiency, and communication effectiveness. Gathering these metrics over several exercises helps organizations discern patterns, track progress, and identify gaps that need to be addressed. More qualitatively, these exercises can give organizations insight into the subtleties of team dynamics, decision-making, and communication. Gaps or weaknesses in any of these areas are vulnerabilities that cybercriminals can exploit as entry points into a company’s systems or facilities.

Tabletop exercises have additional benefits beyond identifying weaknesses in cyber preparedness. The exercises also allow stakeholders across different departments to collaborate, fostering an integrated communication culture within an organization. This practice, critical for effective cyber preparedness, does carry certain risks, including potential miscommunications and diverging departmental priorities. To address these challenges, organizations must prioritize establishing a structured, transparent communication system that mitigates such risks. Most importantly, tabletop exercises can allow organizations to develop a “cybersecurity muscle memory.” By running through different scenarios and discussing various response strategies, organizations can strengthen their ability to detect, mitigate, and recover from security breaches.

Making Tabletops Work for You

Tabletops are not “one-and-done” exercises. For maximum impact, companies should integrate the exercises into annual plans, adapting the scenarios to the rapidly evolving cyber threat landscape. Regular reviews of the exercises, incorporation of learned lessons, and ongoing adjustments to the exercises based on new threat intelligence are vital components of robust cyber preparedness.

For companies uncertain about their starting point, tabletop exercises can be customized and scaled to meet the organization’s unique needs and risks. As the company evolves, the exercises can be tailored to tackle more complex scenarios and challenges. This customization ensures the exercises remain relevant, focusing on the company’s cybersecurity objectives.

The surge in cyber threats underscores the need for leadership’s proactive approach to cybersecurity. Tabletop exercises are valuable tools to help corporate leaders and the board assess firsthand the effectiveness of the organization’s incident response capabilities and, thus, the risks they individually face.

For more information and other updates and alerts regarding privacy law developments, subscribe to Bradley’s privacy blog Online and On Point.


Kayla Tran is a co-author of this post and is a Summer Associate at Bradley.

In recent years, the Lone Star State has been vigilant in enacting cybersecurity and data privacy laws to protect individuals and businesses from the disastrous effects of a data breach. Here is a timeline of previous cybersecurity and data privacy legislation enacted by the Texas Legislature:

  • 2007: Identity Theft Enforcement and Protection Act – requires businesses to “implement and maintain reasonable procedures” to protect consumers from the unlawful use of personal information.
  • 2009: Biometric Privacy Act – requires businesses to obtain consent from consumers before capturing any biometric identifiers.
  • 2012: Medical Records Privacy Act – protects patients from the disclosure of their information without consent.
  • 2017: Student Data Privacy Act – further protects students by restricting school websites from “engaging in targeted advertising based on personally identifiable student information.”
  • 2017: Texas Cybercrime Act – assesses criminal penalties for the intentional interruption or suspension of another person’s access to a computer system or network without consent.
  • 2017: Texas Cybersecurity Act – sets forth “specific measures to protect sensitive and confidential data [to] maintain cyberattack readiness.”
  • 2019: Texas Privacy Protection Act (HB 4390) – amends existing data breach notification obligations and creates an advisory council to study and evaluate privacy laws in the state.

Now, the Texas Data Privacy and Security Act has made Texas one of almost a dozen states to pass comprehensive privacy legislation. On May 28, 2023, the act passed the Texas State House and Senate, and on June 18, 2023, Gov. Greg Abbott signed it into law. The act is set to take effect on July 1, 2024.

The purpose of the act is to protect the personal data of “consumers who [are] residents of the state of Texas acting in an individual or household context.” The act will provide consumers with stronger individual rights to (1) confirm whether a controller is processing their personal data; (2) correct any discrepancies in their personal data; (3) delete personal data provided or obtained; (4) obtain a copy of personal data the consumer previously provided to the controller, in a portable and readily usable format, so long as the data is available digitally and the request is technically feasible; (5) opt out of the processing of their personal data for targeted advertising; and (6) appeal a controller’s refusal to respond to such requests.

Personal data in the act includes any information, including sensitive data, that is linked or can be reasonably linked to an identified or identifiable person. Personal data includes pseudonymous data when the data “is used by a controller in conjunction with additional information that reasonably links the data to an identified or identifiable individual.” Personal data specifically does not include “deidentified data or publicly available information.”

Who does the act apply to?

The act has a broad scope of application as it applies to organizations that (1) conduct business in Texas or produce products or services that are consumed by the residents of Texas; (2) process or engage in the sale of personal data; and (3) are not defined by the United States Small Business Administration (SBA) as a small business. However, if an organization meets the first two requirements, but is defined as a small business, it must still comply with a section of the act that requires small businesses to first obtain consumer consent for the sale of sensitive personal data.

The act will not apply to individuals acting in a commercial or employment context as it only protects consumers acting in an individual or household capacity. As a result, it is not triggered in the business-to-business or employment context. The bill also includes a list of exceptions and exemptions, including state agencies, higher education institutions, nonprofit organizations, and entities governed by the Health Insurance Portability and Accountability Act (HIPAA) or the Gramm-Leach-Bliley Act.

Any problems?

One problem with the act is its reliance on the SBA’s definition of a small business. The SBA defines a small business differently depending on the specific industry a company is in, leaving uncertainty about which businesses are actually covered. Additionally, the act applies to businesses that provide services that are “consumed by,” rather than “targeted at,” Texas residents, so many organizations will be surprised to learn that the act may apply to them.

It is important to note that the act does not create a private right of action for individuals. The act is enforced and governed solely by the Texas attorney general. The act includes an initial 30-day cure period to remedy violations; if a violation is not cured within those 30 days, a civil fine of up to $7,500 can be assessed for each violation. On top of that, the cure period does not sunset, and the attorney general’s office is entitled to recover reasonable attorneys’ fees and other reasonable expenses incurred in investigating and bringing an enforcement action under the act.

So, what does all of this mean for businesses operating in Texas?

With almost every new law comes new obligations. Here are a few things that businesses (controllers) should pay close attention to:

  • Sensitive data or personal data obtained by a controller for a purpose that is not reasonably necessary or compatible with the disclosed purpose can only be processed with a consumer’s consent. This consent must be a clear affirmative act, signaling that the consumer is freely giving specific, informed, and unambiguous consent to process their personal data. It is undetermined whether consent by a consumer can be withdrawn.
  • In certain scenarios, a business must include a “reasonably accessible and clear” privacy notice to its consumers. This notice must include “(1) the categories of personal data processed by the controller; (2) the purpose for processing personal data; (3) how consumers may exercise their consumer rights, including the appeal process; (4) the categories of personal data shared with third parties; (5) the categories of third parties with whom the data is shared; and (6) a description of the methods through which consumers can submit requests to exercise their consumer rights.” Additionally, if any of the shared personal data is sensitive, the following notice must be included: “We may sell your sensitive personal data.”
  • Businesses must conduct and document a data protection assessment for data with a higher risk of harm. This assessment must weigh the potential risks to consumer rights against any direct/indirect benefit, mitigated by safeguards, and must consider the use of deidentified data, processing context, and most importantly, reasonable consumer expectation.
  • If a business is able to show that the data needed to identify pseudonymous personal data of a consumer is kept separately and subject to technical and organizational controls that prevent the business from accessing the information, then that business has no obligation to the consumer regarding such pseudonymous data. 
  • A business can choose to authenticate a consumer’s requests to exercise their rights under the act. If the business cannot authenticate a consumer’s request, then the business is not required to comply with the consumer’s request.

While Texas is just one of many states that have now enacted a bill to further protect consumers’ personal data, it is clear that things are changing, and state legislative bodies are recognizing the importance of consumer privacy. With this in mind, Texas businesses need to ensure that they are in compliance with this bill. We’re just here to spread the message: Failure to comply with this bill can result in civil penalties assessed by the attorney general of Texas.

For more information and other updates and alerts regarding privacy law developments, subscribe to Bradley’s privacy blog, Online and On Point.

*Kayla Tran is not a licensed attorney.


The Department of Defense Inspector General (DoDIG) recently released its “Audit of the DoD’s Implementation and Oversight of the Controlled Unclassified Information [CUI] Program” (DODIG-2023-078). The audit highlights some of DoD’s challenges in implementing the CUI Program and provides recommendations on how to make the program work better. The DoD’s response to the DoDIG’s audit recommendations will likely impact federal contractors working on contracts that handle CUI, including increased oversight and auditing, as well as increased training and reporting requirements.

What is CUI?

CUI is information created or possessed for the government that requires safeguarding or dissemination controls according to applicable laws, regulations, and government‑wide policies; CUI is not classified information. This audit was requested by the Senate Armed Services Committee due to “concern that DoD Components were using limited dissemination controls [LDCs] without having a legitimate rationale, thereby limiting transparency.” Essentially, Congress wasn’t as concerned with the improper dissemination of CUI, but rather with DoD’s over-marking and use of the CUI Program to limit access to information.


Before summarizing the important findings of the audit, let’s briefly review the history of the government-wide CUI Program, and DoD’s implementation thereof, starting with Executive Order 13556 issued in 2010.

EO 13556 aimed to standardize the way the entire executive branch handled unclassified information that requires safeguarding or dissemination controls. Prior to the establishment of the CUI Program, there were dozens of different programs and marking protocols administered by different agencies and DoD components, including the most common: For Official Use Only (FOUO), Sensitive But Unclassified (SBU), and Law Enforcement Sensitive (LES). The CUI Program, administered primarily by the National Archives, attempts to reduce the many marking and dissemination programs into a single, government-wide program, although many will note that these legacy markings persist in some pockets of government, despite over a decade of regulatory intent.

DoD, for its part, most recently issued DoD Instruction 5200.48, which clarified previous DoD policy and established “the DoD CUI Program requirements for designating, marking, handling, and decontrolling CUI,” as well as created a requirement for CUI training. The DoD Office of the Under Secretary of Defense for Intelligence and Security (OUSD(I&S)) promulgated the guidance but left the implementation of the CUI Program to the various DoD components.

Audit Findings

The audit found:

  • DoD components did not effectively oversee the implementation of guidance to ensure that CUI documents and emails contained the required markings.
  • DoD components did not effectively oversee DoD and contractor personnel’s completion of the appropriate CUI training.
  • This implementation and oversight failure occurred because the DoD components did not have mechanisms in place to ensure that CUI documents and emails included the required markings, and the OUSD(I&S) did not require the DoD components to test, as part of their annual reporting process, a sample of CUI documents to verify whether the documents contained the required markings.
  • In addition, not all of the DoD components and contracting officials tracked whether their personnel completed the required CUI training.
  • The use of improper or inconsistent CUI markings and the lack of training can increase the risk of the unauthorized disclosure of CUI or unnecessarily restrict the dissemination of information and create obstacles to authorized information sharing.
  • Furthermore, the DoD will not meet the intent of Executive Order 13556 to standardize the way the executive branch handles CUI.

In sum, DoDIG found that DoD components routinely either over-marked information that was not properly considered CUI or improperly marked information that was CUI. A lack of training and tracking mechanisms compounded both findings. The DoDIG made 14 recommendations for improvement, six of which remain “unresolved” pending additional comments and coordination with DoD management, meaning that a revised version of the audit report will be expected later this year that incorporates management comments and tracks the resolution of outstanding recommendations.

Why Are the Audit Findings Important?

For defense contractors, these audit findings are important because they have real-world impact on contractors’ responsibilities and potential expenses under Defense Federal Acquisition Regulation Supplement (DFARS) clause 252.204‑7012, which requires contractors that maintain CUI to implement security controls specified in the National Institute of Standards and Technology (NIST) Special Publication (SP) 800‑171. Contractors responsible for the physical and cybersecurity safeguarding of CUI on their systems are reliant on DoD Component Program offices to properly identify and notify contractors of DoD CUI at the time of contract award and throughout the life of the contract when handling CUI. DoDI 5200.48 also requires contractors who handle CUI to receive initial and annual refresher training that meets certain CUI learning objectives. The audit notes that while contractors were more compliant with their training responsibilities, the DoD components were not auditing or tracking these training requirements, which increased risk of noncompliance.

If DoD components prioritize their CUI Programs and follow the recommendations of the DoDIG audit, this could result in increased programmatic and contracting offices’ focus on the information safeguarding compliance regime, NIST controls, and CUI training for contractors.

For contractors who believe that customer CUI Programs are over-marking information and data — unnecessarily increasing compliance burdens and limiting transparency — this audit provides substantive and rhetorical support to push back on over-marked information during requests to decontrol.


The government-wide CUI Program is over a decade old and continues to evolve, be refined, and experience growing pains. This DoDIG audit is another milestone in the CUI Program’s growth and refinement.

This audit is also timely. As recent high-profile prosecutions over classified information leaks have made the news, there has been an increased focus on safeguarding sensitive information at all levels, including CUI.

Improving the management of the CUI Program is particularly important because the CUI regime operates in the liminal space where both Congress and interested parties want a perfect balance between protection of proper CUI and heightened transparency for everything else. This Goldilocks conundrum for the CUI Program will continue to generate friction between all parties: congressional and IG oversight, agencies implementing and managing the CUI Program, contractors managing and safeguarding data, and the public and media pursuing open and transparent government ideals.

If you have any questions about this noteworthy development, please do not hesitate to contact Nathaniel Greeson, Andy Elbon, or Matthew Flynn.    

For more information and other updates and alerts regarding privacy law developments, subscribe to Bradley’s privacy blog Online and On Point.


In an era where our lives are ever more intertwined with technology, the security of digital platforms is a matter of national concern. A recent large-scale cyberattack affecting several U.S. federal agencies and numerous other commercial organizations emphasizes the criticality of robust cybersecurity measures.

The Intrusion

On June 7, 2023, the Cybersecurity and Infrastructure Security Agency (CISA) identified an exploit by “Threat Actor 505” (TA505), namely, a previously unidentified (zero-day) vulnerability in a data transfer software called MOVEit. MOVEit is a file transfer software used by a broad range of companies to securely transfer files between organizations. Darin Bielby, the managing director at Cypfer, explained that the number of affected companies could be in the thousands: “The Cl0p ransomware group has become adept at compromising file transfer tools. The latest being MOVEit on the heels of past incidents at GoAnywhere. Upwards of 3000 companies could be affected. Cypfer has already been engaged by many companies to assist with threat actor negotiations and recovery.”

CISA, along with the FBI, advised that “[d]ue to the speed and ease TA505 has exploited this vulnerability, and based on their past campaigns, FBI and CISA expect to see widespread exploitation of unpatched software services in both private and public networks.”

Although CISA did not comment on the perpetrator behind the attack, suspicion has fallen on a Russian-speaking ransomware group known as Cl0p. Much like in the SolarWinds case, the attackers ingeniously exploited vulnerabilities in widely utilized software, managing to infiltrate an array of networks.

Wider Implications

The Department of Energy was among the many federal agencies compromised, with records from two of its entities being affected. A spokesperson for the department confirmed they “took immediate steps” to alleviate the impact and notified Congress, law enforcement, CISA, and the affected entities.

This attack has ramifications beyond federal agencies. Johns Hopkins University’s health system reported a possible breach of sensitive personal and financial information, including health billing records. Georgia’s statewide university system is investigating the scope and severity of the hack affecting them.

Internationally, the likes of BBC, British Airways, and Shell have also been victims of this hacking campaign. This highlights the global nature of cyber threats and the necessity of international collaboration in cybersecurity.

The group claimed credit for some of the intrusions in a campaign that began two weeks ago. Interestingly, Cl0p took an unusual step, stating that they erased the data from government entities and have “no interest in exposing such information.” Instead, their primary focus remains extorting victims for financial gain.

Still, although every file transfer service based on MOVEit could have been affected, that does not mean that every file transfer service based on MOVEit was affected. Threat actors exploiting the vulnerability would likely have had to independently target each file transfer service that employs the MOVEit platform. Thus, companies should determine whether their secure file transfer services rely on the MOVEit platform and whether any indicators exist that a threat actor exploited the vulnerability.

A Flaw Too Many

The attackers exploited a zero-day vulnerability that likely exposed the data that companies uploaded to MOVEit servers for seemingly secure transfers. This highlights how a single software vulnerability can have far-reaching consequences if manipulated by adept criminals. Progress, the U.S. firm that owns MOVEit, has urged users to update their software and issued security advice.

Notification Requirements

This exploitation likely creates notification requirements for the myriad affected companies under the various state data breach notification laws and some industry-specific regulations. Companies that own consumer data and share that data with service providers are not absolved of notification requirements merely because the breach occurred in the service provider’s environment. Organizations should engage counsel to determine whether their notification requirements are triggered.

A Call to Action

This cyberattack serves as a reminder of the sophistication and evolution of cyber threats. Organizations using the MOVEit software should analyze whether this vulnerability has affected any of their or their vendors’ operations.

With the increasing dependency on digital platforms, cybersecurity is no longer an option but a necessity. In a world where the next cyberattack is not a matter of “if” but “when,” it is time for a proactive approach to securing our digital realms. Organizations across sectors must prioritize cybersecurity. This involves staying updated with the latest security patches and ensuring adequate protective measures and response plans are in place.


On May 16, 2023, the U.S. Senate Judiciary Committee conducted a significant oversight hearing on the regulation of artificial intelligence (AI) technology, specifically focusing on newer models of generative AI that create new content from large datasets. The hearing was chaired by Sen. Richard Blumenthal for the Subcommittee on Privacy, Technology, and the Law. He opened the hearing with an audio-cloned statement generated by ChatGPT to demonstrate ominous risks associated with social engineering and identity theft. Notable witnesses included Samuel Altman, CEO of OpenAI, Christina Montgomery, chief privacy and trust officer at IBM, and Gary Marcus, professor emeritus of Psychology and Neural Science at New York University – each of whom advocated for the regulation of AI in different ways. 

Altman advocated for the establishment of a new federal agency responsible for licensing AI models according to specific safety standards and monitoring certain AI capabilities. He emphasized the need for global AI standards and controls and described the safeguards implemented by OpenAI in the design, development, and deployment of their ChatGPT product. He explained that before deployment and with continued use, ChatGPT undergoes independent audits, as well as ongoing safety monitoring and testing. He also discussed how OpenAI foregoes the use of personal data in ChatGPT to lessen the risk of privacy concerns, but also noted how the end user of the AI product impacts all of the risks and challenges that AI represents.

Montgomery from IBM supported a “precision regulation” approach, focusing on specific use cases and addressing risks rather than broadly regulating the technology itself, citing the approach taken in the proposed EU AI Act as an example. While Montgomery highlighted the need for clear regulatory guidance for AI developers, she stopped short of advocating for a federal agency or commission. Instead, she described the AI licensure process as obtaining a “license from society” and stressed the importance of transparency for users so they know when they interact with AI, but noted that IBM models are more B2B than consumer facing. She advocated for a “reasonable care” standard to hold AI systems accountable. Montgomery also discussed IBM’s internal governance framework, which includes a lead AI officer and an ethics board, as well as impact assessments, transparency of data sources, and user notification when interacting with AI.

Marcus argued that the current court system is insufficient for regulating AI and expressed the need for new laws governing AI technologies and strong agency oversight. He proposed an agency similar to the Food and Drug Administration (FDA) with the authority to monitor AI and conduct safety reviews, including the ability to recall AI products. Marcus also recommended increased funding for AI safety research, both in the short term and long term.

The senators seemed poised to regulate AI in this Congress, whether through an agency or via the courts, and expressed bipartisan concerns about deployments and uses of AI that pose significant dangers requiring intervention. Furthermore, the importance of technology and organizational governance rules was underscored, with the recommendation of adopting cues from the EU AI Act in taking a strong leadership position and a risk-based approach. During the hearing, there were suggestions to incorporate constitutional AI by emphasizing the upfront inclusion of values in the AI models, rather than solely focusing on training them to avoid harmful content.

The senators debated the necessity of a comprehensive national privacy law to provide essential data protections for AI, with proponents for such a bill on both sides of the aisle. They also discussed the potential regulation of social media platforms that currently enjoy exemptions under Section 230 of the Communications Decency Act of 1996, specifically addressing the issue of harms to children. The United States finds itself at a critical juncture where the evolution of technology has outpaced the development of both regulatory frameworks and case law. As Congress grapples with the task of addressing the risks and ensuring the trustworthiness of AI, technology companies and AI users are taking the initiative to establish internal ethical principles and standards governing the creation and deployment of artificial and augmented intelligence technologies. These internal guidelines serve as a compass for organizational conduct, mitigating the potential for legal repercussions and safeguarding against negative reputational consequences in the absence of clear legal guidelines.