
As Cybersecurity Awareness Month comes to an end and the spooky season of Halloween is upon us, no one wants to live through a cybersecurity horror story. There are some simple precautions every business and household can take to help keep their data and information safe. We have outlined a few below with a downloadable PDF to share with your friends, family, and colleagues. Stay safe out there, and for more information and other updates regarding cybersecurity and privacy, subscribe to Bradley's Online and On Point blog.

Cybersecurity Tips

  1. Use strong passwords for everything
  2. Update software on your devices
  3. Use multi-factor authentication for logins
  4. Keep learning about common cyber threats
  5. Look out for phishing attempts

On October 10, 2023, California Gov. Gavin Newsom signed SB 362 into law. The “Delete Act” is a key piece of privacy legislation designed to further protect consumer online privacy rights and place further obligations on data brokers.

The Delete Act heavily amends California's existing data broker law and seeks to establish a one-stop shop for consumers to make a single request that all data brokers delete their personal information. Before the Delete Act, California residents could already request deletion of their personal information under the California Consumer Privacy Act (CCPA), but they had to make individual requests to each business.

The California Privacy Protection Agency (CPPA) is now tasked with establishing an online deletion mechanism by January 1, 2026, to ensure consumers can safely and securely effectuate their deletion rights. All businesses meeting the definition of "data broker" will have to comply starting August 1, 2026.

We highlight the notable provisions of the Delete Act below:

Who Must Comply?

Data Brokers – The Delete Act applies to all California businesses regulated under the CCPA that knowingly collect and sell to third parties the personal information of California residents with whom the business does not have a direct relationship. The Delete Act specifically exempts businesses that are regulated by certain other privacy laws, including the Fair Credit Reporting Act, the Gramm-Leach-Bliley Act, and the Insurance Information and Privacy Protection Act. As under the CCPA, HIPAA-regulated entities are exempt to the extent the personal information is regulated under HIPAA or another applicable health law referenced in the CCPA.

All data brokers must register with the CPPA and disclose a significant amount of information, such as:

  • Whether they collect any personal information from minors, precise geolocation data, or reproductive health data.
  • The number of consumer requests submitted to the data broker during the previous calendar year, including the number of requests it responded to and the number it denied.
  • The average time it took for the data broker to respond to consumer requests from the previous calendar year.

Service Providers and Contractors – All service providers and contractors must comply with a consumer’s deletion request. The data broker is mandated to direct all of its applicable vendors to delete the consumer’s personal information. This is similar to a business’s obligation under CCPA to forward all deletion requests to its vendors.

The Deletion Mechanism

As mentioned above, the CPPA must create a deletion “mechanism” by January 1, 2026, that allows any consumer to submit a verified consumer request, instructing every data broker to delete the personal information of the consumer in its possession. 

There are specific requirements for the creation of this mechanism, including that: (1) it must be available online; (2) it must be free for consumers to use; (3) it must provide a process for submitting a deletion request; (4) it must allow a consumer's authorized agent to aid the consumer in submitting the request, similar to the CCPA; and (5) it must give consumers the option to "selectively exclude" certain data brokers from deleting their personal information.

Data Broker Responsibilities

Aside from the registration requirements, data brokers have additional obligations under the Delete Act:

  • Compliance with deletion requests – Data brokers must comply with a deletion request within 45 days.
  • Opting-out of selling/sharing – If the data broker cannot verify a deletion request, the data broker must treat the request as a request to opt-out of selling or sharing under CCPA.
  • Continuing obligations – Every 45 days, data brokers must access the deletion mechanism and delete, or opt-out of selling or sharing, the personal information of all consumers who have previously made requests. This is a continuing obligation until the consumer says otherwise or an exemption under the law applies.
  • Audits – Beginning January 1, 2028, and every three years thereafter, data brokers must undergo an audit by an “independent third party” to determine compliance with the Delete Act. The data broker must disclose the results of the audit to the CPPA within five business days upon written request. The report must be maintained for six years. Beginning January 1, 2029, data brokers must disclose to the CPPA the last year they underwent an audit, if applicable.
  • Public disclosures – Data brokers must disclose in their consumer-facing privacy policies: (1) the same metrics on consumer requests received, as discussed above; (2) the specific reasons why the data broker denied consumer requests; and (3) the number of consumer requests that did not require a response and the associated reasons for not responding (e.g., statutory exemptions).

Investigations and Penalties

The CPPA may initiate investigations and actions, as well as administer penalties and fines. Data brokers are subject to fines of $200 per day for failing to register with the CPPA and $200 per day for each unfulfilled deletion request.


The proliferation of AI-derived and processed data in the era of big data is occurring against a complex backdrop of legal frameworks governing ownership of and responsibilities with regard to that data. In a previous installment of this two-part series, the authors outlined challenges and opportunities presented by big data and AI-derived data. In this part, they will discuss the complex legal backdrop governing this emerging area, including potential implications for business.

Patent Law and Machine-Generated Data Ownership

While not explicitly excluding machines as potential inventors, United States patent law has traditionally operated within an anthropocentric framework. This human-centric approach to inventorship and ownership is deeply ingrained in statutory law and judicial interpretations. However, rapid advancements in AI and ML technologies are increasingly blurring the lines between human and machine capabilities in the realm of invention. This creates an environment of legal uncertainty, necessitating vigilance among stakeholders in technology and IP law for future legislative or judicial developments that may clarify or redefine inventorship in the context of machine-generated innovations.

Trade Secret Protection for Machine-Generated Works

Trade secret law provides a compelling avenue for protecting machine-generated works, largely because it does not require the identification of a human inventor. This legal protection is anchored on three foundational pillars. First, the information must not be publicly disclosed or easily ascertainable, preserving its secretive status. Second, the information must possess intrinsic economic value attributable to its confidential nature. Third, reasonable measures must be undertaken to maintain the confidentiality of the information, ensuring its continued protection under trade secret law.

Given these criteria, trade secret law provides a flexible yet robust framework for safeguarding machine-generated works, circumventing the complexities and limitations often associated with copyright and patent law. This adaptability makes trade secret law increasingly relevant in the era of AI and ML, where traditional IP boundaries are being continually redefined.

The Fair Use Doctrine

The fair use doctrine stands as a nuanced yet indispensable exception within copyright law, allowing the creation and use of transformative derivative works without constituting copyright infringement. Its relevance is heightened today, when technological advancements in big data, ML, and digital technology fundamentally alter how we interact with information.

The legal significance of the fair use doctrine has been underscored by several landmark cases illustrating its evolving role in mediating technological innovation and IP rights. For instance, the U.S. Supreme Court's ruling in Google v. Oracle emphasized the transformative nature of Google's use of Java APIs in the Android operating system, holding that it constituted fair use. Similarly, the Authors Guild v. Google case highlighted the public benefit of scanning and indexing millions of books, a use the court held qualified as fair use.

When applying this doctrine to ML-created derived data, courts may consider several factors, such as the purpose and character of the use, the nature of the copyrighted work, the amount and substantiality of the portion used, and the effect on the market value of the original work. If the ML model transforms the data in a way that could be considered "transformative use," it might be more likely to be deemed fair use. However, ethical considerations also come into play, particularly when ML-derived data is used in ways that could be considered harmful or discriminatory.

Courts employ a multi-faceted approach in evaluating fair use claims. They scrutinize the intent behind the derivative work (‘Purpose and Character of Use’), assess the original work’s nature (factual or creative), examine the extent of the borrowed material (‘Amount and Substantiality of the Portion Used’), and evaluate the potential market impact on the original work.

It’s crucial to recognize that the fair use doctrine is not a static legal principle but a flexible and adaptable framework. It evolves in response to changing social, technological, and cultural contexts. Influenced by the norms and values of different communities and innovations in various fields, the doctrine remains relevant and applicable in a world where the modes of creation and dissemination are in constant flux.

Ethical and Societal Considerations

Beyond the legal frameworks, ethical stewardship plays a crucial role in responsible data management. Transparency, consent, and robust security measures constitute the cornerstone of responsible data management. Additionally, ethical guidelines should govern the use of data to prevent harmful or unethical applications, such as discrimination or exploitation. Public interest considerations and effective dispute resolution mechanisms should also be integrated into any comprehensive data governance framework.

Conclusion

In the rapidly evolving landscape of big data, AI, and the IoT, the issue of data ownership has become increasingly complex and multi-dimensional. This complexity is further accentuated by the intersection of various legal frameworks, including IP laws, trade secrets, and data protection regulations. As we have explored, each framework offers opportunities and challenges, necessitating a nuanced approach to data governance.

The advent of AI and ML technologies has introduced additional layers of intricacy, particularly in IP rights. As machines become increasingly capable of innovation, the anthropocentric frameworks of existing patent laws are being called into question, highlighting the need for legal evolution. Finally, the issue of data ownership is not solely a legal construct but also an ethical and societal one. The need for a multi-faceted approach to data governance is evident, balancing the rights and responsibilities of all stakeholders involved — individuals, machines, or public entities. Such an approach would incorporate elements of transparency, consent, security, compliance, and ethical considerations, thereby creating a governance framework that is both robust and adaptable.


The emergence of big data, artificial intelligence (AI), and the Internet of Things (IoT) has fundamentally transformed our understanding and utilization of data. While the value of big data is beyond dispute, its management introduces intricate legal questions, particularly concerning data ownership, licensing, and the protection of derived data. This article, the first installment in a two-part series, outlines challenges and opportunities presented by AI-processed and IoT-generated data. The second part, to be published Thursday, October 19, will discuss the complexities of the legal frameworks that govern data ownership.

Defining Big Data and Its Legal Implications

Big data serves as a comprehensive term for large, dynamically evolving collections of electronic data that often exceed the capabilities of traditional data management systems. This data is not merely voluminous but also possesses two key attributes with significant legal ramifications. First, big data is a valuable asset that can be leveraged for a multitude of applications, ranging from decoding consumer preferences to forecasting macroeconomic trends and identifying public health patterns. Second, the richness of big data often means it contains sensitive and confidential information, such as proprietary business intelligence and personally identifiable information (PII). As a result, the management and utilization of big data require stringent legal safeguards to ensure both the security and ethical handling of this information.

Legal Frameworks Governing Data Ownership

Navigating the intricate landscape of data ownership necessitates a multi-dimensional understanding that encompasses legal, ethical, and technological considerations. This complexity is further heightened by diverse intellectual property (IP) laws and trade secret statutes, each of which can confer exclusive rights over specific data sets. Additionally, jurisdictional variations in data protection laws, such as the European Union’s General Data Protection Regulation (GDPR) and the United States’ California Consumer Privacy Act (CCPA), introduce another layer of complexity. These laws empower individuals with greater control over their personal data, granting them the right to access, correct, delete, or port their information. However, the concept of “ownership” often varies depending on the jurisdiction and the type of data involved — be it personal or anonymized.

Machine-Generated Data and Ownership

The issue of data ownership extends beyond individual data to include machine-generated data, which introduces its own set of complexities. Whether it’s smart assistants generating data based on human interaction or autonomous vehicles operating independently of human input, ownership often resides with the entity that owns or operates the machine. This is typically defined by terms of service or end-user license agreements (EULAs). Moreover, IP laws, including patents and trade secrets, can also come into play, especially when the data undergoes specialized processing or analysis.

Derived Data and Algorithms

Derived and derivative algorithms refer to computational models or methods that evolve from, adapt, or draw inspiration from pre-existing algorithms. These new algorithms must introduce innovative functionalities, optimizations, or applications to be considered derived or derivative. Under U.S. copyright law, the creator of a derivative work generally holds the copyright for the new elements that did not exist in the original work. However, this does not extend to the foundational algorithm upon which the derivative algorithm is based. The ownership of the original algorithm remains with its initial creator unless explicitly transferred through legal means such as a licensing agreement.

In the field of patent law, derivative algorithms could potentially be patented if they meet the criteria of being new, non-obvious, and useful. However, the patent would only cover the novel aspects of the derivative algorithm, not the foundational algorithm from which it was derived. The original algorithm’s patent holder retains their rights, and any use of the derivative algorithm that employs the original algorithm’s patented aspects would require permission or licensing from the original patent holder.

Derived and derivative algorithms may also be subject to trade secret protection, which safeguards confidential information that provides a competitive advantage to its owner. Unlike patents, trade secrets do not require registration or public disclosure but do necessitate reasonable measures to maintain secrecy. For example, a company may employ non-disclosure agreements, encryption, or physical security measures to protect its proprietary algorithms.

AI-Processed and Derived Data

The advent of AI has ushered in a new era of data analytics, presenting both unique opportunities and challenges in the domain of IP rights. AI’s ability to generate “derived data” or “usage data” has far-reaching implications that intersect with multiple legal frameworks, including copyright, trade secrets, and potentially even patent law. This intersectionality adds a layer of complexity to the issue of data ownership, underscoring the critical need for explicit contractual clarity in licensing agreements and Data Use Agreements (DUAs).

AI-processed and derived data can manifest in various forms, each with unique characteristics. Extracted data refers to data culled from larger datasets for specific analyses. Restructured data has been reformatted or reorganized to facilitate more straightforward analysis. Augmented data is enriched with additional variables or parameters to provide a more comprehensive view. Inferred data involves the creation of new variables or insights based on the analysis of existing data. Lastly, modeled data has been transformed through ML models to predict future outcomes or trends. Importantly, these data types often contain new information or insights not present in the original dataset, thereby adding multiple layers of value and utility.
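
To make these categories concrete, the short sketch below shows how extracted, restructured, augmented, and inferred data can each be produced from the same original dataset. It is a minimal illustration in Python using pandas; the customer records and column names are hypothetical and not drawn from any dataset discussed in this article.

# Illustrative sketch (hypothetical columns) of several forms of derived data.
import pandas as pd

original = pd.DataFrame({
    "customer_id": [1, 2, 3, 4],
    "region": ["west", "east", "west", "south"],
    "monthly_spend": [120.0, 340.0, 95.0, 410.0],
    "visits": [4, 12, 3, 15],
})

# Extracted data: a subset pulled out of the larger dataset for a specific analysis.
extracted = original[["customer_id", "monthly_spend"]]

# Restructured data: the same information reorganized to ease analysis.
restructured = original.pivot_table(values="monthly_spend", index="region", aggfunc="mean")

# Augmented data: the original records enriched with an additional variable.
augmented = original.assign(spend_per_visit=original["monthly_spend"] / original["visits"])

# Inferred data: a new insight not present in the original records.
# (Modeled data would go one step further, using an ML model trained on these records.)
inferred = original.assign(high_value=original["monthly_spend"] > original["monthly_spend"].median())

print(inferred)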

The benefits of using AI-processed and derived data can be encapsulated in three main points. First, AI algorithms can clean, sort, and enrich data, enhancing its quality. Second, the insights generated by AI can add significant value to the original data, rendering it more useful for various applications. Third, AI-processed data can catalyze new research, innovation, and product development avenues.

Conversely, the challenges in data ownership are multifaceted. First, AI-processed and derived data often involves a complex web of multiple stakeholders, including data providers, AI developers, and end users, which can complicate the determination of ownership rights. Second, the rapidly evolving landscape of AI and data science leads to a lack of clear definitions for terms like “derived data,” thereby introducing potential ambiguities in legal agreements. Third, given the involvement of multiple parties, it becomes imperative to establish clear and consistent definitions and agreements that meticulously outline the rights and responsibilities of each stakeholder.


Unfortunately, but as predicted earlier this year, the Department of Justice (DOJ) has shown no signs of pausing use of the False Claims Act (FCA) as a tool to enforce cybersecurity compliance.

On September 5, 2023, DOJ announced an FCA settlement with Verizon Business Network Services LLC based on Verizon’s failure to comply with cybersecurity requirements with respect to services provided to federal agencies. Verizon contracted with the government to provide secure internet connections but fell short of certain Trusted Internet Connections (TIC) requirements.

Compared to the approximately $9 million Aerojet settlement in 2022, Verizon's approximately $4.1 million settlement suggests ways companies can reduce liability or mitigate damages. For example, Verizon cooperated and self-disclosed its shortcomings, and the government emphasized the company's level of cooperation and self-disclosure in its press release.

Even as cybersecurity requirements become more complex, tried and true compliance strategies remain key to mitigating damages. Companies should encourage a culture of self-reporting and agency.

Establish and Advertise Self-Reporting Hotline Programs

A self-reporting hotline is often a key component of an effective corporate compliance and ethics program. In companies with an internal hotline, studies have found that tips account for over half of all fraud detection. A best practice is to consider making the hotline anonymous, as anonymity often generates more calls. Importantly, make sure employees know that the hotline is the appropriate place to report any cybersecurity concerns. Although this advice might sound obvious to lawyers and compliance professionals, employees may not realize cybersecurity issues should be reported through the hotline. Publicize the hotline at meetings, in newsletters, on intranet sites, and anywhere else employees will see it.

Promote a Sense of Agency Throughout the Organization

Employees tend to report concerns only when they feel a sense of agency, or otherwise feel that their reported concerns are being addressed. This, of course, starts with the tone at the top. Make sure all individuals — from the top down — feel like their cybersecurity concerns are being heard and addressed, as appropriate. Consider ways to show that cybersecurity complaints are taken seriously — perhaps by consistently addressing cybersecurity concerns at staff meetings or otherwise publicizing the work done to ameliorate employees’ concerns.

To avoid potential FCA liability, companies need to be aware of any cybersecurity requirements in their government contracts, including how compliance is certified and how to monitor and report any cybersecurity incidents. When cybersecurity concerns are reported, whether or not they are corroborated, companies must follow up on the complaint and with the complainant. Companies must consider ways to "close the feedback loop" and develop a system to follow up with complainants and keep them informed about what the company has done to address their concerns. Companies must take the investigation seriously and involve experienced cyber and investigations counsel sooner rather than later. Counsel can help determine whether a written self-disclosure to a government agency is necessary, help craft the strategy, and guide an investigation that may ultimately reduce liabilities or mitigate damages.


This summer, a proposed amendment to the Controlled Substances Act known as the Cooper Davis Act (the “act”) is making its way through congressional approvals and causing growing dissension between and among parents, consumer safety advocates, and anti-drug coalitions on one hand, and the DEA, privacy experts, and constitutional scholars on the other.

As currently written, the act would require certain social media, email, and other electronic platforms and remote computing companies (the "service providers") to report suspected violations of the Controlled Substances Act to the United States attorney general.

The act is named for Cooper Davis, a Kansas teen who died after ingesting half of a counterfeit prescription pain pill that he had allegedly purchased through SnapChat. Subsequent testing revealed that the pill contained a lethal dose of fentanyl. The act, introduced with bipartisan support, proposes to bolster the federal government’s ability to detect and prosecute illegal internet drug trafficking by holding social media, email and other internet companies accountable for the activity conducted on their platforms.

The act’s main function is to impose a reporting obligation on the electronic service providers with respect to activity occurring on their platforms, if and to the extent they have knowledge of the activity. The act applies to any service that provides users with the ability to send or receive wire or electronic communications, and/or computer storage or processing services (18 USC § 2258E). These definitions seem to sweep every internet-based company into the act’s purview. However, the impact of the act hinges not on who the act captures, but rather, what duty these companies have and how this duty will be exercised.

The act, as proposed, targets "the unlawful sale or distribution of fentanyl, methamphetamine, or the unlawful sale, distribution, or manufacture of a counterfeit controlled substance" by imposing reporting requirements on service providers. A service provider must report unlawful sales when: (1) it obtains actual knowledge of any facts or circumstances of an unlawful sale as defined above; or (2) a user of the service provider alleges an unlawful sale and the service provider, upon review, has a reasonable belief that the alleged facts or circumstances constituting an unlawful sale exist. A service provider also may report unlawful circumstances: (1) after obtaining actual knowledge of any facts or circumstances indicating that unlawful activity may be imminent; or (2) if the service provider reasonably believes that any facts or circumstances of unlawful activity exist.

A service provider's actual knowledge of the unlawful activities allows (and in some situations requires) the service provider to report information about the individual using the internet platform for unlawful purposes, including the individual's geographic location, information relating to how and when the unlawful activity was discovered by the service provider, data relating to the violation, and the complete communication containing the intent to commit a violation of the act. There are penalties for a service provider's failure to report: a service provider that knowingly and willfully fails to make a required report will be fined no more than $190,000 in the case of an initial knowing and willful failure to make a report, and no more than $380,000 in the case of any second or subsequent knowing and willful failure to make a report.

In this way, the act captures the companies and conduct necessary to provide greater protection to consumers, including minors like Cooper Davis. However, by creating the duty to report, the act requires service providers to serve as a surveillance agent for the U.S. Department of Justice. Without further clarification or rulemaking, service providers will be left to determine, on their own and without a consistent industry standard, what constitutes actual knowledge of unlawful activity, and in what instance (if ever) knowledge will be imputed to a service provider based on evidence contained on their platform. The structure of the act was heavily debated in the Full Committee Executive Business Meeting that took place on July 13, 2023, and for good reason. At its worst, the act was described as “deputizing” tech companies to serve as law enforcement, without warrants or other procedures in place to protect citizens or prevent unnecessary disclosure of a user’s private information. Alternatively, consumer safety advocates may argue that the act does not go far enough, and is unnecessarily favorable to service providers at almost every turn. For example, the trigger for a mandatory report is actual knowledge on the part of the service provider, not strict liability or the mere occurrence of unlawful activity on the platform.

Further, the monetary amount of any penalty for failing to report is minimal compared to the earnings reported by many of the tech industry giants who fall within the definition of a service provider.

From a compliance perspective, companies that fall within the definition of electronic communication service providers and remote computing services should be aware that the Cooper Davis Act could become law and impose additional reporting requirements. Practically, however, companies retain substantial autonomy in crafting the policies used to identify and adequately report unlawful activity under the act. As with other amendments to the Controlled Substances Act, the language as written leaves its application unpredictable, and enforcement action is often the most practical way to discern the contours of the amendment. So, the impact of the act, and how companies can prepare for it, remains to be seen. The act's good intentions but unsteady enforcement mechanisms are reminiscent of the Ryan Haight Act, another act promulgated to keep teens safe from controlled substances on the internet. The Ryan Haight Act also has yet to be applied in a predictable manner following the COVID-19 public health emergency.

The act is a significant step toward protecting the public from controlled substance distribution via the internet. However, much is left to be worked out regarding the means, scope, and constitutionality of law enforcement’s surveillance of online activity in our increasingly digital world.


Machine learning (ML) models are a cornerstone of modern technology, learning from and making predictions based on vast amounts of data. These models have become integral to various industries in an era of rapid technological innovation, driving unprecedented advancements in automation, decision-making, and predictive analysis. The reliance on large amounts of data, however, raises significant concerns about privacy and data security. While the benefits of ML are manifold, they are not without accompanying challenges, particularly in relation to privacy risks. The intersection of ML with privacy laws and ethical considerations forms a complex legal landscape ripe for exploration and scrutiny. This article explores the privacy risks associated with ML, privacy in the context of California's privacy legislation, and countermeasures to these risks.

Privacy Attacks on ML Models

There are several distinct types of attacks on ML models, four of which target the privacy of protected information.

  1. Model Inversion Attacks constitute a sophisticated privacy intrusion where an attacker endeavors to reconstruct original input data by reverse-engineering a model's output. A practical illustration might include an online service recommending films based on previous viewing habits. Through this method, an attacker could deduce an individual's past movie choices, uncovering private information such as race, religion, nationality, and gender. This type of information can be used to perpetrate social engineering schemes, in which known information is used to build sham trust and ultimately extract sensitive data from an individual. In other contexts, such an attack on more sensitive targets can lead to substantial privacy breaches, exposing information such as medical records, financial details, or personal preferences. This exposure underscores the importance of robust safeguards and understanding the underlying ML mechanisms.
  2. Membership Inference Attacks involve attackers discerning whether an individual's personal information was utilized in training a specific algorithm, such as a recommendation system or health diagnostic tool (a minimal sketch of this type of attack appears after this list). An analogy might be drawn to an online shopping platform, where an attacker infers that a person was part of a customer group based on recommended products, thereby learning about shopping habits or even more intimate details. These types of attacks harbor significant privacy risks, extending across various domains like healthcare, finance, and social networks. The accessibility of Membership Inference Attacks, often not requiring intricate knowledge of the target model's architecture or original training data, amplifies their threat. This accessibility reinforces the necessity for interdisciplinary collaboration and strategic legal planning to mitigate these risks.
  3. Reconstruction Attacks aim to retrieve the original training data by exploiting the model’s parameters. Imagine a machine learning model as a complex, adjustable machine that takes in data (like measurements, images, or text) and produces predictions or decisions. The parameters are the adjustable parts of this machine that are fine-tuned to make it work accurately. During training, the machine learning model adjusts these parameters so that it gets better at making predictions based on the data it is trained on. These parameters hold specific information about the data and the relationships within the data. A Reconstruction Attack exploits these parameters by analyzing them to work backward and figure out the original training data. Essentially, the attacker studies the settings of the machine (parameters) and uses them to reverse-engineer the data that was used to set those parameters in the first place.
    For instance, in healthcare, ML models are trained on sensitive patient data, including medical histories and diagnoses. These models fine-tune internal settings or parameters, creating a condensed data representation. A Reconstruction Attack occurs when an attacker gains unauthorized access to these parameters and reverse-engineers them to deduce the original training data. If successful, this could expose highly sensitive information, such as confidential medical conditions.
  4. Attribute Inference Attacks constitute attempts to guess or deduce specific private attributes, such as age, income, or health conditions, by analyzing related information. Consider, for example, a fitness application that monitors exercise and diet. An attacker employing this method might infer private health information by analyzing this data. Such attacks have the potential to unearth personal details that many would prefer to remain confidential. The ramifications extend beyond privacy, with potential consequences including discrimination or bias. The potential impact on individual rights and the associated legal complexities emphasizes the need for comprehensive legal frameworks and technological safeguards.
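
To ground these descriptions, the following minimal sketch shows the basic mechanics of a confidence-based Membership Inference Attack: an overfit model tends to be more confident about records it was trained on, and an attacker can exploit that gap. The synthetic data, scikit-learn model, and 0.9 confidence threshold are illustrative assumptions only, not a reference to any real system.

# Minimal sketch of a confidence-thresholding membership inference attack.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Toy dataset split into "members" (used for training) and "non-members".
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_members, X_nonmembers, y_members, y_nonmembers = train_test_split(
    X, y, test_size=0.5, random_state=0
)

# A deliberately overfit model widens the confidence gap the attacker exploits.
model = RandomForestClassifier(n_estimators=50, random_state=0)
model.fit(X_members, y_members)

def infer_membership(model, records, threshold=0.9):
    """Guess 'member' whenever the model's top predicted probability is high."""
    confidences = model.predict_proba(records).max(axis=1)
    return confidences >= threshold

# The attacker flags records the model is unusually confident about.
member_guesses = infer_membership(model, X_members)
nonmember_guesses = infer_membership(model, X_nonmembers)
print(f"Flagged as members (true members):     {member_guesses.mean():.2f}")
print(f"Flagged as members (true non-members): {nonmember_guesses.mean():.2f}")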

ML Privacy under California Privacy Laws

Organizations hit by attacks targeting ML models, like the ones described, could find themselves directly violating California laws concerning consumer data privacy. The California Consumer Privacy Act (CCPA) enshrines the right of consumers to request and obtain detailed information regarding the personal data collected and processed by a business entity. This fundamental right, however, is not without potential vulnerabilities. Particularly, Model Inversion Attacks, which reverse-engineer personal data, pose a tangible risk. By enabling unauthorized access to such information, these attacks may impede or compromise the exercise of this essential right. The CCPA further affords consumers the right to request the deletion of personal information, mandating businesses to comply with such requests. Membership Inference Attacks can reveal the inclusion of specific data within training sets, potentially undermining this right. The exposure of previously deleted data could conflict with the statutory obligations under the CCPA. To safeguard consumers’ personal information, the CCPA also obligates businesses to implement reasonable security measures. Successful attacks on ML models, such as those previously described, might be construed as a failure to fulfill this obligation. Such breaches could precipitate non-compliance, attracting potential legal liabilities.

The California Privacy Rights Act (CPRA) amends the CCPA and introduces rigorous protections for Sensitive Personal Information (SPI). This category encompasses specific personal attributes, including, but not limited to, financial data, health information, and precise geolocation. Attribute Inference Attacks, through the unauthorized disclosure of sensitive attributes, may constitute a direct contravention of these provisions, signifying a significant legal breach. Focusing on transparency, the CPRA sheds light on automated decision-making processes, insisting on clarity and openness. Unauthorized inferences stemming from various attacks could undermine this transparency, thereby impacting consumers' legal rights to comprehend the underlying logic and implications of decisions that bear upon them. Emphasizing responsible data stewardship, the CPRA enforces data minimization and purpose limitation principles. Attacks that reveal or infer personal information can transgress these principles, manifesting potential excesses in data collection and utilization beyond the clearly stated purposes by exposing data that is not relevant to the models' intended purposes. For example, an attacker could use a model inversion attack to reconstruct the face image of a user from their name, which is not needed for the facial recognition model to function. Moreover, an attacker could use an attribute inference attack to infer a user's political orientation or sexual preference from their movie ratings, information the user never agreed to share when using the movie recommendation model.

Mitigating ML Privacy Risk

Considering California privacy laws, as well as other state privacy laws, legal departments within organizations must develop comprehensive and adaptable strategies. These strategies must encompass clear and enforceable agreements with third-party vendors, internal policies reflecting state law mandates, data protection impact assessments, and actionable incident response plans to mitigate potential breaches. Continuous monitoring of the evolving legal landscape at the state and federal level ensures alignment with existing obligations and prepares organizations for future legal developments.

The criticality of technological defenses cannot be overstated. Implementing safeguards such as advanced encryption, stringent access controls, and other measures forms a robust shield against privacy attacks and legal liabilities. More broadly, the intricacies of complying with the CCPA and CPRA require an in-depth understanding of both technological functionalities and legal stipulations. A cohesive collaboration among legal and technical experts and other stakeholders, such as business leaders, data scientists, privacy officers, and consumers, is essential to marry legal wisdom to technological and practical acumen. Interdisciplinary dialogue ensures that legal professionals comprehend the technological foundations and practical use cases of ML while technologists grasp the legal parameters and ethical considerations embedded in the CCPA and CPRA.
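
One narrow, illustrative example of such a safeguard is hardening what a prediction endpoint exposes: returning only the top label and a coarsened confidence score reduces the signal available to the confidence-based attacks sketched earlier. The sketch below is a simplified illustration under our own assumptions, not a complete or legally sufficient defense.

# Minimal sketch of output hardening for a hypothetical prediction endpoint:
# expose only the top label and a rounded confidence rather than the full
# probability vector, limiting what inversion or membership inference
# attacks can learn from repeated queries.
import numpy as np

def hardened_prediction(model, record, decimals=1):
    """Return the top class and a coarsened confidence for a single record."""
    probs = model.predict_proba(record.reshape(1, -1))[0]
    top_class = int(np.argmax(probs))
    coarse_confidence = round(float(probs[top_class]), decimals)
    return {"label": top_class, "confidence": coarse_confidence}

# Example (reusing the model and data from the earlier sketch):
# print(hardened_prediction(model, X_members[0]))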

Staying ahead of technological advancements and legal amendments requires constant vigilance. The CPRA's emphasis on transparency and consumer rights underscores the importance of effective collaboration, adherence to industry best practices, regular risk assessments, transparent engagement with regulators and stakeholders, and the other principles that govern artificial intelligence, i.e., accountability, fairness, accuracy, and security. Organizations should adopt privacy-by-design and privacy-by-default approaches that embed privacy protections into the design and operation of ML models.

The Future of ML Privacy Risks

The intersection of technology and law, as encapsulated by privacy attacks on ML models, presents a vibrant and complex challenge. Navigating this terrain in the era of the CCPA and CPRA demands an integrated, meticulous approach, weaving together legal strategies, technological safeguards, and cross-disciplinary collaboration.

Organizations stand at the forefront of this evolving landscape, bearing the responsibility to safeguard individual privacy and uphold legal integrity. The path forward becomes navigable and principled by fostering a culture that embraces compliance, vigilance, and innovation and by aligning with the specific requirements of the CCPA and CPRA. The challenges are numerous and the stakes significant, yet with prudent judgment, persistent effort, and a steadfast dedication to moral values, triumph is not merely attainable, it becomes a collective duty and a communal achievement.

For more information and other updates and alerts regarding privacy law developments, subscribe to Bradley’s privacy blog Online and On Point.


There is a great YouTube video that has been circulating the internet for half a decade that reimagines a heuristically programmed algorithmic computer “HAL-9000” as Amazon Alexa in an iconic scene from “2001: A Space Odyssey.” In the clip, “Dave” asks Alexa to “open the pod bay doors,” to which Alexa responds with a series of misunderstood responses, from looking for “cod” recipes to playing The Doors on Spotify. The mock clip adds a bit of levity to an otherwise terrifying picture of AI depicted in the original scene.

The field of artificial intelligence has rapidly evolved since the 1968 film that depicted HAL. Today, artificial intelligence is embedded in many of our day-to-day activities (such as Alexa). As artificial intelligence continues to grow, more companies are looking to create policies and procedures to govern the use of this technology.

As we embark on this new “odyssey,” what makes a smart, thoughtful, and well-reasoned artificial intelligence policy? With new AI terms such as “bootstrap aggregating” and “greedy algorithms,” how do companies ensure that policies are useful and understandable to all employees who need to be apprised of them?

Why Do Organizations Need an AI Policy?

As artificial intelligence seeps into our workday, the first step for organizations is to determine what legal and regulatory standards should be considered. For example, using sensitive and proprietary data can implicate data privacy and information security concerns or increase certain risks if AI is used in a particular way. In the same vein, the use of artificial intelligence can introduce bias, discrimination, or ethical considerations that should be taken into account in any comprehensive AI policy.

A thoughtful AI policy seeks to mitigate these risks while allowing the business to innovate and incorporate the latest technologies to maximize efficiency. A robust AI policy aims to stay abreast of the rapid pace of AI evolution while reducing the potential for regulatory or legal risks. Thus, an AI policy is no longer a futuristic concept but a strategic imperative, given the challenges associated with integrating AI into business operations.

What Type of AI Policy Do You Need?

Typically, we see three distinct types of AI policies: enterprise-level (corporate) policy, third-party or vendor-level policy, and product-level policy.

  • The enterprise-level (corporate) policy, or a set of guidelines and regulations, ensures an organization’s ethical, legal, and responsible use of AI technologies. An enterprise-level policy is essential when an organization heavily relies on AI across its operations and business units.
  • The third-party or vendor-level policy provides a framework for vetting and onboarding AI vendors. A vendor-level policy becomes increasingly crucial when an organization outsources AI solutions or integrates third-party AI into its workflow.
  • The product-level policy outlines applicable use criteria for specific types of AI or specific AI products. A product-level policy is essential when an organization offers distinct AI-powered products or services or uses specific AI tools with unique capabilities and risks.

Components of an Effective AI Policy

Creating an accessible and effective AI policy requires a multifaceted approach. This includes translating complex AI terms into plain language, developing user-friendly guides with visual aids, and offering tailored training and education for all staff levels.

Encouraging interdepartmental feedback and collaboration ensures the policy’s relevance and alignment with technological advances. Utilizing established frameworks such as the U.S. National Institute of Standards and Technology (NIST) AI Risk Management Framework (AI RMF) further aids in defining and communicating policies, making complex AI concepts approachable, and fostering a policy that resonates across the organization.

A Case Study: Creating an AI Policy Using AI RMF 1.0

NIST recently unveiled the AI RMF, a vital guide for responsibly designing, implementing, utilizing, and assessing AI systems. The AI RMF Core fosters dialogue, comprehension, and procedures to oversee AI risks and cultivate dependable and ethical AI systems, and it is structured around four core functions: GOVERN, MAP, MEASURE, and MANAGE. Utilizing these core functions in creating an AI policy can enable organizations to build a foundation for trustworthy AI systems that are valid, reliable, safe, secure, resilient, accountable, transparent, explainable, interpretable, privacy-enhanced, and fair with managed biases.

  • Govern: The AI policy must clearly define roles and responsibilities related to AI risk management, establish guiding principles, procedures, and practices, and foster a culture centered on risk awareness. This includes outlining processes for consistent monitoring and reporting on AI risk management performance, ensuring the policy remains relevant and practical.
    • Ensure Strong Governance: Establish clear roles, responsibilities, and principles for AI risk management, fostering a culture that values risk awareness and ethical integrity.
  • Map: This foundational stage defines the AI system’s purpose, scope, and intended use. Organizations must identify all relevant stakeholders and analyze the system’s environment, lifecycle stages, and potential impacts. The AI policy should reflect this mapping, detailing the scope of AI’s application, including geographical and temporal deployment, and those responsible for its operation.
    • Start with Clear Mapping: Outline the scope, purpose, stakeholders, and potential impacts of AI systems, ensuring that the policy reflects the detailed context of AI’s deployment.
  • Measure: The policy must specify the organization’s approach to quantifying various aspects of AI systems, such as inputs, outputs, performance metrics, risks, and trustworthiness. This includes defining the metrics, techniques, tools, and frequency of assessments. The policy should articulate principles and processes for evaluating AI’s potential impact on financial, operational, reputational, and legal aspects, ensuring alignment with organizational risk tolerance.
    • Implement Robust Measurement Procedures: Define clear metrics and methodologies to evaluate AI systems, considering various dimensions, including risks, performance, and ethical considerations.
  • Manage: This phase requires the organization to implement measures to mitigate identified risks. The policy should outline strategies for managing AI-related risks, incorporating control mechanisms, protective measures, and incident response protocols. These strategies must be harmonized with the organization’s overall risk management approach, providing precise AI system monitoring and maintenance guidelines.
    • Build Effective Management Strategies: Develop strategies for managing AI risks, ensuring alignment with broader organizational objectives, and integrating control mechanisms and responsive protocols (an illustrative policy skeleton follows this list).
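
As a purely illustrative way to operationalize the four core functions, a policy team might keep a machine-readable outline alongside the written policy document. The sketch below is hypothetical; its owners, controls, and review cadences are assumptions for illustration and are not prescribed by the AI RMF.

# Hypothetical, simplified skeleton of an AI policy organized around the
# AI RMF core functions; owners, controls, and cadences are illustrative only.
from dataclasses import dataclass

@dataclass
class PolicySection:
    function: str             # GOVERN, MAP, MEASURE, or MANAGE
    owner: str                # role accountable for the function
    controls: list[str]       # obligations the policy imposes
    review_cadence_days: int  # how often the section is revisited

ai_policy = [
    PolicySection("GOVERN", "Chief Privacy Officer",
                  ["Define AI risk roles and escalation paths",
                   "Report AI risk metrics to leadership quarterly"], 90),
    PolicySection("MAP", "Product Owner",
                  ["Document each AI system's purpose, scope, and stakeholders",
                   "Record deployment context and lifecycle stage"], 180),
    PolicySection("MEASURE", "Data Science Lead",
                  ["Track accuracy, bias, and drift metrics for each system",
                   "Assess financial, operational, reputational, and legal impact"], 90),
    PolicySection("MANAGE", "Security Lead",
                  ["Maintain incident response protocols for AI failures",
                   "Apply controls proportionate to assessed risk"], 30),
]

for section in ai_policy:
    print(f"{section.function}: owned by {section.owner}; reviewed every {section.review_cadence_days} days")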

Conclusion

Creating a solid AI policy is a systematic process that requires thoughtful integration of ethical principles, transparent practices, risk management, and governance. Implementing the AI RMF can enable a holistic approach to AI policy creation, aligning with ethical integrity and robust security. By balancing the multifaceted characteristics of trustworthy AI, organizations can navigate complex tradeoffs, making transparent and justifiable decisions that reflect the values and contexts relevant to their operations. The AI RMF serves as a vital guide to managing AI risks responsibly and developing AI systems that are socially and organizationally coherent, enhancing the overall effectiveness and impact of AI operations.

For more information and other updates and alerts regarding privacy law developments, subscribe to Bradley’s privacy blog Online and On Point.


California, home to the highest number of registered vehicles in the U.S., is at the forefront of a critical issue – the privacy practices of automobile manufacturers and vehicle technology firms.

The California Privacy Protection Agency (CPPA), the state's privacy enforcement authority, has announced that it is launching an enforcement initiative. This initiative seeks to scrutinize the burgeoning pool of data accumulated by connected vehicles and assess whether the commercial practices of the firms gathering this data align with state regulations. The announcement signals a crucial priority in privacy enforcement, highlighting the escalating focus on personal data management within the automotive industry.

Connected vehicles can accumulate a plethora of data through built-in apps, sensors, and cameras. As Ashkan Soltani, the executive director of CPPA, aptly describes, “Modern vehicles are effectively connected computers on wheels.” These vehicles monitor not only the occupants but also individuals in proximity. Location data, personal preferences, and information about daily routines are readily available. The implications are wide ranging; data can facilitate extensive consumer profiling, anticipate driving behavior, influence insurance premiums, and even assist urban planning and traffic studies.

While the commercial value of this data is undeniable, concerns about its management are growing. California’s enforcement announcement aims to probe this area, demanding transparency and compliance from automobile manufacturers. The CPPA will investigate whether these companies provide adequate transparency to consumers and honor their rights, including the right to know what data is being collected, the right to prohibit its dissemination, and the right to request its deletion. This type of regulatory scrutiny could also trickle down to the vast commercial network of supply, logistics, trucking, construction, and other industries that use tracking technologies in vehicles.

This concern extends beyond U.S. borders. European regulators have urged automobile manufacturers to modify their software to restrict data collection and safeguard consumer privacy. For instance, Porsche has introduced a feature on its European vehicles' dashboards that allows drivers to grant or withdraw consent for the company to collect personal data or share it with third-party suppliers. Furthermore, European regulators have launched investigations into the automotive industry's use of personal data from vehicles, including location information.

In the wake of an investigation by the Dutch privacy regulator, Tesla has amended the default settings of their vehicles’ external security cameras to remain inactive until a driver enables the outside recording function. Moreover, the camera settings now store only the last 10 minutes of recorded footage, in lieu of an hour of data previously collected. The Dutch regulatory body also stated that it infringes on privacy for the cameras to record individuals outside the vehicles without their consent. In response, Tesla’s new update includes features that alert passengers and bystanders when the external cameras are operating by blinking the vehicle’s headlights and displaying a notification on the car’s internal touchscreen. Such European investigations may indeed inform California’s regulatory approach.

However, the privacy landscape of connected cars is intricate. Automobile manufacturers, satellite radio companies, providers of in-car navigation or infotainment systems, and insurance firms are all part of this complex ecosystem. For example, Stellantis, the parent company of Chrysler, recently established Mobilisights to license data to various clients, including competitor car manufacturers, subject to strict privacy safeguards and customer consent to data usage.

As the CPPA conducts its first investigation, it marks a critical juncture, potentially shaping the future of privacy regulations and practices in the automotive industry, as well as in the broader ecosystem of mobile technologies. California's initiative is not just a state issue — it could indicate a broader trend toward stricter regulation and enforcement in the sector. As connected cars become more common, regulators, the industry, and consumers must all navigate this complex landscape with a sharp focus on privacy.

For more information and other updates and alerts regarding privacy law developments, subscribe to Bradley’s privacy blog Online and On Point.


The construction sector is known for its perennial pursuit of efficiency, quality, and safety. In recent years, one of the tools the sector has started leveraging to achieve these goals is predictive maintenance (PM), specifically the implementation of artificial intelligence (AI) within this practice. This approach, combined with continuous advancements in AI, is revolutionizing the construction industry, promising substantial gains in operational efficiency and cost savings. However, with these developments come an array of cybersecurity threats and complex legal and regulatory challenges that must be addressed. Part 1 of this two-part series discusses the role of PM in the construction sector, and Part 2 takes a deeper look at the cybersecurity risks and vulnerabilities relating to PM's use in the sector.

The Role of PM in the Construction Sector

At its core, PM in the construction industry relies on data-driven insights to anticipate potential equipment failures, allowing proactive measures to be taken that prevent significant downtime or exorbitant repair costs. This principle is applied across a diverse array of machinery and equipment, from colossal cranes and bulldozers to intricate electrical systems and HVAC units.

Critical to this innovative process is AI technology, which is employed to scrutinize vast volumes of data gathered from Internet of Things (IoT) sensors integrated into the machinery. Such an approach starkly contrasts with conventional maintenance practices, which tend to be reactive rather than proactive. The advent of AI-enabled PM can revolutionize this paradigm, enabling construction companies to minimize downtime, enhance safety standards, and effectuate considerable cost savings.

For instance, the integration of worker-generated data from wearable devices introduces another layer of complexity and sophistication, significantly expanding the scope of data being analyzed. These wearable devices precisely record a variety of parameters, including physical exertion levels, heart rate, and environmental exposure information, which directly pertain to an individual’s private health and personal details. Alongside machinery-related data, the physiological and environmental metrics gathered by these wearables are continuously fed into the AI system, bolstering its predictive capabilities. This intricate data, when collected and analyzed, yields invaluable insights into the conditions under which machinery operates. In certain instances, these observations can even serve as an early warning system for potential equipment issues. For instance, consistently high stress levels indicated by a worker’s wearable device while operating a specific piece of equipment could suggest an underlying machine problem that needs to be addressed.

In another use case, consider an AI-driven PM system processing vibration data from a crane’s motor. By applying machine learning to historical patterns, the system can deduce that a specific bearing is likely to malfunction within a certain timeframe. The alert generated by this prediction isn’t based solely on machinery data; it can also incorporate data from the crane operator’s wearable device, revealing elevated stress levels as the bearing begins to fail. This timely alert empowers the maintenance team to rectify the issue before it escalates into a significant breakdown or, even more detrimentally, a safety incident.
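
The sketch below illustrates this kind of pipeline in miniature. The simulated sensor history, feature names, and 0.5 alert threshold are hypothetical assumptions chosen for illustration; they do not describe any particular vendor's PM product.

# Minimal sketch of an AI-driven predictive maintenance check on simulated
# crane-motor and wearable data (hypothetical features and thresholds).
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)

# Simulated history: motor vibration (RMS), bearing temperature, and the
# operator's wearable-reported heart rate, labeled by whether the bearing
# failed within the following 30 days.
n = 1000
vibration_rms = rng.normal(1.0, 0.3, n)
bearing_temp_c = rng.normal(60, 8, n)
operator_hr = rng.normal(85, 12, n)
failed_within_30d = ((vibration_rms > 1.3) & (bearing_temp_c > 65)).astype(int)

X = np.column_stack([vibration_rms, bearing_temp_c, operator_hr])
model = GradientBoostingClassifier(random_state=0).fit(X, failed_within_30d)

# A new reading arrives from the live IoT feed; flag the asset for maintenance
# when the predicted failure probability crosses an agreed threshold.
latest_reading = np.array([[1.45, 70.0, 110.0]])
failure_prob = model.predict_proba(latest_reading)[0, 1]
if failure_prob > 0.5:
    print(f"Maintenance alert: estimated 30-day failure risk {failure_prob:.0%}")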

Risks of Predictive Maintenance

The rise in PM adoption simultaneously escalates the potential cybersecurity threats. The high volume of data transferred and stored, coupled with an increasing risk of data breaches and cyber-attacks, brings about grave concerns. Hackers could infiltrate PM systems, steal sensitive data, cause disruption, or manipulate the data fed into AI systems to yield incorrect predictions, causing substantial harm. IoT devices, which act as the primary data sources for AI-driven maintenance systems, also present considerable cybersecurity vulnerabilities if not appropriately secured. Despite being invaluable for PM, these devices, ranging from simple machinery sensors to sophisticated wearables, have several weak points due to their inherent design and function.

PM users also face complicated new questions of privacy, liability, and compliance with industry-specific regulations. Ownership of the data that AI systems train on is the subject of intense legal debate; regulations such as the EU's General Data Protection Regulation (GDPR) and the California Privacy Rights Act (CPRA) impose penalties for failing to properly anonymize and manage data. The questions of liability in the case of an accident and of compliance with construction-specific regulations will also be key.

The Future of PM in Construction

Looking ahead, the use of AI in PM is expected to become even more sophisticated and widespread. The continuing development of AI technologies, coupled with the growth of IoT devices and the rollout of high-speed 5G and 6G networks, will facilitate the collection and analysis of even larger data volumes, leading to even more accurate and timely predictions.

Furthermore, as AI systems become more capable of learning and adapting, they will increasingly be able to optimize their predictions and recommendations over time based on feedback and additional data. We can also expect to see increased integration between PM systems and other technological trends in construction, such as digital twins and augmented reality. For instance, a digital twin of a construction site could include real-time data on the status of various pieces of equipment, and AR devices could be used to visualize potential issues and guide maintenance work.

PM, powered by AI, holds immense promise for the construction industry. It has the potential to greatly increase efficiency, reduce costs, and improve safety. However, it also brings with it significant cybersecurity threats and legal and regulatory challenges. As the industry continues to embrace this technology, it will be crucial to address these issues, striking a balance between innovation and security, compliance, and liability.

For more information and other updates and alerts regarding privacy law developments, subscribe to Bradley’s privacy blog Online and On Point.