California, home to the highest number of registered vehicles in the U.S., is at the forefront of a critical issue – the privacy practices of automobile manufacturers and vehicle technology firms.

The California Privacy Protection Agency (CPPA), the state’s privacy enforcement authority, has announced that it is launching an enforcement initiative. This initiative seeks to scrutinize the burgeoning pool of data accumulated by connected vehicles and to assess whether the commercial practices of the firms gathering this data align with state regulations. The announcement marks a clear privacy enforcement priority and highlights the escalating focus on personal data management within the automotive industry.

Connected vehicles can accumulate a plethora of data through built-in apps, sensors, and cameras. As Ashkan Soltani, the executive director of the CPPA, aptly puts it, “Modern vehicles are effectively connected computers on wheels.” These vehicles monitor not only their occupants but also individuals in proximity. Location data, personal preferences, and information about daily routines are readily available. The implications are wide-ranging; the data can facilitate extensive consumer profiling, anticipate driving behavior, influence insurance premiums, and even assist urban planning and traffic studies.

While the commercial value of this data is undeniable, concerns about its management are growing. California’s enforcement announcement aims to probe this area, demanding transparency and compliance from automobile manufacturers. The CPPA will investigate whether these companies provide adequate transparency to consumers and honor their rights, including the right to know what data is being collected, the right to prohibit its dissemination, and the right to request its deletion. This type of regulatory scrutiny could also trickle down to the vast commercial network of supply, logistics, trucking, construction, and other industries that use tracking technologies in vehicles.

This concern extends beyond U.S. borders. European regulators have urged automobile manufacturers to modify their software to restrict data collection and safeguard consumer privacy. For instance, Porsche has introduced a dashboard feature on its European vehicles that allows drivers to grant or withdraw consent for the company to collect their personal data or share it with third-party suppliers. Furthermore, European regulators have launched investigations into the automotive industry’s use of personal data from vehicles, including location information.

In the wake of an investigation by the Dutch privacy regulator, Tesla has amended the default settings of its vehicles’ external security cameras to remain inactive until a driver enables the outside recording function. Moreover, the camera settings now store only the last 10 minutes of recorded footage, rather than the hour of footage previously collected. The Dutch regulatory body also stated that recording individuals outside the vehicles without their consent infringes on their privacy. In response, Tesla’s new update includes features that alert passengers and bystanders when the external cameras are operating by blinking the vehicle’s headlights and displaying a notification on the car’s internal touchscreen. Such European investigations may well inform California’s regulatory approach.

However, the privacy landscape of connected cars is intricate. Automobile manufacturers, satellite radio companies, providers of in-car navigation or infotainment systems, and insurance firms are part of this complex ecosystem. For example, Stellantis, the parent company of Chrysler, recently established Mobilisights to license data to various clients, including competitor car manufacturers, under strict privacy safeguards and customer data usage consent.

The CPPA’s first investigation marks a critical juncture, potentially shaping the future of privacy regulations and practices in the automotive industry, as well as in mobile technologies more broadly. California’s initiative is not just a state issue — it could signal a broader trend toward stricter regulation and enforcement in the sector. As connected cars become more common, regulators, the industry, and consumers must all navigate this complex landscape with a sharp focus on privacy.

For more information and other updates and alerts regarding privacy law developments, subscribe to Bradley’s privacy blog Online and On Point.

The construction sector is known for its perennial pursuit of efficiency, quality, and safety. In recent years, one of the tools the sector has started leveraging to achieve these goals is predictive maintenance (PM), specifically the implementation of artificial intelligence (AI) within this practice. This approach, combined with continuous advancements in AI, is revolutionizing the construction industry, promising substantial gains in operational efficiency and cost savings. However, with these developments comes an array of cybersecurity threats and complex legal and regulatory challenges that must be addressed. Part 1 of this two-part series discusses the role of PM in the construction sector, and Part 2 goes deeper into the cybersecurity risks and vulnerabilities relating to PM’s use in the sector.

The Role of PM in the Construction Sector

At its core, PM in the construction industry relies on data-driven insights to anticipate potential equipment failures, allowing proactive measures to be taken that prevent significant downtime or exorbitant repair costs. This principle is applied across a diverse array of machinery and equipment, from colossal cranes and bulldozers to intricate electrical systems and HVAC units.

Critical to this innovative process is AI technology, which is employed to scrutinize vast volumes of data gathered from Internet of Things (IoT) sensors integrated into the machinery. Such an approach starkly contrasts with conventional maintenance practices, which tend to be reactive rather than proactive. The advent of AI-enabled PM can revolutionize this paradigm, enabling construction companies to minimize downtime, enhance safety standards, and effectuate considerable cost savings.

The integration of worker-generated data from wearable devices introduces another layer of complexity and sophistication, significantly expanding the scope of data being analyzed. These wearable devices record a variety of parameters, including physical exertion levels, heart rate, and environmental exposure information, which directly pertain to an individual’s private health and personal details. Alongside machinery-related data, the physiological and environmental metrics gathered by these wearables are continuously fed into the AI system, bolstering its predictive capabilities. This intricate data, when collected and analyzed, yields invaluable insights into the conditions under which machinery operates. In certain instances, these observations can even serve as an early warning system for potential equipment issues. For instance, consistently high stress levels indicated by a worker’s wearable device while operating a specific piece of equipment could suggest an underlying machine problem that needs to be addressed.

In another use case, consider an AI-driven PM system processing vibration data from a crane’s motor. By applying machine learning to historical patterns, the system can deduce that a specific bearing is likely to malfunction within a certain timeframe. The alert generated by this prediction isn’t based solely on machinery data; it can also incorporate data from the crane operator’s wearable device, revealing elevated stress levels as the bearing begins to fail. This timely alert empowers the maintenance team to rectify the issue before it escalates into a significant breakdown or, even more detrimentally, a safety incident.
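To make this pattern concrete, here is a minimal sketch of the kind of alert logic just described. It is purely illustrative: the class names, units, thresholds, and sample readings are hypothetical assumptions, and a production system would rely on trained machine learning models and far richer sensor streams rather than a simple statistical baseline.

    # Illustrative sketch of AI-driven PM alert logic -- not any vendor's actual
    # system. All names, units, thresholds, and sample data are hypothetical.
    from dataclasses import dataclass
    from statistics import mean, stdev

    @dataclass
    class Reading:
        vibration_mm_s: float   # crane motor vibration velocity (hypothetical unit)
        operator_stress: float  # normalized 0-1 stress score from a wearable

    def maintenance_alert(history: list[Reading], latest: Reading,
                          z_threshold: float = 3.0,
                          stress_threshold: float = 0.8) -> bool:
        """Flag a possible bearing problem when vibration is a statistical
        outlier versus the historical baseline, or when moderately elevated
        vibration coincides with elevated operator stress."""
        baseline = [r.vibration_mm_s for r in history]
        mu, sigma = mean(baseline), stdev(baseline)
        z = (latest.vibration_mm_s - mu) / sigma if sigma else 0.0
        if z > z_threshold:
            return True  # vibration alone is anomalous
        return z > z_threshold / 2 and latest.operator_stress >= stress_threshold

    # Hypothetical usage: a stable baseline, then mildly elevated vibration
    # corroborated by a high stress score from the operator's wearable.
    history = [Reading(2.1 + 0.1 * (i % 3), 0.3) for i in range(50)]
    print(maintenance_alert(history, Reading(2.4, 0.85)))  # prints True

The design point is the fusion step: in the usage example, neither signal alone crosses its alert threshold, but the combination does, mirroring how wearable data can corroborate machinery readings.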

Risks of Predictive Maintenance

The rise in PM adoption simultaneously escalates the potential cybersecurity threats. The high volume of data transferred and stored, coupled with an increasing risk of data breaches and cyber-attacks, brings about grave concerns. Hackers could infiltrate PM systems, steal sensitive data, cause disruption, or manipulate the data fed into AI systems to yield incorrect predictions, causing substantial harm. IoT devices, which act as the primary data sources for AI-driven maintenance systems, also present considerable cybersecurity vulnerabilities if not appropriately secured. Despite being invaluable for PM, these devices, ranging from simple machinery sensors to sophisticated wearables, have several weak points due to their inherent design and function.

PM users also face complicated new questions of privacy, liability, and compliance with industry-specific regulations. Ownership of the data that AI systems train on is a site of intense legal debate; regulations such as the EU’s General Data Protection Regulation (GDPR) and the California Privacy Rights Act (CPRA) impose penalties for failing to properly anonymize and manage data. The question of liability in the case of an accident, and of compliance with construction-specific regulations, will also be key.

The Future of PM in Construction

Looking ahead, the use of AI in PM is expected to become even more sophisticated and widespread. The continuing development of AI technologies, coupled with the growth of IoT devices and the rollout of high-speed 5G and 6G networks, will facilitate the collection and analysis of even larger data volumes, leading to even more accurate and timely predictions.

Furthermore, as AI systems become more capable of learning and adapting, they will increasingly be able to optimize their predictions and recommendations over time based on feedback and additional data. We can also expect to see increased integration between PM systems and other technological trends in construction, such as digital twins and augmented reality. For instance, a digital twin of a construction site could include real-time data on the status of various pieces of equipment, and AR devices could be used to visualize potential issues and guide maintenance work.

PM, powered by AI, holds immense promise for the construction industry. It has the potential to greatly increase efficiency, reduce costs, and improve safety. However, it also brings with it significant cybersecurity threats and legal and regulatory challenges. As the industry continues to embrace this technology, it will be crucial to address these issues, striking a balance between innovation and security, compliance, and liability.

For more information and other updates and alerts regarding privacy law developments, subscribe to Bradley’s privacy blog Online and On Point.

As cyber threats have evolved and expanded, they have become a pressing risk for organizations across sectors, and there is more urgency than ever for companies to remain vigilant and prepared. Cybersecurity incidents can carry legal implications and lead to substantial financial losses, and board members must be increasingly involved and knowledgeable on cybersecurity to safeguard the company’s reputation – and their own. Tabletop exercises are a potent tool to help identify and address gaps, increase cooperation on cybersecurity goals, and build organizational “muscle memory” for responding to threats.

Risks for Companies and Boards

An indispensable component of cyber preparedness is the active engagement of organizational leadership, especially the board of directors. Insufficient cyber preparedness can result in serious legal implications for both the company and the board, including shareholder actions and derivative lawsuits. These mistakes can not only threaten the organization’s reputation and lead to substantial financial losses, but also affect the reputations of individual board members. This is especially significant for board members who serve on multiple boards, as their professional reputation and credibility are at stake. A derivative action against them could harm their standing across all the boards they serve.

An engaged and well-informed board is vital to building a resilient cyber defense and plays a critical role in mitigating the risk of legal actions. By actively participating in the cyber readiness process, the board can demonstrate its commitment to protecting the company and its stakeholders from cyber threats. When properly documented, this display of due diligence becomes a powerful defense against potential shareholder litigation or derivative lawsuits. It protects not just the company’s assets and reputation but also the board members’ personal reputations, reinforcing the importance of their roles in an increasingly interconnected corporate landscape.

Using Tabletops for Organizational Insights

Tabletop exercises offer a powerful platform to practice and evaluate response strategies to hypothetical cyber incidents. These simulated scenarios serve as a systematic, interactive, and low-risk method for teams to pinpoint vulnerabilities in existing protocols, improve coordination, and critically assess the decision-making process during crises. A recent study by the National Association of Corporate Directors underscores this imperative: 48% of company boards reported conducting a cyber-centric exercise in the year leading up to the survey.

These exercises generate valuable metrics such as response times, decision accuracy, coordination efficiency, and communication effectiveness. Gathering these insights over several exercises helps organizations discern patterns, track progress, and identify gaps that need to be addressed. More qualitatively, these exercises can give organizations insight into the subtleties of team dynamics, decision-making, and communication. Gaps or weaknesses in any of these areas are vulnerabilities that cyber criminals can exploit as entry points to a company’s systems or facilities.

Tabletop exercises have additional benefits beyond identifying weaknesses in cyber preparedness. The exercises also allow stakeholders across different departments to collaborate, fostering an integrated communication culture within an organization. This practice, critical for effective cyber preparedness, does carry certain risks, including potential miscommunications and diverging departmental priorities. To address these challenges, organizations must prioritize establishing a structured, transparent communication system that mitigates such risks. Most importantly, tabletop exercises can allow organizations to develop a “cybersecurity muscle memory.” By running through different scenarios and discussing various response strategies, organizations can strengthen their ability to detect, mitigate, and recover from security breaches.

Making Tabletops Work for You

Tabletops are not “one-and-done” exercises. For maximum impact, companies should integrate them into annual plans, adapting the scenarios to the rapidly evolving cyber threat landscape. Regular reviews of the exercises, incorporation of lessons learned, and ongoing adjustments based on new threat intelligence are vital components of robust cyber preparedness. For companies uncertain about their starting point, tabletop exercises can be customized and scaled to meet the organization’s unique needs and risks. As the company evolves, the exercises can be tailored to tackle more complex scenarios and challenges. This customization ensures the exercises remain relevant, focusing on the company’s cybersecurity objectives. The surge in cyber threats underscores the need for leadership’s proactive approach to cybersecurity. Tabletop exercises are valuable tools that help corporate leaders and the board witness firsthand the effectiveness of the organization’s incident response capabilities and, thus, understand the risks they individually face.

For more information and other updates and alerts regarding privacy law developments, subscribe to Bradley’s privacy blog Online and On Point.

Kayla Tran is a co-author of this post and is a Summer Associate at Bradley.

In recent years, the Lone Star State has been vigilant in enacting cybersecurity and data privacy laws to protect individuals and businesses from the disastrous effects of a data breach. Here is a timeline of previous cybersecurity and data privacy legislation enacted by the Texas Legislature:

  • 2007: Identity Theft Enforcement and Protection Act – requires businesses to “implement and maintain reasonable procedures” to protect consumers from the unlawful use of personal information.
  • 2009: Biometric Privacy Act – requires businesses to obtain consent from consumers before capturing any biometric identifiers.
  • 2012: Medical Records Privacy Act – protects patients from the disclosure of their information without consent.
  • 2017: Student Data Privacy Act – further protects students by restricting school websites from “engaging in targeted advertising based on personally identifiable student information.”
  • 2017: Texas Cybercrime Act – assesses criminal penalties for the intentional interruption or suspension of another person’s access to a computer system or network without consent.
  • 2017: Texas Cybersecurity Act – sets forth “specific measures to protect sensitive and confidential data [to] maintain cyberattack readiness.”
  • 2019: Texas Privacy Protection Act (HB 4390) – amends existing data breach notification obligations and creates an advisory council to study and evaluate privacy laws in the state.

Now, the Texas Data Privacy and Security Act has made Texas one of almost a dozen states to pass comprehensive privacy legislation. On May 28, 2023, the act passed the Texas State House and Senate, and on June 18, 2023, Gov. Greg Abbott signed it into law. The act is set to take effect on July 1, 2024.

The purpose of the act is to protect the personal data of “consumers who [are] residents of the state of Texas acting in an individual or household context.” The act will provide consumers with stronger individual rights to (1) confirm whether a controller is processing their personal data; (2) correct any discrepancies in their personal data; (3) delete personal data provided or obtained; (4) obtain a copy of the personal data they previously provided, in a portable and readily usable format, so long as the data is available digitally and the request is technically feasible; (5) opt out of the processing of their personal data for targeted advertising; and (6) appeal a controller’s refusal to respond to such requests.

Personal data in the act includes any information, including sensitive data, that is linked or can be reasonably linked to an identified or identifiable person. Personal data includes pseudonymous data when the data “is used by a controller in conjunction with additional information that reasonably links the data to an identified or identifiable individual.” Personal data specifically does not include “deidentified data or publicly available information.”

Who does the act apply to?

The act has a broad scope of application as it applies to organizations that (1) conduct business in Texas or produce products or services that are consumed by the residents of Texas; (2) process or engage in the sale of personal data; and (3) are not defined by the United States Small Business Administration (SBA) as a small business. However, if an organization meets the first two requirements, but is defined as a small business, it must still comply with a section of the act that requires small businesses to first obtain consumer consent for the sale of sensitive personal data.

The act will not apply to individuals acting in a commercial or employment context, as it only protects consumers acting in an individual or household capacity. As a result, it is not triggered in the business-to-business or employment context. The act also includes a list of exceptions and exemptions, including state agencies, higher education institutions, nonprofit organizations, and entities governed by the Health Insurance Portability and Accountability Act (HIPAA) or the Gramm-Leach-Bliley Act.

Any problems?

One problem with the act is its use of the SBA’s definition of a small business. The SBA defines “small business” differently depending on the specific industry a company is in, so the act leaves uncertainty about which businesses are actually covered. Additionally, the act applies to businesses that provide services that are “consumed by” rather than “targeted at” Texas residents, so many organizations will be surprised to learn that the act may apply to them.

It is important to note that the act does not create a private right of action for individuals. The act is enforced and governed solely by the Texas attorney general. The act includes an initial 30-day cure period to remedy alleged violations, but if a violation is not remedied within those 30 days, a civil fine of up to $7,500 can be assessed for each violation. On top of that, the cure period does not sunset, and the attorney general’s office is entitled to recover reasonable attorneys’ fees and other reasonable expenses resulting from investigating and bringing an enforcement action under the act.

So, what does all of this mean for businesses operating in Texas?

With almost every new law comes new obligations. Here are a few things that businesses (controllers) should pay close attention to:

  • Sensitive data or personal data obtained by a controller for a purpose that is not reasonably necessary or compatible with the disclosed purpose can only be processed with a consumer’s consent. This consent must be a clear affirmative act, signaling that the consumer is freely giving specific, informed, and unambiguous consent to process their personal data. It is undetermined whether consent by a consumer can be withdrawn.
  • In certain scenarios, a business must include a “reasonably accessible and clear” privacy notice to its consumers. This notice must include “(1) the categories of personal data processed by the controller; (2) the purpose for processing personal data; (3) how consumers may exercise their consumer rights, including the appeal process; (4) the categories of personal data shared with third parties; (5) the categories of third parties with whom the data is shared; and (6) a description of the methods through which consumers can submit requests to exercise their consumer rights.” Additionally, if any of the shared personal data is sensitive, the following notice must be included: “We may sell your sensitive personal data.”
  • Businesses must conduct and document a data protection assessment for data with a higher risk of harm. This assessment must weigh the potential risks to consumer rights against any direct/indirect benefit, mitigated by safeguards, and must consider the use of deidentified data, processing context, and most importantly, reasonable consumer expectation.
  • If a business is able to show that the data needed to identify pseudonymous personal data of a consumer is kept separately and subject to technical and organizational controls that prevent the business from accessing the information, then that business has no obligation to the consumer regarding such pseudonymous data. 
  • A business can choose to authenticate a consumer’s requests to exercise their rights under the act. If the business cannot authenticate a consumer’s request, then the business is not required to comply with the consumer’s request.

While Texas is just one of many states that have now enacted comprehensive legislation to further protect consumers’ personal data, it is clear that things are changing, and state legislative bodies are recognizing the importance of consumer privacy. With this in mind, Texas businesses need to ensure that they are in compliance with the act. We’re just here to spread the message: Failure to comply can result in civil penalties assessed by the attorney general of Texas.

For more information and other updates and alerts regarding privacy law developments, subscribe to Bradley’s privacy blog, Online and On Point.

*Kayla Tran is not a licensed attorney.

The Department of Defense Inspector General (DoDIG) recently released its “Audit of the DoD’s Implementation and Oversight of the Controlled Unclassified Information [CUI] Program” (DODIG-2023-078). The audit highlights some of DoD’s challenges in implementing the CUI Program and provides recommendations on how to make the program work better. The DoD’s response to the DoDIG’s audit recommendations will likely impact federal contractors working on contracts that handle CUI, including increased oversight and auditing, as well as increased training and reporting requirements.

What is CUI?

CUI is information created or possessed by or for the government that requires safeguarding or dissemination controls according to applicable laws, regulations, and government-wide policies; CUI is not classified information. This audit was requested by the Senate Armed Services Committee due to “concern that DoD Components were using limited dissemination controls [LDCs] without having a legitimate rationale, thereby limiting transparency.” Essentially, Congress wasn’t as concerned with the improper dissemination of CUI as with DoD’s over-marking and use of the CUI Program to limit access to information.

Background

Before summarizing the important findings of the audit, let’s briefly review the history of the government-wide CUI Program, and DoD’s implementation thereof, starting with Executive Order 13556 issued in 2010.

EO 13556 aimed to standardize the way the entire executive branch handled unclassified information that requires safeguarding or dissemination controls. Prior to the establishment of the CUI Program, there were dozens of different programs and marking protocols administered by different agencies and DoD components, such as the most popular: For Official Use Only (FOUO), Sensitive But Unclassified (SBU), and Law Enforcement Sensitive (LES). The CUI Program, administered primarily by the National Archives, attempts to reduce the many marking and dissemination programs into a single, government-wide program, although many will note that these markings persist in some pockets of government, despite over a decade of regulatory intent.

DoD, for its part, most recently issued DoD Instruction 5200.48, which clarified previous DoD policy and established “the DoD CUI Program requirements for designating, marking, handling, and decontrolling CUI,” as well as created a requirement for CUI training. The DoD Office of the Under Secretary of Defense for Intelligence and Security (OUSD(I&S)) promulgated the guidance but left the implementation of the CUI Program to the various DoD components.

Audit Findings

The audit found:

  • DoD components did not effectively oversee the implementation of guidance to ensure that CUI documents and emails contained the required markings.
  • DoD components did not effectively oversee DoD and contractor personnel’s completion of the appropriate CUI training.
  • This implementation and oversight failure occurred because the DoD components did not have mechanisms in place to ensure that CUI documents and emails included the required markings, and the OUSD(I&S) did not require the DoD components to test, as part of their annual reporting process, a sample of CUI documents to verify whether the documents contained the required markings.
  • In addition, not all of the DoD components and contracting officials tracked whether their personnel completed the required CUI training.
  • The use of improper or inconsistent CUI markings and the lack of training can increase the risk of the unauthorized disclosure of CUI or unnecessarily restrict the dissemination of information and create obstacles to authorized information sharing.
  • Furthermore, the DoD will not meet the intent of Executive Order 13556 to standardize the way the executive branch handles CUI.

In sum, the DoDIG found that DoD components routinely either over-marked information that was not properly considered CUI or improperly marked information that was CUI. A lack of training and tracking mechanisms compounded both findings. The DoDIG made 14 recommendations for improvement, six of which remain “unresolved” pending additional comments and coordination with DoD management, meaning that a revised version of the audit report is expected later this year that incorporates management comments and tracks the resolution of outstanding recommendations.

Why Are the Audit Findings Important?

For defense contractors, these audit findings are important because they have real-world impact on contractors’ responsibilities and potential expenses under Defense Federal Acquisition Regulation Supplement (DFARS) clause 252.204-7012, which requires contractors that maintain CUI to implement the security controls specified in National Institute of Standards and Technology (NIST) Special Publication (SP) 800-171. Contractors responsible for the physical and cybersecurity safeguarding of CUI on their systems rely on DoD component program offices to properly identify and notify contractors of DoD CUI at the time of contract award and throughout the life of the contract when handling CUI. DoDI 5200.48 also requires contractors who handle CUI to receive initial and annual refresher training that meets certain CUI learning objectives. The audit notes that while contractors were more compliant with their training responsibilities, the DoD components were not auditing or tracking these training requirements, which increased the risk of noncompliance.

If DoD components prioritize their CUI Programs and follow the recommendations of the DoDIG audit, this could result in increased programmatic and contracting offices’ focus on the information safeguarding compliance regime, NIST controls, and CUI training for contractors.

For contractors who believe that customer CUI Programs are over-marking information and data — unnecessarily increasing compliance burdens and limiting transparency — this audit provides substantive and rhetorical support to push back on over-marked information during requests to decontrol.

Conclusion

The government-wide CUI Program is over a decade old and continues to evolve, be refined, and experience growing pains. This DoDIG audit is another milestone in the CUI Program’s growth and refinement.

This audit is also timely. As recent high-profile classified information leak prosecutions have made the news, there has been an increased focus on all levels of sensitive information safeguarding, including CUI.

Improving the management of the CUI Program is particularly important because the CUI regime operates in the liminal space where both Congress and interested parties want a perfect balance between protection of proper CUI and heightened transparency for everything else. This Goldilocks conundrum for the CUI Program will continue to generate friction among all parties: congressional and IG oversight, agencies implementing and managing the CUI Program, contractors managing and safeguarding data, and the public and media pursuing open and transparent government ideals.

If you have any questions about this noteworthy development, please do not hesitate to contact Nathaniel Greeson, Andy Elbon, or Matthew Flynn.    

For more information and other updates and alerts regarding privacy law developments, subscribe to Bradley’s privacy blog Online and On Point.

In an era where our lives are ever more intertwined with technology, the security of digital platforms is a matter of national concern. A recent large-scale cyberattack affecting several U.S. federal agencies and numerous other commercial organizations emphasizes the criticality of robust cybersecurity measures.

The Intrusion

On June 7, 2023, the Cybersecurity and Infrastructure Security Agency (CISA) identified the exploitation by “Threat Actor 505” (TA505) of a previously unknown (zero-day) vulnerability in MOVEit, file transfer software used by a broad range of companies to securely transfer files between organizations. Darin Bielby, managing director at Cypfer, explained that the number of affected companies could be in the thousands: “The Cl0p ransomware group has become adept at compromising file transfer tools. The latest being MOVEit on the heels of past incidents at GoAnywhere. Upwards of 3000 companies could be affected. Cypfer has already been engaged by many companies to assist with threat actor negotiations and recovery.”

CISA, along with the FBI, advised that “[d]ue to the speed and ease TA505 has exploited this vulnerability, and based on their past campaigns, FBI and CISA expect to see widespread exploitation of unpatched software services in both private and public networks.”

Although CISA did not comment on the perpetrator behind the attack, suspicion has fallen on a Russian-speaking ransomware group known as Cl0p. Much as in the SolarWinds case, the attackers exploited vulnerabilities in widely used software, managing to infiltrate an array of networks.

Wider Implications

The Department of Energy was among the many federal agencies compromised, with records from two of its entities being affected. A spokesperson for the department confirmed they “took immediate steps” to alleviate the impact and notified Congress, law enforcement, CISA, and the affected entities.

This attack has ramifications beyond federal agencies. Johns Hopkins University’s health system reported a possible breach of sensitive personal and financial information, including health billing records. Georgia’s statewide university system is investigating the scope and severity of the hack affecting them.

Internationally, the BBC, British Airways, and Shell have also been victims of this hacking campaign. This highlights the global nature of cyber threats and the necessity of international collaboration in cybersecurity.

Cl0p claimed credit for some of the hacks in a campaign that began two weeks ago. Interestingly, the group took the unusual step of stating that it erased the data taken from government entities and has “no interest in exposing such information.” Instead, its primary focus remains extorting victims for financial gain.

Still, although every file transfer service based on MOVEit could have been affected, that does not mean that every file transfer service based on MOVEit was affected. Threat actors exploiting the vulnerability would likely have had to independently target each file transfer service that employs the MOVEit platform. Thus, companies should determine whether their secure file transfer services rely on the MOVEit platform and whether any indicators exist that a threat actor exploited the vulnerability.
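As a first pass at that determination, the sketch below illustrates one way to flag potential MOVEit reliance from a software inventory export. It is a hypothetical example only: the CSV file, its columns, and the matching logic are assumptions for illustration, and a real review would draw on asset management systems, vendor contracts, and forensic indicators of compromise.

    # Illustrative sketch only: scan a hypothetical CSV software inventory for
    # entries suggesting reliance on the MOVEit platform.
    import csv

    def find_moveit_entries(inventory_path):
        """Return inventory rows in which any field mentions MOVEit."""
        matches = []
        with open(inventory_path, newline="", encoding="utf-8") as f:
            for row in csv.DictReader(f):
                text = " ".join(str(v) for v in row.values() if v).lower()
                if "moveit" in text:
                    matches.append(row)
        return matches

    # Hypothetical inventory file with columns such as host, product, version.
    for row in find_moveit_entries("software_inventory.csv"):
        print("Possible MOVEit dependency:", row)

Any hits from a check like this would simply mark systems for closer review by security teams and counsel, not establish that the vulnerability was actually exploited.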

A Flaw Too Many

The attackers exploited a zero-day vulnerability that likely exposed the data that companies uploaded to MOVEit servers for seemingly secure transfers. This highlights how a single software vulnerability can have far-reaching consequences if manipulated by adept criminals. Progress, the U.S. firm that owns MOVEit, has urged users to update their software and issued security advice.

Notification Requirements

This exploitation likely creates notification requirements for the myriad affected companies under the various state data breach notification laws and some industry-specific regulations. Companies that own consumer data and share that data with service providers are not absolved of notification requirements merely because the breach occurred in the service provider’s environment. Organizations should engage counsel to determine whether their notification requirements are triggered.

A Call to Action

This cyberattack serves as a reminder of the sophistication and evolution of cyber threats. Organizations using the MOVEit software should analyze whether this vulnerability has affected any of their or their vendors’ operations.

With increasing dependency on digital platforms, cybersecurity is no longer an option but a necessity. In a world where the next cyberattack is a matter not of “if” but “when,” it is time for a proactive approach to securing our digital realms. Organizations across sectors must prioritize cybersecurity, which involves staying current with the latest security patches and ensuring adequate protective measures and response plans are in place.

On May 16, 2023, the U.S. Senate Judiciary Committee conducted a significant oversight hearing on the regulation of artificial intelligence (AI) technology, specifically focusing on newer models of generative AI that create new content from large datasets. The hearing was chaired by Sen. Richard Blumenthal for the Subcommittee on Privacy, Technology, and the Law. He opened the hearing with an audio-cloned statement generated by ChatGPT to demonstrate ominous risks associated with social engineering and identity theft. Notable witnesses included Samuel Altman, CEO of OpenAI, Christina Montgomery, chief privacy and trust officer at IBM, and Gary Marcus, professor emeritus of Psychology and Neural Science at New York University – each of whom advocated for the regulation of AI in different ways. 

Altman advocated for the establishment of a new federal agency responsible for licensing AI models according to specific safety standards and monitoring certain AI capabilities. He emphasized the need for global AI standards and controls and described the safeguards implemented by OpenAI in the design, development, and deployment of its ChatGPT product. He explained that before deployment and with continued use, ChatGPT undergoes independent audits, as well as ongoing safety monitoring and testing. He also discussed how OpenAI forgoes the use of personal data in ChatGPT to lessen privacy risks, while noting that how end users deploy an AI product affects all of the risks and challenges that AI presents.

Montgomery from IBM supported a “precision regulation” approach, focusing on specific use cases and addressing risks rather than broadly regulating the technology itself, citing the proposed EU AI Act as an example of this approach. While Montgomery highlighted the need for clear regulatory guidance for AI developers, she stopped short of advocating for a federal agency or commission. Instead, she described the AI licensure process as obtaining a “license from society” and stressed the importance of transparency for users so they know when they are interacting with AI, though she noted that IBM’s models are more B2B than consumer-facing. She advocated for a “reasonable care” standard to hold AI systems accountable. Montgomery also discussed IBM’s internal governance framework, which includes a lead AI officer and an ethics board, as well as impact assessments, transparency of data sources, and user notification when interacting with AI.

Marcus argued that the current court system is insufficient for regulating AI and expressed the need for new laws governing AI technologies and strong agency oversight. He proposed an agency similar to the Food and Drug Administration (FDA) with the authority to monitor AI and conduct safety reviews, including the ability to recall AI products. Marcus also recommended increased funding for AI safety research, both in the short term and long term.

The senators seemed poised to regulate AI in this Congress, whether through an agency or via the courts, and expressed bipartisan concerns about deployment and uses of AI that pose significant dangers that require intervention. Furthermore, the importance of technology and organizational governance rules was underscored, with the recommendation of adopting cues from the EU AI Act in taking a strong leadership position and a risk-based approach. During the hearing, there were suggestions to incorporate constitutional AI by emphasizing the upfront inclusion of values in the AI models, rather than solely focusing on training them to avoid harmful content.

The senators debated the necessity of a comprehensive national privacy law to provide essential data protections for AI, with proponents for such a bill on both sides of the aisle. They also discussed the potential regulation of social media platforms that currently enjoy exemptions under Section 230 of the Communications Decency Act of 1996, specifically addressing the issue of harms to children. The United States finds itself at a critical juncture where the evolution of technology has outpaced the development of both regulatory frameworks and case law. As Congress grapples with the task of addressing the risks and ensuring the trustworthiness of AI, technology companies and AI users are taking the initiative to establish internal ethical principles and standards governing the creation and deployment of artificial and augmented intelligence technologies. These internal guidelines serve as a compass for organizational conduct, mitigating the potential for legal repercussions and safeguarding against negative reputational consequences in the absence of clear legal guidelines.

Effective July 1, 2023, a new Florida law will prohibit certain health care providers from storing patient information offshore. CS/CS/SB 264 (Chapter 2023-33, Laws of Florida) amends the Florida Electronic Health Records Exchange Act to require health care providers who use certified electronic health record technology to ensure that patient information is physically maintained in the continental United States or its territories or Canada.

The law broadly applies to “all patient information stored in an offsite physical or virtual environment,” including patient information stored through third-party or subcontracted computing facilities or cloud computing service providers. Further, it applies to all qualified electronic health records that are stored using any technology that can allow information to be electronically retrieved, accessed, or transmitted.

The new law is limited to health care providers listed below who use “certified electronic health record technology” or CEHRT – a term of art applicable to technology certified to the certification criteria adopted by the U.S. Department of Health and Human Services (HHS):

  • Certain entities licensed by the Florida Agency for Health Care Administration (AHCA), including hospitals, healthcare clinics, ambulatory surgical centers, home health agencies, hospices, home medical equipment providers, nursing homes, assisted living facilities, intermediate care facilities for persons with developmental disabilities, laboratories authorized to perform testing under the Drug-Free Workplace Act, birth centers, abortion clinics, crisis stabilization units, short-term residential treatment facilities, residential treatment facilities, residential treatment centers for children and adolescents, nurse registries, companion services or homemaker services providers, adult day care centers, adult family-care homes, homes for special services, transitional living facilities, prescribed pediatric extended care centers, healthcare services pools, and organ, tissue, and eye procurement organizations;
  • Certain licensed health care practitioners, including physicians, physician assistants, anesthesiologist assistants, pharmacists, dentists, chiropractors, podiatrists, naturopathic physicians, nursing home administrators, optometrists, registered nurses, advanced practice registered nurses, psychologists, clinical social workers, marriage and family therapists, mental health counselors, physical therapists, speech language pathologists, audiologists, occupational therapists, respiratory therapists, dieticians, orthotists, prosthetists, electrologists, massage therapists, licensed clinical laboratory personnel, medical physicists, genetic counselors, opticians, certified radiologic personnel, and acupuncturists;
  • Licensed pharmacies;
  • Certain mental health and substance abuse service providers and their clinical and nonclinical staff who provide inpatient or outpatient services;
  • Licensed continuing care facilities; and
  • Home health aides.

At this time, the HHS certification program includes inpatient EHRs for hospitals and ambulatory EHRs for eligible health care providers, the only provider types eligible to participate in the Centers for Medicare and Medicaid Services (CMS) payment programs requiring CEHRT. While other health care providers such as ambulatory surgery centers, pharmacies, long-term post-acute care providers, home health and hospice are not eligible to participate in those CMS payment programs, they arguably fall within the scope of the Florida offshoring prohibition if they “utilize” CEHRT. Further, given its broad language, the statute could technically be read as covering all patient information stored by a health care provider utilizing CEHRT, even if that patient information is stored in an application that is not so certified.

The new law also amends Florida’s Health Care License Procedures Act to require entities submitting an initial or renewal licensure application to AHCA to sign an affidavit attesting under the penalty of perjury that the entity is in compliance with the new requirement that patient information be stored in the continental United States or its territories or Canada. Entities licensed by AHCA must remain in compliance with the data storage requirement or face possible disciplinary action by AHCA.

Furthermore, the new law requires an entity licensed by AHCA to ensure that a person or entity who possesses a controlling interest in the licensed entity does not hold, either directly or indirectly, an interest in an entity that has a business relationship with a “foreign country of concern” or that is subject to section 287.135, Florida Statutes, which prohibits local governments from contracting with certain scrutinized companies. “Foreign country of concern” is defined by the new law as “the People’s Republic of China, the Russian Federation, the Islamic Republic of Iran, the Democratic People’s Republic of Korea, the Republic of Cuba, the Venezuelan regime of Nicolás Maduro, or the Syrian Arab Republic, including any agency of or any other entity of significant control of such foreign country of concern.”

Tennessee has joined the growing number of states that have enacted comprehensive data privacy laws. On the final day of this year’s legislative session, the Tennessee legislature passed the Tennessee Information Protection Act (TIPA), and Governor Bill Lee signed TIPA into law on May 11, 2023.  

TIPA marks a significant development in data privacy for businesses operating in the state. This comprehensive legislation grants consumers enhanced control over their personal information while establishing stringent responsibilities for businesses and service providers. Navigating TIPA’s extensive requirements is crucial for maintaining your company’s compliance and reputation.

Here are key takeaways from the bill passed by the legislature:

  • Entities Affected: The law affects entities that conduct business in Tennessee or provide products or services to Tennessee residents, exceed $25 million in revenue, and meet one of these criteria:
    • Control or process information of 25,000 or more Tennessee consumers per year and derive more than 50% of gross revenue from the sale of personal information; or
    • Control or process information of at least 175,000 Tennessee consumers.
  • Consumer Rights: TIPA creates consumer rights to confirm, access, correct, delete, or obtain a copy of their personal information, or opt out of specific uses of their data. Controllers must respond to authenticated consumer requests within 45 days, with a possible 45-day extension, and establish an appeal process for refusals to take action on requests. If the controller cannot authenticate the consumer’s request, they can ask for additional information to do so.
  • Data Controller Responsibilities: Controllers must limit data collection and processing to what is necessary, maintain data security practices, avoid discrimination, and obtain consent for processing sensitive data. Controllers must provide a clear and accessible privacy notice detailing their practices, and, if selling personal information or using it for targeted advertising, disclose these practices and provide an opt-out option. Controllers must also offer a secure and reliable means for consumers to exercise their rights without requiring consumers to create a new account.
  • Controller–Processor Requirements: Processors must adhere to controllers’ instructions and assist them in meeting their obligations, including responding to consumer rights requests and providing necessary information for data protection assessments. Contracts between controllers and processors must outline data processing procedures, including confidentiality, data deletion or return, compliance demonstration, assessments, and subcontractor engagement. The determination of whether a person is acting as a controller or processor depends on the context and specific processing of personal information.
  • Data Protection Assessments: Controllers must conduct and document data protection assessments for specific data processing activities involving personal information. These assessments must weigh the benefits and risks of processing, with certain factors considered. Assessments are confidential, exempt from public disclosure, and not retroactive.
  • De-Identified Data Exemptions: Controllers must take measures to ensure that de-identified data cannot be associated with a natural person, publicly commit to not reidentifying data, and contractually obligate recipients to comply with the law. Consumer rights do not apply to pseudonymous data under certain conditions, and controllers must exercise oversight of disclosed pseudonymous or de-identified data.
  • Major Similarities to CCPA: TIPA shares many similarities with the CCPA, including (but not limited to):
    • Granting consumers the right to access, delete, and opt out of the sale of their personal information, and requiring businesses to provide notice of their data collection and usage practice;
    • Requiring controllers and processors to enter into contracts outlining the terms and conditions of data processing and obligating subcontractors to meet the obligations of the processor; and
    • Requiring data protection assessments for certain processing activities, weighing the benefits and risks associated with the processing.
  • Affirmative Defense: TIPA provides for an “affirmative defense” against violations of the law by adhering to a written privacy policy that conforms to the NIST privacy framework or comparable standards. The privacy program’s scale and scope must be appropriate based on factors such as business size, activities, personal information sensitivity, available tools, and compliance with other laws. In addition, certifications from the Asia Pacific Economic Cooperation’s Cross-Border Privacy Rules and Privacy Recognition for Processors systems may be considered in evaluating the program.
  • Enforcement: The Tennessee Attorney General retains exclusive enforcement authority for TIPA; the law expressly states that there is no private right of action. The Tennessee Attorney General must provide 60 days’ written notice and an opportunity to cure before initiating enforcement action. If the alleged violations are not cured, the Tennessee Attorney General may file an action and seek declaratory and/or injunctive relief, civil penalties up to $7,500.00 for each violation, reasonable attorney’s fees and investigative costs, and treble damages in the case of a willful or knowing violation.
  • Dates and Deadlines: TIPA becomes effective on July 1, 2025.
  • Exemptions: The law includes numerous exemptions, including (but not limited to):
    • Government entities;
    • Financial institutions, their affiliates, and data subject to the Gramm-Leach-Bliley Act (GLBA);
    • Insurance companies;
    • Covered entities, business associates, and protected health information governed by the Health Insurance Portability and Accountability Act (HIPAA) and/or the Health Information Technology for Economic and Clinical Health Act (HITECH);
    • Nonprofit organizations;
    • Higher education institutions; and
    • Personal information that is subject to other laws such as the Children’s Online Privacy Protection Act (COPPA), the Family Educational Rights and Privacy Act (FERPA), and the Fair Credit Reporting Act (FCRA).

Despite its extensive carve-outs, TIPA grants consumers broad rights over their personal information and places stringent compliance obligations on businesses (controllers) and service providers (processors). Businesses should start planning for compliance now to avoid costly enforcement actions down the road.

For more information and other updates and alerts regarding privacy law developments, subscribe to Bradley’s privacy blog, Online and On Point.

Federally insured credit unions are now required to report a cyber incident to the National Credit Union Administration (NCUA) Board within 72 hours. This final rule was unanimously approved by the NCUA on February 17, 2023, and will take effect September 1, 2023, giving credit unions just over six months to update their data incident response teams, policies, and procedures accordingly.

The new rule states that a “reportable” cyber incident is an incident that leads to at least one of the following outcomes:

  • A “substantial loss” of the confidentiality, integrity, or availability of a network or member information system that (i) causes the unauthorized access to or exposure of “sensitive data,” (ii) disrupts vital member services, or (iii) seriously impacts the “safety and resiliency” of operational systems and processes;
  • A disruption of business operations, vital member services, or a member information system resulting from a cyberattack or exploitation of vulnerabilities; or
  • A disruption of business operations or unauthorized access to sensitive data facilitated through, or caused by, a compromise of a credit union service organization, cloud service provider, or other third-party data hosting provider or by a supply chain compromise.

If a credit union experiences any of these outcomes, it must notify the NCUA “as soon as possible but no later than 72 hours” from the time it reasonably believes that it has experienced a reportable cyber incident. Disruption to business operations appears to be the central consideration in whether a cyber incident is reportable, which mirrors the banking regulators’ final rule governing federally insured banks. The NCUA has indicated that it will issue additional guidance before the rule goes into effect on September 1, 2023, including examples of both non-reportable and reportable incidents, and the proper method for providing notice to the NCUA via email, telephone, or other similar prescribed methods. This initial notification is merely an “early alert” to the NCUA and does not require a detailed incident assessment within that initial 72-hour time frame.

In response to public comments, the NCUA clarified that this reporting requirement is distinct from the current five-day period to report “catastrophic acts,” which are defined as “any disaster, natural or otherwise, resulting in physical destruction or damage to the credit union or causing an interruption in vital member services” that is projected to last more than two consecutive business days. The NCUA dismissed concerns that it may be difficult for credit unions to differentiate between a “catastrophic act” and a “reportable cyber incident,” and rejected requests to apply the longer five-day reporting period to events that may fall within both definitions. The NCUA also noted that “catastrophic acts” include non-natural disasters such as a power grid failure or physical attack and indicated that it may provide additional clarification at a later date if needed. As currently drafted, a reportable cyber incident may very well fall within the scope of both definitions, and if that is the case, credit unions should likely err on the side of reporting the incident within 72 hours. To provide some clarity on the scope of the new rule, the NCUA stated it would retain the non-exhaustive examples of reportable cyber incidents set forth in the proposed rule, which include:

  • If a credit union becomes aware that a substantial level of sensitive data is unlawfully accessed, modified, or destroyed, or if the integrity of a network or member information system is compromised;
  • If a credit union becomes aware that a member information system has been unlawfully modified and/or sensitive data has been left exposed to an unauthorized person, process, or device, regardless of intent;
  • A DDoS attack that disrupts member account access;
  • A computer hacking incident that disables a credit union’s operations;
  • A ransom malware attack that encrypts a core banking system or backup data;
  • Third-party notification to a credit union that they have experienced a breach of a credit union employee’s personally identifiable information;
  • A detected, unauthorized intrusion into a network information system;
  • Discovery or identification of zero-day malware (malware that exploits a previously unknown hardware, firmware, or software vulnerability) in a network or information system;
  • Internal breach or data theft by an insider;
  • Member information compromised as a result of card skimming at a credit union’s ATM; or
  • Sensitive data exfiltrated outside of the credit union or a contracted third party in an unauthorized manner, such as through a flash drive or online storage account.

On the other hand, blocked phishing attempts, failed attempts to gain access to systems, and unsuccessful malware attempts would not trigger a reporting requirement.

Notably, the NCUA’s reporting timeline is longer than the 36-hour timeline that applies to banks. The NCUA chose the 72-hour timeline in an effort to align the rule with reporting requirements for critical infrastructure, and specifically with the requirements of the Cyber Incident Reporting for Critical Infrastructure Act of 2022 (CIRCIA), which requires certain entities in critical infrastructure sectors — such as financial services, telecommunications, information technology, healthcare, energy, and others — to report certain cyber incidents to the Cybersecurity and Infrastructure Security Agency. This timeframe also aligns with the GDPR and the UK Data Protection Act 2018, which require notification to the supervisory authority “without undue delay” and, where feasible, no later than 72 hours after becoming aware of a reportable breach. The NCUA decided to roll out its final reporting rule even though the final rule implementing CIRCIA is not required to be published until 2025.

Although the upcoming NCUA regulations will provide additional guidance, companies should not delay putting systems into place to detect and report cyber incidents where appropriate. Such preparations could include conducting training to ensure that employees are aware of the new reporting requirements, establishing a chain of command for reporting suspected cyber incidents for review, updating the credit union’s incident response plan, and assigning relevant task owners for various phases of the incident response plan. Some aspects of the incident response plan will likely need to be supplemented once the NCUA issues additional guidance closer to the implementation date; however, credit unions should not delay in revisiting their data security monitoring and incident response procedures given the short notification timeframe.