Privacy Regulation Roundup

Author(s): Safayat Moahamad, Carlos Rivera, Mahmoud Ramin

This Privacy Regulation Roundup summarizes the latest major global privacy regulatory developments, announcements, and changes. This report is updated monthly. For each relevant regulatory activity, you can find actionable Info-Tech analyst insights and links to useful Info-Tech research that can assist you in achieving compliance.

What Does a Second Trump Presidency Mean for Privacy, AI Governance?

Type: Article
Published: November 2024

Summary: On November 5, Donald Trump was elected to a second, non-consecutive term as US President, an outcome that signals significant shifts in regulatory philosophy with the potential to reshape digital governance and technology policy. With the Senate and House of Representatives now under Republican control, we’re entering a period of uncertainty around how campaign promises will translate into actionable policy. The new administration is poised to pivot in several key areas through direct executive actions, changes in regulatory agencies, and a legislative reset. Immediate executive actions are expected to focus on revising or rescinding previous policies, particularly those related to AI governance, such as President Biden's executive order on AI. The president-elect’s choices for agency leadership will signal policy direction, with a particular focus on privacy and AI, given the role political appointees play in shaping these areas.

The incoming administration’s approach to digital trade policy appears cautious, likely preserving existing frameworks like the EU-U.S. Data Privacy Framework while potentially realigning US positions on data flow and localization. However, significant shifts in regulatory agencies like the FTC, CFPB, and FCC are highly likely, with new leadership possibly altering enforcement priorities and stalling or reversing ongoing rulemakings. This includes the FTC's privacy initiatives, which could face opposition or redirection under new Republican leadership, particularly in areas like AI governance and commercial surveillance.

Analyst Perspective: As a cybersecurity advisor, I think we need to prepare for a dynamic policy environment in 2025. The incoming administration's focus on reducing regulatory burdens may usher in a period in which privacy and cybersecurity standards are relaxed or reinterpreted, particularly in high-profile areas like AI and data privacy. Organizations should stay vigilant and prepare for potential shifts in compliance requirements and enforcement. While some long-standing policies may remain stable, I believe the new political climate could accelerate the need for companies to adapt their strategies, ensuring they are robust enough to withstand policy volatility while still protecting consumer data and maintaining trust.

Analyst: Carlos Rivera, Principal Advisory Director – Security & Privacy

Data Compliance: The Evolving Landscape in China

Type: Regulation
Effective: January 2025

Summary: China’s latest Data Security Regulation, effective January 1, 2025, introduces a framework to implement the Cybersecurity Law (CSL), Data Security Law (DSL), and Personal Information Protection Law (PIPL). It provides much-needed clarity, balancing compliance demands with business innovation, and impacts domestic and foreign companies alike.

The scope of the regulation is expansive, applying not only to data activities within China but also to foreign entities engaging with the Chinese market. Companies outside China that collect personal data for sales, services, or tracking the behavior of Chinese citizens must establish a local presence or appoint a representative in China.

A key feature of the regulation is the clarification around “important data,” a previously ambiguous concept. Businesses can now rely on published catalogs or regulatory notifications to determine whether their data qualifies as “important.” Cross-border data transfers see notable relaxations. In addition to Cyberspace Administration–led assessments and Standard Contractual Clauses, new mechanisms like transfers for contract performance and emergencies have been introduced. While stringent approvals remain for certain transfers, these new pathways provide businesses with options tailored to their operational needs.

Enforcement under the regulation is robust, with penalties that include business suspension, fines of up to 5% of annual turnover, and even criminal liability for severe violations. Senior executives can be held personally liable, underscoring the importance of proactive compliance.

Analyst Perspective: The clarified compliance requirements and expanded transfer mechanisms offer avenues to streamline operations, but the timeline to prepare is tight. Companies must swiftly assess how the regulation affects their data strategies, especially those with cross-border activities or extraterritorial exposure. With explicit obligations for privacy, security, and ethical AI practices, businesses must not only comply with data laws but also demonstrate proactive risk management. For industries like finance, public utilities, and AI-driven sectors, the imperative is clear: embrace a culture of accountability, privacy, and adaptability. The regulation also reflects China’s focus on data sovereignty, providing businesses with clearer rules while reinforcing national security and privacy priorities.

Analyst: Safayat Moahamad, Research Director – Security & Privacy

Responsible AI in Practice – Where Privacy Meets Innovation

Type: Guidance
Date: October 2024

Summary: AI is increasingly embedded in everyday life. Despite its substantial benefits, it also carries risks, one of which is data privacy, making data security and management an essential practice, especially for organizations that feed large volumes of data into AI model training. Various frameworks, such as ISO/IEC 42001 and NIST’s AI Risk Management Framework, exist to guide organizations through effective, compliant implementation. However, even with such frameworks, managing data privacy processes for AI systems can be complex. Beyond these guidelines, organizations should clearly establish who is responsible for AI governance, making AI accountability pivotal.

Privacy teams need to know what data is required to train AI solutions, understand how those solutions are deployed, and ensure data regulations and policies are in place. It’s crucial to know whether the organization is prepared to handle the data, especially as data volumes and sensitivity grow.

Organizations need to ensure they comply with data regulations. Most provisions of the EU AI Act become applicable in August 2026, including requirements for high-risk AI systems, with legal and financial consequences for noncompliance. The Act will push organizations to apply proactive measures like privacy by design, embedding data privacy considerations into systems from the ground up, as sketched in the example below. This approach helps develop a data pool for building sound algorithms while ensuring personal data privacy.
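To illustrate what privacy by design can look like in practice, here is a minimal, hypothetical Python sketch of an ingestion step that strips or pseudonymizes personal identifiers before a record ever reaches an AI training dataset. The field lists and the prepare_training_record helper are illustrative assumptions, not drawn from any specific regulation or framework.

```python
import hashlib
import hmac

# Illustrative field lists; in practice these would come from a data
# inventory maintained with the privacy team.
DIRECT_IDENTIFIERS = {"name", "email", "phone"}   # never needed for training
PSEUDONYMIZE_FIELDS = {"customer_id"}             # needed for joins, not identity
SECRET_KEY = b"placeholder-secret-stored-in-a-vault"  # hypothetical key

def pseudonymize(value: str) -> str:
    """Replace a value with a keyed hash: irreversible without the
    secret, but identical inputs still map to the same token."""
    return hmac.new(SECRET_KEY, value.encode(), hashlib.sha256).hexdigest()[:16]

def prepare_training_record(raw: dict) -> dict:
    """Privacy by design at the ingestion boundary: drop direct
    identifiers, pseudonymize linkage keys, pass the rest through."""
    clean = {}
    for field, value in raw.items():
        if field in DIRECT_IDENTIFIERS:
            continue  # personal identifiers never enter the training pool
        elif field in PSEUDONYMIZE_FIELDS:
            clean[field] = pseudonymize(str(value))
        else:
            clean[field] = value
    return clean

record = {"name": "Jane Doe", "email": "jane@example.com",
          "customer_id": "C-1042", "purchase_total": 89.50}
print(prepare_training_record(record))  # identifiers gone, linkage key hashed
```

The design point is architectural: because the filter sits at the ingestion boundary, downstream model training never sees raw identifiers, regardless of which team later builds on the data.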

Employee training on the responsible use of data at all levels is also crucial and depends on a culture of awareness and accountability. Organizations should encourage their staff to ask AI the right questions and to advocate for privacy rights.

Analyst Perspective: A holistic approach to data privacy for AI systems usually takes more than a single department; it requires collaboration well beyond the technology teams. That collaboration helps create a shared commitment to ethical AI.

Privacy professionals have a significant role to play: they should bridge the gap between data protection and AI governance, integrating AI oversight into their existing data privacy practices, and conduct thorough AI risk assessments to anticipate problems before they arise. Furthermore, organizations must ensure that personal data is not used to train AI systems and that all data is managed responsibly and professionally.

Cross-departmental collaboration is essential to address risks associated with AI, and privacy officers have a central role in creating comprehensive strategies to mitigate legal and ethical issues. Foster a collaborative culture to harness the benefits of AI while ensuring the organization’s compliance with ethical and regulatory requirements.

Analyst: Mahmoud Ramin, Senior Research Analyst – Infrastructure & Operations

Ontario Court Clarifies the Dichotomy Between Legal Privilege and Transparency

Type: Case Law
Date: April 2024

Summary: The personal health information of millions of Canadians was compromised in the significant 2019 privacy breach at LifeLabs. The breach led to a contentious legal battle in which LifeLabs sought to withhold certain documents under solicitor-client privilege, a strategy that delayed the release of a privacy investigation report for over four years.

However, the Ontario Superior Court of Justice ruled that privilege does not extend to factual data that organizations are legally obligated to disclose. This applies even when legal counsel is involved. Drawing on precedents like the US Capital One case, the court reinforced that regulatory transparency takes precedence over broad claims of privilege. This echoes lessons from past cases, like the unsuccessful attempt by Aylo (previously MindGeek) to block a similar privacy report in Canada.

This decision showcases that while privilege remains vital to protecting candid legal communications, it cannot serve as a blanket shield against regulatory scrutiny.

Analyst Perspective: In the context of data breaches, the LifeLabs case offers a compelling lesson on the intersection of privacy law, regulatory cooperation, and the limitations of legal privilege. It captures the tension between an organization’s right to confidentiality and its statutory duty to ensure regulatory transparency. In attempting to shield documents from disclosure under privilege, LifeLabs not only delayed the release of a critical investigation report but also undermined public trust and regulatory prerogative.

Courts have now made it clear that “involving legal counsel as part of a breach remediation and subsequent investigation does not automatically render everything exchanged as privileged” (IAPP). Drawing on US-based precedents, this case highlights that accountability is not optional and cooperation with regulators can be a strategic necessity. In a society where data breaches have become rampant, transparency can go beyond just a compliance checkbox and become the foundation of resilience and trust.

Analyst: Safayat Moahamad, Research Director – Security & Privacy

If you have a question or would like to receive these monthly briefings via email, submit a request here.

Visit our Exponential IT Research Center
Over 100 analysts waiting to take your call right now: 1-519-432-3550 x2019