Privacy Regulation Roundup

Author(s): Safayat Moahamad, Carlos Rivera, John Donovan, Fred Chagnon, Ahmad Jowhar

This Privacy Regulation Roundup summarizes the latest major global privacy regulatory developments, announcements, and changes. This report is updated monthly. For each relevant regulatory activity, you can find actionable Info-Tech analyst insights and links to useful Info-Tech research that can assist you with becoming compliant.

Privacy in the Age of Robotics

Type: Article

Published: March 2025

Affected Region: USA

Summary: As a cybersecurity analyst since the ‘90s, I’ve witnessed firsthand the evolution of technology and its impact on society. The rise of autonomous robots with embedded artificial intelligence (AI) marks a significant shift, bringing advanced capabilities to public spaces across industries. Examples like Waymo’s self-driving cars, Knightscope’s security robots, and Unitree’s Go1 robotic dog illustrate how these machines are becoming more adaptive and interactive, capable of not only engaging with humans but also identifying them. This proliferation signals a major change in our relationship with technology, as autonomous engagement becomes more integrated into daily life. For cities and organizations aiming to deploy these robots at scale, ensuring safe, ethical, and privacy-conscious implementation is paramount – a challenge that’s made worse by the inadequacy of existing privacy mitigations for these novel human-robot interactions.

The unique challenges posed by autonomous robots stem from their ability to initiate engagement and operate in uncontrolled public environments, flipping the traditional dynamic where humans control interactions with technology. Unlike app-based or device-driven experiences, where users knowingly consent to data exchanges, robots with embodied AI introduce a new user experience that requires evaluating both human and robotic behavior to assess privacy risks. Control becomes a critical issue. In shared spaces, individuals may be uncertain about who or what governs the interaction – whether it’s the robot’s AI, a remote operator, or a hybrid system. This uncertainty, coupled with robots’ reliance on sensors like cameras and microphones, can lead to unintended privacy harm. Think of a delivery robot publicly displaying your name and order details. Existing safeguards built for predictable digital environments fall short here, necessitating a robust robotics privacy framework to address transparency, data minimization, and user empowerment.
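To make the data minimization point concrete, here is a minimal sketch in Python of how a delivery robot’s public display logic might mask personal details rather than expose them. The field names and masking rules are illustrative assumptions, not drawn from any specific robotics platform.

    # Minimal sketch: show only the minimum data needed for a public handoff.
    # Field names and masking rules are illustrative assumptions.

    def mask_name(full_name: str) -> str:
        """Reduce a full name to a single initial, e.g. 'Jane Doe' -> 'J.'"""
        return f"{full_name[0]}." if full_name else ""

    def public_display_payload(order: dict) -> dict:
        """Return only what is needed to hand off a delivery in public."""
        return {
            "pickup_code": order["pickup_code"],  # short code the customer already has
            "recipient": mask_name(order["recipient_name"]),
        }

    order = {
        "recipient_name": "Jane Doe",
        "recipient_phone": "+1 555 0100",      # collected, but never displayed
        "order_items": ["sandwich", "soda"],   # collected, but never displayed
        "pickup_code": "7QK2",
    }
    print(public_display_payload(order))  # {'pickup_code': '7QK2', 'recipient': 'J.'}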

Analyst Perspective: Looking ahead, the broad and rapid adoption of AI-powered autonomous robots will reshape societal interactions, making privacy-preserving design a priority. Drawing from my experience, I see frameworks like Google’s robotics privacy guidelines as a critical step forward, building on efforts like their AI Security Framework to foster a safer ecosystem. By focusing on identifying and minimizing data collection, assessing environmental control, and enforcing transparency around robotic capabilities, these frameworks provide actionable guidance for governments, businesses, and engineering teams. As these technologies evolve, proactive collaboration between technologists, policymakers, and privacy experts will be essential to mitigate risks, maintain public trust, and ensure that innovation doesn’t come at the expense of individual rights. As we forge ahead, getting privacy right around robotics is a societal imperative, not just a technical challenge.

Analyst: Carlos Rivera, Principal Advisory Director – Security & Privacy

More Reading:


Quebec’s Digital Future: Cybersecurity, AI, and Transformation

Type: Article

Published: August 2024

Affected Region: Canada

Summary: Quebec is advancing its digital future with the Government Cybersecurity and Digital Strategy 2024-2028, a bold initiative to modernize public administration while strengthening cybersecurity protections. The strategy aims to improve digital services, secure critical infrastructure, and harness artificial intelligence (AI) responsibly.

A key priority is cybersecurity enhancement, with an expanded Government Cyber Defense Center and a new security classification model to protect sensitive data. Ethical hacking initiatives, including a bug bounty program, encourage vulnerability reporting to bolster defenses against cyber threats.

To accelerate digital transformation, Quebec is rolling out a unified digital platform for public services and expanding its Government Authentication Service to provide citizens with a secure digital identity. AI will play a central role in improving government efficiency while maintaining ethical oversight. Infrastructure modernization is also central to the plan, with cloud migration and the retirement of obsolete IT systems. Investments in fiber optics and cellular expansion will enhance digital accessibility across the province. Quebec’s strategy positions it as a leader in cybersecurity, AI, and digital governance, setting a national benchmark for future innovation and security.

Analyst Perspective: Quebec’s chances of successfully executing its Government Cybersecurity and Digital Strategy 2024-2028 are highly uncertain given the historical inefficiencies associated with government-led digital transformations. Bureaucracy, slow decision-making, budget overruns, and technical execution failures have plagued similar initiatives globally.

The government’s cloud migration strategy and cybersecurity enhancements are well-intended but risk falling into the same traps as past digital modernization efforts. Legacy IT systems, workforce shortages, and a lack of cross-agency coordination are significant hurdles. Without competent leadership, clear accountability, and strong project management, this initiative could become another example of policy-driven ambition without practical execution.

However, if Quebec can leverage private sector expertise, enforce strict governance frameworks, and rapidly adapt to emerging challenges, the strategy has a chance of partial success – particularly in cyber defense and AI implementation. The key will be avoiding bureaucratic stagnation and ensuring real-world impact rather than just policy announcements.

Analyst: John Donovan, Principal Research Director – Infrastructure and Operations

More Reading:


Drawing the Line: Where Enterprise Access Ends and Government Overreach Begins

Type: Article

Published: March 2025

Affected Region: EU

Summary: Apple and Signal are pushing back against government demands for encryption backdoors, underscoring a growing global conflict between privacy-focused tech companies and state surveillance efforts. Apple is appealing a UK order under the Investigatory Powers Act (IPA) that would force it to weaken encryption for iCloud, arguing that such a move would compromise user security and create exploitable vulnerabilities. The case is drawing criticism from privacy advocates over potential conflicts with international agreements such as the US-UK data access agreement under the CLOUD Act.

Similarly, Signal has vowed to exit Sweden if the government enacts legislation requiring messaging platforms to store users’ messages in plain text for law enforcement access. Signal’s CEO, Meredith Whittaker, has taken a firm stance against weakening end-to-end encryption, warning that it would fundamentally undermine user trust and security. Sweden’s armed forces, too, have raised concerns in this regard.

These cases reflect a broader global trend of governments seeking greater access to encrypted communications, often citing national security. However, tech companies argue that encryption backdoors introduce systemic risks that ultimately threaten both individual privacy and broader cybersecurity. The outcomes of these disputes could set significant legal precedents, influencing future regulations and corporate approaches to encryption worldwide.

Analyst Perspective: The growing tension between enterprise data access and government surveillance raises the question of whether it is hypocritical to oppose mass government surveillance while enforcing internal monitoring within organizations. Signal’s threat to exit Sweden over proposed data retention legislation and Apple’s appeal against the UK’s encryption backdoor demands are examples of governments pressuring tech companies to weaken encryption. These actions pose systemic risks to privacy and security by creating vulnerabilities that could be exploited by cybercriminals and hostile entities.

In contrast, enterprise data access is presented as a necessary and controlled practice, governed by clear policies, internal controls, and legal frameworks. Companies monitor employee activity for security, compliance, and legal reasons within a framework of "reasonable expectation of privacy," ensuring transparency and proportionality.

The key distinction is scale and oversight. Enterprise monitoring is targeted and regulated, while government surveillance often lacks accountability and can lead to mass data collection and abuse. Security and privacy leaders are urged to champion strong encryption, advocate for transparent policies, and resist the normalization of mass surveillance, reinforcing the idea that opposing government overreach while maintaining responsible enterprise data governance is not hypocrisy but a necessary defense of security and civil liberties.

Analyst: Fred Chagnon, Principal Research Director – Security & Privacy

More Reading:


The EU AI Act: Compliance in Effect

Type: Legislation

Enforced: February 2025

Affected Region: EU

Summary: On February 2, 2025, the EU AI Act’s rules on prohibited artificial intelligence (AI) practices and AI literacy requirements came into force. The Act identifies AI systems that pose an unacceptable level of risk due to their threat to people’s safety, livelihoods, and rights. This includes the use of:

  • Facial recognition databases built through untargeted scraping of images online.
  • Real-time remote biometric identification in publicly accessible spaces.
  • Criminal prediction software based on profiling.
  • Systems that exploit individuals’ vulnerabilities or sensitive characteristics.

The European Commission’s guidelines provide explanations and examples to help organizations achieve compliance and to promote safe and ethical AI. The Commission also released its repository of AI literacy practices. Individuals leveraging AI within a company must possess sufficient technical knowledge of the AI system, its usage, and who it will affect. The obligations fall into two categories: one covers specific requirements for AI systems, and the other covers awareness of risks and harms.

By August 2, 2025, more requirements of the Act will take effect. Member states will appoint authorities with powers to issue fines. Organizations using AI systems on the prohibited list may face fines of up to 7% of their worldwide annual revenue or €35 million, whichever is greater.
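As a quick illustration of the “whichever is greater” rule, here is a minimal sketch in Python; the revenue figure is hypothetical.

    # Minimal sketch of the maximum fine for prohibited AI practices:
    # the greater of 7% of worldwide annual revenue or EUR 35 million.

    def max_prohibited_practice_fine(annual_revenue_eur: float) -> float:
        return max(0.07 * annual_revenue_eur, 35_000_000)

    # Hypothetical organization with EUR 2 billion in annual revenue:
    print(max_prohibited_practice_fine(2_000_000_000))  # 140000000.0, i.e. EUR 140M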

Analyst Perspective: The requirements of the Act have the potential to promote safe and trustworthy AI. However, AI regulations often stop short of prescribing the actionable steps organizations should implement to meet their requirements. Hence, developing an AI compliance strategy that takes a risk-based approach will enable you to prioritize critical activities and effectively manage the complexities of AI compliance.

From identifying your AI systems and applications to defining controls necessary to govern the AI system and monitor the effectiveness of the program, having a risk-based approach to developing an AI compliance strategy will:

  • Streamline compliance with evolving regulations.
  • Improve cross-functional collaborations.
  • Increase the transparency and accountability of the stakeholders involved in the usage of the AI system.

Having a strategy to address AI compliance requirements will improve visibility into your AI investments and define compliance metrics aligned with legal and ethical standards. This, in turn, will reduce the risk of reputational damage and demonstrate commitment to responsible AI – fostering trust with customers, partners, and stakeholders alike.
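As a starting point for such a strategy, the sketch below (in Python) shows what a minimal AI system inventory with risk-based prioritization might look like. The risk tiers loosely mirror the EU AI Act’s categories; the system names, owners, and control assignments are hypothetical.

    # Minimal sketch of a risk-based AI system inventory.
    # Entries and control assignments are illustrative, not prescriptive.
    from dataclasses import dataclass
    from enum import IntEnum

    class RiskTier(IntEnum):
        MINIMAL = 0
        TRANSPARENCY_REQUIRED = 1
        HIGH = 2
        PROHIBITED = 3

    @dataclass
    class AISystem:
        name: str
        owner: str
        risk: RiskTier
        controls: list[str]

    inventory = [
        AISystem("resume-screener", "HR", RiskTier.HIGH,
                 ["bias audit", "human oversight", "conformity assessment"]),
        AISystem("support-chatbot", "Customer Care", RiskTier.TRANSPARENCY_REQUIRED,
                 ["AI disclosure notice"]),
        AISystem("spam-filter", "IT", RiskTier.MINIMAL, []),
    ]

    # Work the highest-risk systems first.
    for system in sorted(inventory, key=lambda s: s.risk, reverse=True):
        print(f"{system.risk.name:<22} {system.name:<18} controls: {system.controls}")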

Analyst: Ahmad Jowhar, Research Director – Security & Privacy

More Reading:


AI Compliance: A Comparative Look at the EU and China

Type: Article

Published: February 2025

Affected Region: EU and China

Summary: The EU and China have distinct approaches to AI regulation, reflecting their respective legal and policy priorities. The EU AI Act takes a horizontal, risk-based approach, categorizing AI systems into prohibited, high-risk, and transparency-required categories, ensuring uniform compliance across sectors. In contrast, China’s AI governance is sector-specific, with regulations targeting algorithmic recommendations, generative AI, and AI’s influence on public opinion and social mobilization.

Compliance requirements also differ. In the EU, high-risk AI systems must undergo pre-market conformity assessments, ensuring they meet transparency and risk mitigation standards. China’s approach is more dynamic, imposing restrictions based on AI’s perceived societal impact, with stricter controls on AI that could affect public discourse.

Enforcement mechanisms reflect these differences. The EU’s structured compliance model, overseen by the AI Office and AI Board, ensures legal certainty but requires strict adherence. In contrast, China enforces AI regulations rapidly through multiple regulatory bodies, prioritizing state control and oversight over public transparency. For businesses, navigating these frameworks requires adaptable compliance strategies, balancing predictability in the EU with agility in China.

Analyst Perspective: The EU and China’s AI compliance approaches reflect fundamentally different models, creating challenges for businesses operating in both regions. The EU AI Act follows a risk-based approach, whereas China’s AI regulations are sector-specific and interventionist, focusing on algorithmic control, content moderation, and AI’s impact on public discourse – which could allow rapid but unpredictable changes.

For enforcement, the EU relies on centralized oversight bodies, ensuring structured penalties and transparency. China, however, employs swift, state-driven enforcement. Companies operating across both jurisdictions must adapt their AI strategies accordingly, balancing long-term compliance investments in the EU with agile and real-time regulatory monitoring in China.

The EU’s structured framework demands proactive alignment with legal requirements, while China’s evolving AI rules require continuous regulatory engagement and operational flexibility. As AI regulation and compliance continue to evolve globally, businesses that stay adaptive and jurisdiction-aware will be better positioned to mitigate risks and maintain market access in an increasingly complex landscape.

Analyst: Safayat Moahamad, Research Director – Security & Privacy

More Reading:


If you have a question or would like to receive these monthly briefings via email, submit a request here.
