
Improve Governance and Stakeholder Engagement to Curb Shadow AI

Improve your artificial intelligence risk mitigation strategies with an acceptable use policy.

  • Difficulty assessing risks of emerging AI technologies.
  • Challenges in implementing AI governance across departments.
  • Varying levels of policy development and corresponding processes across departments.
  • The novelty of AI can complicate risk assessment and mitigation strategies, especially in understanding appropriate use cases. Organizations in the public sector must prioritize addressing unauthorized AI use through the right policies while strengthening overall data security measures.

Our Advice

Critical Insight

  • Focus on actionable governance measures. Unauthorized AI use poses significant risks to federal departments and agencies. Identifying applicable risks and implementing targeted governance policies allows agencies to address immediate concerns while planning for long-term improvements in AI security and compliance.

Impact and Result

  • Identify Shadow AI risks relevant to your agency's use cases.
  • Develop an AI governance policy that proactively addresses potential misuse.
  • Create a roadmap for enhancing data security to support responsible AI use.
  • Most Shadow AI risks are extensions of existing data security concerns. Government entities can often adapt and expand current controls rather than creating entirely new systems.

Improve Governance and Stakeholder Engagement to Curb Shadow AI Research & Tools

1. Improve Governance and Stakeholder Engagement to Curb Shadow AI Deck – Use this research to implement AI governance proactively across all adoption stages, decreasing your vulnerability to cyberattacks and supporting organization-wide risk communication and mitigation strategies.

Use this research to:

  • Evaluate risks associated with Shadow AI in government operations.
  • Assess the suitability of existing governance frameworks to mitigate Shadow AI risks.
  • Communicate risks to leadership, stakeholders, and end users.
  • Determine acceptable use criteria for AI in federal contexts.

2. AI Governance Policy Template – Use this policy template to govern the responsible use of generative AI and protect your organization from the risks associated with the technology.

A policy detailing required security protocols and acceptable use for Gen AI is the most immediate step departments and agencies must take to deploy the technology safely and securely.

3. Shadow AI Risk Map – A best-of-breed template to help you identify risks associated with the potential use of Shadow AI in your federal department or agency and determine appropriate mitigating tactics and policy statements.

Use this tool to assess the potential risks and prevalence of Shadow AI within your federal organization. It will help you identify unauthorized AI use, understand associated risks, and determine appropriate policy statements for your department's or agency's AI Governance Policy.


Improve Governance and Stakeholder Engagement to Curb Shadow AI

Improve your artificial intelligence risk mitigation strategies with an acceptable use policy.

Analyst Perspective

Generative AI needs an acceptable use policy.

Shadow AI is the unsanctioned or uncontrolled use of AI tools outside of standard IT governance processes. It poses significant risks to data privacy, security, ethics, and compliance, and it has the potential to undermine public trust and the responsible adoption of AI in the federal government. As federal departments and agencies broaden their AI scope and scale their efforts beyond initial "proof of concept" investments, they face the challenge of managing the proliferation of Shadow AI.

To curb this unsanctioned use of AI and to make sure that appropriate AI initiatives are successfully scaled within the public sector, effective governance and stakeholder engagement are critical. Clear policies, standards, guidelines, and oversight mechanisms can all provide guardrails for AI development and deployment while fostering innovation and agility. Engaging leaders in IT, data, ethics, and operations will help align AI initiatives with the federal government's mission and values.


Paul Chernousov
Research Director, Industry
Info-Tech Research Group

Executive Summary

Your Challenge

You lack governance over the use of AI to minimize risks and maximize benefits.

You need better protections for data, confidentiality, and integrity when addressing Shadow AI.

You need to address AI’s potential misuse due to its evolving capabilities.

Federal departments scaling AI beyond initial projects may lack robust governance to manage Shadow AI risks. Some organizations may need to retroactively implement controls where unauthorized AI use has already taken root.

Common Obstacles

Difficulty assessing risks of emerging AI technologies.

Challenges in implementing AI governance across departments.

Varying levels of policy development and corresponding processes across departments.

The novelty of AI can complicate risk assessment and mitigation strategies, especially in understanding appropriate use cases. Organizations in the public sector must prioritize addressing unauthorized AI use through the right policies while strengthening overall data security measures.

Info-Tech’s Approach

Identify Shadow AI risks relevant to your agency's use cases.

Develop an AI governance policy that proactively addresses potential misuse.

Create a roadmap for enhancing data security to support responsible AI use.

Most Shadow AI risks are extensions of existing data security concerns. Government entities can often adapt and expand current controls rather than creating entirely new systems.

Info-Tech Insight

Focus on actionable governance measures. Unauthorized AI use poses significant risks to federal departments and agencies. Identifying applicable risks and implementing targeted governance policies allows agencies to address immediate concerns while planning for long-term improvements in AI security and compliance.

Federal organizations need to confront the Shadow AI threat

With the growing adoption of AI in government systems, the rise of unauthorized AI use poses major challenges. This emerging threat requires proactive measures and a solid policy foundation to mitigate the associated risks.

To start that journey, use this research to:

  1. Evaluate risks associated with Shadow AI in government operations.
  2. Assess the suitability of existing governance frameworks to mitigate Shadow AI risks.
  3. Communicate risks to leadership, stakeholders, and end users.
  4. Determine acceptable use criteria for AI in federal contexts.

Implement AI governance proactively across all adoption stages. Without an official policy, employees lack clarity on the agency's AI stance. Unauthorized AI use can increase vulnerability to cyberattacks, necessitating organization-wide risk communication and mitigation strategies.

Employees often engage in the unsanctioned use of AI

Percentage of employees who have entered organizational data into an AI-powered tool their company hasn’t provided them for work: 49% (Source: AuditBoard/Harris Poll, 2024)

Shadow AI brings new challenges to many federal organizations

Shadow AI produces familiar risks in new forms

Most AI risks are new versions of familiar data security risks and can be mitigated by defining acceptable use and the security controls needed to support AI governance.

These familiar security risks include:

  • Access control and authentication
  • Data encryption and protection
  • Audit trails and monitoring
  • Incident response and recovery procedures

IT leaders are unsure about how to evaluate risks associated with unauthorized AI use.

These risks include:

  • Data breaches and privacy violations
  • Compromised decision-making processes
  • Non-compliance with federal regulations
  • Cybersecurity vulnerabilities

Assessing risk and defining acceptable use are the first key steps to AI security improvement. Organizations should also re-evaluate their data security controls and plan necessary improvements to further mitigate risks associated with the unsanctioned use of AI.
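
To illustrate how existing controls can be adapted rather than rebuilt, the minimal sketch below reuses a familiar control, audit trails and monitoring, to surface possible Shadow AI use: it scans an exported web proxy log for traffic to public AI endpoints. The log columns and the domain list are illustrative assumptions, not part of this research; substitute your agency's actual log schema and sanctioned-tool registry.

import csv

# Hypothetical list of public AI endpoints not sanctioned by the agency.
UNSANCTIONED_AI_DOMAINS = {
    "chat.openai.com",
    "gemini.google.com",
    "claude.ai",
}

def flag_shadow_ai(proxy_log_path):
    """Return proxy log rows whose destination matches an unsanctioned AI domain."""
    flagged = []
    with open(proxy_log_path, newline="") as f:
        # Assumed columns: timestamp, user, host (adjust to your proxy's export format).
        for row in csv.DictReader(f):
            if row["host"].strip().lower() in UNSANCTIONED_AI_DOMAINS:
                flagged.append(row)
    return flagged

if __name__ == "__main__":
    for hit in flag_shadow_ai("proxy_log.csv"):
        print(f"{hit['timestamp']}  {hit['user']} -> {hit['host']}")

Flagged entries can feed the same incident response and review workflows the agency already runs for other policy violations, rather than requiring an entirely new monitoring system.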

Many IT leaders believe Gen AI introduces new risks, but few have dedicated teams for mitigation

6% — Organizations with a dedicated risk assessment team for Gen AI (Source: KPMG, 2023)

71% — IT leaders who believe Gen AI will introduce new data security risks (Source: KPMG, 2023)

Shadow AI presents three main types of risk

  1. Governance and compliance challenges

    Shadow AI undermines federal regulatory frameworks by operating outside the established governance structures. Employees using unauthorized AI tools often bypass key approval processes, leading to non-compliance with data protection laws and federal regulations. Such unauthorized use complicates departments’ ability to ensure adherence to ethical AI principles and maintain transparency in decision-making processes. This leads to the erosion of public trust in government operations.
  2. Operational security risks

    Unsanctioned AI use introduces significant vulnerabilities to federal IT infrastructures. When staff input sensitive data into unapproved AI systems, it creates potential access points for cyberattacks and data breaches. These shadow systems often lack proper security protocols, exposing federal networks to malware and other cyber threats. The use of external AI platforms without proper vetting increases the risk of unauthorized data access and potential exploitation of government information.
  3. Data management and data integrity issues

    Shadow AI compromises the reliability of federal data ecosystems by introducing unverified and unvetted information into official records. When AI-generated data is incorporated into government documents without proper review and validation, it can lead to the spread of inaccurate, biased, and factually incorrect information across departments and agencies. Over time, such gradual corruption of data integrity could significantly impact the accuracy of records, decision-making processes, and the overall quality of government services provided to citizens.

Top-of-mind Gen AI concerns for IT leaders

  • Cybersecurity — 81%
  • Privacy — 78%

(Source: KPMG, 2023)

Shadow AI facilitates sophisticated cyberattacks

The proliferation of Shadow AI in federal agencies introduces new dimensions to existing cybersecurity challenges, regardless of whether the AI systems are public or private. Agencies should be vigilant about how unauthorized AI use could potentially facilitate sophisticated cyberattacks.

  • Social engineering

    • AI-powered phishing threatens federal data security through the creation of highly convincing attempts targeting government employees.
    • Sophisticated impersonation risks compromise sensitive information across various communication channels (email, voice, video).
  • Malware and unauthorized code

    • Shadow AI tools bypass crucial security safeguards and may lack the protections against illegal activities that reputable AI systems provide.
    • Unvetted AI-generated code introduces vulnerabilities when employees inadvertently use AI to create or modify code without proper security checks.
  • Data poisoning

    • Malicious actors may manipulate AI training data to introduce biases or backdoors into federal AI systems.
    • Compromised AI models risk producing flawed outputs, potentially leading to incorrect decision-making in critical government operations.

Governance and compliance

AI systems increase the complexity of cyberattacks

“Generative AI can allow cybercriminals to launch sophisticated and stealthy attacks like deepfakes or self-evolving malware, compromising systems on a large scale.” (Margareta Petrovic, Global Managing Partner, and Dr. KPS Sandhu, Head of Global Strategic Initiatives, Cybersecurity, Tata Consultancy Services, 2024)

Shadow AI enables significant data confidentiality risks

The unauthorized use of AI in federal agencies poses major threats to data confidentiality, potentially exposing sensitive government information. This risk exists in both public and private AI systems, requiring careful management and robust security measures.

  • Information exposure

    • Government data faces unauthorized exposure when entered into public or private AI systems used without proper authorization.
    • Classified information risks inadvertent incorporation into AI training datasets or outputs, compromising security.
  • PII and PHI vulnerabilities

    • Sensitive personal data may leak through AI outputs, as tools can inadvertently incorporate PII or PHI into their generated content.
    • Even private AI models threaten confidentiality when trained on sensitive government data, potentially revealing confidential details.
  • Data transmission risks

    • Unsecured data transfer to AI systems increases the likelihood of interception by malicious actors.
    • Shadow AI bypasses established data protection protocols, potentially exposing information during transmission or processing.
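
As a concrete example of the data protection protocols mentioned above, a pre-submission screen can block text containing obvious PII before it is transmitted to any AI tool. The sketch below is a minimal illustration; the regular expressions are simplified placeholders, and a production data loss prevention control would be far more thorough.

import re

# Simplified example patterns; a real DLP ruleset would be far broader.
PII_PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "phone": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def screen_for_pii(text):
    """Return the names of any PII patterns found in the text."""
    return [name for name, pattern in PII_PATTERNS.items() if pattern.search(text)]

prompt = "Summarize the case file for John Doe, SSN 123-45-6789."
violations = screen_for_pii(prompt)
if violations:
    print("Blocked before transmission: prompt contains " + ", ".join(violations))
else:
    print("Prompt cleared for the sanctioned AI tool.")

A screen like this can sit in a browser extension, an internal AI gateway, or an email filter, reusing whichever enforcement point the agency already controls.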

Operational security risks

Organizations must have solid governance structures in place

“Without governance, organizations can’t see what tools employees use and how much sensitive information is at risk.” (Michael Crandell, CEO, Bitwarden, 2024)

Shadow AI creates data integrity barriers

The use of unverified AI outputs in federal agencies can gradually erode data quality, compromising the integrity of government records and decision-making processes. Vigilant verification and quality control measures are essential to mitigate these risks.

  • Cumulative data degradation

    • Unchecked AI outputs erode data quality over time through the repeated use of unverified information.
    • Consistent reliance on low-quality, unverified AI-generated data gradually compromises the integrity of organizational records.
  • Decision-making impacts

    • Critical government operations risk flawed analyses when integrating unverified AI outputs into official databases.
    • Long-term planning and policy formulation may skew due to the compounding effect of multiple inaccurate AI-generated data points.
  • Cascading errors

    • Undetected AI-generated errors propagate rapidly across interconnected government systems and databases.
    • Data inconsistencies amplify across agencies, potentially leading to widespread misinformation in federal records.
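
One way to operationalize the verification these risks call for is a provenance-and-review gate: AI-generated drafts carry metadata identifying their origin and cannot enter an official record without a reviewer's sign-off. The sketch below illustrates the idea under an assumed record structure; the field names and workflow are hypothetical, not a prescribed implementation.

from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DraftRecord:
    content: str
    ai_generated: bool
    source_tool: str = ""   # which AI tool produced the draft, if any
    reviewed_by: str = ""   # empty until a human reviewer signs off
    created_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def commit_to_official_record(record, store):
    """Append a draft to the official store; unreviewed AI output is rejected."""
    if record.ai_generated and not record.reviewed_by:
        raise ValueError("AI-generated content requires human review before commit.")
    store.append(record)

official_store = []
draft = DraftRecord("Benefits eligibility summary ...", ai_generated=True,
                    source_tool="sanctioned LLM")
draft.reviewed_by = "j.smith"  # reviewer sign-off recorded before commit
commit_to_official_record(draft, official_store)
print(f"{len(official_store)} record(s) committed with provenance metadata.")

Because the provenance fields travel with the record, downstream systems can trace and, if needed, quarantine AI-derived data when errors surface, limiting the cascading effects described above.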

Data management and data integrity issues

Data plays a key role in all AI systems

“Data quality, quantity, diversity, and privacy are critical components of data-driven applications, and each presents its own set of challenges.” (Al-Khalifa et al., Applied Sciences, 2023, 13(12):7082)

Shadow AI arises from multiple failure points

Shadow AI refers to the unsanctioned or uncontrolled use of AI tools and systems within federal agencies, operating outside standard IT governance processes. The three main characteristics of Shadow AI are:

  1. Lack of official approval

    Characteristic | Risk to the Agency
    Employees use AI tools without proper authorization. | Violation of federal procurement regulations.
    Circumvention of established procurement processes. | Misallocation of government resources.
    Absence of formal AI use confirmation/documentation. | Non-compliance with national AI governance frameworks.

  2. Use of unauthorized public AI platforms

    Characteristic | Risk to the Agency
    Staff utilize unsecure public AI tools for government tasks. | Potential breach of classified information protocols.
    Processing of sensitive data on external platforms. | Violation of national data protection laws.
    Dependence on AI systems with unverified security. | Compromise of federal cybersecurity and IT standards/rules.

  3. Lack of formal governance and oversight

    Characteristic | Risk to the Agency
    AI use without adherence to security protocols. | Increased vulnerability to sophisticated cyberattacks.
    Absence of compliance checks for AI systems. | Failure to meet national AI ethics and trust guidelines.
    Insufficient monitoring of AI decision processes. | Liability for faulty, erroneous, or discriminatory outcomes in operations.

(Source: Binta, Kaushal, and Pandi, 2024)


About Info-Tech

Info-Tech Research Group is the world’s fastest-growing information technology research and advisory company, proudly serving over 30,000 IT professionals.

We produce unbiased and highly relevant research to help CIOs and IT leaders make strategic, timely, and well-informed decisions. We partner closely with IT teams to provide everything they need, from actionable tools to analyst guidance, ensuring they deliver measurable results for their organizations.

What Is a Blueprint?

A blueprint is designed to be a roadmap, containing a methodology and the tools and templates you need to solve your IT problems.

Each blueprint can be accompanied by a Guided Implementation that provides you access to our world-class analysts to help you get through the project.

Talk to an Analyst

Our analyst calls are focused on helping our members use the research we produce, and our experts will guide you to successful project completion.

Book an Analyst Call on This Topic

You can start as early as tomorrow morning. Our analysts will explain the process during your first call.

Get Advice From a Subject Matter Expert

Each call will focus on explaining the material and helping you to plan your project, interpret and analyze the results of each project step, and set the direction for your next project step.


Author

Paul Chernousov

Contributors

  • Info-Tech Research Group, Matthew Bourne, Managing Partner II
  • Info-Tech Research Group, Justin Eggstaff, Managing Partner
  • Info-Tech Research Group, Theo Antoniadis, Principal Advisory Director
  • Info-Tech Research Group, Kate Wood, Practice Lead, Security and Privacy
  • Info-Tech Research Group, Andrew Sharp, Research Director, Infrastructure and Operations

Search Code: 106060
Last Revised: October 22, 2024
