- AI technologies are being rapidly adopted by both technical and nontechnical staff. As an IT or business leader, you need to mitigate the risk of improper or harmful use of AI by quickly establishing responsible AI guiding principles to underpin your organization’s approach to AI technology deployments.
- These principles should serve as an ethical foundation for policies and governance practices. Without these principles, and a proper strategy to implement them, the risks associated with deploying AI solutions will negatively impact business outcomes.
Our Advice
Critical Insight
Create awareness among the CEO and C-suite of the benefits of, and need for, establishing responsible AI guiding principles that provide safeguards for designing, developing, and deploying AI-based solutions.
Impact and Result
In this publication, we will help you get key stakeholders on board and define AI guiding principles unique to the culture and values of your organization.
- Leverage our foundational principles to kick-start a conversation with your key stakeholders on adapting your own principles.
- Customize our template to create a boardroom-ready presentation that will help educate staff on what your principles are and how they were developed.
- Follow our lifecycle to keep your principles current.
Member Testimonials
After each Info-Tech experience, we ask our members to quantify the real-time savings, monetary impact, and project improvements our research helped them achieve. See our top member experiences for this blueprint and what our clients have to say.
9.5/10 — Overall Impact
$19,350 — Average $ Saved
12 — Average Days Saved
| Client | Experience | Impact | $ Saved | Days Saved | Testimonial |
| --- | --- | --- | --- | --- | --- |
| City of Winter Park | Guided Implementation | 9/10 | $13,700 | 5 | The presenter is very knowledgeable. I have to work on getting the policy developed. |
| ISSofBC | Guided Implementation | 10/10 | $25,000 | 18 | Altaz is a fantastic resource! |
What is responsible AI?
Responsible AI is a clearly defined strategy that governs the ethical development and use of AI technologies within your organization. This strategy, which typically includes a specific set of policies, is meant to reduce the risk of improper or harmful effects of AI, such as inaccurate or biased outputs and cybersecurity threats like deepfakes.
What are the principles of responsible AI?
Each organization has its own policies for the responsible use of AI, but most share a common set of principles, including:
- Privacy: The privacy of AI users must be respected and safeguarded.
- Fairness and inclusivity: AI systems must be designed to detect and avoid bias (a minimal code sketch follows this list).
- Accountability: The use of AI must include human supervision.
- Transparency: AI technology must be explainable and understandable.
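Principles like these become enforceable only when they are translated into concrete checks. As a simple illustration, the minimal Python sketch below measures a demographic parity gap, one common way teams operationalize a fairness principle. The metric choice, threshold, and data here are illustrative assumptions, not part of any standard or of Info-Tech's methodology.

```python
# Minimal, illustrative fairness check: demographic parity gap.
# Assumes binary decisions (1 = favorable outcome) and a group label
# per record; the 10-point threshold is a hypothetical policy choice.
from collections import defaultdict

def demographic_parity_gap(decisions, groups):
    """Return (gap, per-group favorable-outcome rates)."""
    totals, favorable = defaultdict(int), defaultdict(int)
    for decision, group in zip(decisions, groups):
        totals[group] += 1
        favorable[group] += decision
    rates = {g: favorable[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

decisions = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
gap, rates = demographic_parity_gap(decisions, groups)
print(f"Rates by group: {rates}; gap: {gap:.2f}")
if gap > 0.10:
    print("Check failed: escalate the model for human review.")
```

A production program would use richer metrics and dedicated tooling; the point is simply that each principle should map to at least one check someone can actually run.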
Develop Responsible AI Guiding Principles
Find your north stars for responsible AI.
Analyst Perspective
Find your organization's north stars for responsible AI.
For many previous technology revolutions (e.g., computerization, digitalization), business and technology leaders had time to plan their approach and control the implementation. In the case of AI (and especially generative AI), the horse had already left the barn before most organizational leaders could even begin to grasp the full complexity of opportunity and risk inherent in the technologies.
Nontechnical staff were exploring the productivity gains offered by ChatGPT and similar tools, and technical staff were deploying AI models either formally or informally (through citizen AI efforts), before leadership could get policies and governance in place. Complicating things further, the technology is evolving so rapidly from week to week that traditional policy and governance approaches struggle to keep pace.
This pace has made the need for responsible AI practices essential. Organizations need to quickly understand the new risks that AI-based solutions can introduce. Within this, responsible AI guiding principles are foundational, providing a framework to set safeguards for technical and nontechnical staff when working with AI technologies.
Think of your AI principles as the rhythm that staff move to in relation to AI: not a set of rules per se, but a way of conducting ourselves when working with the technologies so that the organization leverages their innovative potential while protecting shareholder value from risk and failure.
Travis Duncan
Research Director, Info-Tech Research Group
Executive Summary
Your Challenge
- AI technologies are being rapidly adopted by both technical and nontechnical staff. As an IT or business leader, you need to mitigate risk by quickly establishing guardrails that will define and guide the organization's approach to AI technology deployments.
- Guiding AI principles are the guardrails you need. They are a foundation for policies and governance practices. Without a proper strategy and responsible AI guiding principles, the risks of deploying AI solutions will negatively impact business outcomes.
Common Obstacles
- Getting key stakeholders to participate and to understand the importance of AI principles. Business stakeholders need to participate in the development and establishment of responsible AI guiding principles to optimize investments and to minimize risks involved with AI-based solutions.
- Turning principles into practice. Establishing the principles is one thing, but embedding them into the day-to-day operations of both technical and nontechnical staff is another challenge altogether. Good communication and executive sponsorship are key to ensuring principles translate into action.
Info-Tech's Approach
In this publication, we will help you get key stakeholders on board and define AI guiding principles unique to the culture and values of your organization.
- Leverage our foundational principles as they are or use them to kick-start a conversation with key stakeholders on adapting your own principles.
- Customize our boardroom-ready Responsible AI Guiding Principles Presentation Template to help educate staff on what your principles are and how they were developed.
- Follow our lifecycle to keep your principles current.
Info-Tech Insight
Create awareness among the CEO and C-suite of the benefits of, and need for, establishing responsible AI guiding principles that provide safeguards for designing, developing, and deploying AI-based solutions.
AI technologies are being rapidly adopted
As an IT or organizational leader, you need to provide direction and inform policies on the appropriate use of AI for your organization.
- The rapid proliferation and adoption of AI technologies by both technical and nontechnical staff has largely found IT and business leaders playing catch-up in terms of drafting, ratifying, and socializing acceptable use policies and statements.
- Without such measures in place when deploying AI solutions, organizations open themselves up to risks that could negatively impact business outcomes. Threats to reputation and trust, privacy and security, and inclusiveness and transparency are among the risks that can jeopardize organizations when the adoption of AI technologies outpaces the implementation of AI policies and governance.
This guide will help you develop a solid foundation for your organization's use of AI by helping you to define responsible AI guiding principles.
35% — Globally, only 35% of consumers trust organizations' use of AI.
(Source: Analytics Vidhya, 2023)
77% — 77% of people feel organizations must be held to account for misuses of AI.
(Source: Analytics Vidhya, 2023)
AI failures can carry a heavy cost
Recent history has shown that even the most well-funded organizations can be negatively impacted when their AI deployments lack a well-rounded, risk-aware approach.
Amazon HR Hiring Tool
Amazon's team began building a model in 2014 to review job applicants' resumes and identify top candidates. The resumes used to train the model spanned a ten-year period, and most came from men. After four years and significant investment, the hiring application was abandoned. (Source: "Amazon scraps secret AI recruiting tool that showed bias against women." Reuters, October 10, 2018.)
Google Vision Cloud
AlgorithmWatch exposed racial bias in Google Vision Cloud, an automated image-labeling service. In a viral experiment, the service labeled a thermometer held by a dark-skinned individual a "gun," but in a similar image it labeled a thermometer held by a light-skinned person an "electronic device." (Source: "Google apologizes after its Vision AI produced racist results." AlgorithmWatch, April 7, 2020.)
Microsoft Chatbot Tay
The bot's goal was to improve Microsoft's understanding of conversational language used by young adults online. Within hours of the bot's launch, Twitter users discovered flaws that caused the bot to respond to certain questions with offensive answers. (Source: "Microsoft's artificial intelligence Twitter bot has to be shut down after it starts posting genocidal racist comments one day after launching." Daily Mail, March 24, 2016.)
What is responsible AI?
To mitigate risks to the corporation and staff, organizations require a responsible approach to developing, implementing, and using AI systems.
- As defined by the World Economic Forum, "Responsible AI is the practice of designing, building and deploying AI in a manner that empowers people and businesses, and fairly impacts customers and society – allowing companies to engender trust and scale AI with confidence."
- In essence, responsible AI is the practice of taking deliberate action to mitigate harm to people, corporations, and society.
- The terms responsible, ethical, and trustworthy are often used interchangeably, and people who use the terms often have similar goals and objectives.
- When implementing responsible AI, it is best practice to define AI guiding principles and make these transparent to internal and external stakeholders.
Info-Tech's Foundational Responsible AI Principles
Info-Tech recommends six core AI guiding principles that were distilled from industry frameworks and practitioner insights. This research will help you use our core six as a jumping-off point in defining the right principles for the unique needs of your organization.
What are guiding principles?
In general, an enterprise principle is a decree or standard that serves as a foundation for decision making at all levels of the organization, supporting the fulfillment of the organization's goals and mission.
In the development and use of AI technologies, guiding principles are foundational inputs into usage policies, model development approaches, and responsible AI practices more generally.
Key things to note when considering the development of AI guiding principles include:
- AI principles are not developed in isolation. They are derivatives of corporate principles and should be informed by an AI strategy.
- Other important inputs that can inform AI principles include:
- Corporate mission, vision, and culture
- Industry, legal, and regulatory frameworks
- Internal ordinances, ethical frameworks, and executive orders
- Risk management frameworks, etc.
- Principles are made actionable by serving as inputs into AI governance frameworks and policies. Through education and training, principles should inform how both technical and nontechnical staff approach the use of AI in their day-to-day roles (a minimal sketch of this idea appears after the diagram below).
A sample diagram showing the inputs to, and outputs from, defining responsible AI guiding principles.
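As one hedged illustration of how principles feed downstream artifacts, each principle can be captured as a structured record, with its rationale and implications, so a single source of truth can generate policy text, training material, and review checklists. The field names and example content below are hypothetical, not Info-Tech's schema.

```python
# Hypothetical sketch: one structured record per guiding principle,
# reused to generate policy text, training decks, and checklists.
from dataclasses import dataclass, field

@dataclass
class GuidingPrinciple:
    name: str
    statement: str                # the principle itself
    rationale: str                # why it was selected
    implications: list[str] = field(default_factory=list)  # day-to-day conduct

privacy = GuidingPrinciple(
    name="Data Privacy",
    statement="Privacy values will guide our choices for AI system design.",
    rationale="Derived from corporate values and applicable privacy law.",
    implications=[
        "Do not enter personal data into external AI tools.",
        "Run a privacy impact assessment before each new AI use case.",
    ],
)

# Render a policy-ready summary for a presentation or handbook.
print(f"## {privacy.name}\n{privacy.statement}\nWhy: {privacy.rationale}")
for item in privacy.implications:
    print(f"- {item}")
```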
Responsible AI principles should be in place to help inform an overall AI roadmap
The development of principles is part of a journey that begins with corporate strategy and mission and ends in the deployment and implementation of AI models.
1. Consider Corporate Strategy and Mission: Assemble key stakeholders (e.g., board members, C-suite, shareholders) to identify the organization's relevant business strategy, mission, and objectives.
2. Define AI Strategy: Work with key business stakeholders to provide a view of how IT will invest resources to leverage AI technologies to best support business strategy.
3. Develop Responsible AI Principles: Assess your current AI maturity level and identify your desired target state to address gaps.
4. Build a Responsible AI Roadmap: Define policies, integrate tools, upgrade processes, and test and monitor for compliance.
Download Info-Tech's Build Your Generative AI Roadmap blueprint
If you are further along in your AI journey than developing responsible AI principles and are looking to build a roadmap of AI initiatives, see Info-Tech's Build Your Generative AI Roadmap blueprint.
The benefits of implementing responsible AI principles are multiple
Avoid the costs of AI failures and leverage the benefits of responsible AI.
The business case for implementing responsible AI principles is clear. With a relatively small internal effort, organizations can establish safeguards that help them navigate the risks associated with AI implementations more effectively and maximize the value of business outcomes.
Benefits include:
Improved end-user confidence and trust
Improving trust in an AI application increases its adoption by users and customers.
Improved risk awareness and oversight
AI principles improve the understanding of AI risks and provide structure to your governance and risk management practices.
Improved decision making
Incorporating AI principles such as accuracy, reliability, transparency, and fairness into the model/system development lifecycle improves model outcomes, enabling more effective decision making.
Know the barriers to successfully implementing AI principles
52% — 52% of companies claim to practice some form of responsible AI.
(Source: MIT, 2022)
79% — Of those 52%, however, 79% say those implementations are limited in scale and scope.
(Source: MIT, 2022)
- Awareness among executive and business stakeholders of the importance of principles and their role in helping to ratify and promote them: Business stakeholders need to participate in the development and establishment of responsible AI principles. However, helping them understand why principles matter, and why their involvement in creating them is important, can be a challenge. Creating a sense of urgency and importance among key stakeholders is a must.
- Getting alignment among key stakeholders on the right principles for the organization: The topic of principles can force discussions that are not typical in business settings (e.g., personal morals and ethics). Stakeholders can hold especially strong convictions, which can lead to divergent viewpoints. Even when proposing broadly accepted industry principles, getting alignment among key stakeholders on the right principles for your organization can be a challenge.
- Turning principles into action: One of the greatest challenges for organizations is to turn principles into action by embedding them into the day-to-day operations of both technical and nontechnical staff. Practically, this means grounding data operations and machine learning teams in data ethics design principles, factoring principles into design and technical builds and into data processing and data sharing frameworks, and considering principles when evaluating new use cases (a minimal sketch of one such mechanism follows this list).
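As one hypothetical example of embedding principles into day-to-day operations, a release gate can refuse to deploy a model until every principle-derived review has been recorded. The check names and rules below are assumptions for illustration, not a prescribed implementation.

```python
# Hypothetical release gate: each required check traces back to a
# responsible AI principle; names and rules are illustrative only.
REQUIRED_CHECKS = {
    "bias_review",           # Fairness & Bias
    "privacy_assessment",    # Data Privacy
    "model_card_published",  # Explainability & Transparency
    "security_scan",         # Safety & Security
}

def release_gate(completed_checks):
    """Allow deployment only when all principle-derived checks pass."""
    missing = REQUIRED_CHECKS - set(completed_checks)
    if missing:
        print(f"Release blocked; missing checks: {sorted(missing)}")
        return False
    print("All responsible AI checks recorded; release may proceed.")
    return True

release_gate({"bias_review", "security_scan"})  # blocked: two checks missing
```

Wiring such a gate into an existing CI/CD or model-release process makes the principles visible at exactly the moment they matter, rather than leaving them as shelfware.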
Info-Tech's approach
- A comprehensive yet flexible approach that can help you meet new challenges as they arise.
- A practical and tactical framework and template that will help you save time.
- Industry best practices and practitioner-based insights you can leverage.
Info-Tech's methodology for developing responsible AI guiding principles
The slides ahead will take you through the following steps, all with the goal of customizing Info-Tech's Responsible AI Guiding Principles Presentation Template.
| 1. Evaluate Key Inputs | 2. Draft AI Principles | 3. Prepare to Present |
| --- | --- | --- |
| 1.1 Review Info-Tech's Foundational Principles: Get to know Info-Tech's core six AI principles and some use cases behind them. | 2.1 Engage Key Stakeholders: Prepare to engage executives and other key stakeholders to ensure their participation in the process of defining AI principles. | 3.1 Outline Next Steps: Create a roadmap of next steps that your AI principles will help inform (i.e., policies and governance structures), as well as procedures for monitoring principles for adoption and relevancy. |
| 1.2 Evaluate Other Key Inputs: Look at other essential inputs to defining your organization's principles, including industry standards and regulatory frameworks and guidelines. | 2.2 Draft Responsible AI Principles: Identify the core principles (as well as their rationale and implications) that will guide the development of responsible AI for the organization. | 3.2 Complete the Responsible AI Guiding Principles Presentation Template: Customize Info-Tech's Responsible AI Guiding Principles Presentation Template with the outputs of the previous steps as you prepare to educate and train staff on your responsible AI principles. |
Our deliverable: Responsible AI Guiding Principles Presentation Template
With some minimal customization (which the slides ahead will help you with), our boardroom-ready presentation template will help you educate and train staff.
Download Info-Tech's Responsible AI Guiding Principles Presentation Template
Use the template to help explain the importance of responsible AI to your key stakeholders, as well as technical and nontechnical staff.
Use the template to help explain the key inputs that informed your principles (and why).
Use the template to communicate your principles, detailing for each the rationale that led to its selection as a principle as well as its initial implications for individual and group conduct.
Step 1.1: Review Info-Tech's Foundational Principles
Review Info-Tech's core six AI principles and some use cases behind them.
Our Responsible AI Principles
Leveraging industry best practices and practitioner insights, Info-Tech has identified six foundational AI principles.
Responsible AI
- Accountability: AI actors will be accountable for the functioning of AI systems.
- Fairness & Bias: We will endeavor to ensure any models/systems are fair and free from bias.
- Data Privacy: Privacy values such as anonymity, confidentiality, and control will guide our choices for AI model/system design.
- Explainability & Transparency: AI models/systems should provide meaningful information and be transparent and explainable to end users.
- Safety & Security: AI models/systems should be resilient, secure, and safe throughout their entire lifecycle.
- Validity & Reliability: AI systems should perform reliably and as expected.