Your organization has built its AI strategy, developed some high-value use cases, and begun the process of acquiring and licensing a Gen AI platform. A crucial part of that process will be identifying and mitigating risk, and the Gen AI vendor contract is a good place to start.
Our Advice
Critical Insight
- As you prepare for contract negotiation, take the opportunity to build risk awareness about the nature of these offerings and how you may be impacted.
Impact and Result
- Understand how the major areas of risk in Gen AI products may manifest in the contracts for these products.
- Come to a consensus on your level of risk tolerance before entering into negotiations.
- Determine which risks can be addressed in negotiations, which are to be mitigated operationally, and which cannot be mitigated.
Prepare to Negotiate Your Generative AI Vendor Contract
Build risk awareness: You can’t begin to negotiate until you understand where your real risk points are.
EXECUTIVE BRIEF
Analyst Perspective
Build risk awareness: You can’t begin to negotiate until you understand where your real risk points are.
Generative AI (Gen AI) has arrived, and your organization has decided to leverage it. The business sees value in exploiting Gen AI products, derived from large language models, for use cases such as data analysis and summarization, copywriting, image generation, and code generation. However, excitement around the potential of these tools is tempered by awareness of their risk landscape. What considerations must be surfaced as you prepare to negotiate a Gen AI contract? Which risks can be addressed within the contract, and which must be mitigated operationally? Assess your exposure to the prominent Gen AI concerns and identify where you will mitigate or otherwise respond to each risk. Throughout, employ a risk-based approach so you understand how far you are leaning into this space. Emily Sugerman
Executive Summary
[Summary table: Your Challenge, Common Obstacles, and Info-Tech's Approach. Each is detailed in the sections below.]
Info-Tech Insight
In preparation for contract negotiation, take the opportunity to build risk awareness about the nature of these offerings and how you may be impacted.
Your challenge
This research is designed to help organizations that want to:
- Contract with a Gen AI platform provider, or a provider incorporating AI into their suite of existing products, in a rapidly growing yet still volatile market.
- Secure a tool/platform that will enable the Gen AI-enabled use cases that the organization has determined are part of its roadmap.
- Ensure that the organization enters contract negotiations reasonably informed of the risks currently understood as inherent in this technology, and aligned on the position it will take on these risks in the negotiation process.
The Gen AI market is projected to grow to $1.3 trillion by 2032 (Bloomberg, 2023).
ChatGPT received 1.4 billion visits in August 2023 (Similarweb, 2023).
But it’s not clear whether these tools are profitable yet:
GitHub Copilot was estimated to be losing approximately $20 per user every month in early 2023 (The Wall Street Journal, 2023).
Common obstacles
Gen AI risks will factor into your negotiation preparation.
- These tools present potential risk if users are already using AI without clear guardrails or guidelines set by IT.
- Organizations are unclear on whether the company’s data is safe in a Gen AI tool. What control do you potentially relinquish when providing inputs into the tool?
- Class action lawsuits are now hitting Gen AI providers, especially related to copyright infringement. If third-party copyright violations do occur through your company’s use of the tool, will your provider indemnify you against them or will you be responsible?
- Open lawsuits and an immature regulatory environment mean the Gen AI providers’ legal obligations might change: how will this affect the long-term viability of the product and your ability to use it as anticipated?
Entering the era of Gen AI
Contextualizing Gen AI in the broader AI landscape.
Artificial Intelligence (AI)
A field of computer science that focuses on building systems to imitate human behavior. Not all AI systems have learning behavior; many systems operate on preset rules, such as customer service chatbots.
Machine Learning (ML) and Deep Learning (DL)
An approach to implementing AI, whereby the AI system is instructed to search for patterns in a data set and then make predictions based on that set. In this way, the system “learns” to provide accurate content over time (think of Google’s search recommendations). DL is a subset of ML algorithms that leverages artificial neural networks to develop relationships among the data.
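To make the ML definition above concrete, here is a minimal, self-contained sketch (illustrative only, not any vendor's API) of the core ML loop: a model "learns" a pattern from a training data set, then makes predictions on inputs it has not seen.

```python
# Minimal illustration of machine learning: fit a pattern to training
# data, then predict on a new input. All values here are made up.

def fit_line(xs, ys):
    """Ordinary least-squares fit of y = a*x + b to the training data."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
    b = mean_y - a * mean_x
    return a, b

# Training data: the underlying pattern is roughly y = 2x + 1.
xs, ys = [1, 2, 3, 4], [3.1, 4.9, 7.2, 8.8]
a, b = fit_line(xs, ys)

# "Prediction": apply the learned pattern to an unseen input.
predicted = a * 5 + b
```

The same learn-then-predict structure underlies far larger systems; deep learning replaces the straight line with layered artificial neural networks that can represent much richer relationships among the data.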
Generative AI (Gen AI)
A form of ML whereby, in response to prompts, a Gen AI platform can generate new outputs based on the data it has been trained on. Depending on its foundational model, a Gen AI platform will provide different modalities and thereby use case applications.
Key concepts
- Artificial Intelligence (AI)
- Machine Learning (ML)
- Responsible AI
- Generative AI (Gen AI)
- Natural Language Processing (NLP)
- ChatGPT
- Inputs/Prompts
- Outputs
- Hallucination
Understand Gen AI and its commercial models
What kind of platform will you be using?
What is Gen AI?
A form of ML whereby, in response to prompts, a Gen AI platform can generate new outputs based on the data it has been trained on. Its outputs include text, code, images, audio, and video.
- Direct Access
- Product Extensions
For more on foundational AI concepts and industry use cases, see Info-Tech’s An AI Primer for Business Leaders.
Info-Tech Insight
The direct access model presents the consumer with some risks equivalent to those of product extensions and some that differ. While customers may lack negotiation leverage in this scenario, they still need to understand and evaluate the contract risks.
Derive your contract negotiation position from preexisting responsible AI principles
Your organization should have already defined its responsible AI principles and have a reasonable understanding of AI capabilities, opportunities, and risks before you begin negotiating with providers.
Develop Responsible AI Guiding Principles: Use this guide to establish foundational responsible AI guiding principles. They provide a framework of safeguards for technical and nontechnical staff working with AI technologies, so the organization can leverage AI's innovative potential while protecting shareholder value from risk.
Where do you fall on the vendor manager Gen AI risk evaluation continuum?
Use this deck to help identify where you fall on the continuum.
Source: Info-Tech's Adopt a Structured Acquisition Process to Ensure Excellence in Gen AI Outcomes blueprint
Identify areas of contract risk aligned with responsible AI principles
Insight summary
Assess the space
In preparation for contract negotiation, take the opportunity to build risk awareness about the nature of these offerings and how you may be impacted.
Manage risk
Learn the difference between the vendor’s standard consumer license terms and enterprise terms and assume initial terms will favor the vendor. In such an unsettled space, establish clarity beforehand about your risk tolerance profile and work to secure terms more favorable to you.
Know when you want to walk away
Terms focused on liability and security will likely be rigid, necessitating a risk analysis of accepting them on a take-it-or-leave-it basis. Other terms may be more negotiable (e.g. the solution governance controls the customer requires before it will purchase a vendor's solution).
Deliverable
The key deliverable where you will document the outcomes from the activity in this deck is:
Prepare to Negotiate Your Generative AI Contract Risk Assessment Tool
Use this tool to help you identify the major areas of risk and roadblocks you will want to pay attention to in the negotiation process for a Gen AI tool.
Understand Gen AI risks
Especially as they pertain to your negotiation process.
Issues related to intellectual property (IP) are the most prominent concerns about the development and use of Gen AI tools. If an organization adopts these tools without safeguards, the problem of IP violations could scale rapidly.
Lawsuits are raising the question of whether the methods of training existing large language models violate copyright law and open-source licenses.
If violations do occur, Gen AI customers must understand who is liable for third-party claims of infringement – the provider or you, the customer?
This step could involve the following participants:
- CIO
- Chief Data Officer
- AI Ethics Officer
- Data Governance Specialist
- AI Strategy Manager
- Vendor Manager
- AI Governance Manager
- Risk & Compliance Analyst
- Security Analyst
Understand risks and roadblocks
Risk
- Something that could potentially go wrong.
- You can respond to risks in one of four ways:
- Eliminate: take action to prevent the risk from causing issues.
- Reduce: take action to minimize the likelihood/severity of the risk.
- Transfer: shift responsibility for the risk away from IT, toward another division of the company.
- Accept: where the likelihood or severity is low, it may be prudent to accept that the risk could come to fruition.
Roadblock
- Roadblocks are concerns that aren’t strictly “risks” but that we must still address when acquiring the Gen AI tool.
- We respond to roadblocks by generating work items.
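The distinction above can be sketched as a simple risk register. This is a hypothetical structure for illustration only; the field names and scoring scale are assumptions, not the schema of Info-Tech's Risk Assessment Tool.

```python
# Hypothetical risk register capturing the four risk responses and the
# risk/roadblock distinction described above. Names are illustrative.
from dataclasses import dataclass
from enum import Enum

class Response(Enum):
    ELIMINATE = "eliminate"  # prevent the risk from causing issues
    REDUCE = "reduce"        # minimize likelihood/severity of the risk
    TRANSFER = "transfer"    # shift responsibility elsewhere
    ACCEPT = "accept"        # tolerate low-likelihood/low-severity risks

@dataclass
class RiskItem:
    description: str
    likelihood: int  # assumed scale: 1 (low) to 5 (high)
    severity: int
    response: Response

@dataclass
class Roadblock:
    description: str
    work_item: str   # roadblocks generate work items, not mitigations

register = [
    RiskItem("Third-party IP infringement claims", 3, 5, Response.TRANSFER),
    Roadblock("No audit trail for outputs", "Ask vendor about roadmap"),
]
```

Keeping risks and roadblocks as distinct record types mirrors the guidance above: risks get a response decision, while roadblocks simply generate work items to be tracked.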
Info-Tech Insight
Terms focused on liability and security will likely be rigid, necessitating a risk analysis of accepting them on a take-it-or-leave-it basis. Other terms may be more negotiable (e.g. the solution governance controls the customer requires before it will purchase a vendor's solution).
Understand the source of the tool’s training data
How transparent is the vendor on the sources of its training data? Did it include copyrighted or protected material?
The excitement around new Gen AI technology and its potential use cases is tempered by the launch of several lawsuits against the companies developing these large language models and offering products derived from them: OpenAI, Meta, Stability AI, Midjourney, Microsoft, and GitHub (ABA Journal, 2023).
The unsettled nature of ongoing lawsuits from the creators of works that made up Gen AI training data means that using the output of these tools produces a level of risk you may or may not be comfortable with. These tools require a massive amount of training data scraped from the internet, and commenters suggest it’s likely these sources include copyrighted material, not just material in the public domain, and data from websites whose terms of use explicitly prohibit this kind of data scraping. As a result, “a court could find Gen AIs problematic under either (i) copyright infringement or (ii) breach of contract” (Zuva, 2023). On the other hand, courts may find that this use of data is defensible under fair use (ABA Journal, 2023). Until these claims have been tested in court, certainty is not possible.
Translate into action items/vendor questions
- Do you know the copyright status of the tool’s training data?
- Can the provider give assurances that its training data was not copyrighted or, if it was, that it was used with permission?
- Will the outcome of the lawsuits served against these companies impact your ability to use the product? If so, how?
- Does the vendor anticipate producing audit trails for its outputs? If not now, is it on the roadmap?
- Is the user expected to do their own due diligence (e.g. reverse image searches of outputs)? Will this be feasible for you? Will a human be reviewing the outputs to mitigate unintended consequences?
Case Study: Getty sues Stability AI for copyright infringement
SOURCE: US District Court for the District of Delaware. Getty Images (US), Inc. v. Stability AI, Inc. 1:23-cv-00135-UNA. 3 Feb. 2023.
From innovation to lawsuits
As expected, the rise of Gen AI brings scrutiny of the provenance of its training data and the legality of its creation.
In February 2023, Getty Images sued Stability AI for copyright infringement, providing false copyright management information, removal or alteration of copyright management information, trademark infringement, unfair competition, trademark dilution, and deceptive trade practices. Getty claims that Stability AI copied more than 12 million of its visual assets and associated metadata, without permission or remuneration and against Getty's terms of use, to create a product that Getty says operates as its direct competitor. The suit also claims that Stability AI commits trademark infringement and dilutes Getty's trademark when its outputs incorporate images that resemble the Getty watermark. As of February 2024, the case is still working its way through the courts.
Clarify what the vendor will do to avoid copyright infringement
How does the vendor anticipate mitigating claims of IP violation?
Some vendors are relying on prospective favorable legal outcomes; others have mitigated risk through their choice of training data. Intellectual property law researcher Andres Guadamuz points to Adobe's assertion that its Firefly model was trained entirely on legal inputs: “This is an indication that they have conducted a thorough investigation of their training sources and are happy that they will not get sued” (Fast Company, 2023).
Customer demand, and even insurer demand, may grow for vendors to protect customers against infringement claims by developing “audit trails” of AI outputs, which “recor[d] the platform that was used to develop the content, details on the settings that were employed, tracking of seed-data’s metadata, and tags to facilitate AI reporting, including the generative seed, and the specific prompt that was used to create the content” (Harvard Business Review, 2023).
“I think it’s really simple. AI systems are not magical black boxes that are exempt from the law, and the only way we’re going to have a responsible AI is if it’s fair and ethical for everyone. So the owners of these systems need to remain accountable. This isn’t a principle we’re making out of whole cloth and just applying to AI. It’s the same principle we apply to all kinds of products, whether it’s food, pharmaceuticals, or transportation.”
– Matthew Butterick, who, with the Joseph Saveri Law Firm, is filing class action lawsuits against Gen AI vendors based on the large language models’ training on copyrighted and open-source material (The Verge, 2022).
Pin down who indemnifies whom
If infringement or damages are claimed by a third party as the result of your organization’s use of the tool, who will be responsible for covering legal costs: you or the provider?
After an analysis of existing public Gen AI terms of service, Waisberg and Lash note that vendors’ terms around indemnification and liability tend to favor the vendor over the customer, and rarely offer the customer proactive remedies (e.g. refunds, replacements) in the event of third-party infringement (Zuva, 2023).
This should be a major area of attention in contract negotiations: “unless you have negotiated a more customer-favorable approach with the provider, you and your colleagues’ use of the tool may subject your company to broad liability and, should your use of the tool result in liability to the company, the terms of use are unlikely to offer much protection from the provider” (Cooley GO, 2023). Where possible, aim “to shift risk to the tool vendor, and reserve rights to remedies in contract terms if a claim is brought based on decisions made using the tool” (Bloomberg Law, 2023).
If you do receive indemnity from the vendor, they will likely have certain requirements that must be met in order to qualify for it.
Translate into action items/vendor questions
- Are we, the customer, indemnified against third-party claims of IP infringement by the provider?
- Do the terms of service require us, instead, to indemnify the provider against infringement claims created through our use of the tool?