Tech & Sourcing @ Morgan Lewis

TECHNOLOGY TRANSACTIONS, OUTSOURCING, AND COMMERCIAL CONTRACTS NEWS FOR LAWYERS AND SOURCING PROFESSIONALS

Innovation: all companies want their outsourcing providers to be at the forefront, whether accomplished by proposing ideas, implementing solutions as part of their business-as-usual services, or offering savings based on productivity commitments or other demonstrable business impact. Some outsourcing providers may even use innovation as a key differentiator during the sales cycle, putting real dollars at risk if innovation projects don’t realize promised savings. And what innovation is more top of mind presently than the use of artificial intelligence?

Companies are now looking to their outsourcing providers to design and explore AI use cases in IT and business process outsourcing arrangements to bolster security monitoring (thus lowering the risk of data breaches and ransomware attacks), enable more efficient capacity management (less excess capacity = lower costs), streamline business processes (fewer duplicative resources), and enhance user experience (more productivity, more revenue).

Senior management may be demanding to know how their teams are accelerating the use of AI to get out ahead of competitors. The process may hit the ground running and move very quickly—all to come to a screeching halt when compliance and legal enter the picture.

While business units are rushing to leverage AI (particularly generative AI), compliance and legal teams are rushing to understand the risk implications of AI usage, from confidentiality and intellectual property issues to quality and implicit bias concerns, and put guardrails in place to ensure “responsible” use of AI. And here lies our first conundrum: how can companies best require their outsourcing providers to leverage generative AI within their AI policy guidelines without stifling innovation?

To consider the conundrum, let’s look at some common components of AI policies:

Definition of AI

What is considered AI? While the current discussions around AI largely focus on generative AI (GenAI), many policies define AI to include a wide range of automation/AI solutions and technology, some of which may have been around for years. A broad definition may be a good (and responsible) feature and may call attention to issues with the use of automation that have not been addressed to date. However, a broad definition may also make certain other requirements, such as disclosure and quality checks, challenging to satisfy, or at least require diligence and work, i.e., resources and time.

Disclosure and Approval of AI Use

Depending on the definition of AI, the outsourcing provider’s willingness to disclose all of its use of AI in the provision of the services may be an issue. Some providers now claim that they use AI in all aspects of their business. This raises various questions: do they need to disclose that use and obtain customer approval? Do they need customer approval if they want to change their office collaboration tools?

There are many reasons why a company must understand where and how AI is used in its services and environments, particularly where the AI processes train large language models (LLMs) using company data or a company's customer data. But are there tools for which approval may not be necessary or whose use poses no inherent risk?

As companies work to understand the risks and implications of AI, broad policies may be the best defense, but as providers demonstrate safe use cases the requirements may soften. As with earlier concerns about security in the cloud, we are seeing some providers looking ahead and proactively offering terms that demonstrate how they use AI in a responsible manner (attempting to allay at least some concerns).

Noninfringement and Ownership of Training Algorithms and Output

Many company policies and/or contract terms require the outsourcing provider to ensure that the AI tools it uses, their training algorithms, and the output they generate are noninfringing and that the output can, in fact, be owned by the company. The ability of a company to demonstrate chain of title to input and output is critical for a number of reasons, including in situations where a company wants to sell a product, an asset, or potentially its business.

As attention to the treatment of AI and copyright heightens, some providers are raising concerns regarding their ability to ensure ownership of, and therefore the transfer to the company of, output generated using AI tools. This tension is the focus of much negotiation in the current AI intellectual property allocation landscape.

Data Sources and Using Company Data to Train Large Language Models

Perhaps one of the most important considerations when using AI is understanding what data sources are being used to train the LLMs and produce the output. Are the sources all considered company data, and does the company have the right to use its data for the intended purposes? Will the LLM instance trained using company data be used for other purposes? Again, many providers are getting ahead of these questions and, in an effort to be transparent, describing how their LLMs use data for training and to generate output. Not all solutions may be acceptable to a company, but there may be ways to modify the offering to mitigate risks.

Output Quality

When using LLMs to answer a question, the answer may sound good at first pass, but it may not always be correct on precise details. Many companies' policies require the provider to verify and monitor the security, quality (including anti-bias), and accuracy of any output of any AI tools. Some providers are pushing back on the grounds that, if they need to retain headcount to monitor quality and output, then that diminishes the productivity benefits of such tools. Depending on the criticality of the output and the use of the tools for business operations, the requirement to monitor and confirm quality and accuracy will likely continue, at least for now.

Conclusion

The contributors of Tech & Sourcing @ Morgan Lewis find that the benefit of writing a blog is that you can identify and mull over the issues even when there is no one-size-fits-all solution. The purpose of this series is to pinpoint the potential issues at the intersection of AI and outsourcing and start a discussion on the best ways to tackle them for your particular situation.

This blog is the first in our Cracking AI and Outsourcing Conundrums Series. Look out for more blogs in this series as we consider the applications and implications of AI in outsourcing arrangements.