How buyers of AI can impose controls on use

Businesses are increasingly turning to AI solutions to enhance existing processes, open up new business avenues, and maintain their competitive edge. As they set about acquiring AI technology, however, it is crucial that they consider the internal and external controls they can exercise over the use of these systems.

Controls serve as a bulwark against risks that may emerge throughout the life cycle of an AI solution, and they are becoming integral components of corporate AI policies. In supplier relationships, these policies must extend externally to cover controls addressing errors in models or their implementation that could lead to biased outcomes. They should also address a supplier’s unregulated use of customer data, and the development of customized solutions using customer data that are then offered to the supplier’s broader clientele. The latter poses a particular challenge where an AI system is intended to confer a competitive advantage.

A paramount consideration is the risk of regulatory non-compliance, particularly with data privacy regulations and the emerging landscape of AI-specific regulation.

However, customers can only enforce their AI policies and impose contractual controls on their suppliers’ use of AI once they have identified that AI is being deployed. With the surge of interest in generative AI and its potential benefits for businesses, many suppliers are keen to highlight the AI within their service offerings. Outside of pure AI engagements, though, identifying AI use requires a clear contractual definition of “AI” tied into a notification process. From the customer’s standpoint, a logical starting point is to consider the risks they aim to mitigate through contractual safeguards and, correspondingly, the controls they want to apply to the supplier’s use of AI to address those risks.

These controls may include requirements to obtain consent for the use of new or updated AI solutions, rights to object to the use of AI or the manner of its application, or appropriate usage restrictions. Broadly, customers can take three approaches when seeking to impose controls on AI usage:

  1. Imposing controls on AI as a concept in general.
  2. Imposing controls specific to the supplier’s current service offerings involving AI.
  3. Imposing controls that align with prevailing legislative compliance standards.

Customers must factor in the broader context of the commercial arrangement when determining the most suitable approach.

If customers are aware that they are procuring an AI solution, the first approach might have limited efficacy. Suppliers are likely to resist obligations aimed at curtailing AI usage altogether if their solution inherently employs machine learning technology, as is increasingly common. Suppliers generally favor the second approach, which allows for a more nuanced and use-case-specific application of controls.

Regarding the third approach, instances are already emerging where the EU AI Act—currently in its final legislative stages—is pivotal in shaping contractual positions.

The EU AI Act adopts a risk-based regulatory framework, delineating four classifications of risk: unacceptable, high, limited, and minimal. The most substantial obligations under the EU AI Act apply to AI systems in the high-risk category and above, and contract drafting practices are aligning with these requirements.

For instance, if a customer is procuring an AI solution that falls within the ‘limited risk’ classification under the EU AI Act, it is appropriate to impose the associated controls on the supplier’s current service offerings. In that case, customers should also seek to include a definition of “prohibited AI” in the contract, tied to the ‘high risk’ and ‘unacceptable risk’ classifications in the EU AI Act, since those levels of risk are not adequately addressed by limited-risk controls.

Although the EU AI Act currently informs drafting approaches, other jurisdictions are contemplating their regulatory stances, and contracting strategies are expected to evolve in tandem with the global regulatory landscape.

Beyond the fundamental question of whether AI use is permitted at all, there are other critical issues that AI customers should address in their contracts with suppliers. Where contracts leave these gaps, compensating controls, such as heightened supplier management, may mitigate some of the resulting risks, but not all of them.

Testing and Monitoring

Once a customer greenlights the use of AI by its supplier, the next pivotal question is how to ensure that the AI system functions as intended. Unlike traditional software procurements, AI systems pose unique testing challenges, particularly around error scenarios, bias, and regulatory compliance.

For complex AI systems, exhaustive pre-implementation testing may prove near-impossible. Instead, customers can mitigate this risk by trialing new AI systems in a pilot phase, for instance by deploying the solution in a single business unit or with a limited dataset, and assessing performance before committing to a full-scale rollout.
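
To make the pilot gate concrete, the sketch below checks pilot outcomes against two acceptance criteria before approving a rollout. The thresholds, field names, and the simple best-versus-worst group comparison used as a bias proxy are illustrative assumptions rather than recommended criteria; real acceptance tests would be derived from the contract and the customer’s risk assessment.

```python
from dataclasses import dataclass

@dataclass
class PilotResult:
    """Outcomes collected during a limited pilot deployment."""
    total: int                            # predictions made during the pilot
    errors: int                           # predictions judged incorrect on review
    group_error_rates: dict[str, float]   # observed error rate per monitored group

def approve_rollout(result: PilotResult,
                    max_error_rate: float = 0.05,
                    max_group_gap: float = 0.02) -> bool:
    """Gate a full-scale rollout on pilot performance.

    Thresholds here are placeholders; agreed acceptance criteria
    would normally come from the contract and risk assessment.
    """
    # Criterion 1: overall error rate within tolerance.
    if result.errors / result.total > max_error_rate:
        return False
    # Criterion 2: a crude bias proxy -- the gap between the best-
    # and worst-served groups must stay within an agreed tolerance.
    rates = result.group_error_rates.values()
    if max(rates) - min(rates) > max_group_gap:
        return False
    return True

# Example: results from a pilot in a single business unit.
pilot = PilotResult(total=10_000, errors=320,
                    group_error_rates={"group_a": 0.031, "group_b": 0.034})
print("Approve full rollout:", approve_rollout(pilot))   # True
```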

While the contract serves as a control measure, it cannot supplant effective and ongoing testing and monitoring throughout the AI system’s lifecycle. Industry standards in this domain are evolving rapidly, with both customers and suppliers sharing responsibility for ensuring that AI models operate as intended.

Data and Assets

Effective AI utilization often hinges on a robust data strategy that safeguards the customer’s critical data assets. Customers need to identify the types of business and personal data they own or license from third parties in order to determine which data the supplier should access and on what terms. From a contractual perspective, any restrictions, whether imposed by third parties or otherwise, must be reflected in the provisions governing the supplier’s use of data.

Ownership and control of data are also areas of concern for both customers and suppliers, with suppliers increasingly subject to restrictions on how they may use customer data. While enabling suppliers to derive insights and improve their systems can be beneficial, particularly where data is anonymized or aggregated, such practices carry implications for IP ownership and data protection provisions and require careful consideration.
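
As a rough illustration of the “aggregated” end of that spectrum, the sketch below releases only per-group averages and suppresses groups that are too small to mask individual records. The record fields, grouping key, and minimum group size are assumptions for illustration, and simple aggregation of this kind does not by itself guarantee anonymity under data protection law.

```python
from collections import defaultdict

# Illustrative customer records; the field names are hypothetical.
records = [
    {"customer_id": "c-101", "region": "EMEA", "tickets_resolved": 14},
    {"customer_id": "c-102", "region": "EMEA", "tickets_resolved": 9},
    {"customer_id": "c-103", "region": "APAC", "tickets_resolved": 11},
]

MIN_GROUP_SIZE = 2  # suppress groups too small to mask individual records

def aggregate_for_supplier(rows, group_key, value_key):
    """Drop direct identifiers and share only per-group averages,
    suppressing any group below the agreed minimum size."""
    groups = defaultdict(list)
    for row in rows:
        groups[row[group_key]].append(row[value_key])
    return {
        key: sum(values) / len(values)
        for key, values in groups.items()
        if len(values) >= MIN_GROUP_SIZE
    }

print(aggregate_for_supplier(records, "region", "tickets_resolved"))
# {'EMEA': 11.5} -- the single APAC record is suppressed entirely
```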

Liability

In AI system contracts, liability for generated output is a significant concern for both customers and suppliers. However, liability clauses alone do not proactively manage operational risk. The potential magnitude of AI system failures underscores the importance of implementing additional contractual controls that facilitate operational risk management. Circuit breakers capable of halting AI system usage in the event of errors or biases, coupled with the ability to revert to a previous system version, can serve as invaluable tools in this regard.
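
A minimal sketch of how such a circuit breaker might work in operation, assuming the AI system and its predecessor are exposed as interchangeable callables and that downstream checks feed pass/fail outcomes back in; the window size, failure threshold, and interfaces are illustrative assumptions rather than a prescribed design.

```python
class ModelCircuitBreaker:
    """Halt use of the current AI system when recent failures exceed a
    threshold, reverting to a previous known-good version while tripped."""

    def __init__(self, current_model, previous_model,
                 window: int = 100, max_failures: int = 5):
        self.current_model = current_model
        self.previous_model = previous_model
        self.window = window                # size of the rolling outcome window
        self.max_failures = max_failures    # failures tolerated within the window
        self.recent = []                    # rolling record of pass/fail outcomes
        self.tripped = False

    def record_outcome(self, ok: bool) -> None:
        """Feed in the results of downstream checks: detected errors,
        bias flags, complaints, or failed spot checks."""
        self.recent = (self.recent + [ok])[-self.window:]
        if self.recent.count(False) >= self.max_failures:
            self.tripped = True             # stop using the current version

    def predict(self, x):
        # While tripped, all traffic is served by the prior version.
        model = self.previous_model if self.tripped else self.current_model
        return model(x)

# Example with stand-in model versions.
breaker = ModelCircuitBreaker(current_model=lambda x: f"v2:{x}",
                              previous_model=lambda x: f"v1:{x}",
                              window=10, max_failures=3)
for outcome in [True, False, False, False]:   # three failures trip the breaker
    breaker.record_outcome(outcome)
print(breaker.predict("claim-123"))           # v1:claim-123
```

Contractually, the mirror of this mechanism is a right to suspend use of the system and revert to a prior version on notice; the sketch simply shows the operational half of that control.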

Looking Ahead

As businesses navigate the complexities of AI integration, the considerations involved clearly extend beyond technology adoption alone. With AI regulation still evolving, and innovation to be balanced against risk mitigation, both customers and suppliers must remain vigilant and adaptable in their contractual engagements.

Adapting to Regulatory Shifts

While the EU AI Act serves as a foundational framework, stakeholders must remain attuned to regulatory developments in other jurisdictions. As global regulatory landscapes evolve, contracting strategies will undergo further refinement, requiring a dynamic approach to compliance and risk management.

Collaborative Responsibility

Responsibility for AI deployment does not rest with any single party. Customers and suppliers share responsibility for ensuring the ethical and effective use of AI systems, and by fostering collaboration and transparency they can navigate the complexities of AI integration while safeguarding against potential risks.

The Role of Continuous Improvement

In an era of rapid technological advancement, effective AI utilization is an ongoing process rather than a one-off implementation. Continuous improvement, encompassing rigorous testing, monitoring, and adaptation, is essential to ensure that AI systems evolve in step with organizational needs and regulatory requirements.

Conclusion

The adoption of AI presents immense opportunities for businesses to innovate, optimize processes, and drive competitive advantage. However, realizing the full potential of AI necessitates a strategic approach to risk management and regulatory compliance. By proactively addressing these considerations within contractual engagements, businesses can harness the transformative power of AI while safeguarding against potential pitfalls.

Written by Anita Basi of Pinsent Masons. Pinsent Masons is hosting a webinar on managing risk and operating effective technology transformation programs on Wednesday, May 15th. The event is free to attend—registration is open.
