AI chatbots: will they be deemed high-risk in Europe?

As some of you may have heard, there has reportedly been a new proposal to classify chatbots and other generative AI tools based on large language models (such as ChatGPT) as high-risk AI systems under the forthcoming EU AI Act.

While this newsletter has previously covered ChatGPT and similar chatbots, that coverage occurred before the above news surfaced.

Therefore, an update is in order.

What does the proposal say exactly?

We don’t know, as it has not been published. We only have some reporting.

Of course, without access to the wording of the proposal, we might be missing some important nuance. But the reporting at least suggests that AI systems that generate text are not proposed to be deemed high-risk unless they operate in a deceptive way (by not disclosing that the text is computer-generated, in contexts where this is not evident).

This doesn’t mean, however, that text-generating AI developers have nothing to worry about except disclosures.

Proposal by the Council of the European Union

Previously, I’ve explained that the Council’s 6 December 2022 proposal envisages special rules for so-called general purpose AI systems. As per recital 12c, these should be the “systems that are intended by the provider to perform generally applicable functions, such as image/speech recognition, and in a plurality of contexts”.

Let’s take OpenAI’s ChatGPT as an example. It is intended to generate text and is not, as of yet, constrained by the developer to any particular use case. As such, it is clearly a general purpose AI system.

The Council’s December version of the EU AI Act envisages that general purpose systems should be subject to the rules applicable to high-risk systems (with adaptations still to be proposed by the European Commission by means of implementing acts), unless their outputs are “purely accessory”, as the proposed article 6(3) puts it.

The “purely accessory” bit

As it follows from recital 32, the AI system’s output cannot be considered “purely accessory” if it has a high importance for the actions or decisions which may cause harm to the health and safety or the fundamental rights of persons. This likely means that when AI systems take such decisions or actions by themselves (automatically) or when they significantly influence human decision-making, the system’s outputs should not be considered “purely accessory”.

At the other end of the spectrum are the outputs of AI systems which do not have such an effect. This likely means that when an AI system does not automatically cause decisions or other actions and instead leaves plenty of room for a human decision maker to decide whether and how to modify, contextualise and use the system’s output, this output is “purely accessory”.

The same recital 32 gives an uncontroversial example of the latter case: AI systems “used for translation for informative purposes”. In this example, the user is clearly free to verify and use the output or decide not to use it at all. The system does not cause real-world effects by itself in this case; only its user can do so.

But the same is true for ChatGPT or any other similarly designed chatbot.

When a user interacts with it, its outputs are not immediately published on the web or sent elsewhere. It is up to the user either to (1) leave the suggested output in the chat box, unused, or (2) verify it and, after that, possibly act on it or otherwise use it in the real world outside the chat environment.

What if the use is high-risk?

Surely, the user of ChatGPT or a similar general purpose AI system may put it to high-risk use. For example, a business user may decide to plug such an AI system into its applicant tracking platform to review employment applications (one of the high-risk uses under Annex III to the proposed Act).

But in this case, as provided by article 23(1)(e) of the proposed Act, it will be this business user who will be subject to all obligations of a high-risk AI system provider.

It doesn’t mean, though, that a provider of a general purpose AI system is immune from attracting legal obligations associated with high-risk uses.

A chatbot environment like the one offered by ChatGPT is one thing. But as soon as the provider offers integrations with clients’ systems, e.g., through an application programming interface (API), the provider can no longer rely on the presumption that the outputs of its general purpose AI system will be treated as “purely accessory”. The relevant exemption in article 6(3) is unlikely to apply.

Are general purpose AI providers offering an API doomed to fall under the high-risk rules?

Let’s follow up on the above example with applicant tracking systems.

If the general purpose AI system provider specifically advertises the benefits of integrating clients’ systems with its own for candidate screening purposes, such a provider is, of course, directly inviting reclassification of its system as high-risk.

Further, even if the provider does not do such advertising, but becomes aware that such use is actual market practice and does nothing to prevent it, the provider is again clearly asking for its system to be put into the high-risk category. This follows from the proposed article 4c of the Act.

Under that article, the only way to avoid being thrown into the high-risk basket is to explicitly exclude all high-risk uses (such as candidate screening) in all client communications and manuals, and, on top of that, to take active and effective steps to prevent such uses by legal means and, to the extent feasible, by technological measures and design choices.

In terms of legal means, it would help to devise a code of conduct applicable to all client integrations. The code should explicitly exclude all high-risk uses and provide examples of best practices and things to avoid. And of course, such a code should be referred to and made part of any agreements with clients who are integrating their systems with the provider’s general purpose AI system.
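To make the “technological measures and design choices” side of this more concrete, here is a minimal, purely hypothetical sketch of a provider-side check that refuses API requests matching excluded high-risk uses. Every name in it (the functions, the keyword-based classifier, the use-case categories) is an illustrative assumption for this newsletter, not a description of any real provider’s API or of what the Act requires.

```python
# Hypothetical sketch: a provider-side gate that screens incoming API requests
# against use cases the provider has contractually excluded (e.g. candidate
# screening). Names, categories and the toy classifier are illustrative only.

EXCLUDED_HIGH_RISK_USES = {
    "candidate_screening",  # e.g. employment / worker management uses (Annex III)
    "credit_scoring",       # e.g. access to essential private services (Annex III)
}

def classify_use_case(declared_purpose: str, prompt_text: str) -> str:
    """Toy classifier: in practice this could rely on declared purpose fields,
    integration metadata and/or a dedicated content classifier."""
    text = f"{declared_purpose} {prompt_text}".lower()
    if any(kw in text for kw in ("cv", "resume", "applicant", "shortlist")):
        return "candidate_screening"
    return "general"

def handle_request(declared_purpose: str, prompt_text: str) -> dict:
    """Refuse requests that match an excluded high-risk use; otherwise pass
    them on to the (hypothetical) text-generation backend."""
    use_case = classify_use_case(declared_purpose, prompt_text)
    if use_case in EXCLUDED_HIGH_RISK_USES:
        return {
            "status": "refused",
            "reason": f"Use case '{use_case}' is excluded under the provider's code of conduct.",
        }
    return {"status": "accepted", "use_case": use_case}

if __name__ == "__main__":
    print(handle_request("HR automation", "Shortlist these applicant CVs for the role"))
    print(handle_request("marketing", "Draft a newsletter about our new product"))
```

A simple filter like this will never catch every excluded use, which is precisely why the proposed article 4c pairs technological measures with legal ones: the code of conduct and contractual clauses carry the weight where design choices cannot.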

To sum up

While the EU AI Act is not yet finalised, the developers of text-generating and other general purpose AI systems should not wait until the last moment.

If you’re developing such a system, the best approach is to make sure that, already today, you’re developing and deploying AI responsibly. To that end, you have to either prepare to embrace the full scope of obligations for the providers of high-risk AI systems or take active and effective steps to exclude all high-risk uses.

If you choose the latter, the best way to do this is to specifically target and promote a narrower range of uncontroversial lower risk uses of your AI system and take proactive, effective steps to prevent misuse.

Such steps should include feasible technological measures as well as legal precautions, such as establishing a client-facing code of conduct for API integrations and contractual provisions requiring clients to follow it.

Should you need further practical advice on this topic and assistance with drafting or review of necessary documents, don’t hesitate to contact me.
