Plato, Love, and the Philosophical Problem of Europe’s AI Act
By: Fredrik Erixon
Subjects: Digital Economy, European Union
Plato’s Symposium is a good place to start a discussion about the EU’s forthcoming Artificial Intelligence Act – or AI Act. Before you choke on your coffee, let me explain why the ancient Greek philosopher is relevant to the discussion.
The theme of the symposium – or the dinner-party discussion, hosted by Agathon, that Plato recounts – is love. It is Eryximachus, the Athenian physician, who proposes the theme, and he wants all the guests “to make the best speech he can in the praise of love”. But what is love? Does it come from instincts and strong physical desires like erotic lust? Is it about sex – or, as many at the time would have it, only certain forms of sex? Or should we start from the bodily motions of love and let it be defined by its material manifestations? If love is rather immaterial, is it about our individual soul – making love so idiosyncratic that it is almost impossible to agree on a common standard? Or is it a divine aspiration (to Eros? Aphrodite? Dionysus?), perhaps making love – to coin a phrase – “Platonic”?
A similar metaphysical disputation is necessary for artificial intelligence. In the first place, what is intelligence? If we are now concerned about AI, what is the distinguishing essence that makes it artificial and different from other forms of intelligence? Are our concerns, as in previous waves of technological change, about “machines” – or are they about man? If the concerns are specific to the “machines”, what are they about?
As observers of the EU process to come up with a regulation of AI will know, it has been a challenge to establish a common and agreed definition of AI that is understandable, meaningful and constant. The EU’s High Level Expert Group on AI worked with one concept in a 2019 paper, which outlined the case for an AI regulation. The European Commission offered another definition in its ethical guidelines on AI the same year, then reframed it in a 2020 White Paper, and then changed it again for its regulatory proposal – the AI Act. The Council and the Parliament have been exploring a variety of concepts – sometimes drawing on work by the Commission, sometimes using definitions by the OECD and other bodies and groups, sometimes developing their own versions.
It is Phaedrus who begins the praise of love in the Symposium. Love, he says, has no parents. It is the oldest of the gods, born right after Chaos, and therefore forms the foundation of a noble life. In Plato’s telling, love for Phaedrus is the most powerful assistance “in the acquisition of merit and happiness”. Yet it is still a metaphysical concept, symbolically related to the sun and the moon, concerned with our being – not primarily the ethical consequences of our actions.
Pausanias takes issue with that view. Inspired by Aphrodite rather than Eros, he argues that love is not a single subject and cannot be reduced to mere practical actions and results. Nor can it be just metaphysics, because love has a practical side too. Just as there are two Aphrodites – the heavenly Aphrodite and the common Aphrodite – there must be two loves as well: heavenly love and common love. The former is purely male and free from wantonness; the latter is about the baser side of the human species. Pausanias explains:
“The truth of the matter I believe to be this. There is, as I stated first, no absolute right and wrong in love, but everything depends upon the circumstances; to yield to a bad man in a bad way is wrong, but to yield to a worthy man in a right way is right. The bad man is the common or vulgar lover, who is in love with the body rather than the soul; he is not constant because what he loves is not constant; as soon as the flower of physical beauty, which is what he loves, begins to fade, he is gone ‘even as a dream’, and all his professions and promises are as nothing. But the lover of a noble nature remains its lover for life, because the thing to which he cleaves is constant.”
If permanency and constancy are important for Pausanias, they are disputed – at least in part – by Eryximachus. For him, love extends beyond men’s souls and can be found in nature – “the bodies of all animals, for example, and plants which grow in the earth.” In other words, love is more material and corporeal, and, as a physician, he builds his view of love on medicine and biology. Pausanias may be right about heavenly and common loves, Eryximachus says, but the battle between them is fought in the body, and the art of medicine is to bring them together in harmony.
The dialogue continues. Aristophanes, the entertainer, builds on the myth of man’s origins and sees love as something innate – a need, even. He categorises the sexes and the corresponding features of love. Agathon is pompous and vacuous, and practises “the glib and oily art”, to quote from Shakespeare’s account of love in King Lear, of speaking without purpose and intent. Obviously, it then falls on Socrates to put everyone right. Citing a conversation with Diotima, the Athenian sage revalues inner beauty and concludes with a metaphysical construct of love whose essence lies in goodness. It is that goodness, rooted in truth, that we should honour.
I know I have tested your patience, but I believe Plato’s account of this dialogue speaks to the situation we find ourselves in right now in pursuing new regulations of AI. Absent an essential definition and a clear “Platonic form”, regulators have developed definitions that make the subject of regulation unclear and the pursuit of a regulation a moving target. There is no absolute right and wrong in the various definitions of AI, but when a regulator is not clear about what it is that should be regulated, everyone gets a bit hazy. In this case, the regulator employs concepts that are at best vague proxies of the real thing, and it allows itself to be guided by multiple intentions and observations that don’t always sit comfortably together. This is the philosophical problem with the AI Act. It is about metaphysics.
Let’s become more practical. Since the EU (and others) isn’t clear about what AI is, or about which aspects of it are concerning, it weaves a regulation that is complex and that applies different standards to the same action. The AI Act is a combination of different types of regulations. First, it works with models from EU product safety regulation, partly because this is also a good way to anchor an AI regulation in areas of clear legal competence for the EU. Thus, the AI Act will seek to ensure that products placed on the EU market, containing or employing AI, conform to EU rules. But the AI Act also builds on the fundamental rights doctrine. It isn’t always clear what the application of this rights approach means in a practical environment, but EU legislators have expanded on the concepts that were already part of the Commission’s proposal – like obligations on various stakeholders (e.g. various product and post-market monitoring obligations) that go beyond the conformity process and concern rights. There is also a third regulatory culture in the AI Act: the precautionary principle. In total, we have a package of general, pre-market, product, and post-market rules that is extraordinarily complex and not easy to understand in practical circumstances.
One can discuss each of the regulatory cultures in the AI Act and whether they are fit for purpose. However, it is probably the case that we cannot say at this point which of the approaches – if any – would best achieve the objectives. I say probably because the objectives are vague too, and this is an emerging and evolving field of technology, with huge variety in the applications. Moreover, the EU hasn’t bothered itself too much with trying to find an answer. The High Level Expert Group that was supposed to inspire the approach to an AI regulation proposed a different approach than the one the Commission ended up with. Furthermore, the impact assessment that accompanied the proposal is, ahem, light on analysis.
It is clearer that the combination of several types of regulations is causing problems. In the first place, regulators and legislators have been adding and withdrawing different practices, data models, and products, either to make sure that something specific is covered – or that it isn’t. One debate right now, for example, is about foundation models and whether they should be included or not. But the adding and withdrawing of specific provisions is a sign that the effects of the regulation are poorly understood.
Overlapping regulatory cultures will also cause problems in the future because the same aspect or action can be covered by different regulatory regimes; the product safety approach, backed up by technical standardisation, will bump into fundamental rights. Furthermore, there is a value chain to AI that includes parts that are already the subject of other regulations. Data, for instance, is subject to a variety of regulations, and the stock of data-specific regulations with consequences for AI development will expand significantly once the Data Act comes into force. The actual data holder or processor will also be covered by new business-model regulations – like the Digital Markets Act – that can seriously reduce the use of data for AI.
The philosophical problem of the AI Act will come back to haunt Europe. Whatever the outcome of the negotiations taking place now, the structure of the regulation is messy, and the regulator is not clear about exactly what it is it wants to regulate. As a result, we have a messy body of rules and obligations in the Act that will most likely lead to confusion about their meaning and applicability, and to recurring concerns that the EU, yet again, has opted for more rather than less regulation, and that it has prioritised speed over quality.
If we were agnostic as to whether AI is an input to the production of a good or service, we wouldn’t need to define AI at all. Indeed, this is how much existing regulation works: it applies whether AI is an input or not. Singling out ‘artificial’ intelligence is in any case discriminatory – not in the sense that AI cares (yet?), but in the sense that it will mean foregoing safer, less biased and lower-cost ways of doing things. If we really want to regulate intelligence, we should regulate ours too – and good luck with transparency.