Regulating Artificial Intelligence: Should there be innovation without permission?
By: Guest Author
Subjects: Digital Economy
Brian Williamson is a Partner at the consultancy Communications Chambers. The views in this blog are his own.
The European Commission, along with others, is considering the role of ethics in the governance of Artificial Intelligence (AI). This invited blog considers ethics for AI in the broader context of markets and public policy intervention more generally, and draws on a paper I published in December 2018, “In search of the ‘Good AI Society’”.
Is artificial intelligence too general for regulation?
In considering ethics for AI, we should reflect on whether AI is a sufficiently distinct and homogeneous category, in terms of potential policy issues, to justify the application of a distinct form of governance. It is not obvious that it is. First, AI is a general-purpose technology – like steam, electricity or computing – with a wide range of applications, many of which are likely to be benign, whilst those that are not may already be covered by existing rules. Second, if the proposed ethical approach is sound, it should arguably apply more broadly to pattern recognition, prediction and decisions taken by non-artificial intelligences – namely, by human beings. Yet that is not proposed.
AI does not therefore look promising as a distinct category to which a new and specific policy approach should apply. The debate around ethics and AI does, however, raise questions about who decides what, on the basis of what information, and within what institutional setting, in relation to the development and application of technology.
No innovation without permission
Existing institutions rely on three broad mechanisms. First, markets operating within a framework of legal infrastructure. Markets decentralise decision making and rely on individuals’ knowledge, entrepreneurial ‘bets’ and preferences. There is an ethical underpinning to this: provided a transaction is valued by the participants, and provided externalities are addressed (one reason for legal infrastructure), value-adding transactions can proceed without permission. Second, governments make allocative decisions, including income transfers and the provision of services such as education and health. Third, governments may make regulations or impose taxes or subsidies to address specific problems, and may delegate detailed decision making within a narrow remit to an independent regulatory authority.
These mechanisms are accountable to individual preferences directly (the market) or indirectly in democracies (via the right of citizens to choose their representatives). Both suppliers and political representatives are subject to competition. In contrast, proposals for ethics for AI, if they are to have bite, would place considerable power in the hands of experts across broad swathes of activity.
Enterprises are, of course, free to adopt their own ethics or values as part of their internal culture and/or to enhance their brand, whilst governments will continue to evolve the checks and balances on their own service provision – whatever intelligence is involved in that provision. However, this differs from ethics determined centrally and applied to all applications of AI.
If a set of ethical principles is applied to AI ex ante, this could mean an end to innovation without permission, which would substantially slow innovation and productivity growth, whilst limiting the scope for individual agency on the part of consumers and entrepreneurs. This would be illiberal, and arguably unethical. Specific requirements that have been suggested, such as explainability, may also curtail potential benefits, including lifesaving applications of AI. Again, this hardly seems ethical.
A world without ethical oversight is not unethical
In assessing proposals for ethics for AI, the counterfactual is not a state of the world absent ethics, but the ethical underpinnings of a market economy subject to interventions to address the moral limits of markets (including externalities, market power and distributional concerns).
A re-examination of existing rules should arguably go beyond adapting them to AI (and other new technologies and business models) and include: examination of whether existing rules represent a barrier to potentially beneficial AI; consideration of whether existing rules are applicable or should differ; and consideration of whether the underlying mechanisms for creating legal infrastructure can keep pace with technological and market change.
If a new and different ethical standard is proposed, it needs to be tested against the existing standard, not just in terms of the attractiveness of the ethical principle itself but also in terms of the anticipated consequences of its implementation in practice. In considering these questions, we should take into account the scope for innovation without permission, which has underpinned economic progress over the past few centuries, and the implications for the concentration of power, not just in markets but also in institutions and groups of experts.
We should also anticipate that AI will likely prove superior in some applications, not just in terms of efficiency but also when judged against criteria such as unbiasedness and safety. The relevant question is then likely to shift from ‘should there be a human in the loop?’ to ‘should we prevent humans from undertaking or intervening in an activity?’ Debate over this question is likely to prove predominantly political rather than ethical.
Change is likely required, but ethics for AI may not be the change we are looking for.