
Crypto and AI: the future of the lawyer’s role (part 2)

One of these scenarios is that human beings might find themselves in a different role and position than the ones we are used to today.

So if, for argument’s sake, a machine could be built that gives an exact answer to a legal question, and thus a virtually certain prediction of the possible outcome of a dispute, the role of the lawyer might theoretically move into an area other than working out that answer. It could become, perhaps, that of knowing how to pose the right question to the machine that will then provide the answer. The lawyer would thus make sure that the machine is given all the most appropriate elements and parameters to generate the expected answer.

Or he might move into the area of “training” the legal machine, providing it, or seeing to it that it is provided, with all the legal data and information it needs to make its evaluations.

And since this machine, under this hypothesis, will be able to render with virtually infallible exactness a verdict that we assume is “fair,” the role of the judge could perhaps become that of ensuring that the parties do not cheat when supplying the machine with the elements needed to render the verdict, and that the criteria of judgment entered into and applied by the machine meet standards of fairness, reasonableness, proportionality, non-discrimination, and so on.

All of this, by the way, seems to be in line with the famous five principles set by the CEPEJ – European Commission for the Efficiency of Justice (the Council of Europe body, representing its 47 member countries, whose purpose is to test and monitor the efficiency and functioning of European justice systems) in its Ethical Charter on the Use of Artificial Intelligence in Judicial Systems: (i) Principle of respect for fundamental rights; (ii) Principle of non-discrimination; (iii) Principle of quality and security; (iv) Principle of transparency, impartiality and fairness; (v) Principle of user control.

Now, even accepting the idea that, in a future in which AI finds massive use in the legal field, the role of humans may shift to supervision only, there are other considerations to be made. Mainly because, when we imagine a justice system administered with these seemingly neutral and infallible tools, we picture an apparatus that merely enforces laws and rules: a mere executor of precepts.

This representation of justice, however, does not exist in practical reality, because, in defiance of any statement of principle and of the separation of powers, those who render a verdict do in fact, to some extent, contribute to the production of law and alter its fabric. That is, the judicial function often concurs in the creation and consolidation of rules.

Of course, this extent varies across legislative and constitutional systems. It is certainly greater in common law countries, where law is formed through precedent-setting decisions.

However, this is also true in countries with codified law, such as Italy, France or Germany. In these systems, the interpretation given through judicial decisions sometimes forces or even bends the formal law, complements it where it finds gaps and deficiencies, or sets it aside altogether when it conflicts with higher-ranking principles.

That is, the judicial function, whether directly or indirectly, often ends up encroaching on the field of the regulatory function, and this can happen at different levels.

Note: this is not to rule out the possibility that, in the abstract, a machine called upon to produce rules might do so even better than man, if only because history is full of bad human lawmakers. To take an extreme example, consider the horrific experience of the Holocaust and of ethnic cleansing: horrors that were legally supported by legislative systems based on macroscopically inhumane principles, yet created and imposed by human beings themselves.

The encounter between normative production and artificial intelligence

The crucial point is another: are we really sure we want to give machines access to the process of normative production? And to what extent? We must also keep in mind that this entry can take place in a “creeping” way, through the half-open doorway of the judicial function.

The idea that the functions exercised by machines can remain relegated to a merely executive, or at most auxiliary, role with respect to the work and will of man, by virtue of ethical and formal bars imposed by man himself (e.g., Asimov’s laws of robotics or, indeed, the principles elaborated in the European context on the use of AI in judicial systems), can be reassuring.

In this case, these are rules dictated directly by Man to Machine, and they respond, in a broad sense, to Man’s own existential vocation. That is, they are all in some way conservative, functional to the development and preservation of humankind’s existence.

And it is here that the somewhat philosophical dilemma, if you will, is triggered: if we were ever to allow a non-human entity to enter fully into the process of normative formation, given that such an entity, precisely as an entity, is immanently endowed with its own existential vocation, what would prevent it from writing rules that do not respond to man’s existential vocation?

To take an extreme example: faced with the global problem of overpopulation and the scarcity of food and energy resources, we humans, leaving aside certain pathological ideological drifts, would on ethical grounds repudiate any solution that postulates the mass extermination or murder of human beings.

The same problem, seen through the eyes of a non-human entity that might not recognize the same ethical principles, could lead to mass extermination as the most reasonable solution on a strictly and coldly logical level, perhaps on the basis of selective criteria aimed at eliminating the weakest subjects (the very ones that human ethics dictates should be protected as a priority).

Massimo Chiriatti, among the leading experts on artificial intelligence in Italy, has clarified in many of his writings his views on the limits of artificial intelligence and on the supervisory role that humans must rigorously maintain in the use of these technologies. In his “Artificial Unconsciousness” he states:

“There is a very important point to consider: every AI prediction is a quantitative assessment, never a qualitative one, whereas for us humans a choice is almost never a simple calculation. We make decisions based on immeasurable and therefore incomputable values. We are the teachers of the machines. We are implicitly so when they assimilate the data we create, when they build the model and give us the answers. 

We are explicitly so when we give them instructions on how to do a job. For these reasons we must pay attention to how they learn, because in doing so they will evolve.”

Beyond the extreme example just given, while it is vain and illusory to oppose the development of technology, this kind of process must be governed with the utmost awareness.

Today we are discussing the impact of artificial intelligence on the legal professions, which involve situations and values of extreme delicacy, as well as peculiarities tied to intellectual sophistication, creativity and all those components that we like to trace back to the intangible essence of man.

The same issue, however, is bound to have a large-scale impact on the hundreds of jobs that machines will very soon be able to perform as well as or better than humans, at infinitely lower cost.

Should we feel threatened by crypto and artificial intelligence (AI)?

The massive proportions of the issue should lead us to reflect on the fallout that will hit the real world and our ability to read it, as the social and political view of the world of work and the economy will be revolutionized.

While it is legitimate to ask a number of questions about the world of the legal professions, we must consider that similar questions will have to be asked about much of the world of work.

For us, the most immediate ones are: “What will happen to the humans, judges and lawyers, who today perform the roles and functions that tomorrow might be performed by machines? How will they earn a living?”

But on the level of collective interest, there are many more: “Who will pay the social security contributions, and who will provide the community with the tax revenue generated by the incomes of all the human workers replaced by machines?” And again: “What will happen to all those figures who contribute to the work of these operators (assistants, collaborators, practitioners, etc.), and what will happen when their contributions and tax revenues are also lost?”

Well, these questions also arise for all the other job categories that may be hit by the robotic and digital revolution, in an even shorter time frame than the one likely to affect legal workers.

Scenarios arise that could render the sociological, economic, anthropological, and political views known today outdated: socialism, liberalism, libertarianism, sovereignism, and so on, would lose their conceptual foundations.

Much, if not everything, would have to be rethought from scratch.

But returning to the topic of AI in the legal field, my personal view is that the role of the lawyer (by vocation an interpreter not only of norms, but also of facts and, to some extent, of human beings) cannot be limited to migrating to a different region of the legal services production cycle.

My idea is that the lawyer, and legal practitioners more generally, could be given a higher role: to see to it that technological development is governed with awareness, that it always remains proportionate to the real purposes of human welfare, properly channeled and, if necessary, also consciously and reasonably curbed.

There is a famous Chinese saying, “when the wind of change blows, some put up barriers, others build windmills.”

Now, although I like to think I can count myself among those who, “when the wind of change blows,” enthusiastically set about building windmills, I would not want to reach a point where windmills no longer need humans to exist, because their existence serves only the need for other windmills.

And if it came to that, would man need such windmills?

Now, the lawyer is by definition one who is called upon (advocatus) to defend and plead a cause. Here is his cause: he will have to see to it that humans keep a firm grip on the rules and that machines remain anchored to the role for which they were created: to work in the service of humanity.

And when necessary he will have to stand up and fight, so that this is how it is and how it will remain.

To fight for the good of humanity. Like Mazinga Zeta, in the famous Japanese cartoon, for those who remember it.

Sounds good, but Mazinga Zeta, wasn’t he also a robot?

Luciano Quarta - The Crypto Lawyer
Luciano Quarta, tax lawyer in Milan, managing partner and founder of the tax law firm QRM&P, has published extensively on the legal and tax aspects of legal tech, artificial intelligence and cryptocurrencies. A speaker at numerous conferences on the subject, he writes the column "Tax & the city" for the daily newspaper "La Verità" and regularly writes for the Economy and Taxes section of "Panorama". He is a member of the Tax Justice Commission of the Milan Bar Association and is the contact person of the Milan office of the interdisciplinary association for the study and application of artificial intelligence GP4AI (Global Professionals for Artificial Intelligence).