Machine learning is a programming technique that uses statistical probability to give computers the ability to learn on their own, without explicit programming. It is used in many areas of daily life: recommendation systems for e-commerce platforms, digital assistants, decision-support software, and so on. But the increasingly important role of these new technologies in society raises a question: how does the law take these new tools into account?
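The contrast with explicit programming can be made concrete with a minimal sketch (purely illustrative; the data and function name are invented for this example): instead of a programmer hard-coding a rule, the program estimates the rule from examples, here by ordinary least squares.

```python
# Minimal illustration of "learning": the rule (slope and intercept) is
# not written by a programmer but estimated from example data.
def fit_line(xs, ys):
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Ordinary least squares for a single variable.
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return slope, intercept

# Hypothetical training examples: the program infers the rule y = 2x.
slope, intercept = fit_line([1.0, 2.0, 3.0, 4.0], [2.0, 4.0, 6.0, 8.0])
print(slope, intercept)  # 2.0 0.0
```

Nothing in the code states the relationship between inputs and outputs; it is derived from the data, which is precisely what makes the resulting behavior hard to attribute to any single author.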
The first problem raised by machine learning is the protection of the algorithm. An algorithm can be defined as the study of problem-solving through the execution of sequences of elementary operations, following a defined process leading to a solution. Within the meaning of French law, it is a mathematical formula, which is excluded from patent law in accordance with article L611-10 of the Intellectual Property Code.
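The definition above, a sequence of elementary operations following a defined process leading to a solution, can be illustrated with a classic example, Euclid's algorithm for the greatest common divisor. It is exactly this kind of abstract computational procedure that falls outside patentability:

```python
def gcd(a: int, b: int) -> int:
    """Euclid's algorithm: a fixed sequence of elementary operations
    (comparison, remainder) repeated until a solution is reached."""
    while b != 0:
        a, b = b, a % b
    return a

print(gcd(252, 105))  # 21
```

The procedure itself is a mathematical method; only a particular original expression of it (the written program) could attract copyright, as discussed below.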
Yet protection can be envisaged under copyright. As soon as a person creates a work, they hold copyright in it, which confers specific rights: protection of the creator against the disclosure of the work, its appropriation, or counterfeiting. In French law, the principle is laid down in article L111-1 of the Intellectual Property Code: “The author of a work of the mind enjoys over this work, by the sole fact of his creation, an exclusive right of intangible property enforceable against all”.
But certain conditions must be met for this protection to apply. The first difficulty with copyright stems from its jurisprudential conception: both French and European case law make the acquisition of these rights subject to human intervention. Indeed, the recognition of copyright is subject to several conditions, one of which is the originality of the work. French case law defines originality as “the expression of the author’s personality”. This French case law has been superseded by European case law based on the notion of the “author’s own intellectual creation”: “an intellectual creation is an author’s own if it reflects the author’s personality” (CJEU, Painer, 1 December 2011, C-145/10, para. 88). In view of this interpretation, the author must individualize the project, for example by introducing personal comments during development.
Another mechanism could be considered to protect these creations, for example an application of article L113-5 of the French Intellectual Property Code, under which “A collective work is, unless proven otherwise, the property of the natural or legal person under whose name it is disclosed. This person is vested with the rights of the author”. The person in charge of the project would therefore hold the copyright.
As for Irish law, copyright is framed by section 17 of the Copyright and Related Rights Act 2000, which states: “(1) Copyright is a property right whereby, subject to this Act, the owner of the copyright in any work may undertake or authorise other persons in relation to that work to undertake certain acts in the State, being acts which are designated by this Act as acts restricted by copyright in a work of that description.
(2) Copyright subsists, in accordance with this Act, in— […]
(d) original databases.
(3) Copyright protection does not extend to the ideas and principles which underlie any element of a work, procedures, methods of operation or mathematical concepts and, in respect of original databases, does not extend to their contents and is without prejudice to any rights subsisting in those contents.” We can therefore see that the protection offered by Irish law is rather limited: it protects the structure of the database as soon as it is original (in accordance with European law). But to know what is really protected, it is necessary to define what a database is. Article 1 of Directive 96/9/EC defines a database as “a collection of independent works, data or other materials arranged in a systematic or methodical way and individually accessible by electronic or other means”. Databases are used, for example, by airlines to manage reservations or by hospitals for medical records.
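The directive’s definition (independent elements, systematic arrangement, individual accessibility) can be pictured with a toy reservations structure (the data is entirely invented). Note that copyright in an original database would cover this arrangement, not the records themselves:

```python
# Toy airline-reservations "database": each record is an independent
# element, arranged systematically (keyed by booking reference) and
# individually accessible. Illustrative data only.
reservations = {
    "AB123": {"passenger": "Doe", "flight": "EI101", "seat": "12A"},
    "CD456": {"passenger": "Roe", "flight": "EI202", "seat": "3C"},
}

# Individual access by electronic means: one element, retrieved directly.
print(reservations["AB123"]["seat"])  # 12A
```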
The second problem that arises is liability. If the machine’s decisions cause damage, who should be held responsible?
Under article 1240 of the French Civil Code, “Any act of man which causes damage to another obliges the person through whose fault it occurred to repair it”. This liability cannot be applied to the machine, which cannot be held responsible for its own fault since it is not a person. Belgian law has a similar conception of liability: article 1383 of the Belgian Civil Code establishes that “Everyone is liable for the damage he has caused not only by his act, but also by his negligence or by his recklessness”. Recognizing the legal personality of robots would entail far-reaching consequences: it would allow them to hold rights and obligations and therefore to conclude legal acts. Yet it seems unthinkable to grant machines the capacity to bind themselves, so they would have to be continuously represented. Moreover, since robots currently have no interests of their own, their interest cannot be distinguished from that of the owner or user. This leads to the question: should liability fall on the owner representing the robot, or on the robot itself, which would then be only a legal screen for the owner’s liability?
Ultimately, the only reason for recognizing such a personality would be compensation for the damage caused by the robot itself, but the difficulties of this recognition are considerable. Our current concept of legal personality would be strongly affected: decision-making autonomy would become a criterion for attributing personality, which would amount to calling into question the legal personality of people unable to express their will. In addition, to assess fault, we would rely on the standard of reasonable behavior: but how should the reasonable behavior of an algorithm be defined?
Since engaging the personal liability of the machine, which presupposes its legal personality, seems entirely unthinkable in French law, other hypotheses must be considered.
Control and direction of the behavior of machine learning
The second form of liability envisaged in French law is liability for the act of things, enshrined in article 1242 of the Civil Code: “One is liable not only for the damage caused by one’s own act, but also for that caused by the act of persons for whom one is responsible, or of things in one’s custody”. This liability rests on the concept of “custody”: whoever has custody of a thing is responsible for the damage it causes. For many authors, it is the regime best suited to new technologies, yet the notion of custody poses a problem: can the owner of machine learning technology be considered to have custody of it when his power of direction is limited by the automation of the algorithms? French law traditionally distinguishes legal custody, which follows from ownership, from material custody, which is the power to direct and control the thing at the time of the damage. Machine learning contradicts this notion because it rests on the autonomy of the thing and on limited control by its creator, and consequently on the absence of a person responsible for material custody. To get around this irresponsibility, some authors return to the distinction between custody of the structure and custody of the behavior (Court of Cassation, Civil Chamber 2, 5 January 1956, 56-02.126 and 56-02.138). The custodian of the behavior is the one who holds the thing to carry out its mission, and the custodian of the structure is the owner of the thing. Custody of the structure could lie with the manufacturer or the owner, depending on the situation. Custody of the behavior would engage the liability of the party whose faulty use caused the damage, either the owner or the user. To determine who has custody of the behavior, it will be necessary to determine who holds the powers of use, control and direction of the behavior of the machine learning technology.
Finally, it should be noted that if the exact cause of the damage cannot be determined, a presumption attributing the damage to the structure of the thing applies. Since the Court of Cassation refuses to apportion liability, a principle of alternative custody is applied. It would nevertheless be possible to envisage joint and several liability through a model of collective custody. This solution is dangerous, however, because it risks limiting the chances of owner/user victims being compensated for damage caused by unconnected objects.
The last possible regime is liability for defective products. It was established by the directive of 25 July 1985, so the legislation is harmonized across all member states of the European Union. In French law, it is provided for in article 1245 of the French Civil Code: “The producer is liable for damage caused by a defect in his product, whether or not he is bound by a contract with the victim”. The first problem is that of qualifying the product: the algorithm is not a product within the meaning of the directive of 25 July 1985. However, this regime could be applied to the other, hardware components of the machine. The second problem is identifying the person who will be held responsible, because in general a multitude of actors participate in the creation of the technology. The final question is that of the defective product itself, defined as one that “does not provide the safety that one might legitimately expect”. The defect could then lie in the machine’s program, or in its learning, when its use leads to damage. Detecting the defect would therefore be a means of identifying those responsible. However, with machine learning technology, establishing the link between the damage and a defect in the machine seems complex, because it would amount to applying this regime to intangible goods.
We see that today no regime is fully adapted to machine learning, and although existing regimes could be used to address the damage caused by our technological advances, they would need to be adapted.
In addition, new technologies are developing today and must be taken into account. The European legislator took up the issue of artificial intelligence in 2017. In a resolution of 16 February 2017, MEPs proposed “the creation, in the long term, of a specific legal personality for robots, so that at least the most sophisticated autonomous robots can be considered as responsible electronic persons, required to repair any damage caused to a third party; it would be possible to confer electronic personality on any robot that makes autonomous decisions or that interacts independently with third parties”. But this proposal was not accepted, and in a resolution of 20 January 2021 on artificial intelligence, the European Parliament reaffirmed the maintenance of human responsibility and “underlines that AI-based systems must allow the humans responsible for them to exercise real control, to assume full responsibility for these systems and to answer for all their uses”. The European Union has chosen a plan based on the risk posed by the technology in question, with four categories: unacceptable risk, high risk, limited risk and minimal risk. Would this conception not be limiting? Indeed, it would prevent bans in areas that do not fall under unacceptable risk: for example, predictive policing, which could use artificial intelligence for ethnic biometric categorization, constitutes a violation of human rights, yet it is not categorized as “unacceptable risk” in the European Union’s plan.
The challenge of this new regulation will be to take into account the question of consciousness within machines and the ethical limits that artificial intelligence often runs up against, because the humans who create these systems are themselves subject to cognitive biases, which legislation will need to help minimize.
It seems that today European legislation is moving towards liability of the person responsible for the artificial intelligence rather than of the artificial intelligence itself, without taking into account its degree of autonomy and independence. The person held responsible will be the one providing the service, not the developers, although the latter are required to go through a compliance process before placing the product on the market.
As European law begins to take into account the excesses of artificial intelligence, other regions of the world have already been confronted with the excesses of new technologies. In 2020, a court in eastern China ruled on the first Chinese court case involving the use of facial recognition, a technology that relies on a machine learning mechanism. The case concerned the security of data collected by a private body, the justification for their collection in relation to their structure, and the legal basis that could justify the change in the mode of collection (the switch from fingerprint to facial recognition). The plaintiff won his case on the basis of breach of contract, not of legal instruments regulating artificial intelligence, but the case illustrates how important this issue is in today’s society and leads us to question how to protect our data and our privacy in the face of these new technologies.
But then, as new technologies play an increasingly important role and give rise to litigation, the courts themselves will have to use new technologies and machine learning. The idea of predictive justice emerges, which consists of predicting, with as much certainty as possible, what a court’s decision will be in a given situation. It is already used in China due to the lack of lawyers: in some provinces, machines propose decisions to judges to compensate for their lack of impartiality.
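The mechanics of predictive justice can be sketched as a text-classification task. The following is a hypothetical toy example, not a description of any system actually in use, with an entirely invented corpus: past decisions labeled with their outcome train a model that scores a new case by its word overlap with each outcome's past decisions.

```python
from collections import Counter

# Toy corpus of past decisions (entirely invented), labeled with outcomes.
past_cases = [
    ("contract breached payment withheld", "plaintiff"),
    ("damages proven negligence established", "plaintiff"),
    ("no fault shown claim dismissed", "defendant"),
    ("insufficient evidence claim rejected", "defendant"),
]

def train(cases):
    """Count, for each outcome, how often each word appeared."""
    counts = {}
    for text, outcome in cases:
        counts.setdefault(outcome, Counter()).update(text.split())
    return counts

def predict(counts, text):
    # Score each outcome by how many of the new case's words appeared
    # in past decisions with that outcome (a crude similarity measure).
    words = text.split()
    return max(counts, key=lambda o: sum(counts[o][w] for w in words))

model = train(past_cases)
print(predict(model, "negligence caused damages"))  # plaintiff
```

Even this caricature makes the legal worry visible: the prediction merely reproduces patterns in past decisions, which is exactly the homogenization risk discussed below.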
The risk of this form of justice is the homogenization of court decisions: will judges be afraid to render a decision different from that rendered by the artificial intelligence? This concern must nevertheless be qualified, because the judge remains fully responsible for the decisions rendered. But this form of justice must be reconciled with the main principles of the rights of the defense: equality of arms and the principle of individualization of the sentence. Justice will therefore have to put in place guarantees to protect litigants against the abuses of its own system. Predictive justice can be a means of guiding professionals, but it cannot substitute for their skills.