A study by the Institute of Technology, which highlights the fact that self-driving cars are less able to detect dark-skinned pedestrians, has just proved it once again: artificial intelligence is not infallible and carries many biases. And if nothing changes in the way the mathematical models behind this technology are conceived, it is entire populations, women and ethnic minorities first among them, that could be harmed.
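To give a concrete, hypothetical sense of what such a bias looks like as a measurement, the short Python sketch below compares a detector's miss rate across two skin-tone groups using entirely invented numbers; it illustrates the kind of disparity the study describes, not the study itself.

```python
# Purely illustrative sketch: comparing a hypothetical pedestrian detector's
# miss rate (false-negative rate) across skin-tone groups. All numbers are
# invented for illustration; this does not reproduce the cited study.
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# Hypothetical ground truth: every sample contains a pedestrian,
# half labelled "light" and half "dark" skin tone.
group = np.array(["light"] * (n // 2) + ["dark"] * (n // 2))

# Assume (hypothetically) the detector misses 5% of one group and 12% of the other.
miss_prob = np.where(group == "light", 0.05, 0.12)
detected = (rng.random(n) >= miss_prob).astype(int)  # 1 = detected, 0 = missed

for g in ("light", "dark"):
    mask = group == g
    miss_rate = 1.0 - detected[mask].mean()
    print(f"{g:>5} miss rate: {miss_rate:.1%}")
```

On real data, a persistent gap of this kind between groups is precisely what researchers mean when they say a model is biased against a population.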
In any case, this is the view of many whistleblowers who come from, or work with, this technology. “There is an emergency. The algorithms that will guide 90% of our actions must not be written by men,” an account executive at the voice assistant startup Snips said at a meeting with journalists at the Hub. “For these algorithms to be fair and balanced, women must get involved,” adds a global key account specialist.
The proportion of women in tech training
According to the Gender Scan study by Global Contact, in 2017 the proportion of girls opting for high-tech training fell, or stagnated at very low levels: 13% in 2015 in engineering sciences in the final year, 8% in IT, and 7% in IT / data processing.
In their book “Artificial Intelligence, Not Without Them!”, two doctors of science defend the idea that one should not wait for company recruitment to change mentalities, but should act during the school career. “In computer science and mathematics schools, specific modules on ‘encoding equality’ would change the way the field is seen,” they say in the press.
This is also the opinion of a co-founder of the Women in Artificial Intelligence association, which encourages women to join the AI sector through multiple actions, including meetings in middle and high schools. “We have to go and find these young girls and give them confidence. At 15 or 16, we have not yet lost them. We tell them that to work in artificial intelligence you don’t necessarily have to know how to code, but you do have to understand what is happening. That is an engineer’s training.”
Things do seem to be starting to move on the schools’ side. To give a few non-exhaustive examples: the computer school 42 claims that nearly 30% of those registered in its February selections were women; Le Wagon reports cohorts made up of 30% women; and a new school of code and technological creation, Ada School, is due to open in September with the support of the incubator Station F.
Can Artificial Intelligence be ethical?
Artificial intelligence is revolutionizing our modes of action and could upset our already fragile ethical benchmarks. For example, the AI programmed into an autonomous car poses moral questions that even a human being cannot solve: in the event of my car failing, would I choose to kill two children or three elderly people, if both were crossing my path at the same time? It is this thorny question of ethics that the European Commission has tried to tackle with the drafting of its AI Ethics Guidelines, in other words an ethics guide for artificial intelligence. To do so, the Commission set up a group of 52 experts whose composition nevertheless seems unbalanced: alongside a large majority of industry players and industrial federations, we note the almost total absence of philosophers, ethicists, religious leaders, sociologists, anthropologists, and even health personnel.
Talking about ethics means talking about human beings, guaranteeing respect for their freedoms and protecting them from themselves if necessary. The industrial over-representation within the group of experts gives rise to the fear that the reflection will be guided by a cost-benefit analysis of new technologies on the economic, social, and environmental levels, leaving out a whole swath of human knowledge. Another problem is posed by the draft AI Ethics Guidelines report: the approach proposed by the group of experts is voluntary adherence to the ethics guide, which would thus be adopted industrialist by industrialist and developer by developer. Does this mean that we should count on a voluntary subscription from AI actors to see our rights respected, as defined in the Charter of Fundamental Rights of the European Union and the European Convention on Human Rights?
Ethical guidelines of artificial intelligence
On the contrary, these ethical guidelines for AI should be conceived as a charter, made visible by a label such as “Ethics Inside” or “Trustworthy AI”, to guarantee AI users that ethical rules are respected. In the field of health, we could consider writing a new technological Hippocratic oath. It should be noted that most of the AI developers and researchers we met called for this: it would give credibility to their profession and calm the fantasies and fears that occupy people’s minds when it comes to artificial intelligence.
The draft ethics guide of the European Commission sets as its objective “to maximize the benefits of AI while minimizing its risks”. A human-centered approach to AI is needed, forcing us to keep in mind that the development and use of artificial intelligence should not be seen as an end in itself, but as a means of increasing human well-being. The six ethical principles considered founding and presented by the group of AI experts include beneficence (doing good), non-maleficence (doing no harm), human autonomy, justice (that is, non-discrimination by AI), and explicability, intended to ensure autonomy, informed consent, and data protection. But then, without autonomy, would the user have no rights? Similarly, will transparency of algorithms, non-discrimination, and data protection be sufficient principles to guarantee respect for our freedoms? Probably not.
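As a hypothetical illustration of how the justice (non-discrimination) principle could be turned into something measurable, the sketch below compares an algorithm's rate of favourable decisions between two groups and applies the commonly cited 80% disparate-impact rule of thumb; the data and the threshold are assumptions chosen for illustration, not anything prescribed by the draft guidelines.

```python
# Illustrative sketch of a non-discrimination ("justice") check: compare an
# algorithm's positive-decision rate between two groups. Decisions are invented.
from collections import Counter

decisions = [  # (group, decision) pairs; 1 = favourable outcome
    ("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
    ("group_b", 0), ("group_b", 1), ("group_b", 0), ("group_b", 0),
]

totals, positives = Counter(), Counter()
for grp, outcome in decisions:
    totals[grp] += 1
    positives[grp] += outcome

rates = {grp: positives[grp] / totals[grp] for grp in totals}
print("positive-decision rates:", rates)

# Disparate-impact style ratio between the worst- and best-treated group;
# a common rule of thumb flags ratios below 0.8.
ratio = min(rates.values()) / max(rates.values())
print(f"ratio = {ratio:.2f}")
```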
The foundations of morality
If ethics, as defined, is “the part of philosophy that considers the foundations of morality, as well as all the moral principles that form the basis of someone’s conduct”, we deduce that practical ethics can only be a variation on defining the foundations of morality. Thus, applied to AI, we cannot avoid building a solid and applicable ethical framework for the development of algorithms. “Human beings cannot be considered only as consumers, or even as users of public and commercial services,” we heard from a member of the expert group during the preparatory work of the European Commission.
In the French context, the concept of autonomy is again very present in the Touraine report on the revision of the bioethics law, in the chapter on artificial intelligence. This time, autonomy applies not to the decision-making autonomy of machines and AI, but to human autonomy in the sense of self-determination.
Principles of artificial intelligence
The ethical vision of AI proposed in the Touraine report is a practical ethics based on the following principles: autonomy, informed consent, protection of privacy, and establishment of responsibility in the event of medical error. While the proposals made in this report are interesting, they are not anchored in the intangible and inalienable principles of human rights as defined, for example, in the Charter of Fundamental Rights of the European Union: dignity, freedoms, equality, solidarity, and citizens’ rights to justice. Is there therefore no founding and inalienable concept serving as a prerequisite for the French bioethics laws?
An example can be proposed in the field of neuroscience, which now combines intracerebral devices with sophisticated algorithms. The studies of the Swiss-based researchers Marcello Ienca and Roberto Andorno suggest that, in the era of neuroscience and neurotechnology, four new human rights will need to be created: the right to cognitive liberty, the right to mental privacy, the right to mental integrity, and the right to psychological continuity.
In conclusion, let us not be tempted by fantasies and “technological bluff”, to use the expression of the philosopher Jacques Ellul. Let us face the risks of AI and urge our politicians to persevere in finding an acceptable consensus between ethics, law, and competitiveness. French and European AI could thus gain a competitive advantage by guaranteeing the protection of their users’ fundamental rights.