Are you afraid of artificial intelligence?

Why you shouldn't be afraid of artificial intelligence

By Lorena Jaume-Palasí

Are machines with "AI" intelligent? Do they have feelings and a will of their own? Are they capable of malice and deceit? After the commercialization of the automobile at the beginning of the 20th century, questions like these were argued in the courts of various countries. The fact that the same uncertainties re-emerge a century later, with the increasing commercialization of AI, warrants a look back at the debate that accompanied the introduction of a new mode of transport.

The idea of an automated society, with robots populating the workplace and the home, was one of the utopias and dystopias with which literature responded to the introduction of automation. In the early 20th century, cars and traffic lights literally brought automation to the streets. Since then, the number of machines and automated processes in our lives has grown exponentially: washing machines, ATMs, camera lenses, doors, car washes, thermostats... The fear they initially triggered has given way to routine. Automation is so ubiquitous that we scarcely notice it anymore.

But automatic machines and artificial intelligence (AI) are not the same thing; AI is a form of advanced automation. In conventional automation, programmers write very precise rules with which a machine performs certain tasks. Efficiency depends on the detail and accuracy with which the task was programmed: calculating the shortest route between Berlin and Munich, for example. AI enables a more abstract form of automation. The fastest route between the two cities could be calculated taking various factors into account: construction work, the number of traffic lights, foreseeable rush-hour traffic, and unpredictable events such as accidents or the weather. In other words, programming here focuses on creating rules for measuring efficiency and on defining parameters for action. Following these rules, intelligent automation systems choose the most efficient action. This level of abstraction is a milestone in the history of technology.
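The distinction can be made concrete with a minimal sketch. All place names, distances and weights below are made up for illustration; the point is only the shift in what gets programmed: a fixed rule ("minimize distance") versus a definition of efficiency (a cost function combining distance, congestion and traffic lights) under which the system chooses the best action itself.

```python
import heapq

# Toy road network: each edge carries distance (km), number of traffic
# lights, and a congestion factor (1.0 = free-flowing). Invented numbers.
EDGES = {
    "Berlin":    [("Leipzig", 190, 4, 1.0), ("Magdeburg", 150, 6, 1.3)],
    "Leipzig":   [("Nuremberg", 280, 8, 1.6)],
    "Magdeburg": [("Nuremberg", 330, 2, 1.0)],
    "Nuremberg": [("Munich", 170, 5, 1.2)],
    "Munich":    [],
}

def best_route(start, goal, cost_fn):
    """Dijkstra's algorithm over whatever notion of 'cost' cost_fn defines."""
    queue = [(0.0, start, [start])]
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, dist, lights, congestion in EDGES[node]:
            step = cost_fn(dist, lights, congestion)
            heapq.heappush(queue, (cost + step, nxt, path + [nxt]))
    return None

# Conventional automation: one fixed, precisely programmed rule (distance only).
shortest = best_route("Berlin", "Munich", lambda d, l, c: d)

# The more abstract form: the programmer defines how efficiency is *measured*
# (estimated hours: distance over base speed, scaled by congestion, plus a
# small delay per traffic light) and the system picks the best action.
fastest = best_route("Berlin", "Munich", lambda d, l, c: d / 100 * c + l * 0.02)

print(shortest)  # distance-optimal route
print(fastest)   # time-optimal route, which differs
```

With these invented weights the two notions of "best" diverge: the shortest route runs via Leipzig, while the congestion-aware cost function prefers Magdeburg.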

These achievements are both exciting and frightening. A lack of knowledge and familiarity makes AI seem like magic and revives old debates. Is this technology intelligent? Does it have feelings and a will of its own? Is it capable of malice and deceit? Who is responsible if the system has unforeseen, harmful effects? Will it change human nature? What are the risks? Do we need new rules?

Exactly these questions were heard in the courts of various countries after the commercialization of the automobile. From a normative and regulatory point of view, three aspects of that earlier debate deserve our attention.

Technology only appears intelligent and human when it is not commonplace

The commercialization of the car was longed for by all social classes. As a means of transport, it promised a future of efficiency and hygiene in cities whose streets were drowning in horse droppings. Within a few years, however, there was a U-turn, and cars themselves became the new urban plague. In the 1920s, protests against the lack of road safety were ubiquitous: wrecked cars were put on display in the streets, complete with bloodied mannequins and the devil behind the wheel. In Washington and New York, demonstrations were held at which 10,000 children dressed as ghosts symbolized the annual number of road deaths.

In just a few years, the car became the subject of a profound ethical debate that soon reached the courts. A state court in the US state of Georgia, for example, deliberated intensely on the moral character of the car. In its judgment, it concluded that although vehicles were not malicious, they "should be classified as dangerous wild animals", and that the existing laws for keeping exotic animals should therefore be applied to them.

As people grew accustomed to the new vehicles, the humanization of the machines, and with it the attribution of malicious intent, faded over time. The ethical and legal debate increasingly focused on the behavior of people, in front of and behind the wheel.

This seemingly philosophical aspect of the discussion had a clear legal consequence: holding the machine liable as if it were an intelligent entity was ruled out. In retrospect, the opposite would not only have been absurd; it would also have forced ethics and law to devise workable rules and sanctions applying to humans and machines alike.

Today's debate about artificial intelligence moves along the same lines and invites us to think through the same legal and ethical ramifications. Does a robot have intentions that should earn it legal personality? To what extent can responsibility be transferred from humans to machines? How could a sanction be applied to a machine?

Artificial intelligence is not intelligent

Artificial intelligence and its methods of statistical analysis have no will of their own. Artificial intelligence is not intelligent. It is therefore incapable of having its own ambitions and interests, of deceiving or of lying. In other words, artificial intelligence should not scare us any more than statistics do. That does not mean it is harmless: artificial intelligence and its algorithms are not neutral, but reflect the intentions and unconscious biases of the programmers and data scientists who build them, and of the parties involved in deploying the technology.

For AI, very transparent logs can be kept that record what changes humans have made, regardless of the complexity of the algorithms involved. There is no need to create a special legal personality for artificial intelligence. The technology itself makes it possible to assign responsibility for failure or abuse to a particular person more clearly and easily than before.

Both the "driver" who operates an artificial intelligence and the "pedestrian" who is exposed to the system can be identified.

Ethics and law must be neutral towards technology

Back to the dilemmas created by the introduction of the car a century ago. Back then, it was crucial to focus the debate, ethically and legally, on people in order to formulate practical and applicable laws and regulations. Yet a legal and regulatory system that assigns rights and obligations can only be legitimate if the risks and the actors involved are clearly understood. It took the courts, and society in general, some time to understand both the technical aspects of the car and the problems created by car traffic.

The first attempts at regulation seem grotesque today, not least because they imposed duties on actors who could not exercise sufficient control over the machine. In the UK, for example, a driver was required to notify the sheriff before driving through a parish, so that a man could march ahead of the car, armed with two red flags, to warn pedestrians.

The legal system that first tried to regulate traffic placed responsibility solely on the driver. Back then, however, the streets were anything but predictable: street signs had not yet been invented, children played in the road, horses shied at the noise of engines, and pedestrians could not judge the speed at which cars approached. All of this made the responsibility assigned to the driver disproportionate; physiologically, it was simply impossible to react to so many unpredictable events.

Pragmatism and a sense of social justice led the Canadian James Couzens to devise a system of traffic signs and rules to coordinate pedestrians and drivers. Couzens resigned as vice president of finance at Ford and went to work for the city of Detroit, then the world capital of the automobile. Cigar in hand, he revolutionized the city's transport infrastructure. First he identified the situations in which responsibility lay with the pedestrian, and created signs and designated areas for crossing the streets.

At first there was great resistance from society. The rules and duties imposed on pedestrians were not free of controversy: Councilman Sherman Littlefield called them humiliating for "treating ordinary citizens like cattle". Couzens was not intimidated and imposed his rules by decree. Time proved him right and demonstrated the effectiveness of his approach, which ultimately became an international model. Couzens was also responsible for a traffic control and management plan that made it possible to do without police presence when staff were short. Detroit became a cradle of technological progress, with revolutionary ideas such as the automatic traffic light in the 1920s.

It is noteworthy that, in designing his traffic rules, Couzens paid little attention to the car as a technology in its own right: the rules and restrictions did not touch its technical aspects, only its use in public space. Speed limits, for example, do not prohibit the development of engines with more horsepower; they restrict the driver's use of the accelerator. The laws and regulations Couzens drew up did not have to be changed with every technological advance, because they could always be recontextualized to new uses of the technology. Because the established traffic rules were technologically neutral, they are still in place a century later and are essentially not outdated.

In the field of AI, there are attempts to write laws and ethical principles that can be applied directly to program code. One example is the principle of data minimization, under which only the minimum amount of personal data required to provide a service or perform a task may be processed. This technical principle is vitally important and shapes how information is processed: it protects the privacy of the people involved. Paradoxically, however, the rule can violate equal treatment, because it does not take context into account. Just over a decade ago, for example, studies on the use of beta-blockers (a drug widely used in cardiology) relied on a database consisting mainly of data from European men. Their conclusions hold for that group, but not for women or for ethnic groups with other genetic variations.

The lack of information about certain social groups produces a database that is distorted from the start: the profile and characteristics of one part of the population are overrepresented and skew the calculation, giving a false impression of the whole. The assumption that less data reduces the risk of discrimination is a myth. Depending on the context, more or less personal data is required to avoid simplifications that discriminate against certain groups.
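The arithmetic behind this distortion is simple enough to sketch. The numbers below are entirely invented, loosely modeled on the beta-blocker example above: one group makes up 90% of the records, so any aggregate fitted to the pooled data sits close to that group and far from the other, and dropping the group attribute in the name of data minimization would make the error invisible.

```python
# Hypothetical dataset: (group, effective dose in arbitrary units).
# Group A is heavily overrepresented (90 of 100 records).
samples = [("group_a", 50.0)] * 90 + [("group_b", 80.0)] * 10

# A naive model fitted to the pooled data: one average for everyone.
pooled_mean = sum(dose for _, dose in samples) / len(samples)
print(pooled_mean)  # 53.0: close to group A, far from group B's 80.0

# Disaggregating by group reveals the distortion.
group_means = {}
for group in ("group_a", "group_b"):
    doses = [d for g, d in samples if g == group]
    group_means[group] = sum(doses) / len(doses)
print(group_means)

# Had the group attribute been stripped out beforehand (less personal
# data), the 53.0 figure would look authoritative and the systematic
# error for the underrepresented group could not even be measured.
```

This is the sense in which "less data" can increase, rather than reduce, the risk of discrimination: without the attribute, the bias persists but can no longer be detected.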

These examples show that we need to change strategy, because until now the debate about artificial intelligence has focused on the technical part. History shows that it is possible to develop laws and regulations for new technologies without regulating the mathematical code itself. Ethics and law fundamentally address the social context: their principles apply not to the technical process, but to the social situation into which that process is integrated. The point is not to regulate the technology that artificial intelligence enables, but what people do with it.

Training society to deal with new technologies does not require any technical knowledge

Couzens saw the need to educate citizens in dealing with cars so that the traffic rules would permeate society and be adopted by it. He also understood the importance of key skills, such as judging the distance and speed of a car, which could only be acquired by integrating the technology into everyday life and becoming accustomed to it through use. Couzens did not believe it necessary to understand the mechanics of the car beyond its operational functions, such as braking, accelerating or changing a tire. In both law and ethics, the premise ultra posse nemo obligatur applies: no one is obliged to do more than they are able to. Detailed knowledge of a car's mechanics lies beyond what can reasonably be expected, so no one is obliged to possess it.

The AI debate calls on the public to become technically competent. But the dilemmas caused by the automobile at the beginning of the 20th century show that this kind of discourse is not constructive. We do not need to know how an airplane works in order to fly in one, or anything about biochemistry to buy a yogurt. No one is obliged to know more than what lies within their reach and everyday understanding.

AI makes it possible to recognize patterns of behavior and to identify differences between groups: women, ethnic groups, social classes and many others. On this basis, developers can decide to differentiate, more or less legitimately. They can offer different services or information to different groups, or manipulate attention. Unintentional and implicit discrimination must also be taken into account, which is why constant evaluation of the technology is essential. Those who carry it out need not only general ethical sensitivity but particular alertness to unintentional discrimination, which can result from biases in the design or in the databases the AI works with.

Whether this technology is used to reinforce or to compensate for discrimination depends on the people who use it. It is not the citizen who needs to understand the technical process behind AI in order to use it. It is the engineers, data scientists, marketing departments and governments who use or regulate such technologies who need to understand the social and ethical dimensions of artificial intelligence.

---

Lorena Jaume-Palasí is Managing Director of AlgorithmWatch and a member of the Spanish government's Council of Wise Men on Artificial Intelligence and Data Policy. This article was originally published in Spanish in El País in March 2018.