As intelligent robots spread into services and the domestic sphere, we may need ethical code designers in the near future.

Extensions of human capabilities

Robots are the latest step in the evolution of the extensions of human capabilities, and this novelty can sometimes be disturbing. However, human beings have always relied on external resources to expand their own capacities.

What has changed is our relationship with these extensions. Since this relationship is dialectical (the two sides transform each other), it is logical to think that the coexistence of humans and robots will produce changes that are yet to be discovered.

Analyzing the evolution of extensions, Vilém Flusser (The Shape of Things) proposed the following historical chain: first the hand was extended by tools, tools gave way to machines, and machines are finally giving way to robots. In my opinion, this sequence is limited by its exclusively artificial perspective. Human beings have also increased their capacities through natural extensions: other humans and animals.

The evolution of technical systems

According to Flusser, the great difference between the tool and the machine is that, in the first case, the tool is the variable element and the human being is the constant, while with machines this relationship is reversed: the machine is constant and the worker becomes a replaceable element. The question is whether, with intelligent machines (robots), human beings could become an unnecessary element altogether.

This reversal of roles, in which the human progressively becomes a smaller player, is well reflected in the laws of TRIZ, based on the theories of the Soviet engineer, inventor and scientist Genrich Altshuller.

According to Altshuller’s first law, a technical system requires four essential components to operate:

  1. The engine, the power source.
  2. The transmission, responsible for directing the energy from the engine to the working unit.
  3. The working unit (or tool). It ensures the contact between the system and the object or outer environment on which it acts.
  4. The control element, which ensures that the whole system reacts to the changes, adapting the system behavior to attain its goals.

Thus, for example, a technical system based on a squeezer would be composed of the following elements: the engine is the user's muscular strength; the hand transmits this energy to the squeezer (the working unit), which acts on the orange (the object in the outer environment); and the user, observing the process, acts as the control element.
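The squeezer example above can be sketched in code. This is a minimal, hypothetical model of Altshuller's four components (the class and field names are mine, not TRIZ terminology), with a completeness check reflecting the law that all four parts must be present for the system to operate:

```python
from dataclasses import dataclass

@dataclass
class TechnicalSystem:
    engine: str        # the power source
    transmission: str  # directs energy from the engine to the working unit
    working_unit: str  # contacts the object or outer environment
    control: str       # adapts the system's behavior to attain its goal

# The manual squeezer, modeled as a technical system.
squeezer = TechnicalSystem(
    engine="user's muscular strength",
    transmission="the user's hand and arm",
    working_unit="the squeezer cone acting on the orange",
    control="the user, watching the juice and adjusting pressure",
)

def is_complete(system: TechnicalSystem) -> bool:
    """All four components must be present for the system to operate."""
    return all([system.engine, system.transmission,
                system.working_unit, system.control])

print(is_complete(squeezer))  # True
```

An electric juicer would fill the same four slots differently (motor as engine, gears as transmission, a thermostat or timer as control), which is what makes the framework useful for comparing designs.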

The elimination of the human role

However, according to Altshuller’s laws, the design and innovation of technical systems tend, over time, to evolve toward more efficient systems, increasing the benefits while reducing the undesirable effects and the costs.

At the same time, technical systems tend to reduce the degree of human involvement, replacing it with technical processes. This progressive reduction of the human role is driven by the design of increasingly efficient systems: by replacing people with technological elements, we obtain faster, cheaper and more user-friendly systems.

Returning to the example of the squeezer, we can see how successive designs have reduced the role of the user, progressively making the process more automatic.

The substitution of the human being

However, the replacement of human beings by autonomous, sensing and possibly intelligent systems carries with it a series of effects whose impact on us we do not yet know.

1. What use is a human being?

The fear of being replaced by a machine has always existed. In their quest for efficiency, factories were the first to replace workers with machines on a massive scale. However, robots are now spreading out of the industrial domain into services and even the domestic sphere.

Elevator girls

2. Can robots be craftsmen?

On February 10, 1996, Deep Blue, a chess-playing computer developed by IBM, became the first machine to win a chess game against a reigning world champion under regular time controls.

Kasparov being defeated by Deep Blue

The goal of artificial intelligence (AI) is to create systems that can perform operations that would require intelligence if they were performed by a human being. What is the point of using a robot instead of a human being (with built-in intelligence)? In theory, intelligent artificial systems increase efficiency, reduce costs and save users effort.

However, it is curious that, in parallel with the growing role of AI, we are witnessing the emergence of a new craftsmanship. The new craftsmen use these intelligent artificial systems to their advantage, but some factors can hardly be captured by machines. Could we artificially reproduce the tacit knowledge of the craftsman?

It has been a long time since humans ceased to be a completely natural system. Paradoxically, it is possible that the role of robots will not be to replace human labor, but to give rise to a human-machine collaboration that allows us to go further.

Marshall Felch Shoemaker

3. Do we need ethical code designers?

The use of autonomous machines that perform pre-programmed tasks is not new. However, with artificial intelligence and deep learning, machines can learn by themselves, evolve and perform actions that were never explicitly programmed.

This raises several dilemmas. Could a robot harm a human being? Should we incorporate an ethical code into the programming of intelligent machines? Answering these questions is not a science-fiction exercise: machines already decide who gets a loan or a life-insurance policy.

In 2016, Fast Company magazine surveyed designers at Google, Microsoft, Autodesk, Ideo and Artefact, among others, to predict the new design jobs of the future. The predictions included the Intelligent System Designer and the Machine-Learning Designer, but surprisingly no one raised the ethical question.

On the one hand, machines may need to incorporate an ethical code that takes priority over purely mathematical decisions. Humans do not rely on data alone to decide what action to take and, although this is often a source of errors and harmful behavior, it has also allowed us to survive.
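What "prioritized over purely mathematical decisions" could mean in practice is an architecture where ethical rules are checked before the score-based decision ever runs. The sketch below is purely illustrative: the rules, thresholds and field names are invented for this example, not taken from any real lending system.

```python
# Hypothetical ethical rules: each returns a veto reason, or None if it
# does not apply to the case.
ETHICAL_RULES = [
    lambda case: "protected attribute used in scoring"
        if case.get("uses_protected_attribute") else None,
    lambda case: "borderline score requires human review"
        if 0.45 <= case["score"] <= 0.55 and not case.get("human_reviewed")
        else None,
]

def decide(case: dict) -> str:
    # The ethical code is prioritized: any veto overrides the score.
    for rule in ETHICAL_RULES:
        reason = rule(case)
        if reason:
            return f"escalate to human: {reason}"
    # Only then does the purely mathematical decision apply.
    return "approve" if case["score"] >= 0.5 else "deny"

print(decide({"score": 0.9}))  # approve
print(decide({"score": 0.5}))  # escalate to human: borderline score...
```

The design choice matters: the rules are not extra terms added to the score, they are hard constraints evaluated first, so no score is high enough to buy its way past them.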

On September 26, 1983, Stanislav Petrov was the duty officer at the command center of the Oko nuclear early-warning system when the system reported that six missiles had been launched from the United States. Petrov judged the reports to be a false alarm, preventing an erroneous retaliatory nuclear strike.

But how can we program an ethical code for dilemmas that humans themselves have never been able to resolve? And which ethical code would be chosen? In democratic societies, people may follow whatever ethical code they want, as long as they respect the laws established by society as a whole.

And finally, there is the question of how we can demand ethical codes from machines when we ourselves do not follow them. How do we explain to a robot why people drive after drinking alcohol? How do we explain to a robot that it is legal to sell tobacco, knowing that it is harmful to health?
