
My Assistant, the Robot

Between certain studies and the way artificial intelligence is portrayed in parts of the media, concern about the future of jobs is growing. Wrongly so! Given the current state of the technology, the robot should be seen as an assistant or a companion, not as a replacement. This requires continually educating the public on the subject and working on the complementary nature of humans and machines.

Life assistant Pepper greets the public at the Liège University Hospital in Belgium, relieving medical staff of part of the reception work.

One sign of intelligence is the ability of an agent to interact with its surroundings. “Artificial” intelligence (AI) refers to the ability of a machine to reproduce these interactive abilities: perception, cognition and action. The term “robot” is used to denote a machine in human or animal form, but also an automatic software agent interacting with computer servers. Robots use a large number of AI programmes. We are at the beginning of the artificial intelligence revolution. And yet the transhumanist movement, a fervent partisan of this revolution, is already touting immortality and the “superintelligence” of machines. Some businessmen, like Elon Musk, predict a future in which AI will seize power over humans. The media present this revolution as software and robots replacing human work. These prophecies only serve to fuel fears of job losses. Max Tegmark, MIT researcher and one of the founders of the Future of Life Institute, has made this the most pressing question in AI: “What do we want to do with it?”, he asks. “What will happen to humans if machines gradually take over the job market?” (1)

For the moment, economic growth linked to AI remains slow. What is its potential? What savings will it enable? Is it really an advantage if it starts replacing the work of humans? All these questions reveal one thing: the issues raised are economic, but also societal. The challenge our political leaders face today is how to enable (or even organise) the transition to a society that includes AI, without massive job losses and economic slow-down.

Some economists claim that almost 50% of all jobs are threatened by AI. The American think-tank McKinsey, for example, estimated in 2016 that 47% of U.S. jobs could be lost over the next twenty years. Other surveys, however, such as those of the OECD or France Stratégie (a prime-ministerial think-tank), show very different results: they claim that machines will replace less than 10% of jobs (2). Other studies are attempting to predict the number and types of jobs that will be created, although this remains quite difficult.

SOCIETY POLARISED

Alongside all this, some reports (one by Michael A. Osborne, a machine-learning scientist, and Carl Frey, an economist, both at Oxford University) focus on “professions at risk” of automation. Others evaluate this risk by task rather than by profession, like the report of the Employment Advisory Council (3). They explain that customs officers, cashiers and road freight carriers could disappear; that the jobs of the middle classes and managers would be very different; and that society could become markedly polarised, with, on the one side, creative or “manager” jobs that are economically very productive and, on the other, manual and personal-assistance jobs with low economic value. In any event, depending on the methods used, contrasting conclusions are reached on the impact of artificial intelligence on jobs.

The first thing to be considered when researching the impact of machines on our lives is the type of AI we are actually dealing with. Three types are commonly distinguished. The first is narrow AI: the ability of a machine to reproduce a specific human behaviour, without consciousness. Basically, this is simply a powerful tool for reproducing specific tasks: recognising objects, answering quiz questions, playing Go, etc. The second is strong AI: the ability of a machine to reproduce conscious and emotional human intelligence, knowing how to adapt to its environment and context. This is also known as artificial general intelligence (AGI). The third and final type is superintelligence: artificial intelligence that is more intelligent than all human intelligence combined.

AI, even narrow AI, should be distinguished from automation. Automation has already revolutionised the working world, replacing workers’ “production line” tasks with machines, and covers applications defined by the 4 Ds: dangerous, dull, dirty and dumb. AI goes further; it can replace humans for tasks that are not merely automatic. Boosted with AI, computers or robots can, for example, reply to their master’s voice in writing or orally. They can be found in what is called social, personal and service robotics, defined by the 4 Es: everyday, e-health, education, entertainment. Social robots accompany us in our daily lives, monitoring our health, developing our knowledge and entertaining us. For these tasks, the robot is seen as a companion or assistant, but definitely not as a replacement; in fact, it would be quite incapable of replacing a human.

JOURNALIST OR LAWYER

In other fields, artificial intelligence can recognise what is in an image, beat a Go champion, answer encyclopaedic questions, copy the style of any artist, generate lifelike images, diagnose cancer, etc. Contrary to popular belief, it will also have consequences for so-called “white-collar” jobs. For example, an AI journalist already exists, able to write summaries. These robot-journalists, or robot-editors, are in fact AI programmes that can convert data into text. The American Los Angeles Times was one of the first newspapers to take the leap: it welcomed Quakebot into its teams, which wrote its first article in 2014 when an earthquake occurred in California. In France, the news website of the daily Le Monde used Data2Content to write some 36,000 articles about the departmental elections in March 2015.
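To illustrate the principle only (this is not the actual Quakebot or Data2Content software, and the event data below are invented), a minimal data-to-text sketch might fill a template from structured records like this:

```python
# Minimal sketch of template-based data-to-text generation, the principle
# behind "robot-journalist" programmes. All values here are hypothetical.

quake = {
    "magnitude": 4.7,
    "place": "near Westwood, California",
    "time": "2014-03-17 06:25",
    "depth_km": 8.1,
}

def write_brief(event: dict) -> str:
    """Turn a structured earthquake record into a short news brief."""
    return (
        f"A magnitude {event['magnitude']} earthquake struck "
        f"{event['place']} on {event['time']}, at a depth of "
        f"{event['depth_km']} km, according to preliminary data."
    )

print(write_brief(quake))
```

Real systems add data feeds, editorial rules and more varied phrasing, but the underlying idea remains converting structured data into readable sentences.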

Similarly, assistant agents already exist for lawyers. The US firm BakerHostetler, founded in 1916, has been using a certain Ross since 2016. Like some employees in law firms, this AI analyses hundreds of files and articles about cases similar to the one in court and provides the case lawyer with useful information. In fact, narrow AI (the only AI accessible today) will transform many jobs rather than cause them to disappear. It is therefore necessary to reflect on human-machine complementarity and to discover how associating the two could enhance economic production while respecting human well-being. This means that certain questions need to be answered whenever a human task is to be replaced by AI. For example, can the task be entirely automated? Is emotional intelligence required (as the machine has no emotions, it can only simulate them)? Is a human decision required to verify the result and be accountable for it? Is it socially and legally acceptable to substitute the opinion of a machine for human opinion?

These questions are particularly important in the medical sphere. The work of a doctor is to collect data about the patient, make a diagnosis, find the appropriate treatment and help the patient feel better by establishing a relationship of trust. Narrow AI can already perform some of these tasks; it can analyse and diagnose medical images better than humans! The question, however, is whether the decision to choose the best treatment and to order additional tests should be left up to it. And if so, what would the role of doctors be? Would they deal only with complex cases requiring general intelligence? Many aspects of AI enable us to live better lives. The usefulness of AI, robots and artificial agents is undeniable, for example to assist and relieve healthcare providers or to care for the elderly, provided that human contact, one of the essential aspects of care, remains at the centre of the care relationship. AI will enable better disease prevention, continuous monitoring, a more extensive memory, fewer lapses of concentration, less overlooking of pertinent information, and more efficient expert advice that is less subjective or arbitrary and better justified... on the condition that a certain number of values are respected.

It is essential that AI be neutral, loyal and non-discriminatory. These issues have already been discussed by stakeholders in the field; far less so the ethical and social risks of long-term interaction with machines. Disengagement, disempowerment, the loss of a sense of merit in work, the “proletarianisation” of knowledge and know-how, social disparities in access to technology, standardisation, etc., are all issues that demand more debate (4).

To address concerns about these risks and to help all actors in the workplace make AI systems their own, we must reflect now on the steps and conditions necessary to build an atmosphere of trust around the development of these technologies. This requires continuous public education on the subject. We must also work on the complementarity between humans and machines in the medium and long term, so that a sort of companionship emerges. So let us be more human in our contact with machines.

(1) John Brockman (ed.), What to Think About Machines That Think, Harper Perennial, 2015.

THE BOOK

OF ROBOTS AND MEN

Soon robots will outperform and overthrow us. This is the cliché this book attacks. Page after page, Laurence Devillers deconstructs the fantasies machines give rise to and attempts to initiate as wide a public as possible into the real challenges of artificial intelligence and robotics, encouraging ethical questioning on these issues. She does so with an acute sense of narration and simple writing, undoubtedly inspired by Isaac Asimov, the undisputed master of the science-fiction novel and inventor of the term “robotics”, whom she quotes at the start of each chapter.

Vincent Glavieux

Plon, 288 pp., €16.90.

> AUTHOR

Laurence Devillers

Computer Scientist

Laurence Devillers is a professor at Paris-Sorbonne University and a research scientist at the CNRS Computer Science Laboratory for Mechanics and Engineering Sciences, where she leads the research team on Affective and Social Dimensions in Spoken Interactions. Her research deals mainly with human-machine interaction, the detection of emotions, “chatbots”, and affective and social robotics. She is a member of the Allistene Commission on Research Ethics in Digital Science and Technology (Cerna). She contributed to the France Intelligence Artificielle report in March 2017.
