RESEARCH PRODUCT

Complying With the First Law of Robotics: An Analysis of the Occupational Risks Associated With Work Directed by an Algorithm/Artificial Intelligence

Adrian Todolí-Signes

subject

Process (engineering); Computer science; Big data; Middle management; Occupational safety and health; Artificial intelligence; Human resources; Algorithm; Autonomy

description

It is increasingly common for companies to use artificial intelligence mechanisms, more or less advanced, to manage work: to set work shifts and production times, design and allocate tasks, recruit workers, evaluate performance and dismiss employees. Companies rely on technology to gather all the available information, process it and make the management decisions (productivity optimisation) that benefit them most. This replaces human supervisors, middle managers and human resources experts, leaving the management of workers in the hands of automated processes directed by algorithms or, at the most advanced stage, by artificial intelligence. This work examines the health hazards that these new forms of technological management can cause. Constant monitoring through sensors, the intensification of work driven by decisions taken by a machine with no empathy or knowledge of human limits, the reduction in the autonomy of workers subject to decisions made by artificial intelligence, discrimination hidden behind a mantle of algorithmic neutrality, and possible operating errors can all end up causing serious physical and psychological health problems for workers. These risks can be reduced if they are taken into account when programming the algorithm. This study defends the need to program the algorithm correctly so that it takes these occupational risks into consideration. Just as supervisors must be trained in risk prevention to carry out their work, the algorithm must be programmed to weigh up workplace risks; if it is not, steps must be taken to prevent it from being used to direct workers.
Specifically, the algorithm must be transparent, adapted to the real capabilities of workers, leave them some margin of autonomy and respect their privacy. In short, the algorithm must assess any element that poses a risk to workers' health and safety. The study therefore argues that a mandatory risk assessment, carried out by specialists, should be incorporated into the programming of the algorithm so that it is taken into account in decisions regarding work management.

https://doi.org/10.2139/ssrn.3522406