AI Software That Risks Worker Safety Would Breach Section 6, says DWP Minister
- Date: Monday 22nd October 2018
Designers, manufacturers, importers or suppliers of artificial intelligence (AI) and machine learning software that is implicated in an accident could be liable for prosecution under Section 6 of the Health and Safety at Work Act, the government has confirmed.
The announcement, in a written parliamentary answer by Department for Work and Pensions minister Baroness Buscombe on 5 June 2018, adds some clarity to a grey area in legal and academic circles.
The written answer responded to a question by Lord Stevenson of Balmacara, a Labour life peer and former adviser to Gordon Brown.
He asked to what extent Section 6 would apply to AI or machine learning software used to control physical objects in the workplace, to design articles for workplace use or to support human decision-making processes on computers under the control of the employer.
In reply, Buscombe said: “Section 6 of the Health and Safety at Work Act 1974 places duties on any person who designs, manufactures, imports or supplies any article for use at work to ensure that it will be safe and without risks to health, which applies to artificial intelligence and machine learning software.
“Section 6(1)(b) requires such testing and examination as may be necessary to ensure that any article for use at work is safe and without risks but does not specify specific testing regimes. It is for the designer, manufacturer, importer or supplier to develop tests that are sufficient to demonstrate that their product is safe.”
Few prosecutions are currently brought by the Health and Safety Executive (HSE) under Section 6, which sets out the duties of “any person who designs, manufactures, imports or supplies any article for use at work”.
In her written reply, Buscombe said that the HSE’s Foresight Centre was monitoring developments in AI to identify potential health and safety implications for the workplace over the next decade.
“The Centre reports that there are likely to be increasing numbers of automated systems in the workplace, including robots and artificial intelligence. The HSE will continue to monitor the technology as it develops and will respond appropriately on the basis of risk.”
In its report ‘AI in the UK: ready, willing and able?’, published in April, the House of Lords Select Committee on Artificial Intelligence pointed to existing uncertainty over how the UK’s regulatory framework could be applied to next-generation computer systems, and called for a review by the Law Commission.
Amongst other issues, it highlighted malfunctions that occur as a result of an algorithm learning and evolving of its own accord.
The report said: “There is no consensus regarding the adequacy of existing legislation should AI systems malfunction, underperform or otherwise make erroneous decisions which cause harm.”
It added: “It was not clear to us, nor to our witnesses, whether new mechanisms for legal liability and redress in such situations are required, or whether existing mechanisms are sufficient.”
The committee recommended: “We recommend that the Law Commission consider the adequacy of existing legislation to address the legal liability issues of AI and, where appropriate, recommend to government appropriate remedies to ensure that the law is clear in this area.”