Safer Highways


AI will leave its mark on every aspect of our lives, but is this cause for alarm or celebration? The tech may keep workers safe, but is it a danger to privacy and consent?

The steady march of artificial intelligence (AI) into our daily lives is accompanied by as much unease as excitement: expanding machine decision-making may shrink humans’ role and reduce workers’ autonomy, increase perceptions of surveillance, and possibly even automate the biases of the algorithms’ original authors.

In addition, AI depends on data gathered from sensors, cameras and wearable devices: first to build intelligent algorithms, then, as it accumulates, to monitor, analyse and expand operations. While data in most workplace scenarios would be anonymised and aggregated, data collected for one purpose – such as social distancing compliance – could ultimately be repurposed for a less benign one: to monitor productivity, for instance.


How should safety professionals navigate this territory? Lawyers in the field advise caution. ‘If it isn’t pitched correctly and considerately, there is potential to damage the trust and respect between the employer and employee, and that, as well as putting the business on the wrong side of compliance in respect of employment and data protection legislation, could be detrimental to productivity and wellbeing,’ says a spokesperson from law firm Osborne Clarke.

A key issue is that securing ‘consent’ is not necessarily enough to guarantee workers’ rights, or protect employers from future liability claims. ‘Consent will rarely be an option in an employment context because of the imbalance of power between employer and employee,’ says the Osborne Clarke spokesperson. ‘Instead, employers will need to look at what other lawful basis they can rely on for handling employee data collected through these types of technology.’ There could be lawful grounds for collecting health-related data, for instance.


On the other hand, safety practitioners who have trialled AI-backed systems are positive about the results and the technology’s future. Travis Perkins plc calculates a 14% reduction in its manual handling-related lost time incidents after adopting wearable digital tags from Soter Analytics to alert staff to unsafe manual handling movements. TrackActive Me, a machine-learning app for staff with pain from musculoskeletal disorders (MSDs), can reduce lower back pain by 35%. And 30 Far East construction sites trialling the viAct intelligent site safety monitoring system have seen no recordable accidents since deployment.

Travis Perkins plc reports that staff have been ‘motivated and excited’ by the use of Soter’s technology, rolled out in its pipeline and heating solutions business BSS. ‘Colleagues said it increased their risk awareness, enabling them to make changes to their work routine,’ says Vimel Budhdev, health, safety and environment improvement specialist at Travis Perkins plc. ‘There is definitely a space in the workplace for new technologies, which help us work safer and smarter without slowing down.’


The TrackActive Me app gathers data from specialist software for physiotherapists (TrackActive Pro) and matches employees with musculoskeletal pain to the best type of rehabilitative exercise.

Its algorithm can generate personalised programmes for back, muscle and joint health, so that patients start exercise programmes in the time normally spent waiting for a physiotherapist’s appointment.

Co-founder and managing director Ian Prangley (pictured) explains that TrackActive Me collects users’ feedback on the exercises that help or are too demanding, then uses that data in the back end to optimise the programmes. The next step would be to incorporate more data from more users to develop true AI that is even more responsive to the user’s needs.
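The loop Ian describes – collect user feedback on each exercise, then use it in the back end to refine future recommendations – can be sketched in a few lines. This is a hypothetical illustration only: the class, the scoring rule and the exercise names are assumptions for the sake of the example, not TrackActive’s actual algorithm.

```python
from collections import defaultdict

class ExerciseRecommender:
    """Illustrative feedback-driven recommender (not TrackActive's code)."""

    def __init__(self, exercises):
        # Every exercise starts with a neutral score.
        self.scores = {ex: 0.0 for ex in exercises}
        self.counts = defaultdict(int)

    def recommend(self, n=3):
        # Suggest the n exercises with the best feedback so far.
        return sorted(self.scores, key=self.scores.get, reverse=True)[:n]

    def record_feedback(self, exercise, helped, too_demanding):
        # Running-average update: reward exercises users say helped,
        # penalise those flagged as too demanding.
        delta = (1.0 if helped else 0.0) - (1.0 if too_demanding else 0.0)
        self.counts[exercise] += 1
        n = self.counts[exercise]
        self.scores[exercise] += (delta - self.scores[exercise]) / n

rec = ExerciseRecommender(["pelvic tilt", "bridge", "bird dog", "plank"])
rec.record_feedback("bridge", helped=True, too_demanding=False)
rec.record_feedback("plank", helped=False, too_demanding=True)
print(rec.recommend(2))  # "bridge" now ranks first, "plank" last
```

In a real system the update would draw on pooled data from many users rather than a single running average – which is the step Ian describes as moving towards “true AI”.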

Physiotherapist Tim Colledge came across the app while searching for a solution for a client company. ‘It’s like having a physiotherapist in your pocket,’ he says.

But although Tim is already using the app with some clients and their staff, a trial with a major manufacturer is on pause as its legal team wrestles with the issues around data privacy and corporate liability. What would happen if, for instance, a worker is told to exercise for back pain that turns out to be a kidney condition?

‘The arguments are around who makes the decisions: the human, or the machine learning?’ says Tim. ‘And who would have liability if the data collected ended up in a tribunal? Ultimately, we’re trying to integrate the app into the occupational health pathway as a tool, but the human still writes the report so it’s not replacing their responsibility.’

On the legal issues, Ian says the app could be offered to staff as a voluntary measure, or possibly packaged with incentives: he acknowledges that whether its use could be made mandatory for employees or policy-holders depends largely on how an employer or insurer chooses to offer it.

