Workers have some misgivings about AI, amid software scandal | WARC | The Feed
Among company executives and employees, there are significant gaps in trust that AI will be implemented transparently and openly, according to a new study. It arrives during a week in which the perils of putting too much trust in software should be front of mind for leaders around the world.
Why trust matters to AI
AI is far from perfect, but it’s already useful. The trust gap is a big issue not only for employees who fear for their welfare and jobs but also for executives who may be presiding over the improper use of AI systems in their organisations without adequate training or oversight.
The survey, by workplace software company Workday, comes as the UK is gripped by the story of thousands of Post Office sub-postmasters wrongly accused of theft and accounting fraud because of what turned out to be a software fault. If there’s a key lesson from that miscarriage of justice, it’s that technology cannot be beyond doubt, especially for a black box like AI.
Transparency and oversight are key. “The tipping point for AI success will be an organization’s ability to create trust through transparency. The challenge is that AI innovation needs to be balanced with an unwavering commitment to smart governance and communication,” says Workday’s report.
The trust gap
Commissioned by Workday, the survey led by FT Longitude took in the views of 1,375 business leaders and 4,000 employees. It found a trust gap of roughly ten percentage points.
Among leaders:
- 62% welcome AI;
- 62% are confident it will be implemented in a trustworthy and responsible way.
Among employees:
- 52% welcome AI;
- 55% are confident that it will be implemented in a trustworthy and responsible way.
Employees and leaders align broadly on the potential for business transformation and the likelihood that regulation and control will come to pass. Interestingly, 33% of both leaders and employees believe that a “decline and collapse” scenario is likely to occur.
Control is crucial, as 70% of leaders and 69% of employees believe AI applications should remain under human control.
Behind the gap
Despite leaders’ emphasis on control, 42% of employees believe that their company doesn’t have a clear understanding of which systems should and shouldn’t require human intervention.
In fact, human welfare is a concern:
- 23% don’t believe their organisation puts employee interests above its own when implementing AI (vs. 21% of leaders).
- 23% don’t believe their organisation prioritises innovating with care for people over innovating with speed (vs. 17% of leaders).
- 23% don’t believe their organisation will implement AI in a responsible and trustworthy way (vs. 17% of leaders).
It’s worth noting that leaders are only slightly more confident than employees that organisations will put people’s welfare first.
Perhaps a bigger problem is a lack of awareness across the entire workforce about how AI is being used, how to use it responsibly, and how to be transparent about its use. The gap here is sizeable: just a quarter of employees say their organisation has collaborated with them on AI regulation (vs. 36% of leaders), and only a fifth say their company has shared guidelines on responsible usage with them.
Sourced from Workday, FT, WARC