Business leaders appear remarkably untroubled by the ethical issues thrown up by the use of artificial intelligence, according to a new report from FICO.
Why it matters
AI systems allow decisions to be made at scale, reducing human involvement and increasing profit margins. But those financial gains tend to overshadow the profound risk of companies ignoring their responsibility to people whose lives may be altered by the often-flawed decisions of an AI system.
Findings from the FICO report
65% of respondents’ companies can’t explain how specific AI model decisions or predictions are made.
73% have struggled to get executive support for prioritising AI ethics and responsible AI practices.
43% say they have no responsibilities beyond meeting regulatory compliance to ethically manage AI systems whose decisions may indirectly affect people's livelihoods – e.g. audience segmentation models, facial recognition models, recommendation systems.
At its core, the study finds that there is no consensus among executives about what a company’s responsibilities should be when it comes to AI.
What it means
AI most often manifests in a business context as a combination of machine learning, natural language processing, and image recognition.
High-profile examples of racist bias have been found built into machine learning systems used in the US criminal justice system and elsewhere, often with catastrophic results for the individuals affected. This is frequently the result of biases in the data originally collected by humans and fed to the AI, which then perpetuates those prejudices.
A 2019 poll of Americans by Oxford University’s Future of Humanity Institute found that while most people believe there are benefits to the development of AI, trust in the organisations developing or using the technology varies wildly. Robust governance combined with trust-building communications work is likely to be critical to the future.
FICO, the analytics software company and credit scoring specialist, worked with research firm Corinium to survey 100 C-level analytics and data executives, supplemented by a series of expert interviews with academics and NGOs.
“AI will only become more pervasive within the digital economy as enterprises integrate it at the operational level across their businesses. Key stakeholders, such as senior decision makers, board members, customers, etc., need to have a clear understanding on how AI is being used within their business, the potential risks involved and the systems put in place to help govern and monitor it” – Cortnie Abercrombie, Founder and CEO, AI Truth, an expert interviewed for the report.
Sourced from FICO, ProPublica, Future of Humanity Institute