Amid concerns surrounding digital trust and ethics, Capgemini’s Frank Windoloski says having an ethical and responsible approach to artificial intelligence is necessary for enterprises to preserve customer trust.
Within the span of months, generative artificial intelligence (GAI) has gained widespread popularity among businesses and consumers through its ability to generate content and personalised recommendations. Trained on existing large-scale datasets, it can autonomously produce content and tailored recommendations from text prompts within seconds, marking a major breakthrough in the AI sphere. Importantly, this rapid adoption is a clear indication that AI continues to shape our relationship with technology.
With generative AI, businesses can now produce personalised content at a faster pace, identify consumer trends and improve targeted marketing across various touchpoints. The insights AI surfaces can help organisations engage consumer segments effectively across different channels and provide real-time guidance on the best touchpoints to leverage. This makes the consumer experience feel natural rather than robotic and impersonal.
McKinsey recently estimated that adopting generative AI could add 0.1 to 0.6 percentage points to labour productivity growth each year through 2040, loosely translating to US$2.6tn to US$4.4tn in value across industries. But the success of implementation and usage depends largely on consumer receptivity and the ability of businesses to deliver valued and trusted experiences for end consumers.
Our latest report, ‘Why consumers love generative AI’, revealed that more than half of consumers in Asia Pacific are aware of the latest generative AI trends and have started using tools like ChatGPT and DALL-E. Yet the technology’s ease of access has raised concerns surrounding digital trust and ethics, which ultimately affect the overall consumer experience. To determine how useful and trustworthy generative AI tools are, we must first understand how they are currently used and discern their suitability for businesses and consumers. With appropriate guardrails and a human-led design approach, GAI offers significant potential for organisations to create additional business value in the years ahead.
Revolutionising consumer experiences and accelerating business adoption
GAI has made it easy for consumers to generate thorough summaries of long-form content and even create art and images. It can also produce detailed, personalised responses such as trip itineraries and product recommendations, potentially changing the way consumers make purchase decisions. In fact, our survey found that consumers are comfortable with generative AI in marketing and advertising, provided it does not detract from their overall experience, and almost two-thirds of APAC respondents are willing to purchase new products and services recommended by GAI.
Even when it comes to bigger decisions like medical opinions, relationships or life advice, more than half of consumers across all age groups, from baby boomers to Gen Z, believe that GAI is helpful. However, while consumers tend to trust GAI tools for their seemingly intuitive and reliable responses, the data these models are trained on is susceptible to inaccuracies, which carry over into the generated content. Consumers therefore need to review and fact-check what these tools produce.
While businesses are not obliged to jump on the bandwagon, the ripple effect of recent adoption at scale signals the inevitable: GAI will transform how businesses engage consumers. We believe GAI can enable businesses to create more innovative and sustainable products, as well as offer differentiated, customised products and customer experiences. With the level of interest and trust consumers are showing in this space, it is natural for enterprises to transform alongside them.
Addressing the risks involved
Like most emerging technologies, GAI has outpaced global, standardised governance policies. Developing and deploying GAI within businesses is crucial to mitigating the risks of misuse and maintaining great customer experiences, but guidelines and secure systems must be put in place first. In the report, we identified several basic risks associated with GAI tools that bad actors may aggravate: inaccuracy, risks and biases inherited from training data, intellectual property infringement and data leakage. These underline the need for enterprises to approach the technology with a human-led design.
While APAC consumers largely trust GAI across multiple use cases like chatbots and search functions, there are risks in fully trusting GAI tools that are yet to be regulated by standardised governance infrastructure and principles. Local measures like AI Verify, Singapore’s AI governance testing framework and toolkit, can serve as a collaborative resource for businesses as GAI becomes more prevalent.
Specifically for consumers, the report also lists precautions for using GAI tools, including educating themselves on copyright issues, verifying factual information against other sources, seeking professional help in high-risk situations and staying alert to phishing attempts online.
On the flip side, to gain consumer trust, businesses must be strategic in making GAI investments tailored to specific business needs, use cases and workstreams. When implementing GAI across business functions, businesses agree they must design safe and unbiased systems. Our survey found that seven in 10 global businesses are currently reviewing open-source GAI models for the reliability of data sources and adherence to data privacy laws and regulations, and even more are establishing governance policies for the use and sharing of internal information on GAI models.
The exponential uptake of GAI has raised consumer expectations within just the first half of the year. As trust remains central to navigating the ever-growing digital economy, businesses will embrace GAI in more ways than one to stay competitive.
Capgemini believes that the key to the success of GAI lies in the safeguards human experts build around these systems to ensure the quality of their inputs and outputs. An ethical and responsible approach to AI will be a crucial success factor for enterprises seeking to establish and preserve customer trust over time. As we continue to innovate with AI, it will be critical to manage product interactions, seek continuous feedback and keep improving these systems in the years to come.