AI: amid a safety summit, questions about its present survival remain
Artificial intelligence has captured many people's imaginations, for good or for ill, with visions of dark and bright futures; but some of the bigger questions about the technology remain unanswered, not least what information it can use and how it can pay for itself.
Why artificial intelligence matters
Words, images and, increasingly, video are now quick and cheap to generate thanks to AI programs that threaten not only to upend creative industries but also to cast doubt over the entire information ecosystem. The problem is that these programs are expensive to run and rely on information generated by others.
Moral panics and the history of technology
Speaking from Bletchley Park during the UK summit on AI safety this week, former deputy PM Nick Clegg, now Meta's president of global affairs, urged calm from all sides amid the hype. It is worth noting that Meta has very significant interests in the technology, even if its stance has tended to be less apocalyptic than that of other major figures in the space.
New technologies, he said, “often lead to excessive zeal amongst the advocates and excessive pessimism amongst the critics. I remember the 80s. There was this moral panic about video games. There were moral panics about radio, the bicycle, the internet.” In effect, predictions often don’t turn out exactly as anybody foresees.
Copyright struggles
Research from the News Media Alliance, a trade association of US publishers, suggests that AI developers give more weight to training data from professionally produced news content than to generic internet content, the New York Times reports.
The study compares a number of curated public datasets used to train the large language models that underpin AI chatbots with datasets of generic online content. It concludes that the curated datasets contain five to 100 times more news content.
The research follows many years of campaigning for big advertising companies like Google to pay news organisations more for use of the content that makes up much of the information surfaced on online platforms.
With AI, however, these concerns have kicked into a new gear, because the ‘answers’ provided by generative AI chatbots like OpenAI’s ChatGPT or Google’s Bard don’t necessarily send users to the source of the information used to generate a response, taking away an opportunity to sell ads against a visit.
Many lawsuits are currently in motion, with key elements of some of the highest-profile cases of artists protecting their intellectual property moving forward in the courts.
The rules are now in play
Artificial intelligence was one of the big issues behind this year’s extended Hollywood protests amid increasing legislative interest in some guardrails around the technology. Despite this, the studios’ trade body, the Motion Picture Association, has argued against “inflexible” copyright rules designed to tackle the questions thrown up by AI technology.
Copyright rules, the Association said in a filing, “may be moving toward an inflexible rule that does not properly recognize the extent to which human creativity can be present in a work generated with the use of AI tools.”
In the music world, private companies are trying to move ahead of general rules by exploring a deeper relationship between rights holders and AI companies: reports that YouTube has approached record labels point to a way through the complexity. This story is particularly significant, as it would establish an early mechanism for separating approved training data from protected material, suggesting a way forward for rights holders from record labels to brands.
The problem of profit
Artificial intelligence is incredibly compute-intensive and therefore expensive to run. The issue, as the Wall Street Journal sees it, is that the business model at the end of the development tunnel is far from clear. As hype cools into reality, the flow of cash toward aimless generative AI projects is likely to slow dramatically.
Partly, this is because AI exploded onto the scene only a year ago with the release of the public (free) beta of ChatGPT. For companies looking to implement the technology, however, the actual costs of running these models are high at a time when many are cutting back on cloud spending. The environmental costs are also heavy.
Without obvious and clear business use cases, it has been difficult to turn hopes and expectations into adoption and revenue. Some of the clearest existing use cases haven’t paid the bills: some power users of Microsoft’s GitHub Copilot pay $10 per month but cost the company $80, according to a source close to the matter speaking to the Journal.
Sourced from the New York Times, Wall Street Journal, The Guardian, WARC
SPT