EU puts new AI rules in force as Meta chief questions ultimate capabilities | WARC | The Feed
AI is regularly invoked as something that will inevitably change civilisation, but imaginations and category-level marketing are powerful things: as the EU puts into practice a landmark set of rules around artificial intelligence, Meta’s AI chief suggests that a next step in powerful AI will require a major rethink of current technologies.
Why AI rules matter
Modern generative AI tools based on large language models are impressive, built on the hive mind of the internet’s data. Commercialising the result, however, means putting a company’s logo and reputation on outputs while big questions about copyright are still being settled – and taking on a level of risk that few stewards of brands are willing to assume.
Rules, then, establish a set of agreed constraints or transparency requirements for providers of the technology and will help to make its use – at agencies, brands, and beyond – safer and more predictable.
What’s going on: The AI Act
The European Union’s AI Act, first agreed in December and endorsed this week by member states (the EU Council), will enter into force next month and will carry an impact far beyond the borders of the European bloc. Effectively, any company that uses EU customer data in its platforms will need to comply with the new rules.
At its core, the legislation separates out high- and low-risk uses of AI as well as flatly “unacceptable” uses of the technology, such as social scoring systems, predictive policing, and emotion recognition in schools and workplaces.
- High-risk uses are those that could threaten citizens’ lives or fundamental rights, such as AI in autonomous vehicles or medical devices.
- While the rules come into force next month, different areas of the market will experience different transition periods.
“With the AI Act, Europe emphasizes the importance of trust, transparency and accountability when dealing with new technologies while at the same time ensuring this fast-changing technology can flourish and boost European innovation,” said Belgium’s digitisation minister, Mathieu Michel, in a statement.
Transparency will be key, with major AI providers facing the following requirements:
- Disclosing that the content was generated by AI
- Designing the model to prevent it from generating illegal content
- Publishing summaries of copyrighted data used for training
As with earlier European rules, the elements making headlines around the world are the fines: breaches will attract fines ranging from €7.5m up to €35m, or from 1% to 7% of global turnover – whichever is higher – depending on the violation.
What it means for advertising
The act itself contains relatively few mentions of advertising; aside from the prohibition of subliminal messaging or cognitive manipulation, it is much more concerned with regulating the companies behind LLM platforms and ensuring that they abide by EU copyright law.
More broadly, the law stipulates the labelling of AI-generated content – something a lot of businesses have already been doing as a matter of ethics. Trickier will be the need to mitigate biases, especially when outcomes from those biases may influence future inputs to an AI system.
Beyond LLMs
Just as rules are coming into force, major practitioners are noting the need to look beyond current technologies. Ever the countervailing AI thinker, Meta’s AI chief Yann LeCun told the Financial Times that large language model (LLM) AIs will never achieve human-level intelligence, because they can only answer prompts accurately if they have been given the right training data. In the absence of logic, or any understanding of the physical world, he argues, this makes them “intrinsically unsafe”.
Of course, Meta is also working hard on its own AI systems, even if its path to deployment looks very different from that of its big tech rivals: it is seeking to build out a popular free product first and monetise later. The bigger story is the company’s work, under LeCun, toward next-generation AI with an approach called “world modelling”, based on machines that can actually reason rather than merely appear to.
Sourced from the European Union, FT, WSJ