Google’s CEO, Sundar Pichai, took to the opinion pages of the Financial Times to get the trillion-dollar company on the front foot of any rules governing AI, promising to be a “helpful and engaged partner” – here’s what we know.
On Monday, the Financial Times published an article indicating the Google chief’s interest in being “clear-eyed about what could go wrong” in the coming development of artificial intelligence, a field of research and engineering in which his company has staked a high-profile position.
“Now there is no question in my mind that artificial intelligence needs to be regulated. It is too important not to,” Pichai writes.
“Companies such as ours cannot just build promising new technology and let market forces decide how it will be used.”
In 2018, the company published a list of AI principles, which Pichai says have helped “avoid bias, test rigorously for safety, design with privacy top of mind, and make technology accountable to people.”
So what is he proposing? Pichai points to existing regulation, such as the European Union’s General Data Protection Regulation (GDPR), as an example of a supra-national foundation.
A “good” framework, he adds, “will consider safety, explainability, fairness and accountability to ensure we develop the right tools in the right ways.”
There’s a lot to chew on. Regulation must exist at a large scale: the GDPR provides a single standard across a continent of 500 million people, whereas the state-by-state approach seen in the California Consumer Privacy Act points to an alarming multiplicity of laws within the same country.
However, the GDPR’s high-stakes regulation of data protection has, according to the Wall Street Journal, helped hand big players such as Google more market power: partners opt for companies with the scale and resources to adapt, neglecting the smaller companies whose innovations should keep the market competitive.
While most people agree that they want safe technologies that operate fairly, explainability and accountability are the interesting watchwords in Pichai’s article.
As The Next Web pointed out in December of last year, AI’s black box problem is an ongoing headache for companies and regulators: incorrect decisions need to be understood, especially if AI is aiding life-changing decisions. However, there’s a big downside: “If the world can figure out how your AI works, it can figure out how to make it work without you.”
Pichai says that Google wants to be a helpful and engaged partner to regulators grappling with the “inevitable tensions and trade-offs”.
This is only an early step in a much longer saga. AI is moving extremely fast, and regulators wish to design policy that will ensure an effective and innovative market for the long term; plenty of observers have proposed that an ongoing expert watchdog over the AI industry makes the most sense in this case, as it can adapt and develop with the field.
Whatever the outcome, Pichai is making the point here that those at the cutting edge, not least Google, are the experts and should be at the centre of developing the eventual system, not merely the recipients of a list of external rules.
Sourced from Financial Times, Google, Wall Street Journal, The Next Web