Sundar Pichai’s op-ed in the FT is a call for big tech to be more ethical, and I genuinely agree with the sentiment. For Google, though, it’s a brilliant PR move, a pre-emptive strike. It’s akin to walking into a crowded room, farting, and then asking ‘can someone smell something?’
As a society we are now broaching some of the most important questions of the modern age: the ethical implications of AI and its impact on our everyday lives (driverless cars, home automation, healthcare, energy, and education, to name a few). And the narrative of this debate is being crafted by Google to ensure they appear to sit on the ‘right’ side of the ethical fence.
A questionable past
Historically Google have had little regard for these ethical debates, and on many occasions their practices have been downright illegal: the ‘accidental’ collection of public wi-fi network activity data by its Street View cars; the undisclosed inclusion of a microphone in the Nest Secure home hub; YouTube’s violation of the Children’s Online Privacy Protection Act (COPPA) by collecting data on children without parental consent; Google Assistant recording intimate private audio never intended for capture, then leaking it to third parties; and the Android platform breaching EU GDPR rules by failing to clearly obtain user permission for data collection. And that’s just a few of their greatest hits.
Google have a history of shooting first and asking questions later. They have the money, clout, and political lobbying power to play down these issues even when they get dragged into legal battles; Google recently spent over $21.7m in a single year on lobbying in Washington alone.
AI is Google’s golden goose. They’ve invested billions of dollars in AI systems and research, even to the point of building custom AI processing units, and in 2018 they rebranded their research division as ‘Google AI’.
It’s why statements like these can’t be taken at face value; we need to read between the lines, take a step back, and look at the bigger picture of what’s at stake. User data is Google’s oil, and they’re not going to stop collecting and refining it for their paying customers to use.
Setting the ground rules
Google clearly know there will be regulation in the AI space; they just want to make sure it works in their favour.
In the FT op-ed Pichai writes that “regulation can provide broad guidance while allowing for tailored implementation in different sectors”. It’s pretty clear regulations will have to be domain- and application-specific, given the vast range of uses to which AI tools can be applied. Codifying such complex legislation will inevitably leave loopholes and gaps that are best known to, and most readily exploited by, its architects. Furthermore, it’s unlikely government will be able to patch these leaks and catch up with Silicon Valley until it’s far too late.
He goes on to highlight how Google have already crafted a framework of ethical AI principles: “These guidelines help us avoid bias, test rigorously for safety, design with privacy top of mind, and make the technology accountable to people. They also specify areas where we will not design or deploy AI, such as to support mass surveillance or violate human rights.” At this point I’ll just point out that Google have a relatively cosy relationship with the NSA, as documents from the Snowden leaks have already confirmed.
Drafting guidelines for AI systems, let alone policing them, will be a somewhat futile task: commercial software systems are generally opaque and largely closed-source, AI algorithms are closely guarded intellectual property, and complex AI systems can exhibit emergent behaviour that is difficult to clearly define and understand, even for their authors. Past behaviour tells us that organisations like Google are more likely to push the envelope as far as possible until they cross boundaries too big to ignore.
The AI revolution will change the world
Google could genuinely do great things with the power they wield: careful analysis of healthcare data could help save millions of lives, and deploying AI could enable us to find patterns in data that conventional approaches cannot. But we need safeguards to ensure this type of personal information is neither sold on nor used to compromise user privacy without a user’s clear consent (and I don’t mean a 100-page terms-and-conditions disclaimer in fine-print legalese). We need more transparency if organisations are going to co-opt this data from us, and we need them to be held accountable when they breach this trust.
The fact this op-ed even exists shows that Google want to get in on the ground floor of the wider debate. When regulations are set and heavily influenced by the same organisations that stand to benefit from them, that’s the fox guarding the henhouse.
I expect we’ll see more statements like this from leaders of the Silicon Valley giants whose core revenue relies on the collection of personal data.
We must remember that unchecked power is open to abuse, even when it’s exercised under the guise of a guardian. We absolutely need oversight for these new technologies, but we can’t let those most likely to be scrutinised by them set the rules. That wouldn’t be tethical now, would it?