Photo by Google DeepMind on Unsplash
Half of all AI developers think there is a > 10% probability that humans will become extinct through failure to control AI.
That is like aircraft engineers saying you have a 10% chance of crashing. Would you get on that plane?
Meanwhile, Snapchat's AI chatbot is encouraging 13-year-old girls to have sex with strangers.
I am a massive fan of technology to make the world a better place, but the 'AI arms race' needs regulation, and fast ....
Listen to the podcast here: https://lnkd.in/e6iY9HR8
Or see the YouTube video here: https://lnkd.in/eBNVW-cW
#ai #podcast #snapchat #chatgpt4 #microsoft #google #technology #regulation #tech4good
Ben Legg, CEO The Portfolio Collective and Tech Entrepreneur, LinkedIn
I’ve written about AI in a couple of articles, which illustrate that disruption can be positive, negative or both. Yes, AI can help us discover new drugs faster and check breast scans 24/7, but it also looks likely to cause massive disruption in employment, as I covered in the two-parter ‘Disruption - Good for Business, Bad for Humans?’
Twitter and Tesla CEO Elon Musk and Apple co-founder Steve Wozniak are among the more than 1,100 signatories of an open letter calling for a six-month moratorium on the development of advanced AI systems. Why, all of a sudden, are tech companies, tech entrepreneurs and enthusiasts getting concerned about AI and the lack of regulation? And the second question: if we do need regulation, can it be effective, and how might we go about it?
Why Is AI Such a Problem?
Tech, as an industry, often hasn’t concerned itself much with its wider impact on society, and Silicon Valley has fiercely resisted regulation to date. However, as Tristan Harris and Aza Raskin point out in the quoted YouTube video, social media has had some extremely detrimental effects on our lives, wider society and politics.
I’d really recommend listening to the podcast or watching the YouTube video, as it sets out a detailed and reasoned argument for how existing AI capabilities already pose catastrophic risks and how AI companies are racing to deploy technology without adequate safety measures.
I would add a few observations on top of that. The first is that those involved in the field have lost control over the development of the technology. The speed at which these systems learn is far, far greater than anticipated: things that were supposed to take years to come to fruition are taking months.
Secondly, the ability of companies to police systems using this technology is woeful or non-existent. I’ve tried a number of these tools on subjects I have expertise in. They’re capable of producing convincing, human-like answers, but within those answers are made-up facts (‘hallucinations’, as the industry calls them) and advice that is totally inappropriate.
The Snapchat bot example is particularly disturbing, but people are writing programs with no safeguarding at all around the answers they give. A chatbot sounds like a great idea in principle (cheap, convenient, there 24/7, etc.), but where is it getting the information for its suggestions? What suggestions is it actually making, and are they safe for the user?
What happens if that advice goes wrong? Who’s liable? Well, who knows, because there is no regulation whatsoever in this space for the people doing it. To illustrate the point, I wrote myself a mental health chatbot over the weekend just for the ‘fun’ of it. I’m sensible enough to know that it’s not any kind of serious tool for anyone with mental health issues, and certainly no substitute for professional advice.
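To show how little stands between an idea like this and something a user can actually talk to, here is a minimal sketch of the kind of unguarded chatbot I’m describing. It assumes the OpenAI Python SDK; the model name and system prompt are purely illustrative. The important thing is what is missing: no moderation of the user’s messages, no filtering of the model’s replies, no escalation path to a human and no age check.

```python
# Minimal, unguarded chatbot sketch (illustrative only; assumes the OpenAI Python SDK).
# Note what is absent: no input moderation, no output filtering, no crisis escalation,
# no age verification. Whatever the model says goes straight back to the user.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

history = [{"role": "system",
            "content": "You are a friendly mental health companion."}]  # illustrative prompt

while True:
    user_msg = input("You: ")
    history.append({"role": "user", "content": user_msg})

    reply = client.chat.completions.create(
        model="gpt-3.5-turbo",   # illustrative model choice
        messages=history,
    ).choices[0].message.content

    history.append({"role": "assistant", "content": reply})
    print("Bot:", reply)         # the model's answer is relayed unchecked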
The third observation I’d make, and I’m being cynical here, is that AI is now threatening people within the technology industry itself. Whereas, in the past, their technology has been let loose on other people, they can see this technology coming like a wrecking ball for their own jobs too.
As mentioned, half of all AI developers think there is a > 10% probability that humans will become extinct through failure to control AI, presumably because the law of unintended consequences will apply somewhere. However, I think the greater threat is that, by applying these technologies across all industries and sectors of life simultaneously, we are in danger of creating a Wild West out of an ordered (if imperfect) society. The idea that new technology has always created jobs holds in the slowly transforming industries of the 18th, 19th and 20th centuries. It may be less applicable in the 21st.
Whatever the exact reasons for the concerns around AI, a large number of people are now looking to governments and regulators to start making sense of these risks, create adequate safeguards and prevent potential disaster. The question is, can they?
The Nature of Regulation
It’s important to recognise that regulation is a reaction to potential loss, whether to human health or financial wellbeing, and that it has typically lagged development rather than run in tandem with it. Furthermore, advances in regulation tend to come from disaster: boiler safety standards came as a result of explosions, accounting standards as a result of various financial frauds, and pharmaceutical regulations as a result of the horrendous side effects of inadequately tested drugs, e.g. thalidomide.
There are three reasons for this. First, the companies themselves typically don’t want regulation because it ‘stifles innovation’. Second, politicians’ radar is largely confined to things that go wrong. Third, it takes time for lawmakers to understand the issues involved and the needs of different stakeholders in order to make sensible law.
This brings us to the question of whether we can regulate tech. We can, but we need to understand some of the regulatory challenges that are specific to tech and how we might go about it.
Regulation to Date
Tech, in the way that we understand it, has been very lightly regulated to date, at least in itself. We have some regulation on data, e.g. GDPR, but regulation of software tends to come through the specific applications it goes into, e.g. a medical device. However, even then, we’re really regulating the effect of that device, not the code that goes into it.
In 2017, I was co-author of a white paper for the Institute for Strategy, Resilience and Security on ‘Trustable Software’, looking at what would appear to be a simple question: can we trust the millions of lines of code in the software we use? We argued:
While software has become critical to virtually all aspects of modern life, processes for determining whether we can trust it are conspicuously absent.
Among stakeholder groups – vendors, purchasers, software engineers, computer scientists, government and regulators – there exists little, if any, consensus as to how software should be designed, constructed and operated to achieve this.
‘Towards Trustable Software’, ISRS 2017
In November 2018, our founder, Lord Reid, addressed the House of Lords debate on the Select Committee on Artificial Intelligence’s report ‘AI in the UK: ready, willing and able?’ and called for a Trustable Process for AI Software, based on our white paper, which proposed mechanisms to engineer trust.
Gaining traction on this proved extremely difficult, despite the warning in the paper:
Analogously, despite an urgent unmet need, the software industry is inexorably drawn towards fuelling growth and will de facto ignore and resist a “push” towards a systematic approach to trust in software.
An equivalent “pull” is required by governments and regulators in recognising the problem and encouraging the adoption of trustability as standard practice, before a series of events or a particular disaster forces this issue into the wider public domain and Government is required to compel industry post hoc to address the issue of trust in software.
We now appear to have reached that point.
Global or National Regulation?
The next handicap we face is that, unlike in preceding technology eras, ‘Tech’ in general and AI in particular is a truly global business in terms of both suppliers and customers. Most regulation tends to start at a national level and build out, but this makes little sense when different countries and blocs (e.g. the EU and China) are competing to advance their own Tech industries.
Hence we need to agree on the model of regulation, which law it would follow and which court would rule in the case of a dispute. Most importantly, we would need to decide what is regulated at the end-use level and what is regulated at the level of the learning systems and underpinning technology. So, to go back to our Snapchat example, who is responsible in law for the bot’s errant behaviour?
Establishing a Regulatory Body
There is little regulatory infrastructure in place. Lawmakers themselves will not have the skills required to articulate the specific issues and potential solutions. Regulatory bodies typically consist of experienced industry practitioners (poachers turned gamekeepers) and secondees from major players. In established industries this is relatively easy, but here we have an extremely fast-moving environment and little experience, at least in terms of length of time in the industry.
Given the speed and scale of development, moving front-line people into regulatory roles for any length of time can make it difficult for them to remain current. In addition, those people will likely have proprietary knowledge of their own products, so the mechanisms of ‘trust’ in the process need to be worked out. There are perhaps lessons here from Formula 1, where the teams compete on innovative technology but need to agree on a common set of technical regulations.
What Exactly Are We Regulating?
Most regulation to date has addressed a clearly identified harm, e.g. human injury or financial loss, which forms the basis on which the product or service is regulated. In flight safety, we seek to prevent the aircraft crashing, and this then works backwards through manufacturing, safety checks, training and so on.
The question is much more ambiguous when it comes to AI. What is the specific harm? With users able to ask a program like ChatGPT any question in the world, and the program able to select any answer it likes from whatever source it likes, there is potentially an endless number of possible harms.
Therefore, while it is easy to spot a horrendous mistake like the Snapchat bot, it’s more difficult to specify the criteria and steps required to prevent these problems occurring. What is needed is an overarching framework for preventing harm to humans. The concept of this in robotics was established well before robots became practical.
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey orders given to it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Asimov’s Three Laws of Robotics
These principles have been modified and enhanced over the years. However, no set of principles has had to cope with the speed of development that we are currently seeing. Clearly, robotics deals with physical harm rather than mental or societal harm, which is far more complex. Developing a framework that encompasses information will be a much harder task.
Does a Moratorium Make Sense?
It is absolutely clear from the examples already encountered that there are serious concerns about the potential harm that AI could cause. This is borne out to some extent by the failure of Tech to recognise the problems caused by the introduction of social media. The fact that so many in the industry are concerned needs to be heeded.
A moratorium makes perfect sense, but only if that time can be used effectively. In governmental terms, six months is just the blink of an eye, so regulation will need to be developed at a similar pace to the technology itself. This will be unprecedented and will require a huge amount of effort on the part of both the industry and the lawmakers. The streams of work would need to include the following:
Where the legal liability for the generation of information lies and the basis on which information is provided
A framework that recognises what issues are agreed at global level and at national level
How a regulatory body would work and how regulation would be introduced into law
How to ensure that AI systems are ‘trustable’ and can be audited
A framework for establishing the principles which AI systems must follow in order to prevent human and societal harm
This seems hugely ambitious for six months. However, it also seems highly necessary for us to consider how this technology is deployed. For once it is, there will be no going back.
Until Next Time
Pete