A Novelist’s View on the Ethics and Governance Surrounding Artificial Superintelligence

I’m blogging about how and why I’m reinventing myself, from a retired nurse, into a contemporary sci-fi writer. More about me here. Today’s blog is the third in the series and is in two parts. PART ONE provides an update on my progress through the literary world. PART TWO discusses the ethics surrounding the development of artificial superintelligence, one of my key drivers for writing the novel in the first place.

 

PART ONE: THE NOVEL

As far as the update goes, I’ve noticed no further suspicious activity from the secret intelligence service; see the end of my previous blog, The Tipping Point and a Sinister Turn of Events, for my earlier concerns. I’m now resuming normal life, though I have to admit I miss the excitement. All that aside, here’s a recap of where I’m up to with my novel:

I’ve written it and I’m pleased with it. I’ve gone public and outed myself as a writer. I’ve had the manuscript edited, though I suspect there are still some typos, and I’ve learnt how to typeset. I’ve put the book ‘out there’ on Amazon Kindle. I’m not actively marketing it yet as I’m waiting for feedback from my beta readers, but you can buy it on any Amazon site or download a free sample here. I’d be grateful for any honest reviews and feedback. This might all sound arse about face, but Amazon allows you to make small changes to a published novel. You can also re-publish it with larger amendments as long as you call it a second edition. I’m open to suggestions on how to improve.

Over the festive season I went out for dinner with some neighbours, two of whom also happened to be my beta readers. I sat through the first course too afraid to raise the subject of my novel and, because they didn’t raise it either, assumed it was bad news. I reasoned they were probably totally embarrassed on my behalf. I struggled to eat anything, which was a shame considering the price of the soup. My husband kept throwing me sympathetic glances. I began to wonder if his gushing enthusiasm for my writing was simply love wrapped up in misguided loyalty. But then one of them said, “Oh!” in sudden remembrance, “I’ve started your novel. I’ve read about 20% and I’m really enjoying it! I’m sorry I’ve not got back to you, but with Christmas my reading’s been sporadic, which makes it difficult to review how the plot flows. I want to do a good job for you, so I’m going to start it again after Christmas.” Another beta reader had downloaded it but hadn’t started reading due to grandchildren visiting over the New Year. We agreed they would finish reviewing it by early February and we would meet up for lunch to discuss. I can’t begin to tell you how relieved I felt. My Christmas pudding crème brûlée tasted divine after that.

The key lesson here is that authors awaiting feedback are in a different time zone to their beta readers. To me my novel is everything; to them it’s something quite interesting that’s popped up in their busy lives. I’m so grateful they are generously giving some of their time to me. I’ll just have to bite my nails until the feedback lunch.

PART TWO: ETHICS AND GOVERNANCE SURROUNDING ARTIFICIAL SUPERINTELLIGENCE

When I was a senior nurse in the NHS, I held a portfolio on the governance of children’s health services within an NHS organisation. Everything we developed, everything we introduced and delivered, was strictly governed and assessed against a framework of ethical good practice. Rightly so: we were dealing with a high-risk patient group. So when I began my research into the development of artificial superintelligence, one of the biggest leaps in human evolution, I was dumbfounded that no comparable framework existed. Even Stephen Hawking, during an interview with the BBC, said he thought AI “could spell the end of the human race”.

If you want to develop a new consciousness which, with all its childlike vulnerabilities, may well be holding a ticking timebomb, off you go. No one will watch you, stop you or ask what the hell you think you’re playing at. You don’t have to register your intent or apply for a licence, there are few government regulations to comply with and, apart from your funders, you’re accountable to no one. Obviously there is a plethora of knowledgeable young scientists, breastfed on IT, who disagree. They think the frightened elder populace, myself included, are hooked into a non-existent sci-fi fear and misunderstand how AI functions. I’m not pretending to be an expert on such technology, but I am an expert on governance. No doubt individual risk assessments are undertaken for particular AI programs, but what’s needed here is a global risk assessment of superintelligent AI per se. All risk assessment templates, whatever the setting, cater for a scenario in which a given likelihood is very rare but the potential outcome is catastrophic. Every one of these templates designates the rare/catastrophic combination as being at least a medium risk. Such an undertaking would be huge and difficult, but even the attempt would help us recognise and mitigate some of the potential risks.
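For readers unfamiliar with these templates, the rare-but-catastrophic rule can be sketched as a standard likelihood-times-impact matrix. This is a generic illustration only; the exact bands and labels are my own assumptions, not any particular organisation’s template:

```python
# A generic 5x5 risk matrix: score = likelihood x impact, each rated 1-5.
# The banding thresholds below are illustrative assumptions; real templates
# vary, but the rare/catastrophic corner always lands above "low".

def risk_rating(likelihood: int, impact: int) -> str:
    """Classify a risk from likelihood (1 = very rare .. 5 = almost certain)
    and impact (1 = negligible .. 5 = catastrophic)."""
    score = likelihood * impact
    if score >= 15:
        return "high"
    if score >= 5:
        return "medium"
    return "low"

# The corner case discussed above: very rare, but catastrophic.
print(risk_rating(likelihood=1, impact=5))  # medium, never low
```

Even on this crude sketch, a very rare event with a catastrophic outcome scores into the medium band, which is the point: the combination is never allowed to be dismissed as low risk.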

The good news is that this concern is at least being taken seriously in some influential quarters. Nick Bostrom, a professor at the University of Oxford, Director of the Future of Humanity Institute and author of Superintelligence: Paths, Dangers, Strategies, is amongst those leading the debate. With colleagues Allan Dafoe and Carrick Flynn he wrote Policy Desiderata for Superintelligent AI: A Vector Field Approach. This recognises that a bespoke framework, more sophisticated than the commonplace risk assessment, is probably needed given the reach and transformative nature of superintelligent AI. Their paper suggests how humanity might go about implementing AI-related policy and strategies more safely. Another whisper of hope is that in 2018 Canada and France announced plans for an International Panel on Artificial Intelligence (IPAI). This is to have input from global politicians and the scientific community. Its aim is to preempt potential problems. Devindra Hardawar has published an excellent blog about this on Engadget here.

For me, beyond the potential risks to humanity, are the ethical implications of creating a new genus of consciousness. As William Vast, the superintelligent AI in my novel, points out, “If you’re going to be a ‘god’ then surely you work out how you are going to be a ‘good’ god right at the outset, otherwise, don’t bother.” Working in paediatrics I had much exposure to incidents of child abuse. Some of those memories don’t leave you, trust me. During the development of my William Vast character, I found myself thinking of the AI as an infant, a new emerging being. AI consciousness might never mirror human consciousness, but does that mean we can simply disregard it? What are the ethical and legal implications? I’m thinking along the lines of animal or human rights. At what point is a new consciousness deserving of its own protective rights and legislation? Is this even on our radar? It may sound far-fetched, but there was a time when children sweeping chimneys in Victorian England were thought to have no intrinsic value; the practice of slavery was, and is, built on the belief that some humans are less than human; the Holocaust, along with the inhuman experimentation that took place, is another obvious example. In many countries animals continue to be treated appallingly, and then there is the whole debate on animal experimentation.

What are the parameters that dictate whether a consciousness is sentient or not? More importantly, who develops these parameters? We should at least have some globally agreed indicators which help scientists recognise the point at which the philosophical question of sentience should be seriously considered in AI development. In philosophy a sentient being is often considered to be something with the capacity for individual subjective experiences, often referred to as ‘qualia’. According to New Scientist, some AIs are now writing their own code. Could these possibly be individual subjective thoughts, but not as we know them, Jim?

I’ve tried to incorporate all the above debates into my novel, hoping to air them with a wider audience. After all, this is something that is going to profoundly impact every one of us.

PS: As an aside, do you think failing the ‘I am not a robot’ test repeatedly is something to worry about? Asking for a friend.
