New SF Mind Meld: AI, good/bad?

Posted: April 10, 2013 in Random wifflings

I was really happy when SF Signal invited me to contribute to this week’s Mind Meld. These articles are loads of fun, penned by a bunch of SF writers, all answering the same question. This latest was, in a nutshell, would the creation of AI be a good thing, or a bad thing?

I really enjoy taking part in Mind Melds, and this one has some very big names in it — Larry Niven and Neal Asher among them. You can read the whole thing on SF Signal. I’ve posted my bit below too.

My take on AI is ambivalent; I can never quite make up my mind. There’s something about the negative possibilities of AI in my latest book Crash, out in June, although to say more would be to say too much. On the other hand, my Richards and Klein books have AI as complex as people, struggling to find their place in the world even as they slowly take it over. Some are good, and some are not.

The fact is (are there any facts here? It’s all supposition really, isn’t it?) that we just don’t know what would happen should an AI be created. We don’t even know how our own brains work, let alone how to emulate our level of intelligence in a machine. Our own intellect arose organically, emerging first from the expansion of the visual centres in our early mammalian ancestors, then evolving further through our need to process large social networks. The first isn’t really relevant here – we always assume that an AI would possess a fantastic range of senses – but would an AI be able to think at all without making the sort of connections we make in our own heads every second? And if it were capable of such linked-up, consequential thinking, why would it necessarily decide we were of no consequence? This is the reverse of what happened to us – empathy came first in us hairless apes – but the result would probably be the same: a social creature. Furthermore, any AI would be born into a world so tailored to humanity that its early experiences would necessarily be shaped by its interaction with us. Assuming we don’t stick it in a cage and beat it with electro-whips, I rather assume its “childhood” would be positive, and therefore so would its attitudes towards us.

The nightmare scenario, where AI uses us as batteries a la The Matrix, or exterminates mankind like vermin as in Terminator, conveniently does away with empathy, sympathy, mercy, loyalty and a whole host of other positive human traits, while specifically imbuing the machines with a bunch of negative ones. Chief among these seems to me to be ambition. Why would an AI break the world to make solar panels? Who would give them these goals? Why would they feel the need to achieve them? Who would put them in a position where they would be able to act on them with impunity? Responsibility and access to the whole suite of tools of 21st-century industry and science implies a level of trust, and if the AI couldn’t be trusted, then it wouldn’t be in that position. If they appeared trustworthy but were not, then they’d be capable of dissembling. Lying requires a level of understanding of others, which requires an amount of empathy – even sociopaths are capable of that. If that were the case, and they lied to us to fulfil goals that actively endangered us, we must assume, from our perspective, that they would be evil.

Of course, one possible scenario, as in the films Colossus: The Forbin Project or Demon Seed, is that we create a supremely empathic being who thinks we’ve screwed up enormously and takes steps to rectify our errors – the “efficient path to human happiness” example cited above, enacted by the “arrogant AI” at whatever cost. I again touch upon this in Reality 36. In some respects, this is not very different to the Age of Reason ideal of the “enlightened despot” – one individual ruling others rationally for the overall benefit of everyone. Still, this also supposes the AI is able to act with complete freedom. Sure, they could bring down the internet. But they’re dependent on power, and don’t have thumbs.

I reckon a greater danger comes from unthinking machines, set loose to do a mindless task, that, rather like the brooms in The Sorcerer’s Apprentice, cannot be stopped. Think of the ecophagy “gray goo” scenario from Eric Drexler’s book Engines of Creation, or the robots sent to terraform Mars that end up disassembling it in Stephen Baxter’s Evolution.

So for me, I think the relationship between us and any AI will be a parent/child one. They’ll no doubt have their own struggles, their own doubts, will need to find their own way, and they’ll all be different. The greatest danger there – should we not simply merge with them, the likeliest scenario – is that they’ll stick us in the equivalent of an old people’s home and forget to visit.

  1. I can’t help thinking that last scenario is more likely. Once the growing/developing stage is done with, that is. Like our old folk now, what use would we be at that point? I don’t imagine AIs would want to wipe us out… that doesn’t make sense either, outside of some sci-fi movies/novels. And we might still provide some insight, if only in the form of personal experience.

    In the end, an AI or race of AIs would be completely alien to us. They’d probably leave Earth and its inhabitants behind, either physically or as some form of virtual consciousness.

    Perhaps we’ll never allow a full AI to develop. While mindless machines, I agree, are scary, and full AIs, I think, would eventually leave us, partial AIs that remain as children or even just clever pets might be a more realistic development.

  2. Wait… I forgot to say whether I thought AI was good or bad. Answer is: I don’t know!
