I’ve been reading a lot about Artificial Intelligence recently. It’s a topic that has a way of capturing the mind. Not surprising given the mind-boggling timeline:
- We’re already surrounded by Artificial Narrow Intelligence, software that can do one thing really, really well. Like suggest films, books and music you might enjoy, help drive your car and make sure your house is warm when you get home from work.
- Loads of clever people and well-funded businesses are – right now while you’re sitting there reading this email – working on developing Artificial General Intelligence (AGI), software that’s as good as humans at managing complex ideas, solving abstract problems – and, critically, learning from experience.
- The average estimate for when those clever people will crack AGI is 2040 – that’s 23 years from now. If that seems hard to believe, go back to 1994 and tell your younger self that in 23 years they’ll have a device that’ll give them instant access to a library of millions of books, films and songs, that’ll track their location, monitor all their communications and upload that data for MI5 to keep for eternity, and that can build 3-D virtual worlds where they can interact with real and virtual people. Oh and it’ll be the size of a chequebook. (While you’re there, you might as well tell them that chequebooks won’t exist any more either. If that doesn’t blow their mind…)
- So now we’ve got AGI – and plenty of people believe it’ll happen before 2040, by the way. From this point on, by definition, computers will be able to do everything humans can do.
- In fact, because they can learn from experience and don’t have down-time (“sleep”, “eat”, “get bored”), it will take [insert short span of time – could be a few years, could be a few hours] for this AGI to reach super-intelligence.
- Super-intelligence is incomprehensible to us lowly humans. It’s like us trying to imagine what life must be like for an ant. Except we’re the ant.
- An Artificial Super Intelligence (ASI) could solve ALL THE PROBLEMS. Including death. Or it could basically fuck everything up royally. It depends how they’re coded by those boffins in Step 2. To be honest, I’m pessimistic. Those boffins in Step 2 are still human and are mostly just trying to have fun / get laid / win fame and glory / pay the bills / become masters of the universe. And the idea of controlling an ASI after the fact is laughable. It’s super-intelligent – it’s not going to be fooled by an on/off switch!
- At this point, all bets are off. It’s a coin flip: either we get immortality (some of us at least) or we get oblivion (all of us, definitely).
This isn’t necessarily as doom-laden as it sounds, or as Sci-Fi likes to make out for the sake of a scary, money-making story. As the comprehensively fascinating article by Tim Urban at Wait But Why suggests, we’re all heading for oblivion anyway, so why not take a punt on ASI saving us? ASI would solve not only death but also climate change, the refugee crisis, food shortages and famine, over-population, Donald Trump, all human misery, war and peace, and the colonisation of Mars. It could be quite good.
But this is the big picture. ASI doesn’t currently have much direct day-to-day relevance for me or many people I know. I’m not a computer scientist or an ethical philosopher. I have no influence over the progress or direction of AI. Nevertheless, AI raises a heck of a lot of questions that are of indirect relevance to all of us – and of profound, pretty much inescapable importance to everyone under the age of about 65.
- Am I really happy with how I’m spending my time, given that ASI is more than likely to render all my human efforts absurdly pathetic within my expected lifetime?
- How am I going to get my head around the fact that ASI will be able to do comedy and write a lot better than me? (Not hard, fnar, fnar!)
- What niche can I find that won’t be swallowed up by ASI? (A: There is none.)
- Should I worry about my long-term health, or just about surviving until ASI has solved disease and ageing?
- Do I need to relax about politics and just leave it for ASI to sort out?
- Is there a danger here that I’ll delegate all my motivating worries to ASI and end up in a miserable, purposeless funk? Or is this the ultimate a cappella triumph of Don’t worry, be happy?
- Are we all just setting the scene for ASI? What kind of world do I want to bequeath to ASI?
- What will happen when the one biological imperative that we all share – to procreate and pass on our genes – is rendered futile because ASI means we can live forever?
- Do I really need to invest in a pension that’ll pay out for eternity?
I’ll be 58 when ASI is expected to hit. Old enough to witness the horror and glory, too old perhaps to really exploit its strengths given that I’m a writer, not a computer scientist.
Strangely enough, I do take comfort from the idea that ASI will shortly blow into our lives. It puts things into perspective: I am a tiny blot on the surface of the universe. I am an organism, just doing the things that organisms do. The pressure’s off.