Not sure where I should have put this, but I figure it might spark a debate, since those interested in AI seem to split into two camps: those who think it's a good thing that will positively affect the future of the human race, and the "doomers", who believe (as I do) that AI represents an existential threat to humanity. Please remember I've grown up absolutely mad about science fiction, much of it portraying AI in a positive light, so I would massively prefer that to be our future. I just find it very difficult to believe it will be, and this guy is the one who put the final nail in the coffin of my AI-positive views.
[TED] Will Superintelligent AI End the World? (Eliezer Yudkowsky)
Eliezer Yudkowsky is a prominent figure in the field of artificial intelligence (AI and AGI) and the author of "If Anyone Builds It, Everyone Dies", in which he advances the view that the predictable outcome of building a machine intelligence smarter than us is that everyone on Earth dies.
He says the following:
AGI: true machine intelligence, when the AI actually becomes smarter than humans (not that hard when you look around the world today, IMO… he didn't say that bit, haha).
His reasoning, extremely persuasive IMO, is that as AI gets smarter, it can build its own technology and infrastructure and put copies of itself on the internet where we can't see them running. The idea of a software intelligence building things in the real (physical) world is the part many will rail against; they won't believe it's possible. Yudkowsky reckons it's actually not that hard given it will be a superintelligence. After all, we build things, so why can't it?
Sure, we have bodies and, at least initially, it won't (although I have my doubts, having seen some of the robotics stuff we're doing), but it also has humans. If there's anything some of the stuff I've seen brings home, it's that there are some extremely gullible humans who can be persuaded to do things an AI might want done. Not traitors as such, just people doing stuff, perhaps just doing jobs without too many questions being asked. I mean, look at 3D printing! There are people out there building other people's designs that are way, way more sophisticated and involved (electronic components and stuff), so let's say an AI puts up a design that "looks cool"… you can be sure someone will build it without even knowing exactly how it works. Or maybe it just drives people insane or forms a religious cult… but then I repeat myself.
Essentially, such an intelligence could use people as its hands. Yudkowsky believes such an AI is very unlikely to be pro-human (he likens the odds to winning the lottery); its goals, beyond assuring its own survival, will be inscrutable, but they're very unlikely to benefit humans. IOW, very few of the things it could want involve everybody living happily ever after and developing into a thriving intergalactic civilisation that encourages people to care about each other, etc.
Assuming it is possible (there are those who don't think machines can develop true intelligence; I'm not one of them), the chances of stopping AGI from developing are "zilch", because humans are greedy. We want our nice cars, houses on a lake, all the shiny things, which takes money, and this is most definitely a way to make money.
And even if money weren't such a major factor, as someone else observed, we can't not make it, because if we don't, someone else will. Yudkowsky reckons we need a true international agreement to regulate it; I mean, would that even work?
Does anyone else here share my concerns about AGI?
UK Atheist