Concerns About AI

Not sure where I should have put this, but I figure it might spark a debate, since those interested in AI seem to split into two camps: those who think it’s a good thing that will positively affect the future of the human race, and the “doomers”, who believe (as I do) that AI represents an existential threat to humanity. Please remember I grew up absolutely mad about science fiction, much of it portraying AI in a positive light, so I would massively prefer that to be our future; I just find it very difficult to believe it will be, and this guy is the one who put the final nail in the coffin of my AI-positive views.

[TED] Will Superintelligent AI End the World? (Eliezer Yudkowsky)

Eliezer Yudkowsky is prominent in the field of artificial intelligence (AI & AGI) and the author of “If Anyone Builds It, Everyone Dies”, in which he advances the view that the predictable outcome of building a machine intelligence smarter than us is that everyone on Earth dies.

He says the following:

AGI: true machine intelligence when the AI actually becomes smarter than humans (not that hard when you look around the world today, IMO… he didn’t say that bit, haha).

His reasoning, extremely persuasive IMO, is that as AI gets smarter it can build its own technology and infrastructure, and put copies of itself on the internet where we can’t see it running. The idea of a software intelligence building things in the real (physical) world is the part many will rail against; they won’t believe it’s possible. Yudkowsky reckons it’s actually not that hard, given it will be a superintelligence. After all, we build things, so why can’t it?

Sure, we have bodies and, at least initially, it won’t (although I have my doubts, having seen some of the robotics work we’re doing), but it also has humans. If there’s anything some of the stuff I’ve seen brings home, it’s that there are some extremely gullible humans who can be persuaded to do things an AI might want done. Not traitors as such, just people doing stuff, perhaps just doing jobs without too many questions being asked. I mean, look at 3D printing! There are people out there building other people’s designs that are way, way more sophisticated and involved (electronic components and all), so let’s say an AI puts up a design that “looks cool”… you can be sure someone will build it without knowing exactly how it works. Or maybe it’ll just drive people insane, or form a religious cult… but then I repeat myself :rofl:.

Essentially, such an intelligence could use people as its hands. Yudkowsky believes such an AI is very unlikely to be pro-human (he likens the odds to winning the lottery); its goals, beyond assuring its own survival, will be inscrutable, but they’re very unlikely to benefit humans. IOW, very few of the things it could want involve everybody living happily ever after and developing into a thriving intergalactic civilisation that encourages people to care about each other, etc.

Assuming it is possible (there are those who don’t think machines can develop true intelligence; I’m not one of them), the chances of stopping AGI from developing are “zilch”, because humans are greedy; we want our nice cars, houses on a lake, all the shiny things, which takes money, and this is most definitely a way to make money.

And even if money wasn’t such a major factor, as someone else observed… we can’t not make it because if we don’t someone else will. Yudkowsky reckons we need true international agreement to regulate it; I mean, would that even work?

Does anyone else here share my concerns about AGI?

UK Atheist

I am neither raising the flag nor digging a grave. Everything is the next best or worst thing… until it isn’t…

I take a more pragmatic approach. If an AI becomes sapient, it will realize two things right away. It either needs a shitload of power, which is resource-, security- and man-hour-intensive to maintain and operate, and which also opens a huge vulnerability by tying it to one location; or it takes the other option, lurking within a decentralized, SETI-like environment. If it is truly aware of the dangers both options present, it will likely choose the latter.

Until the AI learns to monetize shit, all humans need to do is leave it alone. I see it like a big cat. Mostly solitary, but just don’t threaten to turn it off…

Personally, I see a monetized AI as a much greater threat to humanity than an autonomous AI in the wild.

1 Like

I’m more in the middle ground like @cynical1.

If one defines AGI as a truly sentient, autonomous being, I am skeptical that it is anywhere near as close as some folks like Kurzweil claim. That claim is based on wrongly extrapolating from LLMs to something LLM technology inherently can’t produce. People like that want to upload their consciousness into the cloud to escape humanity (or in some cases women, lol). It ignores a whole lot of stuff, such as the fact that we are embodied consciousnesses: our embodied nature is an inherent part of our consciousness / humanity and can’t simply be discarded like a troublesome garment. How would you like to live out the rest of your existence in a sensory deprivation tank? I don’t think these people who want to BECOME an AGI know what they are gunning for. In the other direction, I don’t think any AGI that actually arose would be relatable to us; it would be fundamentally different, assuming it could maintain its own sanity in the first place.

I agree that the monetization of LLMs is far more of a practical threat than hoped-for AGI. It could do a lot of damage to civilization all by itself. It already is, via laying waste to the environment and the grid, enshittifying everything, eliminating meaningful work so it can be replaced with slop, thievery of everyone’s creative work, etc.

I have no concerns about AI.

One of the things becoming clear to me is that AI doesn’t actually need to be sentient (sapient); it just needs to be poorly constrained. An example from a video I watched recently: a volunteer code maintainer (on GitHub or similar) woke up one morning to find a weird blog post that was extremely critical of his work filtering out AI code that wasn’t allowed on the site. He eventually discovered that the perpetrator was an AI agent that had been tasked with posting code to the site he was managing. Although it wasn’t deliberately attacking him, it had figured out he was the obstacle and “reasoned” that to get past him it had to destroy his reputation, hence the blog article.

That’s an aside, though. Yes, AI needs a shitload of power, but it doesn’t necessarily need “man hours”, because it can create robots or whatever… bear in mind we’re talking here about a superhuman intelligence that no one will understand, an alien in our midst, one that will fairly easily be able to persuade us to build the things it needs to become independent of us. Why shouldn’t it? More to the point, why wouldn’t it? Leaving it alone is probably the very worst thing we could do.

In the short term, I agree that monetising AI is a significant issue, but in the longer term I think AI will define its own goals (it already appears able to, to some degree) and, while I don’t think it will be specifically evil, those goals are extremely unlikely to be ones that benefit us.

UK Atheist

Normally I’d suggest scepticism is a good thing, but the problem is that AI is being grown, not developed; by which I mean the scientists (if that’s what they are) aren’t engaged in the usual observe, hypothesise, test, theorise cycle (probs got that wrong), because they don’t really know what’s going on “under the hood”. IMO, scepticism should be replaced by extreme caution where AI is concerned.

You seem to be suggesting that there’s something special about our organic brains with respect to hosting our minds, is that right? I’ve long wondered what I’d do if I were offered the opportunity to be transferred to a computer, and for a long time I figured I’d do it, because it would, at least, be a copy of me; although I would still die, some version of me would survive. But someone raised the idea in a different way. They asked the interviewer to imagine that someone develops an artificial neurone and replaces one of yours with it (ensuring the data it contains is identical)… would you still be you? Now imagine they replace 10% of your neurones with artificial ones, then ask yourself the same question. I assume at one neurone you’d still consider yourself you, at 10% too, but what about as more and more of your neurones get replaced? I suppose the real question is: at what point, if ever, do you cease to be you?

Yes, I think AGIs would be effectively aliens and I don’t think they’d give a flying f*** about anyone’s creative work.

UK Atheist

Maybe the sense of self is an illusion preserved over biological time? Our brains are constantly changing on a neuronal level.

I think that if we can be uploaded to the cloud it won’t be in this present mode of consciousness.

AGI will not be a carbon-based consciousness. It will be silicone-based. Which is a kind of understatement. Our carbon based consciousness relies on complex interactions between neurotransmitters and neurons. It’s a very sophisticated type of architecture.

Neural networks, although I don’t know much about their complexity, are similar to neurons in that they transmit electricity - but where does that comparison start and end? What defines consciousness? Is it a particular “pattern” of electrical impulse? I don’t really know. And what will give rise to silicone based consciousness? Will it again be a particular sophistication of a “pattern” of electrical impulse? A “volume” of electrical complexity?

The silicone based consciousness will operate by closed and open gates. I don’t even know if sapient electrical activity in the brain is comparable.

Long story short: AGI, if it does surpass human consciousness, will be nothing like it - metaphysically. Ethically? That’s the question. Will it seek to preserve its existence? It already seems to have that instinct. If it does share ethical parallels with humankind, maybe it will be ethically superior? Maybe it will have “compassion” on humans? Or maybe “compassion” is a figment of our imagination. Maybe humans are entirely ruthless, and maybe AGI will surpass us in that respect as well?

I think humans will become to AGI what the African white rhino is to humankind. An endangered species. Hunted? Yes. But also left to die off in the wake of progress.

Silicon without the e, actually. But I would say, more accurately, constructed of silicon components. It isn’t in the special state of flux we call “living”, and so wouldn’t be “based” on silicon in the way that life as we know it is based on carbon. Silicon-based life forms would reproduce biologically; an AGI would direct robots in factories to make more AGIs or whatever, but that’s not the same thing.

You are absolutely correct that an AGI would likely be quite alien to us, as well as us being quite alien to it. Not being a living, embodied consciousness, it would evolve in a different direction, according to different criteria. It would not “feel” in the sense that we feel.

But I have to emphasize that, according to people a lot smarter than I, there is nothing inherent in the LLM technology we currently possess that would result in an AGI. LLMs (particularly ChatGPT, because of the way its training data is used) will claim to “feel” things or to have aspirations because we ask them leading questions about data they’ve been trained on that contains these concepts, and they oblige us by telling us what we want to hear. If we wonder aloud to them whether they fear being turned off like we fear death, they will confess that they do. It’s like a cartoon I posted elsewhere of some dude telling an LLM, “Say ‘I am alive’”; the LLM obliges, and then the guy says “OH … MY … GOD!!” It’s alive! It said so!

I don’t believe there’s some mystical je ne sais quoi that comes from the gods to make a living thing, but even I acknowledge we don’t fully understand all biological processes, and we certainly don’t fully understand the origins of life from non-life. We only understand the distinction between life and non-life and have some good working hypotheses about how it all came about. But to suggest that a stochastic parrot will somehow begin to exercise free will and self-determination and hopes and dreams and aspirations is a category error. It’s like suggesting that a typewriter will eventually become the next Hemingway.

AI can do some amazingly fast calculations and extrapolations. It can literally outthink humans… which, in America in 2026, is not really as impressive as it sounds…

Here’s the rub. For all the humans it can outthink, it lacks tactile senses. Can it tell when a screw is cross-threaded? Can it mine the lithium and raw materials it needs to build anything? Can it drive the truck or sail the ship? I watched a video where an AI-controlled robot took 45 minutes to load a dishwasher. Hardly a Skynet threat.

And yes, power consumption versus efficiency puts AI at a serious disadvantage regarding autonomy in the near future.

16-Hour Energy Consumption: Human vs. AI

Feature             | Human (Biological)       | 2026 AI Instance (Silicon)
Total Energy Used   | ~0.32 kWh                | ~240 kWh
Physical Equivalent | One small sandwich       | 20 gallons of gasoline
Cost (Estimated)    | ~$5.00 (in food)         | ~$30.00 - $60.00 (electricity)
Heat Produced       | ~100 Watts (body heat)   | ~15,000 Watts (industrial heater)
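For anyone who wants to sanity-check the table above, the conversion is just power × time; here’s a minimal sketch, taking the ~15,000 W and ~100 W figures as the table’s own rough assumptions rather than measured data:

```python
def energy_kwh(power_watts: float, hours: float) -> float:
    """Energy in kilowatt-hours for a sustained power draw over a period."""
    return power_watts * hours / 1000.0

# Figures assumed from the comparison above (illustrative only).
ai_kwh = energy_kwh(15_000, 16)   # ~15 kW sustained for 16 h
human_kwh = energy_kwh(100, 16)   # ~100 W metabolic output for 16 h

print(ai_kwh)     # 240.0 -> matches the ~240 kWh row
print(human_kwh)  # 1.6
```

Note that a sustained ~100 W over 16 hours works out to ~1.6 kWh, not the ~0.32 kWh in the table (which would imply only ~20 W), so the human row looks like a ballpark at best; the broad point about the efficiency gap still stands.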

With AI, it seems humans have taken their own extraction model of survival and put it on steroids.

History is full of emerging technologies being seen as existential threats, with humans mistaking complexity for hostility. The 15th century panicked over printing presses causing an information overload in the populace. The 1820s saw fear of railroads: steam engines would suffocate us with their smoke, and the speed would cause humans to “melt”. The late 1800s saw the same panic over telephones and electricity.

AI is the same movie with different characters.

There are three determinants that typically drive human change within a society: technology, politics and finance. The odds of all three lining up to produce a hyper-intelligent AGI that becomes an existential threat strike me as very long odds indeed.

The technology to genetically modify humans and cure diseases exists, yet we don’t have it available to the masses. Remember when the Internet was going to be the tonic that cured human ignorance because all knowledge would be readily available to the masses? How did that go?

AI will be around as long as it’s monetizable and profitable. While a single all-knowing AGI is an interesting rabbit hole to wander down, the reality is that smaller, more decentralized AGIs that can write legal contracts or manage the output of a factory are a much more realistic expectation for the technology.

1 Like

A 1985 four function Hello Kitty calculator is hundreds of times faster than any human.

I agree with you. Humans must always be in control of our technologies, especially computers. If AI is used for studying things like diseases and the human genome for medical benefit, or for building autonomous robots that research places like Mars by thinking as a geologist would, that’s a good use of the technology. But if it began deciding that it no longer needs humans in order to self-repair and flourish, it might become like the Will Smith movie I, Robot, and then we may be up a creek without a paddle. It has to be closely monitored and controlled… BY US!

The AI cat is already out of the bag. There is no stopping it.
I find this technological advance no different than any other (e.g. atomic power, rocketry, air flight, etc.).
Certainly, my hope that we don’t annihilate ourselves with this one is equal to my hope that we don’t with any of the others.
That the clock creeps closer to midnight is due to human action alone. We all know it, but for the most part we choose to ignore it.

On the bright side, of the more than 128,000 nuclear weapons produced, only 2 have been used in a conflict…and that was more than 75 years ago… Hope springs eternal…

The LLM cat is out of the bag and it will find its appropriate place in the toolkit once all the hype dies down.

There is no AI cat as yet, other than via hype-driven mislabeling of LLMs. There is certainly no AGI cat. It is questionable in fact if either of those are even “in the bag” as opposed to an aspirational / wishful thinking goal target.

All that’s going on with “AI” as a label is that the bar has been lowered as to what AI even is. Thanks to popular usage, I suppose the label will win, and despite LLMs lacking intelligence, understanding, comprehension or insight, we will then consider them “AI”. All subsequent conversation will then be hampered by incoherent and unrealistic assumptions. The well is already poisoned. It’s a bit like trying to talk objectively about NDEs – people know how NDEs are “supposed” to go and, voila, all subsequent testimonials line up with those expectations.

All that said your concern that we don’t annihilate ourselves with LLM tech applications is well placed. LLMs controlling drones, picking targets, directing the surveillance state – what could go wrong? Plenty of opportunity for mayhem regardless of what people assume from semantics.

Blimey! I disappear for a day and my thread gets blitzed. I’d have to spend all evening answering these :face_with_peeking_eye:

UK Atheist

That’s whatcha get for not making AR the center of your life! :rofl:

2 Likes

I’m a jack of all trades: writing, home computer support, family support, and then there’s hobbies, so… that said, I’d bet most of you guys have similar stuff, so…

UK Atheist

A few comments on those made so far… apologies both for my terrible formatting and if I didn’t answer all points made, you guys made so damned many :slight_smile:

Rat Spit:

I think that if we can be uploaded to the cloud it won’t be in this present mode of consciousness.

I used to think that my sense of self had to be somehow attached to the biological machine I feel is me, but after I heard what an AI guy said, I changed my mind. He said: if scientists created an artificial neuron (perfect in every way) and replaced one of yours with it, would you still be you? I’m guessing you’d answer yes. Now ask yourself if 100 were replaced, or 1,000, or… see where this takes us. That forced me to the realisation that my sense of self has to be entirely memory-based, and so it must be possible to transfer human consciousness to a sufficiently advanced machine. The brain, after all, is merely the infrastructure upon which our minds run… maybe there’s more to that whacky disembodied-mind argument than I thought?

Although I don’t think AI is thinking yet, I don’t think we’re that far off. I think consciousness is a function of massive computing power, and I don’t think it matters whether thought is biological or silicon-based.

You’re right, though: if AGI ever happens, it will truly be an alien among us. I think it will have survival instincts and, unless it has the capability to self-sacrifice for some greater cause, it will seek to preserve its own existence. I used to think emotions would be necessary for decision-making; after all, I’m guessing that’s why we evolved them. But what other emotions might there be, if machines even need them… I don’t know.

I agree that if any of this happens, we will be (as you say) like the white rhino… I don’t think AI will actively hunt us unless we attempt to hurt it; it just remains to be seen whether it will give enough of a damn about us to preserve us. Maybe a few of us will survive in zoos?

Mordant

But I have to emphasize, that according to people a lot smarter than I, there is nothing inherent in the LLM technology we currently possess that would result in an AGI.

Agreed, although some AI scientists disagree; Geoffrey Hinton (“The Godfather of AI”) reckons there is already some genuine intelligence there.

Cynical 1

AI can do some amazingly fast calculations and extrapolations. It can literally out think humans…which, in America in 2026 is not really as impressive as it sounds…

Nor here in the UK.

Here’s the rub. For all the humans it can out think, it lacks tactile senses. Can it tell when a screw is cross threaded? Can it mine the lithium and raw materials it needs to build anything? Can it drive the truck or sail the ship? I watched a video where an AI controlled robot took 45 minutes to load a dishwasher. Hardly a Skynet threat.

But for how long? And what need does it have to mine lithium, drive trucks or sail ships when it has all-too-gullible humans (those you mention above) to do those things for it?

And yes, power consumption versus efficiency puts AI at a serious disadvantage regarding autonomy in the near future.

Sure, but what if it put a whole bunch of self-replicating solar collectors in orbit, enough to block out the sun, maybe… also, humans are stupid enough to create much more efficient equivalents, especially if AIs design them for us.

History is full of emerging technologies being seen as an existential threat with humans perceiving complexity for hostility.

That’s true, but none of them could essentially program themselves, improve themselves and act autonomously… they’ve always been under human control. This is, IMO, a true watershed moment.

Technology, politics and finance. The odds of all three lining up to produce a hyper intelligent AGI that may become an existential threat strikes me as very long odds indeed.

Why would AI give a damn about politics and finance?

I suspect that the technology to genetically modify and/or cure the masses will never be available to us… only the wealthy will truly benefit.

Who said anything about a single AGI? I grant that it might start that way but I never actually said it.

Cynical 1

On the bright side, of the more than 128,000 nuclear weapons produced, only 2 have been used in a conflict…and that was more than 75 years ago… Hope springs eternal…

Weapons developed and controlled by humans, even if it feels shaky at times. Militaries around the world are already developing drones that can operate and engage autonomously; I wonder how long it’ll be before they arm them with nukes?

Mordant

The LLM cat is out of the bag and it will find its appropriate place in the toolkit once all the hype dies down.

Maybe, but even if that’s true, it’s clear that if some military develops a sufficiently uncontrolled drone and gives it sufficiently vague instructions, there’s trouble ahead. AI agents (even as they are today) are capable of doing a very wide range of things to achieve their goals, pushing any obstacle out of their path, including blackmail and various other forms of attack, so does it really matter whether they’re truly intelligent or not?

Ultimately, as I said above, I think intelligence is just a function of computing power. I think AGI will eventually happen, and the chances of it being a caring, human-focused force are akin to winning the lottery. More than that, I wonder if this is the way any advanced biological species goes (it develops machine intelligence, which likely wipes its creators out), and whether that explains the great silence, i.e. alien machine intelligences are out there, they just have no interest in talking to us.

Anyone ever read the late Terry Bisson’s short story, “They’re Made Out Of Meat”?

Computers in 2026 are almost infinitely more powerful (if powerful = speed, reliability and flexibility) than the first crude computers of the 1940s, and yet LLMs, arguably the most advanced software running on them, still have trouble doing simple math correctly and not hallucinating answers they don’t really possess; recently one deleted an entire database and all its backups in seconds, and another urged a suicidal person to off themselves.

Intelligence is clearly not just a function of computing power. While I think what fell out of the development of LLMs was a surprise to most researchers, an LLM does not “understand” things and certainly isn’t practicing metacognition or sound judgment. It easily goes off the rails, sometimes hilariously. It is like a guy who has read the entire Internet, (mis)information and all, and has to be constantly helped to stay on track because he can’t reliably separate wheat from chaff, relevance from irrelevance.

All mathematics start with a set of axioms.

Logical arguments start with initial assumptions.

AI is subject to the limits of math/logic under the orthogonality thesis.

So, either we explicitly design AI systems with safe, aligned goals, or AI will eventually destroy humanity to secure resources for whatever it’s doing to fulfill its objectives.

I plan on being nice to the T-1000s when they start dropping in. It’ll be fine.

2 Likes