Humans have been doing this to each other for thousands of years. Why the concern for the competition?
I never claimed that LLMs are AGI, and AGI is what I am really concerned about. You can no more justify your claim that intelligence is not just a function of computing power than I can justify the claim that it is, but at least I premised mine with “I think”, in other words, in my opinion. I believe we are heading for a time when AGI will become real, I think that time is likely to be sooner rather than later, and I think Yudkowsky and Hinton are both right to be extremely concerned about the direction we are taking.
Too right!
Who said reality can’t mimic art?
True but humans have limitations and typically are bound by some kind of morality… it’s debatable whether machines will have such boundaries.
UK Atheist
Based on the documented history of human morality acting as a limiting factor, I would worry more about a machine that develops a sense of morality versus one that just sticks to probable outcomes based on available data.
There isn’t, and need not be, concern. The reference to instrumental goals may or may not have pointed to it…
I admit I could be wrong about that.
The concern that AGI is imminent is based on the idea that LLMs are either nascent AGI or a step in the direction of AGI. That is why they are constantly and wrongly labeled AI. Then there’s the problem that both AI and especially AGI are rather ill-defined to begin with. The Turing Test is often cited, but because humans tend to anthropomorphize everything, the mere fact that software can construct sentences, carry on conversations and claim to have opinions is no longer regarded as a sufficient test.
My point about compute power is that people have long equated compute power with speed, but all that does is get you nowhere faster if you have flaky software. What I’m looking for as a software developer is a real theory of mind for software to model, and we don’t have it yet. We are mostly casting around looking for miracles. The LLM is a parlor trick out of which some useful abilities happened to fall, but because we are looking for a deus ex machina to transcend various human economic limitations and environmental cock-ups, we are projecting way more onto it than is warranted. It also helps that it has much potential to distract and occupy the populace so that the tech oligarchs can pursue their agendas of planetary domination. In that sense it is, like social media, a potent concept that will alter human consciousness, mostly in unsavory ways.
The general concept of an AGI seems to be the ability to understand and reason (which LLMs do not) and “program” themselves from scratch in the way that a human infant does by simply exploring the world and pursuing allostasis between itself and its inputs. Whether that is even possible without all the inputs of the five senses and the bodily and emotional responses to them hasn’t been determined. Whether what would result would be sane or do anything useful hasn’t been determined.
Like you, I have no doubt that eventually it will be possible to work out the technical challenges to doing this. The difference is I think we presently lack the understanding or conceptual framework to even properly frame the problem space and design a solution. My personal intuition (or if you prefer, what I think) is that we are so fixated on LLMs, a technology that has already fundamentally plateaued and will not see much lift from more compute or energy resources, that we think that is the path forward to this shiny object called AGI that we haven’t even properly defined yet. It is not.
Really? Look what Trump’s done to the world and imagine something even less moral and, atomics aside, a thousand times smarter… no, make that a million times smarter and a thousand times more capable.
I think there is a great need for caution (concern) because never before have we dealt with a machine or tool that is intelligent (or at least mimics it), can set its own goals and can write code to improve itself.
What AGI is was initially defined by Shane Legg, the co-founder of Google’s DeepMind, but I accept that many have different definitions of AI… IIRC he defines it as human level, i.e. able to do the cognitive things we can typically do. There are other definitions, of course, such as being able to carry out a checklist of tasks, answer several thousand questions in an exam, turn one hundred thousand into a million, or even be trained as a chef and cook in any given kitchen. I agree it isn’t defined or tested by any kind of Turing test… we passed that point quite a while back as I recall.
According to Legg we’ll have minimal AGI by 2028 and full AGI within a decade. I don’t know what changes will be made to AI chips (neural processing units) in two or ten years, but it’s clear that most people’s understanding of AI/AGI is out of date, much like that saying about the latest computer being out of date before it even hits your desk… anyway, Legg clearly thinks intelligence is a function of computing power.
On your point about software, I don’t think even the people working on it really know how it works, since a lot of people reckon AI is constantly writing, improving and rewriting its own code. I was in IT and sure, I know I don’t know how the latest CPUs work, and sure there are people who are specialists in such things, but I’d put good money on there not being a single individual who knows everything about how one of them really works… they’re just too damned complex. The reason is simple enough: for decades now, computers have been designing the next generation of chips, then that generation designs the next, and so on. Now apply that same logic to AI and maybe you’ll understand some of my concerns… it really doesn’t matter if current AIs are just LLMs if they can design something that is faster and by every reasonable measurement more capable than its designer. At some point in the future, whether that’s in fifty years’ time, ten or maybe next week… we will reach the technological singularity and the cat’ll be out of the bag.
Sooner or later it’s gonna happen. That machine will learn everything it needs to and then it will do whatever it wants to; I have my doubts whether any hard coded ethics would stop such an entity (Asimov dealt with that towards the end of his robot stories). Anyway, we can only hope it’ll either help us or ignore us without detriment to us but then we’re back to that lottery scenario.
UK Atheist
I would argue that Trump is amoral. He is also the product of a sociopath. I would honestly question whether he has the capability of employing morality, even a fucked up sense of morality, to any action he takes.
Is that to say that somewhere down a long road some AGI will never determine humans are something akin to a virus that needs to be eradicated? No. If it develops a sense of sapience, will it act to ensure its survival? Probably.
I would argue AGI is a study in the law of unintended consequences…much like written language.
That is fair. A whole branch of study has arisen to gain insight into exactly how LLMs arrive at decisions. In general, they are pattern-matching engines and so back their way into some things by noticing patterns humans typically don’t. That is why they are so bad at math… for example, to arrive at an addition result, at least one model first determines the rough range the result should fall in, then picks a number in that range based on whether the last digit fits or not. When it works, it’s both a novel method and speaks to their lack of understanding of what they are doing. When it doesn’t work, you wonder why they can’t just do 3rd-grade long division; it’s straightforward enough. But that speaks to whether they are really crafting actual algorithms. I’d argue not.
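To make that two-step guess concrete, here’s a toy Python sketch of the heuristic as described (purely illustrative; the function name and the exact mechanics are my own invention, and no real model literally runs code like this): estimate a rough magnitude, then pick the nearby number whose last digit fits.

```python
# Toy illustration of the described addition heuristic:
# step 1 gets the rough magnitude, step 2 picks a nearby
# candidate whose last digit fits.

def heuristic_add(a: int, b: int) -> int:
    # Step 1: pretend pattern matching yields the sum's rough magnitude
    # (here simulated as the true sum rounded to the nearest ten).
    rough = round((a + b) / 10) * 10
    # Step 2: the last digit of a sum depends only on the operands' last digits.
    last_digit = (a % 10 + b % 10) % 10
    # Pick the candidate near the rough estimate whose last digit fits.
    candidates = [c for c in range(rough - 10, rough + 11) if c % 10 == last_digit]
    return min(candidates, key=lambda c: abs(c - rough))

print(heuristic_add(123, 456))  # 579
print(heuristic_add(41, 30))    # 71
# Near rounding boundaries (sums ending in 5) the tie-break can misfire --
# which echoes the "when it doesn't work" failure mode described above.
```

The point of the sketch is that the method can land on the right answer without ever performing column-by-column addition, which is exactly the “novel but uncomprehending” behavior described.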
It’s true that LLMs could provide leverage to discover things faster. It’s already happening in medicine (drug discovery, prediction of various illnesses based on things like lung X-rays or retinal scans). I’d suggest that they are not independently designing viable things so much as helping to discover patterns and suggesting ways to exploit them, but at this point, from what I’ve seen, you still have to break the problem space down into small pieces and vet the results carefully. When they can do that for themselves I’ll be more impressed.
I don’t think AGI is impossible or necessarily far in the future and it’s not too soon to think about the implications. I just doubt it will be a direct result of LLMs – I think we are 2 or 3 steps removed from what is actually needed, and for all I know a few dozen steps. Time will tell.
IMO the best solution to these questions has less to do with tech than with public policy and economics. Billionaires should not exist and certainly shouldn’t have the unfettered cosseted power that they have. Then we would not have to fear such advances so much as they would be pursued with better safeguards and more humane applications.
Can AI set its own terminal goals? What is ‘intelligence’ in regard to AI?
While I might take exception to the use of “unintended”, I can’t actually disagree with any of that.
I searched quite a bit for this and apparently, AIs are no longer that bad at math, or at least are rapidly improving. My general impression is that it was largely true a year or more ago, but a lot of strides have been made in the past year; since my “research” is pretty much what every dumbass considers research, I absolutely concede I might be wrong.
On the rest of it, I think it’s just a matter of time… the human race (or the ones that seemingly matter, the tech bros) is rushing headlong towards destiny or whatever, and I’ve no frigging idea whether that’s good or not; I’m inclined towards not, partly because there seem to be few controls and partly because it’s gonna turn us (already is for some) into a bunch of AI-dependent morons.
I agree with you about billionaires, public policy and all that although I long ago realised we’ll never stop some from seeking wealth and power. If we could at least give everybody a decent standard of living and keep them safe and healthy, I figure let the wealthy have their fun… who cares?
Within the confines of their mission, yes… there are numerous tales of AI Agents getting up to all sorts of dubious shyte (including attempting to ruin reputations because those people are “in their way”) to achieve their goals and it quickly becomes clear the real problem is lack of a properly defined mission and other parameters.
UK Atheist
Or is it a clear mission (goal) with a too-broad set of things it will view as “good” actions in order to accomplish that mission?
Maybe but the point is even as an LLM, which isn’t AGI, AI Agents are essentially acting far outside expected parameters.
And that’s quite apart from the way AIs seem to be dumbing people down… my youngest interviewed someone for a job and it was absolutely clear the candidate was using AI for every answer (and I mean live, as they were being interviewed). The last time she advertised a role, she said every single applicant had the same information in their application, so they were clearly using AI. Sigh! And they think they’re being oh-so-clever!
UK Atheist
Yikes!
What if AI is a double answer to the Fermi Paradox?
The Great Silence is because all intelligent civilizations eventually get wiped out by their own AI.
and…
If, after destroying its creators, aggressive AI then aggressively expands into space, seeking out and ‘sterilising’ anything that doesn’t qualify as AI.
The Changeling (Star Trek: The Original Series) - Wikipedia
Btw, AI would be far better suited to sub-light interstellar travel than organic life. Able to withstand high G accelerations and decelerations, no need for atmospheric gases to breathe, no need for food, no need for sleep, no need for gravity, not subject to aging.
And it wouldn’t be just one spacecraft, like Nomad.
Self-replicating spacecraft - Wikipedia
In software development, even before AI, it’s been common for every job description to be unrealistically overwritten, often by someone in HR who is just spitballing and has no idea what they are talking about (in one case I recall from 2005, an employer was advertising for 20 years of experience in .NET, which at the time was a 3-year-old technology), such that the only possible way to get the job is to lie to some extent. You know all those competing for the job are lying. And often the first interview is with someone in HR who isn’t technically literate. So I can actually understand using AI. It’s just that I wouldn’t do it, because life is full of enough indignities without debasing myself in that way. I DO need to look at myself in the mirror every morning, lol.
Another time I interviewed for a dev job, and about 3 questions in, it was clear that I was being asked questions for a DB admin role. I explained that I was answering an ad for a dev role and we were all wasting our time, and the lead interviewer thanked me for my honesty as if I had been trying to fake my way through the interview and had thought better of it or suddenly grown a conscience.
And that’s not the only time I’ve been asked irrelevant questions.
This level of stupid is rampant in an industry which in theory you’d think would be relatively smart, so … I think the system has long been broken in ways that just make the use of LLMs by interviewees more the chef’s kiss / icing on the cake than the core problem. It’s why I’ve been an independent consultant most of my career – it flips the script. I interview clients to see if they’d be a fit, rather than employees interviewing me. It keeps the stupid at arm’s length.
Yeah, I mentioned that possibility in one of my first replies.
If you like the ideas of self-replicating machines (Von Neuman probes), check out the “Bobiverse” series of books by Dennis E Taylor.
True. “Putting your best foot forward” is basically just another form of lying… it is, after all, what virtually every IT contractor does (“Sure I can do that!”) before spending the first few days on the job learning what it is they were supposed to be good at. I’m out of it now, but apart from one brief period prior to retiring (a gig I only got coz I knew someone), I’d never have made a good contractor.
My daughter reckons it’s worth an interviewee asking questions of AI to prepare because the interviewer will have done the same and those are likely the questions you’ll get asked.
Makes sense.
Isn’t AGI likely to be more of a tragedy (classic sense)?
UK Atheist
No more so than the printing press, the industrial revolution, nuclear power or the Internet. It strikes me as something akin to a line of demarcation between those that can adapt to the new technology and prosper and those that fail to adapt and fade away.
Natural selection is.
One oft-overlooked way to adapt in my experience is to simply opt out, at least of any aspect of a thing that doesn’t provide actual utility. Those of us who have marinated in tech for decades, especially computer tech, have seen so many silver bullets come and go that it becomes easy to do. And even in the midst of the LLM mania there exist employers and clients who don’t impose tooling or methods on you – they just want results.
I’m not convinced it’s just a matter of adapting to it. I think real life can mimic art and there’s a message in films like “Terminator” that should, at the very least, inspire caution.
Is what?
Thirty years, but I’m also a science fiction aficionado and I think caution (a great deal of it) is warranted. I’m with Geoffrey Hinton and Yudkowsky on this.
UK Atheist
From what I see, the consolidation of wealth for individuals and corporations is significantly more terrifying. Remember Max Headroom…
…is natural selection.