Cutting-edge research shows language is not the same as intelligence. The entire AI bubble is built on ignoring it.

www.theverge.com/ai-artificial-intelligence/827…

Archive.ph link

34 Comments

A wise man once said ā€œThe ability to speak does not make you intelligent.ā€

A probabilistic ā€œword calculatorā€ is not an intelligent, conscious agent? Oh noes! šŸ™„šŸ˜…

I’ll bite.

How would you distinguish a sufficiently advanced word calculator from an actual intelligent, conscious agent?

If I can’t meet it and could only interact with it through a device, then I could be fooled, of course.

But then how can you tell that it’s not an actual conscious being?

This is the whole plot of so many sci-fi novels.

Because it simply isn’t. It isn’t aware of anything, because such an algorithm, if it can exist, hasn’t been created yet! It doesn’t ā€œknowā€ anything, because the ā€œitā€ we’re talking about is probabilistic code fed the internet and filtered through the awareness of actual human beings who update the code. If this were a movie, you’d see it from the POV of the LLM: a guy behind it nudging the text back toward something human-sounding whenever it went too far off the rails… but that’s already the reality we live in, and it’s easily checked! You’re thinking of an actual AI, which perhaps could exist one day, but God knows. There is research suggesting consciousness is a quantum process, and philosophically and mathematically it’s arguably non-computational (check Roger Penrose!), so we might still be a bit away from recreating consciousness. 🤷
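Since people keep talking past each other about what ā€œword calculatorā€ means: here’s a minimal sketch of next-token sampling, the core loop these models run. The toy vocabulary and probabilities are made up for illustration, not any vendor’s actual code.

```python
import random

# Toy "language model": for each two-token context, a probability
# distribution over possible next tokens. Real LLMs compute this with
# billions of parameters; these numbers are invented for illustration.
NEXT_TOKEN_PROBS = {
    ("the", "cat"): {"sat": 0.6, "ran": 0.3, "pondered": 0.1},
    ("cat", "sat"): {"on": 0.8, "quietly": 0.2},
    ("sat", "on"): {"the": 0.9, "a": 0.1},
}

def sample_next(context):
    """Pick the next token by weighted random choice: no goals,
    no beliefs, just a draw from a probability distribution."""
    dist = NEXT_TOKEN_PROBS.get(context)
    if dist is None:
        return None  # nothing learned for this context; stop
    tokens, weights = zip(*dist.items())
    return random.choices(tokens, weights=weights)[0]

tokens = ["the", "cat"]
while (nxt := sample_next(tuple(tokens[-2:]))) is not None:
    tokens.append(nxt)
print(" ".join(tokens))  # e.g. "the cat sat on the"
```

Scale that lookup table up to a transformer and the internet, and the loop is the same: condition on context, sample a token, repeat.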

How would you distinguish a sufficiently advanced word calculator from an actual intelligent, conscious agent?

The same way you distinguish a horse with a plastic horn from a real unicorn: you won’t see a real unicorn.

In other words, your question disregards what the article says: you won’t get anything remotely similar to an actual intelligent agent out of these large token models. You need a different approach, one that acknowledges that linguistic competence is not the same as reasoning.

Nota bene: this does not mean ā€œAGI is impossibleā€. That is not what I’m saying. I’m saying ā€œLLMs are a dead end for AGIā€.

Linguists have been saying this over and over, but almost everybody ignored it.

Linguists were divided until recently, to be fair.

The main division was about why language appeared: to structure thought, for communication, or both. But I genuinely don’t think anyone serious would claim that reasoning appeared because of language… or that if you feed enough tokens to a neural network it’ll become smart.

Well, and whether intelligence is required for mastery of language. Not even that long ago, in 2009, my linguistics professor held a forum discussion between the linguistics, informatics, and philosophy departments at my school, where each gave their perspective on whether true mastery of language could exist without intelligence.

Well duh… Most politicians can talk.

CUTTING EDGE RESEARCH SHOWS something everybody already knew and had been saying for years.

Something I was taught in film school 15 years ago was that communication happens when a message is perceived. Whether the message was intended or not is irrelevant. And yet here we are, ā€œcommunicatingā€ with a slightly advanced autocomplete algorithm and calling it intelligent.

This is not really cutting-edge research. These limitations were described philosophically for millennia, and then again mathematically through the various AI summers and winters since 1943.

Monied interests beat science every day.

I keep saying that those LLM peddlers are selling us a brain, when at most they only deliver Wernicke’s + Broca’s area of a brain.

Sure, they are necessary for a human-like brain, but it’s only 10% of the job done, my guys.

LLMs are actually very, very useful for certain things.

The problem isn’t that they lack utility. It’s that they’re constantly being shoehorned into areas where they aren’t useful.

They’re great at surfacing new knowledge on topics where you don’t have a complete picture. You can’t take that knowledge at face value, but a framework you can validate against external sources can be a massive timesaver.

They’re good at summarizing text. They’re good at finding solutions to very narrow and specific coding challenges.

They’re not useful at providing support. They are not useful at detailing specific, technical issues. They are not good friends.

when at most they only deliver Wernicke’s + Broca’s area of a brain.

Not even. LLMs don’t really understand what you say, and their output is often nonsensical babble.

You’re right. It’s more like discussing with an Alzheimer’s-addled brain being coerced into a particular set of vocabulary.

LLMs are simply tools that emulate the communicative function of language, not the separate and distinct cognitive process of thinking and reasoning …

Take away our ability to speak, and we can still think, reason, form beliefs, fall in love, and move about the world; our range of what we can experience and think about remains vast.

But take away language from a large language model, and you are left with literally nothing at all.

The author seems to be making the assumption that an LLM is the equivalent of the language-processing parts of the brain (which, according to the cited research, focus on language specifically while other parts of the brain do reasoning), but that isn’t really how it works. LLMs have to internally model more than just the structure of language, because text contains information that isn’t just about the structure of language. The existence of multimodal models makes this kind of obvious: they train on more input types than just text, so whatever the model is doing internally is more abstract than language alone.

Not to say the research on the human brain they’re talking about is wrong; it’s just that the way they’re trying to tie it to AI doesn’t make any sense.

Took a lot of scrolling to find an intelligent comment on the article about how outputting words isn’t necessarily intelligence.

Appreciate you doing the good work I’m too exhausted with Lemmy to do.

(And for those that want more research in line with what the user above is talking about, I strongly encourage checking out the Othello-GPT line of research and replication, starting with this write-up from the original study authors here.)
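(And if ā€œprobing internal representationsā€ sounds hand-wavy: the technique behind that line of research is a probe, i.e. a small classifier trained on a model’s frozen activations. Below is a minimal sketch of a linear probe with synthetic stand-in data; the shapes and the ā€œsecret directionā€ are invented for illustration, whereas the real studies probe a transformer trained on Othello move sequences.)

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Pretend we've recorded a model's hidden activations at many points in
# many games, alongside the true state of one board square at each point.
# Everything here is synthetic: the "board state" is secretly encoded
# along one direction of the activation space.
rng = np.random.default_rng(0)
hidden_dim, n_samples = 512, 2000

secret_direction = rng.normal(size=hidden_dim)
activations = rng.normal(size=(n_samples, hidden_dim))
square_state = (activations @ secret_direction > 0).astype(int)  # 0 empty, 1 occupied

# A linear probe is just a linear classifier trained on frozen
# activations. If it predicts the board state well, that information is
# (linearly) present in the representation, even though the model was
# only ever trained to predict the next move token.
probe = LogisticRegression(max_iter=1000)
probe.fit(activations[:1500], square_state[:1500])
print("probe accuracy:", probe.score(activations[1500:], square_state[1500:]))
```

High probe accuracy on held-out activations is the evidence those papers use to argue the model built a ā€œworld modelā€ of the board, rather than memorizing surface statistics of move text.)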

Let me grab all your downvotes by making counterpoints to this article.

I’m not saying it isn’t right to bash the fake hype that the likes of Altman and alienberg are generating with their outlandish claims that AGI is around the corner and that LLMs are its precursor. I think that’s 100% spot on.

But the news article is trying to offer an opinion as if it’s a scientific truth, and this is not acceptable either.

The basis for the article is the supposed ā€œcutting-edge researchā€ that shows language is not the same as intelligence. The problem is that they’re referring to a publication from last year that is basically an op-ed, where the authors go over existing literature and theories to cement their view that language is a communication tool and not the foundation of thought.

The original authors do acknowledge that the growth in human intelligence is tightly related to language, yet assert that language is overall a manifestation of intelligence and not a prerequisite.

The nature of human intelligence is a much debated topic, and this doesn’t particularly add to the existing theories.

Even if we accept the authors’ views, one might still question whether LLMs are the path to AGI. Obviously many lead researchers in AI have the same question; most notably, Prof. LeCun is leaving Meta precisely because he has the same doubts and wants to progress his research through a different path.

But the problem is that the Verge article then goes on to conclude the following:

an AI system might remix and recycle our knowledge in interesting ways. But that’s all it will be able to do. It will be forever trapped in the vocabulary we’ve encoded in our data and trained it upon — a dead-metaphor machine. And actual humans — thinking and reasoning and using language to communicate our thoughts to one another — will remain at the forefront of transforming our understanding of the world.

This conclusion is a non sequitur. It generalizes a specific point about whether LLMs can evolve into true AGI into an ā€œAI dumbā€ catchall that ignores even the most basic evidence they themselves cite (like AI systems ā€œsolvingā€ go, or playing chess in a way that no human can even comprehend) and, to top it off, concludes that ā€œit will never be able toā€ in the future.

Looking back at the last 2 years, I don’t think anyone can predict what AI research breakthroughs might happen in the next 2, let alone ā€œforeverā€.

CEOs are just hyping bullshit

AI tech bros are like ā€œwe’re going to build technology so powerful we’ve been warned about it for generations, without any ethics or regulations, to eliminate all of your jobs and make you obsolete while we strip all of your social safety nets, build AI mass surveillance against your 4th Amendment rights, and militarize your streets so you and everyone you love will either die or go to prison for being homeless and work as a slave for us in that prison, and we’re going to use your money for our evil planā€ and we’re all like ā€œmaybe we can just vote blue no matter whoā€ even though they pay them to just pretend to try to do something about this, but they ultimately do nothing because a few magically turn fascist when it matters, so basically we’re just going to do nothing even though we know we’re being marched to our death by psychopaths who are very vocal about their intentions… Am I following the story? Am I wrong?

You’re forgetting that, like all of America’s problems, the solution has been found elsewhere and you’re just not going to do it because… reasons, I guess? Idk. I’m not American. Thank god.

Yeah, watching your loved ones die without being able to access healthcare has been soul-crushing torture for me.

Because what we call intelligence (the human kind) usually is just an emergent property of the wielding of various combinations of first- or second-hand experience by ā€œconsciousnessā€, which itself is…

What we like to call the tip of a huge fucking iceberg of constant lifelong internal dialogues, overlapping and integrating experiences all the way back to the memories (engrams, assemblies, neurons that wired together to represent something), even the ones so old or deep we can’t summon them any longer but which are often still measurable, still there, integrating like Lego bricks with other assemblies.

Humans continuously, reflexively, recursively tell and re-tell our own stories to ourselves all day, and even at night, just to make sense of the connections we made today, how to use them tomorrow, to know how they relate to connections we made a lifetime ago, and how it fits in the larger story of us. That ā€œcontext integration windowā€ absolutely DWARFS even the deepest language model, even though our own organic ā€œneural netā€ is low-power, lacks back-propagation, etc etc, and it is all done using language.

So yes, language is not the same as intelligence (though at some point some would ask ā€œwho can tell the difference?ā€) HOWEVER… The semantic taxonomies, symbolic cognition, and various other mental tools that are enabled by language are absolutely, verifiably required for this gargantuan context integration to take place.

Somebody tell these absolute idiots that AI is *NOT AN F’IN BUBBLE!*

Here’s proof the USD and government bonds are the bubble, from Mark Moss: https://inv.nadeko.net/watch?v=xGoPdHH9PlE

Whataboutism + false dichotomy.

Comments from other communities

But I spek!

The ability to speak does not make you intelligent.

Yeah, my boss can prove that real quick.

I would go further and say that sometimes speaking is a good way to prove a lack of intelligence.

Um…yeah. About ten seconds of critical thinking also shows that language is not the same as intelligence.

But is the average finance bro capable of ten seconds of critical thinking?

Money throws logic out the window 10/10 times, can’t recommend.

It was clear to any serious person from the get go that LLMs only ape intelligence and are a dead end as far as general intelligence is concerned. Trying to discredit this notion this late in the game is akin to wrestling with a pig in the mud. If they want their journalism to make a difference they should be asking why AI boosters like Altman and Amodei are getting away with doing what Neumann and Holmes did but on a much larger scale.

The title is false. The cited paper deals with the connection between language and intelligence; however, a) it does not comment on the current AI bubble, and b) current AI research does not assume language is the same as intelligence. It’s an example of scientists saying something and journalists extrapolating from it to get a story. Reminds me of On Human (and) Nature.

The title is not false. If you actually bothered to read the article, you’d see that the argument being made is that the AI tech companies are selling a vision to their investors that’s at odds with the research. The current LLM based approach to AI cannot achieve general intelligence.

Even the article admits that AI researchers are aware that LLMs are not sufficient. So the title is absolutely false. The article uses research that has very little to do with the subject as a springboard into an opinion piece which itself observes that the premise of the opinion is incorrect.

And once again, what the article is actually talking about is how LLMs are being sold to investors. At this point, I get the impression that you simply lack the basic reading comprehension to understand the article you’re commenting on.

Maybe. Or maybe you lack the basic critical thinking to be able to put the article in context. But since you’ve initiated the ad hominem part of the discussion, I don’t think there’s any point in continuing, so we’ll never know.

I’ve literally been contextualizing the article throughout this whole discussion for you. At least we can agree that continuing this is pointless. Bye.
