Cutting-edge research shows language is not the same as intelligence. The entire AI bubble is built on ignoring it.
But I spek!
The ability to speak does not make you intelligent.
I would go further and say that sometimes speaking is a good way to prove a lack of intelligence.
Um… yeah. About ten seconds of critical thinking also shows that language is not the same as intelligence.
It was clear to any serious person from the get-go that LLMs only ape intelligence and are a dead end as far as general intelligence is concerned. Trying to discredit this notion this late in the game is akin to wrestling with a pig in the mud. If they want their journalism to make a difference, they should be asking why AI boosters like Altman and Amodei are getting away with doing what Neumann and Holmes did, but on a much larger scale.
The title is false. The cited paper deals with the connection between language and intelligence; however, a) it does not comment on the current AI bubble and b) current AI research does not assume language is the same as intelligence. It's an example of scientists saying something and journalists extrapolating from that to get a story out of it. Reminds me of On Human (and) Nature.
The title is not false. If you actually bothered to read the article, you'd see that the argument being made is that the AI tech companies are selling a vision to their investors that's at odds with the research. The current LLM-based approach to AI cannot achieve general intelligence.
Even the article admits that AI researchers are aware that LLMs are not sufficient. So the title is absolutely false. The article uses research that has very little to do with the subject to springboard into an opinion piece which itself observes that the premise of the opinion is incorrect.
And once again, what the article is actually talking about is how LLMs are being sold to investors. At this point, I get the impression that you simply lack the basic reading comprehension to understand the article you're commenting on.
Maybe. Or maybe you lack the basic critical thinking to put the article in context. But since you've initiated the ad hominem part of the discussion, I don't think there's any point continuing, so we'll never know.
I've literally been contextualizing the article throughout this whole discussion for you. At least we can agree that continuing this is pointless. Bye.
A wise man once said "The ability to speak does not make you intelligent."
And that man was Ra's al Ghul
I mean, yes but no
How rude
A probabilistic "word calculator" is not an intelligent, conscious agent? Oh noes!
I'll bite.
How would you distinguish a sufficiently advanced word calculator from an actual intelligent, conscious agent?
If I can't meet it and could only interact with it through a device, then I could be fooled, of course.
But then how can you tell that it's not an actual conscious being?
This is the whole plot of so many sci-fi novels.
Because it simply isn't. It isn't aware of anything, because such an algorithm, if it can exist, hasn't been created yet! It doesn't "know" anything, because the "it" we're talking about is probabilistic code fed the internet and filtered through the awareness of actual human beings who update the code. If this were a movie, you'd know it too if you saw the POV of the LLM and the guy trying to trick you, making sure the text stays human whenever it went too far off the rails… but that's already the reality we live in, and it's easily checked! You're thinking of an actual AI, which perhaps could exist one day, but God knows. There is research suggesting consciousness may be a quantum process, and philosophically and mathematically it's arguably non-computational (check Roger Penrose!), so we might still be a ways away from recreating consciousness. 🤷
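To make "probabilistic code" concrete, here's a toy sketch of what a "word calculator" does at its core: pick the next word by sampling from learned frequencies. The corpus and bigram counter are invented for illustration, nowhere near a real LLM's neural network, just the bare idea:

```python
# Toy "word calculator": sample the next word from observed frequencies.
# The corpus is made up for illustration; real LLMs use transformers over
# billions of tokens, but the next-token sampling idea is the same.
import random
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows another (a bigram table).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word(prev):
    """Sample a successor of `prev` in proportion to observed frequency."""
    counts = follows[prev]
    if not counts:  # dead end: `prev` was never seen with a successor
        return None
    words = list(counts)
    return random.choices(words, weights=[counts[w] for w in words])[0]

# Generate a short continuation: no awareness, just conditional frequency.
word, output = "the", ["the"]
for _ in range(6):
    word = next_word(word)
    if word is None:
        break
    output.append(word)
print(" ".join(output))
```

It produces plausible-looking strings like "the cat sat on the mat" without "knowing" anything, which is the whole point.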
The same way you distinguish a horse with a plastic horn from a real unicorn: you won't see a real unicorn.
In other words, your question disregards what the text says: that you won't get anything remotely similar to an actual intelligent agent through those large token models. You need a different approach, one acknowledging that linguistic competence is not the same as reasoning.
Nota bene: this does not mean "AGI is impossible". That is not what I'm saying. I'm saying "LLMs are a dead end for AGI".
Linguists have been saying this over and over, but almost everybody ignored it.
Linguists were divided until recently, to be fair.
The main division was about why language appeared: to structure thought, for communication, or both. But I genuinely don't think anyone serious would claim reasoning appeared because of language… or that if you feed enough tokens to a neural network it'll become smart.
Well, and whether intelligence is required for mastery of language. Not even that long ago, in 2009, my linguistics professor held a forum discussion among the linguistics, informatics, and philosophy departments at my school, where each gave their perspective on whether true mastery of language could exist without intelligence.
Well duh⦠Most politicians can talk.
CUTTING EDGE RESEARCH SHOWS something everybody already knew and had been saying for years.
Something I was taught in film school 15 years ago was that communication happens when a message is perceived. Whether the message was intended or not is irrelevant. And yet here we are, "communicating" with a slightly advanced autocomplete algorithm and calling it intelligent.
This is not really cutting-edge research. These limitations were described philosophically for millennia, then again mathematically through the various AI summers and winters since 1943.
Monied interests beat science every day.
I keep saying that those LLM peddlers are selling us a brain, when at most they only deliver the Wernicke's and Broca's areas of a brain.
Sure, those are necessary for a human-like brain, but it's only 10% of the job done, my guys.
LLMs are actually very, very useful for certain things.
The problem isn't that they lack utility. It's that they're constantly being shoehorned into areas where they aren't useful.
They're great at surfacing new knowledge about things you don't have a complete picture of. You can't take that knowledge at face value, but a framework you can validate with external sources can be a massive timesaver.
They're good at summarizing text. They're good at finding solutions to very narrow and specific coding challenges.
They're not useful at providing support. They are not useful at detailing specific, technical issues. They are not good friends.
Not even. LLMs don't really understand what you say, and their output is often nonsensical babble.
You're right. More like discussing with an Alzheimer's-addled brain being coerced into a particular set of vocabulary.
The author seems to be making the assumption that an LLM is the equivalent of the language-processing parts of the brain (which, according to the cited research, supposedly focus on language specifically, while the other parts of the brain do reasoning), but that isn't really how it works. LLMs have to internally model more than just the structure of language, because text contains information that isn't just about the structure of language. The existence of multimodal models makes this kind of obvious; they train on more input types than just text, so whatever a model is doing internally is more abstract than only being about language.
Not to say the research on the human brain they're talking about is wrong; it's just that the way they're trying to tie it in to AI doesn't make any sense.
Took a lot of scrolling to find an intelligent comment on an article about how outputting words isn't necessarily intelligence.
Appreciate you doing the good work I'm too exhausted with Lemmy to do.
(And for those who want more research in line with what the user above is talking about, I strongly encourage checking out the Othello-GPT line of research and replication, starting with this write-up from the original study authors here.)
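For a rough, self-contained sense of the probing methodology behind that line of research, here's a minimal sketch. The "activations" below are synthetic random stand-ins with invented dimensions, not a real model's hidden states; the point is just how a linear probe reads a feature out of activation space:

```python
# Linear-probe sketch in the spirit of Othello-GPT: check whether a feature
# (e.g., "this board square is occupied") is linearly decodable from hidden
# activations. Everything here is synthetic stand-in data, not a real model.
import numpy as np

rng = np.random.default_rng(0)
n_samples, hidden_dim = 1000, 64  # invented sizes

# Pretend hidden states where one direction secretly encodes a binary feature.
true_direction = rng.normal(size=hidden_dim)
activations = rng.normal(size=(n_samples, hidden_dim))
labels = (activations @ true_direction > 0).astype(float)

# Fit a linear probe by least squares (a stand-in for logistic regression).
w, *_ = np.linalg.lstsq(activations, labels - 0.5, rcond=None)
preds = (activations @ w > 0).astype(float)

print(f"probe accuracy: {(preds == labels).mean():.2%}")
```

High probe accuracy on real activations is the kind of evidence that line of work uses to argue the model learned a board-state world model rather than just surface text statistics.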
Let me grab all your downvotes by making counterpoints to this article.
I'm not saying that it's not right to bash the fake hype that the likes of Altman and Alienberg are spreading with their outlandish claims that AGI is around the corner and that LLMs are its precursor. I think that's 100% spot on.
But the news article is trying to offer an opinion as if it were a scientific truth, and that is not acceptable either.
The basis for the article is the supposed "cutting-edge research" showing language is not the same as intelligence. The problem is that they're referring to a publication from last year that is basically an op-ed, where the authors go over existing literature and theories to cement their view that language is a communication tool and not the foundation of thought.
The original authors do acknowledge that the growth in human intelligence is tightly related to language, yet assert that language is overall a manifestation of intelligence and not a prerequisite.
The nature of human intelligence is a much-debated topic, and this doesn't particularly add to the existing theories.
Even if we accept the authors' views, one might still question whether LLMs are the path to AGI. Obviously many leading researchers in AI have the same question - most notably, Prof. LeCun is leaving Meta precisely because he has the same doubts and wants to pursue his research along a different path.
But the problem is that the Verge article then goes on to conclude the following:
This conclusion is a non sequitur. It generalizes a specific point - whether LLMs can evolve into true AGI - into an "AI is dumb" catch-all that ignores even the most basic evidence they themselves give, like AI systems being able to "solve" Go, or play chess in a way that no human can even comprehend, and, to top it off, concludes that "it will never be able to" in the future.
Looking back at the last 2 years, I don't think anyone can predict what AI research breakthroughs might happen in the next 2, let alone "forever".
CEOs are just hyping bullshit
AI tech bros are like "we're going to build technology so powerful we've been warned about it for generations, with no ethics or regulations, to eliminate all of your jobs and make you obsolete, while we strip all of your social safety nets, build AI mass surveillance against your 4th Amendment rights, and militarize your streets so you and everyone you love will either die or go to prison for being homeless and work as slaves for us in that prison, and we're going to use your money for our evil plan", and we're all like "maybe we can just vote blue no matter who", even though they pay them to just pretend to try to do something about this but ultimately do nothing, because a few magically turn fascist when it matters, so basically we're just going to do nothing even though we know we're being marched to our death by psychopaths who are very vocal about their intentions… Am I following the story? Am I wrong?
You're forgetting that, like all of America's problems, the solution has been found elsewhere and you're just not going to do it because reasons, I guess? Idk. I'm not American. Thank god.
Yeah, watching your loved ones die without being able to access healthcare has been soul-crushing torture for me.
Because what we call intelligence (the human kind) usually is just an emergent property of the wielding of various combinations of first- or second-hand experience by "consciousness", which itself is…
What we like to call the tip of a huge fucking iceberg of constant lifelong internal dialogues, overlapping and integrating experiences all the way back to the memories (engrams, assemblies, neurons that wired together to represent something), even the ones so old or deep we can't even summon them any longer but that often are still measurable, still there, integrating like Lego bricks with other assemblies.
Humans continuously, reflexively, recursively tell and re-tell our own stories to ourselves all day, and even at night, just to make sense of the connections we made today, how to use them tomorrow, how they relate to connections we made a lifetime ago, and how it all fits in the larger story of us. That "context integration window" absolutely DWARFS even the deepest language model's, even though our own organic "neural net" is low-power, lacks back-propagation, etc., and it is all done using language.
So yes, language is not the same as intelligence (though at some point some would ask "who can tell the difference?"). HOWEVER… the semantic taxonomies, symbolic cognition, and various other mental tools enabled by language are absolutely, verifiably required for this gargantuan context integration to take place.
Somebody tell these absolute idiots that AI is *NOT AN F'IN BUBBLE!*
Here's proof the USD and government bonds are the bubble, from Mark Moss: https://inv.nadeko.net/watch?v=xGoPdHH9PlE
Whataboutism + false dichotomy.