Generative AI's most prominent skeptic doubles down
Two and a half years after ChatGPT rocked the world, scientist and writer Gary Marcus remains generative artificial intelligence's great skeptic, offering a counter-narrative to Silicon Valley's AI true believers.
Marcus became a prominent figure of the AI revolution in 2023, when he sat beside OpenAI chief Sam Altman at a Senate hearing in Washington as both men urged politicians to take the technology seriously and consider regulation.
Much has changed since then. Altman has abandoned his calls for caution, instead teaming up with Japan's SoftBank and funds in the Middle East to propel his company to sky-high valuations as he tries to make ChatGPT the next era-defining tech behemoth.
"Sam's not getting money anymore from the Silicon Valley establishment," and his seeking funding from abroad is a sign of "desperation," Marcus told AFP on the sidelines of the Web Summit in Vancouver, Canada.
Marcus's criticism centers on a fundamental belief: generative AI, the predictive technology that churns out seemingly human-level content, is simply too flawed to be transformative.
The large language models (LLMs) that power these capabilities are inherently broken, he argues, and will never deliver on Silicon Valley's grand promises.
"I'm skeptical of AI as it is currently practiced," he said. "I think AI could have tremendous value, but LLMs are not the way there. And I think the companies running it are not mostly the best people in the world."
His skepticism stands in stark contrast to the prevailing mood at the Web Summit, where most conversations among 15,000 attendees focused on generative AI's seemingly infinite promise.
Many believe humanity stands on the cusp of achieving superintelligence or artificial general intelligence (AGI), technology that could match and even surpass human capability.
That optimism has driven OpenAI's valuation to $300 billion, an unprecedented level for a startup, with billionaire Elon Musk's xAI racing to keep pace.
Yet for all the hype, the practical gains remain limited.
The technology excels mainly at coding assistance for programmers and text generation for office work. AI-created images, while often entertaining, serve primarily as memes or deepfakes, offering little obvious benefit to society or business.
Marcus, a longtime New York University professor, champions a fundamentally different approach to building AI -- one he believes might actually achieve human-level intelligence in ways that current generative AI never will.
"One consequence of going all-in on LLMs is that any alternative approach that might be better gets starved out," he explained.
This tunnel vision will "cause a delay in getting to AI that can help us beyond just coding -- a waste of resources."
- 'Right answers matter' -
Instead, Marcus advocates for neurosymbolic AI, an approach that attempts to rebuild human logic artificially rather than simply training computer models on vast datasets, as is done with ChatGPT and similar products like Google's Gemini or Anthropic's Claude.
He dismisses fears that generative AI will eliminate white-collar jobs, citing a simple reality: "There are too many white-collar jobs where getting the right answer actually matters."
This points to AI's most persistent problem: hallucinations, the technology's well-documented tendency to produce confident-sounding mistakes.
Even AI's strongest advocates acknowledge this flaw may be impossible to eliminate.
Marcus recalls a telling exchange from 2023 with LinkedIn founder Reid Hoffman, a Silicon Valley heavyweight: "He bet me any amount of money that hallucinations would go away in three months. I offered him $100,000 and he wouldn't take the bet."
Looking ahead, Marcus warns of a darker consequence once investors realize generative AI's limitations. Companies like OpenAI will inevitably monetize their most valuable asset: user data.
"The people who put in all this money will want their returns, and I think that's leading them toward surveillance," he said, pointing to Orwellian risks for society.
"They have all this private data, so they can sell that as a consolation prize."
Marcus acknowledges that generative AI will find useful applications in areas where occasional errors don't matter much.
"They're very useful for auto-complete on steroids: coding, brainstorming, and stuff like that," he said.
"But nobody's going to make much money off it because they're expensive to run, and everybody has the same product."