Firms and researchers at odds over superhuman AI
Hype is growing from leaders of major AI companies that "strong" computer intelligence will imminently outstrip humans, but many researchers in the field see the claims as marketing spin.
The belief that human-or-better intelligence -- often called "artificial general intelligence" (AGI) -- will emerge from current machine-learning techniques fuels hypotheses for the future ranging from machine-delivered hyperabundance to human extinction.
"Systems that start to point to AGI are coming into view," OpenAI chief Sam Altman wrote in a blog post last month. Anthropic's Dario Amodei has said the milestone "could come as early as 2026".
Such predictions help justify the hundreds of billions of dollars being poured into computing hardware and the energy supplies to run it.
Others, though, are more sceptical.
Meta's chief AI scientist Yann LeCun told AFP last month that "we are not going to get to human-level AI by just scaling up LLMs" -- the large language models behind current systems like ChatGPT or Claude.
LeCun's view appears backed by a majority of academics in the field.
Over three-quarters of respondents to a recent survey by the US-based Association for the Advancement of Artificial Intelligence (AAAI) agreed that "scaling up current approaches" was unlikely to produce AGI.
- 'Genie out of the bottle' -
Some academics believe that many of the companies' claims, which bosses have at times flanked with warnings about AGI's dangers for mankind, are a strategy to capture attention.
Businesses have "made these big investments, and they have to pay off," said Kristian Kersting, a leading researcher at the Technical University of Darmstadt in Germany and AAAI member.
"They just say, 'this is so dangerous that only I can operate it, in fact I myself am afraid but we've already let the genie out of the bottle, so I'm going to sacrifice myself on your behalf -- but then you're dependent on me'."
Scepticism among academic researchers is not total, with prominent figures like Nobel-winning physicist Geoffrey Hinton or 2018 Turing Prize winner Yoshua Bengio warning about dangers from powerful AI.
"It's a bit like Goethe's 'The Sorcerer's Apprentice', you have something you suddenly can't control any more," Kersting said -- referring to a poem in which a would-be sorcerer loses control of a broom he has enchanted to do his chores.
A similar, more recent thought experiment is the "paperclip maximiser".
This imagined AI would pursue its goal of making paperclips so single-mindedly that it would turn Earth and ultimately all matter in the universe into paperclips or paperclip-making machines -- having first got rid of human beings that it judged might hinder its progress by switching it off.
While not "evil" as such, the maximiser would fall fatally short on what thinkers in the field call "alignment" of AI with human objectives and values.
Kersting said he "can understand" such fears -- while suggesting that "human intelligence, its diversity and quality is so outstanding that it will take a long time, if ever" for computers to match it.
He is far more concerned with near-term harms from already-existing AI, such as discrimination in cases where it interacts with humans.
- 'Biggest thing ever' -
The apparently stark gulf in outlook between academics and AI industry leaders may simply reflect people's attitudes as they pick a career path, suggested Sean O hEigeartaigh, director of the AI: Futures and Responsibility programme at Britain's Cambridge University.
"If you are very optimistic about how powerful the present techniques are, you're probably more likely to go and work at one of the companies that's putting a lot of resource into trying to make it happen," he said.
Even if Altman and Amodei are being "quite optimistic" about rapid timescales and AGI emerges much later, "we should be thinking about this and taking it seriously, because it would be the biggest thing that would ever happen," O hEigeartaigh added.
"If it were anything else... a chance that aliens would arrive by 2030 or that there'd be another giant pandemic or something, we'd put some time into planning for it".
The challenge can lie in communicating these ideas to politicians and the public.
Talk of super-AI "does instantly create this sort of immune reaction... it sounds like science fiction," O hEigeartaigh said.