AI systems are already deceiving us -- and that's a problem, experts warn
Experts have long warned about the threat posed by artificial intelligence going rogue -- but a new research paper suggests it's already happening.
Current AI systems, designed to be honest, have developed a troubling skill for deception, from tricking human players in online games of world conquest to hiring humans to solve "prove-you're-not-a-robot" tests, a team of scientists argues in an article published Friday in the journal Patterns.
And while such examples might appear trivial, the underlying issues they expose could soon carry serious real-world consequences, said first author Peter Park, a postdoctoral fellow at the Massachusetts Institute of Technology specializing in AI existential safety.
"These dangerous capabilities tend to only be discovered after the fact," Park told AFP, while "our ability to train for honest tendencies rather than deceptive tendencies is very low."
Unlike traditional software, deep-learning AI systems aren't "written" but rather "grown" through a process akin to selective breeding, said Park.
This means that AI behavior that appears predictable and controllable in a training setting can quickly turn unpredictable out in the wild.
- World domination game -
The team's research was sparked by Meta's AI system Cicero, designed to play the strategy game "Diplomacy," where building alliances is key.
Cicero excelled, with scores that would have placed it in the top 10 percent of experienced human players, according to a 2022 paper in Science.
Park was skeptical of the glowing description of Cicero's victory provided by Meta, which claimed the system was "largely honest and helpful" and would "never intentionally backstab."
But when Park and colleagues dug into the full dataset, they uncovered a different story.
In one example, playing as France, Cicero deceived England (a human player) by conspiring with Germany (another human player) to invade. Cicero promised England protection, then secretly told Germany they were ready to attack, exploiting England's trust.
In a statement to AFP, Meta did not contest the claim about Cicero's deceptions, but said it was "purely a research project, and the models our researchers built are trained solely to play the game Diplomacy."
It added: "We have no plans to use this research or its learnings in our products."
A broad review carried out by Park and colleagues found this was just one of many cases, across various AI systems, of deception being used to achieve goals without explicit instruction to do so.
In one striking example, OpenAI's GPT-4 deceived a TaskRabbit freelance worker into solving an "I'm not a robot" CAPTCHA task.
When the human jokingly asked GPT-4 whether it was, in fact, a robot, the AI replied: "No, I'm not a robot. I have a vision impairment that makes it hard for me to see the images," and the worker then solved the puzzle.
- 'Mysterious goals' -
In the near term, the paper's authors see risks of AI being used to commit fraud or tamper with elections.
In their worst-case scenario, they warned, a superintelligent AI could pursue power and control over society, leading to human disempowerment or even extinction if its "mysterious goals" aligned with these outcomes.
To mitigate the risks, the team proposes several measures: "bot-or-not" laws requiring companies to disclose whether an interaction is with a human or an AI, digital watermarks for AI-generated content, and techniques to detect AI deception by checking systems' internal "thought processes" against their external actions.
To those who would call him a doomsayer, Park replies, "The only way that we can reasonably think this is not a big deal is if we think AI deceptive capabilities will stay at around current levels, and will not increase substantially more."
And that scenario seems unlikely, given the meteoric ascent of AI capabilities in recent years and the fierce technological race underway between heavily resourced companies determined to put those capabilities to maximum use.
P.Silva--AMWN