
-
EU says India's Russia links jeopardise closer ties
-
Ukraine reach BJK Cup semi-finals for first time
-
Benjamin sets up 'historic' hurdles showdown with Warholm and Dos Santos
-
Milan-Cortina bobsleigh track 'surpasses expectations', say Winter Olympics organisers
-
Stocks, dollar calm ahead of expected US rate cut
-
Nvidia CEO disappointed over China chip ban report
-
Portugal's Isaac Nader wins world men's 1,500m gold
-
France launches appeal to acquire Proust's 'madeleine' writings
-
East Timor to scrap MP pensions and SUVs after protests
-
Van Niekerk enjoys second wind in Tokyo after injury nightmare
-
American Moon wins third straight world pole vault gold
-
King gives Trump royal welcome on UK state visit
-
Man Utd post sixth straight annual loss despite record revenues
-
Australian teen Gout Gout revels in world championships debut
-
AI may boost global trade value by nearly 40%: WTO
-
New Zealand star Miller out of Women's Rugby World Cup semi-final
-
Lyles and Gout Gout advance to world 200m semi-finals
-
S.Africa commission begins probe into alleged links between politics and crime
-
PSG women in audacious bid to sign Barca's Putellas
-
Jefferson-Wooden eases into world 200m semis and sets sights on being next Fraser-Pryce
-
Germany's Merz vows 'autumn of reforms' in turbulent times
-
EU says India's Russian oil purchases, military drills hinder closer ties
-
Gold worth 600,000 euros stolen in Paris museum heist
-
Top music body says AI firms guilty of 'wilful' copyright theft
-
Trump gets royal treatment on UK state visit
-
Ostrich and emu ancestor could fly, scientists discover
-
Former boxing world champion Hatton 'excited for the future' before death: family
-
Stocks, dollar calm before expected US rate cut
-
After mass Nepal jailbreak, some prisoners surrender
-
Poison killed Putin critic Navalny, wife says
-
Australia coach expects Cummins to play 'key part' in Ashes
-
Hong Kong leader plans to fast-track border mega-project
-
Ben & Jerry's co-founder quits, says independence 'gone'
-
Erasmus keeps faith with Springbok squad after record All Blacks win
-
Hong Kong leader unveils plan to boost growth with border mega-project, AI push
-
Israel says opening new route for Gazans fleeing embattled city
-
New Zealand's historic athletics worlds a decade in the making
-
Trump to get royal treatment on UK state visit
-
Benfica sack Lage after shock defeat, Mourinho next?
-
Israel says to open new route for Gazans fleeing embattled city
-
Nestle share price slips as chairman follows CEO out the door
-
German suspect in Madeleine McCann case freed from prison
-
US tennis star Townsend apologises for 'crazy' Chinese food post
-
Peru evacuates 1,600 tourists from Machu Picchu amid protest
-
Nepal mourns its dead after anti-corruption protests
-
UK inflation stable ahead of central bank rate call
-
India checks Maoist rebel offer of suspending armed struggle
-
Israel to open new route for Gazans fleeing besieged city
-
Lower shipments to US, China weigh on Singapore August exports
-
Inside the hunt for the suspect in Charlie Kirk's killing

Irregular Raises $80 Million to Set the Security Standards for Frontier AI
Already generating millions in revenue, Irregular partners with leading labs like OpenAI and Anthropic to assess advanced models under real-world threats and define the security frameworks for safe deployment
Already generating millions in revenue, Irregular partners with leading labs like OpenAI and Anthropic to assess advanced models under real-world threats and define the security frameworks for safe deployment
SAN FRANCISCO, CALIFORNIA / ACCESS Newswire / September 17, 2025 / Irregular, the world's first frontier AI security lab, today announced it has raised $80 million in funding led by Sequoia Capital and Redpoint Ventures, with participation from Swish Ventures, as well as from notable angel investors including Wiz CEO Assaf Rappaport, and Eon CEO Ofir Ehrlich. Formerly known as Pattern Labs, Irregular has reached millions in annual revenue. It works side by side with the world's leading AI labs like OpenAI and Anthropic to evaluate how next generation AI models may themselves carry out real world threats - such as antivirus evasions or autonomous offensive actions - and to develop the defenses needed before deployment.
As AI adoption accelerates, the risks are more advanced than most realize. Frontier labs like OpenAI, Anthropic, and Google DeepMind were built to make AI powerful and safe, and Irregular was founded with the mission to make it secure.
The company runs controlled simulations on frontier AI models to test both their potential for misuse in cyber operations and their resilience when targeted by attackers. The company gives AI creators and deployers a secure way to uncover vulnerabilities early and build the safeguards needed.
"Irregular has taken on an ambitious mission to make sure the future of AI is secure as it is powerful," said Dan Lahav, Co-Founder and CEO of Irregular. "AI capabilities are advancing at breakneck speed; we're building the tools to test the most advanced systems way before public release, and to create the mitigations that will shape how AI is deployed responsibly at scale."
Irregular's work already shapes industry standards; the company's evaluations are cited in OpenAI's system cards for GPT-4 o3, o4 mini and GPT-5; the UK government and Anthropic use Irregular's SOLVE framework, with Anthropic using it to vet cyber risks in Claude 4 ; and Google DeepMind researchers recently cited the company in a paper on the evaluation Emerging Cyberattack Capabilities of AI. The company also co-authored a whitepaper with Anthropic presenting a novel approach to using Confidential Computing technologies to enhance the security of AI model weights and user data privacy, and co-authored with RAND a joint seminal paper on AI model theft and misuse, helping shape Europe's policy discussions on AI security and setting a benchmark for the field.
"The real AI security threats haven't emerged yet," said Shaun Maguire, Partner at Sequoia Capital. "What stood out about the Irregular team is how far ahead they're thinking. They're working with the most advanced models being built today and laying the groundwork for how we'll need to make AI reliable in the years ahead.״
About Irregular
Irregular is the first frontier AI security lab to mitigate cybersecurity risks posed by advanced AI models while protecting the models from cyber attacks. By partnering with leading frontier labs like OpenAI and Anthropic, Irregular tests foundation models to test both their potential for misuse in cyber operations and their resilience when targeted by attackers. With deep roots in both AI and cybersecurity, the team is redefining how we secure the next generation of AI. Irregular is building the tools, testing methods, and scoring frameworks that will help organizations deploy AI safely, securely, and responsibly.
Learn more at www.irregular.com
Media contact
Itai Singer, TellNY
[email protected]
SOURCE: Irregular
View the original press release on ACCESS Newswire
O.Johnson--AMWN