
highplainsdem

(62,070 posts)
Sun Mar 29, 2026, 05:17 PM 9 hrs ago

Behind the Curtain: AI's looming cyber nightmare

Source: Axios

Top AI and government officials tell Axios CEO Jim VandeHei that Anthropic, OpenAI and other tech giants will soon release new models that are scary good at hacking sophisticated systems at scale.

The one to watch: Anthropic is privately warning top government officials that its not-yet-released model — currently branded "Mythos" — makes large-scale cyberattacks much more likely in 2026.

The model allows agents to work on their own with wild sophistication and precision to penetrate corporate, government and municipal systems. It's a hacker's dream weapon.

Jim revealed in his new weekly newsletter for CEOs that one source briefed on the coming models says a large-scale attack could hit this year. Businesses are ripe targets.

-snip-

Read more: https://www.axios.com/2026/03/29/claude-mythos-anthropic-cyberattack-ai-agents



This is the first I've heard of Mythos, though I've posted other threads about AI agents being a real security risk. It's worrisome that Anthropic gave the US government that heads-up...and more worrisome that the dunces in the Trump regime are the last people we'd want dealing with this sort of threat.

The Axios article links to an article here, published two months ago:

https://www.darkreading.com/threat-intelligence/2026-agentic-ai-attack-surface-poster-child

2026: The Year Agentic AI Becomes the Attack Surface Poster Child
Dark Reading asked readers whether agentic AI attacks, advanced deepfake threats, board recognition of cyber as a top priority, or password-less technology adoption would be most likely to become a trending reality for 2026.

Tara Seals, Managing Editor, News, Dark Reading
January 30, 2026

-snip-

Nearly half (48%) of respondents believe agentic AI will represent the top attack vector for cybercriminals and nation-state threats by the end of 2026. It's a decent bet, given that agentic AI continues to gain ground at enterprises of all stripes. They're adopting it to streamline operations, to implement things like predictive maintenance and smart manufacturing, and to keep up competitively in realms like software development — amongst many, many other use cases. Amid the growing exuberance for the semi-autonomous (and highly permissioned) technology is a worry that headlong barreling to join the fray will come at the expense of prioritizing security.

"It's good to see this one topping the charts," says Rik Turner, chief analyst for cybersecurity at Omdia. "The expanded attack surface deriving from the combination of agents' levels of access and autonomy is and should be a real concern. A particular worry here, in my humble opinion, is if we see a rush to adopt agentic that results in developers deploying insecure code. There's already talk of the need to discover what open source model context protocol (MCP) servers are being thrown into the mix by devs keen to deliver on projects by the deadline. This, combined with what seems to be the widespread (nay, wholesale) adoption of vibe coding in 2025 suggests there are a lot of people assembling entirely insecure and vulnerable infrastructure already."

These concerns are exacerbated by the rise of open source AI agents and "shadow AI," which employees might be importing into work environments with no oversight from the security team.

-snip-

"AI raises the stakes for security because AI enables automation and scale, so we have attackers using AI to launcher wider scale attacks to find vulnerabilities," explains Melinda Marks, practice director for cybersecurity at Omdia. "At the same time, organizations are using AI to scale their productivity. We looked to technical innovations in the past to incrementally increase productivity, but now agentic AI and autonomous systems can scale productivity by five times or 10 times. But that also exponentially increases attack surfaces, including access points with non-human identities."

-snip-
7 replies

walkingman

(10,843 posts)
1. I think AI is very dangerous. Just take how it has changed customer service everywhere...
Sun Mar 29, 2026, 05:39 PM
8 hrs ago

there are very few businesses these days where you can actually talk to a real person anymore, unless you are willing to stay on the phone for a very long time listening to BS and elevator music. The companies know this, and they simply view it as an acceptable business practice in order to make more profit.

The trend will accelerate in the next few years, and for consumers in this so-called "modern" world, the choices will disappear.

Government will and does follow the same trend. Once someone is elected, they can do as they please because of the "system". As younger generations begin to view this as normal, it becomes normal.

This shit going on is not normal. There is little or no accountability or ability to control the outcomes....we are suffering the consequences.

lapfog_1

(31,903 posts)
2. Quantum computing is much more of a threat
Sun Mar 29, 2026, 06:02 PM
8 hrs ago

Quantum computers, by leveraging Shor's algorithm, would pose an existential threat to all widely used asymmetric (public-key) encryption systems. These algorithms rely on the mathematical difficulty of factoring large numbers into their prime factors, or of solving discrete logarithm problems, which a powerful quantum computer could solve exponentially faster than any classical computer.

Vulnerable schemes include:
RSA (Rivest-Shamir-Adleman): A widely used asymmetric algorithm that relies on prime factorization.
ECC (Elliptic Curve Cryptography): Another asymmetric method offering similar security to RSA with smaller key sizes, which is also vulnerable to Shor's algorithm.
Diffie-Hellman (DH) and Elliptic Curve Diffie-Hellman (ECDH): Key exchange protocols used to establish secure communication, which would become insecure.
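To make the factoring point concrete, here is a minimal toy sketch (not from the post above) showing why RSA's security rests entirely on the hardness of factoring the public modulus: once n is factored, the private key falls out immediately. The key sizes here are absurdly small for illustration; real RSA uses 2048-bit or larger moduli, which classical trial division cannot touch but which Shor's algorithm on a sufficiently large quantum computer could factor.

```python
def trial_factor(n):
    """Classical factoring stand-in for Shor's algorithm (toy sizes only)."""
    f = 2
    while f * f <= n:
        if n % f == 0:
            return f, n // f
        f += 1
    raise ValueError("n is prime")

# Tiny RSA keypair: p and q are secret; (n, e) is the public key.
p, q = 61, 53
n = p * q                    # public modulus, 3233
e = 17                       # public exponent
phi = (p - 1) * (q - 1)
d = pow(e, -1, phi)          # private exponent (modular inverse, Python 3.8+)

msg = 42
cipher = pow(msg, e, n)      # encrypt with the public key

# An attacker knows only (n, e) and the ciphertext.
# Factoring n recovers phi, hence the private key:
fp, fq = trial_factor(n)
phi_recovered = (fp - 1) * (fq - 1)
d_recovered = pow(e, -1, phi_recovered)
recovered = pow(cipher, d_recovered, n)
assert recovered == msg      # plaintext recovered without the private key
```

The same structure explains the "harvest now, decrypt later" worry: ciphertext recorded today stays breakable the moment factoring becomes cheap.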

I did a lot of the early work on RSA encryption... many of these are still in use.

OC375

(914 posts)
3. So Anthropic, the principled good guys, are warning their new software is set up to be a cyber super weapon?
Sun Mar 29, 2026, 06:39 PM
7 hrs ago

Gee whiz. Thanks for the heads up, assholes.

Raftergirl

(1,856 posts)
4. My son works for a cybersecurity software company and they have been talking about AI irt
Sun Mar 29, 2026, 07:49 PM
6 hrs ago

cybersecurity threats for years. It’s a very dangerous situation.

Response to Raftergirl (Reply #4)

dickthegrouch

(4,513 posts)
6. And yet, business needs to "self-regulate"
Sun Mar 29, 2026, 07:58 PM
6 hrs ago

Because regulation and limits on business are Soooooooo bad.

I call complicity on the part of any business that enables (releases) bad behavior without setting limits on easily predictable maliciousness.

RICO penalties should apply, when a reasonable risk analysis should have shown that bad behavior was predictable.
