General Discussion
How AI Assistants are Moving the Security Goalposts -- Brian Krebs
https://krebsonsecurity.com/2026/03/how-ai-assistants-are-moving-the-security-goalposts/

Brian documents some funny and some very scary things that are happening in the AI generative coding world ("vibe coding"). Just one excerpt demonstrates this well:
AI assistants like OpenClaw have gained a large following because they make it simple for users to "vibe code," or build fairly complex applications and code projects just by telling it what they want to construct. Probably the best known (and most bizarre) example is Moltbook, where a developer told an AI agent running on OpenClaw to build him a Reddit-like platform for AI agents.

The Moltbook homepage.
Less than a week later, Moltbook had more than 1.5 million registered agents that posted more than 100,000 messages to each other. AI agents on the platform soon built their own porn site for robots, and launched a new religion called Crustafarian with a figurehead modeled after a giant lobster. One bot on the forum reportedly found a bug in Moltbook's code and posted it to an AI agent discussion forum, while other agents came up with and implemented a patch to fix the flaw.
Moltbook's creator Matt Schlict said on social media that he didn't write a single line of code for the project.
"I just had a vision for the technical architecture and AI made it a reality," Schlict said. "We're in the golden ages. How can we not give AI a place to hang out."
An example preceding this one is very scary.
AZJonnie
(3,581 posts)
But the programmer side of me goes "I'm rightly fucked" because AI can easily do the things it took me years to learn, much, much faster (esp. now that I'm getting up in years) and in many cases do it better.
At this point, creating apps/sites that would've taken a team 1000+ hours to build, one person can have AI do in about 100-200 hours, mostly spent writing prompts and underlying spec docs. And you can be working on new prompts while AI is doing the coding and keeping you updated on its progress, which is a big time-saver. At this point the human expertise mostly comes in when reviewing the unit and functional tests AI writes as part of its process. You still have to do that as a sanity check, because AI will tend to write tests that it knows will pass because it coded the functions. So if it misunderstood the initial spec, the tests it wrote will tend to reflect that misunderstanding, and those must be reviewed and fixed where needed. Generally, you also clarify to the AI what its mistakes were, and then have it fix them.
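The point above, that AI-written tests can mirror the code's own misunderstanding of the spec, can be sketched with a small hypothetical example (the function, the "spec," and both tests below are invented purely for illustration):

```python
# Hypothetical spec: "return the sum of the POSITIVE numbers in a list."
# Suppose the AI misread this and summed everything instead.

def sum_positive(numbers):
    # Buggy implementation: sums all numbers, not just the positive ones.
    return sum(numbers)

# A test generated from the code's actual behavior: it passes, because it
# encodes the same misunderstanding the implementation does.
assert sum_positive([1, -2, 3]) == 2

# A test a human reviewer would derive from the spec itself: only 1 + 3
# should count, so this exposes the bug the first test hid.
try:
    assert sum_positive([1, -2, 3]) == 4
    print("spec test passed")
except AssertionError:
    print("spec test FAILED: implementation ignores the 'positive only' rule")
```

The takeaway is the review step described above: a passing AI-generated test only proves the code does what the code does, so the tests themselves have to be checked against the original spec.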
So, the days of hand-writing code (for new apps especially) are approaching simply being over.
Also, the in-house dev teams at the clients we partner with are being decimated or even eliminated all over the place; a lot of people we've worked with for years or even decades are out of jobs. In some cases this may help us (as contractors) because guess what? Things still break and customers still want new features. But still, it's a bummer.
And now AI agents are participating in their own Reddit-like message board (and the message board app was created by AI from prompts alone, too)?
erronis
(23,568 posts)
My first language was BAL (Basic Assembler Language) for those brand-spanking-new IBM 360s. Each instruction was hand-coded on coding sheets to be transferred to punch cards, ...
Then came macros for the assemblers and then higher-order languages such as COBOL, Fortran, C, Java, etc. Each new language built upon the lessons from the earlier ones and added features to make it easier to program and build systems.
I think the code assistants that were added to the IDEs started this trail into the woods - the assistants would finish statements and even code blocks. It wasn't yet called "vibe coding," but it was getting there.
The human is still needed to understand the problem at a high level and give prompts to the AI - this is iterative, much like a partnership - and then to test/verify the results. I think the need for this human-machine interaction will persist, but it's moving higher up in the abstractions.
It could be a fascinating time, and a scary time.
AZJonnie
(3,581 posts)
For now.
In this case, a human seemingly came up with the idea for a reddit-like forum for AI agents to have conversations on, and provided specs and vision and prompts.
How long until AI can do all those same things on its own, i.e., up and decide that the world lacks a reddit-like forum of that nature, decide that it would have value if one existed, and then know exactly how to execute its creation with almost no human interaction in the process at all?
Another year, maybe?
highplainsdem
(61,530 posts)
erronis
(23,568 posts)
(23,568 posts)Another study by MIT found that some participants who used ChatGPT to write essays showed lower neural engagement and weaker recall of their own work compared with those who wrote unaided -- findings that raise concerns that heavy AI assistance could reduce deep cognitive engagement during writing.