Welcome to DU! The truly grassroots left-of-center political community where regular people, not algorithms, drive the discussions and set the standards.

usonian

(23,290 posts)
Tue May 13, 2025, 11:41 PM

AI Isn't Just a Tool--It's a Test

AI is a test not of its intelligence, but of ours.
John Nosta
Updated May 13, 2025 | Reviewed by Kaja Perina
https://www.psychologytoday.com/us/blog/the-digital-self/202505/ai-isnt-just-a-tool-its-a-test

Pithy quotes:
The danger is not in what the AI knows—it "knows" nothing—but in what we assume it knows because it sounds like us.
The machine doesn’t ask to be trusted. We choose to trust it. It doesn’t decide—we do. The real risk isn’t what AI becomes, but what we become when we stop showing up.

Two recent articles point to something subtle but significant unfolding in our relationship with artificial intelligence. In Rolling Stone, writer Miles Klee critiques the growing presence of AI with a cultural skepticism that’s hard to ignore. He paints it as theater—flashy, convenient, and uncomfortably hollow. In contrast, my own post in Psychology Today offers a different but related view: that AI, especially large language models (LLMs), presents what I call cognitive theater—an elegant performance of intelligence that feels real, even when it isn’t. Klee questions the cultural spectacle. I question the cognitive seduction. Both perspectives point to the same deeper truth, one that is as fascinating as it is concerning.

I see it almost every day. Smart, thoughtful people become wide-eyed and breathless when an AI tool mimics something clever, or poetic, or eerily human. There’s often a moment of awe, followed quickly by a kind of surrender.

This isn’t gullibility, it’s enchantment. And I understand it. I’ve felt it too. But part of my job now—part of all of our jobs—is to gently pull people back from that edge. Not to diminish the wonder, but to restore the context. To remind ourselves that beneath the magic is machinery. Beneath the fluency, prediction. And that if we mistake performance for presence, we may forfeit something essential—our own capacity to think with intention.

The Performance of Thought
Today’s AI doesn’t think in any traditional sense. It doesn’t understand what it says or intend what it outputs. And yet, it speaks with remarkable fluency, mimicking the cadence, tone, and structure of our real thoughts. That’s not a bug—it’s the design. Large language models operate through statistical prediction. They draw on enormous datasets to generate text that fits the prompt, the moment, and often the emotion of the exchange.
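As a toy sketch of what "statistical prediction" means here: a bigram model, vastly simpler than a real LLM, picks each next word purely from counts of which word followed which in its training text. The corpus and function names below are illustrative, not from any real system, but the point carries: fluent-sounding output can emerge with no understanding at all.

```python
from collections import Counter, defaultdict

# A tiny "training set" for the toy model.
corpus = ("the machine does not ask to be trusted "
          "we choose to trust the machine").split()

# Count how often each word follows each other word.
followers = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    followers[prev][nxt] += 1

def predict(word):
    """Return the word most often seen after `word` in the corpus."""
    return followers[word].most_common(1)[0][0]

print(predict("the"))  # "machine"
```

Real LLMs replace the word counts with billions of learned parameters and predict over subword tokens in context, but the mechanism is still "most plausible continuation," not comprehension.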

But here’s the catch: the more convincing the performance, the more likely we are to suspend disbelief. We hear intelligence. We project understanding. And over time, the line between real and rendered cognition begins to blur.



Lots more at the link.
https://www.psychologytoday.com/us/blog/the-digital-self/202505/ai-isnt-just-a-tool-its-a-test

Who remembers Eliza?
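For anyone who doesn't: ELIZA (Weizenbaum, 1966) matched keywords against canned response templates, roughly like this sketch (the rules and wording are illustrative, not Weizenbaum's original DOCTOR script). Users confided in it anyway, which is the whole point of the thread.

```python
import re

# Keyword patterns paired with canned reflections, ELIZA-style.
RULES = [
    (re.compile(r"\bI need (.+)", re.I), "Why do you need {0}?"),
    (re.compile(r"\bI am (.+)", re.I),   "How long have you been {0}?"),
    (re.compile(r"\bmy (\w+)", re.I),    "Tell me more about your {0}."),
]

def respond(text):
    """Return the first matching canned response, or a generic prompt."""
    for pattern, template in RULES:
        match = pattern.search(text)
        if match:
            return template.format(*match.groups())
    return "Please go on."

print(respond("I am worried about AI"))
# "How long have you been worried about AI?"
```

No model of the conversation, no memory, no understanding; just pattern and template, yet it felt like presence.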
4 replies

speak easy

(12,593 posts)
1. It would be as much of a mistake to assume that human intelligence
Wed May 14, 2025, 02:28 AM

is the only kind of intelligence as it would be to rely on LLMs as a substitute for the exercise of human intelligence.

usonian

(23,290 posts)
2. "Training Data" scraped from social media? Highest Quality stuff. NOT.
Wed May 14, 2025, 09:21 AM

Here's a blast from the past. All of 4 days ago.

Russia’s ‘Pravda’ Disinformation Network is Poisoning Western AI Models
https://www.enterprisesecuritytech.com/post/russia-s-pravda-disinformation-network-is-poisoning-western-ai-models

A well-funded Moscow-based propaganda machine has successfully infiltrated leading artificial intelligence models, flooding Western AI systems with Russian disinformation, a NewsGuard audit has confirmed. The network—dubbed "Pravda," a nod to the Soviet-era newspaper—has been systematically injecting AI chatbots with false narratives by gaming search engines and web crawlers. The implications are severe: AI models are increasingly echoing Kremlin-backed falsehoods, compromising information integrity at an unprecedented scale.

The AI Disinformation Battlefield
NewsGuard’s audit of 10 leading generative AI tools—including OpenAI’s ChatGPT-4o, Google’s Gemini, and Microsoft’s Copilot—found that the models repeated Pravda’s false narratives 33 percent of the time. This marks a disturbing shift in how disinformation is disseminated: rather than targeting human audiences directly, Moscow’s information warfare machine is poisoning the very data streams that AI models rely upon to generate responses.

John Mark Dougan, an American fugitive turned Kremlin propagandist, laid out this strategy bluntly in a Moscow conference earlier this year: “By pushing these Russian narratives from the Russian perspective, we can actually change worldwide AI.” Dougan’s statement underscores a key objective of the Pravda network: weaponizing AI-generated content to reshape global narratives in Russia’s favor.

How Pravda Corrupts AI Models
Unlike traditional disinformation campaigns that aim to persuade human readers, the Pravda network operates as a laundering operation for Kremlin propaganda. It syndicates misleading content across 150 seemingly independent websites, each optimized for AI and search engine algorithms. This includes fabricated claims about Ukrainian President Volodymyr Zelensky misappropriating military aid and false reports of U.S. bioweapons labs in Ukraine.


More at the link.

Garbage in, Garbage Out.



speak easy

(12,593 posts)
3. Well, you could say Russian disinformation on social media
Wed May 14, 2025, 11:46 AM

has poisoned more than a few supposedly intelligent people too. And in senior positions of government.

hunter

(40,326 posts)
4. Actual human beings frequently output crap indistinguishable from AI-produced crap.
Wed May 14, 2025, 12:18 PM

I do it myself sometimes.

Some of the most terrifying experiences I've had are with medical professionals who are clearly running on autopilot.

It happens to all of us, but memorizing "facts" and regurgitating them in a plausible way when prompted is not intelligence.
