General Discussion
"The race for 'AI Supremacy' is over, at least for now, and the U.S. didn't win." It just became a tie with China.
The sentence in quotes in the thread title is from AI expert and critic Gary Marcus's latest Substack, just this morning. Link and excerpts from that below, and links to other articles and a video.
I've been so focused on politics this past week that I gave almost no attention to AI news, or Marcus's tweets and Bluesky posts. I also missed usonian's post about this on Thursday - https://www.democraticunderground.com/10971911 - in the Open Source and Free Software group here. Mea culpa. I should have posted about this in LBN days ago. Will try to explain why this is a BFD now.
Gary Marcus's Substack about this, which I hope you'll read in its entirety:
https://garymarcus.substack.com/p/the-race-for-ai-supremacy-is-over
-snip-
Realizations about DeepSeek are already reverberating. Altman has already cut prices twice in the last few days, and we already know from his own testimony that OpenAI has been losing money on ChatGPT Pro; as Grady Booch observed on X, now he'll be losing money faster.
Nvidia could soon take a serious hit, too, for two reasons. First, the DeepSeek results suggest that large language models might be trained far more efficiently going forward. Nvidia has basically been getting rich selling (exceptionally well-designed) shovels in the midst of a gold rush, but may suddenly face a world in which people require far fewer shovels.
Second, the DeepSeek results suggest that it is possible to get by without Nvidia's top-of-the-line chips. Devices shipped in armored cars and traded on the black market might suddenly seem like "nice-to-haves" rather than must-haves.
-snip-
From Investor's Business Daily this morning:
https://www.investors.com/market-trend/stock-market-today/dow-jones-futures-meta-tesla-microsoft-buy-points-earnings-due/
Articles:
https://mashable.com/article/deepseek-ai-hands-on
https://venturebeat.com/ai/deepseek-r1s-bold-bet-on-reinforcement-learning-how-it-outpaced-openai-at-3-of-the-cost/
https://www.wired.com/story/deepseek-china-model-ai/
https://www.forbes.com/sites/petercohan/2025/01/26/nvidia-stock-may-fall-as-deepseeks-amazing-ai-model-disrupts-openai/
CNBC video on DeepSeek:

highplainsdem (54,620 posts)
madville (7,571 posts)
Bottom about to fall out I guess, oh well. The rich will just sell now and buy it back cheaper, then run it back up again at some point, and repeat.
blogslug (38,761 posts)
in reply to:
Can someone explain it to me like I'm stupid?
The AI bubble was inflated based on the idea that we need ever-bigger models, trained and run on ever-larger GPUs. A company came along that has undermined that narrative - in ways both substantive and questionable - and now the market has panicked that $200bn got wasted on AI capex.


highplainsdem (54,620 posts)
in a different investment universe.
RJ-MacReady (183 posts)
"Trump loses AI war against China."
highplainsdem (54,620 posts)
Johnny2X2X (22,507 posts)
We have search engines that can organize data a little better; that's not AI.
Renew Deal (83,637 posts)
I don't know where the downloads are coming from, but I doubt they're all in China.
highplainsdem (54,620 posts)
Renew Deal (83,637 posts)
And remains there. Rednote was #1 last week, so we'll see how long this lasts.
Hugin (35,823 posts)
I noticed some things that were directly related to the size of a training set, which may still be applicable.
Personally, I was using some terms to differentiate the effects.
The larger sets, which I called deep, have many bifurcations or branches, the vast majority of which would rarely be traversed because the conditions for them seldom arise. It's on those rare occasions that a generative AI response can result in what has been called nonsense or a hallucination.
The flaw, as I saw it, with the smaller training sets is that they tend to be what I call brittle. They contain mostly the highest-probability branches to be traversed. When a rare condition occurs, instead of hallucinating, they break. I never came up with a solution to these breaks other than to feed a reset back to the trunk; these resets were scripted prompts. I suppose it's possible there are other methods for implementing a partial reset, but I didn't spend any time on it.
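A minimal sketch of what that kind of scripted reset might look like in code, assuming a hypothetical generate() call and a crude heuristic for spotting a break (neither comes from the post above; the specifics are illustrative only):

```python
# Hypothetical sketch of the "scripted reset" idea described above:
# when a small ("brittle") model wanders into a branch it can't handle,
# detect the break and feed a canned prompt that returns it to the trunk.
# The generate() function and the break heuristic are assumptions, not a real API.

RESET_PROMPT = (
    "Let's start over. Ignore the previous exchange and answer only "
    "about the original topic: {topic}"
)

def looks_broken(reply: str) -> bool:
    """Crude heuristic for a 'break': empty or highly repetitive output."""
    if not reply.strip():
        return True
    words = reply.split()
    return len(words) > 10 and len(set(words)) / len(words) < 0.3

def chat_with_reset(generate, history, user_msg, topic):
    """Ask the model; if the reply looks broken, inject a scripted reset and retry once."""
    reply = generate(history + [user_msg])
    if looks_broken(reply):
        reset = RESET_PROMPT.format(topic=topic)
        # Partial reset: keep only the trunk (the reset prompt), drop the broken branch.
        reply = generate([reset, user_msg])
    return reply
```

The only point of the sketch is the control flow: detect the break, drop the broken branch, and re-seed the conversation from the trunk with a scripted prompt.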
Once again, all of these observations about generative AI and its fitness are subjective, by no means objective. Does a smaller training set create a better AI? Who says it's better, and why? I never cease to be amused by the answers to those simple questions.
dalton99a (87,159 posts)
Yavin4 (37,107 posts)
We have lost the future.
kysrsoze (6,261 posts)
I've been worried about the power draw for both this and the crypto-currency global pyramid scheme. President Caligula even suggested building coal plants to serve the extra power "need."
tulipsandroses (6,913 posts)
So will DOGE speak out against this? Eloon has beef with Altman, maybe he will?