If this is absolutely true, this news could run even deeper than the DeepSeek news, and it could bring a gigantic, chaotic whipsawing event to all of AI as we know it..... Someone experimented with a 1997 processor and showed that just 128 MB of RAM is enough to harness the power of AI. https://farmingdale-observer.com/20...-of-ram-is-enough-to-harness-the-power-of-ai/
Get real. You really believe 128 MB of RAM has any use in modern computing? You can't even load a decent Linux kernel with that little RAM. DeepSeek was itself a mirage; they were using smuggled H100s from Singapore. You really expect the Chinese to tell you the truth? People on ET will believe anything written on the net. Just like everyone was sure the hyperscalers would magically make a better GPU than Nvidia, you know, the company that created the industry and has a 25-year head start on everyone.
Always read such news with a pinch of salt; then you will never be conned or scammed. And you could even con the conman and scam the scammer.
So basically more media lies... Dang it. Guess they will have to pay $50,000 a pop for those Blackwells.
Running AI models vs. training them requires very different amounts of compute power. The article says the LLM they ran had 260,000 parameters. I have no idea how useful such a small LLM would be, so I asked the AI:

A language model with 250,000 parameters is extremely small by modern standards and would generally not be considered "good" for most real-world NLP tasks. Here's why: modern LLMs like GPT-3, GPT-4, Claude, or LLaMA models typically have billions (and even trillions) of parameters. A model with 250k parameters is more like a toy model, useful for:
- Educational purposes
- Testing architectures
- Running on extremely low-resource environments (e.g., microcontrollers)
- Very simple or narrow tasks (like basic text classification, rule-based chatbots)
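To put that parameter count in perspective, here's a quick back-of-the-envelope sketch in Python. It's just the standard parameters-times-bytes arithmetic; the 7B model used for comparison is my own illustrative assumption, not something from the article.

```python
# Rough memory footprint of model weights at different precisions.
# Formula: parameters * bytes_per_parameter.

def weight_memory_mb(num_params: int, bytes_per_param: float) -> float:
    """Return approximate weight storage in megabytes (decimal MB)."""
    return num_params * bytes_per_param / 1e6

tiny_model = 260_000         # the model from the article
big_model = 7_000_000_000    # an assumed 7B-class "real" LLM, for scale

for name, params in [("260k toy model", tiny_model), ("7B model", big_model)]:
    fp32 = weight_memory_mb(params, 4)   # 32-bit floats
    int8 = weight_memory_mb(params, 1)   # 8-bit quantized
    print(f"{name}: ~{fp32:,.1f} MB in fp32, ~{int8:,.1f} MB in int8")

# The 260k-parameter model is about 1 MB of weights in fp32, so it fits in
# 128 MB of RAM with room to spare; a 7B model needs ~28,000 MB in fp32
# (~7,000 MB even at int8), which is a completely different hardware class.
```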
You can even run DeepSeek on a Raspberry Pi, as shown in the video below. People have run AI engines on many different low-end processors; it does not mean they are very effective or fast. This is nothing more than a hobbyist trend.
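For the curious, the hobbyist setups look roughly like this: load a heavily quantized GGUF model with llama-cpp-python and generate tokens on the CPU. The model filename and parameter values below are assumptions for illustration, not a specific recommendation, and on a Pi-class board you should expect it to be slow.

```python
# Minimal sketch of running a small quantized model on a low-end CPU
# (e.g. a Raspberry Pi) with llama-cpp-python. Assumes a GGUF file has
# already been downloaded; the path below is a placeholder.
from llama_cpp import Llama

llm = Llama(
    model_path="models/deepseek-r1-distill-qwen-1.5b-q4_k_m.gguf",  # hypothetical filename
    n_ctx=512,      # small context window to keep RAM usage down
    n_threads=4,    # a Pi 4/5 has 4 CPU cores
)

out = llm(
    "Explain in one sentence why small quantized models can run on cheap hardware.",
    max_tokens=64,
)
print(out["choices"][0]["text"])
```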
So where is all of this headed? Haven't some of these hobby-type findings changed the future before? I'm sure there is someone out there trying to show that you don't need to spend literally hundreds of billions of dollars, despite the narrative that everyone needs the most racks and the most GPUs.
Major technology firms are now scaling back their projected investments in AI data centers. Emerging platforms like DeepSeek have demonstrated that AI may require significantly less computational power than originally anticipated. As a result, the infrastructure footprint needed to support AI is expected to shrink dramatically.

This impacts AI data center stocks such as SMCI (Super Micro Computer), which recently updated guidance to scale down its projections. IMO the narrative that projected sales are merely being "pushed out" is misleading; in reality, they are vanishing. Consequently, any forward-looking statements regarding SMCI's future earnings or stock price should be reconsidered, if not abandoned altogether. Certainly SMCI has plenty of "fans," but the rosy forward-looking picture is falling apart. SMCI was, however, the most shorted S&P 500 stock in April, showing that market sentiment regarding it is changing.
Running the model and training the model are two very different things; running a model doesn't require anything like the compute needed to train it. That said, claiming billions were needed to train the models was an overhyped con to raise extra billions from investors.
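A rough illustration of just how different the two are, using the commonly cited approximations of about 6 * N * D FLOPs to train a model with N parameters on D tokens, versus roughly 2 * N FLOPs per generated token at inference. The parameter and token counts below are illustrative assumptions, not figures from any particular lab:

```python
# Back-of-the-envelope comparison of training vs. inference compute,
# using the standard approximations: training ~ 6 * N * D FLOPs,
# inference ~ 2 * N FLOPs per token (N = parameters, D = training tokens).

N = 7e9                    # assumed 7B-parameter model
D = 2e12                   # assumed 2 trillion training tokens
tokens_generated = 1_000   # a typical chat response

training_flops = 6 * N * D
inference_flops = 2 * N * tokens_generated

print(f"Training:  ~{training_flops:.2e} FLOPs")
print(f"Inference: ~{inference_flops:.2e} FLOPs for {tokens_generated} tokens")
print(f"Ratio:     ~{training_flops / inference_flops:.1e}x more compute to train")
```

Under these assumptions, a single training run costs billions of times more compute than serving one response, which is why a model can run on modest hardware even though training it needed a GPU cluster.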