Retro Chip.....Tiny RAM: AI's Power Unleashed....if this is any bit true.......

Discussion in 'Wall St. News' started by S2007S, May 15, 2025.

  1. S2007S


  2. nitrene


    Get real. You really believe 128 MB of RAM has any use in modern computing? You can't even load a decent Linux kernel with that little RAM.

    Deepseek was itself a mirage. They were using smuggled H100s from Singapore. You really expect the Chinese to tell you the truth?

    People on ET will believe anything written on the net. Just like everyone was sure the hyperscalers would magically make a better GPU than Nvidia, you know, the company that created the industry & has a 25-year head start on everyone.
     
  3. maxinger




    Always read such news with a pinch of salt.
    Then you will never be conned / scammed.
    And you could con the conman / scam the scammer.
     
  4. Businessman


    Next version will run on a 1K ZX81
     
  5. S2007S


    So basically more media lies....


    Dang it


    Guess they will have to pay $50,000 a pop for those Blackwells
     
  6. Businessman



    Running AI models vs. training them involves a big difference in compute power.

    The article says the LLM they ran had 260,000 parameters. I have no idea how useful such a small LLM would be, so I asked the AI:


    A language model with 250,000 parameters is extremely small by modern standards and would generally not be considered "good" for most real-world NLP tasks.

    Here's why:
    • Modern LLMs like GPT-3, GPT-4, Claude, or LLaMA models typically have billions (and even trillions) of parameters.

    • A model with 250k parameters is more like a toy model—useful for:
      • Educational purposes

      • Testing architectures

      • Running on extremely low-resource environments (e.g., microcontrollers)

      • Very simple or narrow tasks (like basic text classification, rule-based chatbots)
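    For scale, here's a quick back-of-envelope sketch (my own, not from the article) of how much memory a ~250,000-parameter model's weights would actually need at common precisions. It shows why tiny RAM isn't the obstacle with a model this small -- capability is:

    ```python
    # Rough memory footprint of a ~250k-parameter model's weights
    # at common weight precisions (weights only; activations and
    # framework overhead are extra but also tiny at this scale).

    PARAMS = 250_000

    BYTES_PER_PARAM = {
        "float32": 4,
        "float16": 2,
        "int8": 1,
    }

    for dtype, nbytes in BYTES_PER_PARAM.items():
        kib = PARAMS * nbytes / 1024
        print(f"{dtype}: ~{kib:.0f} KiB of weights")

    # Even at full float32 precision the weights come in under 1 MiB,
    # so a machine with 128 MB of RAM (or far less) can hold them with
    # room to spare. The model is just too small to be very capable.
    ```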
     
  7. gwb-trading


    You can even run DeepSeek on a Raspberry Pi as shown in the video below.

    People have run AI engines on many different low-end processors; it does not mean they are very effective or fast. This is nothing more than a hobbyist trend.

     
  8. S2007S




    So what is the outcome of where this is all headed?

    Haven't some of these hobby-type findings changed the future before? I'm sure there is someone out there trying to show that you don't need to spend literally hundreds of billions of dollars on AI, despite this narrative everyone repeats that you need the most racks and the most GPUs..
     
  9. gwb-trading


    Major technology firms are now scaling back their projected investments in AI data centers. Emerging platforms like DeepSeek have demonstrated that AI may require significantly less computational power than originally anticipated. As a result, the infrastructure footprint needed to support AI is expected to shrink dramatically.

    This impacts AI data center stocks such as SMCI (Super Micro Computer) which recently updated guidance to scale down projections. IMO the narrative that projected sales are merely being "pushed out" is misleading—they are, in reality, vanishing. Consequently, any forward-looking statements regarding SMCI’s future earnings or stock price projections should be reconsidered — if not abandoned altogether.

    Certainly SMCI has plenty of "fans" but the rosy forward-looking picture is falling apart. SMCI was however the most shorted S&P 500 stock in April -- showing that market sentiment regarding it is changing.
     
    Last edited: May 15, 2025
  10. Businessman


    Running the model and training the model are two very different things. Running a model doesn't require anything like the power to train the model.
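    A rough sketch of why, using the common rule of thumb that inference costs ~2N FLOPs per generated token while training costs ~6N FLOPs per training token (N = parameter count). The model size and token counts below are illustrative, not taken from any post here:

    ```python
    # Back-of-envelope comparison of inference vs. training compute,
    # using the widely cited ~2N / ~6N FLOPs-per-token heuristics.

    def inference_flops(n_params: float, tokens: int) -> float:
        """Approximate FLOPs to generate `tokens` tokens at inference."""
        return 2 * n_params * tokens

    def training_flops(n_params: float, train_tokens: float) -> float:
        """Approximate FLOPs to train on `train_tokens` tokens."""
        return 6 * n_params * train_tokens

    N = 7e9  # a 7B-parameter model, purely for illustration

    print(f"Answer one 1k-token prompt: {inference_flops(N, 1_000):.2e} FLOPs")
    print(f"Train on 1T tokens:         {training_flops(N, 1e12):.2e} FLOPs")

    # The training run is roughly a billion times more compute than a
    # single answer -- which is why inference fits on modest hardware
    # while training needs warehouse-scale GPU clusters.
    ```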

    That said, claiming billions were needed to train the models was an overhyped con to raise extra billions from investors.
     
    #10     May 15, 2025