The Big LLM Architecture Comparison

(magazine.sebastianraschka.com)

147 points | by mdp2021 7 hours ago

5 comments

  • strangescript 2 hours ago
    The diagrams in this article are amazing if you are somewhere in between a novice and expert. Seeing all of the new models laid out next to each other is fantastic.
  • webappguy 1 hour ago
    Would love to see a Pt. 2 with even what is rumored about the top closed-source frontier models, e.g. o5, o3 Pro, o4 or 4.5, Gemini 2.5 Pro, Grok 4, and Claude Opus 4
  • bravesoul2 5 hours ago
    This is a nice catch-up for someone who hasn't been keeping up, like me
  • Chloebaker 2 hours ago
    Honestly it's crazy to think how far we’ve come since GPT-2 (2019). Today, comparing LLMs to determine their performance is notoriously challenging, and it feels like every 2 weeks a model beats a new benchmark. I’m really glad DeepSeek was mentioned here, because the key architectural techniques it introduced in V3, which improved its computational efficiency and distinguish it from many other LLMs, were really transformational when it came out.
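    For concreteness, here is a minimal sketch of the sparse mixture-of-experts routing idea that DeepSeek's models (among others) rely on for computational efficiency: each token only activates a few expert feed-forward networks, so per-token compute stays roughly constant while total parameters grow. The class name, sizes, and top-k routing here are illustrative, not DeepSeek's exact design (V3 also uses shared experts and multi-head latent attention).

    ```python
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class TopKMoE(nn.Module):
        """Toy sparse mixture-of-experts layer with token-level top-k routing."""
        def __init__(self, d_model=64, d_ff=256, n_experts=8, top_k=2):
            super().__init__()
            self.router = nn.Linear(d_model, n_experts)
            self.experts = nn.ModuleList([
                nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
                for _ in range(n_experts)
            ])
            self.top_k = top_k

        def forward(self, x):                       # x: (tokens, d_model)
            scores = self.router(x)                 # (tokens, n_experts)
            weights, idx = scores.topk(self.top_k, dim=-1)
            weights = F.softmax(weights, dim=-1)    # normalize over the chosen experts
            out = torch.zeros_like(x)
            for slot in range(self.top_k):
                for e, expert in enumerate(self.experts):
                    mask = idx[:, slot] == e        # tokens whose slot-th choice is expert e
                    if mask.any():
                        out[mask] += weights[mask, slot].unsqueeze(-1) * expert(x[mask])
            return out

    x = torch.randn(10, 64)
    print(TopKMoE()(x).shape)   # torch.Size([10, 64]); only 2 of 8 experts run per token
    ```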
  • dmezzetti 3 hours ago
    While all these architectures are innovative and have helped improve either accuracy or speed, the same fundamental problem of generating factual information still exists.

    Retrieval Augmented Generation (RAG), Agents and other similar methods help mitigate this. It will be interesting to see if future architectures eventually replace these techniques.
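    As a rough illustration of the RAG pattern mentioned above: retrieve relevant passages at inference time and prepend them to the prompt, leaving the model itself unchanged. `embed`, `corpus`, and `generate` are placeholders for whatever embedding model, document store, and LLM are actually used; this is a sketch, not a production pipeline.

    ```python
    import numpy as np

    def retrieve(question, corpus, embed, k=3):
        """Rank corpus passages by cosine similarity to the question."""
        q = embed(question)
        scores = []
        for p in corpus:
            v = embed(p)
            scores.append(float(np.dot(q, v) / (np.linalg.norm(q) * np.linalg.norm(v) + 1e-9)))
        top = np.argsort(scores)[::-1][:k]
        return [corpus[i] for i in top]

    def rag_answer(question, corpus, embed, generate):
        context = "\n".join(retrieve(question, corpus, embed))
        prompt = (f"Answer using only the context below.\n"
                  f"Context:\n{context}\n\n"
                  f"Question: {question}\nAnswer:")
        # The grounding lives entirely in the prompt; the model weights are untouched.
        return generate(prompt)
    ```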

    • tormeh 1 hour ago
      To me, the issue seems to be that we're training transformers to predict text, which only forces the model to embed limited amounts of logic. We'd have to find something different to train models on in order for them to stop hallucinating.
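      To make the point concrete, here is a toy version of that plain next-token objective: the loss only rewards assigning probability to the next token in the training text, with no term for factual correctness. Shapes and the stand-in "model" are illustrative.

      ```python
      import torch
      import torch.nn.functional as F

      vocab, d = 100, 32
      tokens = torch.randint(0, vocab, (1, 16))   # a batch of token ids
      emb = torch.nn.Embedding(vocab, d)
      lm_head = torch.nn.Linear(d, vocab)

      hidden = emb(tokens)                        # stand-in for a transformer stack
      logits = lm_head(hidden)                    # (1, 16, vocab)

      # Predict token t+1 from positions <= t: shift logits and targets by one.
      loss = F.cross_entropy(logits[:, :-1].reshape(-1, vocab),
                             tokens[:, 1:].reshape(-1))
      print(loss)
      ```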
    • bsenftner 1 hour ago
      I'm still wondering why, given that RAG is conceptually simple and easy to implement, the foundational models have not incorporated it into their base functionality. The lack of that strikes me as a negative point about RAG and its variants, because if any of them worked, it would be in the models directly and would not need to be added afterwards.
      • bavell 24 minutes ago
        RAG is a prompting technique; how could they possibly incorporate it into the pretraining?