AI comes up with bizarre physics experiments, but they work

(quantamagazine.org)

125 points | by pseudolus 5 hours ago

10 comments

  • JimDabell 3 hours ago
    > Initially, the AI’s designs seemed outlandish. “The outputs that the thing was giving us were really not comprehensible by people,” Adhikari said. “They were too complicated, and they looked like alien things or AI things. Just nothing that a human being would make, because it had no sense of symmetry, beauty, anything. It was just a mess.”

    This description reminds me of NASA’s evolved antennae from a couple of decades ago. It was created by genetic algorithms:

    https://en.wikipedia.org/wiki/Evolved_antenna
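For readers unfamiliar with the technique: a toy genetic-algorithm loop in the spirit of the evolved antenna might look like the sketch below. Everything here (the fitness stand-in, population size, mutation scale) is invented for illustration; NASA's actual setup evolved antenna geometries against an electromagnetic simulator rather than this toy objective.

```python
import random

def fitness(genome):
    # Toy stand-in for an antenna simulator: reward genomes
    # whose elements sum close to a target "gain" of 10.
    return -abs(sum(genome) - 10.0)

def mutate(genome, scale=0.5):
    # Perturb one randomly chosen gene by a small random amount.
    i = random.randrange(len(genome))
    child = list(genome)
    child[i] += random.uniform(-scale, scale)
    return child

def evolve(pop_size=50, genome_len=8, generations=200):
    pop = [[random.uniform(0, 3) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]      # truncation selection
        pop = survivors + [mutate(random.choice(survivors))
                           for _ in range(pop_size - len(survivors))]
    return max(pop, key=fitness)

best = evolve()
```

The point of the analogy: like PyTheus's designs, the winning genome is whatever scores well, with no pressure toward symmetry or human-readable structure.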

    • carabiner 1 hour ago
      [mandatory GA antenna post requirement satisfied]
  • luketaylor 3 hours ago
    Referring to this type of optimization program just as “AI” in an age where nearly everyone will misinterpret that to mean “transformer-based language model” seems really sloppy
    • saithound 29 minutes ago
      Referring to this type of optimization as AI in the age where nearly everybody is looking to fund transformer-based language models and nobody is looking to fund this kind of optimization is just common sense though.
    • bee_rider 1 hour ago
      How can one article be expected to fix the problem of people sloppily using “AI” when they mean LLM or something like that?
      • rachofsunshine 1 hour ago
        I use "ML" when talking about more traditional/domain specific approaches, since for whatever reason LLMs haven't hijacked that term in the same way. Seems to work well enough to avoid ambiguity.

        But I'm not paid by the click, so different incentives.

        • Nevermark 50 minutes ago
          I like that.

          AI for attempts at general intelligence. (Not just LLMs, which already have a name … “LLM”.)

          ML for any iterative inductive design of heuristical or approximate relationships, from data.

          AI would fall under ML, as the most ambitious/general problems. And likely best treated as time (year) relative, i.e. a moving target, as the quality of general models continues to improve in breadth and depth.

        • smj-edison 52 minutes ago
          Generative AI vs artificial neural network is my go-to (though ML is definitely shorter than ANN, lol).
      • a_victorp 30 minutes ago
        Just don't use the term AI. It has no well defined meaning and is mostly intended as a marketing term
      • Lionga 52 minutes ago
        Just do not use AI for anything except LLMs anymore. Same way that crypto scam has taken the word crypto.

        crypto must now be named cryptography and AI must now be named ML to avoid giving the scammers and hypers good press.

        • ItsHarper 22 minutes ago
          Yep. I dislike it just as much as ceding crypto, but at the end of the day language changes, and clarity matters.

          I think image and video generation that aren't based on LLMs can also use the term AI without causing confusion.

    • tomrod 2 hours ago
      I know, but can we blame the masses for misunderstanding AI when they are deliberately misinformed that transformers are the universe of AI? I think not!
    • fragmede 44 minutes ago
      Thinking "nearly everyone" has that precise definition of AI seems way more sloppy. Most people still haven't even heard of OpenAI and ChatGPT, but among people who have, they've probably heard stories about AI in science fiction. My definition of AI is any advanced computer processing, generative or otherwise, that's happened since we got enough computing power and RAM to do something about it, aka lately.
    • pharrington 1 hour ago
      I'll bet that almost everyone who reads Quanta Magazine knows what they mean by AI.
    • andai 3 hours ago
      That's how I feel about Web 3.0...
      • buu700 2 hours ago
        Web 3(.0) always makes me think of the time around 14 years ago when Mark Zuckerberg publicly lightly roasted my roommate for asking for his predictions on Web 4.0 and 5.0.
    • zeofig 1 hour ago
      Absolutely agree.
  • markasoftware 4 hours ago
    not an LLM, in case you're wondering. From the PyTheus paper:

    > Starting from a dense or fully connected graph, PyTheus uses gradient descent combined with topological optimization to find minimal graphs corresponding to some target quantum experiment
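A stripped-down sketch of that kind of loop, assuming only the quoted description: continuous weights on a dense graph, gradient descent on a loss, then pruning of near-zero edges as the "topological" step. The loss below is an arbitrary toy objective (preferring a path 0-1-2-3), not PyTheus's actual quantum-experiment fidelity.

```python
import itertools
import random

n = 4  # nodes; start from the fully connected graph on n vertices
edges = list(itertools.combinations(range(n), 2))
w = {e: random.uniform(0.1, 1.0) for e in edges}

def loss(weights):
    # Toy objective: we "want" only the path 0-1-2-3, so penalize
    # deviation of those edges from 1 and of all other edges from 0.
    target = {(0, 1), (1, 2), (2, 3)}
    return sum((weights[e] - (1.0 if e in target else 0.0)) ** 2
               for e in edges)

lr, eps = 0.1, 1e-6
for _ in range(500):
    for e in edges:
        # Numerical gradient descent on each edge weight.
        w_plus = dict(w)
        w_plus[e] += eps
        grad = (loss(w_plus) - loss(w)) / eps
        w[e] -= lr * grad

# "Topological" step: drop edges whose weight collapsed toward zero,
# leaving a minimal graph.
minimal_graph = [e for e in edges if abs(w[e]) > 0.05]
```

The real system optimizes against a quantum target state rather than a hand-written penalty, but the dense-graph-then-prune shape of the search is the same.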

  • topspin 1 hour ago
    "It added an additional three-kilometer-long ring between the main interferometer and the detector to circulate the light before it exited the interferometer’s arms."

    Isn't that a delay line? The benefit being that when the undelayed and delayed signals are mixed, the phase shift you're looking for is amplified.
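Back-of-envelope check on that intuition (all numbers below are invented): if each pass through the arms picks up a tiny phase shift δ, recirculating the light N times accumulates roughly N·δ before readout, so the small-signal interference readout grows by about a factor of N, at the cost of bandwidth.

```python
import math

delta = 1e-9     # hypothetical per-pass phase shift (radians)
passes = 100     # hypothetical number of recirculations in the ring

accumulated = passes * delta   # phase grows linearly with passes

# Interference readout ~ sin(phase); for tiny phases sin(x) ~ x,
# so the measurable signal is amplified by roughly the pass count.
gain = math.sin(accumulated) / math.sin(delta)
```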

  • viraptor 3 hours ago
    This sounds similar to evolved antennas https://en.wikipedia.org/wiki/Evolved_antenna

    There are a few things like that where we can throw AI at a problem and it generates something better, even if we don't know exactly why it's better yet.

  • aeternum 4 hours ago
    More hype than substance unfortunately.

    The AI rediscovered an interferometer technique the Russians found decades ago, optimized a graph in an unusual way, and came up with a formula to better fit a dark matter plot.

    • irjustin 2 hours ago
      Ehhhhh, I'll say it's substantive and not just pure hype.

      Yes, the AI "resurfaced" the work, but it also incorporated the Russians' theory into the practical design. At least enough to say "hey, make sure you look at this": the system produced a workable something with an X% improvement, or enough benefit that the researchers took it seriously and investigated. That yielded an actual design with a 10-15% improvement and a "wish we had this earlier" statement.

      No one was paying attention to the work before.

    • rlt 3 hours ago
      The discovery itself doesn't seem like the interesting part. If the discovery wasn't in the training data, then it's a sign AI can produce novel scientific research / experiments.
      • wizzwizz4 1 hour ago
        It's not that kind of AI. We know that these algorithms can produce novel solutions. See https://arxiv.org/abs/2312.04258, specifically "Urania".
      • hammyhavoc 3 hours ago
        This is monkeys and typewriters.

        It's like seeing things in clouds or tea leaves.

        • Supermancho 3 hours ago
          If the "monkeys with typewriters" produce a Shakespeare sonnet faster than he is reincarnated, it's a useful resource.

          At least, that's the thinking.

          • coliveira 1 hour ago
            It is 100% impossible for AIs to create a Shakespeare sonnet. They can create a pastiche of a sonnet, which is completely different.
          • andsoitis 43 minutes ago
            It can’t be a Shakespeare sonnet if Shakespeare didn’t write it
          • shermantanktop 2 hours ago
            That’s the looooong game on both counts.
            • tux1968 1 hour ago
              'tis the patient plot on either side, where time doth weave its cunning, deep and wide.
      • coliveira 1 hour ago
        AI companies stole massive amounts of information from every book they could get. Do you really believe there's any research they don't have input into their training sets?
  • smj-edison 41 minutes ago
    Am I understanding the article correctly that they created a quantum playground, and then set their algorithm to work optimizing the design within the playground's constraints? That's pretty cool, especially for doing graph optimization. I'd be curious to know how compute-intensive it was.
  • anonym00se1 4 hours ago
    Feels like we're going to see a lot of headlines like this in the future.

    "AI comes up with bizarre ___________________, but it works!"

    • viraptor 3 hours ago
      We've seen this for a while, just not as often: antennas, IC, FPGA design, small mechanical things, ...
    • ninetyninenine 4 hours ago
      That’s how we become numb to the progress. Like think of this in the context of a decade ago. The news would’ve been amazing.

      Imagine these headlines mutating slowly into “all software engineering performed by AI at certain company” and we will just dismiss it as generic because being employed and programming with keyboards is old fashioned. Give it twenty years and I bet this is the future.

      • somenameforme 2 hours ago
        A decade ago it wouldn't have been called AI, and it probably shouldn't be called AI today because it's absurdly misleading. It's a python program that "uses gradient descent combined with topological optimization to find minimal graphs corresponding to some target quantum experiment".

        Of course today call something "AI" and suddenly interest, and presumably grant opportunities, increase by a few orders of magnitude.

        • JimDabell 1 hour ago
          That’s been called AI for about thirty years as far as I am aware. I’m pretty sure I first ran into it studying AI at uni in the 90s, reading Norvig’s Artificial Intelligence: A Modern Approach. This is just the AI Effect at work.

          https://en.wikipedia.org/wiki/AI_effect

        • ninetyninenine 2 hours ago
          Gradient descent is a learning algorithm. This is AI.
          • somenameforme 1 hour ago
            Hahah, if you're going to go that route you may as well call all of math "AI", which is probably where we're headed anyhow! Gradient descent is used in training LLM systems, but it's no more "AI" itself than e.g. a quadratic regression is.
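The quadratic-regression comparison is easy to make concrete: the same plain gradient-descent loop that underlies neural-network training will happily fit a quadratic regression, with no "intelligence" anywhere in the optimizer. Toy data, made up here for illustration:

```python
# Fit y = a*x^2 + b*x + c by gradient descent on mean squared error.
xs = [-2.0, -1.0, 0.0, 1.0, 2.0]
ys = [x * x for x in xs]          # true curve: a=1, b=0, c=0

a = b = c = 0.0
lr = 0.01
for _ in range(5000):
    ga = gb = gc = 0.0
    for x, y in zip(xs, ys):
        err = (a * x * x + b * x + c) - y
        ga += 2 * err * x * x     # d(err^2)/da
        gb += 2 * err * x         # d(err^2)/db
        gc += 2 * err             # d(err^2)/dc
    a -= lr * ga / len(xs)
    b -= lr * gb / len(xs)
    c -= lr * gc / len(xs)
```

Whether one files this under "AI", "ML", or "curve fitting" is exactly the terminological dispute in this thread.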
            • ordu 1 hour ago
              Neural networks are the hype now, but that doesn't mean there was no AI before them. There was; it struggled with some problems, and for some of them it found solutions. Today people tend to reject everything that is not a neural net as not "AI": if it is not a neural net, then it is not AI, just general CS. However, AI research generated a ton of search algorithms, and while gradient descent (I think) was not invented as part of AI research, AI research adapted the idea to discrete spaces in multiple ways.

              OTOH, AI is very much about search in multidimensional spaces; it is so central that it would probably make sense to call gradient descent an AI tool. Not because it is used to train neural networks, but because search in multidimensional spaces is the specialty of AI. People probably wouldn't agree, like they don't agree that the Fundamental Theorem of Algebra is not really of algebra (and not fundamental, btw). But the disagreement is not about the deep meaning of the theorem or of gradient descent; it's about tradition and "we always did it this way".

      • hammyhavoc 3 hours ago
        Twenty bucks says it isn't.
  • qz_kb 2 hours ago
    This is not "AI", it's non-linear optimization...
    • tomrod 2 hours ago
      We all do math down here.
  • IAmGraydon 4 hours ago
    This is the kind of thing I like to see AI being used for. That said, as is noted in the article, this has not yet led to new physics or any indication of new physics.