I tried vibe coding in BASIC and it didn't go well

(goto10retro.com)

98 points | by ibobev 3 days ago

25 comments

  • nine_k 3 hours ago
    All these stories about vibe coding going well or badly remind me of an old joke.

    A man visits his friend's house. There is a dog in the house. The friend says that the dog can play poker. The man is incredulous, but they sit at a table and have a game of poker; the dog actually can play!

    The man says: "Wow! Your dog is incredibly, fantastically smart!"

    The friend answers: "Oh, well, no, he's a naïve fool. Every time he gets a good hand, he starts wagging his tail."

    Whether you see LLMs as impressively smart or annoyingly foolish depends on your expectations. Currently they are very smart talking dogs.

    • tliltocatl 1 minute ago
      Nobody says LLMs aren't impressive. But there is a subtle difference between an impressive trick and something worth throwing 1% of GDP at. Proponents say that if we keep throwing more money at it, it will improve, but this is far from certain.
    • markus_zhang 21 minutes ago
      From my experience (we have to vibe code, as it has become the norm at the company), vibe coding is most effective when the developer feeds detailed context to the agent beforehand and gives it very specific commands for each task. It still speeds up development quite a bit once everything is going in the right direction.
      • 6LLvveMx2koXfwn 8 minutes ago
        I vibe code in domains I am unfamiliar with - getting Claude to configure the right choice of AWS service for a specific use-case can take a very long time. But whether that's still quicker than me alone with the docs is hard to tell.
    • totetsu 36 minutes ago
      Try playing an adversarial word game with ChatGPT. Say the rules are: one player asks questions, and the other is not allowed to say "yes" or "no", not allowed to reuse the same wording, and not allowed to evade the question. You'll see its tail wagging pretty quickly.
    • kqr 2 hours ago
      Somehow this also reminds me of http://raisingtalentthebook.com/wp-content/uploads/2014/04/t...

      "I taught my dog to whistle!"

      "Really? I don't hear him whistling."

      "I said I taught him, not that he learnt it."

      • pyman 1 hour ago
        A person is flying a hot air balloon and realises he’s lost. He lowers the balloon and spots a man down below. He shouts:

        “Excuse me! Can you help me? I promised a friend I’d meet him, but I have no idea where I am.”

        The man replies, “You’re in a hot air balloon, hovering 30 feet above the ground, somewhere between 40 and 41 degrees north latitude and between 59 and 60 degrees west longitude.”

        “You must be a Prompt Engineer,” says the balloonist.

        “I am,” replies the man. “How did you know?”

        “Well,” says the balloonist, “everything you told me is technically correct, but it’s of no use to me and I still have no idea where I am.”

        The man below replies, “You must be a Vibe Coder.”

        “I am,” says the balloonist. “How did you know?”

        "Because you don’t know where you are or where you’re going. You made a promise you can’t keep, and now you expect me to solve your problem. The fact is, you’re in the same position you were in before we met, but now it’s somehow my fault!"

        • johnisgood 21 minutes ago
          This is really good, saving it. :D
    • Marazan 1 hour ago
      The variation I've seen on this applied to AIs is:

      Fred insists to his friend that he has a hyper-intelligent dog that can talk. Sceptical, the friend enquires of the dog, "What's 2+2?"

      "Five" says the dog

      "Holy shit a talking dog!" says the friend "This is the most incredible thing that I've ever seen in my life".

      "What's 3+3?"

      "Eight" says the dog.

      "What is this bullshit you're trying to sell me Fred?"

      • codeflo 25 minutes ago
        This joke is a closer analogy to reality with a small addition. After the friend is suitably impressed:

        > "Holy shit a talking dog!" says the friend "This is the most incredible thing that I've ever seen in my life".

        this happens:

        "Yes," says Fred. "As you can see, it's already at PhD level now, constantly improving, and is on track to replace 50% of the economy in twelve months or sooner."

        Confused, the friend asks:

        > "What's 3+3?"

        > "Eight" says the dog.

        > "What is this bullshit you're trying to sell me Fred?"

  • recipe19 7 hours ago
    I work on niche platforms where the amount of example code on GitHub is minimal, and this definitely aligns with my observations. The error rate is way too high to make "vibe coding" possible.

    I think it's a good reality check for the claims of impending AGI. The models still depend heavily on being able to transform other people's work.

    • winrid 6 hours ago
      Even with TypeScript, Claude will happily break basic business logic to make tests pass.
      • bapak 1 minute ago
        Speaking of TypeScript, every time I feed a hard type problem to LLMs they just can't do it. Sometimes I find out it's a TS limitation or just not implemented yet, but that won't stop us from wasting 40 minutes together.
      • motorest 5 hours ago
        > Even with TypeScript, Claude will happily break basic business logic to make tests pass.

        It's my understanding that LLMs change the code to meet a goal, and if you prompt them with vague instructions such as "make tests pass" or "fix tests", LLMs in general apply the minimum necessary and sufficient changes to whatever code allows their goal to be met. If you don't explicitly instruct them, they can't and won't tell project code apart from test code. So they will change your project code to make tests pass.

        This is not a bug. Changing project code to make tests pass is a fundamental approach to refactoring projects, and the whole basis of TDD. If that's not what you want, you need to prompt them accordingly.

        • brailsafe 39 minutes ago
          Do you mean not just LLMs, but agents? Is this not avoided by narrowing your scope and just using the chat interface, which may also not produce what you're hoping for, but at least can't muck about in your existing code?
        • Terr_ 3 hours ago
          > It's my understanding that LLMs change the code to meet a goal

          I assume in this case you mean a broader conventional application, of which an LLM algorithm is a smaller-but-notable piece?

          LLMs themselves have no goals beyond predicting new words for a document that "fit" the older words. It may turn 2+2 into 2+2=4, but it's not actually doing math with the goal of making both sides equal.

          • motorest 1 hour ago
            > I assume in this case you mean a broader conventional application, of which an LLM algorithm is a smaller-but-notable piece?

            Not necessarily. If you prompt an LLM to limit changes to some projects or components, it complies with the request.

        • chuckadams 4 hours ago
          Fixing bugs is also changing project code to make tests pass. The assistant is pretty good at knowing which side to change when it’s working from documentation that describes the correct behavior.
      • CalRobert 5 hours ago
        That seems like the tests don’t work?
        • paffdragon 1 hour ago
          Or at least they don't cover business logic if they pass while breaking it.
    • pygy_ 1 hour ago
      I've had a similar problem with WebGPU and WGSL. LLMs create buffers with the wrong flags (and other API usage errors), don't clean up resources, mix up GLSL and WGSL, and write semicolon-less WGSL (in template strings) if you ask them to write semicolon-less [0] JS...

      It's a big mess.

      0. https://github.com/isaacs/semicolons/blob/main/semicolons.js

    • vineyardmike 6 hours ago
      Completely agree. I’m a professional engineer, but I like to get some ~vibe~ help on personal projects after work when I’m tired and just want my personal project to go faster. I’ve had a ton of success with Go, JavaScript, Python, etc. I had mixed success with writing idiomatic Elixir roughly a year ago, but I’d largely assumed that would be resolved by now, since every model maker has started aggressively filling training data with code now that we’ve found the PMF of LLM code assistance.

      Last night I tried to build a super basic “barely above hello world” project in Zig (a language whose syntax I don't know), and it took trying a few different LLMs to find one that could actually write anything that would compile (Gemini w/ search enabled). I really wasn’t expecting that, considering how good my experience has been with mainstream languages.

      Also, I think OP did rather well considering BASIC is hardly used anymore.

    • gompertz 7 hours ago
      Yep I program in some niche languages like Pike, Snobol4, Unicon. Vibe coding is out of the question for these languages. Forced to use my brain!
      • johnisgood 20 minutes ago
        You could always feed it some documentation and example programs. I did that with a niche language and Claude around 8 months ago, and it worked out really well.
    • andsoitis 6 hours ago
      > The models

      The models don’t have a model of the world. Hence they cannot reason about the world.

      • pygy_ 1 hour ago
        I tried vibe coding WebGPU/WGSL, which is thoroughly documented but has little actual code around, and LLMs are pretty bad at it right now.

        They don't need a formal model, they need examples from which they can pilfer.

      • hammyhavoc 5 hours ago
        "reason" is doing some heavy-lifting in the context of LLMs.
    • jjmarr 6 hours ago
      I've noticed the error rate doesn't matter if you have good tooling feeding into the context. The AI hallucinates, sees the bug, and fixes it for you.
    • empressplay 7 hours ago
      I don't know if you're working with modern models. Grok 4 doesn't really know much about assembly language on the Apple II but I gave it all of the architectural information it needed in the first prompt of a conversation and it built compilable and executable code. Most of the issues I encountered were due to me asking for too much in a prompt. But it built a complete, albeit simple, assembly language game in a few hours of back and forth with it. Obviously I know enough about the Apple II to steer it when it goes awry, but it's definitely able to write 'original' code in a language / platform it doesn't inherently comprehend.
      • timschmidt 6 hours ago
        This matches my experience as well. Poor performance usually means I haven't provided enough context or have asked for too much in a single prompt. Modifying the prompt accordingly and iterating usually results in satisfactory output within the next few tries.
  • manca 6 hours ago
    I literally had the same experience when I asked the top coding LLMs (Claude Code, GPT-4o) to rewrite an Erlang/Elixir codebase in Java. They got some things right, but most things wrong, and it required a lot of debugging to figure out what went wrong.

    It's absolute proof that they are still dumb prediction machines, fully relying on the type of content they've been trained on. They can't generalize (yet), and if you want to use them for novel things, they'll fail miserably.

    • conception 3 hours ago
      I’m curious what your process was. If you just said “rewrite this in Java” I’d expect that to fail. If you treated the LLM like a junior developer on an official project, worked with it to document the codebase, and came up with a plan, tasks for each part of the codebase, and a solid workflow prompt, I would expect it to succeed.
      • Marazan 1 hour ago
        Yes, if you do all the difficult time consuming bits I bet it would work.
    • nerdsniper 3 hours ago
      Claude Code / 4o struggle with this for me, but I had Claude Opus 4 rewrite a 2,500-line PowerShell script for embedded automation into Python and it did a pretty solid job. A few bugs, but cheaper models were able to clean those up. I still haven't found a great solution for general refactoring -- like I'd love to split it out into multiple Python modules, but I rarely like how it decides to do that without me telling it specifically how to structure the modules.
    • abrookewood 5 hours ago
      Clearly the issue is that you are going from Erlang/Elixir to Java, rather than the other way around :)

      Jokes aside, they are pretty different languages. I imagine you'd have much better luck going from .NET to Java.

      • tsimionescu 3 hours ago
        Sure, it's easier to solve an easier problem, news at eleven. In particular, translating from C# to Java could probably be automated with some 90% accuracy using a decent sized bash script.
        • mattmanser 1 hour ago
          I once redid a project from VB.NET to C# and pretty much did that.

          People misjudge many tasks as 'hard' when they are in fact easy but tedious.

          The problem is you need a high degree of accuracy, which you don't get with LLMs.

          The best you can do is set the LLM on a loop and try to brute-force it, which is the current vibe 'coding' trick.

          I sound pessimistic but I'm actually shocked at how effective it is.

      • nine_k 3 hours ago
        This mostly means that LLMs are good at simpler forms of pattern matching, and have a much harder time actually reasoning at significant depth. (It's not easy even for human intellect, the finest we currently have.)
    • h4ck_th3_pl4n3t 5 hours ago
      I just wish the LLM providers would realize this and instead provide specialized LLMs for each programming language. The results would likely be better.
      • chuckadams 4 hours ago
        The local models JetBrains IDEs use for completion are specialized per-language. For more general problems, I’m not sure over-fitting to a single language is any better for an LLM than it is for a human.
    • hammyhavoc 6 hours ago
      They'll never be fit for purpose. They're a technological dead-end for anything like what people are usually throwing them at, IMO.
      • zer00eyz 5 hours ago
        I will give you an example of where you are dead wrong, and one where the article is spot on (without diving into historic artifacts).

        I run Home Assistant, but I don't get to play with it every day. Here, LLMs excel at filling in the legion of blanks in both the manual and end-user devices. There is a large body of work for them to summarize and work against.

        I also play with SBCs. Many of these are "fringe" at best. LLMs are, as you say, "not fit for purpose".

        What kind of development you are using LLMs for will determine your experience with them. The tool may or may not live up to the hype depending on how "common", well-documented, and "frequent" your issue is. Once you start hitting these "walls", you realize that no, real reasoning, leaps of inference, and intelligence are still far away.

        • SecuredMarvin 3 hours ago
          I've had the same experience. As long as the level of public knowledge is high, LLMs are massively helpful. Otherwise, not so much, and they still hallucinate. It does not matter whether you think highly of this public knowledge: QFT, QED, and gravity are fine; AD emulation on Samba, or Atari BASIC, not so much.

          If I were to program in Atari BASIC, after finishing my Atari emulator on my C64, I would learn the environment and test my assumptions. Single-shot LLM questions won't do it. A strong agent loop probably could.

          I believe that LLMs are yanking the needle to 80%. This level is easily achievable for professionals of the trade and beyond the ability of beginners, so LLMs are really powerful tools here. But if you are trying for 90%, LLMs are always trying to keep you down.

          And if you are trying for 100%, or for something new, fringe, or exotic, LLMs are a disaster, because they do not learn and do not understand, even within the token window.

          We learn that knowledge (power) and language proficiency are indicators of crystallized, but not fluid, intelligence.

          • otabdeveloper4 2 hours ago
            > yanking the needle to 80%

            80 percent of what, exactly? A software developer's job isn't to write code, it's to understand poorly specified requirements. LLMs do nothing for that unless your requirements are already public on Stack Overflow and GitHub. (And in that case, do you really need an LLM to copy-paste for you?)

          • zer00eyz 43 minutes ago
            > fluid intelligence

            How about basic intelligence? Kids' logic puzzles.

            https://daydreampuzzles.com/logic-puzzles/

            LLMs whiffing hard on these sorts of puzzles is just amusing.

            It gets even better if you change the clues from innocent things like "driving tests" or "day care pickup" to things it doesn't really want to speak about: war crimes, suicide, dictators, and so on.

            Or just make up words out of whole cloth to use as "activities" in the puzzles.

      • motorest 5 hours ago
        > They'll never be fit for purpose. They're a technological dead-end for anything like what people are usually throwing them at, IMO.

        This comment is detached from reality. LLMs in general have been proven effective at even creating complete, fully working, fully featured projects from scratch. You need to provide the necessary context and use popular technologies with a large enough corpus for the LLM to know what to do. If one-shot approaches fail, a few iterations are all it takes to bridge the gap. I know that to be a fact because I do it on a daily basis.

        • otabdeveloper4 2 hours ago
          > because I do it on a daily basis

          Cool. How many "complete, fully working" products have you released?

          Must be in the hundreds now, right?

  • edent 3 hours ago
    Vibe Coding seems to work best when you are already an experienced programmer.

    For example "Prompt: Write me an Atari BASIC program that draws a blue circle in graphics mode 7."

    You need to know that there are various graphics modes and that mode 7 is the best for your use-case. Without that preexisting knowledge, you get stuck very quickly.
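
    For reference, a minimal, untested sketch of roughly what a correct answer looks like (the hue value in SETCOLOR assumes NTSC, and the 0.8 is a rough pixel-aspect correction):

      10 GRAPHICS 7:REM 160x80 four-color mode with text window
      20 SETCOLOR 0,8,6:REM color register 0 = blue, medium luminance
      30 COLOR 1:REM plot using color register 0
      40 DEG :REM trig functions in degrees
      50 R=35
      60 FOR A=0 TO 359 STEP 3
      70 PLOT 80+R*COS(A),40+0.8*R*SIN(A)
      80 NEXT A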

    • johnisgood 16 minutes ago
      Exactly. I would not like to be called a vibe coder just for using an LLM for tedious tasks, though; is it not a pejorative term? I used LLMs for a few projects and they did well, because I knew what I wanted and how I wanted it. So yeah, you do have to be an experienced programmer to excel with LLMs.

      That said, you can learn a lot using LLMs, which is nice. I have a friend who wants to learn Python, and I have given him actual resources, but I have also told him to use LLMs.

    • baxtr 2 hours ago
      This is a description of a “tool”. Anyone can use a hammer and chisel to carve wood, but only an artist with extensive experience will create something truly remarkable.

      I believe many in this debate are confusing tools with magic wands.

      • tonyhart7 57 minutes ago
        The marketing and social media buzz that AI (artificial intelligence) would replace human intelligence and everyone's jobs didn't help either.

        Sure, maybe it will someday, but not today. Though there are jobs that have already been replaced, in the writing industry for example.

    • JdeBP 1 hour ago
      Previous generations would have simply read something like the circle-drawing writeup by Jeffrey S. McArthur in chapter 4 of COMPUTE!'s Third Book of Atari, which, as a matter of fact, is available in scrapable text. (-:

      * https://archive.org/details/ataribooks-computes-third-book-o...

      * https://atariarchives.org/c3ba/page153.php

      Fun fact: Orson Scott Card can be found in chapter 1.

    • motorest 55 minutes ago
      > Vibe Coding seems to work best when you are already an experienced programmer.

      I think that is a very vague and ambiguous way of putting it.

      I would frame it a tad more specifically: vibe coding seems to work best when users know what they want and are able to set requirements and plan ahead.

      Vibe coding doesn't work at all, or produces an unmaintainable, god-awful mess, if users don't do software engineering and instead hack stuff together hoping it works.

      Garbage in, garbage out.

    • forinti 55 minutes ago
      Exactly! If you can't properly assess the output of the AI, you are really just shooting in the dark.
    • j4coh 2 hours ago
      In this case I asked ChatGPT the same thing without the part specifying mode 7, and it replied with a working program using mode 7, with a comment at the top that mode 7 would be the best choice.
    • throwawaylaptop 3 hours ago
      Exactly this. I'm a self-taught PHP/jQuery guy who learned it well enough to make an entire SaaS that enough companies pay for that it's a decent little lifestyle business.

      I started another project recently, basically vibe coding in PHP. Instead of a single-page app like I made before, it's just page-by-page loading. Which means the AI only needs to keep a few functions and the database in its head, not constantly work on some crazy UI management framework (whatever that's called).

      It's made in a few days what would have taken me weeks as an amateur. Yet I know enough to catch a few 'mistakes' and remind it to do it better.

      I'm happy enough.

    • cfn 2 hours ago
      Just for fun I asked ChatGPT "How would you ask an LLM to write a drawing program for the ATARI?" and it asked back for a bunch of details, to which I answered "I have no idea, just go with the simplest option". It chose the correct graphics mode and BASIC and created the program (which I didn't test).

      I still agree with you for large applications but for these simple examples anyone with a basic understanding of vibe coding could wing it.

    • kqr 2 hours ago
      Not only is asking for mode 7 a useful constraint; making sure the context contains domain-expert terminology also puts the LLM in a better spot in the sampling space.
  • Earw0rm 3 hours ago
    Has anyone tried it on x87 assembly language?

    For those that don't know: x87 was the FPU for 32-bit x86 architectures. It's not terribly complicated, but it uses stack-based register addressing with a fixed-size (eight-entry) stack.

    All operations work on the top-of-stack register and one other register operand, and push the result onto the top of the stack (optionally popping the previous top of stack before the push).

    It's hard, but not horribly so, for humans to write... more a case of it being annoyingly slow and having to be methodical, because you have to reason about the state of the stack at every step.

    I'd be very curious as to whether a token-prediction machine can get anywhere with this kind of task, as it requires a strong mental model of what's actually happening, or at least the ability to consistently simulate one as intermediate tokens/words.
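
    To make that concrete, here's a short, untested sketch in NASM-style syntax computing sqrt(a*a + b*b), where a, b, and c are hypothetical 32-bit float labels; note how the value that was ST0 becomes ST1 the moment something new is pushed:

      fld   dword [a]    ; push a: st0 = a
      fmul  st0, st0     ; square in place: st0 = a*a
      fld   dword [b]    ; push b: st0 = b, and a*a slides down to st1
      fmul  st0, st0     ; st0 = b*b, st1 = a*a
      faddp st1, st0     ; st1 = st0 + st1, then pop: st0 = a*a + b*b
      fsqrt              ; st0 = sqrt(a*a + b*b)
      fstp  dword [c]    ; store st0 to c and pop, leaving the stack balanced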

    • userbinator 2 hours ago
      If you are familiar with HP's calculators, x87 Asm isn't that difficult. Also noteworthy is that its density makes it a common choice for tiny demoscene productions.
      • Earw0rm 51 minutes ago
        Not too bad to write, kind of horrible to read.
    • silisili 2 hours ago
      I'm going to doubt that. I was pushing GPT a couple of weeks ago to test its limits. It's 100% unable to write compilable Go ASM syntax. In fairness, it's slightly oddball, but enough exists that it's not esoteric.

      In the error feedback cycle, it kept blaming Go, not itself. A bit eye-opening.

      • Earw0rm 52 minutes ago
        The thing with x87 is that it's easy to write compilable, correct-looking code, and much harder to write correct compilable code, even for trivial sequences of a dozen or so operations.

        Whereas in most asm dialects register AX is always register AX (word-length aliasing aside), that's not the case for x87: the object/value at ST3 in one operation may be at ST1 or ST5 a couple of instructions later.

      • messe 2 hours ago
        I'm comfortable writing asm for quite a few architectures, but Go's assembler...

        When I struggle to write Go ASM, I also blame Go and not myself.

  • Mikhail_Edoshin 56 minutes ago
    I don't know if anyone else notices this, but LLMs are very much like what you see in dreams when you reflect on them while awake. When you're asleep, a dream feels very coherent; but when you're awake, you see wild gaps and jumps, and the overall impression of many dreams' plots is that they are pure nonsense.

    I once heard advice about trying to re-read the text you see in dreams. And I did that once. It was a phrase where one of the words referred to a city. The first time I read it, the city was "London". I remembered the advice and re-read the phrase, and the word changed to the name of an old Russian city, "Rostov". Yet the phrase was "the same"; that is, it felt the same in the dream, even though the city was different.

    LLMs are like that dream mechanism. It is how something else is reflected in what we know (e.g. an image of a city is rendered as the name of a city, just any city). So, on one hand, we do have a similar mechanism in our minds. But on the other hand, our normal reasoning is very much unlike that. It would be a very wild stretch to believe that reasoning somehow stems from dreaming; I'd say reasoning is the opposite of dreaming. If we amplify our dream mechanics, we won't get a genius; more likely we'll get a schizophrenic.

  • ofrzeta 5 hours ago
    It didn't go well? I think it went quite well. It even produced an almost working drawing program.
    • abrookewood 5 hours ago
      Yep, thought the same thing. I guess people have very different expectations.
  • xiphias2 4 hours ago
    4o is not even a coding model, and it's very far from the best coding models OpenAI has. I seriously don't understand why these articles are upvoted so much.
    • ofrzeta 5 minutes ago
      Even though I was impressed by the original article (so, kind of contrary to the author), in the meantime I tried the same thing with Claude Sonnet 4 and got no better results. I tried about a dozen times, but it did not manage to create a "BASIC programm for Atari 800XL that makes use of display list interrupts to draw rainbow-like colored horizontal stripes", although this is like a "hello world" for that technique and there should be plenty of samples on the Internet.
    • throw101010 2 hours ago
      > I seriously don't understand why these articles are upvoted so much

      It confirms a bias for some; it triggers others who might have the opposite position (and maybe a bias too, on the other end).

      Perfect combo for successful social media posts... literally all about "attention" from start to finish.

  • ghuntley 30 minutes ago
    It's not really 'vibe coding' if you're copying and pasting from ChatGPT by hand...
  • Yokolos 3 hours ago
    When back-and-forth prompting starts producing nonsense, I've found it best to just start a new chat/context with the latest working version and try again with a slightly more detailed prompt that avoids the issues encountered in the previous chat. It usually helps. AI generally gets itself lost quickly, which can be annoying.
  • register 37 minutes ago
    I'll tell you a secret: the same thing also happens with mainstream languages.
  • ilaksh 6 hours ago
    I think it's a fair article.

    However, I will mention a few things. When you write an article like this, please note the particular language model used and acknowledge that they aren't all the same.

    Also realize that the context window is pretty large, and you can help the model by giving it information from manuals etc., so you don't need to rely entirely on its intrinsic knowledge.

    If the author had used o3 or o3 Pro and given it a few sections of the manual, it might have gotten farther. Also, if someone finds a way to connect an agent to a retro computer, say an Atari BASIC MCP that can enter text and take screenshots, "vibe coding" can work better, since an agent can see errors and self-correct.

  • danjc 2 hours ago
    What's missing here is tool use.

    For example, if the LLM had a compile tool, it would likely have been able to correct syntax errors.

    Similarly, visual errors may also have been caught if it were able to run the program and capture screens.

    • wbolt 2 hours ago
      Exactly! The way the LLM is used here is very, very basic and outdated. This experiment should be redone in a proper "agentic" setup, where there is a feedback loop between the model and the runtime, plus access to documentation and the internet. The goal now is not to encapsulate all knowledge inside a single LLM - that is too problematic and costly. An LLM is a language model, not a knowledge database. It lets you interpret and interact with knowledge and text data from multiple sources.
  • Radle 5 hours ago
    I had way better results. I'd assume the same would have happened for the author if he had provided the LLM with full documentation on what Atari BASIC is and some example programs.

    Especially for the drawing program and the game: the author would probably have received working code if he had supplied the AI with documentation for the graphics functions and sprite rendering in Atari BASIC.

  • docandrew 7 hours ago
    Maybe other folks’ vibe coding experiences are a lot richer than mine have been, but I read the article and reached the opposite conclusion of the author.

    I was actually pretty impressed that it did as well as it did in a largely forgotten language and outdated platform. Looks like a vibe coding win to me.

    • grumpyprole 1 hour ago
      Sure, it did OK with examples that are easily found in a textbook, like drawing a circle.
    • sixothree 6 hours ago
      Here's an example of a recent experience.

      I have a website that is sort of a CMS. I wanted users to be able to add a list of external links to their items. When a user adds a link to an entry, the website should go out and fetch a cached copy of the site. If there are errors, it should retry a few times. It should also capture an MHTML single file as well as a full-page screenshot. The user should be able to refresh the cache, and the site should keep all past versions. The cached copy should be viewable in a modal. The task also involves creating database entities, DTOs, CQRS handlers, etc.

      I asked Claude to implement the feature, went and took a shower, and when I came out it was done.

      • nico 5 hours ago
        I'm pretty new to CC (Claude Code); been using it in a very interactive way.

        What settings are you using to get it to just do all of that without your feedback or approval?

        Are you also running it inside a container, or setting some sort of command restrictions, or just yoloing it on a regular shell?

      • hammyhavoc 5 hours ago
        Let us know how the security audit by human beings on the output goes.
        • catmanjan 5 hours ago
          The auditors are using llms too!
  • fcatalan 4 hours ago
    I had more luck with a little experiment a few days ago: I took phone pics of one of the shorter BASIC listings from Tim Hartnell's "Giant Book of Computer Games" (I learned to program out of those back in the early 80s, so I treasure my copy) and asked Gemini to translate it to plain C. It compiled and played just fine on the first go.
  • serf 6 hours ago
    Please just include the prompts rather than saying "So I said X..."

    There is a lot of nuance in how X is said.

  • firesteelrain 8 hours ago
    Not surprised; there were so many variations of BASIC that unless you train ChatGPT on a bunch of code examples and context, it can only get so close.

    Try a local LLM, then train it.

    • ofrzeta 6 hours ago
      > ... unless you train ChatGPT on a bunch of code examples and context, it can only get so close.

      How do you do this?

  • clambaker117 6 hours ago
    Wouldn’t it have been better to use Claude 4?
    • sixothree 6 hours ago
      I'm thinking Gemini CLI because of the context size. He could add some information about the programming language itself to the project. I think that would help immensely.
      • 4b11b4 4 hours ago
        Even though the max token limit is higher, it's more complicated than that.

        As the context length increases, undesirable things happen.

  • calvinmorrison 6 hours ago
    This is great. I am currently vibe coding a replacement connector for some old EDI software written in a business BASIC kind of language called ProvideX, and the fork of it we use has undocumented behaviour.

    It uses some built-in FTP tooling that's terrible and barely works anymore, even internally.

    We are replacing it with a WinSCP implementation, since WinSCP can talk over a COM object.

    Unsurprisingly, the COM object in BASIC works great - the problem is that I have no idea what I am doing. I spent hours doing something like

    WINSCP_SESSION'OPEN(WINSCP_SESSION_OPTIONS)

    when I needed

    WINSCP_SESSION'OPEN(*WINSCP_SESSION_OPTIONS)

    It was obvious afterwards, because it was a pointer type of setup, but I didn't find it until I was pages and pages deep into old PDF manuals.

    However, while none of the vibe-coding agents understood the syntax of the system, they did help me analyse the old code, format it, and at least throw some stuff at the wall.

    I finished it up Friday; hopefully I deploy Monday.

  • empressplay 7 hours ago
    We have Claude Code writing Applesoft BASIC fine. It wrote a text adventure (complete with puzzles) and a PONG clone, among other things. Obviously it didn't do it 100% right straight out of the gate, but the hand-holding wasn't extensive.

    I've been using Grok 4 to write 6502 assembly language and it's been a bit of a slog, but honestly the issues I've encountered are due mostly to my naivety. If I'm disciplined, make sure it has all of the relevant information, and am (very) incremental, I've had some success writing game logic. You can't just tell it to build an entire game in a single prompt, but if you're gradual about it you can go places with it.

    Like any tool, if you understand its idiosyncrasies you can cater for them and be productive with it. If you don't, then yeah, it's not going to go well.

    • hammyhavoc 5 hours ago
      Ah yes, truly impressive: Pong, a game that countless textbooks et al. have recreated numerous times. There's a mountain of training data for something so unoriginal.
  • 486sx33 1 hour ago
    [dead]
  • pavelstoev 6 hours ago
    I vibe coded a site about vibe 2 code projects. https://builtwithvibe.com/
    • esafak 5 hours ago
      The "Yo dawg, I heard..." memes are writing themselves today.
  • CMay 6 hours ago
    This does kind of make me wonder.

    It's believable that we might see an increase in the number of new programming languages, since making new languages is becoming more accessible, or we could see fewer new languages as the problems of the existing ones are worked around more reliably with LLMs.

    Yet, what happens to adoption? Perhaps getting people to adopt new languages will be harder as generations come to expect LLM support. Would you almost need to use LLMs to synthesize tons of code examples that convert into the new language to prime the inputs?

    Once conversational intelligence machines reach a sort of godlike generality, maybe they could very quickly adapt to new languages from far fewer examples. That still might not help much with the gotchas of any tooling or other quirks.

    So maybe we'll all snap to a new LLM super-language in 20 years, or we could be concreting ourselves into the most popular languages of today for the next 50 years.