_aavaa_ 9 hours ago

“Every line of AI-generated code is a plausible-looking liability. It may pass basic tests, only to fail spectacularly in production with an edge case you never considered.”

Every time I read something along these lines, I have to wonder whose code these people review during code reviews. It’s not like the alternative is bulletproof code.

  • adocomplete 9 hours ago

    I was thinking the same thing. Humans push terrible code that slips through code review to production all the time. You spot it, you fix it, and move on.

    • kanwisher 9 hours ago

      Also, a lot of the AI code-reviewer tools catch bugs that you wouldn't catch otherwise.

  • resize2996 9 hours ago

    I do not know the future; every line of code is a plausible-looking liability.

  • peacebeard 6 hours ago

    A lot of people seem to equate using AI tools with deploying code that you don’t understand. All code should be fully understood by the person using the AI tool, and then again by the reviewer. The productivity benefit of these tools is still massive, and there is value in doing the research and investigation to understand what the LLM did if it was not clear up front.

  • moomoo11 8 hours ago

    They set up a GitHub Action that has AI do an immediate first pass (hallucinating like it's high on drugs, and not the good kind) and leave a review.

    Considering 80% of teammates are usually dead weight or mid at best (every team is carried by the one or two people who do 2-3x), they will do the bare minimum review. Let’s be real: PIPs are real. Job hopping because things are bad is real.

    It’s a problem. I have dealt with this and had to fire people.

macNchz 8 hours ago

I've been thinking in a similar way over the past year or so—we're seeing the emergence of more widespread access to custom software for use cases that previously never would have justified the investment.

There are so many situations where a little program can save one or a few people hours of tedious work but wouldn't make sense to build or support given traditional software development overhead; that becomes realistic with AI-assisted development. It's the idea of the sysadmin who has automated 80% of his job with a variety of shell scripts, borne out into many other roles.

We've seen the early phases of it with Replit, Lovable, et al., but I think there's a big playing field for secure/constrained runtime environments where non-developers can build things with AI. Low/no-code tooling increasingly seems anachronistic to me: I prefer code; just let an AI write and maintain it.

There's also a whole world of opportunity in the fact that many people who could benefit greatly from AI-built programs are simply not suited to building them themselves, barring a dramatically more capable AI. Enterprising software engineers can likely spin out tons of useful stuff there and address a client base they could never have reached before.

  • chasd00 7 hours ago

    I'm not an LLM superfan, but I do find them useful for coming up with one-off scripts to process data files or other small tasks that would have taken me a couple of hours to get right. The LLM produces something 85% of the way there, including command-line argument processing and all that stuff I hate to type out. I just fix the broken bits, make some adjustments, and I have what I need in about 20 minutes. For that kind of task they are very useful, IMO.
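
    To make that concrete, here is a hypothetical sketch of the sort of one-off script I mean (the file layout, column names, and flags are made up, not from any real project): a small CSV filter with the command-line handling already wired up.

        import argparse
        import csv

        def main():
            # The boilerplate an LLM typically drafts in one shot:
            # argument parsing plus file I/O.
            parser = argparse.ArgumentParser(
                description="Keep CSV rows where a column matches a value.")
            parser.add_argument("infile", help="input CSV path")
            parser.add_argument("outfile", help="output CSV path")
            parser.add_argument("--column", required=True, help="column to filter on")
            parser.add_argument("--value", required=True, help="value to keep")
            args = parser.parse_args()

            with open(args.infile, newline="") as src, \
                 open(args.outfile, "w", newline="") as dst:
                reader = csv.DictReader(src)
                writer = csv.DictWriter(dst, fieldnames=reader.fieldnames)
                writer.writeheader()
                for row in reader:
                    if row.get(args.column) == args.value:
                        writer.writerow(row)

        if __name__ == "__main__":
            main()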

jjmarr 8 hours ago

Long-term, SWEs at non-tech companies will spend much of their time reviewing vibe-coded features/prototypes/scripts from non-technical employees and scaling them once they become critical infrastructure.

This'll eliminate jobs in the "develop CRUD app" industry but will create better jobs in security/scalability/quality expertise. It'll take a few years, though, as all these vibe-coded business-process scripts start to fail.

Programmers miss the human element, which is that many managers look at a software project as too risky, even if automating a business process could trivially save money. There are millions of people in the USA who spend most of their day manually transferring data from incompatible systems.

AI allows anyone to create a hacky automation that demonstrates immediate value but has obvious flaws that can be fixed by a skilled SWE. That will make it easier to justify more spending on devs.

  • ares623 2 hours ago

    There had better be new bootcamps on how to maintain these systems, because without CRUD jobs, how would someone new get the experience?

  • bitwize 7 hours ago

    That is literally the exact promise of CASE tools in the '80s and early '90s, UML code-generation tools in the 2000s, and "low-code/no-code" platforms in the 2010s. It turned out to be a disaster every time, especially when the Idea Persons chucked their creations over the wall to SWEs to bash into actual products, because the Idea Persons had Far More Important Things To Do than maintain their coalesced brain farts.

    We're repeating history but with more energy consumption.

    • bonsai_spool 6 hours ago

      I do think there's a difference in kind here - we're not producing UML graphs that require programmer time to implement (or sending the diagrams to SE Asia and then code-reviewing the result).

      The code ‘works’ - and the folks who are improving the prototype can also benefit from the tools that the Idea Person used.

      • sarchertech 4 hours ago

        The code worked for the examples the OP gave as well. They weren’t talking just about UML graphs, but about automated tools to turn those graphs into code.

        And in the case of low-code/no-code, those produced working prototypes as well, and in most cases you could export them to raw code.

    • ndileas 6 hours ago

      I wasn't around for the second-millennium versions. At some point, isn't there a kind of activation-energy threshold where the prototype yields enough money/promise that this pattern works for good ideas and not for bad ones?

SpecialistK 8 hours ago

I feel personally targeted :D

Programming classes didn't work out for me in college, so I went into sysadmin work with a dash of DevOps.

Now I can make small tools for things like syncing my living room PC to a big LED panel above the TV (it was app-only, but there's a reverse-engineered Python implementation, which I vibe-coded a frontend for), or an orchestration script which generates a MAC address, assigns a DHCP reservation in OPNsense, and creates the VM or CT using that MAC and a password which gets stored in my password manager.

I could have done either of these projects myself with a few weekends and tutorials. Now it's barely an evening for a proof of concept and another evening to patch it up to an acceptable level. Knowing most of the stuff around coding itself definitely helped a lot.
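
For a sense of scale, the MAC-generation half of that orchestration script is only a few lines of Python. A rough sketch under stated assumptions: the helpers below are hypothetical, and the OPNsense reservation call is left as a placeholder comment because the real endpoint depends on the DHCP backend and API credentials.

    import secrets

    def random_mac():
        # Random locally administered, unicast MAC: set the "local" bit and
        # keep the "multicast" bit clear in the first octet.
        octets = [secrets.randbelow(256) for _ in range(6)]
        octets[0] = (octets[0] & 0b11111100) | 0b00000010
        return ":".join(f"{b:02x}" for b in octets)

    def random_password(nbytes=18):
        # Throwaway credential for the new VM/CT, destined for the password manager.
        return secrets.token_urlsafe(nbytes)

    if __name__ == "__main__":
        print(random_mac(), random_password())
        # The reservation step would POST to OPNsense's REST API with key/secret
        # auth; the path below is a placeholder, not a real endpoint:
        # requests.post("https://opnsense.local/api/<dhcp-plugin>/addReservation",
        #               auth=(API_KEY, API_SECRET),
        #               json={"mac": "...", "ip": "..."})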

infinitezest 6 hours ago

I find LLMs really useful on a daily basis, but I keep wondering: what's going to happen when the VC money dries up and the real cost of inference kicks in? It's relatively cheap now, but it's also being heavily subsidized. The usual answer is to jam ads into your product and slowly increase the price over time (see: Netflix), but I don't know how that'll work for LLMs.

  • pickledonions49 3 hours ago

    I heard that photonic chips might make running inference cheaper in data-center environments.

  • evolighting 5 hours ago

    You could self-host Ollama, vLLM, or something like that; open models are good enough for simple tasks. With a bit of extra effort and learning, this usually just works for most cases.
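
    For example, Ollama exposes a small HTTP API on localhost once a model has been pulled. A minimal sketch, assuming the Ollama server is running on its default port and a model named "llama3" is available:

        import requests  # third-party: pip install requests

        # Assumes `ollama serve` is running and `ollama pull llama3` was done.
        resp = requests.post(
            "http://localhost:11434/api/generate",
            json={
                "model": "llama3",
                "prompt": "Write a one-line Python snippet that reverses a string.",
                "stream": False,  # return a single JSON object, not a stream
            },
            timeout=120,
        )
        resp.raise_for_status()
        print(resp.json()["response"])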

    But in that situation there may be no further updates; the future remains uncertain.

dzink 5 hours ago

AI use on corporate code means exponentially compounding complexity. Even if it's carefully planned for, AI enables more features to be added, and more will be added à la carte; the tower inevitably becomes bigger than humans can manage, especially if they are given ever less time to maintain a fast-growing pile of code. That means eventually large enough codebases will be manageable only by AI, or not at all. Talk about lock-in.

horizonVelox999 3 hours ago

AI tools aren't perfect, but they're great for quick personal projects where you understand exactly what you need. It's like having a helpful assistant who writes the boring parts while you focus on solving the actual problem.

MostlyStable 9 hours ago

I've made this point before, and in the short to medium term I really do think it's one of the biggest and most underrated uses of AI. If I am making a tool for myself, and only myself, and if I deeply understand both the inputs and the expected outputs, and if it's a one-off script or tool designed to solve a specific problem I have, then a huge swath of the issues with AI coding go away.

It's still not perfect, but it is dramatically easier to be fast and productive, and it is a huge leap in capabilities for people who previously couldn't code anything at all but had deep enough domain knowledge to know what tools they wanted, approximately how they should work, and what kind of information should flow in and out.

ivanech 8 hours ago

AI tools have been so good for me for making home-cooked software. As a new-ish parent, it’s so much easier to do stuff. I don’t need to go into extra-deep focus mode to learn how to center a div for the hundredth time, I can spend that precious focus time on the problems that matter / the core motivation.

bitwize 8 hours ago

Ah, another "Now that we have AI, people can do [thing people could do for decades]" article. If there was something you wanted a computer to do that it did not yet do, you programmed it. And if you didn't know how, you learned. BASIC was always there.

But the industry as a whole moved away from the idea that end users should program computers sometime in the '80s or '90s (the glorious point-and-click future was not evenly distributed). So now the only tools for writing software out there are either outdated or require considerable ceremony to get started with (even npm install). So what, we're gonna paper over the gap with acres of datacenters stealing our energy and fresh water to play token numberwang? Fuck me!

This article, and generative AI in general, is appealing to the people on Golgafrinchian Ark Fleet Ship B (aka "the managerial class") because it helps them convince themselves that they can now do all the things the folks on Ark Fleet Ship A can do (so who needs them, anyway) without having to learn anything. Now you can program without having to program! You're an Idea Person, and that's what's really important; so just idea-person into ChatGPT and all the rest will be taken care of for you. I think these folks are in for a rude awakening.

  • djmips 8 hours ago

    I feel like you've never actually tried to make a tool with Claude Code or similar, because BASIC is not it; that's viewing the past with rose-coloured glasses. However, I understand your central thesis: we could have actually put effort into making something that average folks could use to effectively leverage computers in a way that requires 'code'. But you know we have tried. We have Scratch; we have all of the node-graph spaghetti in Unreal Engine and others. I am a programmer, but I finally sat down and went through the process of making a working, finished tool in a language I'm unfamiliar with using Claude Code, and it went really well. And if folks like Ben Krasnow of the Applied Science channel are using AI coding tools for things that would formerly have taken them 3-5x longer to struggle through unfamiliarity, then practically speaking it's working, although I also take a nod to your 'at what cost'. But the idea that we could have been living in some utopian BASIC-derived alternate universe seems a little bit optimistic to me. I like AI coding (if I don't have to think of the costs).

    • bitwize 7 hours ago

      I'm not saying BASIC is it. But it was good enough in its day—my father used it to write engine simulations. It was the first language to attack the problem of getting computing nonprofessionals to write their own programs for their own needs, and it achieved that very well by the standards of the 1960s-1980s. The fact that every computer shipped with a language that let users get started with programming right away was a noble thing we should have sought to preserve even in the present day. Scratch is for kids, and node spaghetti presents the usual no-code issues. HolyC comes close, but, you know, Terry Davis. An acquired taste.

      I was kinda hoping that language would be Python, but even that requires ceremony these days.

  • seabombs 2 hours ago

    In my experience, the people using AI to write programs are still programmers, as in they were trained in programming pre-AI. Managers are using AI to produce manager stuff: documents, spreadsheets, etc. Similarly, it's the marketers using AI to write ad copy, not the marketing manager.

    This may change as the tools become better known or adopted. But for now, the same people who did the job before are now using AI in that job.

    FWIW, in my IRL experience, all the work output of those using AI for whatever task has been of poorer quality than without.

  • pickledonions49 3 hours ago

    From what I can tell, some professionals seem to perceive it as a way to skip writing the easy stuff and only deal with the harder, more specific stuff that LLMs don't get (because they are incapable of new ideas). I don't know all the facts, but it seems as though this "home-cooked software" boom will be really dull because of LLM limitations. It always seems like I am actually learning something when copying code from books; maybe that is one reason why the '80s were interesting in terms of software. But what do I know, I wasn't alive then.