fhd2 6 hours ago

The last time I used a leetcode-style interview was in 2012, and it resulted in a bad hire (who just happened to have trained on the questions we used). I've hired something like 150 developers so far, and here's what I ended up with after a few years of trial and error:

1. Use recruiters and your network: Wading through the sheer volume of applications was nasty even before COVID; I don't even want to imagine what it's like now. A good recruiter or a recommendation can save a lot of time.

2. Do either no take-home test, or one that takes at most two hours. I do discuss the solution candidates came up with, so as long as they can demonstrate they know what they did there, I don't care too much how they did it. If I do this part, it's just to establish some baseline competency.

3. Put the candidate at ease - nervous people don't interview well, which is another problem with non-trivial tasks in technical interviews. I rarely do any live coding; if I do, it's pairing, and for management roles, to e.g. probe how they manage disagreement and such. But developers mostly shine when not under pressure, so I try to see that side of them.

4. Talk through past and current challenges, technical and otherwise. This is by far the most powerful part of the interview IMHO. Had a bad manager? Cool, what did you do about it? I'm not looking for them having resolved whatever issue we talk about, I'm trying to understand who they are and how they'd fit into the team.

I've been using this process for almost a decade now, and currently don't think I need to change anything about it with respect to LLMs.

I kinda wish it was more merit-based, but I haven't found a way to do that well yet. Maybe it's me, or maybe it's just not feasible. The work I tend to be involved in seems way too multi-faceted for a single standard test to seriously predict how well a candidate will do on the job. My workaround is to rely on intuition for the most part.

  • Stratoscope 5 hours ago

    When I was interviewing candidates at IBM, I came up with a process I was really happy with. It started with a coding challenge involving several public APIs, in fact the same coding challenge that was given to me when I interviewed there.

    What I added:

    1. Instead of asking "do you have any questions for me?" at the very end, we started with that general discussion.

    2. A few days ahead, I emailed the candidate the problem and said they were welcome to do it as a take-home problem, or we could work on it together. I let them know that if they did it ahead of time, we would do a code review and I would ask them about their design and coding choices. Or if they wanted to work on it together, they should consider it a pair programming session where I would be their colleague and advisor. Not some adversarial thing!

    3. This is the innovation I am proud of: a segment at the beginning of the interview called "teach me something". In my email I asked the candidate to think of something they would like to teach me about. I encouraged them to pick a topic unrelated to our work, or it could be programming related if they preferred that. Candidates taught me things like:

    • How to choose colors of paint to mix that will get the shade you want.

    • How someone who is bilingual thinks about different topics in their different languages.

    • How the paper bill handler in an ATM works.

    • How to cook pork belly in an air fryer without the skin flying off (the trick is punching holes in the skin with toothpicks).

    I listed these in more recent emails as examples of fun topics to teach me about. And I mentioned that if I were asked this question, I might talk about how to tune a harmonica, and why a harmonica player would want to do that.

    This was fun for me and the candidate. It helped put them at ease by letting them shine as the expert in some area they had a special interest in.

    • snickerer 8 minutes ago

      I immediately want to learn about all these cool things you listed.

      I work as a developer and as an interviewer (both freelance). Now I want to integrate your point 3 into my interviews, but not to choose better candidates, just to learn new stuff I never thought about before.

      It is your fault that I now see this risk in my professional life, coming at me. I could get addicted to "teach me something". 'Hey candidate, we have 90 minutes. Just forget about that programming nonsense and teach me cool stuff'

    • vanceism7_ 5 hours ago

      That sounds really cool. I wish I was running into more job interviews like the one you describe. The adversarial interviewing really hurts the entire feel of the process

    • asalahli 5 hours ago

      Love the 3rd point! I might start using that in my future interviews

      • Stratoscope 5 hours ago

        Please report back when you do!

        I don't remember how I came up with the idea. Maybe I just like learning things.

        One candidate even wrote after their interview, "that was fun!"

        Have you ever had a candidate say that? This was the moment when I realized I might be on to something. :-)

        Interviews are too often an adversarial thing: "Are you good enough for us?"

        But the real question is would we enjoy working together and build great things!

        People talk about "signal" in an interview. Someone who has an interest they are passionate and curious about and likes to share it with others? That's a pretty strong signal to me.

        Even if it has nothing to do with "coding".

        • neilv 21 minutes ago

          If this became popular, wouldn't people start rehearsing for it like they do Leetcode interviews, and it would become another performance that people focus on and optimize for, rather than on the skills for the job?

    • biztos 4 hours ago

      What, what? Harmonicas are tunable? TIL...

      • Stratoscope 3 hours ago

        Oh yes, they are!

        You just need a small file, like a point file. Any gasheads remember those? And a single edge razor blade or the like to lift the reed.

        To raise the pitch, you file the end of the reed, making it lighter so it vibrates faster.

        To lower the pitch, you file near the attached end of the reed. I am not sure on the physics of this and would appreciate anyone's insight.

        The specific tuning I've done many times is to convert a standard diatonic harp (the Richter tuning) to what is now called the Melody Maker tuning.

        The Richter tuning was apparently designed for "campfire songs". You could just "blow, man, blow" and all the chords would sound OK.

        Later, blues musicians discovered that you could emphasize the draw notes, the ones that are easy to bend to a flat note to get that bluesy sound. This is called "cross harp". For example, in a song in G you would use a C harp instead of one tuned in G.

        The problem with cross harp is that the 7th is a minor 7th and you have no way to raise it up to a major 7th if that would fit your song. And the 2nd is completely missing! In fact you just have the tonic (G in this case) on both the draw and blow notes where you might hope to hit the 2nd (A). There is no A in this scale, only the G twice.

        To imagine a song where this may be a problem, think of the first three notes of the Beatles song All My Loving. It starts with 3-2-1. Oops, I ain't got the 2. Just the 1 twice.

        This is where the file comes in. You raise the blow 1st to a major 2nd. And you raise the minor 7th to a major 7th in both octaves.

        Now you have a harp with that bluesy sound we all love, but in a major scale!

        • cshimmin 2 hours ago

          Re: lower pitch, I'd hazard a guess that you're basically reducing the restoring force, so the resonant frequency goes down. Think of the attachment point as a bunch of springs in parallel; you snip a few of them and the overall spring constant is reduced. Or another way to think of it: Imagine you had a reed of a given width and then added mass to the end by making it wider at the non-attached end. You'd expect the frequency to go down.
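
          As a back-of-envelope sketch of that intuition (illustrative Python with made-up numbers, not a real reed model): treat the reed as a mass on a spring, where f = sqrt(k/m) / 2π. Filing the tip removes mass, so the pitch rises; filing the base softens the effective spring, so the pitch falls.

              import math

              def reed_freq(k, m):
                  # resonant frequency of a mass-on-spring system: f = sqrt(k/m) / (2*pi)
                  return math.sqrt(k / m) / (2 * math.pi)

              f0     = reed_freq(k=100.0, m=1.00e-4)  # untouched reed (made-up values)
              f_tip  = reed_freq(k=100.0, m=0.95e-4)  # tip filed: less mass, pitch rises
              f_base = reed_freq(k=95.0,  m=1.00e-4)  # base filed: less stiffness, pitch falls
              print(f0, f_tip, f_base)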

          • ashdnazg 13 minutes ago

            I sometimes tune accordions, which is rather similar, but if you want to lower the tone by a lot, you can plop some solder close to the tip of the reed.

            It's pretty clear why that's the opposite of filing off material close to the tip, so obviously the tone goes lower.

            In my mental image, filing close to the base of the reed gives the reed a similar shape as putting extra material next to the tip (thinner at the base, thicker next to the tip), and that's why it behaves the same.

    • roenxi an hour ago

      (3) biases the process strongly in favour of people who can spin a good story. If you're looking for a certain team culture then OK, but this is going to neatly screen out anyone who just wants to do their job well and doesn't particularly know how to sell the extra-curriculars they have.

      • cupofjoakim an hour ago

        Specifically for engineering I think it could work if you really press on the fact that it's about teaching something. A core part of being in an engineering team is walking each other through concepts, so judging how well someone can explain concepts is actually a good thing, I believe.

      • sbarre an hour ago

        That person can teach OP something about code then? They said it doesn't have to be work-related, but it can be.

        I don't know that many programmer/developer jobs where you can just put your headphones on and code without ever talking to another person.

        Being able to explain/teach your work is part of "doing the job well" for a developer (IMO).

      • the_snooze 35 minutes ago

        A big part of any engineering job is casual technical communication. This seems like an entirely fair line of questioning. And the candidate gets to pick their strongest topic. What more do you want?

      • DrNosferatu an hour ago

        People were free to teach something in programming.

    • DrNosferatu an hour ago

      Really nice!

      I wish more bosses were like you.

    • rvba 4 hours ago

      This post is some unintentional satire about how IBM operates (nobody invents anything at IBM anymore).

      The opening question is a copy of what was done before (probably by someone who doesn't work at IBM anymore) and all the new stuff is stolen from outsiders.

      • Stratoscope 2 hours ago

        Seriously? I can assure you with 100% certainty that the most important point - "teach me something" - was entirely my own innovation.

        I am tempted to take some offense at your comment. But I have to assume that you mean well.

        You are correct that I don't work for IBM any more. What does that have to do with anything?

        • another2another 2 hours ago

          I'm surprised at the negativity your approach seems to have sparked in a few, but I found it really great, and probably very effective as well. I will probably start to use it at some point.

          Thanks for sharing.

    • radium_raven 4 hours ago

      > In my email I encouraged the candidate to teach me something unrelated to our work, or programming related if they preferred that.

      What if I decide to teach you something about the Quran and you don't hire me?

      Perhaps this is just urban legend, but from the stories I've heard about hiring at FAANG-type companies, there are people out there whose only goal in interviewing is to bait you into a topic they can sue you over.

      Worst instance I have heard of is when an interviewer asked about the books the candidate liked to read (since the candidate said they're an avid reader in their resume) and he just said that he liked to read the Bible. After not getting hired for failing the interviews he accused the company of religious discrimination.

      I'm by no means an expert in US law and don't know the people this happened to directly, so maybe it's just one big fantasy story, but it doesn't seem that far-fetched that:

      - If you are a rich corporation then people will look to bait you into a lawsuit

      - If you give software engineers (or any non-lawyers) free rein on how to conduct interviews then some of them can be baited into a situation that the company could be sued over way more easily than a rigid leetcode interview

      I think nothing came of the fellow who liked reading the Bible but I would imagine the legal department has a say in what kind of interview process a rich company can and can not use.

      • swiftcoder 4 hours ago

        I think you may have misunderstood the guidance from your legal team on this.

        There is literally no way to prevent a candidate from disclosing a protected characteristic during an interview. Some obvious examples: they might show up to the interview in a wheelchair, they might be wearing a religious garment, they might be visibly pregnant, and so on...

        What legal doesn't want is you asking questions directly intended to elicit that kind of information when the candidate didn't volunteer it. Asking the candidate a direct question like "are you planning to have kids?" makes it sound like that information will be used in the hiring decision.

      • baoluofu 4 hours ago

        I have found that if you tighten the parameters a bit, you can still get all the benefit of what this question is asking for. For example, teach me the rules of a sport, board game etc. You still get to see if they can present a coherent explanation of something relatively complex, but you can avoid these potentially dangerous topics.

      • Stratoscope 2 hours ago

        I gave candidates the option to teach me something, if they wanted to.

        As you can see from the examples I gave, what they taught me was completely up to them.

        It was just a fun exercise that I and every candidate enjoyed.

        Forgive me, but I just don't see the problem here.

        If you did decide to teach me something about the Quran, that would be great!

        I don't know enough about these holy books, and I always welcome a chance to learn more.

        • znpy an hour ago

          > Forgive me, but I just don't see the problem here.

          There is no problem. But some people will create problems out of thin air, when there is none.

      • biztos 4 hours ago

        Or, you can choose to not live in fear.

      • jaggederest 2 hours ago

        Companies with armies of attack lawyers riding helicopter gunships don't make great targets for this kind of thing, in general. I shudder to think about trying to take Apple or Disney to court.

        It's somewhat overblown - obviously anyone can try to submit a demand letter about anything. My experience with legal in the hiring process is they want to avoid obvious own goals, and document the process so that clear reasoning can be expressed. Then, unless you really do something obviously discriminatory, you can tell people who claim they've been discriminated against to go pound sand.

        There are lots of good tactical reasons to settle claims like that, so lawyers may advise you to settle, but if you're of the "we don't negotiate" mindset, in my experience most lawyers are quite happy to gear up for a fight with the right groundwork in place.

      • krageon 2 hours ago

        This is outrage bait you have fallen prey to. Now you're spreading it. You're better than this.

      • freilanzer 4 hours ago

        > What if I decide to teach you something about the Quran and you don't hire me?

        Simple: don't pick subjects such as religion, sexuality, etc. It's not that deep.

  • biztos 4 hours ago

    As someone with a pretty long career already, and who's comfortable talking about it, I was a bit surprised that in three interviews last year nobody asked a single thing about any of my previous work. One was live coding and tech trivia, the other two were extensive take-home challenges.

    To their credit, I think they would have hired "the old guy" if I'd aced their take-homes, but I was a bit rusty and not super thrilled about their problem areas anyway so we didn't get that far. And honestly it seems like a decent system for hiring well-paid cogs in your well-oiled machine for the short term.

    Your method sounds like what we were trying to do ten years ago, and it worked pretty well until our pipeline dried up. I wish you, and your candidates, continued success with it: a little humanity goes a long way these days.

  • axegon_ 5 hours ago

    I feel like take-home tests are meaningless and I always have. Even more so now with LLMs, though 9/10 times you can tell if it's an LLM - people don't normally put trivial comments in the code, such as

    > // This line prevents X from happening

    I've seen a number of those. The issue here is that you've already wasted a lot of time with a candidate.

    So being firmly against take-home tests or even leetcode, I think the only viable option is a face-to-face interview with a mixture of general CS questions (e.g. what is a hashmap, benefits and drawbacks; what is a readers-writer lock) and some domain-specific questions: "You have X scenario (insert details here), which causes a race condition - how do you solve it?"
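
    As a minimal sketch of the race condition scenario I mean (illustrative Python, not an actual interview question of mine): two threads do a read-modify-write on a shared counter, and without the lock some increments are lost.

        import threading

        counter = 0
        lock = threading.Lock()

        def increment(n):
            global counter
            for _ in range(n):
                with lock:  # remove this and the final count typically comes up short
                    counter += 1

        threads = [threading.Thread(target=increment, args=(100_000,)) for _ in range(2)]
        for t in threads: t.start()
        for t in threads: t.join()
        print(counter)  # 200000 with the lock; += is not atomic without it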

    • throwaway2037 4 hours ago

          > I feel like take home tests are meaningless and I always have. Even more so now with LLMs
      
      This has been discussed many times already here. You need to set an "LLM trap" (like an SSH honeypot) by asking the candidate to explain the code they wrote. Also, you can wait until the code review to ask them how they would unit test the code. Most cheaters will fall apart in the first 60 seconds. It is such an obvious tell. And if they used an LLM but can very well explain the code, well, then, they will be a good programmer on your team, where an LLM is simply one more tool in their arsenal.

      I am starting to think that we need two types of technical interview questions: Old school (no LLMs allowed) vs new school (LLMs strongly encouraged). Someone under 25 (30?) is probably already making great use of LLMs to teach themselves new things about programming. This reminds me of when young people (late 2000s/early 2010s) began to move away from "O'Reilly-class" (heh, like a naval destroyer class) 500 page printed technical books to reading technical blogs. At first, I was suspicious -- essentially, I was gatekeeping on the blog writers. Over time, I came to appreciate that technical learning was changing. I see the same with LLMs. And don't worry about the shitty programmers who try to skate by only using LLMs. Their true colours will show very quickly.

      Can I ask a dumb question? What are some drawbacks of using a hash map? Honestly, I am nearly neck-bearded at this point, and I would be surprised by this question in an interview. Mostly, people ask how do they work (impl details, etc.) and what are some benefits over using linear (non-binary) search in an array.

      • axegon_ 2 hours ago

        "Drawbacks" was the wrong word to use here, "potential problems" is what I meant - collisions. Normally a follow up question: how do you solve those. But drawbacks too: memory usage - us developers are pretty used to having astronomical amounts of computational resources at our disposals but more often than not, people don't work on workstations with 246gb of ram.

      • Cthulhu_ 4 hours ago

        If you really need to test them / check that they haven't used an LLM or hired someone else to do it for them (which was how people "cheated" on take-home tests before), ask them to implement a feature live; it's their code, it should be straightforward if they wrote it themselves.

  • jkaplowitz 4 hours ago

    How does that process handle people who have been out of work for a few years and can pass a take-home technical challenge (without a LLM) but cannot remember a convincing level of detail on the specifics of their past work? I’ve been experiencing your style of interview a lot and running up against this obstacle, even though I genuinely did the work I’m claiming to have done.

    Especially people with ADHD don’t remember details as long as others, even though ADHD does not make someone a bad hire in this industry (and many successful tech workers have it).

    I do prefer take-home challenges to live coding interviews right now, or at least a not-rushed live coding interview with some approximate advance warning of what to expect. That gives me time to refresh my rust (no programming language pun intended) or ramp up on whichever technologies are necessary for the challenge, and then to complete the challenge well even if taking more time than someone who is currently fresh with the perfectly matched skills might need. I want the ability to show what I can do on the job after onboarding, not what I can do while struggling with long-term unemployment, immigration, financial, and family health concerns. (My fault for marrying a foreigner and trying to legally bring her into the US, apparently.)

    And, no, my life circumstances do not make it easy for me to follow the common suggestion of ramping up in my "spare time" without the pressures of a job or a specific interview task. That's completely different from what I can do on the job or for a specific interview's technical challenge.

    • sbarre an hour ago

      This is slightly tangential to your questions, but to address the "remembering details about your past work" part: I've long encouraged the developers I mentor to keep a log/doc/diary of their work.

      If nothing else, it's a useful tool when doing reviews with your manager, or when you're trying to evaluate your personal growth.

      It comes in really handy when you're interviewing or looking for a new job.

      Julia Evans calls it a "brag doc": https://jvns.ca/blog/brag-documents/

      It doesn't have to be super detailed either. I tell people to write 50% about what the work was, and 50% about what the purpose/value of the work was. That tends to be a good mix of details and context.

      Writing in it once a month, or once a sprint, is enough. Even if it's just a paragraph or two.

    • fhd2 3 hours ago

      I can't say I've interviewed someone to whom this applies - unfortunately! It probably just doesn't get surfaced by my channels.

      I would definitely not expect someone out of work for a while to have any meaningful side projects. I mean, if they do, that's cool, but I bet the sheer stress of not having a job kills a lot of creativity and productivity. I haven't been there, so I can only imagine.

      For such a candidate, I'd probably want to offer them a time limited freelance gig rather quickly, so I can just see what they work like. For people who are already in a job, it's more important to ensure fit before they quit there, but your scenario opens the door to not putting that much pressure on the interview process.

  • fhd2 6 hours ago

    To add: I can very well imagine this process isn't suitable for FAANG, so I can understand their university-exam-style approach to a degree. It's easy to armchair criticise, but I don't know if I could come up with something better at their scale. These days, I'm mostly engaged by startups to help them build an initial team; I acknowledge that's different from what a lot of other folks hire for.

    • adastra22 5 hours ago

      Why not? Plenty of large organizations hire this way. My first employer is bigger than any FAANG company by head count, and they hired this way. Why is big tech different?

      • michaelt 2 hours ago

        When you're operating a small furniture company, your master craftsmen can hand-fit every part. Match up the hole that's slightly too big with the dowel that's slightly too big, which they put to one side the other day.

        When you're operating a huge furniture company, you want every dowel and every hole the same size. Then any fool can assemble it, and you don't have to waste time and space on a collection of slightly-too-large dowels.

        To scale up often means focusing on consistency and precision, and less on expert judgement and care.

      • fhd2 3 hours ago

        Well, I respect the scale and speed. My process was still working fine at ~5 per month. I have doubts it'd work with orders of magnitude more. There's a lot of intuition and finesse in there, that is probably difficult to blindly delegate. Plus, big companies have very strong internal forces to eliminate intuition in favour of repeatable, measurable processes.

      • lmz 5 hours ago

        The desire for a scalable, standardized scoring mechanism so they can avoid lawsuits.

        • adastra22 3 hours ago

          A lawsuit on what basis?

        • biztos 4 hours ago

          Why would a big-non-tech company not have the same desire to avoid lawsuits?

          And yet this interviewing problem seems to only affect tech companies.

    • mnky9800n 6 hours ago

      Is your job consulting for hiring technical positions?

      • fhd2 3 hours ago

        I have a consultancy - we mostly do development, but I also do quite a bit of "consulting CTO" work. Quite often, that means helping with hiring.

        But hiring is nowhere near my full time job, thankfully :D

        • mnky9800n 3 hours ago

          That’s cool. My main job is being a research scientist but i also work at a computational science consultancy that essentially builds numerical simulations for hire along with other software consultancy jobs. I occasionally have been asked to serve as an outside consultant for hiring data science positions. I’ve been wondering how to grow that a bit into more regular work because I enjoy it. Any thoughts on that?

  • joshvm 3 hours ago

    I’ve done the “at home” test for ML recently for a small AI consulting firm. It's a nice approach and got me to the next round, but the way the company evaluated it was to go through the questions and ask "fundamental ML bingo" questions. I don't think I had a single discussion about the company in the entire interview process. I was told up front "we probably won't get to the third question because it will take time to discuss theory for the first two".

    If you're a company that does this, please dogfood your problems and make sure the interview makes the effort feel valued. It also smells weird if you claim it's representative of a typical engineering discussion. We all know that consultancy is wrangling data, bad data, and really bad data. If you're arguing over which optimiser to choose, I'd say there are better ways to waste your customer's money.

    On the other hand I like leetcode interviews. They're a nice equalizer and I do think getting good at them improves your coding skill. The point is to not ask ludicrously obscure hard problems that need tricks. I like the screen share + remote IDE. We used Code, which was nice, and they even had tests integrated so there wasn't the whiteboard pressure to get everything right in your head. You also know instantly if your solution works, which is a nice confidence boost if you get it first try, plus you can see how candidates would actually debug, etc.

  • throwaway2037 4 hours ago

        > Put the candidate at ease - nervous people don't interview well
    
    This is great advice. I have had great success with it. I give the same 60-second speech at the start of each interview. I tell candidates that I realise tech interviews are stressful -- "In 202X, the 'tech universe' is infinitely wide and deep. We can always find something that you don't know. If you don't have experience in a topic that we raise, let us know. We will move to a new topic. All, yes, all people that we interviewed had at least one topic where they had no experience, or none recent." Also, it helps to do "interview ramp-up", where you start with some very quick wins to build up the candidate's confidence. It is OK to tell them "I will push a bit harder here" so they know you are not being a jerk... only trying to dig deeper into their knowledge.

    • dmoy 4 hours ago

      Putting candidate at ease is definitely important.

      Another reason:

      If you're only say one of four interviewers, and you're maybe not the last interviewer, you really want the candidate to come out of your interview feeling like they did well or at least ok enough, so that they don't get tilted for the next interview. Because even if they did really poorly in your interview, maybe it's a fluke and they won't fail the rest of the loop.

      Which is then a really difficult skill as an interviewer - how do you make sure someone thinks they did well even if they did very poorly? Ain't easy if there's any technical guts in the interview.

      I sure as shit didn't get any good at that until I'd conducted like 100+ interviews, but maybe I'm just a slow learner haha

  • neilv 3 hours ago

    > 2. Do either no take-home test, or one that takes at most two hours. I do discuss the solution candidates came up with, so as long as they can demonstrate they know what they did there, I don't care too much how they did it. If I do this part, it's just to establish some baseline competency.

    You need to do this to establish some baseline competency... for junior hires with no track record?

  • vanceism7_ 5 hours ago

    Wow, that was a great write up. Can I interview with you? Lol, everything you wrote was really spot on with my own interview experiences. I tend to get super nervous during interviews and have choked in many interviews that asked for live coding on crazy algorithm problems. The state of hiring seems to be really bad right now. But I'll take your advice and try to get in contact with some recruiters.

  • kamaal 5 hours ago

    >>Do either no take-home test, or one that takes at most two hours. I do discuss the solution candidates came up with, so as long as they can demonstrate they know what they did there, I don't care too much how they did it. If I do this part, it's just to establish some baseline competency.

    The biggest problem with take-home tests is not the people who don't show up because they couldn't finish the assignment, but that the people who do now expect to get hired.

    95% of people don't finish the assignment. 5% do. Teams think that submitting the assignment with a 100% feature set, unit tests, an onsite code review and onsite additional feature implementation still shouldn't mean a hire (if nothing else, there are just not enough positions to fill). From the candidate's perspective, it's pointless to spend a week working, doing everything the hiring team asked for, and still receive a 'no'.

    I think if you are not ready to pay 2x - 5x market comp, you shouldn't use take-home assignments to hire people. It is too much work to do just to receive a reject at the end. Upping the comp absolutely changes that: working for a week for a chance at earning 3x more makes sense.

    • sage76 an hour ago

      Most of the time, those take-home tests cannot be done in 2 hours. I remember one where I wasn't even done with the basic setup in 2 hours, installing various software/libraries and debugging issues with them.

    • mavhc 5 hours ago

      If you're expecting a week's worth of work from them you'd better pay them for their time, if they turn up.

    • Fr3dd1 5 hours ago

      We did a lot of these assignments and no one assumed that they would be hired if they completed it. It's about how you communicate your intent. I always told the candidates that the goal of the task is 1. to see some code and whether some really basic stuff is on point, and 2. to see that they can discuss their code with someone.

      • kamaal 5 hours ago

        >>We did a lot of these assignments and no one assumed that they would be hired if they completed it. It's about how you communicate your intent.

        Be upfront that finishing the assignment doesn't guarantee a hire and very likely the very people you want to hire won't show up.

        Please note that, as much as you want good people to participate in your process, most good talent doesn't like to waste its time and effort. How would you feel if someone wasted your time and effort?

        • Fr3dd1 4 hours ago

          I am in Germany, so by far not the same situation as in other areas of the world. If I got such an assignment myself and had the feeling that it would help both the company and me to verify whether it's a fit, I would do that 1-to-3-hour task very happily.

    • wkat4242 5 hours ago

      > From the candidate's perspective, its pointless to spend a week working and doing every thing the hiring team asked for and still receive a 'no'.

      Uhhh yeah. That would really piss me off. Like review-bombing Glassdoor type of pissed.

sergioisidoro 16 hours ago

I've let people use GPT in coding interviews, provided that they show me how they use it. In the end I'm interested in knowing how a person solves a problem and thinks about it. Do they just accept whatever crap the GPT gives them, can they take a critical approach to it, etc.

So far, everyone that elected to use GPT did much worse. They did not know what to ask, how to ask, and did not "collaborate" with the AI. So far my opinion is that if you have a good interview process, you can clearly see who the good candidates are, with or without AI.

  • alexjplant 11 hours ago

    Earlier this past week I asked Copilot to generate some Golang tests and it used some obscure assertion library that had a few hundred stars on GitHub. I had to explicitly ask it to generate idiomatic tests and even then it still didn't test all of the parameters that it should have.

    At a previous job I made the mistake of letting it write some repository methods that leveraged SQLAlchemy. Even though I (along with my colleague via PR) reviewed the generated code we ended up with a preprod bug because the LLM used session.flush() instead of session.commit() in exactly one spot for no apparent reason.

    LLMs are still not ready for prime-time. They churn out code like an overconfident 25-year-old that just downed three lunch beers with a wet burrito at the Mexican place down the street from the office on a rainy Wednesday.

    • DustinBrett 10 hours ago

        I feel like I am taking crazy pills that other devs don't feel this way. How bad are the coders who think these AIs are giving them superpowers? The PRs with AI code are so obvious, and when you ask the devs why, they don't even know. They just say, well, the AI picked this, as if that means something in and of itself.

      • doix 10 hours ago

          AI gives super powers because it saves you an insane amount of typing. I used to be a vim fanatic; I was very efficient, but whenever I changed language there was a period I had to spend getting efficient again. Setting up new snippets for boilerplate, maybe tweaking some LSP settings, saving some new macros.

          Now in Cursor I just write "extract this block of code into its own function and set up the unit tests" and it does it, with no configuration on my part. Before, I'd have a snippet for the unit test boilerplate for that specific project, I'd have to figure out the mocks myself, etc.

          Yes, if you use AI to just generate new code blindly and check it in without any understanding, you end up with garbage. But those people were most likely copy-pasting from SO before AI; AI just made them faster. A sketch of the kind of request I mean follows.
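
          To illustrate (everything below is made up): "extract this into a function and set up the unit tests" boils down to the model typing out boilerplate like this for you.

              import unittest

              def total_with_tax(prices, rate=0.2):
                  # extracted from what used to be an inline block
                  return sum(prices) * (1 + rate)

              class TestTotalWithTax(unittest.TestCase):
                  def test_applies_rate(self):
                      self.assertAlmostEqual(total_with_tax([10.0, 20.0], rate=0.1), 33.0)

              if __name__ == "__main__":
                  unittest.main()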

        • abenga 7 hours ago

          AI saves you an insane amount of typing, but adds an insane amount of reading, which is strictly harder than typing (at least for me).

          • anonzzzies 7 hours ago

            Hmm, that is interesting; reading is harder? You have to read a lot of code anyway, right? From team members, examples, 3rd party code/libraries? Through the decades of programming, at least, I became very proficient at rapidly spotting 'fishy' code and generally understanding code written by others. AI coding is nice because, for me, it is the opposite of what you describe: reading the code it generates is much faster than writing it myself, even though I am fast at writing it; not that fast.

            I have said it here before, because I would love to see some videos from HNers who complain AI gives them crap, as we are getting amazing results on large and complex projects... We treat AI code the same as human code: we read it and recommend or implement fixes.

            • swiftcoder 4 hours ago

              > Hmm, that is interesting; reading is harder?

              Much, much harder. Sure, you can skim large volumes of code very quickly. But the type of close reading that is required to spot logic bugs in the small is quite taxing - which is the reason that we generally don't expect code review processes to catch trivial errors, and instead invest in testing.

              • anonzzzies 3 hours ago

                But we are not talking about large volumes of code here; we are talking about: the LLM generates something, you check it and close-read it to spot logic bugs, and you either fix it yourself, ask the LLM, or approve. It is very puzzling to me how this is more work/taxing than writing it yourself, except in very specific cases.

                Examples from everyday reality in my company: writing 1000s of lines of React frontend code is all LLM (in very little time) and reviews catch all the issues, while on the database implementation we are working on we sometimes spend an hour on a few lines, and the LLM's suggestions never help. Reviewing such a little bit of code has no use, as it's the result of testing a LOT of scenarios to get the most performance out in the real world (across different environments/settings). However, almost everyone in the world is working on (something like) the former, not the latter, so...

                • swiftcoder 3 hours ago

                  > writing 1000s of lines of react frontend code

                  Maybe we just located the actual problem in this scenario.

                  • anonzzzies 3 hours ago

                    Shame we cannot combine the two threads we are talking about, but our company/client structure does not allow us to do this differently (quickly: our clients have existing systems with different frontend tech; they are all large corps with many external and internal devs who built some 'framework' on top of whatever frontend they are using; we cannot abstract/library-fy to re-use across clients). I would if I could. And this is actually not a problem (outside it being a waste, to which I agree), as we have never delivered more for happier clients in our existence (around 25 years now) than in 2024 because of that. Clients see the frontend, and being able to over-deliver there is excellent.

            • abenga 5 hours ago

              "In the small", it's easy to read code. This code computes this value, and writes it there, etc. The harder part is answering why it does what it does, which is harder for code someone else wrote. I think it is worthwhile expending this effort for code review, design review, or understanding a library. Not for code that I allegedly wrote. Especially weeks removed, loading code I wrote into "working memory" to fix issues or add features is much much easier than code I didn't write.

              • anonzzzies 4 hours ago

                > The harder part is answering why it does what it does, which is harder for code someone else wrote.

                That's a vital part of writing software though.

                • abenga 3 hours ago

                  True. I will save effort by only expending it when needed (when I need to review my coworkers' code, legacy code, or libraries).

            • dagss 6 hours ago

              Depends ofc on the complexity of the area, but... reading someone's code to me feels a bit like being given a 2D picture of a machine, then having to piece together a 3D model in my head from a single 2D photo of one projection of the machine. Then figuring out if the machine will work.

              When I write code, the hard part is already done -- the mental model behind the program is already in my head and I simply dump it to the keyboard. (At least for me, typing speed has never been a limiting factor.)

              But when I read code, I have to reassemble the mental model "behind" it in my head from the output artifact of the thought processes.

              Of course one needs to read code of co-workers and libraries -- but it is more draining, at least for me. Skimming it is fast, but reading it thoroughly enough to find bugs requires building the full mental model of the code, which takes more mental effort for me.

              There is a huge difference in how I read code from trusted experienced coworkers and juniors though. AI falls in the latter category.

              (AI is still saving me a lot of time. Just saying I agree a lot that writing is easier than reading still.)

              • anonzzzies 5 hours ago

                Running code in your head is another issue that AI won't solve (yet); we had different people/scientists working on this, the most famous being Bret Victor, but also Jonathan Edwards [0] and Chris Granger (Light Table). I find the example in [0] the best: you are sitting there with your very logically weak brain trying to think out wtf this code will do, while there is a very powerful computer next to you that can tell you. But doesn't. And yet we are mostly restricted to first thinking out the code to at least some extent before we can see it in action; same for the AI.

                [0] https://vimeo.com/140738254

              • pineaux 6 hours ago

                You mean like a blueprint of a machine? Because that is exactly how machines are usually presented in official documentation. To me the skill of understanding how "2d/code" translates to "3d/program execution" is exactly the skill that sets amateurs apart from pros, saying that, I consider myself an amateur in code and a professional in mechanical design.

            • jajko 5 hours ago

              When you type the code, you definitely think about it, deepening your mental model of the problem, stopping and going back and changing things.

              Reading is massively passive, and in fact much more mentally tiring if the whole reading is in detective mode: 'now where the f*ck are the hidden issues'. Sure, if your codebase is 90% massive boilerplate then I can see quickly generated code saving a lot of time, but such scenarios were normally easy to tackle before LLMs came along. Or at least those I've encountered in the past few decades.

              Do you like debugging by just tracing the code with your eyes, or by actually working on it with data and test code? I've never seen effective use of the former, regardless of seniority. But I've seen in past months wild claims about the magic of LLMs that were mostly unreproducible by others, and when folks were asked for details they went silent.

          • petesergeant 5 hours ago

            Here is a chat transcript from today, I don't know if it'll be interesting to you. You can't see the canvas it's building the code in: https://chatgpt.com/share/67a07afe-3e10-8004-a5ea-cc78676fb6...

            Yes, I have to read what it writes, and towards the end it gets slow and starts making dumb mistakes (always; there's some magically bad length at which it always starts to fumble), but I feel like I got the advantages of pairing out of it without actually needing to sit next to another human? I'll finish the script off myself and review it.

            I don't know if I've saved actual _time_ here, but I've definitely saved some mental effort on a menial script I didn't actually want to write, that I can use for some of the considerably more difficult problems I'll need to solve later today. I wouldn't let it near anything where I didn't understand what every single line of code it wrote was doing, because it does make odd choices, but I'll probably use it again to do something tomorrow. If it needs to be part of a bigger codebase, I'll give it the type-defs from elsewhere in the codebase to start with, or tell it it can assume a certain function exists.

        • porridgeraisin 33 minutes ago

          > vim > LLM

          I use both :)

          As you'd know, Vim has a way to run shell programs with your selection as standard input, and it will replace the selection with stdout.

          So I type in my prompt, e.g. "mock js object array with 10 items each having name age and address", do `V!ollama run whatever` for example, and it fills it in right there.

          Now this is blocking, so I have a hacky way in my vimrc to run it async and fill the text in later based on marks. Neovim really, since I use jobstart().

          This also works with lots of other stuff, like quick code/mock generation; e.g. sometimes instead of asking an LLM I just write javascript/python inline and run `vap!node`/python on it.

        • WalterBright 8 hours ago

          > AI gives super powers because it saves you an insane amount of typing.

          I must be very different, as very little of my coding time is spent typing.

          • doix 3 hours ago

            For me, getting what's in my head out onto the screen as fast as possible increases my productivity immensely.

            Maybe it's because I'm used to working with constant interruptions, but until what I want is on the screen, I can't start thinking about the next thing. E.g. if I'm making a new class, I'm not thinking about the implementation of the inner functions until I've got the skeleton of the class in place. The faster I get each stage done, the faster I work.

            It's why I devoted a lot of time getting efficient at vim, setting up snippets for my languages, etc. AI is the next stage of that in my mind.

            Maybe you can keep thinking about next steps while stuff is "solved" in your head but not on the screen. It also depends on the type of work you're doing. I've spent many hours to delete a few lines and change one, obviously AI doesn't help there.

          • v20 3 hours ago

            I think that spending a lot of time typing is likely an architectural problem. But I do see how AI tools can be used for "oneshot" code where pondering maintainability and structure is wasted time.

          • sampullman 7 hours ago

            It's the same for me if I'm working on something unique or interesting, or a new technology.

            Some kinds of coding require very little thinking, though. Converting a design to frontend interface or implementing a CRUD backend are mostly typing.

          • girvo 8 hours ago

            That's certainly the case for myself, too, though I've got roughly two fewer decades in this than yourself!

            But typing throughput has never been my major bottleneck. Refactoring is basically never just straight code transforms, and most of my time is spent thinking, exploring or teaching these days

        • yoyohello13 9 hours ago

          See, THIS is a usage that makes sense to me. Using AI to manipulate existing code like this is great. I save a ton of time by pasting in a JSON response and saying something like "turn this into data classes" - it makes API work so fast. On the other hand I really don't understand devs that say they are using AI for ALL their code.
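
          Something like this (hypothetical payload and fields, just to illustrate the transformation):

              import json
              from dataclasses import dataclass

              @dataclass
              class User:
                  id: int
                  name: str
                  email: str

              payload = '{"id": 1, "name": "Ada", "email": "ada@example.com"}'
              user = User(**json.loads(payload))  # the boilerplate the LLM types for you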

          • mewpmewp2 8 hours ago

            Copilot kind of autocompletes exactly what I want most of the time. When I want something bigger I will ask Claude to give me that, but I always know what I am going to get, and I could have written it myself; it would have just taken tons of typing. I feel like I am kind of an orchestrator of sorts.

          • weebull 4 hours ago

            So if you were to do a transformation like that, you'd cut the code and paste it into a new function. Then you'd modify that function to make the abstraction work. An LLM will rewrite the code in the new form. It's not cut/paste/edit. It's a rewrite every time, with the old code as reference.

            Each rewrite is a chance to add subtle bugs, so I take issue with the description of LLMs "working on existing code". They don't use text editors to manipulate code like we do (although it might be interesting if they did) and so will have different issues.

        • swiftcoder 4 hours ago

          > AI gives super powers because it saves you an insane amount of typing

          I feel like I'm going a little bit more insane whenever folks say this, because one of the primary roles of a software engineer is to develop tools and abstractions to reduce or eliminate boilerplate.

          What kind of software are you writing, where generating boilerplate is the limiting factor (and why haven't you fixed that)?

          • anonzzzies 3 hours ago

            > because one of the primary roles of a software engineer is to develop tools and abstractions to reduce or eliminate boilerplate.

            Is it? Says who? Not only do I see entire clans of folk appearing who say: DRY sucks, just copy/paste, it's easier to read and less prone to break multiple things with one fix vs abstractions that keep functionality restricted to one location; but also, most programmers are there to implement crap their bosses say, and that crap almost never includes 'create tools & abstractions' to get there.

            I agree with you actually, BUT this is really not what by far most people working in programming believe one of their (primary) roles entail.

        • DustinBrett 9 hours ago

          I do agree with the filling in of text, but only when the patterns are clear. Any kind of thinking on logic or using libraries I find it still leads me astray every time.

      • linsomniac 20 minutes ago

        >I feel like I am taking crazy pills that other devs don't feel this way.

        Don't take this the wrong way, but maybe you are.

        For example, this weekend I was working on something where I needed to decode a Rails v3 session cookie in Python. I know, roughly, nothing about Rails. In less than 5 minutes ChatGPT gave me some code that took me around 10 minutes to get working.

        Without ChatGPT I could have easily spent a couple hours putzing around with tracking down old Rails documentation, possibly involving reading old Rails code and grepping around to find where sessions were generated, hunting for helper libraries, some deadends while I tried to intuit a solution ("Ok, this looks like it's base64 encoded, but base64 decoding kind of works but produces an error. It looks like there's some garbage at the end. Oh, that's a signature, I wonder how it's signed...")

        Instead, I asked for an overview of Rails session cookies, a fairly simple question about decoding a Rails session cookie, guided it to Rails v3 when I realized it was producing the wrong code (it was encrypting the cookie, but my cookies were not encrypted). It gave me 75 lines of code that took me ~15 minutes to get working.
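
        For the curious, the shape of the solution (a minimal sketch, assuming the standard Rails 3 signed-cookie format of base64 payload, two dashes, HMAC-SHA1 hex digest; not the actual 75 lines):

            import base64, hashlib, hmac

            def decode_rails3_session(cookie, secret_token):
                # assumes the cookie value is already URL-unquoted; secret_token is bytes
                payload_b64, _, signature = cookie.rpartition("--")
                expected = hmac.new(secret_token, payload_b64.encode(), hashlib.sha1).hexdigest()
                if not hmac.compare_digest(expected, signature):
                    raise ValueError("signature mismatch")
                return base64.b64decode(payload_b64)  # Ruby Marshal bytes; still needs a Marshal parser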

        This is a "spare time" project that I've wanted to do for over 5 years. Quite simply, if I had to spend hours fiddling around with it, I probably wouldn't have done it; it's not at the top of my todo list (hence, spare time project).

        I don't understand how people don't see that AI can give them "superpowers" by leveraging a developer's least productive time into providing their highest value.

        • jenscow 9 minutes ago

          > took me ~15 minutes to get working

          You didn't blindly use it, you still used your expertise. You gave it a high quality prompt and you understood the reply enough to adjust it.

          In other words, you used it correctly. It complemented your expertise rather than replacing it.

      • fenomas 9 hours ago

        Devs who don't feel that way aren't talking about the stuff you're talking about.

        Look at it this way - a powerful debugger gives you superpowers. That doesn't mean it turns bad devs into good devs, or that devs with a powerful debugger never write bad code! If somebody says a powerful debugger gives them superpowers they're not claiming those things; they're claiming that it makes good devs even better.

        • feoren 9 hours ago

          The best debugger in the world would make me about 5% more efficient. That's about the percentage of my development time I spend going "WTF? Why is that happening?" That's the best possible improvement from the best possible debugger: about 5%.

          The reason is that I almost always have a complete understanding of everything that is happening in all code that I write. Not because I have a mega-brain; only because "understanding everything that is happening all the time" becomes rather easy if all of your code is as simple as you can possibly make it, using clear interfaces, heavily leveraging the type system, keeping things immutable, dependency inversion, and aggressively attacking any unclear parts until you're satisfied with them. So debuggers are generally not involved. It's probably a couple times per week that I enter debug mode at all.

          It sounds a little like saying "imagine the driving superpowers you could have if your car could perfectly avoid obstacles for you!" Okay, sure, that'd be life-saving sometimes, but the vast majority of the time, I'm not just randomly dodging obstacles. Planning ahead and paying attention kinda makes that moot.

          • mewpmewp2 8 hours ago

            Now imagine working on a 10+ year old codebase that 100s of developers' hands have gone over, with several dependencies that are not even possible to run locally because they're so out of date. Why work on that in the first place? Sometimes it pays really well.

            • bluefirebrand 7 hours ago

              I would absolutely not be comfortable trusting current AI to work on this sort of codebase and make meaningful improvements

              • mewpmewp2 an hour ago

                Not AI, but we were talking about a debugger specifically.

          • fenomas 8 hours ago

            Please read things in context - my comment was about what people mean when they talk about a thing giving developers superpowers. I was not making a claim about how much you, personally, benefit from debuggers.

            Also: I don't use debuggers much either. Is Superman's heat vision not a superpower because he rarely needs to melt things? :P

          • adrianN 8 hours ago

            It’s different if you work on a large legacy codebase where you have maybe a rough understanding of the high level architecture but nobody at the company has seen the code you’re working on in the last five years. There a debugger is often very helpful.

          • anonzzzies 6 hours ago

            That is your own code and only your libs then, no imports? Or do you work in a language where imports are small (embedded?) so you know them all and maintain them all? Or maybe vanilla js/c without libs? Because import one npm package and you have 100gb of crap deps you don't really understand, which happen to work at moment t0, but at t1, when one npm package updates, nothing works anymore, and you can't claim to understand it all, as many/most people don't write 'simple code'.

          • maxwellg 8 hours ago

            You've never had to debug code that someone else wrote?

      • eru 9 hours ago

        Depending on what language you use and what domain your problem is in, current AIs can vary widely in how useful they are.

        I was amazed at how great ChatGPT and DeepSeek and Claude etc are at quickly throwing together some small visualisations in Python or JavaScript. But they struggled a lot with Bird-style literate Haskell. (And just that specific style. Plain Haskell code was much less of a problem.)

      • steelframe 7 hours ago

        > They just say, well the AI picked this, as if that means something in and of itself.

        In any other professional field that would be grounds for termination for incompetence. It's so weird that we seem to shrug off that kind of behavior so readily in tech.

        • anonzzzies 6 hours ago

          Nah, we've already had multiple cases of that; one with a lawyer at a big corp, and some others. The story is never straight-up 'AI said so' but more like: 'we use different online and offline tools to aid us in our work, sometimes the results are less than satisfactory, and we try to correct those cases'. It is the same response, just showing vulnerability; we are only human, even with our tools.

        • bluefirebrand 7 hours ago

          I think what you're saying is a bit idealistic. We like to think that people get terminated for incompetence but the reality is more complicated than that

          I suspect people get away with saying "I don't know why that didn't work, I did what the computer told me to do" a lot more frequently than they get fired for it. "I did what the AI said" will be the natural extension of this

      • randomNumber7 6 hours ago

        You are. Only noobs do this; experts using an LLM are way more efficient. (Unless you've worked with the same language and libraries for years.)

        • bilekas 6 hours ago

          > (Unless you've worked with the same language and libraries for years.)

          I'm not sure what you mean here. Are you saying that if you work with the same language for years, you're somehow not proficient with using LLMs?

          • rvba 4 hours ago

            Sounds like someone who switches jobs and technologies every year, so someone else has to clean up the mess

      • trevor-e 10 hours ago

        Because there are plenty of devs who take the output, actually read if it makes sense, do a code review, iterate back and forth with it a few times, and then finally check in the result. It's just a tool. Shitty devs will make shitty code regardless of the tool. And good devs are insanely productive with good tools. In your example what's the difference with that dev just copy/pasting from StackOverflow?

        • DustinBrett 9 hours ago

          Because on SO someone real wrote it, with a real reason and logic. With AI we still need to double-check that what we are given ever made any sense. And SO also has votes to show the validity.

          I agree if devs iterated over the results it could be good, but that has not been what I have been seeing.

          It is not a traditional tool because tools we had in the past had expected results.

        • eru 9 hours ago

          Agreed!

          To go off on a tangent: yes, good developers can produce good code faster. And bad developers can produce bad code faster (and perhaps slightly better than before, because the models are mostly a bit smarter than copy-and-paste is).

          Everyone potentially benefits, but it won't suddenly turn all bad programmers into great programmers.

      • worthless-trash 8 hours ago

        I understand exactly what you mean; it feels like there are buckets of people who are just trying to gaslight every reader.

    • fenomas 9 hours ago

      I don't follow this take. ChatGPT outputted a bug subtle enough to be overlooked by you and your colleague and your test suite, and that means it's not ready for prime time?

      The day when generative AI might hope to completely handle a coding task isn't here yet - it doesn't know your full requirements, it can't run your integration tests, etc. For now it's a tool, like a linter or a debugger - useful sometimes and not useful other times, but the responsibility to keep bugs out of prod still rests with the coder, not the tools.

      • Sammi 2 hours ago

        Yes and this means it doesn't replace anyone or make someone who isn't able to code able to code. It just means it's a tool for people who already know how to code.

    • csense 11 hours ago

      > an overconfident 25-year-old that just downed three lunch beers with a wet burrito at the Mexican place down the street from the office on a rainy Wednesday

      That's...oddly specific.

      • alexjplant 11 hours ago

        I'm only 33 and I've worked with at least two of 'em. They're a type :-D

        • blobbers 9 hours ago

          You were one of them... 8 years ago?

          • eru 9 hours ago

            (Not the OP.)

            The LLMs are much more eager to please and to write lots of code. When I was younger, I would get distracted and play computer games (or comment on HN..), rather than churn out mountains of mediocre code all the time.

            • sdesol 8 hours ago

              > The LLMs are much more eager to please and to write lots of code.

              My process right now when working LLMs is to do the following:

              - Create problem and solution statement

              - Create requirements and user stories

              - Create architecture

              - Create skeleton code

              - Map the skeleton code

              - Create the full code

              At every step, where I don't need the full code, the LLM will start coding and I need to stop it and add "Do not generate code. The focus is still design".
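
              One way to wire that guard in, as a rough sketch assuming the OpenAI Python client (the model name and prompts are placeholders):

                  from openai import OpenAI

                  client = OpenAI()  # reads OPENAI_API_KEY from the environment

                  design_stage = client.chat.completions.create(
                      model="gpt-4o",  # placeholder model name
                      messages=[
                          {"role": "system",
                           "content": "Do not generate code. The focus is still design."},
                          {"role": "user",
                           "content": "Here are my requirements and user stories: ..."},
                      ],
                  )
                  print(design_stage.choices[0].message.content)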

        • 1attice 10 hours ago

          No this all tracks. Matches my own experience.

    • linsomniac 44 minutes ago

      >it used some obscure assertion library that had a few hundred stars on GitHub.

      That sounds like a lot of developers I've worked with.

    • Cthulhu_ 4 hours ago

      Is that the LLM's fault, or SQLAlchemy's for having that API in the first place? Or was it a gap in your testing strategy? If I'm reading it right, flush() only sends pending SQL to the database inside the still-open transaction as an intermediate step; nothing is made permanent until commit(), which calls flush() under the hood.
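
      For reference, a minimal sketch of the difference (SQLAlchemy 2.0-style API; the User model is made up):

          from sqlalchemy import Integer, String, create_engine
          from sqlalchemy.orm import DeclarativeBase, Mapped, Session, mapped_column

          class Base(DeclarativeBase):
              pass

          class User(Base):
              __tablename__ = "users"
              id: Mapped[int] = mapped_column(Integer, primary_key=True)
              name: Mapped[str] = mapped_column(String(50))

          engine = create_engine("sqlite:///:memory:")
          Base.metadata.create_all(engine)

          with Session(engine) as session:
              session.add(User(name="alice"))
              session.flush()   # SQL is emitted and the id is assigned,
                                # but the transaction is still open
              session.commit()  # flushes remaining changes, then makes
                                # them permanent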

      I think we're in a period similar to self-driving cars, where the LLMs are pretty good, but not perfect; it's those last few percent that break it.

    • Seattle3503 5 hours ago

      > At a previous job I made the mistake of letting it write some repository methods that leveraged SQLAlchemy. Even though I (along with my colleague via PR) reviewed the generated code we ended up with a preprod bug because the LLM used session.flush() instead of session.commit() in exactly one spot for no apparent reason.

      Ive had ChatGPT do the same thing with code involving SQLAlchemy.

    • bboygravity 6 hours ago

      You can't tell us that LLMs aren't ready for prime time in 2025 after you tried Copilot twice last year.

      New better models are coming out almost daily now and it's almost common knowledge that Copilot was and is one of the worst. Especially right now, it doesn't even come close to what better models have to offer.

      Also, the way to use them is to ask for small chunks of code, or questions about code, after you've given them tons of context (like in Claude Projects, for example).

      "Not ready for prime time" is also just factually incorrect. It is already being used A LOT. To the point that there are rumors that Cursor is buying so much compute from Anthropic that they are making their product unstable, because nvidia can't supply them hardware fast enough.

      • 59nadir 6 hours ago

        I stopped using AI for code a little over a year ago and at that point I'd used Copilot for 8-12 months. I tried Cursor out a couple of weeks ago for very small autocomplete snippets and it was about the same or slightly worse than Copilot, in my opinion.

        The integration with the editor was neat, but the quality of the suggestions was no different from what I'd had with Copilot much earlier, and the pathological cases where it just spun off into some useless corner of its behavior (recommending code that was already in the very same file, recommending code that didn't make any sense, etc.) seemed to happen more than with Copilot.

        This was a ridiculously simple project for it to work on, to be clear: just a parser for a language I started working on, and the general structure was already there for it to work with when I started trying Cursor out. From prior experience I know the base is pretty easy to work with even for people who aren't familiar with it (or with parsing in general), so given the difficulties Cursor had putting together even pretty basic things, I think a Cursor user might see minimal gains in velocity and end up with less understanding in the medium to long term, at least in this particular case.

        • IanCal 4 hours ago

          The Cursor autocomplete? It's useful, but the big thing is using Sonnet with it.

          • 59nadir 3 hours ago

            I tried it with Claude Sonnet 3.5 or whatever the name is, both tab-completed snippets and chat (to see what the workflow was like and to see if it gave access to something special).

      • consp 6 hours ago

        > It is already being used A LOT

        Which is an argument for quality why? Bad coders are not going to produce better code that way. Just more with less effort.

      • KeplerBoy 6 hours ago

        Copilot isn't a single model. Copilot is merely a brand, and it uses OpenAI's and Anthropic's newest models.

    • fooker 7 hours ago

      You are using it wrong.

      Give examples and let it extrapolate.

      • lionkor 6 hours ago

        "You're holding it wrong" doesn't make a small, too light, crooked, and backwards hammer any better.

  • t-writescode 14 hours ago

    I imagine most of the things that would be good uses for seniors in AI aren't great uses for a coding interview anyway.

    "Oh, I don't remember how to do parameterized testing in junit, okay, I'll just copy-paste like crazy, or make a big for-loop in this single test case"

    "Oh, I don't remember the API call for this one thing, okay, I'll just chat with the interviewer, maybe they remember - or I'll just say 'this function does this' and the interviewer and I will just agree that it does that".

    Things more complicated than that that need exact answers shouldn't exist in an interview.
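
    For illustration, the parameterized-testing pattern from the first example, sketched in pytest (JUnit's @ParameterizedTest is the analogue; the test itself is made up):

        import pytest

        # One test, many cases: no copy-paste, no for-loop inside a single test.
        @pytest.mark.parametrize("value,expected", [(1, 1), (2, 4), (3, 9)])
        def test_square(value, expected):
            assert value * value == expected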

    • ehnto 11 hours ago

      > Things more complicated than that that need exact answers shouldn't exist in an interview.

      Agreed, testing for arcane knowledge is pointless in a world where information lookup is instant, and we now have AI librarians at our fingertips.

      Critical thinking, capacity to ingest and process new information, fast logic processing, software fundamentals and ability to communicate are attributes I would test for.

      An exception, though, is proving their claimed experience; you can usually tease that out with specifics about the tools.

  • alpha_squared 16 hours ago

    We do the same thing. It's perfectly fine for candidates to use AI-assistive tooling, provided that they can edit/maintain the code and not just sit in a prompt the whole time. The more heavily a candidate relies on LLMs, the worse they tend to do. It really comes down to discipline.

    • sureIy 10 hours ago

      Discipline for what?

      To me it's the lack of skill. If the LLM spits out junk you should be able to tell. ChatGPT-based interviews could work just as well to determine the ability to understand, review and fix code effectively.

      • skeeter2020 9 hours ago

        >> If the LLM spits out junk you should be able to tell.

        Reading existing code and ensuring correctness is way harder than writing it yourself. How would someone who can't do it in the first place tell if it was incorrect?

        • JoshuaDavid 7 hours ago

          Make the model write annotated tests too, verify that the annotations plausibly could match the test code, run the tests, feed the failures back in, and iterate until all tests are green?
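
          A rough sketch of that loop, assuming a hypothetical `llm` client (every name here is illustrative, not a real API):

              import subprocess

              def iterate_until_green(llm, source_file: str, max_rounds: int = 5) -> bool:
                  for _ in range(max_rounds):
                      result = subprocess.run(["pytest", "--tb=short"],
                                              capture_output=True, text=True)
                      if result.returncode == 0:
                          return True  # all tests green
                      with open(source_file) as f:
                          code = f.read()
                      # Feed the failures back and ask the model for a fix.
                      fixed = llm.complete(  # hypothetical LLM call
                          f"These tests failed:\n{result.stdout}\n"
                          f"Here is the code:\n{code}\nReturn a corrected version.")
                      with open(source_file, "w") as f:
                          f.write(fixed)
                  return False  # still red after max_rounds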

  • meesles 14 hours ago

    This has been my experience as well. The ones that have most heavily relied on GPT not only didn't really know what to ask, but couldn't reason about the outputs at all since it was frequently new information to them. Good candidates use it like a search engine - filling known gaps.

    • vanceism7_ 4 hours ago

      Yea I agree. I don't rely on the AI to generate code for me, I just use it as a glorified search engine. Sure I do some copypasta from time to time, but it almost always needs modification to work correctly... Man does AI get stuff wrong sometimes lol

    • hsuduebc2 13 hours ago

      I really can't imagine it being useful in a way where it writes the logical part of the code for you. If you're not being lousy, you still need to think about all the edge cases when it generates the code, which seems harder to me.

      • sdesol 10 hours ago

        > If you're not being lousy, you still need to think about all the edge cases

        This is honestly where I believe LLMs can really shine. We like to believe the problems we are solving are unique, but I strongly suspect most of us are solving problems that have already been solved. What I've found is that if you provide the LLM with enough information, it will surface edge cases you hadn't thought of and implement logic in your code that you hadn't considered.

        I'm currently using LLMs to build my file drag and drop component for my chat app. You can view the problem and solution statement at https://beta.gitsense.com/?chat=da8fcd73-6b99-43d6-8ec0-b1ce...

        By chatting with the LLM, I created four user stories I had never thought of that improve user experience and security. I don't necessarily think it is about knowing the edge cases, but rather about knowing how to describe the problem and your solution. If you can do that, LLMs can really help you surface edge cases and help you write logic.

        Obviously what I am working on is really not novel, but then I think a lot of the stuff we are doing isn't that novel, if we can properly break it down.

        So for interviews that allow LLMs, I would honestly spend 5 minutes chatting with it to create a problem and solution statement. I'm sure if you can properly articulate the problem, the LLM can help you devise a solution and a plan.

    • piloto_ciego 9 hours ago

      This makes me feel good because it’s exactly how I use it.

      I’m basically pair programming with a wizard all day who periodically does very stupid things.

  • onemoresoop 13 hours ago

    I like that you're open-minded enough to allow candidates to be who they are and judge them on the outcome rather than using a prescribed, rigid method to evaluate them. I'm not looking to interview right now, but I'd feel very comfortable interviewing with someone like you; I'd very likely give my best in such an interview. I'd probably choose not to use an LLM during the interview unless I wanted to show how I brainstormed a solution.

  • alok-g 4 hours ago

    This is the best way to go.

    I would love to go through mock interviews for myself with this approach just to have some interview-specific experience.

    >> So far, everyone that elected to use GPT did much worse. They did not know what to ask, how to ask, and did not "collaborate" with the AI.

    Thanks for sharing your experience! Makes sense actually.

  • Keyframe 15 hours ago

    same thing here. The interview is basically representative of what we do, but it also depends on the level of seniority. I ask people to just share their screen with me and use whatever they want / feel comfortable with. Google, ChatGPT, call your mom, I don't care, as long as you walk me through how you're approaching the thing at hand. We've all googled tar xvcxfgzxfzcsadc, what's that permission for .pem, is it 400, etc. No shame in anything; we all use all of these things throughout the day. Let's simulate a small task at hand and see where we end up. Similarly, there is a bias where people leaning more on LLMs do worse than those just googling or, gasp, opening documentation.

  • apwell23 12 hours ago

    It took a while for googling during interviews to be accepted

  • randall 15 hours ago

    i like this. it seems like a good and honest use of time.

  • CSMastermind 14 hours ago

    Yeah I've been doing the same. Have been pretty stunned at people's inability to use AI tools effectively.

    • 3eb7988a1663 14 hours ago

      What does effective use look like? I have attempted messing around with a couple of options, but was always disappointed with the output. How do you properly present a problem to a LLM? Requiring an ongoing conversation feels like a tech priest praying to the machine spirit.

      • codeyperson 10 hours ago

        High level - having a discussion with the LLM about different approaches and the tradeoffs between each

        Low level - I'll write up the structure of what I want in the form of a set of functions with defined inputs and outputs but without the implementation detail. If I care about any specifics within the functions I'll throw some comments in there. And sometimes I'll define the data structures in advance as well.

        Once all this is set up it often spits out something that compiles and works first try. And all the context is established so iteration from that point becomes easier.
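
        As a sketch, the kind of scaffold described above might look like this (all names are made up; the `...` bodies are what the model fills in):

            from dataclasses import dataclass

            @dataclass
            class Order:  # data structures defined up front
                order_id: str
                customer_id: str
                amount_cents: int

            def load_orders(path: str) -> list[Order]:
                """Read one Order per row from the CSV file at `path`."""
                ...  # implementation intentionally left for the LLM

            def total_by_customer(orders: list[Order]) -> dict[str, int]:
                """Sum amount_cents per customer_id."""
                ...  # implementation intentionally left for the LLM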

        • feoren 9 hours ago

          > High level - having a discussion with the LLM about different approaches and the tradeoffs between each

          I honestly can't imagine this. If the AI says "However, a downside of approach B is that it takes O(n^2) time instead of the optimal O(nlog(n))", what do you think the odds are that it literally made up both of those facts? Because I'd be surprised if they were any lower than 30%. It's an extremely confident bullshitter, and you're going to use it to talk about engineering tradeoffs!?

          > Once all this is set up it often spits out something that compiles and works first try

          I'm sorry, but I'm *extremely* doubtful that it actually works in any real sense. The fact that you even use "compiles and works first try" as some sort of metric for the code it's producing shows how easily it could slip in awful braindead bugs without you ever knowing. You run it and it appears to work!? The way to know whether something works -- not first try, but every try -- is to understand every character in the code. If that is your standard -- and it must be -- then isn't the AI just slowing you down?

          • sanarothe 3 hours ago

            I don't code for a living, and I'm probably worse than a fresh grad would be but I use:

            "Please don't generate or rewrite code, I just want to discuss the general approach."

            Bc I don't know any design patterns or idiomatic approaches, being able to discuss them is amazing.

            Though quality and consistency of responses is another thing... :)

          • concordDance 5 hours ago

            It can list tradeoffs and approaches you might have forgotten. That's the big use case for me.

      • CSMastermind 11 hours ago

        Candidates generally use it in one of two ways: either as an advanced autocomplete or like a search engine.

        They'll type in things like, "C# read JSON from file"

        As opposed to something like:

        > I'm working on a software system where ... are represented as arrays of JSON objects with the following properties:

        > ...

        > I have a file called ... that contains an array of these objects ...

        > I have ... installed locally with a new project open and I want to ... How can I do this?

        No current LLMs can solve any of the problems we give them, so pasting in the raw prompt won't be helpful. But the setup deliberately encourages candidates to do some basic scaffolding (reading in a file, creating corresponding classes, etc.) that an LLM can bang out in about 30 seconds; I've seen candidates spend 30+ minutes writing it all out themselves instead of just getting the LLM to do it for them.
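
        For illustration only: the sort of scaffolding meant here, reading a JSON file into corresponding typed classes, sketched in Python with made-up names (the interview in question uses C#):

            import json
            from dataclasses import dataclass

            @dataclass
            class Item:  # corresponding class for one JSON object
                id: int
                name: str

            def load_items(path: str) -> list[Item]:
                with open(path) as f:
                    raw = json.load(f)  # expects an array of JSON objects
                return [Item(id=obj["id"], name=obj["name"]) for obj in raw]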

        • neonsunset 9 hours ago

          GitHub Copilot Edit can do the second version of this, and it is pretty good at it too. It sometimes gets things wrong, but for your average code (and candidates typing in "C# read JSON from file" are way below average, unless they've never written C#), if you give it all the files for a specific self-contained part of the program, it can extend/modify/test/etc. it impressively well for an LLM.

          The difference compared to where we were just 1-2 years ago is staggering.

          Edit: the above is with Claude-3.5-Sonnet

      • Guthur 14 hours ago

        In my opinion that's literally what we're aiming for, whether or not intentionally.

    • al_borland 13 hours ago

      When I was last interviewing people (several years ago now), I’d let them use the internet to help them on anything hands on. I was astounded by how bad some people were at using a search engine. Some people wouldn’t even make an attempt.

  • n00b101 14 hours ago

    [flagged]

    • bolognafairy 14 hours ago

      > I am a 10x developer

      lol, okay.

      • iab 14 hours ago

        What is x in the equation is the real question

twoparachute45 16 hours ago

My company, a very very large company, is transitioning back to only in-person interviews due to the rampant amount of cheating happening during interviews.

As an interviewer, it's wild to me how many candidates think they can get away with it, when you can very obviously hear them typing, then watch their eyes move as they read an answer from another screen. And the majority of the time the answer is incorrect anyway. I'm happy that we won't have to waste our time on those candidates anymore.

  • cuuupid 10 hours ago

    So far, 3 of the 11 people we've interviewed have clearly been using ChatGPT for the >>behavioral<< part of the interview (like, just chatting about their background, answering questions about their experience). I find that absolutely insane; if you cannot hold a basic conversation about your life without using AI, then something is terribly wrong.

    We actually allow using AI in our in-person technical interviews, but our questions are worded to fail safety checks. We'll talk about smuggling nuclear weapons, violent uprisings, staging a coup, manufacturing fentanyl, etc. (within the context of system design), and that gives us really good mileage in weeding out those who are just transcribing what we say into an AI and reading back the response.

    • hn8726 3 hours ago

      > I find that absolutely insane; if you cannot hold a basic conversation about your life without using AI, then something is terribly wrong.

      I'm genuinely curious what questions you ask during the behavioral interview. Most companies ask questions like "recall a time when...", and I know people who struggle with these kinds of questions despite being good teammates, either because they find it difficult to explain the situation, or due to stress. And a recruitment process is not a "basic conversation" — as the recruiter, you're in a far more comfortable position. I find it hard to believe anyone would use an LLM if you ask them a question like "what were your responsibilities in your last role", but I do see how they might've primed the chat to help them communicate an answer to a question like "tell me about a situation when you had a conflict with your manager".

    • dmazzoni 10 hours ago

      Ha ha, that's a great idea!

      I love the idea of embedding sensitive topics that ChatGPT and other LLMs will steer clear of, within the context of a coding question.

      Have you ever had any candidate laugh?

      Any candidates find it offensive?

    • pllbnk 5 hours ago

      I think you (your company) and many other commenters here are just trying too hard.

      I just recently led several interview rounds for a software engineering role, and we did not have any issues with LLM use. What we do for the technical interview part is very simple: a live whiteboarding design task where we try to identify what the candidate's focus is, and we might pivot at any time or dig deeper into particular topics. Sometimes we will even go as detailed as talking about the particular algorithms the candidate would use.

      In general, I found that this type of interview is the most fun for both sides. The candidates don't feel pressure that they must do the only right thing as there is a lot of room for improvisation; the interviewers don't get bored with repetitive interviews over and over as new candidates come by with different perspectives. Also, there is no room for LLM use because the candidate has to be involved in drawing on the whiteboard and showing their technical presentation skills, which are very important for developers.

      • lewisleclerc 5 hours ago

        Unfortunately, we've noticed candidates who are on another call, with their screen fed by someone else using ChatGPT and pasting in the responses; the accomplice can hear both the interviewer and the candidate.

        • joshvm 3 hours ago

          I saw a pretty impressive cheat tool that could apparently grab the screen from the live share in response to an obscure keybind, run the text on screen through OCR, and then solve the problem (or just look up an LC solution).

          At that point it seems like trying too hard, but be aware there are theoretical approaches which are extremely hard to detect (the inevitable evolution of sticky notes on the desk, or wall behind the monitor).

    • xarope 10 hours ago

      what actually happens to the interviewee? Do they suddenly go blank when they realise the LLM has replied "I'm sorry, I cannot assist you with this", or do they try to make something up?

    • soheil 8 hours ago

      llama2-uncensored to the rescue

  • mr_00ff00 16 hours ago

    So depressing to hear that “because of rampant cheating”

    As a person looking for a job, I’m really not sure what to do. If people are lying on their resumes and cheating in interviews, it feels like there’s nothing I can do except do the same. Otherwise I’ll remain jobless.

    But to this day I haven’t done either.

    • chowells 15 hours ago

      Here's the thing: 95% of cheaters still suck, even when cheating. It's hard to imagine how people can perform so badly while cheating, yet they consistently do. All you need to do to stand out is not be utterly awful. Worrying about what other people are doing is more detrimental to your performance than anything else. Just focus on yourself: being broadly competent, knowing your niche well, and being good at communicating how you learn when you hit the edges of your knowledge. Those are the skills that always stand out.

      • hnisoss 12 hours ago

        Yea, but I also suck in 95% of FAANG-like interviews, since I'm very bad at leetcode medium/hard type questions. It's just something that I never practiced. It's very tempting at this point to throw in the towel and just use some aid. No one cares about my intense career and the millions I helped my clients earn; all that matters (and sometimes directly affects comp rate) is how I do on the "coding task".

        • titanomachy 11 hours ago

          > I suck in FAANG interviews... it's just something I never practiced.

          Well, sounds like you know the solution. Or set your sights on a job that interviews a different way.

          I think it's mostly leetcode "easy", anyway. Maybe some medium. Never seen a hard, except maybe from one smartass at Google (they were not expecting a perfect answer). Out of a dozen technical interviews, I don't think I've ever needed to know a data structure more exotic than a hash map or binary search tree.

          The amount of deliberate practice required to stand out is probably not more than 10-20 hours, assuming you do actually have the programming and CS skills expected for a FAANG job. It's unlikely you need to do months of grinding.

          If 20 hours of work was all that stood between me and half a million dollars a year, I'd consider myself pretty lucky.

          • moorow 4 hours ago

            On the other hand, if 20 hours of leetcode practice is all that stands between you and half a million dollars a year, isn't that a pretty good indicator that the interview process isn't hiring based on your skills, talent and education, and instead on something you basically won't encounter in the workplace?

          • rramadass 6 hours ago

            Right. Almost any time somebody fails an interview, it is not because of "very hard questions" but because they did not prepare properly in a sensible manner. People want no whiteboarding, no programming questions, no mathematical questions, no fermi problems, etc., which is plain silly and not realistic. One just needs to know the basics and simple applications of the above, which is more than enough to get through most interviews. The key is not to feel overawed/overwhelmed by unknown notation/jargon, which is the actual problem when people run away from big-O, DS/algo, recursion, applications of set theory/logic to programming, etc.

      • roughly 14 hours ago

        Yeah, we found this when we started doing take-home exams: it turns out that a junior dev who spends twice as much time on the problem as we asked for still doesn't put out senior-level code - we could read the skill level in the code almost instantly. Same thing with cheating like that - it turns out knowing the answer isn't the same as having experience, and it's pretty obvious pretty quickly which one you're dealing with.

      • dkga 14 hours ago

        Well, cheaters only cheat because they suck and they know it. Otherwise cheating would not be a rational approach.

        • wakawaka28 12 hours ago

          I don't approve of cheating but I think you're underestimating how hard some interview questions can be. Even competent people don't know everything and could draw a blank, in which case they would benefit from cheating despite being competent.

          • bodge5000 8 hours ago

            Not just difficult, but there are just so many of them (for the same company, ofc). You could ace 3 interviews and not even be halfway through the process. You have to be continually on top form for days/weeks on end.

            • entropi 4 hours ago

              Right, last offer I got required 7 (non-HR) steps over a 4-month period, where around a dozen technical people got involved.

              I don't/won't cheat, as I am a rather anxious person who can't really handle "covert ops". But at this point I totally understand those who do.

            • wakawaka28 36 minutes ago

              A lot of these companies also have a policy that even one person can fail you. So if you do 8 interviews with 2 people each, then there are up to 16 people in the process who can ruin it for you.

              I think LLM performance on previously seen questions like interview questions is too good for it to be allowed. I wouldn't mind someone using an IDE or API docs, but you have to draw the line somewhere. It's like how you can't use a calculator that can do algebra on a calculus test. It just doesn't accomplish the goal of testing anything if you use too much tech, and using the current LLMs that all suck in general but can nail these questions that have a million examples online is bad. I would much rather see someone consult online references for information than to see them use an LLM.

        • rahimnathwani 12 hours ago

          If that were true we would never hear of top level athletes using performance enhancing drugs.

          • nkrisc 11 hours ago

            That's different. These candidates are not trying to get an edge on qualifying for the top dev position in the entire world.

            Top athletes do it because they're essentially at the limits of human performance and those drugs are the only edge they can reasonably get.

            • lubujackson 10 hours ago

              Kids at my tiny high school football team did steroids to get an edge - no chance at a scholarship, either.

              Different people have a different threshold for cheating, no matter the stakes. I imagine some people cheat even if they know the answer - just to be sure.

            • listenallyall 7 hours ago

              It's much more widespread. Minor league player uses PEDs to make the major leagues. Middling major leaguer uses them to be an all-star. All-star uses them to make the hall of fame. In the context of programming, if some kind of cheating is what's necessary to nab a $150k job, a whole lot of people are going to cheat.

    • ryandvm 15 hours ago

      I don't know, I kind of feel like leetcode interviews are a situation where the employer is cheating. I mean, you're admittedly filtering out a great number of acceptable candidates knowing that if you just find 1 in a 1000, that'll be good enough. It is patently unfair to the individuals that are smart enough to do your work, but poor at some farcical representation of the work. That is cheating.

      In my opinion, if a prospective employee is able to successfully use AI to trick me into hiring them, then that is a hell of a lot closer to the actual work they'll be hired to do (compared to leetcode).

      I say, if you can cheat at an interview with AI, do it.

      • twoparachute45 15 hours ago

        I dunno why there is always the assumption in these threads that leetcode is being used. My company has never used leetcode-style questions, and likely never will.

        I work in security, and our questions are pretty basic stuff. "What is cross-site scripting, and how would you protect against it?", "You're tasked with parsing a log file to return the IP addresses that appear at least 10 times, how would you approach this?" Stuff like that. And then a follow-up or two customized to the candidate's response.
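
        One reasonable shape for an answer to the log-file question, sketched in Python (it assumes the IP is the first whitespace-separated field; a shell pipeline would do just as well):

            from collections import Counter

            def frequent_ips(log_path: str, threshold: int = 10) -> list[str]:
                counts = Counter()
                with open(log_path) as f:
                    for line in f:
                        if line.strip():
                            counts[line.split()[0]] += 1
                return [ip for ip, n in counts.items() if n >= threshold]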

        I really don't know how we could possibly make it easier for candidates to pass these interviews. We aren't trying to trick people, or weed people out. We're trying to find people that have the foundational experience required to do the job they're being hired for. Even when people do answer them incorrectly, we try to help them out and give them guidance, because it's really about trying to evaluate how a person thinks rather than making sure they get the right answer.

        I mean hell, it's not like I'm spending hours interviewing people because I get my rocks off by asking people lame questions or rejecting people; I want to hire people! I will go out of my way to advocate for hiring someone that's honest and upfront about being incorrect or not knowing an answer, but wants to think through it with me.

        But cheating? That's a show stopper. If you've been asked to not use ChatGPT, but you use it anyway, you're not getting the benefit of the doubt. You're getting rejected and blacklisted.

        • johnnyanmac 11 hours ago

          >I dunno why there is always the assumption in these threads that leetcode is being used

          because it matches my experience. I work in games and interviews are more varied (math, engine/language questions, game design questions, software design patterns). I'd still say maybe 30% of them do leetcode interviews, and another 40% bring in leetcode questions at some point. I hate it because I need to study too many other types of questions to begin with, and leetcode is the least applicable.

        • quacksilver 5 hours ago

          I once got a surprise leetcode coding interview for a security testing role that mentioned proficiency in a coding language or two as desirable but not essential.

          I come from a math background rather than CS and code for fun / personal projects, so I don't know the 'proper' names for some algorithms from memory. I could have done some leetcode prep / revision if I'd had any indication that it was coming up; as it was, the interview was pretty much a waste of time. I told them that and made a stab at it anyway, but they didn't seem interested in engaging at all and barely made eye contact during the whole interview.

        • bikezen 8 hours ago

          > "You're tasked with parsing a log file to return the IP addresses that appear at least 10 times, how would you approach this?"

          Out of curiosity, did anyone just reply with `awk ... | sort | uniq -c | awk ...`? It's certainly what I would do rather than writing out an actual script.

      • kortilla 15 hours ago

        The employer sets the terms of the interview. If you don’t like them, don’t apply.

        What you’re suggesting here isn’t any different than submitting a fraudulent resume because you disagree with the required qualifications.

        • ADeerAppeared 13 hours ago

          > The employer sets the terms of the interview. If you don’t like them, don’t apply.

          What you're missing here is that this is an individual's answer to a systemic problem. You don't apply when it's _one_ obnoxious employer.

          When it's standard practice across the entire industry, we have a problem.

          > submitting a fraudulent resume because you disagree with the required qualifications.

          This is already worryingly common practice because employers lie about the required qualifications.

            Honesty gets your resume shredded before a human even looks at it. And employers refusing to address that situation is just making everything worse and worse.

          • terrabiped 12 hours ago

            You make a valid point that while the rules of the game are known ahead of time, it’s strange that the entire industry is stuck in this local maximum of LeetCode interviews. Big companies are comfortable with the status quo, and small companies just don’t have the budget to experiment with anything else (maybe with some one-offs).

            Sadly, it’s not just the interview loops—the way candidates are screened for roles also sucks.

            I’ve seen startups trying to innovate in this space for many years now, and it’s surprising that absolutely nothing has changed.

            • johnnyanmac 11 hours ago

              >I’ve seen startups trying to innovate in this space for many years now, and it’s surprising that absolutely nothing has changed.

              I don't want to be too crass, but I'm not surprised people who can startup a business are precisely the ones who hyper-fixate on efficiency when hiring and try to find the best coders. Instead of the best engineers. When you need to put your money where you mouth is, many will squirm back to "what works".

          • strken 10 hours ago

            > Honesty gets your resume shredded before a human even looks at it

            Does it? Mine is honest, fairly normal, and gets me through to interviews fine. What are common lies and why are they necessary?

        • hsuduebc2 13 hours ago

          Or he can simply choose to ignore the arbitrary and often pointless requirements, do the interview on his own terms, and still perform excellently. Many job requirements are nothing more than a pointless power trip from employers who think they have more leverage than they actually do.

        • Aeolun 14 hours ago

          I would like to be paid though. What do I care about the terms of the interview as long as they hire me?

          What is being suggested here is not participating in the mind numbing process that is called ‘applying for a job’.

          • hsuduebc2 13 hours ago

            You're absolutely right. Ditching the pointless corporate hoops, proving you can do the job, and getting paid like anyone else is what truly matters. Most hiring processes are just bureaucratic roadblocks that needlessly filter out great candidates. Unless you're working on something truly critical, there's no reason to play along with the nonsense.

          • H8crilA 14 hours ago

            Wanting to be paid under false pretenses is the definition of fraud.

            • nsxwolf 13 hours ago

              That doesn’t make any sense. The best engineers I know can’t pass these interviews because they started working long before they became standard.

            • hsuduebc2 13 hours ago

              So being paid for even excellent performance is fraud, then?

            • davely 12 hours ago

              > Wanting to be paid under false pretenses is the definition of fraud.

              What? No, it isn't.

              Regardless, if the job requirements state "X years of XYZ experience" and you really do have >X years of experience, then using AI to look up how to do a leetcode problem for some algorithm you haven't used since your university days is absolutely not "false pretenses", nor fraud.

          • johnnyanmac 11 hours ago

            > What do I care about the terms of the interview as long as they hire me?

            well that's the neat part... they aren't going to. All this AI stuff just happened to coincide with a recession no one wants to admit, amplifying the issue.

            So yea, even if I'm desperate, I need to be mindful of my time. I can only do so many 4-5 stage interviews only to be ghosted, have the job close, or see someone else who applied earlier get the position.

          • meesles 14 hours ago

            If you lie about your qualifications to a degree that can be considered fraud, employers can and will sue you for their money back and damages. Wait till you discover how mind-numbing the American legal system is!

            • nsxwolf 13 hours ago

              I’m sorry, is the job “professional Leetcoder”?

            • nradov 12 hours ago

              Nonsense. I don't endorse lying about qualifications, but employers don't sue over this. Employment law in most US states wouldn't even allow for that with regular W-2 employees.

        • twoparachute45 15 hours ago

          Yea, exactly.

          If a candidate were up front with me and asked if they could use AI, or said they learned an answer from AI and then wanted to discuss it with me, I'd be happy with that. But attempting to hide it and pretend they aren't using it when our interview rules specifically ask you not to do it is just being dishonest, which isn't a characteristic of someone I want to hire.

        • Apocryphon 15 hours ago

          On principle, what you’re saying has merit. In practice, the market is currently rife with employers submitting job postings with inflated qualifications, for positions that may or may not exist. So there’s bad actors all around and it’s difficult to tell who actually is behaving with integrity.

      • hsuduebc2 13 hours ago

        I wouldn't call it cheating, but most of the time it's just stupid. For the majority of software developer jobs, it would be more suitable to discuss the solution to a more complex problem than to randomly stress people out just because you think you should.

      • nkrisc 11 hours ago

        > It is patently unfair to the individuals that are smart enough to do your work, but poor at some farcical representation of the work. That is cheating.

        On the other hand, if you have 1,000 candidates and you only need 1, why not do it, if the top candidate selected by this method can do well on both the test and your work?

    • dahart 14 hours ago

      > it feels like there’s nothing I can do except do the same.

      Why does it feel like that when you’re replying to someone who already points out that it doesn’t work? Cheating can prevent you from getting a job, and it can get you fired from the job too. It can also impede your ability to learn and level up your own skills. I’m glad you haven’t done it yet, just know that you can be a better candidate and increase your chances by not cheating.

      Using an LLM isn’t cheating if the interviewer allows it. Whether they allow it or not, there’s still no substitute for putting in the work. Interviews are a skill that can (and should) be practiced. Candidates are rarely hired for technical skill alone. Attitude, communication, curiosity, and lots of other soft skills are severely underestimated by so many job seekers, especially those coming right out of school. A small amount of strengthening your non-code abilities can improve your odds much faster than leetcode ever will. And if you have time, why not do both?

    • ghaff 15 hours ago

      Note also "And the majority of the time the answer is incorrect anyway."

      I haven't looked for development-related jobs this millennium, but it's unclear to me how effective a crutch AI is for interviews--at least for well-designed and run interviews. Maybe in some narrow domains for junior people.

      As a few of us have written elsewhere, I consider not having in-person interviews past an initial screen sheer laziness and companies generally deserve whoever they end up with.

    • johnnyanmac 11 hours ago

      sounds cheesy, but keep being honest. Eventually companies will realize (as we did years ago) that automating recruiting gets you automated candidates.

      But YMMV. I have 9 years of experience and can still get interviews the old-fashioned way.

    • aprilthird2021 5 hours ago

      > it feels like there’s nothing I can do except do the same. Otherwise I’ll remain jobless.

      Never buy into this mentality. Because once you do, it never goes away. After the interview, your coworkers might cheat, so you cheat too. Then your business competitors might cheat, so you cheat too. And on and on.

  • wccrawford 16 hours ago

    When I was interviewing entry level programmers at my last job, we gave them an assignment that should only take a few hours, but we basically didn't care about the code at all.

    Instead, we were looking to see if they followed instructions, and if they left anything out.

    I never had a chance to test it out, since we hadn't hired anyone new in so long, but ChatGPT/etc would almost always fail this exam because of how bad it is at making sure everything was included.

    And bad programmers also failed it. It always left us with a few candidates that paid attention, and from there we figure if they can do that, they can learn the rest. It seemed to work quite well.

    I was recently laid off from that company, and now I'm realizing that I really want to see what current-day candidates would turn in. Oh well.

    • z3t4 16 hours ago

      For those tests I never follow the rules; I just make something quick and dirty because I refuse to spend unpaid hours. In the interview, the first question is why I didn't follow the instructions, and they think my reason is fair.

      Companies seem to think that we program just for fun and ask us to make a full-blown app... also underestimating the time candidates actually spend making it.

      • MathCodeLove 15 hours ago

        If you’re spending the time applying and submitting something then you might as well spend the extra 30 minutes or so to do it right, no?

        • Aeolun 14 hours ago

          Any time someone says ‘should only take a few hours’ they’re far underestimating the time it actually takes.

        • johnnyanmac 10 hours ago

          It's never been 30 minutes for me. Even leetcode timed exams tended to be 60-90 minutes.

          recently I spent a good 10 hours making a crossword solver. Hiring freeze a few days after I turned it in. I completely get GP's mentality.

        • halfcat 14 hours ago

          Not if you’re applying to hundreds, or thousands of jobs. Unless you know someone, it’s a quantity game.

          • dahart 14 hours ago

            I’ve screened a lot of resumes and given a lot of interviews over the years, and it’s usually obvious when people are trying the scattershot approach, they just don’t match. I feel like treating it like a quantity game is unlikely to improve your odds, and tbh spamming out hundreds or thousands of applications sounds like a miserable way to spend time. You could spend that time meeting and talking to people. I’ve never applied to more than 2 jobs at once, jobs that I actually want, and never had trouble getting at least one of them (and it still takes time and effort and some coding and interviews).

            • nsxwolf 13 hours ago

              It wouldn’t be obvious they’re using a scattershot approach when they’re a good match, though. I don’t see the downside.

              • dahart 12 hours ago

                Maybe not at the resume screening phase, but it’s usually still obvious once the interviews start when people aren’t interested in your specific company. Some people get lucky, sure, but the downside is that you have to get lucky, it’s wasting valuable time on low probability events. If you’re familiar with the statistical process of importance sampling, in my experience on both sides of the interview table, it’s effective and worthwhile to spend more time curating higher quality samples than to scatter and hope.

                • johnnyanmac 10 hours ago

                  >but it’s usually still obvious once the interviews start when people aren’t interested in your specific company.

                  Can you really blame them? If you're not a household name, why would you expect someone to spend hours researching your specific company?

                  On the other hand, it can come off as creepy if you're a small company and suddenly someone nerds out about how your CEO said this one thing at a talk years ago and knows your lead has cancer based on his personal blog. I'd rather just treat it as a transaction of my skills and services for money. We are not a family (multiple layoffs have taught me so).

                  > it’s effective and worthwhile to spend more time curating higher quality samples than to scatter and hope.

                  Not in this market. Too many ghost jobs, too many people ghosting after multiple rounds. Too many hiring freezes when you spend a month talking with a company. If you want respect from candidates, don't disrespect them.

                  • dahart 7 hours ago

                    Naw I don’t blame them. I’m not suggesting anyone spend hours researching each company. And I don’t expect candidates to do anything, I’m saying the candidates who do are the ones that tend to land the job, but it’s entirely the candidate’s choice. All it takes is minutes, really.

                    You sound like you’ve been burned. That sucks and I’m sorry, I sympathize. I’m hearing that the job market is very tough right now. A big part of that is because it’s extremely competitive. Taking it personally and assuming it’s disrespect isn’t going to help get the job though (even if there was disrespect… but that’s not the only explanation, so it’s a dangerous assumption).

                    • johnnyanmac 7 hours ago

                      >I’m saying the candidates who do are the ones that tend to land the job, but it’s entirely the candidate’s choice. All it takes is minutes, really.

                      Well, everyone has different experiences. I never felt like knowing about a company put me ahead in my early days. I guess I have a dump stat in Charisma (not surprised).

                      Like you said, the market is competitive. No one's going to take the nice guy over the one who blitzes an interview, unless that nice guy has connections. Those few minutes across thousands of applications add up to days of research. I just lack that time and energy these days.

                      >You sound like you’ve been burned. That sucks and I’m sorry, I sympathize.

                      several times, yes. It's honestly worse than my first job search out of college 10 years ago.

                      >Taking it personally and assuming it’s disrespect isn’t going to help get the job though

                      I only ask for basic decency. Keep a candidate in the loop, don't drag the process on for the sake of it, any take home should warrant a response (even if it's a template rejection letter). i.e. respect people's time.

                      I haven't been burned in a lot of my interviews, I'm not talking about bummers like the several times I was interviewing before a hiring freeze. I don't even treat non-responses as an interview process. But several of them just end with absolutely no communication nor closure after speaking for weeks with recruiters and hiring managers.

                      I don't know what to call that, in a day and age where AI is supposedly increasing efficiency, other than disrespect. This never happened before 2023, which makes the times all the weirder.

                      • z3t4 4 hours ago

                        My experience is that I've applied to companies where I was a perfect fit but did not get an interview, and then I've applied to companies where I had not used any of their tech stack and still got an interview... There are a lot of weird reasons. A common one is that they want to hire a specific person, maybe someone already in the company, but they still need to post a job ad due to company policy. In one case I got an interview even though I had no experience in their tech stack; they explained they needed to do at least 5 interviews before they could hire, and they had already found their guy, so they interviewed unqualified candidates so that their candidate/friend would stand out as the most qualified...

                        So never take hiring personally. It's just random. Do enough work to get an interview; many employers are very good at judging whether you will fit in or not, so just leave it to them to figure that out, and be yourself. And don't take it personally when you get rejected. There's still a shortage of experienced software engineers, and lots of jobs to apply to.

                        Also, if you get a bad feeling, just back out. It's when you've started turning down offers that you have become good enough at searching/interviewing, and that's when you will find something great. Try to have at least 3 offers before you accept one.

  • prisenco 16 hours ago

    The industry (all industries, really) might want to reconsider online applications, or at least privilege in-person resume drop-offs, because the escalating AI application/evaluation war doesn't seem to be helping anyone.

    • datavirtue 14 hours ago

      No, it's because AI shifted power over to the applicant.

      • bdangubic 14 hours ago

        this is a very strange statement. in what world did AI possibly shift power to the applicant?? applicants have almost never been in a shittier position than they are now, and things are getting much, much worse by the day

        • ethin 9 hours ago

          Yeah, I don't get this either. I've been looking for a job for like 3-4 years, even an entry-level one, since I graduated college in May of '22, and I still haven't found one. I'm probably doing something wrong (and that's a different discussion), but it's getting harder and harder to know whether it's me, the AI applicants, or the AI ATS systems. And then we have the AI job seekers, which are AI-created accounts trying to find employment -- I've already started to see a few of these pop up on LinkedIn. They were banned, but still, the fact that it's happening at all is a bit worrying, if not predictable.

      • prisenco 14 hours ago

        How so? Tons of companies are moving to AI automated intake systems because they're getting flooded with low-quality AI-generated resumes. Of course, the original online application systems were terrible already, which is what encouraged people towards low effort in their applications, so it's become a stalemate.

      • hibikir 8 hours ago

        Did it? What I see instead is total mistrust of the open resume pool, because the percentage of outright lies, from the resume to the behavioral to everything else, is just that high. So I see companies throwing up their hands and going back to maximum prioritization of in-network candidates, where someone vouches that the candidate is not a total waste of everyone's time.

        The one who loses all power is the new junior straight out of school, who was already difficult to distinguish from many other candidates with similar resumes: now they compete with thousands upon thousands of total fakes who claim more experience anyway.

  • md-ayaz-me an hour ago

    I second this. My previous company moved to in-person interviews for this very reason.

  • hibikir 8 hours ago

    Some can be quite good at the cheating: at least good enough to get through multiple layers. I've been in hiring meetings where I was the only one of 4 rounds that caught the cheating, and they were even cheating in the behaviorals. I've also been in situations with a second interviewer where the other interviewer was completely oblivious, even when it was clear I was basically toying with the guy reading from the AI, leading the conversation in unnatural ways.

    Detection of AI in remote interviews, behavioral and technical, just has to be taught today if you are ever interviewing people that don't come from in-network recommendations. Completely fake candidates are way too common.

  • paxys 14 hours ago

    > it's wild to me how many candidates think they can get away with it

    Remember that you are only catching the candidates who are bad at cheating.

    • johnnyanmac 10 hours ago

      That's fine. The ones who are "good cheaters" are probably smarter than many honest people. Think back to those school days when your smartest peers were cheating anyway, despite having taught you the material organically earlier on. Those kinds of cheaters do it to turn an A into an A+, not because they don't understand the material.

  • IG_Semmelweiss 11 hours ago

    I'm using AI for interview screeners for nontechnical roles that require knowledge work. The AI interviewing app is very, very basic; it's just a wrapper put together by an eng, with enough features to prevent cheating.

    Start with recording the session and blocking right-click, and you are halfway there. It's not hard.
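
    In browser terms, those first guards are just a couple of event listeners. A minimal TypeScript sketch (illustrative only; installGuards and the logging are made-up names, not the actual app):

      // Block right-click and flag paste events during the session.
      // A real screener would report these alongside the recording;
      // here we just log them.
      function installGuards(report: (reason: string) => void): void {
        document.addEventListener('contextmenu', (e: MouseEvent) => {
          e.preventDefault(); // suppress the context menu
          report('right-click attempted');
        });
        document.addEventListener('paste', () => report('paste detected'));
      }

      installGuards((reason) => console.log(`[screener] ${reason}`));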

    The AI app has helped me surface top candidates. I don't even look at resumes anymore. There's no point. I interview the top 10 out of 200, and then do my references and select.

    • zx8080 9 hours ago

      If there's no independent verification, how do you know it's really top 10? (not middle 10, or random 10?)

  • topkai22 7 hours ago

    I haven't been doing that much interviewing, but of the dozen or so candidates I've had, I don't think a single one has tried to use AI. I almost wish they would, as then at least I'd get past the first half of the question…

  • wsintra2022 15 hours ago

    Is it cheating if I can solve the problem using the tools of AI, or is it just solving the problem?

    • dahart 14 hours ago

      Interviews aren’t about solving problems. The interviewer isn’t interested in a problem’s solution, they’re interested in seeing how you get to the answer. They’re about trying to find out if you’ll be a good hire, which notably includes whether you’re willing and interested in spending effort learning. They already know how to use AI, they don’t need you for that. They want to know that you’ll contribute to the team. Wanting to use AI probably sends the wrong message, and is more likely to get you left out of the next round of interviews than it is to get you called back.

      Imagine you need to hire some people, and think about what you’d want. That’ll answer your question. Do you want people who don’t know but think AI will solve the problems, or do you want people who are capable of thinking through it and coming up with new solutions, or of knowing when and why the AI answer won’t work?

      • nyarlathotep_ 11 hours ago

        > They’re about trying to find out if you’ll be a good hire, which notably includes whether you’re willing and interested in spending effort learning

        I admire this worldview, and wish for it to be true, but I can't help but see it in conflict with much of what floats around these parts.

        There's a recent thread on Aider where the authors proudly proclaim that ~80% of the code is written by Aider itself.

        I've no idea what to make of the general state of the programming profession at the moment, but I can't help but feel learning various programming trivia has a lower return on investment than ever.

        I get learning the business and the domain and so on, but it seems like we're in a fast race to the bottom where the focus is on making programmers' skills as redundant as possible, as soon as possible.

        • johnnyanmac 10 hours ago

          >I admire this worldview, and wish for it to be true, but I can't help but see it in conflict with much of what floats around these parts.

          Honest interviewers may not realize how dishonest other interviewers have become in just the last 2-3 years. Interviewing today compared to COVID times is night and day, let alone compared to the gold rush of the 2010s.

          The respect is long gone.

      • icecube123 12 hours ago

        Isn't one way of solving the problem using all the tools at your disposal? At the end of the day, isn't having working code the fundamental goal? I guess you could argue that the code needs to be efficient, stable, and secure, but if you can use "AI" to get partway there, then use smarts to finish it off, isn't that reasonable? (Devil's advocate.) The other big question is the legality of using code from an AI in a final commercial product.

        • dahart 12 hours ago

          Yes that’s a fair question. Some companies do allow LLMs in interviews and on the job. But again the solution isn’t what the interviewer wants, so relying on an LLM gives them no signal about your intrinsic capabilities.

          Keep in mind that the amount of time you spend in a real job solving clear and easy interview style problems that an LLM can answer is tiny to none. Jobs are most often about juggling priorities and working with other people and under changing conditions, stuff Claude and ChatGPT can’t really help you with. Your personality is way more important to your job success than your GPT skills, and that’s what interviewers want to see… your personality & behavior when you don’t know the right answer, not ChatGPT’s personality.

      • davely 12 hours ago

        > Interviews aren’t about solving problems.

        Eh, I wish more people felt that way. I have failed so many interviews because I haven't solved the coding problem in time.

        The feedback has always been something along the lines of "great at communicating your thoughts, discussing trade-offs, having a good back and forth" but "yeah, ultimately really wanted to see if you could pass all the unit tests."

        Even in interview panels I've personally been a part of, one of the things we evaluate (heavily) is whether the candidate solved the problem.

    • twoparachute45 14 hours ago

      If you've been given the problem of "without using AI, answer this question", and you use an AI, you haven't solved the problem.

      The ultimate question that an interview is trying to answer is not "can this person solve this equation I gave them?", it's usually something along the lines of "does this person exhibit characteristics of a trustworthy and effective employee?". Using AI when you've been asked not to is an automatic failure of trust.

      This isn't new or unique to AI, either. Before AI people would sometimes try to look up answers on Google. People will write research papers by looking up information on Wikipedia. And none of those things are wrong, as long as they're done honestly and up front.

    • jacobsenscott 14 hours ago

      If you are pretending to have knowledge and skills you don't have, you are cheating. And if you have the required knowledge and skill, AI is a hindrance, not a help: you can solve the problem easily without it. So is using AI cheating? IDK, but logically you wouldn't use AI unless you were cheating.

      • NotMichaelBay 13 hours ago

        Knowledge and skill are two different things. Sometimes interviewers test that you know how to do something, when in practice it's irrelevant if you A) know how to retrieve that knowledge and B) know when to retrieve it.

        • jacobsenscott 12 hours ago

          There is foundational knowledge you must have memorized through a combination of education and experience to be a software developer. The standard must be higher than "can use google and cut and paste." The answer can't always be - "I don't need to be able to recall that on command, I can google/chatgpt that when I end up needing it." Would you go to a surgeon who says "I don't need to know exactly where the spleen is, I can simply google it during surgery."

    • isbvhodnvemrwvn 14 hours ago

      For the goal of the interview - showing your knowledge and skills - you are failing miserably. People know what LLMs can do, the interview is about you.

    • risyachka 15 hours ago

      I guess it's more a question of whether you can solve the problem without AI.

      In most interview tasks you are not solving the task "with" AI.

      It's the AI that solves the task while you watch it do it.

  • aprilthird2021 5 hours ago

    I'm at the same company I think. I don't get why we can't just use some software that monitors clicking away or tabbing away from the window, and just tell candidates explicitly that we are monitoring them, and looking away or tabbing away will appear suspect.
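
    For what it's worth, the detection half is a couple of standard browser events. A minimal TypeScript sketch (the console logging is illustrative; a real tool would report each event to the interviewer):

      // Count how often the candidate leaves the interview tab/window.
      let strikes = 0;

      window.addEventListener('blur', () => {
        strikes++;
        console.log(`[monitor] window lost focus (strike ${strikes})`);
      });

      document.addEventListener('visibilitychange', () => {
        if (document.visibilityState === 'hidden') {
          strikes++;
          console.log(`[monitor] tab hidden (strike ${strikes})`);
        }
      });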

  • guywithahat 13 hours ago

    I mean, they could be googling things; I've definitely googled stuff during an interview. I do think in-person interviews are important though; I did some remote final interviews with Amazon and they were all terrible.

ryan-duve 18 hours ago

My startup got acquired last year so I haven't interviewed anyone in a while, but my technical interview has always been:

- share your screen

- download/open the coding challenge

- you can use any website, Stack Overflow, whatever, to answer my questions as long as it's on the screenshare

My goal is to determine if the candidate can be technically productive, so I allow any programming language, IDE, autocompleter, etc, that they want. I would have no problem with them using GPT/Copilot in addition to all that, as long as it's clear how they're solving it.

  • shaneoh 16 hours ago

    I recently interviewed for my team and tried this same approach. I thought it made sense because I want to see how people can actually work and problem solve given all the tools at their disposal, just like on the job.

    It proved to be awkward and clumsy very quickly. Some candidates resisted it, since they clearly thought it would get them judged more harshly. Some candidates were on the other extreme and basically tried asking ChatGPT the problem straight up, even though I clarified up front "You can even use ChatGPT as long as you're not just directly asking for the solution to the whole problem and just copy/pasting, obviously."

    After just the initial batch of candidates it became clear it was muddying things too much, so I simply forbade using it for the rest of the candidates, and those interviews went much smoother.

    • mmh0000 14 hours ago

      Over the years, I've walked out of several "live coding" interviews. Arguably though, if you're looking for "social coders", maybe the interview is working as intended?

      But for me, it's just not how my brain works. If someone is watching me, I'll be so self-conscious the entire time you'll get a stream of absolute nonsense that makes me look like I learned programming from YouTube last night. So it's not worth the time.

      You want some good programming done? I need headphones, loud music, a closed door and a supply of Diet Coke. I'll see you in a few hours.

      • AznHisoka 14 hours ago

        Yep, if I'm forced to talk through the problem, I'll force myself to go through the various things you might want to hear, rather than what I would actually do.

        Whereas my natural approach would be to take a long shower, work out, etc., and let my brain wander a bit before digging into it. But that wouldn't fly during an interview...

      • shaneoh 3 hours ago

        Ironically, this is exactly how I am too. Even at work, if I'm talking through a problem in a presentation or with my boss, I'm much more scatterbrained, and I'll try to dodge those kinds of calls with "Just give me 30 minutes and I'll figure it out," which always goes better for me.

        That said, now we're just talking about take-home challenges for interviews, and you always hear complaints about those too. And shorter, async timed challenges (something like "Here's a few hours to solve this problem, I'll check back in later") are going to be way more difficult to judge now that AI is ubiquitous.

        So I really don't think there's any perfect methodology out there right now. The best I can come up with is to get the candidate in front of you and talk through problems with them. The best barometer I found so far was to set up a small collection of files making up a tiny app and then have candidates debug it with me.

      • 946789987649 an hour ago

        What do you do if a junior asks for help and it's easiest to code through with them?

      • joquarky 12 hours ago

        I need my default mode network to produce good code, and I don't talk while it's active

    • 946789987649 an hour ago

      I've had a few people chuck the entire problem into ChatGPT, and it was still very useful in a few ways:

      - You get to see how they then review the generated code: do they spot potential edge cases which the AI missed?

      - When I ask them to make a change not in the original spec, a lot of them completely shut down, because they either didn't understand the generated code well enough, or they didn't really know how to code at all.

      And you still get to see people who _do_ know how to use AI well, which at this point is a must for its overall productivity benefits.

    • Aeolun 14 hours ago

      What are you supposed to ask chatGPT if you can’t just ask it the answer? That’d confuse me too.

      • shaneoh 3 hours ago

        One example would be looking up syntax and common functions. In a high-pressure situation it's much tougher to bumble around Google and Stack Overflow, so this would be a way of solving for "I totally know how to do this thing but it's just not coming to mind at this moment", which is fair. Usually we interviewers can obviously just tell them ourselves though, but that's what I was going for.

        But yeah, the point is that once I applied it in practice it did quickly become confusing, so now I know from experience not to use it.

        I think the other suggestions in this thread about how to use it are good ones, but they would present their own meta challenges for an interview too. Just about finding whatever balance works for you I guess.

      • no-reply 13 hours ago

        Some part of the problem statement you want help with (rather than a complete answer)?

        • Aeolun 10 hours ago

          I mean, that’s obvious, but also incredibly silly if I know it can give me both the answer and the reasoning behind it.

          The challenge should be in determining if ChatGPT is correct.

      • datavirtue 14 hours ago

        Just another interview methodology pulled out of someone's ass. They don't know.

        • edanm 7 hours ago

          As opposed to all other interviewing methodologies, which are rigorously tested?

          Unfortunately in our industry it's pretty much all personal anecdotes on what works better and what doesn't.

    • raincole 6 hours ago

      > "You can even use ChatGPT as long as you're not just directly asking for the solution to the whole problem and just copy/pasting, obviously."

      No, it's not "obvious" whatsoever. Actually it's obviously confusing: why are you allowing them to use ChatGPT but forbidding them from asking it for the solution directly? Do you want an employee who is productive at solving problems, or someone who guesses your intentions better?

      If AI is an issue for you then just ban it. Don't try to make the interview a game of who outsmarts whom.

      • shaneoh 3 hours ago

        See my answer to the other comment on this question. We figured there were some good use cases for AI in an interview that weren't just copy/pasting code; it's not about guessing intentions. It seemed most helpful to unstick candidates from specific parts of the problem if they were drawing a blank under pressure, basically just an easier "You can look it up on Google" that would burn less time for them. However, we quickly found it was just easier for us to unstick them ourselves.

        > If AI is an issue for you then just ban it.

        Yes, that was the conclusion I just said we rapidly came to.

    • layer8 15 hours ago

      Did you tell them that you “want to see how people can actually work and problem solve given all the tools at their disposal, just like on the job”? Just curious.

      • shaneoh 3 hours ago

        Yup, we told them exactly that.

    • skinner927 11 hours ago

      Maybe come up with a problem that isn’t so simple you can just ask it to ChatGPT. Create some context that would be difficult/tedious to convey.

  • bagels 16 hours ago

    If you really don't penalize them for this, you should clearly state it. Some people may still think they'll be penalized as that is the norm.

  • staticautomatic 16 hours ago

    I did this while hiring last year and the number of candidates who got stuff wrong because they were too proud to just look up the answer was shocking.

    • prisenco 16 hours ago

      Is it pride or is it hard to shake the (reasonable, I'd say) fear the reviewer will judge regardless of their claims?

      • ryandrake 15 hours ago

        Exactly. You never know. Some interviewers will penalize you for not having something memorized and having to look it up, some will penalize you for guessing, some will penalize you for simply not knowing and asking for help. Some interviewers will penalize you for coming up with something quick and dirty and then refining it, some will penalize you for jumping right to the final product. There's no consistency.

      • staticautomatic 8 hours ago

        I do what I can to allay that fear. The rest is up to them.

  • hibikir 8 hours ago

    The difficulty of your questions has to change drastically if candidates are using good tooling. Many a problem that would take a reasonable candidate half an hour to figure out is 'free' for Claude, so your question might not show any signal. And if you tweak your questions to make sure they can't be auto-solved by a strong enough AI, then you'd better say AI is semi-required, because the difficulty level of the question you need to ask goes up quite a bit.

    Some of the questions in our interview loop have been posted on GitHub... which means every AI has trained on them specifically. They are, therefore, useless if you have AI turned on. And if you interview enough people, someone will post your question on GitHub, so it will have a pretty short shelf life before it's in the training data and instantly solved.

  • silasdavis 17 hours ago

    I don't care what makes you good at it, so long as I can watch.

  • random_walker 16 hours ago

    I love these kinds of interviews. This would very closely simulate real-world on-the-job performance.

    • dalmo3 3 hours ago

      If I had to do real world on-job coding while someone looks over my shoulder at all times (i.e. screensharing), I'd be flipping burgers.

  • blazing234 13 hours ago

    I'm not doing any coding challenges that aren't real-world.

    If I see anything remotely challenging I dip out. Interviewing is just a numbers game nowadays, so I don't waste time on interviews that seem like they're gonna burn me out for the rest of the day. Granted, I have 11 years of experience.

  • bbarnett 17 hours ago

    I'd be fine with the GPT side of things, as long as I could somehow inject poor answers, and see if the interviewee notices and corrects.

    • cpursley 17 hours ago

      That's actually a horribly awesome idea.

    • htrp 16 hours ago

      The trick is to phrase the problem in a way that GPT-4 will always give an incorrect answer (due to the vagueness of your problem), so that multiple rounds of guiding/correcting are needed to solve it.

      • gtirloni 16 hours ago

        That's pretty good because it can exhaust the context window quickly and then it starts spiraling out of control, which would require the candidate to act.

        • htrp 15 hours ago

          If you only use ChatGPT to code, you are only able to copy-paste the LLM-emitted code and then ask for changes to it (to reflect, for example, the evolution of the product).

    • hibikir 8 hours ago

      There's more than one possible AI on the other end, so crafting something that will not annoy a typical candidate, but will lead every AI astray seems pretty difficult.

      • notpushkin 3 hours ago

        Maybe you could allow using AI, but only through the interviewer-provided interface. That interface would allow using any model the candidate likes, but before sending the response it will inject errors into the code (either randomly or through another AI prompt).
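
        A minimal TypeScript sketch of such a middleman (entirely hypothetical; proxiedCompletion, the injected callModel, and the 30% rate are made up for illustration):

          // Forward the candidate's prompt to their chosen model,
          // then occasionally mutate the code that comes back.
          async function proxiedCompletion(
            prompt: string,
            callModel: (p: string) => Promise<string>,
          ): Promise<string> {
            const answer = await callModel(prompt);
            // ~30% of the time, weaken a comparison: a classic off-by-one.
            if (Math.random() < 0.3 && answer.includes('<=')) {
              return answer.replace('<=', '<');
            }
            return answer;
          }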

  • yieldcrv 11 hours ago

    > I would have no problem with them using GPT/Copilot in addition to all that, as long as it's clear how they're solving it.

    Too many people are the opposite of that, so I would literally never tell you.

    And this works.

    What can we do to help that?

    I’ve had interviews where AI use was encouraged as well.

    But so many casual tirades against it don't make me want to ever try being forthcoming. Most organizations are realistically going to be 10 years behind the curve on this.

  • OutOfHere 16 hours ago

    [flagged]

    • evilduck 16 hours ago

      It's pretty obvious when someone's input focus changes to nothing or when their mouse leaves the screen entirely, or you could just ask to see their display settings at the start. That doesn't solve for multiple computers, but it's pretty obvious in real time when someone's attention drifts or they suddenly have abilities they didn't have before.

      Either way, screen sharing beats whiteboards. Even if we throw our hands up and give up, we'll be firing frauds before the probationary period ends.

      • OutOfHere 15 hours ago

        There is nothing fraudulent about using LLMs. If people can use them on the job, it's okay to use them on the interview. They're the calculators of tomorrow if not of today.

        Interviewing just needs to adapt such as by assessing one's open source projects and contributions. Not much more is needed. And if the candidate completely misrepresents their open source profile, this can be handled by an initial contract-to-hire period.

        • evilduck 9 hours ago

          Using AI secretly in an interview setting, where you were told the constraints excluded it or that everything had to be on the screen share even if it was permitted, is fraudulent behavior. It's not much different than having a surrogate interviewee at that point. You'd only be doing it to deceive the interviewer.

          Open source contributions are a bad metric for interviewing too. People have lives outside a computer; if they aren't doing open source contributions in their free time outside of work, I wouldn't hold that against them. If someone has them, that's great and I'd take a look, but I'm not disqualifying someone else for not working for free. Someone doing OSS as an interviewing badge of honor is a chump in my book. At least do it for principled reasons.

        • kortilla 15 hours ago

          > There is nothing fraudulent about using LLMs.

          There is if you’re asked not to.

          • OutOfHere an hour ago

            Negative. They are not the law.

        • bigstrat2003 14 hours ago

          I agree that there's nothing fraudulent about using a tool you would use on the job when you are interviewing. But in no way are LLMs equivalent to calculators. Calculators actually give the correct answer reliably, unlike LLMs. A sporadically reliable tool is worse than no tool at all.

          • OutOfHere an hour ago

            LLMs have come a long way. If you give o3-mini the same interview question five times, chances are good that it will get it right all five times. Yes, it's not a calculator, but it's approaching one.

explorigin 19 hours ago

Part of my resume review process is trying to decide if I can trust the person. If their resume seems too AI-generated, I feel less like I can trust that candidate and typically reject the candidate.

Once you get to the interview process, it's very clear if someone thinks they can use AI to help with the interview process. I'm not going to sit here while you type my question into OpenAI and try to BS a meaningful response to my question 30 seconds later.

AI-proof interviewing is easy if you know what you're talking about. Look at the candidate's resume and ask them to describe some of their past projects. If they can have a meaningful conversation without delays, you can probably trust their resume. It's easy to spot BS whether AI is behind it or not.

  • brianstrimp 17 hours ago

    Good interviews are a conversation, a dialog to uncover how the person thinks, how they listen, how they approach problems and discuss them. Also a bit of detail knowledge, but that's only a minor component in the end. Any interview where AI in its current form helps is not good anyway. Keep in mind that in our industry the interview goes both ways: if the candidate thinks your process is bad, they are less inclined to join your company, because they know their coworkers will have been chosen by a subpar process.

    That said, I'm waiting for an "interview assistant" product. It listens in to the conversation and silently provides concise extra information about the mentioned subjects that can be quickly glanced at without having to enter anything. Or does this already exist?

    Such a product could be useful for coding too. Like watching over my shoulder and seeing: aha, you are working with so-and-so library, let me show you some key parts of the API in this window; or, you are trying to do this-and-that, let me give you some hints. Not as intrusive as current assistants that try to write code for you, just some proactive lookup without having to actively seek out information. Anybody know a product for that?

    • kmoser 16 hours ago

      That might be good for newbie developers, but for the rest of us it'll end up being the Clippy of AI assistants. If I want to know more about an API I'm using, I'll Google (or ask ChatGPT) for details; I don't need an assistant trying to be helpful and either treating me like a child or giving me info that may be right but which I don't need at the moment.

      The only way I can see that working is if it spends hundreds of hours watching you to understand what you know and don't know, and even then it'll be a bit of a crap shoot.

    • sien 15 hours ago

      I'm pretty sure I've been in an interview with an 'interview assistant' and that it was another person.

      This was 2-3 years ago in a remote interview. The candidate would hear the question, BS us a bit and then sometimes provide a good answer.

      But then if we asked follow up questions they would blow those.

      They also had odd 'AV issues' which were suspicious.

  • ktallett 19 hours ago

    This, and tbh this has always been the best way. Someone who has projects, whether personal or professional, and has the capability to discuss those projects in depth and with passion will usually be a better employee than a leet code specialist.

    • remus 17 hours ago

      Doesn't even have to be a project per se, if they can discuss some sort of technical topic in depth (i.e. the sort of discussion you might have when discussing potential solutions to a problem) then that's a great sign imo.

    • polishdude20 8 hours ago

      My resume has a bunch of personal projects on it as well as work experience, and the project experience seems to not help at all. Just rejection after rejection.

      • ktallett 4 hours ago

        My suggestion was for an ideal world, which sadly this isn't. Your issue suggests the projects aren't tailored to each application, which could potentially be a reason. It is better to show why one project makes you a great fit than how many projects you have done. Sometimes the person in charge of hiring may not fully have all the expertise in the area they are hiring for.

  • yosito 10 hours ago

    > If their resume seems too AI-generated, I feel less like I can trust that candidate and typically reject the candidate

    So you just subjectively say "this resume is too perfect, it must be bullshit"? How the fuck is any actual, qualified engineer supposed to get through your gauntlet of subjectivity?

    • colonial 8 hours ago

      You'd be surprised at how good you can get at sniffing out slop, especially when it's the type prompted by fools who think it'll get them an easy win. Often the actual content doesn't even factor in - what triggers my mental heuristics is usually meta stuff like tone and structure.

      I'm sure some small % of people get away with it by using LLaMA-FooQux-2552-Finetune-v3-Final-v1.5.6 or whatever, but realistically, the majority is going to be obvious to anyone that's been force-fed slop as part of their job.

    • alok-g 4 hours ago

      The strong language used aside, indeed, we should be cautious of our own potential biases when screening or otherwise.

      I am imagining an AI saying my CV is AI-generated, when in reality, I do not even use Auto-correct or Auto-suggest when I (type)write! :-)

  • vunderba 18 hours ago

    Agreed. This is why, while I won't ding an applicant for not having a public GitHub, I'm always happy when they do, because usually they'll have some passion projects on there that we can discuss.

    • pdimitar 16 hours ago

      I have 23 years of experience and I am almost invisible on GitHub. In all those years I've been fired from 4 contracts due to various disconnects (one culture misfit, two under-performances due to an illness I wasn't aware of at the time, and one company that literally restructured over the weekend and fired 80% of all engineers), and I have been contracting a lot in the last 10 years (we're talking 17-19 gigs).

      If you look solely at my GitHub you'd likely reject me right away.

      I wish I had the time and energy for passion projects in programming. I so wish it was so. But commercial work has all but destroyed my passion for programming, though I know it can be rekindled if I can ever afford to take a properly long sabbatical (at least 2 years).

      I'll more agree with your parent / sibling comments: take a look at the resume and look for bad signs like too vanilla / AI language, too grandiose claims (though when you are experienced you might come across as such so 50/50), or almost no details, general tone etc.

      And the best indicator is a video call conversation, I found as a candidate. I am confident in what I can do (and have done), I am energetic and love to go for the throat of the problems on my first day (provided the onboarding process allows for it) and it shows -- people have told me that and liked it.

      If we're talking passion, I am more passionate about taking a walk with my wife and discussing the current book we're reading, or getting to know new people, or going to the sauna, or wondering what's the next meetup we should be going to, stuff like that. But passion + work, I stand apart by being casual and not afraid of any tech problems, and by prioritizing being a good teammate first and foremost (several GitHub-centric items come to mind: meaningful PR comments and no minutiae, good commit messages, proper textual comment updates in the PR when f.ex. requirements change a bit, editing and re-editing a list of tasks in the PR description).

      I already do too much programming. Don't hold it against me if I don't live on the computer and thus have no good GitHub open projects. Talk to me. You'll get much better info.

      • johnnyanmac 10 hours ago

        Ironically, I'd probably have more GitHub projects if I hadn't spent 20 months looking for a full-time job.

        And tbh, at the senior level they rarely care about personal projects. I must have had 60+ interviews, and I feel the lack of a GitHub cost me maybe 2 positions. When your job is getting a job, you rarely have the time for passion. I'm doing contract work in the meantime; it prevents gaps from showing, is more appealing than a personal project, and I can talk about it to the extent my NDA allows (plenty of tech to talk about without revealing the project).

        • pdimitar 9 hours ago

          > Iroincally I'd probably have more github projects if I didn't spend 20 months looking for a full-time job.

          Same. I could afford not working throughout most of 2023 but I had to deal with ruined health + my leeway didn't last as long as I hoped so I was back on the treadmill just when I was starting to enjoy some freedom and a peace of mind.

          > And tbh, at the senior level they rarely care about personal projects. I must have had 60+ interviews and I feel a lack of a github cost me maybe 2 positions.

          I have no idea how much it cost me, but I was told in no uncertain terms 10+ times that having a GitHub portfolio would have meant no take-home assignment, and skipping parts of the interview I already attended. So it definitely carries weight _and_ can help shorten hiring processes.

          So I don't feel it was a deal-breaker for the people who interviewed me, but I think it would have netted me more points, so to speak.

          Assuming you are graded and are the same person:

          Without portfolio: 7/10

          With portfolio: 8/10

          ...for example.

          > I'm doing contract work in the meantime

          Same x2, but it's mentally draining. No stability. That removes future bandwidth that would have been used for those passion projects.

          TL;DR a lot of things conspire to rob you of your creative potential. :(

      • aakresearch 12 hours ago

        Brilliantly put! Upvoted and "favorited".

        I would also add meticulous attention to documenting requirements and decisions taken along the development process, especially where compromises were made. All the "why's", so to speak.

        But yes, commercial development, capital-A "Agile" absolutely kills the drive.

        • pdimitar 12 hours ago

          Thank you. <3

          And yep I didn't want to make my comment too big. I make double sure to document any step-by-step processes on "how to make X or Y work", especially when I stumble upon a confusing bug in a feature branch. I go out of my way to devise a 100% reproducible process and document it.

          Those, plus yours, and even others, are what makes a truly good programmer IMO.

    • satvikpendem 17 hours ago

      Also because most people are busy with actual work and don't have the time to have passion projects. Some people do, and that's great, but most people are simply not passionate about labor, regardless of what kind of labor it is.

    • AlgebraFox 8 hours ago

      I really hate those who ask for GitHub profiles. Mine is pseudo-anonymous and I don't want to share it with my employer or anyone else I don't want to. Privacy aside, I do not understand why a company would even expect the candidate to have unpaid contributions in the first place. Can't the candidate have other hobbies to enjoy or learn from?

    • nyrikki 16 hours ago

      To add to this, lots of senior people in the consulting world are brought in under escalations. They often have to hide the fact that they are an external resource.

      Also, if you have a novel or disclosure-sensitive passion project, you may avoid GitHub entirely as a very conservative bright line.

      As stated above I think it can be good to find common points to enhance the interview process, but make sure to not use it as a filter.

  • RajT88 9 hours ago

    > AI-proof interviewing is easy if you know what you're talking about. Look at the candidate's resume and ask them to describe some of their past projects. If they can have a meaningful conversation without delays, you can probably trust their resume. It's easy to spot BS whether AI is behind it or not.

    Generally, this is how to figure out if a candidate is full of crap or not. When they say they did a thing, ask them questions about that thing.

    If they can describe their process, the challenges, how they solved the challenges, and all of it passes the sniff test, great. And if they are bullshitting, they did crazy research, and that's worth something too.

  • satvikpendem 19 hours ago

    There are much more sophisticated methods than that now with AI, like speech-to-text feeding an LLM. It's getting increasingly hard to detect interviewees cheating.

    • yowlingcat 18 hours ago

      I think GP's point is that this says as much about the interview design and interviewer skill as it does about the candidate's tools.

      If you do a rote interview that's easy to game with AI, it will certainly be harder to detect them cheating.

      If you have an effective and well designed open ended interview that's more collaborative, you get a lot more signal to filter the wheat from the chaff.

      • satvikpendem 17 hours ago

        > If you have an effective and well designed open ended interview that's more collaborative, you get a lot more signal to filter the wheat from the chaff.

        I understood their point, but my point is in direct opposition to theirs: at some point, with AI advances, this will essentially become impossible. You can make it as open ended as you want, but if AI continues to improve, the human interviewee can simply act as a ventriloquist dummy for the AI and get the job. Stated another way, what kind of "effective and well designed open ended interview" can you make that would not succumb to this problem?

        • gopher_space 16 hours ago

          > at some point with AI advances this will essentially become impossible.

          In-person interviews, second round comes with a plane ticket. This used to be the norm.

        • yowlingcat 14 hours ago

          My POV comes from someone who's indexed on what works for gauging technical signal at startups, so take it for what it's worth. A lot of what I gauge for is a blend of not just technical capability, but the ability to translate that into prudent decisions with product instincts around business outcomes. AI is getting better at solving technical problems it's seen before in a black box, but it struggles to tailor solutions to the context you give it: pre-existing constraints around user behavior, existing infrastructure/architecture, business domain, and resources.

          To be fair, many humans do too, but many promising candidates even at the mid-level band of experience who thrive at organizations I've approved them into are able to eventually get to a good enough balance of many tradeoffs (technical and otherwise) with a pretty clean and compact amount of back and forth that demonstrates thoughtfulness, curiosity and efficacy.

          If someone can get to that level of capability in a technical interviewing process using AI without it being noticeable, I'd be really excited about the world. I'm not holding my breath for that, though (and having done LOTS of interviews over the past few quarters, it would be a great problem to have).

          My solution, if I were to have the luxury of having that problem, would be a pretty blunt instrument -- I'd instead change my process to actually have AI use of tools be part of the interviewing process -- I'd give them a problem to solve, a tuned in-house AI to use in solving the problem, and have their ability to prompt it well, integrate its results, and pressure check its assumptions (and correct its mistakes or artifacts) be part of the interview itself. I'd press to see how creatively they used the tool -- did they figure out a clever way to use it for leverage that I wouldn't have considered before? Extra points for that. Can they use it fluidly and in the heat of a back and forth of an architectural or prototyping session as an extension of how they problem solve? That will likely become a material precondition of being a senior engineer in the future.

          I think we're still a few quarters (to a few years) away from that, but it will be an exciting place to get to. But ultimately, whether they're using a tool or not, it's an augment to how they solve problems and not a replacement. If it ever gets to be the latter, I wouldn't worry too much -- you probably won't need to do much hiring because then you'll truly be able to use agentic AI to pre-empt the need for it! But something tells me that day (which people keep telling me will come) will never actually come, and we will always need good engineers as thought partners, and instead it will just raise the bar and differentiation between truly excellent engineers and middle of the pack ones.

        • bbarnett 17 hours ago

          This is called fraud, and it is a crime.

          People don't really call the police, nor sue over this. But they can, and have in the past.

          If it gets bad, look for people starting to seek legal recourse.

          People aren't developers with 5 years of experience if all they can do is copy and paste. Anyone fraudulently claiming so is a scam artist, a liar, and deserves jail time.

          So you create an interview process that can only be passed by a skilled dev, including them signing a doc saying the code is entirely their work, only referencing a language manual/manpages.

          And if they show up to work incapable of doing the same, it's time to call the cops.

          That's probably the only way to deal with scam artists and scum, going forward.

          • le-mark 16 hours ago

            Can you cite case law where someone misrepresented their capabilities in a job interview and was criminally prosecuted? Like, what criminal statute specifically was charged? You won't find it, because at worst this would fall under a contract dispute and hence civil law. Screeching "fraud is a crime" hysterically serves no one.

            • bbarnett 15 hours ago

              Fraud can be described as deceit to profit in some way. You may note the rigidity of the process above, where I indicated a defined set of conditions.

              It costs employers money to onboard someone, not just in pay but in other employees' time spent training that person. Obviously the case must be clear cut, but I've personally hired someone who clearly cheated during the remote phone interview and literally couldn't even code a function in any language in person.

              There are people with absolutely no background as a coder applying to jobs asking for 5 years of experience, then fraudulently misrepresenting the work of others as their own to get the job.

              That's fraud.

              As I said, it's not being prosecuted as such now. But if this keeps up?

              You can bet it will be.

              Because it is fraud.

          • nyarlathotep_ 11 hours ago

            > People aren't developers with 5 years experience, if all they can do is copy and paste. Anyone fraudulently claiming so is a scam artist, a liar, and deserves jail time.

            I won't name names, but there are a lot of Consulting companies that feed off Government contracts that are literally this.

            "Experience" means a little or a lot, depending on your background. I've met plenty of people with "years of experience" that are objectively terrible programmers.

            • bbarnett 7 hours ago

              Yet said poor programmers would never pass the test I specified, without committing fraud. That, and the other conditions I specified, ensure so.

              If the AI premise is true, then it's either this, or good programmers and good companies will never meet.

          • hackable_sand 16 hours ago

            You want to coerce work through violence?

  • hibikir 8 hours ago

    There are candidates running speech-to-text who avoid the noticeable delays, but it's still possible to do the right kind of digging, which the AI will almost always refuse to do, because it's way too polite.

    It's as if we were testing for replicants in Blade Runner: the AI response will rarely figure out that you are fishing for something frustrating that they are actually proud of, or notice when you are looking for a hot take you can then disagree with.

lolinder 17 hours ago

The traditional tech interview was always designed to optimize for reliably finding someone who was willing to do what they were told even if it feels like busywork. As a rule someone who has the time and the motivation to brush up on an essentially useless skill in order to pass your job interview will likely fit nicely as a cog in your machine.

AI doesn't just change the interviewing game by making it easy to cheat on these interviews, it should be changing your hiring strategy altogether. If you're still thinking in terms of optimizing for cogs, you're missing the boat—unless you're hiring for a very short term gig what you need now is someone with high creative potential and great teamwork skills.

And as far as I know there is no reliable template interview for recognizing someone who's good at thinking outside the box and who understands people. You just have to talk to them: talk about their past projects, their past teams, how they learn, how they collaborate. And then you have to get good at understanding what kinds of answers you need for the specific role you're trying to fill, which will likely be different from role to role.

The days of the interchangeable cog are over, and with them easy answers for interviewing.

  • nouveaux 16 hours ago

    Have you spent a lot of time trying to hire people? I guarantee you there is no shadow council trying to figure out how to hire "busywork" worker bees. This perspective smells completely like "If I were in charge, things would be so much better." Guess what? If you were to take your idea and try to lead this change across a 100-person engineering org, there would be "out of the box thinkers" who would go against your ideas and cause dissent. At that point, guess what? You're going to figure out how to hire compliant people who will execute on your strategy.

    "talk about their past projects, their past teams, how they learn, how they collaborate"

    You have now excluded amazing engineers who suck at talking about themselves in interviews. They may be great collaborators and communicators, but freeze up selling themselves in an interview.

    • bodge5000 7 hours ago

      > You have now excluded amazing engineers who suck at talking about themselves in interviews. They may be great collaborators and communicators, but freeze up selling themselves in an interview.

      This is the job of a good interviewer. I've run the gauntlet from terrible to great answers to the exact same questions depending on the interviewer. If you literally just ask that question out of the blue, you'll either get a bad or rehearsed response. If you establish some rapport, and ask it in a more natural way, you'll get a more natural answer.

      It's not easy, but neither is being on the other side of the interview, and that's never been accepted as an excuse.

    • dakiol 16 hours ago

      My take is:

      - “big” tech companies like Google, Amazon, Microsoft came up with these types of tech interviews. And there it seems pretty clear that for most of their positions they are looking for cogs

      - The vast majority of tech companies have just copied what “big” tech is doing, including tech interviews. These companies may not be looking for cogs, but they are using an interview process that’s not suitable for them

      - Very few companies have their own interview process suitable for them. These are usually small companies, and therefore the number of engineers at such companies is too small to be worth taking into account (most likely, less than 1% of the audience here works at such companies)

      • nouveaux 12 hours ago

        And what is wrong with being a cog? Not everyone is going to invent the next AI innovation, and not everyone is cut out to build the next hot programming language.

        Bugs need to be fixed. Features need to be implemented. If it weren't for cogs, you'd have people just throwing new projects over the fence and dropping them 6 months after release. Don't want to be another cog? Join a startup; plenty of those are hiring. The reality is that when you work at a large company, you're one of 50,000 people. By definition, only 1% are in the top 1%.

        Someone has to wash the dishes and clear the tables. Let's stop looking down at jobs just because they're not hot and sexy. People who show up and provide value are great and should be appreciated.

        • johnnyanmac 10 hours ago

          >And what is wrong with being a cog?

          The interview process being a circus of how many hoops you'll jump through. Which in this case is upwards of 3 months of trivia, bureaucracy, and politics. And these days they don't even give you the grace of a response; they may just ghost you.

          But being a cog itself is personally fine. Work to live, not live to work. But leading people on only to drop them at a moment's notice is disrespectful of everyone's time. At least a 1-2 stage interview for a dishwasher or table busser only wastes a few hours per role applied to. Time is the most valuable resource we have; of course people want to use it carefully.

        • lolinder 9 hours ago

          > And what is wrong with being a cog?

          Human cogs are going to be phased out. I'm not an AI doomer who thinks engineers are going to be replaced across the board, but the need for a human being who functions like a robot is going away fast. We need humans to do what humans do well, and humans don't do well as cogs in a machine—machines are better at that role.

          The days of leetcode interviews are numbered not because they're too easy to cheat at, but because they were always optimizing for the wrong traits in most companies that cargo culted them, and even the companies that used them correctly (Big Tech) are going to rapidly need a different type of interview for the new types of hires they need.

    • dennis_jeeves2 15 hours ago

      > I guarantee you there is no shadow council trying to figure out how to hire "busywork" worker bees.

      The council itself is made of "busywork" worker bees. Slaves hiring slaves: the vast majority of IT interviewers and candidates are idiot savants; they know very little outside of IT, and don't even realize that there is more to life than IT.

    • northern-lights 15 hours ago

      > You have now excluded amazing engineers who suck at talking about themselves in interviews. They may be great collaborators and communicators, but freeze up selling themselves in an interview.

      This was the norm until perhaps the last 10-15 years of software engineering.

    • lolinder 9 hours ago

      > I guarantee you there is no shadow council trying to figure out how to hire "busywork" worker bees.

      I didn't say that. I said that this style of interview was designed to hire pluggable cogs. As others have noted, that was the correct move for Big Tech and was cargo culted into a bunch of other companies that didn't know why their interviews were shaped the way they were.

      > there would be "out of the box thinkers" who would go against your ideas and cause dissent. At that point, guess what? You're going to figure out how to hire compliant people who will execute on your strategy.

      In answer to your original question: yes, I'm actively involved in hiring at a 100+ person engineering org that hires this way. And no, we're not looking to figure out how to hire compliant people, we're hiring engineers who will push back and do what works well, not just act because an executive says so.

      > You have now excluded amazing engineers who suck at talking about themselves in interviews. They may be great collaborators and communicators, but freeze up selling themselves in an interview.

      Only if you suck at making people comfortable and at understanding different (potentially awkward) communication styles. You don't have to discriminate against people for being awkward, that's a choice you can make. You can instead give them enough space to find their train of thought and pursue it, and it does work—I recently sat in on an interview like that with someone who fits your description exactly, and we strongly recommended him.

  • dahart 14 hours ago

    > what you need now is someone with high creative potential and great teamwork skills.

    That’s exactly what we always needed, long before LLMs arrived. That’s why all the interviews I’ve seen or give already were designed to have conversations.

    I’m agreeing with you, but I’ve never seen these ‘interchangeable cog’ interviews you’re talking about.

    • lolinder 12 hours ago

      Right, I agree. The leetcode interviews are a bad fit for almost every company—they only made sense in the Googles and Microsofts that invented them and actually did want to optimize for cogs.

alkonaut 5 hours ago

It's not the solution itself that is interesting to me; it's first finding out whether the person can go through the motions of solving it: reading instructions, submitting solutions, etc. It filters out those who can't code at all or who can't read instructions, a surprisingly large chunk. If the person also pipes the problem through an LLM, good.

To then select a good developer I'd test communication skills. Have them communicate what the pros/cons of several presented solutions are. And have them critique their own solution. To ensure they don't have canned answers, I might just swap the problem/solutions for the in-person bit. The problem they actually solved and how they did it isn't very important. It's whether they could read and understand the problem, formulate multiple solutions, describe why one would be chosen over another. Being presented with a novel problem and being asked on the spot to analyze it is a good exercise for developers (Assuming software development is the job we're discussing here).

Just take the time to talk to people. The job is about reading and writing human language more than computer programming, especially with the arrival of AI, now that every junior developer is micromanaging even more junior AI colleagues.

shihab 14 hours ago

To get an idea of just how advanced cheating tools have become, take a look here:

https://leetcodewizard.io/

I think every interviewer and hiring manager ought to know about or be trained on these tools; your intuition about a candidate's behaviour isn't enough. Otherwise we will soon reach a tipping point where honest candidates are at a severe disadvantage.

  • paxys 14 hours ago

    Tbh I’m very happy these tools exist. If your company wants to ask dumb formulaic leetcode questions and doesn’t care about the candidate’s actual ability then this is what you deserve. If they can automate the interview so well then they should also be able to automate the job right? Or are your interview questions not representative of what the job actually entails?

    • shihab 13 hours ago

      I understand this sentiment for experienced developers; it is an imperfect signal. But what, in your opinion, is a better signal for juniors or new grads?

      Every alternative I can think of is either worse, or sounds nice but is impractical at scale.

      I don't know about you, but most interviewers out there don't have the ability to judge the technical merit of a bullshitter's contribution to a class or internship project in half an hour, especially if it's in a domain the interviewer has no familiarity with. And by the way, not all of them are completely dumb; they do know computer science, just perhaps not as well as an honest competitor.

      • johnnyanmac 10 hours ago

        > But what, in your opinion, is a better signal for juniors or new grads?

        They are juniors; I don't expect them to be experts, I expect eagerness and passion. They spent 4 or more years focusing on schooling: show me the results of your projects. Let them talk and see how well they understand what they did. Side projects are even better for standing out.

        And you know... apparently people can still fail fizzbuzz in 2025. If you really question their ability to code, ask the basics, not whether they can write a Sudoku verifier on the spot. If you aren't a sudoku game studio, I don't see the application beyond "can they work with arrays?"
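
        For reference, the entire bar being set there is a few lines (a plain TypeScript rendering of fizzbuzz):

          // Print 1..100, substituting Fizz/Buzz/FizzBuzz for multiples of 3/5/15.
          for (let i = 1; i <= 100; i++) {
            let out = '';
            if (i % 3 === 0) out += 'Fizz';
            if (i % 5 === 0) out += 'Buzz';
            console.log(out || String(i));
          }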

        > I don't know about you, but most interviewers out there don't have the ability to judge the technical merit of a bullshitter's contribution to a class or internship project in half an hour, especially if it's in a domain the interviewer has no familiarity with.

        Everyone has a different style. Personally I care a lot less about programming proficiency and a lot more about technical communication. If they only wrote 10 lines of code for a group project but can explain every aspect of the project as if they had written it themselves, what am I really missing? In my experience, that sort of technical reasoning accompanied by poor coding is a lot rarer than the alternative: a leetcode wizard who can't grasp architectural concepts or adjust to software tooling.

        • vanceism7_ 4 hours ago

          Yeah, I totally agree. During one of my interviews, the interviewers asked me to write "snake game" in React. I had spent the last week studying their open source project and learning how things were structured, and then the two-part interview consisted of parsing JSON and outputting it as markdown, and writing snake game. They weren't a game shop, so it really didn't make any sense that they would ask about that... It was really lame.

    • forrestthewoods 12 hours ago

      > Or are your interview questions not representative of what the job actually entails?

      100% of all job interviews are a proxy. It is not possible to perform an interview in ~4 hours such that someone sufficiently engages in what the job “actually entails”.

      A leetcode interview either is or isn't a meaningful proxy. AI tools either do or don't invalidate that proxy.

      Personally I think leetcode interviews are an imperfect but relatively decent proxy. And that AI tools render that proxy invalid.

      Hopefully someday someone invents a better interview scheme that can reliably and consistently scale to thousands of interviews, hundreds of hires, and dozens of interviewers. That’d be great. Alas it’s a really hard and unsolved problem!

      • alok-g 4 hours ago

        >> It is not possible to perform an interview in ~4 hours such that someone sufficiently engages in what the job “actually entails”.

        >> ... leetcode interview are an imperfect but relatively decent proxy.

        I think all this is just the status quo that should be challenged instead of being justified.

        When I conduct interviews (environment: a FAANG company), I focus on (a) fundamental understanding and (b) practical problems. None of the coding problems I pose are more than O(N) in complexity. Yet, my hiring decisions have never gone wrong.

    • pydry 14 hours ago

      Yeah, it's just a pity that human stupidity perpetuated leetcode as an interviewing tool to the point that AI had to kill it....

      I'm really happy it's finally broken though. Dumbest fad our industry ever had.

      • lubujackson 9 hours ago

        I dunno... estimating how many golf balls fit in a bus or explaining why manhole covers are round makes leetcode look almost... useful by comparison.

  • crooked-v 14 hours ago

    I think this is the first interview cheating tool I've seen that feels morally justified to me. I wonder if it will actually change company behavior at all.

  • 0x20cowboy 14 hours ago

    The faster leetcode interviews can be completely broken to the point they are abandoned the better.

  • blazing234 13 hours ago

    this is a good thing.

    anyone I know who actually got a job through leetcode style in the last 2 years cheated. they would get their friends to mirror monitor and then type the answers in from chatgpt LOL

  • low_tech_punk 7 hours ago

    We are approaching a singularity where we actually want to hire the cheating tool, not the cheater.

  • Der_Einzige 14 hours ago

    Glad this exists and big +1 to the creator.

    "Cheating" on leetcode is a net positive for society. Unironically.

  • dinkumthinkum 6 hours ago

    I strongly disagree. This is nothing. You can sort out if someone is using something like this to cheat. You have a conversation. You can ask conceptual questions about algorithms and time complexity and figure out their level and see how their sophistication matches their solution on the LeetCode problem or whatever. Now, if you have really bad intuition or understanding of human behavior then yeah it would probably be hard but in that case being a good interviewer is probably hopeless anyway.

ktallett 19 hours ago

The key is having interviewers who know what they're talking about, so that in-depth, meandering discussions can be had about personal and work projects; that usually makes it clear whether the applicant knows their stuff. Leetcode was only ever a temporary interview technique, and this 'AI' prominence in the public domain has simply sped up its demise.

  • _puk 16 hours ago

    This, completely.

    You ask a rote question and you'll get a rote answer while the interviewee is busy looking at a fixed point on the screen.

    You then ask a pointed question about something they know or care about, and suddenly their face lights up, they're animated, and they are looking around.

    It's a huge tell.

    • crooked-v 15 hours ago

      You know, this makes me wonder if a viable remote interview technique, at least until real-time deepfaking gets better, would be to have people close their eyes while talking to them. For somebody who knows their stuff it'll have zero impact; for someone relying entirely on GPT, it will completely derail them.

      • ickelbawd 8 hours ago

        That’s an interesting idea. Sadly I think the next AI interviewing tool to be developed in response would make you look like your eyes are closed. But in the interim period it could be an interesting way to interview. Doesn’t really help for technical interviews where they kinda need to have their eyes open, but for pre-screens maybe…

      • steve_taylor 6 hours ago

        A filter could probably do it already. There are already filters to make you appear to be looking at the camera no matter where your eyes are pointing.

  • danielbln 18 hours ago

    This is the way. We do an intro call, an engineering chat (exactly as you describe), a coding challenge and 2 team chat sessions in person. At the end of that, we usually have a good feeling about how sharp the candidate is, if they like to learn and discover new things, and what their work ethic is. It's not bulletproof, but it removes a lot of noise from the signal.

    The coding challenge is supposed to be solved with AI. We can no longer afford not to use LLMs for engineering, as it's that much of a productivity boost when used right, so candidates should show how they use LLMs. They need to be able to explain the code of course, and answer questions about it, but for us it's a negative mark if a candidate proclaims that they don't use LLMs.

    • satvikpendem 18 hours ago

      > The coding challenge is supposed to be solved with AI. We can no longer afford not to use LLMs for engineering, as it's that much of a productivity boost when used right, so candidates should show how they use LLMs. They need to be able to explain the code of course, and answer questions about it, but for us it's a negative mark if a candidate proclaims that they don't use LLMs.

      Do you state this upfront or is it some hidden requirement? Generally I'd expect an interview coding exercise to not be done with AI, but if it's a hidden requirement that the interviewer does not disclose, it is unfair to be penalized for not reading their minds.

      • ktallett 18 hours ago

        I would say as long as it is stated you can complete the coding exercise using any tool available it is fine. I do agree, no task should be a trick.

        I am personally of the view that you should be able to use search engines, AI, anything you want, as the task should be representative of how the work would actually be done on the job. The key focus has to be the programmer's knowledge and why they did what they did.

        • CamperBob2 17 hours ago

          Reminds me of the old joke/story where the Caltech student asks, "Can we use Feynman in this open-book exam?"

        • trelliscoded 17 hours ago

          One client of mine has a couple repositories for non-mission critical things like their fork of an open source project, decommissioned microservices, a SVG generator for their web front-end, etc.

          They also take this approach of "whatever tool works," but their coding test is "here's some symptoms of the SVG generator misbehaving, figure out what happened and fix it," which requires digging into the commit history, issues, actually looking at the SVG output, etc.

          Once you've figured out how the system architecture works, and the most likely component to be causing the problem, you have to convert part of the code to use a newer, undocumented API exposed by an RPC server that speaks a serialization format no LLM has ever seen before. Doing this is actually way faster and more accurate using an AI, if you know how to centaur with it and make sure the output is tested to be correct.

          This is a much more representative test of how someone's going to handle actual work, knocking out issues.

          • johnnyanmac 10 hours ago

            That's interesting and effective. But I do feel like "undocumented API" is an unnecessary trick in an interview setting.

      • danielbln 16 hours ago

        Well, the challenge involves using a python LLM framework to build a simple RAG system for recipes.
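
        For a rough idea of the shape, a toy sketch (not our actual starter code; the recipe data, the word-overlap "retrieval" and the call_llm stub are stand-ins for whatever framework and provider the candidate picks):

            RECIPES = {
                "pancakes": "mix flour, milk and eggs; fry ladlefuls in butter",
                "dal": "simmer red lentils with turmeric; temper with cumin and garlic",
                "pesto": "blend basil, pine nuts, parmesan, garlic and olive oil",
            }

            def retrieve(query: str) -> str:
                # Stand-in for embedding similarity: crude word-overlap scoring.
                words = set(query.lower().split())
                def score(name: str) -> int:
                    return len(words & set((name + " " + RECIPES[name]).split()))
                best = max(RECIPES, key=score)
                return f"{best}: {RECIPES[best]}"

            def call_llm(prompt: str) -> str:
                # Placeholder for the inference call; the real challenge provides
                # an API key for this part.
                return f"(model answer grounded in: {prompt.splitlines()[-1]})"

            def answer(question: str) -> str:
                return call_llm(f"Question: {question}\nContext: {retrieve(question)}")

            print(answer("how do I make dal"))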

        It's not a hidden requirement per se to use LLM assistance, but the candidate should have a good answer ready for why they didn't use an LLM to solve the challenge.

        • what 16 hours ago

          Why is it a negative that the candidate can solve the challenge without using an LLM? I don’t really understand this.

          Also, what is a good answer for not using one? Will you provide access to one during the course of the interview? Or am I just expected to be paying for one?

          • danielbln 15 hours ago

            It's not negative that the candidate can solve it without an LLM, but it is positive if the candidate can use the LLM to speed up the solution. The code challenge is timeboxed.

            We are providing an API key for LLM inference, as implementing the challenge requires this as well.

            And I haven't heard a good answer yet for not using one; ideally the candidate knows how to mitigate the drawbacks of LLMs while benefiting from their utility regardless.

            • what 15 hours ago

              >I haven’t heard a good answer for not using one

              Again, what would be a good answer? Or are you just saying there isn’t one?

              • dalmo3 3 hours ago

                A good answer in this situation would focus on demonstrating that you made a conscious decision based on the problem requirements and the approach that best suited the task. Here’s an example of a thoughtful response:

                "I considered various approaches for solving this problem. Initially, I thought about using an LLM, as it's great for natural language processing and generating text-based solutions. However, for this particular challenge, I felt that a more algorithmic or structured approach was more appropriate, given the problem's nature (e.g., the need for performance optimization, a specific coding pattern, or better control over the output). While LLMs are powerful tools, they may not always provide the precision and control required for highly specific, performance-critical tasks, so I chose to solve the problem through a more traditional method. That said, if the problem had been more open-ended or involved unstructured data like text generation, I would definitely consider leveraging an LLM."

                This answer reflects the candidate's ability to critically assess the problem and use the right tools for the job, showing maturity and sound judgment.

                - GP, probably

        • johnnyanmac 10 hours ago

          >but the candidate should have a good answer ready why they didn't use an LLM to solve the challenge.

          "LLM-esque AI, especially in my industry, is under heavy scrutiny and I want to wait for the dust to settle before exploring options with such tools."

          I was never asked as such, but I do have an answer to that.

        • mvdtnz 16 hours ago

          Ah, so you expect mind readers who can divine something from your brain that goes against 99.99% of interviewers' practices and would get them instantly disqualified from an overwhelming majority of interviews. Nice work, good luck finding candidates.

    • crooked-v 15 hours ago

      > as it's that much of a productivity boost when used right

      Frankly, if an interviewer told me this, I would genuinely wonder why what they're building is such a simple toy product that an LLM can understand it well enough to be productive.

dijit 15 hours ago

I've always just tried to hold a conversation with the candidate: what they think their strengths and weaknesses are, plus a little probing.

This works especially well if I don't know the area they're strongest in, because then they get to explain it to me. If I don't understand it then it's a pretty clear signal that they either don't understand it well enough or are a poor communicator. Both are dealbreakers.

Otherwise, for me, the most important thing is gauging: Aptitude, Motivation and Trustworthiness. If you have these three attributes then I could not possibly give a shit that you don't know how kubernetes operators work, or if you can't invert a binary tree.

You'll learn when you need it; it's not like the knowledge is somehow esoteric or hidden.

  • punk_coder 11 hours ago

    This is how I interview potential hires. I'll admit I haven't interviewed someone below a senior level in probably 10 years, so I interview someone who has a resume with experience that I can draw from. I read what they've worked on and just go from there. I hope I never have to subject someone to some stupid take home test or Leet Code interview.

_heimdall an hour ago

I went through a round of interviews the second half of last year. Interviewing felt the same as it had over the last 5 or 10 years honestly.

I had a few coding challenges, all were pre-interview and submitted online or shared in a private repo. One company had an online quiz that was actually really interesting to take; the questions were all multiple choice but done really well to tease out someone's experience in a few key areas.

For what it's worth, I don't use LLMs, and the interview loop went about as I'd expect in a tough job market.

  • MathMonkeyMan 18 minutes ago

    I've had the same experience lately, though I think I might be getting lucky with a few of these interviews. The leetcode questions, in particular, have been softball. I do appreciate that...

vrosas 19 hours ago

As someone currently job searching it hasn’t changed much, besides companies adding DO NOT USE AI warnings before every section. Even Anthropic forces you to write a little “why do you want to work here DO NOT USE AI” paragraph. The irony.

  • Pooge 18 hours ago

    They will very happily use AI to evaluate your profile, though :)

    • pizzalife 17 hours ago

      Applying at Anthropic was a bad experience for me. I was invited to do a timed set of leetcode exercises on some website. I didn't feel like doing that, and focused on my other applications.

      Then they emailed me a month later after my "invitation" expired. It looked like it was written by a human: "Hey, we're really interested in your profile, here's a new invite link, please complete this automated pre-screen thingie".

      So I swallowed my pride and went through with that humiliating exercise. Ended up spending two hours doing algorithmic leetcode problems. This was for a product security position. Maybe we could have talked about vulnerabilities that I have found instead.

      I was too slow to solve them and received some canned response.

    • x0x0 16 hours ago

      fyi, that's because (from experience) the last job req I publicly posted generated almost 450 responses, and (quite generously) over a third were simply not relevant. It was for a full-stack rails eng. Here, I'm not even including people whose experience was django or even React; I mean people with no web experience at all, or who were not in the time zone requested. Another 20% or so were nowhere near the experience level (senior) requested either.

      The price of people bulk applying with no thought is I have to bulk filter.

      • Pooge 16 hours ago

        So you allow yourself to use AI in order to save time, but we have to put up with the shit[1] companies make up? That's good, it's for the best if I don't work for a company that thinks so lowly of its potential candidates.

        [1]: Including but not limited to: having to manually fill a web form because the system couldn't correctly parse a CV; take-home coding challenges; studying for LeetCode interviews; sending a perfectly worded, boot-licking cover letter.

thih9 16 minutes ago

Not everyone is using AI; and speaking as one such developer, interviewing is not fun.

MacsHeadroom 19 hours ago

Changed enormously. Both resumes and interviews are effectively useless now. If our AI agents can't find a portfolio of original work that nearly exactly matches what we want to hire you for, then you aren't ever going to hear from us. If you are one of the 1 in 4000 applicants who gets an interview, then you're already 70% likely to get an offer and the interview is mostly a formality.

  • Gigachad 16 hours ago

    What worked for me is just ignoring the job listing websites, and calling recruiters directly on the phone. Don’t bother hitting “easy apply” just scroll to the bottom and call the number.

    I've also been asked, for the first time in ages, to come to the company's office to do interviews.

    • andrewflnr 14 hours ago

      What do you tell them on the phone? Are they prepared for just "Hi I want to apply for the $job position"? And do they have an answer besides "cool, use the website"?

      • Gigachad 10 hours ago

        They put their phone number there because they want you to call it. I say "I saw this position <position name> advertised on LinkedIn and I'm interested, is this still available?"

        Last time I did this they told me it is, but that they are at the late stages of interviewing, so I shouldn't bother applying for that one; but they got down my details and had other jobs that matched what I was looking for. Recruiters are salespeople and you just reverse cold-called them, making their job easier. The majority of applications are AI bots and people who don't live in the country the job is listed in. By making a phone call you are at the top of the list of "most likely to be a legitimate applicant".

        • johnnyanmac 10 hours ago

          And when was this? I can't remember the last time anyone had their phone number publicly displayed on LinkedIn. And now messaging recruiters is a paid feature. The market's only making it more difficult to reach a human.

          • Gigachad 8 hours ago

            This was last week. Perhaps the Australian market is different, but I often but not always see the option to physically call.

            • johnnyanmac 7 hours ago

              Yeah, it may be a cultural difference. The US has a huge fear of doxxing in the modern world. It can be traced back decades, to when a crazed fan murdered a celebrity in their home. Easily accessible firearms definitely don't help.

              This even applies to businesses in some cases. You trying to walk in and talk to someone is a security threat, compared to the times when you could do that and walk out with a job offer. US companies absolutely hate unsolicited calls from non-businesses.

  • tmpz22 18 hours ago

    If the interview is mostly a formality is it still multiple hours of leet code?

  • johnnyanmac 10 hours ago

    >If our AI agents can't find a portfolio of original work nearly exactly what we want to hire you for

    that'd be a huge issue for most candidates (and basically all top candidates) because "exactly what you want to hire you for" is probably not open source code to begin with.

    >If you are one of the 1 in 4000 applications who gets an interview then you're already 70% likely to get an offer and the interview is mostly a formality.

    That has not been my experience at all in 2023/2024.

  • fifilura 18 hours ago

    Does that mean you will not hire anyone without a public portfolio?

    • Pooge 18 hours ago

      I thought that meant what you typically write in the "Experience" section. GP, am I wrong?

      Is everyone writing a "Projects" section by rewording what they wrote in "Experience"?! For me, "Projects" should strictly be personal projects. If not, maybe that's what I'm missing.

      • sshine 17 hours ago

        Projects are personal projects, or at least projects in which you made a distinguishable effort.

        They don't have to be public to the whole world, you can have links that are only in your resume.

        But if they're on GitHub, they have to be public, since there aren't unlisted repositories.

        • mdaniel 13 hours ago

          I actually believe it would be possible to provide a read-only clone URL as a resume link, but I don't know of a way to make a link to a browsable version (short of having a proxy-server-type setup or, of course, a slim server protected by HTTP basic auth).

  • crooked-v 15 hours ago

    > a portfolio of original work

    I'm too busy doing actual paid work for companies for that.

    • lnsru 6 hours ago

      That's the reality for most people. Creating many things under NDA, with tools watching for IP theft, so not a single line of code can leave the company. I know a guy who has a portfolio, but he's a freelance web designer.

meter 16 hours ago

For the time being, I’ve banned LLMs in my interviews.

I want to see how the candidate reasons about code. So I try to ask practical questions and treat them like pairing sessions.

- Given a broken piece of code, can you find the bug and get it working?

- Implement a basic password generator, similar to 1Password's (with optional characters and symbols); see the sketch below
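
For the generator, a minimal sketch of the scale I have in mind, using Python's secrets module (the option flags and symbol set are just one way to frame it):

    import secrets
    import string

    def generate_password(length=20, digits=True, symbols=True):
        # Build the allowed alphabet from the selected options.
        alphabet = string.ascii_letters
        if digits:
            alphabet += string.digits
        if symbols:
            alphabet += "!@#$%^&*-_"
        # secrets rather than random: cryptographically secure choices.
        return "".join(secrets.choice(alphabet) for _ in range(length))

    print(generate_password())

What I mostly want to hear is the reasoning: why secrets rather than random, and how you'd guarantee at least one character from each selected class.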

If you can reason about code without an LLM, then you’ll do even better with an LLM. At least, that’s my theory.

I never ask trick questions. I never pull from Leetcode. I hardly care about time complexity. Just show me you can reason about code. And if you make some mistakes, I won’t judge you.

I’m trying to be as fair as possible.

I do understand that LLMs are part of our lives now. So I’m trying to explore ways to integrate them into the interview. But I need more time to ponder.

  • meter 15 hours ago

    Thinking out loud, here’s one idea for an LLM-assisted interview:

    - Spin up a Digital Ocean droplet

    - Add the candidate’s SSH key

    - Have them implement a basic API. It must be publicly accessible.

    - Connect the API to a database. Add more features.

    - Set up a basic deployment pipeline. Could be as simple as a script that copies the code from your local machine to the server (see the sketch below).
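
    For the pipeline step, a sketch of the sort of script I mean (assuming rsync over SSH; the droplet address and service name are placeholders):

        #!/usr/bin/env python3
        # Minimal "pipeline": rsync the project to the droplet, restart the service.
        import subprocess

        HOST = "root@203.0.113.10"  # hypothetical droplet address

        subprocess.run(["rsync", "-az", "--delete", "./", f"{HOST}:/srv/api/"], check=True)
        subprocess.run(["ssh", HOST, "systemctl restart api.service"], check=True)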

    Anything would be fair game. The goal would be to see how the candidate converses with the LLM, how they handle unexpected changes, and how they make decisions.

    Just a thought.

  • blazing234 13 hours ago

    for the first point you better provide a jira ticket with steps to get there (;

    i would just look at stack overflow for your second point lol...

acwan93 17 hours ago

I don’t know the answer, but I’d like to share that I asked a simple question about scheduling a phone interview to learn more about a candidate.

The candidate’s first response? “Memory updated”. That led to some laughs internally and then a clear rejection email.

  • buggy6257 17 hours ago

    My first read of this was that they made a joke (not wise when scheduling interviews, sure, but maybe funny) by intentionally responding that way.

    That's because my brain couldn't fathom what is likely the reality here -- that someone was just pumping your email through AI and pumping the response back unedited and unsanitized, so the first thing you got back was just the first "part" of the AI response.

    ...Christ.

    • rantallion 16 hours ago

      I'm with you. Looking at the way people respond online to things now since LLMs and GenAI went mainstream is baffling. So many comments along the lines of "this is AI" when there are more ordinary explanations.

    • johnnyanmac 10 hours ago

      I don't even understand why you want an AI responding to emails from an interviewer. I have 2-3 template answers I can access with a keystroke.

      But yes, I read it the same way. Pretty funny way to respond to a recruiter after they say "no AI please".

    • john-radio 16 hours ago

      Yeah I don't know about this specific situation, but as someone who is on the job market, is a good developer, but can come off as a little odd sometimes, I often wonder how often I roll a natural 1 on my Cha check and get perceived as an AI imposter.

      • acwan93 16 hours ago

        If anything, coming across as “a little odd” can be a sign I’m actually talking to a human.

        • crooked-v 15 hours ago

          That's a good point. The major LLMs are all tilted so much towards a weird blend of corpo-speak with third-world underpaid English speaker influence (e.g. "delve", from common Nigerian usage) that having any quirks at all outside that is a good sign.

    • acwan93 16 hours ago

      Your perception of the reality is spot on. For this round I was hiring for entry level technical support and we had limited time to properly vet candidates.

      Unfortunately, what we end up doing is having to make some assumptions. If something seems remotely fishy, like that "Memory updated" or a typeface change (ChatGPT doesn't follow your text formatting when pasting into your email compose window), it raises a lot of eyebrows and very quickly leads to a rejection. There are other cases where the written English is flawless, but the phone interview indicates the candidate doesn't understand English the way their email/Indeed/etc. correspondence suggested.

      Mind you, this is all before we even get to the technical knowledge part of any interview.

      On a related hire, I am also in the unfortunate position where we may have to let a new CS grad go, because it seemed like every code change and task we gave him was fully copy/pasted through ChatGPT. When presented with a simple code performance and optimization bug, he was completely lost on general debugging practices, which led our team to question his previous work while onboarding. Using AI isn't against company policy (see: small team with limited resources), but personally I see over-reliance on ChatGPT as much, much worse than blindly following Stack Overflow.

      • gray_-_wolf 16 hours ago

        > typeface change

        Long live plain text email.

            ()  ascii ribbon campaign - against HTML e-mail
            /\  www.asciiribbon.org   - against proprietary attachments

    • anal_reactor 15 hours ago

      A friend of mine works with industrial machines, and was once tasked with translating a machine's user manual, even though he doesn't speak English. I do, and I had some free time, so I helped him. As an example, I was given the user manual for a different but similar machine.

      1. The manual was mostly a bunch of phrases that were grammatically correct, but didn't really convey much meaning

      2. The second half of the manual talked about a different machine than the first half

      3. It was full of exceptionally bad mistranslations, and to this day "trained signaturee of the employee" is our inside joke

      Imagine asking ChatGPT to write a manual, except ChatGPT has Down syndrome and a heart attack, so it gives you five pages of complete bullshit. That was the real manual that shipped with a machine costing 100,000€ or so. And nobody bothered to proofread it even once.

      • quercusa 14 hours ago

        I once worked in the US for a Japanese company that had their manuals "translated" into English and then sent on for polishing. As the parent said, it would be mostly "a bunch of phrases that were grammatically correct, but didn't really convey much meaning". I couldn't spend more than an hour a day on that kind of thing; more than that and it would start to make sense.

themanmaran 16 hours ago

On our side we've transitioned to only in person interviews.

The biggest thing I've noticed is that take-home challenges have lost all value, since GPT can plausibly solve almost anything you throw at it, and the result doesn't give you any indication of how the candidate thinks.

And to be fair, I want a candidate that uses GPT / Cursor / whatever tools get the job done. But reading the same AI solution to a coding challenge doesn't tell me anything about how they think or approach problems.

  • chrisfowles 13 hours ago

    From my perspective, you might be using take-home challenges incorrectly. The purpose for us is to have something that both sides are now somewhat familiar with, on which to base a technical conversation and ask questions. The actual solution delivered is a small part of the overall value.

  • ghaff 15 hours ago

    I'm not a fan of take-home challenges anyway (for the most part). Anything non-trivial is a big time suck, and you know some people will spend all weekend on your two-hour assignment.

    Sometimes you have to. In my previous analyst stint a writing sample was pretty non-negotiable unless they could point to publicly-published material--which was much preferred. ChatGPT isn't much use there except to save some time. It's very formulaic and wouldn't pass; though, honestly, some people are worse on their own.

iExploder 21 hours ago

The name of the game now is to avoid getting fired at all costs and weather the storm until the dust settles...

  • jppope 17 hours ago

    hiring market tightened up... that doesn't mean there isn't one

    • iExploder 17 hours ago

      > hiring market tightened up... that doesn't mean there isn't one

      A tightened market is one thing; the absolute insanity of the recruitment process in the last couple of years, with AI now thrown into the mix, is really something to behold. Test these waters at your own peril.

      • blazing234 13 hours ago

        yeah the name of the game now is just to avoid any company that has shitty recruitment. you can tell in an instant if they are worth your time or not, which I'd say in Canada is about 90% a waste of time.

        someone who actually wants to hire will want you and wants to do whatever they can to get a good candidate.

        • jppope 10 hours ago

          Realistically it's just the blind leading the blind. People have forgotten that an interview process was designed to avoid false positives, and that the companies who were most selective were providing top 1% comp and had brands that could carry that weight. If you are google and you are handing out $500K in RSUs on top of $300K+ in salary, you had better damn pick the right candidate...

          For some random SMB in shipping or something to be bashing people over the head with 10-step-leetcode-full-panel-10-hour-systems-design interviews, they just don't get it. For starters, they probably don't even have the talent to properly evaluate the prospect. So who are they helping?

    • johnnyanmac 9 hours ago

      nope, but the process is worse

      - Far fewer responses.

      - For your sanity, you want to make sure there aren't obvious signs of something being a ghost job, an H1B hire, an internal hire, or a farce because "we're always growing" (lying).

      - Expect longer processes. It hasn't happened to me, but 7+ stage interview processes are not uncommon these days, even outside of tech.

      - Accept that some processes will be frozen under your nose, especially because longer processes cross quarters.

      - Expect less respect in the process. They act like they do not want you. You are expendable.

      - Don't bother negotiating in this market. You get a number and take it unless you already have a job. Even then they may simply pass you over for someone more desperate. BTW, wages are being suppressed; you're probably not getting pre-2023 salaries right now.

      Yeah... if you're not being abused at work, I'd just weather it out.

  • johnnyanmac 9 hours ago

    I sadly failed long ago. The company itself did not weather the storm.

mattbillenstein 11 hours ago

I've only done a few interviews the past couple years, but I've asked people to turn off coding assistants and not use an LLM on my coding screen. I want to know how _they_ think and solve problems, not how the LLM does.

And generally, the more junior people are just completely lost without it. They've become so dependent on it, they can't even google anything anymore. Their search queries are very weirdly conversational questions and the idea of reading the docs for whatever language or library they're using is totally foreign to them.

I think it's really hampering the growth of junior devs - their reasoning and thought processes are just totally tuned to this conversational form of copy and paste programming, and the code is really bad. I think the bottom half of programmers may just LLM themselves out of any sort of job because they lose the ability to think and problem solve... Kinda sad imo.

  • politician 10 hours ago

    It was obvious from the moment ChatGPT went live that this was going to intellectually stunt junior developers.

n0rdy 14 hours ago

Here where I live, home assignments with the follow-up tech discussion are way more common than the leetcode-like interviews. Therefore, I can't say that the process has already changed dramatically. Yes, I did review a couple of home assignments that had all the signs of being completely AI-generated. But it all became clear during the presentation of the solution, "why"-like questions, and extending the scope of the task to discuss the improvements. If the candidate could answer that, then AI was just a supplementary tool, which is great, as we also use Copilot, ChatGPT, and friends at work. If not, well, it's an obvious rejection.

It has happened twice that the candidate on the other side was clearly typing during the interview, pausing for a second or two, and then reading from the screen. That's very obvious as of today, but I can see it becoming a problem one day as AI develops in terms of response speed and better voice recognition (so no typing needed).

low_tech_punk 7 hours ago

I'm actually glad AI use is revealing misalignment:

If the AI is so good at it, why are we still hiring a human to do the job? It just shows that the interview process wasn't measuring the right thing to start with.

madduci 7 hours ago

Do people still insist on coding tasks? Why not simply formulate questions that demand broader and deeper knowledge, which helps show how far the candidate can go?

And by questions, I don't mean "is a list or a set better?", but something like: "you have an application like this, how can you improve it to perform X?"

  • mrweasel an hour ago

    People still cheat, or try to. Some of our interviewers have seen candidates start the interview completely unable to answer the most basic questions about a programming language, only to flip around a few moments later and give almost encyclopaedic answers, with release dates for specific features and comments on implementation details. One minute you can't do array slicing, and now you're able to talk about the inner workings of the garbage collector?

    We're trying to be fair in hiring: every resume is manually read, interviews are conversations, there's no leetcode, no tricks, no AI screening. But they are done remotely, and so far two out of three candidates have clearly cheated, during a live interview, to cover up an impressive lack of general knowledge.

  • edanm 7 hours ago

    Because most people who've been involved in tech interviewing have arrived at the same conclusion - these kinds of questions are too easy for candidates to answer in ways that make them appear great, while they can't actually program.

delduca 18 hours ago

I recently reviewed a medium-complexity assignment—just questions, no coding—and out of six candidates, I only approved one. The others were disqualified because their answers were filled with easily identifiable ChatGPT-generated fluff.

And I had made it clear that they should use their own words.

rachofsunshine 15 hours ago

We haven't seen major issues with AI with candidates on camera. The couple that have tried to cheat have done so rather obviously, and the problem we use is more about problem-solving than it is about reverse-a-linked-list.

This is borne out by results downstream with clients. No client to whom we've sent more than a couple of people has ever had concerns about quality, so we're fairly confident that we are in fact detecting the cheating that is happening with reasonable consistency.

I actually just looked at our data a few days ago to see how candidates who listed LLMs or related terms on their resume did on our interview. On average, they did much worse (about half the pass rate, and double the hard-fail rate). I suspect this is a general "corporate BS factor" and not anything about LLMs specifically, but it's certainly relevant.

_sword 10 hours ago

Even before LLMs were popularized, the shift to remote work made hiring awful in my experience. In finance roles, I had candidates who aced their tests and projects but then showed up to the job unable to competently use Excel or write coherent sentences in English. Phone/Zoom interviews all went fine, but clearly there was rampant cheating during remote projects.

screaminghawk 17 hours ago

I don't understand why an interviewer would ban the use of AI if they are allowed to use AI in the role.

The interview is a chance to see how a candidate performs in a work like environment. Let them use the tools they will use on the job and see how well they can perform.

Even for verbal interviews, if they are using ChatGPT on the side and can manage the conversation satisfactorily then more power to them.

  • mrweasel an hour ago

    What if their lack of knowledge runs so deep that you question whether they're even able to prompt the AI without step-by-step instructions?

    There's nothing wrong with a candidate going "Normally I'd prompt ChatGPT and get a skeleton project going" or saying "Look, I don't run around with the entire standard library in my head. I look that stuff up, and sometimes that's with an LLM". The problem is when they can't go through the steps of solving a problem without the AI. I don't care about the details, and if you ask Copilot to do the API query code because you don't want to write the error handling, that's actually fairly reasonable; but if you can't prompt it to add the logic for an HTTP 403, then what's the point? In that case I'd rather hire someone who takes longer, but who knows that the 403 should probably redirect an unauthenticated user to the login page.
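
    To make that concrete, the 403 case I mean fits in a few lines (Flask here is just an example framework, not a requirement):

        from flask import Flask, redirect, url_for

        app = Flask(__name__)

        @app.route("/login")
        def login():
            return "please log in"

        @app.errorhandler(403)
        def forbidden(error):
            # Don't show an unauthenticated user a bare 403;
            # send them to the login page instead.
            return redirect(url_for("login"))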

sramam 11 hours ago

I recently completed a take-home assignment with the following instructions:

<instructions>

This project is designed to evaluate your ability to:

  - Deconstruct complex problems into actionable steps.
  - Quickly explore and adopt new frameworks.
  - Implement a small but impactful proof of concept (PoC).
  - Demonstrate coding craftsmanship through clean, well-architected code.
We estimate this project will take approximately 5–7 hours. If you find that it requires more time, let us know so we can adjust the scope.

Feel free to use any tools, libraries, frameworks, or LLMs during this exercise. Also, you’re welcome to reach out to us at any time with questions or for clarification.

</instructions>

I used LLM-as-a-junior-dev to generate 95+% of the code and documentation. I'm just an average programmer, but I tried to set a bar such that, if I were on the other side of the table, I'd hire anyone who demonstrated the quality of the output submitted.

  - The 5-7 hour estimate was exceeded (however, I was the first one through this exercise).
  - IMHO the quality of the submission could NOT have been met in less time.
  - They had 3 tasks/projects:
     - a data science project,
     - a CLI-based project and
     - a web app
  - They wanted each to be done in a different language.
  - I submitted my solution within 38 hours of receipt of the assignment.
  - In any other world, the intensity of this exercise would cause a panic attack/burnout.
  - I slept well (2 nights of sleep), took care of family responsibilities and felt good enough to attack the next work day.
I've been on both sides of the table in many interviews.

This was by far the most fun, and an approach I'd replicate every chance I get.

[EDITS]: Formatting and typos.

  • johnnyanmac 9 hours ago

    5-7 hour interview take-homes are already a nightmare. LLM assistance or not, I would absolutely not bother with such an assignment unless I was far into the process. Meanwhile, I'm given such tasks half the time before I speak to any human.

    • sramam 7 hours ago

      That is a fair point.

      This was the final technical screen so definitely something worth doing in my case.

      The reason I posted a reply was there is a lot of negativity around AI in the hiring process. This was an excellent example of using AI to the benefit of all parties.

      Instead of nit-picking stylistic things in a smaller code sample, one can nit-pick the implemented complexity. I think that is a higher-quality signal.

VPenkov 6 hours ago

My employer sends a take-home test. It is relatively easy and not very time-consuming. Its main purpose is to act as a basic filter and to provide some material to base an interview on.

In the last couple of years I have seen a lot more people ace the test and then not do very well during the actual interview. Take-home exams now feel like they'll always be ineffective.

yodsanklai 4 hours ago

You can always interview in person. This has often been the norm, after some initial screening. I think it's the best option.

Even remotely, normally a coding interview isn't a candidate typing things for 45 min on a screen. There are interactions, follow-up questions, discussions about trade-offs and so on... I suppose it's possible for a good candidate to cheat and get some extra points, but the interview isn't broken yet.

You could also let the candidate use AI, and still gather all the relevant signals.

bilekas 6 hours ago

I've run a few interviews with small take-home screening projects that I could identify very quickly as majority-generated. I usually don't mind it too much, but when I asked why they chose certain patterns or why they went with their approach, they couldn't give any "true" answer a week later.

I feel that smaller things like syntax etc. make perfect sense. But for larger things that involve slightly higher complexity, it becomes a bit grey. I liken it personally to writing: when I write things down as I'm trying to work things out, or even trying to learn something, I find I retain the information so much better and have a better picture in my mind of what's going on. That might just be personal preference for learning, but if I copy straight from Claude, I know 100% I'm not going to remember anything about it the next day.

nimish 16 hours ago

If your interview process is susceptible to AI then you don't need to hire for the job. Just use an AI and prompt it.

The job you are therefore hiring for is now trivial. If it weren't, no amount of AI could pass your interview process.

  • probably_wrong 15 hours ago

    I believe this line of thinking mistakes the result for the process, similar to assuming that the reason companies ask people to reverse a linked list is that there's an unmet market demand for list-reversal algorithms.

    An interview has to be hard enough to filter those that are unqualified but also easy enough that the right person can pass with some minor preparation. If an interviewer asked me for the equivalent of production-ready code to add support for custom hardware in the Linux kernel I'd either reply with my freelance hourly rate or I'd end the interview.

    • johnnyanmac 9 hours ago

      >An interview has to be hard enough to filter those that are unqualified but also easy enough that the right person can pass with some minor preparation.

      Yeah, ideally. Meanwhile the market evolved to asking Leetcode hards on the spot.

      As usual, companies take a good concept and milk it to the point where you need to cheat or make studying leetcode your full-time job to ace it. I'm not too sympathetic.

      (PS: yes, I've been asked to do what was essentially spec work a few times. Would have been smarter to reply with an invoice, but I was too tired and just said no.)

  • blazing234 13 hours ago

    lol what

    this is implying the interview process actually showcases that someone would be good at a job, when it doesn't

    when has leetcode ever translated to web dev... fuckin never lol.

    • nimish 10 hours ago

      My point exactly.

      Why do you let your interview process for web devs use leetcode?

Xmd5a 16 hours ago

Shouldn't a portfolio of personal projects be enough? In the past couple of years I:

- adapted Java's Regex engine to work on streams of characters

- wrote a basic parametric geometry engine

- wrote a debugger for an async framework

- did innovative work with respect to whole-codebase transformation using macros

Among other things.

As for ChatGPT in the context of an interview, I'd only use it if I were asked to do changes on a codebase I don't know in limited time.

  • rockyj 3 hours ago

    This is what I do not get. I just do not understand the technical interview process these days.

    I have 20 years of experience in software development, I have hundreds of LinkedIn contacts you can check with, a dozen recommendations on LinkedIn and a dozen projects on Github, not to mention a blog and, let's say, other indicators (e.g. Stackoverflow creds).

    Now what exactly are people checking? My picture is on LinkedIn and Github. Clearly I can code and have done dozens of projects. What is the point of asking me "Do you know Kafka?", "Have you used AWS S3 and how?", "How would you build / scale a Node.js project" - these are the real questions I was asked. Had you cared to look at my Github or blog, you would have seen I have done this multiple times. What are we verifying now?

    I tried my best but at one point stopped caring about interviews.

  • ganoushoreilly 16 hours ago

    I think the argument is that there is no way to validate that it was you who did the work. There have been too many instances of groups that do interviews for others, or do the take-home work for them, to help get people placed. There was a big deal about some H1Bs a while back where the people who showed up didn't look anything like the people who interviewed. So I understand both sides.

    It's frustrating though when you've done a lot of work, as you've listed. I think in a good interview, going over that code and getting the chance to explain what you did, why you did it, or issues you had, could also go a long way.

    Interviewing is tough, more so at scale.

  • gray_-_wolf 16 hours ago

    A bit annoying is that when companies ask for a portfolio, they often mean GitHub. A lot of non-technical hiring people I discussed this with were confused by the fact that there are other ways to contribute, like mailing lists.

  • blazing234 13 hours ago

    most experienced developers don't have a portfolio because they code for a living lol

    • Madmallard 10 hours ago

      I don't really think this is true? It might be true for developers who only work for companies and never code in their own time, maybe.

      My portfolio site is just one of the sites on my personal website, which also hosts many of my projects. It wasn't much work to set up, and it provides organization and sharing capability, so there's motivation to make it anyway.

      • physicsguy 4 hours ago

        > This might be true for developers that only work for companies and never in their own time maybe.

        So most people with young kids then

      • johnnyanmac 9 hours ago

        >This might be true for developers that only work for companies and never in their own time maybe.

        I'd argue that that is indeed the majority. Maybe they have some work from their school days, but very few devs are making portfolio pieces years into the industry once they can flesh out their "Employment" section.

        IME you only need a portfolio as a non-junior if you are making a lateral move.

fergie 6 hours ago

In my experience the real objective of the take home exercise is to gauge how compliant the candidate is, and how many obligations they have outside of work. Otherwise it always makes more sense to conduct simple, in-person assessments.

skeeter2020 9 hours ago

>> now when problems are trivially solvable by GPT.

Only the trivial problems. We don't use AI during interviews, but many try, and it's always obvious: a delay after any question or problem; a textbook-perfect initial answer; absolutely nothing when asked to go deeper on a specific dimension.

It's nice because interviews that are scheduled for an hour are only lasting ~20 minutes in these situations, and we can cut them short.

shafkathullah 2 hours ago

It makes sense: interview questions often lack novelty, they're usually very repetitive, and AI is very good at these. I think asking candidates to switch off AI and do live coding or think out loud is the way forward. Or else just give them a sample story point and see how well they respond; if they can do it well, it means they can work well in the company.

elzbardico 17 hours ago

I have a colleague who uses AI to comment on RFCs. It is so clearly machine-generated that I wonder if I am the only one to see it. He is a good colleague, but as he is a bit junior, it is still not clear to me whether AI is helping him improve faster or hindering his deep learning of stuff.

trustinmenowpls 17 hours ago

I've been on both sides recently, and it hasn't really changed significantly. If you're hemming and hawing, you're not getting the job.

A4ET8a8uTh0_v2 16 hours ago

A buddy of mine recently got a position with the help of a custom-built model that listened in on the call and printed answers on another screen. The arms race is here, and frankly, given that a lot of people are already using it at work, there is no way to stop it short of minute-by-minute supervision, and even the biggest micromanagers won't be able to deal with that.

Honestly, if I could trust that companies won't try to evaluate my conversation through 20 different ridiculous filters, I would probably argue that my buddy is out of line. As it stands, however, he is merely leveling the playing field. But, just like with WFH, the management class does not like that imposition one bit.

sweca 15 hours ago

My company actually encourages the use of AI. My interview process was one relatively complex take home, an explanation of my solutions and thinking, then a live "onsite" (via zoom) where I had to code in front of a senior engineer while thinking aloud.

If I was incompetent, I could've shoved the problem into o1 on ChatGPT and probably solved the problems, but I wouldn't have been able to provide insight into why I made the design choices I made and how I optimized my solutions, and that would've ultimately gotten me thrown out of the candidate pool.

  • Aachen 8 hours ago

    What does onsite mean if it was a video call?

RomanPushkin 7 hours ago

One must not forget that cheaters are now everywhere, and it's likely you're going to be interviewed by one. I've seen this multiple times already, most recently in a Meta interview ~8 months ago. A very low-quality interviewer: he complained about a bug in the code when there was no bug, and couldn't keep the conversation going. The same for system design; the poor guy didn't even want to listen and was pretty much rude.

I wouldn't say this if it weren't a pattern. So let's not pretend they're not cheaters. Call them out.

qq66 5 hours ago

Why do you have to evaluate the person without AI? Unless they won't be able to use AI on the job (for security reasons or whatever) it seems like it makes more sense to have them pull up their favorite AI and use it to solve a problem. Give them some buggy code and ask them to fix it with any tools they want.

CoolCold 3 hours ago

Not a problem yet. We still start by asking a hard question first, on the differences between TCP and UDP and which one is better and why (a very deep rabbit hole, I must say). Then simpler questions: how do you manage k8s clusters, and what do you suggest for multi-DC setups?
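
(For the curious, the entrance to that rabbit hole fits in a few lines of Python: UDP is connectionless datagrams, TCP is a handshake plus a reliable byte stream.)

    import socket

    # UDP is connectionless: sendto() fires off a datagram immediately,
    # listener or not. No handshake, no delivery guarantee, no ordering.
    udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    udp.sendto(b"ping", ("127.0.0.1", 9999))

    # TCP is connection-oriented: connect() runs a handshake first and fails
    # outright if nothing is listening; once connected, you get a reliable,
    # ordered byte stream.
    tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        tcp.connect(("127.0.0.1", 9999))
        tcp.sendall(b"ping")
    except ConnectionRefusedError:
        print("no TCP listener: the handshake failed and nothing was sent")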

j-scott 17 hours ago

In my most recent cycle, I didn’t ask to use AI and I was only warned once about using AI when I had the official language plugin for an IDE annotate some struct fields with json tags. I explained the plugin functionality and we moved on.

When I was part of interviews on the other side for my former employer, I encountered multiple candidates who appeared to be using AI assistance without notifying the interviewers ahead of time or at all.

joshstrange an hour ago

It doesn't affect our process; we even let people use AI/LLMs on the coding test.

We just ask them to add a small feature or make a small change after they present the initial code (without AI help). That makes it really easy to see who doesn't understand the code they are presenting.

I don’t care how much AI you use if you understand the code that it writes and can build on top of it.

bagels 16 hours ago

People are sending us emails that are not just written with chatgpt, but I think they've automated the process as well, as parts of the prompt slip in.

You can see things in the emails like:

"I provided a concise, polite response from a candidate to a job rejection, expressing gratitude, a desire for feedback, and interest in future opportunities."

  • buggy6257 14 hours ago

    Ignoring the hilarity, who in their right mind is replying to job rejection emails? What the hellll…

    • mdaniel 13 hours ago

      I for sure do, because I'm thrilled out of my mind they were polite enough to not just ghost me. I know better than to ask for feedback, or to expect any subsequent exchange, though

      • duskwuff 11 hours ago

        If they sent a personalized rejection email, maybe. Surely not to a boilerplate email from the ATS, though?

jppope 17 hours ago

I've been very curious about this and about how we should modify our hiring. It's obvious that an individual should be able to use AI companions to build better, faster, higher-quality things... But the skillsets are sooo uneven now that it's not a fair comparison between those with and without.

I think it ultimately comes back to impact (like always) which has remained largely unchanged.

bdcravens 17 hours ago

I haven't done any hiring in a while, but my feelings on the matter:

If they can talk through the technology and code fluently, I honestly don't care how they do the work. I feel like the ability to communicate is a far more important skill than the precise technology.

This of course presumes you have a clue about the technology you're hiring for.

topkai22 7 hours ago

I'm trying something new in my next interview: I've had an LLM solve a coding problem and uploaded the output to github. I'm going to ask the candidate to evaluate the generated code and adapt it to new technical and non-functional requirements.

khazhoux 15 hours ago

With AI making traditional coding problems trivial, tech interviews are shifting toward practical, real-world challenges, system design, and debugging exercises rather than pure algorithm puzzles. Some companies are revisiting in-person whiteboarding to assess thought processes, while others embrace AI, evaluating how candidates integrate it into their workflow. There's also a greater focus on explaining decisions, trade-offs, and collaboration. Instead of banning AI, many employers now test how effectively candidates use it while ensuring they have foundational skills. The trend favors assessing problem-solving in real work scenarios rather than just coding ability under artificial constraints.

ionwake an hour ago

PREFACE: LONG RANT

I don't know, but I'll always remember the funniest thing I noticed during my career in England...

A company called tripadvisor, based in a very, very small town where I was at the time a senior dev working on my own things, had never reached out. Yet I saw their ads and finally an actual article in a newspaper, where they were basically bragging about their latest hire. Let's call him Pablo. He had apparently aced every single technical interview question, so they had hired him after interviewing tens of people. They were so happy with their hire that the article had been based on him and the "curiosity" that they were the first company to have hired him after he had failed something like 50 interviews.

Obviously they couldn't believe how lucky they were to have finally found someone who could have completed all the technical tests perfectly.

Now I have nothing against Pablo, and I rooted for him when I read the article. But I found it hilarious, and still do almost a decade later, that this top-tier company, based in a university town with perhaps the most famous university, had not realised they had simply overfitted for someone who could answer their leetcode selection perfectly. Not only had they not realised this, they then commissioned the article.

Eventually they reached out for an interview with me, where the recruiter promised there would be no test, and then I was "surprised" by one in a room with a manager who hadn't had time to read about my exception (which is fine); when he walked in and saw I hadn't done it, I was "walked out". The whole interview took less than about 10 minutes, when I was the most qualified senior developer for hundreds of miles who was available at the time. No, I'm honestly not trying to brag; I'm just saying the town was so small there just couldn't have been more in that short time period I was available.

I know this reads as bitter (my life is great now); I just remember it because at the time I was at my peak and would have accepted the job if offered, but instead I was walked out within 10 minutes.

Honestly, I'm just sharing this insight. The moral of the story, for me, is that companies never were great at hiring, and if anything the advent of LLMs might actually improve things as LLMs start to assess people based on their profiles and work. One can hope; I don't, I want an edge in this market with my company.

gigatexal 14 hours ago

It has always been the application of facts or knowledge, rather than the retrieval of facts, that makes for a really good interview.

It's better to know when to use a linked list than how to make one (because I'd just use the one in the library).
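
For instance, a minimal sketch in Python: reach for the library's structure instead of writing your own.

    # "Know when, not how": collections.deque is a linked structure under
    # the hood, with O(1) appends and pops at both ends.
    from collections import deque

    recent = deque(maxlen=5)      # keep only the five most recent events
    for event in ["a", "b", "c", "d", "e", "f"]:
        recent.append(event)      # the oldest entry is evicted automatically

    print(list(recent))           # ['b', 'c', 'd', 'e', 'f']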

So the candidate can prompt well; good. But how much of that knowledge can they apply to a problem, or are they just masters of HackerRank?

But more often than not, interviewers are lazy and just use canned HackerRank-style questions; or if it's not laziness, it's being too overworked to craft a really good interview.

WatchDog 11 hours ago

The last interview I ran was back when Copilot came out.

My company at the time had been using the same coding exercise for years, and many candidates inevitably published their code to GitHub, so Copilot was well trained on the exercise.

I had a candidate who used Copilot and still flubbed the interview, ignoring perfectly good suggestions from the LLM.

DustinBrett 10 hours ago

"when problems are trivially solvable by GPT" only the fake leet code ones that were never really what you'd do at work anyway. Sadly that is what people are giving devs in interviews.

Screen share to avoid cheating via AI, same as we were doing before AI when people could get friends or Google to help them cheat.

gibbonsrcool 17 hours ago

Panel interviews seem to be more common. Curious if others have seen the same? I personally feel very uncomfortable coding in front of a group. The first one of these I tried had like 5 people watching, and I lost my nerve and bailed. :|

  • RevEng 16 hours ago

    Panels and live programming assignments are such an awful idea. Is that what the workplace is like? Do they want people who can work under those conditions? I've been a working professional for 18 years who gives public talks regularly, and I can still see myself clamming up in that situation. Everyone knows it's hard to think and type when you are being watched.

    • lubujackson 8 hours ago

      Worse, how little does the company value developer time? If you get hired there, how many brain-numbing panels will you need to be a part of? It stinks of a lack of focus, trust, and quality.

ungreased0675 14 hours ago

I’ve had people using AI in interviews to answer simple “get to know you” questions. People just disconnect their brain and read whatever the machine says.

zitterbewegung 15 hours ago

I listened in on someone interviewing candidates, since many people were using AI. It's the same as with googling the answer: it's very obvious when someone is taking too long to get to it, or when you can't see a separate screen. Mitigation is literally watching the text window and seeing whether they are not typing, or taking too long to even make a bad implementation. There is now a problem if you allow Google, since Google will auto-generate a Gemini answer for the query.

esafak 12 hours ago

These days I ask real-world debugging questions where the root cause is not the error on the screen (something like the sketch below). I allow LLM use as long as I can see how they use it.
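
A minimal hypothetical example of the shape of such a question (invented names, not one I actually use): the traceback points at the arithmetic, but the real bug is a misspelled key that dict.get() silently turned into None one call earlier.

    # The on-screen error is a TypeError at the multiplication below; the
    # root cause is the typo in the config key, which .get() turns into
    # None instead of raising.
    def read_timeout(config):
        return config.get("timeout")               # missing key -> None

    def build_url(config):
        timeout_ms = read_timeout(config) * 1000   # TypeError raised here
        return f"https://example.com/?t={timeout_ms}"

    build_url({"timout": 30})                      # root cause: "timout"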

In my unfortunate experience, candidates who covertly rely on LLMs also tend to have embellished resumes, so I try to root them out before asking technical questions.

donatj 12 hours ago

Oh man, I have kind of hunkered down and not interviewed anywhere since the start of COVID. I had not even thought about how AI might affect things, let alone people not being in the office.

Last time I interviewed I spent about half of it standing at a whiteboard.

acheong08 15 hours ago

Run competitions. If you're hiring fresh grads, this is probably the best way to filter by skill. If you can use AI to beat all the other candidates, that's a skill by itself; in practice, those that use AI rarely ever make it into the top 10. Add a presentation/demo as part of the criteria to filter out those with bad communication skills.

  • johnnyanmac 9 hours ago

    In my industry, competitions for hire are just a front for spec work. So YMMV.

fastily 15 hours ago

I expect in-person interviews are going to be the norm soon, assuming they're not already. For now, the challenge I give candidates causes ChatGPT to produce convoluted code that a human never would. I then ask the person to explain the code line by line, and they're almost never able to give a satisfactory answer.

hot_gril 17 hours ago

It's still remote. I don't get how you could pass an interview using ChatGPT unless it's purely leetcode.

crazygringo 20 hours ago

Not sure why interviews would change.

Even if you're using ChatGPT heavily it's your job to ensure it's right. And you need to know what to ask it. So you still need all the same skills and conceptual understanding as before.

I mean, I didn't observe interviews change after powerful IDEs replaced basic text editors.

  • InkCanon 19 hours ago

    Because interviews were always an attempt to turn a few hours of signal into an accurate prediction of performance over months to years. AIs generate a lot of noise that masks that signal: interviewees can just pass the question to the AI, which will generate a reasonable-sounding response.

  • fragmede 19 hours ago

    Because there's a format of interview that's basically a brainteaser that takes 45 minutes to think through and whiteboard some code for, but which is trivially solvable by pasting a screenshot of the prompt into ChatGPT. This amounts to candidates being given the answer, then pretending to struggle with your question and work out a solution, when really they're just stalling for time before copying the answer from one browser tab to the next.

    • Barrin92 13 hours ago

      If you're doing this at least face to face over Zoom and you can't tell that someone is copying answers from their second monitor and throwing ChatGPT explanations at you, you honestly need a better interviewer.

      I've done a lot of interviews over Zoom, and whenever someone cheats by passing someone else's work off as their own (the weirdest thing I've ever encountered was someone having a friend on the call trying to feed them answers, which he admitted to later), it is so painfully obvious if you grill them a bit and throw a few curveballs.

      • fragmede 4 hours ago

        If you believe your catch rate is 100%, just because you've managed to identify a couple of people who were really bad at it, you might want to check your priors.

skeptrune 11 hours ago

I have been asking people questions about git a lot. It's useful to figure out whether or not they care about the craft.

epolanski 15 hours ago

Nothing; we don't do technical interviews, they are silly.

NomDePlum 17 hours ago

It can be weird. I've seen some decent resumes where, in the actual interview, the candidate obviously has zero demonstrable knowledge of what's on them.

Ask even the shallowest question and they are lost and just start regurgitating what feels like very bad prompt based responses.

At that point it's just about closing down the interview without being unprofessional.

shayarma 14 hours ago

They still find a way to make it horrible and degrading.

codr7 16 hours ago

About time, testing coding skills that way was always a bad idea.

sshine 17 hours ago

I can't speak for job interviewing, but having recently completed 3rd-semester trade-school oral exams in Java programming:

It is really important to watch people code.

Anyone can fake an abstract overview.

tzury 10 hours ago

Google, StackOverflow, ChatGPT/Claude, et al.: those are all tools. Tools of the craft, so a professional candidate should be using them proficiently.

How good is one at understanding a problem (its scope and source)? And how good are they at designing a solution that is simple, yet will tackle other "upcoming" problems (configurable vs. hard-coded, etc.)?

This is what one should care about.

* StackOverflow raised the same question in its early days, and so did Google. Trust me, I am old enough to remember this.

Perenti 11 hours ago

Can I hallucinate answers in an interview, and have them rated as acceptable?

littlestymaar 4 hours ago

The coding interview I designed back in 2022 is still out of reach of all LLMs on the market while still being good at selecting people (I hired 6 people with it, all of them great fits, out of 30-ish applications, so it doesn't seem to have a high false-negative rate either).

The main takeaway is that if you design your interview questions to match the actual skills you're looking for, AI won't be an issue, because it doesn't have those skills yet. In short: ask questions that are straightforward on the surface but deep beneath, with trade-offs that must be weighed by asking the interviewer questions.

vunderba 20 hours ago

I mentioned this in a different related post, but there seems to be a pretty sad lack of basic integrity in the tech world, where it's become a point of pride to develop and use apps that allow an applicant to blatantly cheat during interviews using LLMs.

As a result, several of my friends who assist in hiring at their companies have already returned to "on-site" interviews. The funny thing about this is that these are 100% remote jobs - but the interviews are being conducted at shared workspaces. This is what happens when the level of trust goes down to zero due to bad actors.

  • johnnyanmac 9 hours ago

    The crushingly long and disrespectful interview process burned that bridge first. If the job market becomes a problem of scale, people (especially programmers) will scale up accordingly, as the interviewers have with ATS.

  • ghaff 16 hours ago

    Now that we're largely past COVID, it seems like sheer laziness or cheapness not to conduct in-person interviews for a professional job (other than a short-term project) after an initial screen, for all sorts of reasons that have little to do with cheating. I don't care if the job is largely remote.

    As someone else noted, this used to be utterly standard. And frankly I’d probably just pass on someone who balked. Plenty of fish in the sea.

__loam 17 hours ago

There's still plenty of engineers that can't code their way out of a paper bag

  • throwaway123lol 14 hours ago

    Well they're not really engineers then are they?

    • __loam 9 hours ago

      Neither are most programmers

deadbabe 9 hours ago

We now consider the employees that we hired prior to the arrival of AI to be the equivalent of “low background steel”. They have much stronger job security.

Everyone hired after that is more suspect, and if they screw up too much or don’t perform well we just fire them quickly during the probation period, whereas previously it was rare for people to get fired during the probation period.

roland35 19 hours ago

There are some tools that read your screen and can provide hints and solutions for coding-type questions. I honestly don't trust myself not to mess that up, plus there's the whole ethics side of it, but I'm sure this will always be a problem for online assessments.

anothernewdude 5 hours ago

I make it clear on my resume that I won't work for companies that use AI in their hiring processes.

mr90210 5 hours ago

Some recruiters have tried to record our initial interview with one of those services that automatically captures notes. I REFUSED.

fsloth 8 hours ago

Trick problems and whiteboarding was always “jump through the hoops” bs IMHO.

In-person dialogue with multiple team members plus a sufficiently complex take-home assignment remains a pretty good method. LLMs are excellent API docs and should not be avoided.

dinkumthinkum 6 hours ago

I don't really see why it's so hard to interview someone remotely because of AI or speech-to-text. I'm not aware of any system so advanced or fast that a video call, decent conversation skills, and careful listening can't identify 99.99% of people using AI. Even without video, it's not hard to have a conversation, gauge their skill level, ask questions, and listen to the answers carefully. You can tell when the conversation is not fluid, or when the answer given is at a level that mismatches their apparent sophistication. You can ask an unreasonably obscure or difficult question and see if you get an LLM answer. You can suss it out. You don't have to do in-person interviews.

nathias 3 hours ago

The biggest change I've seen as a remote dev is considerably fewer replies to my non-AI-made CV; I think the competition for cheap labor has gotten much harsher.

kittikitti 17 hours ago

I tell everyone to share their entire screen, have their video on, and start coding. It's not that different. Even as an interviewer, I experimented with the usual cheating techniques so I know what to look out for. The best are the AI teleprompters. If you can do the work with your own AI then I see no need to care as the business will not care either.

The story is completely different for airgapped dark room jobs, but if you know you know.

  • locusofself 11 hours ago

    I'm curious what you mean by this as someone who has worked in a SCIF for years. What I've found (and I personally benefited from this) is that interviews are far more forgiving for people who have security clearances, because of the comparatively tiny available talent pool. AWS, Azure etc allow people to join as "SRE" (a bastardized term at this point) or "Cloud Engineer" and later switch to SWE pretty easily.

  • blazing234 13 hours ago

    If your interviews are easily cheatable, you should probably fix the interviews. Back in the day when I was in university, we were allowed to use any resources for exams because you couldn't cheat easily.

  • crooked-v 15 hours ago

    > share their entire screen

    That would be a bit awkward with my 32:9 primary monitor.

    • mdaniel 13 hours ago

      I have my dual 4K Dells for day to day work but I unplug one and scale it to something reasonable for interviews because having consideration is part of the signal that I'm sending (and empathy, because I hate trying to read purple text on a black background myself when trying to view other people's setups)

      • johnnyanmac 9 hours ago

        Nah, I have a triple-monitor setup. I'll share my primary screen and work on it. If they can't trust me enough to do that, then we're already in a tumultuous relationship.

        Not like I'd do anything. I use one screen to work, one for video, and one for work visuals. Seeing one screen would show if I had any hidden windows anyway.

yieldcrv 11 hours ago

I recently did an interview that involved making pull requests for code updates across a time period.

That seemed to thwart AI use, or at least one-shotting, and to require understanding of and experience with working in an organization.

I liked that

  • Aachen 8 hours ago

    What does that even mean? Trickling in changes you want to make via multiple pull requests, spaced across days or months or whatever, instead of submitting it at once?

cjkdkfktmymymyy 15 hours ago

This conversation feels bizarrely tone deaf. The skill of being able to recall specific knowledge on demand is going away.

How LLMs will evaluate a skill they are making obsolete is a question I am not sure I understand.

  • khazhoux 15 hours ago

    That is exactly what this thread is discussing.

    (You hardly needed a throwaway for this comment, though)

1270018080 13 hours ago

You have to depend on your professional network to skip the BS of the modern day interview process (as an applicant and interviewer).

xyst 15 hours ago

If a problem can be "trivially" solved by GPT, the problem is with your interview process, not the tool. It's wild to me that interviewers still ask candidates for senior positions leetcode-type questions when the actual job is some front-end or devops position.

The gap between the interview and actual on-the-job duties is very wide at many (delusional) companies.

  • blazing234 13 hours ago

    Imagine trying to get a job doing CRUD work and getting leetcode-style questions.

    • locusofself 11 hours ago

      That is the world we live in, in my experience. This is one of the reasons I decided to switch to management at 40 years old earlier this year: if I lose my job, I don't really want to be grinding leetcode.

EGreg 8 hours ago

I mean, it could be really bad

Online, you don't know if the person you're interviewing is an AI or not.

They could have an AI resume, AI persona, and AI profile; then someone who looks like that shows up, and it could be a deepfake. They do the coding challenges, you hire the person remotely, and they continue to do great work with the AI, but actually this person doesn't exist; they just created 100 such people, and they're all AI agents.

It sucks if you want to hire humans. And it sucks for the humans. But the work gets done for the same price and more reliably. So dunno