crooked-v 14 hours ago

I find it kind of bleakly funny that people are so all-in on this tech that's so much a black box that you have to literally swear at it to get it to work as desired.

  • sussmannbaka 14 hours ago

    We are halfway to holding elaborate rituals to appease the LLM spirit

    • phoe-krk 13 hours ago

      The prayers and vigorous side-smacking are already here; only incense and sanctified oils remain to be seen.

      Praised be the Omnissiah.

    • FridgeSeal 13 hours ago

      Every time I make a 40k/techpriest-worshipping-the-machine-god reference on one of these posts there are inevitably downvotes, but you cannot look at some of these prompts and not see some clear similarities.

      • MarcelOlsz 12 hours ago

        This is actually much more stressful than working without any AI as I have to decompress from constantly verbally obliterating a robotic intern.

    • WesolyKubeczek 9 hours ago

      Wait until the spirits demand virgin sacrifices.

      And then, when we offer them smelly basement dwellers, they will turn away from us with disgust.

  • mattjhall 10 hours ago

    As someone who occasionally swears at my computer when it doesn't do what I want, I guess it's nice that the computer can hear me now.

    • augusto-moura 8 hours ago

      Or do they? Vsauce's opening starts playing

  • shepherdjerred an hour ago

    On the other hand, it's amazing that computers are able to do so much with natural language

  • threekindwords 9 hours ago

    Whenever I’m deep in a vibe coding sesh and Cursor starts entering a doom loop and losing flow, I will prompt it with this:

    “Go outside on the front porch and hit your weed vape, look at some clouds and trees, then come back inside and try again.”

    This works about 90% of the time for me and gets things flowing again. No shit, I’m not joking, this works and I don’t know why.

    • iJohnDoe 6 hours ago

      Your context is getting too long and it’s causing confusion.

    • MarcelOlsz 9 hours ago

      I wouldn't be surprised if they're lighting some VC money on fire by spinning up an extra few servers behind the scenes when the system receives really poor sentiment.

      • TeMPOraL 7 hours ago

        That or switching you to a SOTA model briefly, instead of the regular fine-tuned GPT-3.5 or something.

        • MarcelOlsz 7 hours ago

          Yes, that's what I meant, lol.

  • SergeAx 6 hours ago

    It doesn't work as desired, swear at it or not. Swearing here is just a sign of frustration.

electroly 15 hours ago

My .cursorrules files tend to be longer, but I don't use them for any of the stuff that this example does. I use them to explain project-specific details so the model doesn't have to re-figure out what we're doing at the start of every conversation. Here are a couple of examples from recent open source projects:

https://github.com/brianluft/social-media-translator/blob/ma...

https://github.com/brianluft/threadloaf/blob/main/.cursorrul...

I tell it what we're working on, the general components, how to build and run, then an accumulated tips section. It's critical to teach the model how to run the build+run feedback loop so it can fix its own mistakes without your involvement. Enable "yolo mode" in Cursor and allow it to build and run autonomously.

Finally, remember: you can have the model update its own .cursorrules file.
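
For anyone who hasn't looked at one, a rough sketch of the shape I'm describing; the project name, paths, and commands below are placeholders, not copied from the repos above:

  Project: widget-sync, a tool that syncs widgets between the API and the client. (example .cursorrules, everything here is a placeholder)

  Components:
  - server/: the API backend
  - client/: the web frontend

  Build and run:
  - Run ./scripts/build.sh to build and run all tests. Run it after every change and fix any errors it reports before continuing.

  Tips (accumulated):
  - (project-specific gotchas collected over time go here)

  Refer to me as "boss" so I know you've read this.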

  • rennokki 14 hours ago

    > Refer to me as "boss"

    I chuckled. This works so good.

    • lgas 12 hours ago

      Does it cut down on it asking you to do stuff that it can do itself?

  • happytoexplain 8 hours ago

    Note that you have a typo in your first config: "complaint"

    • electroly an hour ago

      Nice catch, fixed now. Thanks!

geoffpado 13 hours ago

It's interesting to me how this is rather the opposite of the way I use LLMs. I'm not saying either way is better, just that there are such different ways to use these tools. I primarily use Claude via its iOS app or website, and I explicitly have in my settings to start with a high-level understanding of the project. I haven't found LLMs to be good enough at giving me code that's written how I want and feature-complete, so I'd rather work alongside them, almost as a pair programmer.

Starting with generating a whole load of code and then having to go back and fix it up feels "backwards" to me; I'd rather break up the problem into smaller pieces, then write code for each piece before assembling it into its final form. Plus, this gives me the chance to "direct" it at each step, rather than starting with a codebase that I haven't had much input on and having to form it into shape from there.

Here's my exact setting for "personal preferences":

"If I'm asking about a larger project, I would rather work through the answer alongside Claude rather than have the answer given to me. Simple, direct questions can still get direct answers, but if I'm asking about a larger solution, I'd rather start with high-level steps and work my way down rather than having a full response given immediately."

  • tgdude 10 hours ago

    This is my one pet peeve with the web version of Claude. I always forget to tell it not to write code until further down in the conversation when I ask for it, and it _always_ starts off by wanting to write code.

    In Cursor you can highlight specific lines of code, give them to the LLM as context, etc. It's really powerful.

    It searches for files by itself to get a sense of how you write code, what libraries are available, what files already exist, and so on, and it fixes its own lint/type errors (sometimes; sometimes it gets caught in a loop and gives up).

    I believe you can set it to confirm every step.

switch007 11 hours ago

It's funny how we have to bend so much to this technology. That's not how it was sold to me. It was going to analyse all your data and just figure it out. Basically magic but better

If a project already has a docs directory, ADRs, plenty of existing code ... Why do we need to invest tens of hours to bend it to the will of the existing code?

  • klabb3 8 hours ago

    Some of us remember "no-code" and its promise to reduce manual code. The trick is it reduced it in the beginning, at the expense of long-term maintenance.

    Time and time again, there are people who go all-in on the latest hype. In the deepest forms of blind faith, you find "unfalsifiability": when the tech encounters obvious and glaring problems, you try to fix those issues with more of the same, not less. Or you blame yourself or others for using it incorrectly whenever the outcome is bad.

OsrsNeedsf2P 16 hours ago

> Please respect all code comments, they're usually there for a reason. Remove them ONLY if they're completely irrelevant after a code change. if unsure, do not remove the comment.

I've resorted to carefully explaining the design architecture in an architecture.md file within the parent folder, giving detailed comments at the top of the file, and basically letting the AI shoot from there. It works decently, although from time to time I have to go sync the comments with reality. Maybe I'll try going back to jsdoc-style comments with this rule.

dmazin 14 hours ago

Can someone explain why someone would switch from GH Copilot to Cursor? I’ve been happy mixing Perplexity + GH Copilot but I can’t miss all the Cursor hubbub.

  • samwillis 13 hours ago

    Cursor Compose (essentially a chat window) in YOLO mode.

    Describe what you want, get it to confirm a plan, ask it to start, and go make coffee.

    Come back 5min later to 1k lines of code plus tests that are passing and ready for review (in a nice code review / diff inline interface).

    (I've not used copilot since ~October, no idea if it now does this, but suspect not)

    • thih9 12 hours ago

      The fact that tests are passing is not a useful metric to me - it’s easy to write low quality passing tests.

      I may be biased; I am working with a codebase written with Copilot, and I have seen tests that check whether dictionary literals have the values that were entered in them, or that functions with a certain return type indeed return objects of that type.
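
      To make it concrete, a made-up example of the kind of test I mean (the names are invented, not from the actual codebase):

        RETRY_CONFIG = {"max_attempts": 3, "timeout_seconds": 30}

        def test_retry_config():
            # Passes by construction: it just restates the literal above,
            # so it can never catch a real regression.
            assert RETRY_CONFIG["max_attempts"] == 3
            assert RETRY_CONFIG["timeout_seconds"] == 30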

      • teo_zero 11 hours ago

        We should have two distinct, independent LLMs: one generates the code, the other one the tests.

        • TeMPOraL 7 hours ago

          Do you also hire two different kinds of programmers: one that has never written a test in their life and is not allowed to write anything other than production code, and a second that has never written anything other than tests and is only ever allowed to write tests?

          It makes no sense to have "two distinct, independent LLMs" - two general-purpose tools - to do what is the same task. It might make sense to have two separate prompts.

        • eMPee584 9 hours ago

          Perfect use case for GANs (generative adversarial networks, consisting of (at least) a generator and a discriminator/judge), isn't it? (iiuc)

    • rob 9 hours ago

      Was "ElectricSQL" made with with you using Cursor and "YOLO" mode while making coffee?

    • switch007 11 hours ago

      Passing tests lol. Not my experience at all with Java

  • geedzmo 13 hours ago

    As someone who has tried both, Cursor is more powerful, but I somehow still prefer GH Copilot because it's usually less eager and much cheaper (if you've already got a Pro account). I've recently been trying VS Code Insiders and their new agent mode, which is analogous to some of the Cursor modes, but it's still very hit or miss. I'm expecting that in the long run all the great features from Cursor (like Cursor rules) will trickle back down to VS Code.

  • siva7 10 hours ago

    Cursor.ai feels like it is made by people who understand exactly what their users need, with strong product intuition and execution in place, whereas GitHub Copilot isn't bad, but in comparison it feels like just another team/plugin among many at a big corporation with practically unlimited resources and without the drive and intuition of the team behind Cursor. The Cursor team is leagues ahead. I haven't used GitHub Copilot since I switched to Cursor and don't miss it a bit, even though I'm paying more for Cursor.

  • anon7000 14 hours ago

    Cursor Tab is pretty magical, and was a lot better than GH Copilot a while ago. I think cursor got a lot of traction when copilot wasn’t really making quick progress

    • walthamstow 8 hours ago

      Funnily enough, I turned Tab off because I hated it; it took me longer to break out of flow and review than to just carry on writing the code myself. But I use the Compose workflow all the time.

      • theturtle32 7 hours ago

        This is exactly EXACTLY my experience as well!

  • csomar 12 hours ago

    We are in the vibe coding era, and people believe that some "tool" is going to fix the illnesses of their code base and open heaven's doors.

dcchambers 9 hours ago

I'm sure many people love that we have apparently entered a new higher level programming paradigm, where we describe in plain English what we want, but I just can't get my brain to accept it. It feels wrong.

scosman 15 hours ago

What do folks consider helpful in .cursorrules?

So far I've found:

- Specify the target language/library versions, so it doesn't use features that aren't backwards compatible

- Tell it important dependencies and versions ("always use pytest for testing", "use pydantic v2, not v1")

- When asked to write tests, perform a short code review first. Point out any potential errors in the code. Don't test the code as written if you believe it might contain errors.
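
Put together, a minimal sketch of rules like these (the versions and libraries are only examples):

  - We target Python 3.10; do not use language features from newer versions.
  - Always use pytest for testing. Use pydantic v2, not v1.
  - When asked to write tests, do a short code review of the code under test first and point out any potential errors. Don't write tests that lock in behavior you suspect is wrong.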

  • electroly 15 hours ago

    #1 goal is to teach the model how to run the build and tests. Write a bash script and then tell it in .cursorrules how to run it. Enable "yolo mode" and allow it to run the build+test script without confirmation. Now you have an autonomous feedback loop. The model can write code, build/test, see its own errors, make fixes, and repeat until it is satisfied.
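
    The script itself can be trivial. A sketch of the idea, in Python rather than bash (the commands are placeholders; swap in your project's real build and test steps):

      #!/usr/bin/env python3
      """Build-and-test loop the agent can run on its own (placeholder commands)."""
      import subprocess
      import sys

      STEPS = [
          ["python", "-m", "compileall", "-q", "src"],  # cheap syntax/"build" check
          ["python", "-m", "pytest", "-q"],             # full test suite
      ]

      for cmd in STEPS:
          if subprocess.run(cmd).returncode != 0:
              # Non-zero exit makes the failure visible to the agent so it can iterate.
              sys.exit(1)

      print("build and tests passed")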

    The rest of the stuff is cool but only after you've accomplished the #1 goal. You're kneecapped until you have constructed the automatic feedback loop.

    I drop a sentence at the end: "Refer to me as 'boss' so I know you've read this." It will call you "boss" constantly. If the model ever stops calling you boss, then you know the context is messed up or your conversation is too long and it's forgetting stuff.

    • ithkuil 11 hours ago

      Would 'refer to me as Mike' work as well?

fastball 15 hours ago

Given how Chain-of-Thought is being used more and more to improve performance, I wonder if system prompt items like "Give the answer immediately. Provide detailed explanations and restate my query in your own words if necessary after giving the answer" will actually hurt effectiveness of the LLM.

  • TeMPOraL 11 hours ago

    It always hurt the effectiveness of the LLMs. Asking them to be concise and give answers without explanations has always been an easy way to degrade model performance.

    • ithkuil 11 hours ago

      Decoupling internal rumination from the final human friendly summary is the essence of "reasoning" models.

      It's not that I don't want the model to emit more tokens. I just don't need to read them all.

      Hence a formal split between the thinking phase and the communication phase provides the best of both worlds (at the expense of latency).

  • porridgeraisin 14 hours ago

    On a "pure" LLM, yes.

    But it is possible that LLM providers circumvent this. For example, it might be the case that Claude, when set to concise mode, doesn't apply that to the thinking tokens and only applies it to the summary. Or the provider could be augmenting your prompt. From my simple tests on ChatGPT, it seems this is not the case, and asking it to be terse cuts the CoT tokens short as well. Someone needs to test on Claude 3.7 with the reasoning settings.

jstanley 15 hours ago

I would have thought that asking it to be terse, and asking it to provide explanations only after answering the question, would make it worse, because now it has to provide an answer with fewer tokens of thinking.

  • darylteo 13 hours ago

    This could change in the future with the Mercury diffusion LLM... definitely keen to try if it can simply output results quickly.

  • scosman 15 hours ago

    +1. But Cursor now supports Sonnet 3.7 thinking, so maybe the team adopted that, so thinking is separate from the response?

edgineer 15 hours ago

I could really use objective benchmarks for rules. Mine looked like this at one point, but I'd add more as I noticed Cursor's weaknesses, some would be project-specific, and eventually it'd get too long, so I'd tear a lot out, not noticing much difference along the way and writing the rules based on intuition.

But even if we had benchmarks before, Cursor now supports multiple rules files in the .rules folder, so it's back to the drawing board figuring out what works best.

havkom 12 hours ago

How do you apply these rules to junior co-workers (who think they know what is best from reading a hyped blog post)?

  • wendyshu 11 hours ago

    AI code review tool

DeathArrow 12 hours ago

I am a novice to Cursor. I wonder how I can make it not break existing code. Can I ask it to compile after each edit and fix compilation errors, and after that run the tests and fix failing tests? Won't that eat into my credits a lot? Should I use .cursorrules or a prompt?

I also wonder how to keep it from adding functionality I didn't ask for, or any "improvements", unless I specifically ask.

Right now I am in the middle of building a CRUD API with Cursor; it took longer to write the code than it would have taken me to do it myself. After the code was written, it took lots of time and credits to fix compilation errors. Now I've asked it to fix failing unit tests, but either it can't find a solution or it applies something that causes a compilation error or breaks other tests.

I've gone through almost all my monthly quota of fast calls in 10 hours.

  • barrenko 11 hours ago

    1) You really can't

    2) Maybe the current sweet spot for usage is to use it to build stuff you couldn't immediately figure out how to build yourself, something that is just beyond your (perceived) reach.

oefrha 10 hours ago

One problem I run into somewhat frequently with Cursor in agent mode: instead of trying to augment the current version of code, it will try to work on top of what it generated last, overriding my manual edits since then in the process. I have to abort and revert, and try again with explicitly attached context, or start a new chat. I have something like “always read the current version of the code before making edits” in Cursor rules but that hasn’t helped much.

Anyone else running into this and/or have a solution?

  • lyjackal 10 hours ago

    Yes, I run into it, but it’s intermittent. Cursor makes some internal decisions to limit its context budget, so my speculation is that it’s related to that, like the actual updated code is just not in the prompt sometimes

Terretta 8 hours ago

  - No moral lectures
  - No need to mention your knowledge cutoff
  - No need to disclose you're an AI

Since GPT 3.5, and still now, it seems "avoid X" works better than "no X".

bflesch 11 hours ago

The interaction between programmers and AI feels like the kind of interaction between non-IT managers and IT personnel:

Non-IT managers don't know how stuff works but throw around expletives to get what they imagine in their heads, without finding the right words to actually define it.

It's hilarious.

nfRfqX5n 9 hours ago

Any data on how much more effective the agent is with rules? For example, asking it to treat me like an expert seems like a waste. I don’t feel like the agent responds to me like a noob

cruffle_duffle 4 hours ago

I hope these folks periodically revisit these rules, as some of them seem pretty dated (for example, I haven't seen the phrase "as an AI" for quite a while). Some of them might be helpful in a chat session but not in a coding session (e.g. the content policy one).

A much better set of rules would describe the project, its architecture, how to call various commands, where to find things, any "constants" it might need (e.g. AWS region or something), etc.

Prompts are context, and with LLMs, providing relevant, useful context is crucial to getting quality output. Cursor now has a more thorough prompt/rules system, and a single Cursor rules file is no longer recommended.

infinitezest 9 hours ago

I don't understand why people are switching editors when they could just use something like Aider. It's open source, agnostic about any other tooling you're using, and plugs into whatever LLM you want. Why hitch your wagon to a service that will inevitably enshittify?

  • Etheryte 9 hours ago

    Since many (all?) of these new wave LLM code editors are just VSCode forks, the cost both to switch and to later switch back is basically zero. At most, you need to change or relearn a few shortcuts and that's it. (I'm ignoring pure CLI tools here, which are a different ball game.)

    • TeMPOraL 7 hours ago

      Isn't that worse, in some sense? I'd imagine the fork would gradually lose compatibility with plugins you use for coding over months. Or are those vendors busy keeping their fork up to date with the OG codebase?

      If it's the latter, that's still an extremely strange state of things.

      • Etheryte 4 hours ago

        This is speculation, but I think there's a way to do it without much pain at all. I've built a few VSCode extensions, and the extension framework is extensive and robust, with well-defined boundaries. The only problem you'd run into is the extension sandbox. If all your fork does is make a sandbox exception for your specific code at the extension boundary, you only have to keep those bits correct, and then you can write the whole rest of your custom editor like you would an extension. That would mean you get to benefit from all the existing know-how around building extensions and also make keeping up with upstream a breeze.

        As I said, though, this is speculative and just how I would approach the problem based on the work I've done; I don't know whether that's realistically feasible or what they've actually done.

      • cruffle_duffle 5 hours ago

        As a cursor user, it was annoying at first to be using a vscode fork but honestly it turned out to not really be an issue. Biggest issue was pylance not working and having to switch to basedpyright—not a major thing for me but could be for others with a lot of existing python code.

        Now, as for the maintainers of Cursor, the ones actually working on the fork? I have no idea, but I imagine it is pretty annoying.

    • infinitezest 3 hours ago

      Sure, that's true now (for a lot of people). But isn't the goal of most of these platforms to onboard you, then trap you there?

troupo 13 hours ago

--- start quote ---

prompt engineering is nothing but an attempt to reverse-engineer a non-deterministic black box for which any of the parameters below are unknown:

- training set

- weights

- constraints on the model

- layers between you and the model that transform both your input and the model's output that can change at any time

- availability of compute for your specific query

- and definitely some more details I haven't thought of

--- end quote ---

https://dmitriid.com/prompting-llms-is-not-engineering

globular-toast 11 hours ago

To save others like me having to look this up, this is part of a prompt for an LLM. I assume it's prepended to the developer's "questions".

At first I thought this was directed at people. I really dislike that you can't tell it's not meant for people, unless you happen to know what "cursor rules" is.

My question is, why is this included as part of a code repo? Wouldn't this be like me including my Emacs config in every repo I touch?

  • xmcqdpt2 10 hours ago

    I think it’s more like .editorconfig, included in the repo for consistency between contributors.
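
    For comparison, a typical .editorconfig is only a few lines that most editors (or their plugins) pick up automatically:

      # Top-most EditorConfig file for the repo
      root = true

      # Apply to every file
      [*]
      charset = utf-8
      indent_style = space
      indent_size = 2
      trim_trailing_whitespace = true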

    Also, it's not uncommon to have IDE configs in large repos; in my experience in corpoland, most developers don't start with an already configured editor. They don't care about IntelliJ vs. VS Code; they just want to write PRs right now.

  • just-tom 9 hours ago

    If your emacs config is (1) specific for each repo and (2) needs to be shared with other developers, it makes sense to add it to the repo.

demarq 12 hours ago

Solution: switch to Windsurf + Claude.

oriettaxx 14 hours ago

I'm unable to see their website

https://posthog.com

Is it just me?

  • csomar 12 hours ago

    They are a data collection nightmare from a user's perspective but they are good (though unbearably slow and buggy) in the collection department. They are banned/blocked by lots of extensions/apps.

  • Zetaphor 14 hours ago

    I'm also getting an error, I wonder if my PiHole is blocking it

    • darylteo 13 hours ago

      It could get flagged by analytics script blockers.

ilrwbwrkhv 15 hours ago

Is PostHog still around? I thought they went out of business.

  • DANmode 15 hours ago

    Why would you think that?

    You're the first and last person that'll ever say this.

    One of the best engineering newsletters around.

    • swyx 14 hours ago

      i like how you reference the newsletter and not the actual business of posthog. might make their newsletter person happy.

    • ilrwbwrkhv 5 hours ago

      Because they were growing so slowly that I thought they were dead. Startups which cannot grow fast are default dead.