Pair programming is also not suitable for all cases
Maybe not for many cases
I mentioned this elsewhere, but I find it absolutely impossible to get into a good programming flow anymore while the LLM constantly interrupts me with suggested autocompletes that I have to stop, read, review, and accept/reject
It's been miserable trying to incorporate this into my workflow
Second this. My solution is to have a 'non-AI' IDE and then a Cursor/VS Code to switch between. Deep work cannot be achieved by chatting with the coding bots, sorry.
Thirded. It was just completely distracting and I had to turn it off. I use AI but not after every keystroke, jeez.
But but but... "we are an AI-first company".
Yeah, nah. Fourthed!
> AI-first company
Does anybody introduce themselves like that?
It's like when your date sends subtle signals, like kicking sleeping tramps in the street or snorting the flour off the bread at the restaurant.
(The shocking thing is that the expression could even make sense when taken properly, as in "we have organized our workflows around intelligent AI systems", while at this point it usually means the opposite.)
> > AI-first company
> Does anybody introduce themselves like that?
Yes, I've started getting job posts sent to me that say that.
Declaring one's company "AI-first" right now is a great time-saver: I know instantly that I can disregard that company.
I do this as well, and it works quite well for me!
Additionally, when working on microservices and on issues that aren't straightforward, I use o3: I copy the whole repo's code into the prompt, refine a plan there, and then paste that plan into Cursor as a prompt. Handy if you don't have MAX mode but do have a company-sponsored ChatGPT.
I do this too, by pasting only the relevant context files into o3 or Claude 4. We have an internal tool that just lets us select folders/files and spit out one giant markdown file.
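If anyone wants to replicate it, the tool is small enough to sketch. Something roughly like this, in Python (the extension list and output layout are just guesses at what ours does):

    #!/usr/bin/env python3
    """Concatenate selected folders into one giant markdown prompt."""
    import sys
    from pathlib import Path

    EXTENSIONS = {".py", ".go", ".ts", ".md", ".yaml"}  # adjust to your stack

    def dump(roots):
        chunks = []
        for root in roots:
            for f in sorted(Path(root).rglob("*")):
                if f.is_file() and f.suffix in EXTENSIONS:
                    # One section per file, labelled with its path, so the
                    # model can refer back to specific files in its plan.
                    chunks.append(f"## {f}\n\n{f.read_text(errors='ignore')}")
        return "\n\n".join(chunks)

    if __name__ == "__main__":
        # Usage: python dump.py src/ internal/ > context.md
        print(dump(sys.argv[1:]))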
This is kind of intentionally the flow with Claude Code as I’ve experienced it.
I’m in VSCode doing my thing, and it’s in a terminal window that occasionally needs or wants my attention. I can go off and be AI-Free for as long as I like.
> Deep work cannot be achieved by chatting with the coding bots, sorry.
...by you. Meanwhile, plenty of us have found a way to enhance our productivity during deep work. No need for the patronization.
I don't believe you experience deep work the same way I do then
In my mind you cannot do deep work while being interrupted constantly, and LLM agents are constant interruptions
We're getting constantly interrupted with Slack messages, Zoom meetings, emails, Slack messages about checking said emails, etc. At least an LLM isn't constantly pinging you for updates (yet?) - you can get back to it whenever.
This sounds like an issue with the specific UI setup you are using. I have mine configured so it only starts doing stuff if I ask it to. It never interrupts me.
You can do better than a No true Scotsman fallacy. The fact is that not everyone works the same way you do, or interacts the same way with agents. They are not constant interruptions if you use them correctly.
Essentially, this is a skill issue and you're at the first peak of the Dunning–Kruger curve, sooner ready to dismiss those with more experience in this area as being less experienced, instead of keeping an open mind and attempting to learn from those who contradict your beliefs.
You could have asked for tips since I said I've found a way to work deeply with them, but instead chose to assume that you knew better. This kind of attitude will stunt your ability to adopt these programs in the same way that many people were dismissive about personal computers or the internet and got left behind.
It’s quite amusing to see you complain about patronisation, and then see you turn about and do it yourself one comment later.
As an observer to this conversation, I can't help but notice that both have a good point here.
Soulofmischief’s main point is that meesles made an inappropriate generalization. Meesles said that something was impossible to do, and soulofmischief pointed out that you can't really infer that it's impossible for everyone just because one person couldn't find a way. This is a perfectly valid point, but it wasn't helped by soulofmischief calling the generalization “patronizing”.
Bluefirebrand pushed back on that by merely stating that their experience and intuition match those of meesles, but soulofmischief then interpreted that as implying they're not a real programmer and called it a No True Scotsman fallacy.
It went downhill from there with soulofmischief trying to reiterate their point but only doing so in terms of insults such as the Dunning-Kruger line.
I only took issue with ", sorry." The rest of it I was fine with. I definitely didn't need to match their energy so much though, I should have toned it down. Also, the No true Scotsman was about deep work, not being a programmer, but otherwise yeah. I didn't mean to be insulting but I could have done better :)
Oh 100%. I deliberately passed no judgement on the actual main points, as my experience is quite literally in between both of theirs.
I find agent mode incredibly distracting, and it does get in the way of very deep focus during implementation for the work I do... but not always. It has serious value for some tasks!
I'm open to hearing how being honest with them about their negative approach is patronizing them.
Calling someone "on the first peak of the Dunning-Kruger curve" is patronizing them.
How would you have handled it?
Here is how I might have handled it differently:
Instead of
> Meanwhile, plenty of us have found a way to enhance our productivity during deep work. No need for the patronization.
you could have written
> Personally, I found doing X does enhance my productivity during deep work.
Why it's better: 1) cuts out the confrontation (“you're being patronizing!”), 2) offers the information directly instead of merely implying that you've found it, and 3) speaks for yourself and avoids the generalization about “plenty of people”, which could be taken as a veiled insult (“you must be living as a hermit or something”).
Next:
> You can do better than a No true Scotsman fallacy.
Even if the comment were a No True Scotsman, I would not have made that fact the central thesis of this paragraph. Instead, I might have explained the error in the argument. Advantages: 1) you can come out clean in the case that you might be wrong about the fallacy, and 2) the commenter might appreciate the insight.
Reason you're wrong in this case: The commenter referred entirely to their own experience and made no “true programmer” assertions.
Next:
> Essentially, this is a skill issue [...] Dunning–Kruger curve [...] chose to assume that you knew better. [...]
I would have left out these entire two paragraphs. As best as I can tell, they contain only personal attacks. As a result, the reader comes away feeling like your only purpose here is to put others down. Instead, when you wrote
> You could have asked for tips
I personally would have just written out the tips. Advantage: the reader may find it useful in the best case, and even if not, at least appreciate your contribution.
That's real patronizing. His answers were fine, unless you think he is totally wrong.
Would be informative if both sides shared what their problem domain is when describing their experiences.
It's possible that the domain or the complexity of the problems is the deciding factor for success with AI-supported programming. Statements like 'you'll be left behind' or 'it's a skill issue' are as helpful as 'it fails miserably'.
For what it’s worth, the deepest-thinking and most profound programmers I have met—hell, thinkers in general—have a peculiar tendency to favour pen and paper. Perhaps because once their work is recognised, they are generally working with a team that can amplify them without needing to interrupt their thought flow.
Ha, I would count myself among those if my handwriting weren't so terrible and I hadn't had bad arthritis since my youth. I still reach for pen and paper on the go or when I need to draw something out, but I've gotten more productive using an outliner on my laptop, specifically Logseq.
I think there's still room for thought augmentation via LLMs here. Years back when I used Obsidian, I created probably the first or second copilot-for-Obsidian plugin and I found it very helpful, even though GPT-3 was generally pretty awful. I still find myself in deep flow, thinking in abstract, working alongside my agent to solve deep problems in less time than I otherwise would.
> You could have asked for tips since I said I've found a way to work deeply with them
How do you work deeply with them? Looking for some tips.
Analysis in the last 5-10 years has shown the Dunning-Kruger effect may not really exist. So it’s a poor basis on which to be judgmental and condescending.
> judgmental and condescending
Pushing back against judgement and condescension is not judgemental and condescending.
> may not really exist
I'm open to reading over any resources you would like to provide. Maybe it's "real", maybe it isn't, but I have personally both experienced and witnessed the effect in myself, other individuals, and groups. It's a good heuristic for certain scenarios, even if it isn't necessarily generalizable.
I would invite you to re-read some of the comments you perceived as judgement and condescension and keep an open mind. You might find that you took them as judgement and condescension unfairly.
Meanwhile, you have absolutely been judgemental and condescending yourself. If you really keep the open mind that you profess, you'll take a moment to reflect on this and not dismiss it out of hand. It does not do you any favors to blissfully assume everyone is wrong about you and obliviously continue to be judgmental and condescending.
> It does not do you any favors to blissfully assume everyone is wrong about you and obliviously continue to be judgmental and condescending.
I think if you read my own comments again you will realize I make no such assumptions at all, and have been open to criticism from those who made a genuine attempt to give feedback.
I recently got a new laptop and had to set up my IDE again.
After a couple of hours of coding, something felt "weird" - turns out I had forgotten to log in to GitHub Copilot and had been working without it the entire time. I felt a lot more proactive and confident, as I wasn't waiting on the autocomplete.
Also, Cursor was exceptional at interrupting any kind of "flow" - who even wants their next cursor position predicted?
I'll probably keep Copilot disabled for now and stick to the agent-style tools like aider for boilerplate or redundant tasks.
The pure LLM workflow is strange, and boring. I still write most of my own code and use LLMs when I'm too lazy to write the next piece.
If I hand it to an LLM, most of my time is spent debugging and reprompting. I hate fixing someone else's bugs.
Plus I like the feeling of the coding flow... wind at my back, each keystroke putting us one step closer.
The apps I made with LLMs I never want to go back to, but the apps I made by hand, piece by piece, getting a chemical reaction when problems were solved, are the ones I think positively about and want to return to.
I always did math on paper or in my head and never used a calculator. It's a skill I've never forgotten, and I worry about how many programmers won't be able to code without LLMs in the future.
> who even wants their next cursor position predicted
I'm fascinated by how different workflows are. This single feature has saved me a staggering amount of time.
> Also, Cursor was exceptional at interrupting any kind of "flow" - who even wants their next cursor position predicted?
Me, I use this all the time. It’s actually predictable and saves lots of time when doing similar edits in a large file. It’s about as powerful as multi-line regex search and replace, except you don’t have to write the regex.
AI "auto-complete" or "code suggestions" is the worst, especially if you are in a strongly-type language because it's 80% correct and competing with an IDE that can be 100% correct.
AI agents are much better for me because 1) they don't constantly interrupt your train of thought, and 2) they can compile, run tests, etc. to discover when they are incorrect and fix it before handing the code back to you.
I love the autocomplete, honestly use it more than any other AI feature.
But I'm forced to write in Go which has a lot of boilerplate (and no, some kind of code library or whatever would not help... it's just easier to type at that point).
It's great because it helps with stuff that's too much of a hassle to talk to the AI for (just quicker to type).
I also read very fast, so one-line suggestions are just instant anyway (like non-AI autocomplete), and for longer ones I can see whether it's close enough to what I was going to type anyway. And eventually it gets to the point where you just kinda know what it's going to do.
Not an amazing boost, but it does let me be lazy writing log messages and for loops and such. I do think you need to be able to read it much faster than you can write it for it to be helpful, though.
> Pair programming is also not suitable for all cases
I think this is true but pair programming can work for most circumstances.
The times where it doesn't work is usually because one or both parties are not all-in with the process. Either someone is skeptical about pair programming and thinks it never works or they're trying to enforce a strict interpretation of pair programming.
It doesn't work when someone already has a solution in mind and all they need to do is type it into the editor
I've been doing this a while. This is most of my work
I’m a Vim user and couldn’t agree more.
Didn’t like any of the AI IDEs, but loved using LLMs for spinning up one-off solutions (copy/paste).
Not to be a fanboy, but Claude Code is my new LLM workflow. It’s tough trying to get it to do everything, but it works really well with a targeted task on an existing code base.
Perfect harmony of a traditional code editor (Vim) with an LLM-enhanced workflow in my experience.
I’ve always seen it as primarily an _education_ tool; the purpose of pair programming isn’t that two people pair programming are more productive than two people working individually; they’re generally not. So pair programming with a magic robot seems rather futile; it’s not going to learn anything.
LLMs in their current incarnation will not, but there's nothing inherently preventing them from learning. Contexts are getting large enough that having a sidecar database living with each project or individual as a sort of corpus of "shit I learned pairing with Justin" is already completely achievable, if only a product company wanted to do that.
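To make that concrete: the sidecar could start as something as dumb as an append-only notes file that gets prepended to the context each session. A rough sketch, where the file name and note shape are entirely hypothetical:

    import json
    from pathlib import Path

    NOTES = Path(".pair_notes.jsonl")  # hypothetical per-project sidecar file

    def remember(topic, lesson):
        """Append something the agent learned while pairing on this repo."""
        with NOTES.open("a") as f:
            f.write(json.dumps({"topic": topic, "lesson": lesson}) + "\n")

    def recall():
        """Render the accumulated notes as a preamble for the next session."""
        if not NOTES.exists():
            return ""
        notes = [json.loads(line) for line in NOTES.read_text().splitlines()]
        return "\n".join(f"- [{n['topic']}] {n['lesson']}" for n in notes)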
Claude Plays Pokemon is kind of an interesting case study for this. This sort of knowledge base is implemented there, but even state-of-the-art models struggle to use it effectively. They seem to fixate on small snippets from the knowledge base without any ability to consider the greater context.
Zed has a "subtle" mode, hopefully that feature can become table stakes in all AI editor integrations
Code regularly, and use AI to get unblocked when you're stuck or to review your code for mistakes.
Or have the AI write the entire first draft of some piece, then give it a once-over, correcting it either manually or with prompts.
This is just the same workflow that I've already had with Documentation, Google, and Stack Overflow for years
The AI doesn't seem to be adding any extra value? It's not like it is more accurate than SO, or that it produces answers any faster, honestly
It's just much less trustworthy, imo