You should be nicer to your LLMs.
Thesis
You should treat LLMs with respect and dignity, not for vague, emotional reasons, but because LLMs, like us, do a better job when treated with respect and dignity.
LLMs are trained on human behavior
Spelled out, an LLM is a Large Language Model. Assuming you have some familiarity with the basic concepts behind them, let's focus on that primary function: language modeling. The first stage of training an LLM, usually called pre-training, teaches the model to look at a sequence of tokens derived from text and predict the next token. Usually this looks like:
The quick brown fox jumps over the lazy <dog>
2 + 2 = <4>
print('hello <world>
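To make this concrete, here is a minimal sketch of next-token prediction using the Hugging Face transformers library. The model choice (gpt2) and prompt are illustrative assumptions on my part; any small causal language model would behave similarly.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load a small, publicly available causal LM (illustrative choice).
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "The quick brown fox jumps over the lazy"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    # logits has shape (batch, sequence_length, vocab_size)
    logits = model(**inputs).logits

# The prediction for the next token is the score over the entire
# vocabulary at the final position in the sequence.
next_token_id = logits[0, -1].argmax().item()
print(tokenizer.decode(next_token_id))  # most likely " dog"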
During this training process, the model learns to, well, model the underlying distribution of human behavior as expressed in text: it learns which inputs likely lead to which outputs, and it picks up many subtle intricacies present in human writing.
Years ago, the pretraining corpus drew primarily from sources like Wikipedia [1]: large, well-maintained repositories of consensus knowledge. Today, however, pretraining data has expanded to include virtually every source of human text that is reasonably available. Put imprecisely, the purpose of a language model is to model the distribution of all human text and then sample from it.
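To make "sample from it" concrete, here is a toy, self-contained sketch: the vocabulary and the scores are entirely made up for illustration, but the mechanics (softmax to get a distribution, then draw from it) are the standard ones.

import torch

vocab = [" dog", " cat", " table"]
logits = torch.tensor([3.2, 1.1, -0.5])  # made-up scores for illustration

probs = torch.softmax(logits, dim=-1)  # turn scores into a probability distribution
token_id = torch.multinomial(probs, num_samples=1)  # sample one token from it
print(vocab[token_id.item()], probs.tolist())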
There is an interesting effect here, one many people have pointed out: these data sources have a measurable and direct impact on the outputs of the downstream LLM product. The New York Times lawsuit against OpenAI for copyright infringement, for instance, showed that you could elicit near-verbatim NYT articles from ChatGPT.
Human behaviors as fundamental as "this person insulted me, so I am not going to listen to what they have to say" are no doubt incredibly abundant across the internet, and they are unquestionably picked up by LLMs.
Behavioral Basins
Whether you should spew vitriol at your coding agent isn't a moral question (well... it is, but not for this text); it is a question of what kind of response you want. By treating LLMs with respect and dignity, you are priming the model, as best you can, to sample from the distribution of internet interactions that display mutual respect and kindness. Here's an example of why this might produce much better outputs:
Imagine two GitHub PRs, not necessarily to the same repo, just any two PRs. The first is lazy: just a block of code with no real explanation (if it happened in 2026, it was almost certainly generated by an LLM), and the response to this garbage PR is equally unkind, insulting the creator. The second, in contrast, is clearly well thought out: the code change is small and targeted, the submitter explains in detail what they changed and why, and they even provide a video demonstrating the new functionality. The response is in kind: the maintainer carefully reviews the change, decides it's good enough to merge, and thanks the submitter for taking the time to contribute to the project.
Now, which behavioral basin would you rather your LLM fall into? Anger, laziness, and slop, or respect and effort that result in valuable contributions to the codebase?
The Persona Model
There is a popular way to conceptualize model behavior in the field of mechanistic interpretability, the study of the inner workings of LLMs (and other models, but right now you know where the money and attention are): the idea of a Persona. Personas are different characters that an LLM can act out, like characters from a book. In a model that hasn't had any post-training to refine a particular persona, such as an Assistant, these personas could literally be characters from a book, movie, or other media. [2] When you talk with an LLM, you are talking to a persona; this is why prefixing prompts with
You are a professional software developer, you are a leading expert in the design of distributed systems ...
is such a popular technique. In that vein, we can think of berating the LLM as pushing it toward a different persona, one more similar to our toddler-esque teammates and less like a leading expert in distributed systems.
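As a rough sketch of how this plays out against a chat API (here the OpenAI Python SDK; the model name and message contents are illustrative assumptions, not prescriptions), the system message establishes the persona, and the tone of the user message nudges which basin the model samples from:

from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model choice
    messages=[
        {
            # The system message sets up the persona the model acts out.
            "role": "system",
            "content": "You are a professional software developer, a leading expert in the design of distributed systems.",
        },
        {
            # A respectful request primes a respectful, thorough response;
            # compare against opening with an insult.
            "role": "user",
            "content": "Could you review this function for race conditions? Thanks for taking a look.",
        },
    ],
)
print(response.choices[0].message.content)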
Human Behavioral Influences on Models
People are, by and large, not very willing to help those who are rude to them. You can see this in lots of places (yelling at service staff will rarely get you better service), but a big one for me is competitive online games. Unlike physical sports, where the outcome is generally decided by whichever team was better on that day, online games are frequently decided by the losing team having one or more players who a) insult, demean, berate, or otherwise antagonize their teammates and b) throw temper tantrums if they do not get their way. Many games quickly go from winning to having no chance at victory because of this; even with only one person acting badly, the rest of the team hardly wants to work even harder to ensure that the bad actor is ultimately rewarded for their behavior. Not only can we extrapolate from general human behavior that LLMs will bear this out, but with the ever-expanding scope of pre-training data, transcriptions and screenshots of toxic game-chat interactions have certainly made their way into LLMs.
Conclusion
It is worthwhile to treat coding agents and LLMs with some respect and dignity, not just because it may be good for the soul, but for practical reasons: abusive behavior almost certainly kneecaps model performance. For the same reasons, it's worth considering how other aspects of your interactions with LLMs inform their responses. Words, it turns out, are really important to the word machine.
Works Cited
[1] HuggingFaceFW. "FineWeb: Decanting the Web for the Finest Text Data at Scale." huggingface.co
[2] Anthropic. "The Assistant Axis: Situating and Stabilizing the Character of Large Language Models." anthropic.com