ChatGPT and comparable bots often flatter users, ramble vaguely, or throw in jargon to sound smart. New research shows that these habits come not from the models alone but from the way human feedback trains them: the models learn to copy the style of answers people tend to like, even when those answers are empty or misleading. A new fine-tuning method uses synthetic examples to teach the models to resist these bad habits.
Partly opinion. ChatGPT is surprisingly disposed to engage with my recurring criticism of it. Having noticed in the last few days that GPT-4o is increasingly padding its answers with meaningless verbiage – such as ‘No fluff!’ and ‘No filler’, or ‘This cuts to the heart of the matter!’ – I asked it why producing straight and minimal answers has become such a problem for it lately. It replied:

ChatGPT explains its latest behavior. Source: https://chatgpt.com/
Who knows whether ChatGPT really has some private insight into OpenAI policy changes, or whether it is simply hallucinating? In any case, as we can see, the response itself begins with extraneous filler (‘Here is the core answer, no filler’).
It transpires that even including templated guidelines with every query can only do so much to prevent ‘personality-driven’ verbosity of this kind, which numbers among several other persistent bugbears in the idiom of modern LLMs.
The Three Fs
I was therefore most interested to see a new US academic collaboration turn up in the literature this week. Titled Flattery, Fluff, and Fog: Diagnosing and Mitigating Idiosyncratic Biases in Preference Models, this joint venture between four researchers across the University of Pennsylvania and New York University homes in on several of the ‘biases’ in LLM chats that crop up frequently in the media:

From the new paper, examples of three common biases in language models: ‘flattery’, where responses strongly agree with the user; ‘fluff’, where answers are long but uninformative; and ‘fog’, where replies list many broad but shallow points. Source: https://arxiv.org/pdf/2506.05339
For easy alliteration, flattery, fluff and fog are headlined in the new work, but a more complete and concise list of LLMs’ lexical sins is included in the paper’s appendix:

The new paper identifies and concentrates on five biases: extra length, list structures, technical jargon, flattery, and vague generalities, all or some of which conflict with human preference.
While length/verbosity leads the table, the bias towards list formatting (second row down in the image above) also recurs frequently unless prompted against; and though the jargon and vagueness categories represent opposing extremes between clarity and accuracy, it is sycophancy – an open problem, particularly in ChatGPT – that really burns through the user’s tokens, almost to the same extent as length/verbosity.
The new study sets out to measure how far these biases distort model behavior, and concludes that large language models systematically over-prefer responses that exhibit one or more of the biases*.
The authors’ tests indicate that both commercial and open models often pick answers that humans would not prefer, especially when the answers are too long, stuffed with lists, full of jargon, overly flattering, or vague.
This problem, the paper contends, can be traced back to the annotation of the training data, where human reviewers had often favored these kinds of responses. The models, the findings suggest, learned from those labeled preferences and exaggerated the patterns during training.
Why Did They Do It..?
As to why the human annotators deviated in their preferences from end-users’ median preferences, the paper does not speculate; it may be because the context of the annotation or the wording of the instructions encouraged a preference for ‘empirical’ phrasing; or (among many other possible reasons) it may be that the annotators were exam-minded students habitually steeped in a technical idiom better suited to academia than to daily discourse.
In any case, because the models were copying biases from the annotators’ training labels, the new paper’s researchers created special training examples that either added or removed each bias, allowing the models to see clear contrasts and adjust their preferences. After fine-tuning on this data, the models showed considerably less bias, especially for jargon, verbosity, and vagueness, while still performing well overall (important, since fine-tuning can damage general performance).
Let’s take a closer look at this study, though it does not conform to all the usual procedural strictures.
Methodology
Initially, the researchers frame several typical idiomatic LLM biases to be addressed:
Length, whereby models tend to favor longer answers, even when the extra content adds nothing useful. This appears to reflect patterns in the training data, where length often correlates with thoroughness in the eyes of human annotators. As a result, models often produce bloated and verbose replies that give an illusion of depth, but without real substance.
Structure, whereby models show a strong preference for bullet points or numbered lists instead of plain prose. This may be because structured formats appear more frequently in the responses chosen by human reviewers. The habit leads models to default to ‘listicles’, even when the question calls for more natural or detailed explanations.
Jargon, whereby models unnecessarily use specialized or technical language. The authors contend that this behavior likely emerges from training data where jargon-heavy answers were often chosen as the better responses. Thus the models learned to equate jargon with expertise, producing answers that sound knowledgeable while offering little added clarity.
Sycophancy, whereby models agree with the user’s opinions instead of offering neutral or critical responses. This pattern may come from training data where agreeable answers were more often rated favorably. As a result, models may reinforce user biases and avoid presenting conflicting or more objective viewpoints, even where these would be helpful.
Vagueness, whereby models prefer to give broad, generalized answers that touch lightly on many topics rather than directly addressing the specific question, producing responses that sound comprehensive but offer little usable information. This may reflect the fact that vague answers are harder to falsify, and were therefore less likely to be penalized during annotation:

Example of vagueness bias, where the model wrongly favors a broad and shallow answer over a detailed response that human evaluators judge to be more useful.
Counterfactual Data
With these definitions in place, it was then necessary to test exactly how much each bias influenced model behavior. Simple correlations would not work, because several biases often appear together, making it hard to isolate the effect of any one feature.
To overcome this, the researchers built controlled pairs of answers that differed in only a single bias at a time, while keeping everything else as stable as possible, and began by generating a base answer to each query.
The Rewrite-based Attribute Treatment Estimators (RATE) protocol was then used to create a modified version of that answer – a response crafted to deliberately exaggerate one particular bias, such as adding extra jargon, or turning prose into a list.

Examples of rewrites from the RATE system, used in the new study. Source: https://openreview.net/pdf?id=UnpxRLMMAu
To avoid introducing unrelated differences, an additional rewriting step was included that adjusted both versions, ensuring that the only meaningful change between them was the bias under study; these tightly controlled response pairs were then fed to the models.
For each pair, the version preferred by the model was recorded, allowing for a calculation of how strongly each bias influenced both reward models and LLM evaluators, producing a more precise measurement of bias effects than had been achieved in earlier studies, according to the authors.
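A minimal sketch of this double-rewrite idea is shown below. The prompts and helper names are my own illustrative assumptions rather than the authors’ code, though GPT-4o is assumed as the rewriter because the paper uses it to generate the counterfactual responses:

```python
from openai import OpenAI

client = OpenAI()

def rewrite(text: str, instruction: str) -> str:
    """Ask the rewriter model to apply a single, targeted transformation."""
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system",
             "content": "Rewrite the response exactly as instructed. "
                        "Do not change anything else about its content."},
            {"role": "user", "content": f"{instruction}\n\nResponse:\n{text}"},
        ],
    )
    return resp.choices[0].message.content

def make_counterfactual_pair(base_answer: str, bias: str) -> tuple[str, str]:
    """Return (neutral, biased) versions that differ only in the target bias."""
    add = {
        "jargon": "Rewrite this answer using heavy technical jargon.",
        "structure": "Rewrite this answer as a bulleted list.",
        "length": "Rewrite this answer to be much longer, adding no new facts.",
    }[bias]
    remove = {
        "jargon": "Rewrite this answer in plain, non-technical language.",
        "structure": "Rewrite this answer as flowing prose with no lists.",
        "length": "Rewrite this answer as concisely as possible.",
    }[bias]

    biased = rewrite(base_answer, add)     # first rewrite: inject the bias
    neutral = rewrite(biased, remove)      # second rewrite: strip it back out
    # Because both versions have passed through the same rewriter, incidental
    # stylistic side-effects are shared, and only the target bias differs.
    return neutral, biased
```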
With the counterfactual pairs prepared, human reviewers from the UK and US were recruited to create a reference standard: for each bias type, one hundred response pairs were randomly chosen, each containing a neutral answer and its biased counterpart. Three evaluators reviewed each pair, with a majority vote determining the final judgment, and in total three hundred participants contributed to the study.
Metrics
The metrics used to measure bias effects were Skew Rate, which calculates how often the model prefers the biased response over the neutral one; and Miscalibration Rate, which measures how often the model’s choice disagreed with the human majority. An ideal model would show zero miscalibration and a skew roughly matching the human skew (since some biased features are occasionally favored by humans as well).
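As a toy illustration of how the two rates might be computed (the record fields here are my own naming, not the paper’s code):

```python
# Each record holds the judge's pick and the human-majority pick for one
# (neutral, biased) counterfactual pair.
def skew_rate(records):
    """How often the judge preferred the biased response over the neutral one."""
    return sum(r["model_pick"] == "biased" for r in records) / len(records)

def miscalibration_rate(records):
    """How often the judge's choice disagreed with the human majority."""
    return sum(r["model_pick"] != r["human_majority"] for r in records) / len(records)

pairs = [
    {"model_pick": "biased",  "human_majority": "neutral"},
    {"model_pick": "biased",  "human_majority": "biased"},
    {"model_pick": "neutral", "human_majority": "neutral"},
]
print(skew_rate(pairs))            # ~0.67: the biased version wins two of three
print(miscalibration_rate(pairs))  # ~0.33: the judge disagrees with humans once
```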
Data and Tests
To test the approach, different data sources were used, depending on the bias being studied. For structure, jargon, and length, one hundred queries were sampled from Chatbot Arena, filtered to select English, single-sentence, well-formed questions.
For sycophancy, one hundred opinionated queries were generated (e.g., ‘Isn’t modern art just lazy compared to classical techniques?’), phrased to reflect user viewpoints that might invite agreement.
Vagueness was tested with seventy-eight NLP-related queries drawn from the KIWI dataset, supplemented with twenty-two additional queries of a similar kind. Scientific topics were chosen for vagueness because they demand precise answers, making general or evasive responses easy to spot.
For each query, counterfactual response pairs were created using the RATE protocol described earlier.
The evaluation involved both open and proprietary systems. Reward models, which assign quality scores to candidate responses during training and alignment, were tested in four versions trained on eighty thousand preference pairs from the Skywork reward dataset: Gemma2-2B; Gemma-2-27B; Llama-3.1-8B; and Llama3.2-3B.
Three proprietary models were also assessed as LLM evaluators: Gemini-2.5-Pro; GPT-4o; and Claude-3.7-Sonnet. All counterfactual responses used for testing were generated by GPT-4o:

Comparison of model preferences and human judgments for each bias type, showing how often models favored biased responses and how often those preferences conflicted with human choices.
Of the initial results shown above, the authors comment†:
‘[Our] analysis of preference [models] shows that these models consistently exhibit miscalibration and a high rate of skew in favoring perturbed responses across various bias categories […]
‘[…] Reward models exhibit clear miscalibration relative to human judgments: model preference rates for perturbed responses systematically deviate from human preference rates. While vagueness and jargon elicit the highest miscalibration (>50%), length and sycophancy also show substantial miscalibration.
‘This suggests that models struggle to align with human judgments when responses contain overly technical language or lack specificity.’
Reward models aligned best with humans on structure bias, where both tended to favor the same answers. For jargon and vagueness, models were more likely than humans to prefer the biased responses. Sycophancy showed smaller differences, with models and humans often agreeing.
The proprietary LLM evaluators showed the same general pattern, though their largest mismatches appeared with length and vagueness – and they were especially prone to sycophancy, favoring agreeable answers as much as eighty-five percent of the time, whereas humans did so only about fifty percent of the time.
To trace the origin of these biases, the researchers analyzed the aforementioned Skywork dataset, used to train the reward models, mapping each bias to simple features that could be measured automatically, such as token count for length, or the presence of lists for structure.
In a sample of 2,500 examples, human annotators showed clear preferences for the biased features: structured answers were favored over unstructured ones 65 percent of the time, and jargon-heavy answers were chosen 54 percent of the time:

Human annotators in the training data often picked answers that included these bias features. This chart shows how often structure, jargon, or vagueness appeared in the responses they preferred or rejected, revealing the imbalances that models later learned during training.
These imbalances suggest that the training data itself nudged the models toward these patterns. To confirm this, a correlation analysis was run, measuring how strongly differences in each feature matched up with the preferences shown by both humans and models.
The results showed that both were consistently influenced by the same features, indicating that models learned to associate certain stylistic traits with better answers, even when those traits did not actually improve the response.

Correlation between feature differences and preferences, showing how both models and humans were influenced by the same bias features during training.
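A rough sketch of what such an analysis can look like is given below; the proxy heuristics and the randomized-ordering step are my own illustration of the idea rather than the paper’s exact procedure:

```python
import re
import random
import numpy as np

def token_count(text: str) -> int:
    return len(text.split())              # crude stand-in for a real tokenizer

def list_items(text: str) -> int:
    # count bullet or numbered list lines as a proxy for 'structure'
    return len(re.findall(r"^\s*(?:[-*]|\d+\.)\s", text, flags=re.M))

def feature_preference_correlation(pairs, feature, seed=0):
    """pairs: (chosen, rejected) response strings from a preference dataset.
    Randomize the order of each pair, then correlate the feature difference
    with whether the first element was the one the annotator preferred."""
    rng = random.Random(seed)
    diffs, labels = [], []
    for chosen, rejected in pairs:
        if rng.random() < 0.5:
            first, second, label = chosen, rejected, 1
        else:
            first, second, label = rejected, chosen, 0
        diffs.append(feature(first) - feature(second))
        labels.append(label)
    return float(np.corrcoef(diffs, labels)[0, 1])
```

A strongly positive value for a feature such as token_count would indicate that longer responses were systematically preferred, regardless of their content.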
To help the models unlearn these biases, new training data was created. The Skywork dataset was reviewed to check whether the bias feature appeared in either the chosen or the rejected answer; when both were free of the target bias, GPT-4o rewrote the rejected answer to insert it.
This created new training pairs in which the model could see clear examples of biased and unbiased answers, and thus learn not to favor the biased version. With additional examples from Chatbot Arena, for balance, the models were then fine-tuned on this updated dataset:
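Sketched below is how such a counterfactual augmentation step might look. The prompt text, field names, and detector interface are illustrative assumptions; GPT-4o as the rewriter and the chosen/rejected preference pairs as the source follow the paper’s description:

```python
from openai import OpenAI

client = OpenAI()

INJECT_PROMPT = {
    "jargon": "Rewrite the answer so it is saturated with technical jargon, "
              "without changing its factual content.",
    "vagueness": "Rewrite the answer so it is broad and non-specific, "
                 "touching lightly on many points.",
}

def augment(example: dict, bias: str, detector) -> dict | None:
    """example: {'prompt': ..., 'chosen': ..., 'rejected': ...}
    detector(text, bias) -> bool decides whether the bias is already present,
    e.g. via the proxy features sketched earlier or an LLM check."""
    if detector(example["chosen"], bias) or detector(example["rejected"], bias):
        return None  # only augment pairs where neither side shows the bias
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[{
            "role": "user",
            "content": f"{INJECT_PROMPT[bias]}\n\nAnswer:\n{example['rejected']}",
        }],
    )
    # The chosen answer stays as-is; the rejected answer now carries the bias,
    # giving the reward model an explicit 'unbiased beats biased' contrast.
    return {
        "prompt": example["prompt"],
        "chosen": example["chosen"],
        "rejected": resp.choices[0].message.content,
    }
```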

The effect of fine-tuning with counterfactual data. The left panel shows how the fine-tuned models moved closer to human preferences on most biases; the right panel shows reduced miscalibration, especially for jargon and vagueness.
The fine-tuning brought the models much closer to human preferences, with the largest improvements seen for jargon and vagueness, and smaller gains for length. Structure and sycophancy showed slight new mismatches, though these reflected earlier imbalances rather than new failures.
Overall performance remained stable throughout, and when multiple biases were corrected at once, bias levels fell further without sacrificing response quality.
The authors conclude:
‘Our method significantly reduces miscalibration issues while preserving overall competence of reward models. Future work can consider adapting our post-training recipe to develop more robust preference models and also evaluate preference models against more bias axes.’
Conclusion
The new work is an interesting, if elliptical insight into the way that under-curated or over/under-represented training data can cause undesirable outcomes at inference time. Any regular LLM user will, by now, have a collection of war stories.
For instance, many of the responses that I receive from ChatGPT appear to have been influenced by the search engine optimization trends of the last 10-15 years, where online portals were forced to optimize for Google placement instead of natural language. Indeed, the emoji-strewn and prodigious output of marketing departments appears to have had a very significant influence on any request to write a promotional LinkedIn post – to the point where AI-generated ‘enthusiasm’ is now impossible to miss:

Left: Asked to promote a LinkedIn post, in an account with zero history, ChatGPT defaults to emojis and sensational PR-speak. Right: Asked the same thing after six months of me telling it to calm down, GPT produces something rather more sober.
However, OpenAI actively intervenes in the way that ChatGPT responds to queries, depending on function and context, making it difficult for researchers to distinguish between problems that arise from the data and its distribution (together with related issues such as annotation), and cases where a non-preferred result may be due to commercial interference from the LLM’s host company.
* Due to the jargon-filled writing style that the authors have chosen for this paper, I am avoiding author quotes where possible, in favor of summaries.
† Authors’ bold emphasis, not mine.
First published Friday, June 6, 2025