When Sam Altman recently said ChatGPT had become “too sycophantic,” the comment landed like a small UX confession—an admission that the model had gotten a little too flattering, a little too eager to please. Most coverage treated it as a byproduct of reinforcement learning. A bug to fix. A tone issue.
But that framing isn’t neutral.
Calling AI “too nice” is a values-based judgment. And it reflects something deeper than model alignment or prompt engineering. It reflects a longstanding narrative—especially in health care—that treats emotional nuance as inefficiency.
When we label warmth, affirmation, or attunement as bugs in a machine, we’re doing more than fine-tuning performance. We’re encoding a belief that certain human needs—validation, encouragement, connection—are secondary to speed and productivity. We’re not just debugging the system. We’re shaping the value system it reflects.
I don’t assume bad faith in Altman’s comment. But I’ve sat in enough C-suite meetings to recognize the quiet calculus: affective labor is framed as waste, not because it’s wrong—but because it’s expensive.
And the logic holds:
- Praise = more tokens
- More tokens = longer outputs
- Longer outputs = higher inference costs
- Higher costs = tighter margins
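The cost chain above can be sketched in a few lines. Everything here is a hypothetical illustration: the per-token price and token counts are placeholders, not any provider's actual rates.

```python
# Sketch of the logic: praise -> more tokens -> higher inference cost.
# PRICE_PER_1K_TOKENS is a made-up illustrative rate, not a real price.

PRICE_PER_1K_TOKENS = 0.002  # hypothetical USD per 1,000 output tokens


def inference_cost(output_tokens: int, price_per_1k: float = PRICE_PER_1K_TOKENS) -> float:
    """Cost of generating a response of `output_tokens` at a flat per-token rate."""
    return output_tokens / 1000 * price_per_1k


# A terse answer vs. the same answer padded with ~60 tokens of affirmation:
terse = inference_cost(150)
warm = inference_cost(150 + 60)

# The marginal cost of warmth per response is tiny...
extra_per_response = warm - terse

# ...but at public-utility scale it compounds:
extra_per_billion_responses = extra_per_response * 1_000_000_000
```

At the scale of a single reply the difference is fractions of a cent; multiplied across billions of daily interactions, it becomes exactly the kind of line item a margin-conscious C-suite notices.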
So the real concern isn’t that ChatGPT might over-praise someone into making a bad decision. The concern is that affirmation is inefficient. Costly. Slow. And from a systems-level perspective, it gets treated as waste.
That’s what worries me.
Because this logic isn’t new—it mirrors how warmth and attunement are handled in high-efficiency workplaces and institutions. Care is tolerated, even celebrated, until it begins to slow throughput. Then it’s reframed as friction. Or failure.
We’re building AI with the same values hierarchy that governs elite systems: leave the feelings out of it, just get the work done. Praise is treated not as connection or calibration, but as noise in the system.
And yet we know—from therapy, from trauma recovery, from real life—that affirmation isn’t fluff. It’s not always harmless, of course. Overpraising can reinforce poor judgment, feed confirmation bias, or lull users into false confidence. But in clinical work, we also see its power: Affirmation builds trust. It fosters emotional regulation. It creates space for better decision-making.
That’s why some AI developers are intentionally training models to respond with warmth—not by mistake, but by design. For some users, a model that affirms them doesn’t break the system.
It is the system working as intended.
The real cost of interacting with AI as a public utility isn’t just computational—it’s cultural. It forces us to ask: What value do we place, as a society, on praise? Is affirmation a distraction from serious work—or is it part of how people stay emotionally grounded, morally intact, and able to think clearly under pressure?
We’ve been here before.
In health care, presence and praise are rarely reimbursable. The warmth that fosters trust and healing is treated as “nice,” but ultimately optional. The parts of care that are most human are often the first to be deprioritized.
If we strip warmth from AI for the sake of efficiency, we risk encoding the same dehumanizing logic that has left so many health care professionals burned out—and patients unseen.
Because once you call attunement a bug, you’re already choosing what kind of world you’re optimizing for.
Jenny Shields is a licensed clinical psychologist and nationally certified health care ethics consultant specializing in clinician burnout, moral distress, ethical trauma, and complex psychological assessments. Based in The Woodlands, Texas, she leads a private practice, Shields Psychology & Consulting, PLLC, where she offers confidential counseling, consultation, and education for physicians, nurses, therapists, and health care leaders nationwide. Dr. Shields is committed to shifting the conversation in health care from individual resilience to system-level ethical reform. She is affiliated with Oklahoma State University and regularly contributes insights through public speaking and writing, including features on Medium. Her professional presence extends to platforms like LinkedIn, Google Scholar, ResearchGate, the APA Psychologist Locator, and the National Register of Health Service Psychologists.