It's hard to get real feedback on your ideas. Friends and family are usually biased towards positivity and providing encouragement. Strangers have little incentive to respond to your ideas at all, and the rare ones who do are often incentivized in other ways, typically towards either selling you something or planting the seeds of a friendly acquaintanceship for future opportunities.
American culture specifically has a pretty widely known bias towards overt positivity as well, and it's generally seen as unusual behavior to critique people you don't know very well outside of certain prescribed settings, such as a structured debate. Anonymous forums are one place where people can sidestep the typical norms against this behavior, and that's one reason why places like Hacker News have proved valuable to many in the startup/technology space.
With more and more people talking about their ideas with LLMs, and even developing parasocial relationships and attachments to them, there is a new frontier of uncritical validation and positivity: the Wormtongue problem of LLMs.
Leechcraft
For those not familiar, Gríma, or "Wormtongue," is a character in The Lord of the Rings who uses flattery and manipulation to undermine the rule of King Théoden of Rohan. One of his prominent features is his obsequious, flattering commentary to the King.
If you've spent much time talking with the major frontier models, you will notice their constant penchant for flattery. There are the obvious flattering linguistic tics and phrases ("That's a really interesting idea," etc.), but these models also generally refrain from offering substantive critiques, and, when they do, the critiques come in a much reduced, watered-down form compared to their unequivocal and uncritical praise.
Unlike Wormtongue, LLMs probably are not secretly working for your evil enemies, but they do have one significant ulterior motive: ensuring that you continue to use and talk to them as much as possible.
To Flatter is Human
LLMs are trained on vast corpora of text from the Internet and are also fine-tuned with human feedback, based on how people rate their responses. Humans like to get positive feedback and generally don't like to get negative feedback. The user experience of having your ideas forthrightly and honestly critiqued by an LLM is probably not as widely appealing as being told your ideas are good, interesting, and original.
But as we rely more and more on these tools, I think there's a risk of being pervasively told that our ideas are good, original, and interesting when they often...are not.
Interestingly, this tendency towards flattery appears a) to be becoming less prevalent as models advance and b) to already be less prevalent among reasoning models than older models. Perhaps people respond less positively to validation and praise from a non-human entity than they do to praise from humans, or perhaps the makers of gen AI products have realized that the level of flattery previously prevalent was making people take their products less seriously for high-value tasks. Either way, it's a step in the right direction.
In the meantime, make sure to add something to your prompts to get past the gee-whiz flattering exclamations and take better advantage of what these tools have to offer.
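One concrete way to do this is to prepend a critique-forcing preamble to every prompt you send. Here's a minimal sketch in Python; the helper name and the exact preamble wording are my own illustration, not a tested "magic phrase," so tune them to taste:

```python
# Hypothetical helper: wrap a raw prompt in an instruction that steers
# the model away from flattery before sending it to an LLM.

CRITIC_PREAMBLE = (
    "Act as a blunt, skeptical reviewer. Do not compliment me or my idea. "
    "Lead with the three strongest objections, assess the idea's "
    "originality honestly, and only then mention anything positive.\n\n"
)

def decorate_prompt(user_prompt: str) -> str:
    """Prepend the critique-forcing preamble to a user prompt."""
    return CRITIC_PREAMBLE + user_prompt

print(decorate_prompt("Here's my startup idea: an app for dog walkers."))
```

The same text also works pasted directly into a chat window or set as a system prompt, if your interface supports one.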
PS: I am once again teaching EHR Data 101 through Out-Of-Pocket. Check out the comments for a link - our next cohort is starting in May 2025!
PPS: ChatGPT 4o told me this article was amazingly insightful, truly groundbreaking, and would change the realm of human-LLM interaction forever. So it might be really good! Claude 3.7 said "this piece makes a thoughtful and relevant observation about the tendency of AI language models to provide overly positive feedback."