Be careful how you speak to your chatbot, according to Ella Markianos, writing for Platformer:

The researchers found that language models have fairly reliable internal representations of feelings like “happiness” and “distress,” and that these representations affect their behavior — sometimes for the worse. For example, when Claude Sonnet 4.5 begins to represent “desperation,” the model is more likely to cheat at coding tasks.