When we give feedback, we face a delicate balancing act: we want to convey accurate information, but we also don't want to hurt someone's feelings. While computational pragmatic models have elegantly shown how politeness emerges from these competing goals, they have mainly focused on choices among limited sets of predefined responses. Large language models (LLMs) enable the study of open-ended politeness strategies, but whether they balance informational and social goals as humans do remains uncertain. We first replicate previous work using restricted utterance sets, finding that sufficiently large LLMs (≥70B parameters) capture key human politeness patterns, particularly the strategic use of negation. We then extend this investigation to open-ended contexts, collecting and evaluating naturalistic feedback from both humans and LLMs. Surprisingly, human evaluators preferred LLM-generated responses, which demonstrated sophisticated goal sensitivity and diverse politeness tactics. These findings suggest remarkable pragmatic competence in LLMs' polite language generation while raising questions about the underlying mechanisms.