They've been getting plenty of poor results all along. The image example just quickly and graphically demonstrated what a blunt tool prompt engineering is, but it's still been an industry-wide practice for the last few years.
LLMs won't just repeat bias in the data and the prompts; they'll refine it and magnify it before spitting it out in its crudest form.
Commercial viability is an easy excuse for ham-fisted efforts to reduce bias, regardless of whether it's done with heavily filtered training data or prompt engineering.
There's no excuse for pedophile apologetics, though; that's just cause for immediate termination, which is why I can't imagine it getting implemented. It's not as if Google doesn't follow modern practices of pair programming and quality control through code reviews before going live.