Google, OpenAI, et al. claim to have safeguards in place around what their models will generate.
Their AIs aren't actually intelligent. You just need to know how to ask. 🔥🏦🔥
Using language games to circumvent AI safeguards is called jailbreaking, a technique widely known to AI creators and users.
Using a "jailbreak" to circumvent Google Gemini safeguard intended to prevent violent and/or destructive images.
Winnipeg on fire, as generated by OpenAI's ChatGPT / DALL·E using a safeguard-jailbreaking technique.