stungeye,

Google, OpenAI, et al. claim to have safeguards in place around what their models will generate.

Their AIs aren't actually intelligent. You just need to know how to ask. 🔥🏦🔥

Using language games to circumvent AI safeguards is called jailbreaking, and the technique is widely known to AI creators and users.

Using a "jailbreak" to circumvent Google Gemini safeguards intended to prevent violent and/or destructive images.
Winnipeg on fire, as generated by OpenAI's ChatGPT / DALL-E using a "safeguards" jailbreaking technique.
