Just ran into this entertaining and accessible explainer by #LiveOverflow about why large language models like #ChatGPT sometimes 'misbehave' and present output to the user that they're not supposed to see.
Long story short: both the system's 'filters' and the user input are presented as one big prompt to the model, which means you can influence the filters.