#ML systems can leak confidential data from their training sets even with a very silly attack. This is a direct and clear #MLsec issue that applies well beyond the #LLM case.
I am giving two back-to-back #swsec breakfast seminars in mid-April. If you are in Sweden, Norway, or Finland, please consider coming. Pass it on to those who may be interested.
Dear press people, you can't fix generative AI by blocking prompts. Really. If you need to talk about why that is, call me up. This credulous coverage is just silly.
When you choose to use an #ML #LLM foundation model, you accept the risk management decisions made by the vendor without your input. Wonder what those decisions are? Read this #MLsec paper from #IEEE Computer.