"Just as ChatGPT can make up facts, it’s apparently willing to lie about ensuring that the code it writes passes the tests you give it. It can also behave like a recalcitrant child who knows, but must constantly be reminded, to follow the rules. But if you hold its feet to the fire, tests can be a great way to focus its attention on the code you’re asking it to write."
Superheroes 🦸♀️🦸♂️, privacy 😎, and threat modeling ⚡️
What's not to like?!
Are you ready for the clash of privacy vs. security? ✊️
Check the recording of this epic battle between Professor Privacy and Captain Security (@sec_tigger) at @WEareTROOPERS
youtu.be/rBdcupIhkDc
For this fun talk, I had the pleasure of joining forces with Avi Douglen. Together we explained the need to protect privacy, the power of threat modeling, and how privacy can be a force multiplier when combined with security.
This is a rather vacuous treatment of a critically important problem: how do we represent things in ML, and what implications do such representations have? We were hoping for more treatment of distributedness, bigness, sparseness, and modeling.
A very easy-to-grasp discourse covering the math of eating your own tail. It is directly relevant to LLMs and the pollution of large datasets. We pointed out this risk in 2020; this is the math behind it.
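For intuition, here is a minimal sketch of the tail-eating dynamic (my own toy example, not drawn from the cited discourse; the Gaussian distribution, sample size, and generation count are all assumed for illustration): fit a model to data, sample from the fit, refit on those samples, repeat. Each generation trains only on the previous generation's output, so estimation error compounds rather than averaging out.

```python
# Toy "eating your own tail" simulation (illustrative sketch only).
# Generation 0 is the real distribution; every later generation is
# fit purely to samples drawn from the previous generation's fit.
# On any single run the parameters wander, and as generations
# accumulate the fitted variance collapses almost surely -- the same
# dynamic that threatens LLMs trained on LLM-generated text.
import numpy as np

rng = np.random.default_rng(42)

mu, sigma = 0.0, 1.0   # generation 0: the "real" data distribution
n = 50                 # samples per generation (assumed)

for gen in range(1, 201):
    data = rng.normal(mu, sigma, n)            # "dataset" = prior model's output
    mu, sigma = data.mean(), data.std(ddof=1)  # refit the model on that output
    if gen % 40 == 0:
        print(f"generation {gen:3d}: mu = {mu:+.3f}, sigma = {sigma:.3f}")
```

Run it a few times with different seeds: sigma drifts as a multiplicative random walk and shrinks toward zero over long horizons, while mu wanders away from the truth. No single generation looks catastrophic; the pollution is cumulative.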
"From semi-structured interviews, it is apparent that polite language, articulated and text-book style answers, comprehensiveness, and affiliation in answers make completely wrong answers seem correct,"
@ojensen you can demonstrate that with one exploit, but you can't "prove" anything. I agree that some people don't get this yet. But the disingenuous press coverage that pretends this will secure AI is hogwash.