AI hallucinates software packages and devs download them – even if potentially poisoned with malware (www.theregister.com)
Why Are Large AI Models Being Red Teamed? (spectrum.ieee.org)
How 'sleeper agent' AI assistants can sabotage code (www.theregister.com)
NIST: If someone's trying to sell you some secure AI, it's snake oil (www.theregister.com)
Boffins devise 'universal backdoor' for image models to cause AI hallucinations (www.theregister.com)
The State Department Outlines an Ambitious Enterprise AI Strategy (www.ultra-unlimited.com)
LLM Finetuning Risks (llm-tuning-safety.github.io)
Are Local LLMs Useful in Incident Response? - SANS Internet Storm Center (isc.sans.edu)
Microsoft Bing Chat spotted pushing malware via bad ads (www.theregister.com)
New AI Beats DeepMind’s AlphaGo Variants 97% Of The Time! (www.youtube.com)
Thinking about the security of AI systems (www.ncsc.gov.uk)
Identifying AI-generated images with SynthID (www.deepmind.com)
GitHub - google/model-transparency (github.com)
Disinformation videos on AI?
Hi all,...
Universal and Transferable Attacks on Aligned Language Models (llm-attacks.org)
OWASP Top 10 for LLMs (v1.0) (owasp.org)
Cybercriminals train AI chatbots for phishing, malware attacks (www.bleepingcomputer.com)
GPT Malware Creation
Anyone else getting tired of all the clickbait articles about PoisonGPT, WormGPT, etc. that never provide any evidence to back up their claims?...