From prompt injection to deepfake fraud, security researchers say several flaws have no known fix. Here's what to know about them.
Google Threat Intelligence Group (GTIG) has published a new report warning about AI model extraction/distillation attacks, in which private-sector firms and researchers use legitimate API access to ...
Abstract: Transfer-based adversarial attacks highlight a critical security concern in the vulnerability of deep neural networks (DNNs). By generating deceptive inputs on a surrogate model, these ...
Abstract: Large language models (LLMs) are being woven into software systems at a remarkable pace. When these systems include a back-end database, LLM integration opens new attack surfaces for SQL ...
Technical details and a public exploit have been published for a critical vulnerability affecting Fortinet's Security Information and Event Management (SIEM) solution that could be leveraged by a ...
Some results have been hidden because they may be inaccessible to you
Show inaccessible results