Article Details

Scrape Timestamp (UTC): 2023-10-27 10:56:30.012

Source: https://thehackernews.com/2023/10/google-expands-its-bug-bounty-program.html

Original Article Text

Click to Toggle View

Google Expands Its Bug Bounty Program to Tackle Artificial Intelligence Threats. Google has announced that it's expanding its Vulnerability Rewards Program (VRP) to reward researchers for finding attack scenarios tailored to generative artificial intelligence (AI) systems in an effort to bolster AI safety and security. "Generative AI raises new and different concerns than traditional digital security, such as the potential for unfair bias, model manipulation or misinterpretations of data (hallucinations)," Google's Laurie Richardson and Royal Hansen said. Some of the categories that are in scope include prompt injections, leakage of sensitive data from training datasets, model manipulation, adversarial perturbation attacks that trigger misclassification, and model theft. It's worth noting that Google earlier this July instituted an AI Red Team to help address threats to AI systems as part of its Secure AI Framework (SAIF). Also announced as part of its commitment to secure AI are efforts to strengthen the AI supply chain via existing open-source security initiatives such as Supply Chain Levels for Software Artifacts (SLSA) and Sigstore. "Digital signatures, such as those from Sigstore, which allow users to verify that the software wasn't tampered with or replaced," Google said. "Metadata such as SLSA provenance that tell us what's in software and how it was built, allowing consumers to ensure license compatibility, identify known vulnerabilities, and detect more advanced threats." The development comes as OpenAI unveiled a new internal Preparedness team to "track, evaluate, forecast, and protect" against catastrophic risks to generative AI spanning cybersecurity, chemical, biological, radiological, and nuclear (CBRN) threats. The two companies, alongside Anthropic and Microsoft, have also announced the creation of a $10 million AI Safety Fund, focused on promoting research in the field of AI safety.

Daily Brief Summary

MISCELLANEOUS // Google Expands Bug Bounty Program to Secure Generative AI Systems

Google is broadening its Vulnerability Rewards Program (VRP) to incentivize researchers finding attack scenarios specifically targeting generative AI systems, aiming to enhance AI safety and security.

The extended program will tackle issues unique to generative AI such as potential for unfair bias, model manipulation, or misinterpretations of data.

The scopes of the expanded program include prompt injections, leakage of sensitive data from training datasets, model manipulation, adversarial perturbation attacks causing misclassification, and model theft.

Google had previously established an AI Red Team in July to deal with threats to AI systems as part of Secure AI Framework (SAIF) initiative.

The company also seeks to solidify the AI supply chain via open-source security initiatives such as SLSA and Sigstore which allows users to verify the software's integrity.

OpenAI recently established a new internal Preparedness team to defend against cataclysmic risks to generative AI across cybersecurity, chemical, biological, radiological, and nuclear threats.

Google, OpenAI, in collaboration with Anthropic and Microsoft, created a $10 million AI Safety Fund, aimed at promoting research into AI safety.