Article Details

Scrape Timestamp (UTC): 2024-01-08 07:55:17.377

Source: https://thehackernews.com/2024/01/nist-warns-of-security-and-privacy.html

Original Article Text

Click to Toggle View

NIST Warns of Security and Privacy Risks from Rapid AI System Deployment. The U.S. National Institute of Standards and Technology (NIST) is calling attention to the privacy and security challenges that arise as a result of increased deployment of artificial intelligence (AI) systems in recent years. "These security and privacy challenges include the potential for adversarial manipulation of training data, adversarial exploitation of model vulnerabilities to adversely affect the performance of the AI system, and even malicious manipulations, modifications or mere interaction with models to exfiltrate sensitive information about people represented in the data, about the model itself, or proprietary enterprise data," NIST said. As AI systems become integrated into online services at a rapid pace, in part driven by the emergence of generative AI systems like OpenAI ChatGPT and Google Bard, models powering these technologies face a number of threats at various stages of the machine learning operations. These include corrupted training data, security flaws in the software components, data model poisoning, supply chain weaknesses, and privacy breaches arising as a result of prompt injection attacks. "For the most part, software developers need more people to use their product so it can get better with exposure," NIST computer scientist Apostol Vassilev said. "But there is no guarantee the exposure will be good. A chatbot can spew out bad or toxic information when prompted with carefully designed language." The attacks, which can have significant impacts on availability, integrity, and privacy, are broadly classified as follows - Such attacks, NIST said, can be carried out by threat actors with full knowledge (white-box), minimal knowledge (black-box), or have a partial understanding of some of the aspects of the AI system (gray-box). The agency further noted the lack of robust mitigation measures to counter these risks, urging the broader tech community to "come up with better defenses." The development arrives more than a month after the U.K., the U.S., and international partners from 16 other countries released guidelines for the development of secure artificial intelligence (AI) systems. "Despite the significant progress AI and machine learning have made, these technologies are vulnerable to attacks that can cause spectacular failures with dire consequences," Vassilev said. "There are theoretical problems with securing AI algorithms that simply haven't been solved yet. If anyone says differently, they are selling snake oil." The Ultimate Enterprise Browser Checklist Download a Concrete and Actionable Checklist for Finding a Browser Security Platform. Master Cloud Security - Get FREE eBook Comprehensive eBook covering cloud security across infrastructure, containers, and runtime environments for security professionals

Daily Brief Summary

MISCELLANEOUS // NIST Highlights Security Risks in Emerging AI Systems

The U.S. National Institute of Standards and Technology (NIST) is warning of increased privacy and security risks stemming from rapid AI system deployment.

Risks include adversarial manipulation of AI training data, exploitation of model vulnerabilities, and unauthorized extraction of sensitive information through AI system interactions.

Rapid integration of AI into online services, particularly generative AI like OpenAI's ChatGPT and Google's Bard, exacerbates the threat landscape at various stages of machine learning operations.

Vulnerabilities identified by NIST encompass corrupted training data, software security flaws, model poisoning, supply chain issues, and privacy breaches through prompt injection attacks.

NIST computer scientist Apostol Vassilev highlights the lack of guaranteed benign exposure and robust defenses against AI system manipulations.

NIST categorizes potential attacks based on the attacker's knowledge level (white-box, black-box, or gray-box) and calls for the tech community to strengthen AI defenses.

These warnings follow international collaborative efforts by the U.K., U.S., and other partners to create guidelines for the secure development of AI systems, addressing unresolved issues in AI algorithm security.