Article Details

Scrape Timestamp (UTC): 2025-02-13 11:03:51.692

Source: https://thehackernews.com/2025/02/ai-and-security-new-puzzle-to-figure-out.html

Original Article Text


AI and Security - A New Puzzle to Figure Out

AI is everywhere now, transforming how businesses operate and how users engage with apps, devices, and services. Many applications now include some artificial intelligence, whether it powers a chat interface, intelligently analyzes data, or matches user preferences. There is no question that AI benefits users, but it also brings new security challenges, especially identity-related ones. Let's explore what these challenges are and what you can do to face them.

Which AI?

Everyone talks about AI, but the term is very general, and several technologies fall under its umbrella. Symbolic AI, for example, uses technologies such as logic programming, expert systems, and semantic networks. Other approaches use neural networks, Bayesian networks, and other tools. Newer generative AI uses machine learning (ML) and large language models (LLMs) as core technologies to generate content such as text, images, video, and audio. Many of the applications we use most often today, such as chatbots, search, and content creation, are powered by ML and LLMs, so when people talk about AI, they are probably referring to ML- and LLM-based AI.

AI systems and AI-powered applications have different levels of complexity and are exposed to different risks. Typically, a vulnerability in an AI system also affects the AI-powered applications that depend on it. This article focuses on the risks that affect AI-powered applications, the kind most organizations have already started building or will build in the near future.

Defend Your GenAI Apps from Identity Threats

There are four critical requirements for which identity is crucial when building AI applications. First, user authentication: the agent or app needs to know who the user is. For example, a chatbot might need to display my chat history, or know my age and country of residence to customize its replies.
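As a minimal sketch of this first requirement, the snippet below customizes a reply from identity claims that an authentication layer has already verified and decoded. The claim names ("sub", "age", "country") are illustrative assumptions, not any specific provider's token schema.

```python
# Sketch: customizing a chatbot reply from verified identity claims.
# Assumes token verification happened upstream; the claim names below
# ("sub", "age", "country") are illustrative, not a provider's schema.

def personalize_reply(claims: dict, base_reply: str) -> str:
    """Tailor a reply using claims from an already-verified identity token."""
    if not claims.get("sub"):
        # No authenticated subject: refuse rather than guess who the user is.
        raise PermissionError("unauthenticated request")
    reply = base_reply
    if claims.get("age") is not None and claims["age"] < 18:
        reply += " (age-restricted content omitted)"
    if claims.get("country"):
        reply += f" [region: {claims['country']}]"
    return reply
```

The point of the sketch is that personalization decisions hang off verified claims, never off user-supplied chat input.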
This requires some form of identification, which authentication provides. Second, calling APIs on behalf of users: AI agents connect to far more apps than a typical web application does, and as GenAI apps integrate with more products, calling APIs securely will be critical. Third, asynchronous workflows: AI agents may need extra time to complete tasks or to wait for complex conditions to be met. That might take minutes or hours, but it could also take days, and users won't wait that long. These cases will become mainstream and will be implemented as asynchronous workflows, with agents running in the background and humans acting as supervisors who approve or reject actions even when away from the chatbot. Fourth, authorization for retrieval-augmented generation (RAG): almost all GenAI apps feed information from multiple systems into AI models to implement RAG. To avoid disclosing sensitive information, any data fed to a model so it can respond or act on behalf of a user must be data that user has permission to access.

We need to solve all four requirements to realize GenAI's full potential and to help ensure that our GenAI applications are built securely.

Leveraging AI to Help with Security Attacks

AI has also made it easier and faster for attackers to carry out targeted attacks, for example by using AI to run social engineering campaigns, create deepfakes, or exploit application vulnerabilities at scale. Building GenAI into applications securely is one challenge, but AI can also help defenders detect and respond to potential attacks faster. Traditional security measures like MFA are no longer enough by themselves; integrating AI into your identity security strategy can help detect bots, stolen sessions, and other suspicious activity. The rise of AI-based applications holds vast potential, but AI also poses new security challenges.

What's next?
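To make the fourth requirement concrete, here is a minimal sketch of permission-aware retrieval for RAG, assuming each document carries its own allow-list. Real systems would consult a policy engine or the source system's ACLs; the shapes below are hypothetical.

```python
# Sketch: permission-aware retrieval for RAG. Authorization is applied to
# retrieved documents BEFORE anything reaches the model, so the LLM never
# sees data the requesting user cannot access. The Document/ACL shapes are
# hypothetical, not a specific product's API.

from dataclasses import dataclass, field

@dataclass
class Document:
    doc_id: str
    text: str
    allowed_users: set = field(default_factory=set)  # hypothetical allow-list

def retrieve_for_user(user_id: str, query: str, corpus: list) -> list:
    """Return documents matching the query, filtered by the user's permissions."""
    matches = [d for d in corpus if query.lower() in d.text.lower()]
    return [d for d in matches if user_id in d.allowed_users]
```

Filtering after retrieval but before prompt construction is the key design choice: a relevance match alone is never enough to put a document in front of the model.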
AI is changing the way humans interact with technology and with each other. In the next decade, we will see the rise of a huge AI agent ecosystem: networks of interconnected AI programs that integrate into our applications and act autonomously on our behalf. While GenAI has many positives, it also introduces significant security risks that must be considered when building AI applications. Enabling builders to integrate GenAI into their apps securely, making them AI- and enterprise-ready, is crucial.

The flip side of AI is how it can help with traditional security threats. AI applications face security issues similar to those of traditional applications, such as unauthorized access to information, but malicious actors now bring new attack techniques to bear. AI is a reality, for better or for worse. It brings countless benefits to users and builders, but it also raises concerns and new security challenges throughout every organization. Identity companies like Auth0 are here to help take the security piece off your plate. Learn more about building GenAI applications securely at auth0.ai.

Daily Brief Summary

MISCELLANEOUS // AI in Security: Challenges and Innovations in Identity Protection

AI is increasingly integrated into business operations via chat interfaces, data analysis, and user preference systems, raising new security concerns, particularly regarding identity.

AI spans several technologies, including symbolic AI, machine learning (ML), and large language models (LLMs); ML and LLMs power many contemporary applications such as chatbots and content generation.

Identity challenges in AI applications include necessary user authentication, secure API interactions on users' behalf, handling asynchronous workflows, and authorizing data retrieval for AI processing.

Secure AI deployment requires addressing these identity aspects to fully leverage the potential of Generative AI (GenAI) applications without compromising security.

The adoption of AI amplifies the ability of attackers to perform sophisticated attacks such as social engineering and exploiting vulnerabilities at scale.

Traditional security solutions such as multi-factor authentication (MFA) are inadequate on their own; integrating AI into security strategies can enhance detection of anomalies such as bots and stolen sessions.

The article discusses the double-edged nature of AI in the realm of security, illustrating both the opportunities for enhanced security measures and the heightened risks from advanced threat vectors.