Article Details

Scrape Timestamp (UTC): 2025-07-16 11:33:09.344

Source: https://thehackernews.com/2025/07/ai-agents-act-like-employees-with-root.html

Original Article Text


AI Agents Act Like Employees With Root Access—Here's How to Regain Control

The AI gold rush is on. But without identity-first security, every deployment becomes an open door. Most organizations secure generative AI like a web app, but it behaves more like a junior employee with root access and no manager.

From Hype to High Stakes

Generative AI has moved beyond the hype cycle, and enterprises are racing to deploy it. Whether building with open-source models or plugging into platforms like OpenAI or Anthropic, the goal is speed and scale. But what most teams miss is this: every LLM access point or website is a new identity edge, and every integration adds risk unless identity and device posture are enforced.

What Is the AI Build vs. Buy Dilemma?

Most enterprises face a pivotal decision: build AI in-house or buy from a platform provider. The threat surface doesn't care which path you choose. Securing AI isn't about the algorithm; it's about who (or what device) is talking to it, and what permissions that interaction unlocks.

What's Actually at Risk?

AI agents are agentic, which is to say they can take actions on a human's behalf and access data as a human would. They're often embedded in business-critical systems. Once a user or device is compromised, the AI agent becomes a high-speed backdoor to sensitive data. These systems are highly privileged, and AI amplifies attacker access.

Common AI-Specific Threat Vectors

How to Secure Enterprise AI Access

To eliminate AI access risk without killing innovation, AI access control must evolve from a one-time login check to a real-time policy engine that reflects current identity and device risk.

The Secure AI Access Checklist

The Fix: Secure AI Without Slowing Down

You don't have to trade security for speed. With the right architecture, both are achievable, and Beyond Identity makes this possible today. Beyond Identity's IAM platform makes unauthorized access to AI systems impossible by enforcing phishing-resistant, device-aware, continuous access control. No passwords. No shared secrets. No untrustworthy devices.

Beyond Identity is also prototyping a secure-by-design architecture for in-house AI agents that binds agent permissions to verified user identity and device posture—enforcing RBAC at runtime and continuously evaluating risk signals from EDR, MDM, and ZTNA. For instance, if an engineer loses CrowdStrike full disk access, the agent immediately blocks access to sensitive data until posture is remediated. A minimal sketch of this pattern follows the article text.

Want a First Look?

Register for Beyond Identity's webinar for a behind-the-scenes look at how a Global Head of IT Security built and secured internal enterprise AI agents that are now used by 1,000+ employees. You'll see a demo of how one of Fortune's Fastest Growing Companies uses phishing-resistant, device-bound access controls to make unauthorized access impossible.
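The continuous, posture-aware access model described above lends itself to a short illustration. The Python below is a minimal sketch, not Beyond Identity's actual platform API: the DevicePosture fields, the ROLE_PERMISSIONS map, and the example values are hypothetical stand-ins for real EDR/MDM/ZTNA signals and an IAM policy store. What it shows is the shape of the control: the authorization check runs on every agent action, so a posture change takes effect immediately rather than at the next login.

from dataclasses import dataclass

# Hypothetical posture snapshot. In practice these fields would be fed by
# EDR/MDM/ZTNA integrations (e.g., an EDR sensor reporting disk encryption
# status), not hard-coded.
@dataclass
class DevicePosture:
    edr_healthy: bool      # EDR sensor present and reporting
    disk_encrypted: bool   # full-disk encryption verified
    os_patched: bool       # MDM reports OS at required patch level

@dataclass
class Identity:
    user_id: str
    role: str              # role resolved by the IAM platform
    device: DevicePosture

# Assumed role-to-permission mapping (RBAC); a real deployment would pull
# this from a policy store rather than a module-level dict.
ROLE_PERMISSIONS = {
    "engineer": {"read:source", "read:internal_docs"},
    "analyst": {"read:internal_docs"},
}

def posture_ok(p: DevicePosture) -> bool:
    # Every posture signal must pass; any failure downgrades access.
    return p.edr_healthy and p.disk_encrypted and p.os_patched

def authorize(identity: Identity, permission: str) -> bool:
    """Re-evaluated on every agent tool call, not once at login."""
    if not posture_ok(identity.device):
        return False  # block until posture is remediated
    return permission in ROLE_PERMISSIONS.get(identity.role, set())

# Example mirroring the article's scenario: an engineer whose device loses
# full-disk-encryption verification is cut off from sensitive data until
# the posture check passes again.
degraded = DevicePosture(edr_healthy=True, disk_encrypted=False, os_patched=True)
engineer = Identity(user_id="u123", role="engineer", device=degraded)
assert authorize(engineer, "read:source") is False

Because authorize() sits between the agent and every sensitive action, revoking access requires no session teardown: the next tool call simply fails the posture check until the device is remediated.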

Daily Brief Summary

MISCELLANEOUS // How to Secure AI Systems in Your Business Effectively

AI is being rapidly adopted in businesses, where agents behave much like employees with significant system access.

Integrating AI, especially through platforms like OpenAI, poses identity and security challenges that traditional security models do not cover.

Enterprises must choose between developing their own AI solutions or buying from external providers, with both paths presenting significant security risks.

AI agents can access and act on sensitive data, and a compromised agent becomes a backdoor for data breaches.

Effective AI security requires continuous access control and real-time identity and device risk evaluations.

Beyond Identity provides solutions to secure AI access by linking agent permissions to verified user identities and updating access controls based on current security posture.

Businesses are encouraged to attend Beyond Identity's webinar to learn more about securing internal AI systems and to see a demo of effective access controls.