Article Details

Scrape Timestamp (UTC): 2025-06-08 13:02:36.531

Source: https://www.theregister.com/2025/06/08/chatterbox_labs_ai_adoption/

Original Article Text

Click to Toggle View

Enterprises are getting stuck in AI pilot hell, say Chatterbox Labs execs. Security, not model performance, is what's stalling adoption. Interview Before AI becomes commonplace in enterprises, corporate leaders have to commit to an ongoing security testing regime tuned to the nuances of AI models. That's the view of Chatterbox Labs CEO Danny Coleman and CTO Stuart Battersby, who spoke to The Register at length about why companies have been slow to move from AI pilot tests to production deployment. "Enterprise adoption is only like 10 percent today," said Coleman. "McKinsey is saying it's a four trillion dollar market. How are you actually ever going to move that along if you keep releasing things that people don't know are safe to use or they don't even know not just the enterprise impact, but the societal impact?" He added, "People in the enterprise, they're not quite ready for that technology without it being governed and secure." In January, consulting firm McKinsey published a report examining the unrealized potential of artificial intelligence (AI) in the workplace. The report, "Superagency in the workplace: Empowering people to unlock AI’s full potential," found growing interest and investment in AI technology, but slow adoption. ...what you have to do is not trust the rhetoric of either the model vendor or the guardrail vendor, because everyone will tell you it's super safe and secure. "Leaders want to increase AI investments and accelerate development, but they wrestle with how to make AI safe in the workplace," the report says. Coleman argues that traditional cybersecurity and AI security are colliding, but most infosec teams haven't caught up, lacking the background to grasp AI's unique attack surfaces. He pointed to Cisco's acquisition of Robust Intelligence and Palo Alto Networks' acquisition of Protect AI as examples of some players that have taken the right approach. Battersby said the key for organizations that want to deploy AI at scale is to embrace a regime of continuous testing based on what the AI service actually does. "So the first thing is to think about what is safe and secure for your use case," he explained. "And then what you have to do is not trust the rhetoric of either the model vendor or the guardrail vendor, because everyone will tell you it's super safe and secure." That's critical, Battersby argues, because even authorized users of an AI system can make it do damaging things. "What we're trying to get across to you is that content safety filters, guardrails are not good enough," said Coleman. "And it's not going to change anytime soon. It needs to be so much more layered." While that may entail some cost, Battersby contends that constant testing can help bring costs down - for example, by showing that smaller, more affordable models are just as safe for particular use cases. The complete interview follows…

Daily Brief Summary

MISCELLANEOUS // AI Adoption Stalled by Security Concerns, Executives Claim

Enterprise adoption of AI technology remains low at around 10%, despite its potential in a multi-trillion-dollar market.

Security concerns, rather than model performance, are primarily hindering the move from pilot phases to full deployment.

Recent McKinsey report highlights slow AI adoption despite growing interest and investment, citing safety in the workplace as a major challenge.

Chatterbox Labs executives emphasize the necessity for continuous security testing tailored to AI models to ensure safe usage.

Current cybersecurity measures are not sufficient for AI; AI introduces unique risks and requires specialized security approaches.

Significant acquisitions like Cisco's Robust Intelligence and Palo Alto Networks' Protect AI indicate a trend towards integrating robust AI security.

Constant testing not only ensures security but can also prove cost-effective by showcasing that smaller AI models are sufficiently safe.

Executives warn against trusting vendor claims about safety without verification, advocating for a more layered and comprehensive security strategy.