Article Details
Scrape Timestamp (UTC): 2025-05-29 00:24:04.677
Source: https://www.theregister.com/2025/05/29/qa_adversarial_ai_financial_services_2025/
Original Article Text
Adversarial AI: The new frontier in financial cybersecurity

The financial sector is adept at balancing risk and opportunity. Adversarial AI is its next big challenge.

Partner content From the use of ATMs to online banking, the financial services sector has always been at the forefront of technology. Now, it's leading the charge in AI. In their third annual survey of financial institutions, the Bank of England and the Financial Conduct Authority found that 75 percent of companies are already using AI, with another 10 percent planning to do so over the next three years.

AI presents opportunities in areas ranging from fraud detection to analytics, but it's also relatively new as a commercial technology. Barely a third of financial institutions believe they fully understand it, the survey found. Those companies must expand their knowledge of how their AI works. Their adversaries are certainly expanding theirs, and they will use AI for their own ends.

Preparing for adversarial AI

Adversarial AI attackers tamper subtly with AI algorithms or the data that feeds them. This changes the algorithm's output in the attackers' favor, perhaps manipulating market forecasts so that they can trade on the results, or blinding algorithms to carefully tailored fraudulent transactions. That presents a big threat to the financial sector.

These attacks lie outside the traditional cybersecurity defenses that financial institutions use to protect themselves, such as firewalls, malware detection, and access controls. They require a more flexible approach to handling security risk. Financial institutions must understand concepts like data poisoning, inference-time attacks, model contamination, and risks from the AI supply chain. Regulators are working to understand these concepts too. Compliance requirements will eventually expand to capture the nuances of adversarial AI risk. When they do, financial services companies must be ready.
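To make the data-poisoning idea concrete, here is a deliberately minimal sketch (not from the survey or QA's materials; the detector, function names, and transaction figures are invented for illustration). A toy fraud detector learns a flagging threshold from historical transaction amounts; an attacker who can slip a handful of large, legitimate-looking records into that training data inflates the learned threshold until a genuinely fraudulent transfer passes unflagged.

```python
import statistics

def learn_threshold(amounts, k=3.0):
    """Learn a flagging threshold from historical transaction amounts.

    Flags anything more than k standard deviations above the mean --
    a stand-in for whatever statistical model a real detector trains.
    """
    mu = statistics.mean(amounts)
    sigma = statistics.stdev(amounts)
    return mu + k * sigma

def is_flagged(amount, threshold):
    return amount > threshold

# Clean training data: typical everyday transaction amounts.
clean = [20, 35, 50, 42, 18, 60, 55, 30, 25, 45]
t_clean = learn_threshold(clean)
print(is_flagged(5000, t_clean))      # True: a 5,000 transfer stands out

# Poisoning: the attacker injects a few large "legitimate" records
# into the training set before the model is retrained.
poisoned = clean + [4000, 4500, 5000]
t_poisoned = learn_threshold(poisoned)
print(is_flagged(5000, t_poisoned))   # False: the same transfer now slips through
```

The same small perturbation logic underlies inference-time (evasion) attacks, except there the attacker shapes the transaction itself rather than the training data. Either way, firewalls and access controls never see anything anomalous, which is why the article argues these risks need their own defenses.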
QA has years of experience in tracking and understanding developments in AI technology, along with their ramifications for sectors including finance. We are your training partner in addressing these emerging threats, as well as your industry advocate. Our work includes raising awareness and lobbying for updated regulatory frameworks with best practices that keep financial institutions and their customers safe. Not all financial institutions understand AI today, but they understand risk. And they know that as attackers master this powerful new technology, they need to assimilate and manage that risk. A strong training regime can give your financial services company the skills it needs to take advantage of AI's most powerful use cases while keeping dangers at bay. AI has already arrived in financial services; now is the time to secure it. Read more about our efforts here. Contributed by QA.
Daily Brief Summary
75% of financial institutions are currently using AI, with an additional 10% planning to integrate it within the next three years, per a survey by the Bank of England and Financial Conduct Authority.
A significant understanding gap persists: barely a third of institutions believe they fully understand the AI they use.
Adversarial AI poses significant threats by manipulating algorithms or data, benefiting attackers through distorted market forecasts or unnoticed fraudulent transactions.
Traditional cybersecurity measures like firewalls and malware detection are insufficient against adversarial AI tactics that involve data poisoning and model contamination.
Financial companies and regulators must adapt to these emerging threats: compliance requirements will expand to cover adversarial AI risks, and institutions will need a more flexible approach to managing security risk.
Training and awareness are crucial; financial entities must develop a strong training regime to both leverage AI benefits and mitigate potential adversarial risks effectively.
QA's role extends to educating and lobbying for regulatory updates to incorporate best practices for tackling adversarial AI issues in the financial sector.