A cynic might argue that the cost of surveillance is essentially the cost of generating and dispositioning noise. It is a cliché of traditional surveillance methods and technologies that false positives – or, more properly, the combination of alerts that flag anomalies too small to matter and true false positives, alerts that should never have been triggered in the first place – can account for 99% or more of all alerts generated. Dealing with this alert ‘firehose’ is, in essence, surveillance.
But surely artificial intelligence (AI) will cut the cost of this dramatically? There are solutions in the market today that make credible claims to reduce false positives by 80% or more. Depending on the type of surveillance, they achieve these improvements in different ways – smarter data cleaning and analysis in trade surveillance, or specialised natural language processing (NLP) and large language models (LLMs) tailored to financial compliance in communications surveillance – but in each case AI is improving alert accuracy, enabling the automation of alert disposition and speeding up the join-the-dots data work of investigations.
In other words, it is theoretically ready to save banks significant sums in headcount as well as improve effectiveness. Or is it?
First off, AI itself is expensive. Surveillance vendors have spent tens of millions of dollars in some cases developing AI-driven solutions, and they need a return. Better solutions will cost more than their ineffective legacy predecessors. As one vendor told 1LoD, “These kinds of organisations are dangerous to sell to in the sense that they want the new Tesla to replace their ageing Hyundai, but they want to pay the same price. And that doesn’t work.”
Those banks developing their own in-house AI will have to bear the cost of creating and maintaining those systems, as well as solving the challenge that they have less data to work with than the vendors who have access to multiple client datasets.
The second issue is that for AI to work well, it needs better data than many banks have. This is not a new problem – surveillance and compliance functions have been flagging enterprise data issues for years without much success, until the latest round of large fines pushed the subject up the priority list. Better data is an expensive, multi-year project.
But there is a third, more fundamental reason why AI may not deliver the cost savings banks hope for. AI solutions demand a new operating model for surveillance, and it may not be cheaper than the current one.
Surveillance operations look the way they do precisely because of the noise. That is, the classic offshore L1 team plus a much smaller escalation and investigations team exists because the job is to demonstrate to regulators that all alerts have been reviewed by a human, and that the very small number of possibly significant alerts that are escalated by that L1 team are investigated fully.
But what happens when the alert stream is many times smaller, but the number of significant hits that require investigation is much higher? Surely that offshore team of cheaper, less sophisticated L1 analysts shrinks or is removed, but that team of more expensive ex-traders in investigations has to grow?
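The arithmetic behind that question can be sketched with a simple, purely illustrative cost model. All figures below – alert volumes, escalation rates, salaries and throughput per analyst – are hypothetical assumptions chosen to illustrate the trade-off, not data from the article:

```python
# Illustrative only: compare a legacy, L1-heavy surveillance model with an
# AI-driven model in which the alert stream shrinks by 80% but a far higher
# share of alerts is significant enough to warrant full investigation.
# All numbers are hypothetical assumptions.

def annual_cost(alerts, escalation_pct, l1_salary, inv_salary,
                alerts_per_l1, cases_per_inv):
    """Headcount-driven cost: L1 analysts to clear the alert stream,
    plus investigators to work the escalated cases."""
    l1_headcount = alerts / alerts_per_l1
    investigators = (alerts * escalation_pct) / cases_per_inv
    return l1_headcount * l1_salary + investigators * inv_salary

# Legacy: 1,000,000 alerts/yr, ~0.2% escalated (the "99%+ noise" cliché).
legacy = annual_cost(1_000_000, escalation_pct=0.002,
                     l1_salary=40_000, inv_salary=150_000,
                     alerts_per_l1=20_000, cases_per_inv=200)

# AI-driven: 80% fewer alerts, but ten times the escalation rate.
ai = annual_cost(200_000, escalation_pct=0.02,
                 l1_salary=40_000, inv_salary=150_000,
                 alerts_per_l1=20_000, cases_per_inv=200)

print(f"legacy ~ ${legacy:,.0f}/yr vs AI-driven ~ ${ai:,.0f}/yr")
# prints "legacy ~ $3,500,000/yr vs AI-driven ~ $3,400,000/yr"
```

Under these assumptions the savings on the cheap L1 team are almost entirely consumed by the growth of the expensive investigations team – which is precisely the point: a smaller alert stream does not automatically mean a cheaper operating model.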
In conversations with both vendors and banks, 1LoD hears that banks are starting to realise the systemic changes they will need to make to get the most from AI – and that they find themselves having to make the same kinds of spend-today-returns-tomorrow pitches to the business and to the Chief Financial Officer that they have struggled to win in the past.
In other words, there is one sense in which AI is no different to any legacy technology. To persuade the business to buy it, compliance functions can try to emphasise future savings; they can try to build a business case in which the data and intelligence from compliance drive positive P&L; or they can use recent enforcements and fines to argue that the cost of the new technology is far less than the potential downside of not installing it. In the case of AI, if the surveillance operating and data models have to change, this pitch is not quite as straightforward as the hype cycle would have us believe.