Outsourcing technology service arrangements carry considerable risk to both parties. “Cybersecurity” risk is clearly one of these. The recent attention paid to information risk is justified, but not just for the usual reasons.
As the risk manager for a technology service provider in the financial industry, I maintain a matrix that maps various forms of catastrophic events that might impact one or more of our service arrangements. There are three groups of events on my matrix:
- Events that arise directly within the delivery of the service itself: an unsuccessful initial implementation; degraded service at some point after initial rollout; outright failure of the service.
- Events that arise outside of service delivery, but inside the greater arrangement: the failure of a sub-service provider; a dispute over payment or intellectual property; etc.
- “Force majeure”: acts of war; acts of God; other events considered unpredictable, “irresistible”, and occurring after the outset of a service arrangement. I consider cybercrime to fall here.

All of these things are to some extent covered by the service level agreements, contracts, and insurance policies in place. Even cybercrime is explicitly covered by a modern “errors and omissions” insurance contract. But I still attempt to understand the extent of the impacts to our firm and to our clients.
I look at factors such as dependency issues (where does one party or another have issues in their dependence upon the other party); financial matters such as loss of income, loss of investment, increased costs of obtaining insurance or loans; opportunity loss; impacts to efforts to innovate; legal and regulatory impacts; and of course reputation. At each intersection of an event with its potential outcomes, I mark whether there is impact for us as the vendor, impact to our client, or both.
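The matrix itself amounts to a simple lookup: an (event, impact) intersection marked with the affected parties. Here is a hedged Python sketch of that structure; every event name, impact category, and marking below is my own illustrative example, not an entry from the actual matrix.

```python
# A minimal sketch of the impact matrix described above.
# Event names, impact categories, and markings are illustrative
# examples only.

IMPACTS = [
    "loss_of_income",
    "increased_insurance_cost",
    "opportunity_loss",
    "legal_and_regulatory",
    "reputation",
]

# Each (event, impact) intersection records which parties are affected.
matrix = {
    ("failed_implementation", "loss_of_income"): {"vendor", "client"},
    ("sub_provider_failure", "increased_insurance_cost"): {"vendor"},
    ("cybercrime", "reputation"): {"vendor", "client"},  # arguably: everyone
}

def affected(event, impact):
    """Return the set of parties marked at this intersection (empty if unmarked)."""
    return matrix.get((event, impact), set())
```

An unmarked intersection (say, a sub-provider failure against reputation) returns an empty set, which is itself useful: the blank cells are where you go looking for gaps in the controls.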
The point of all of this is to understand where there are gaps in the controls, and where our firm can suffer losses even as protocols are followed in response to different outcomes (e.g. going broke waiting for an insurance payout or court case to play out). Simply understanding the sort of distractions that these events can cause makes the exercise worthwhile.
What I’ve discovered in mapping “cybercrime” into my matrix is that I can’t map all the impacts as being “vendor”, “client”, or “both”. The impacts of cybercrime extend in all directions. They are systemic.
As I’ve noted in previous posts, financial regulators have taken substantial interest in outsourcing arrangements over the past decade. They view the complexity of such arrangements, the sensitivity of the data involved, the possibility of service degradation, and the sometimes-significant nature of the work (e.g. supporting management functions) as areas of systemic risk. That is, risk that impacts the industry as a whole, not just the player involved in the outsourcing arrangement.
When people hear about systemic risk, they think of the “too big to fail” banks, the risks those banks took, and the terrible impact that we all faced when the risks were realized. Cybercrime in the technology outsourcing arena might not have the ability to destroy trillions of dollars of value, but it represents a systemic risk in a few ways:
For either party in a technology outsourcing arrangement, a data breach in the financial industry would evoke some predictable reactions: loss of reputation and a weakening of trust; increased difficulty focusing on the business; increased turnover. All of these reflect a weakening of the system caused by cybercrime. In short, I don’t think that the recent hysteria around cybercrime does justice to the extent of the systemic risk engendered in the financial industry.
The threats of cybercrime must be dealt with in a comprehensive way that recognizes the shared risk. It is not enough to have teams of specialists working in isolation within separate firms, tie them up with strict non-disclosure arrangements, bury the news of a breach, and fire the CISO unlucky enough to have presided over it.
The malefactors are openly hiring at publicized conventions, and readily exchange tools and information in a network that spans the globe. We must respond in kind with open sharing of information among, at least, trusted counterparties. These networks of counterparties must have the legal and regulatory freedom to exchange the information that will make a difference to the health of the whole. In this respect, it’s like mandatory reporting of infectious diseases.
We should also be pooling our demands for better technology. It boggles my mind that financial firms allow service providers – or the limitations of other financial firms – to demand the use of outmoded web browsers. There are industry-leading vendors that set the maximum supported browser version so ludicrously out of date that the browser in question now has no effective encryption options available – no session involving that platform can be secured at all. The same could be said for a number of the “standards” around encrypted file transfer. I know of financial firms that simply can’t provide encrypted data files to their vendors. It’s not hard.
It should, in fact, be law.
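And it really isn’t hard. As a hedged sketch of how little work refusing legacy encryption takes, Python’s standard ssl module lets a server set a floor on the TLS protocol version in two lines; the TLS 1.2 floor below is my illustrative choice for this example, not a claim about what any particular platform requires.

```python
import ssl

# Sketch: a server-side TLS context that rejects legacy protocol
# versions outright. TLS 1.2 as the floor is an illustrative
# baseline chosen for this example.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.minimum_version = ssl.TLSVersion.TLSv1_2

# Clients that can only speak SSLv3 or TLS 1.0/1.1 will now fail
# the handshake instead of negotiating an insecure session.
```

Any platform built in the last decade can do the equivalent; a vendor that can’t is making a choice, not facing a constraint.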
Which brings me to the final player in a systemic response to cybercrime. Governments and regulators have to stop doing what they’ve been doing up ’til now – which is: nothing. In addition to working in the field, I’ve suffered the exposure of all of my personally identifying data. It happened as the result of poor controls in the online application form of a financial firm. While dealing with the firm, I did a number of other things. I sought out others who were impacted by the breach – this wasn’t hard to do. Between us, we fairly quickly sorted out what had happened (the financial firm wasn’t saying much). I also sought out the privacy commissioner – who told me there was already a report on file, and that they’d be closing my file but would keep me posted on events (they haven’t). I also informed the firm’s regulator – whose summary response was, “What do you want us to do about it?” It’s not enough to do nothing. We the people need real protection: the ability to acquire new social security/insurance numbers, and a real privacy office armed with mandatory disclosure laws and the power to enforce real punishment.
None of this is non-obvious, yet none of it has happened to date. It may be twenty years overdue, but it’s time to start building a systemic response to this problem.
©2019-2020 m. werneburg. firstname.lastname@example.org