Most people1 don't realise quite how much fun security is, or exactly how sexy security expertise makes you to other people.2 We know that it's engrossing, engaging, and cool; they don't. For this reason, when security people go to the other people (let's just call them "normal people" for the purposes of this article) and tell them that they're doing something wrong, that they can't launch their product or deploy their application, or that they must stop taking sales orders immediately and probably for the next couple of days until this is fixed, then those normal people don't always react with the levels of gratefulness that we feel are appropriate.
Sometimes, in fact, they will exhibit negative responses—even quite personal negative responses—to these suggestions.
The problem is this: security folks know how things should be, and that's secure. They've taken the training, they've attended the sessions, they've read the articles, they've skimmed the heavy books,3 and all of these sources are quite clear: everything must be secure. And secure generally means "closed"—particularly if the security folks weren't sufficiently involved in the design, implementation, and operations processes. Normal people, on the other hand, generally just want things to work. There's a fundamental disconnect between those two points of view, and it isn't going to get fixed until security is the very top requirement for any project from its inception to its end.4
Now, normal people aren't stupid.5 They know that things can't always work perfectly, but they would like them to work as well as they can. This is the gap7 that we need to cross. I've talked about managed degradation as a concept before, and this is part of the story. One of the things that we security people should be ready to do is explain that there are risks to be mitigated.
For security people, those risks should be mitigated by "failing closed." It's easy to eliminate risk that way: just stop the system operating, and there's no chance it can be misused. But for many people, there are other risks: the organisation may, for example, go completely out of business because some _____8 security person turned the ordering system off. If they'd offered me the choice to balance the risk of stopping taking orders against the risk of losing some internal company data, would I have taken it? Well yes, I might have. But if I'm not offered the option, and the risk isn't explained, then I have no choice. These are the sorts of conversations I'd want to have if I'm running a business.
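The trade-off between failing closed and offering the owner a choice can be sketched in code. This is a minimal, hypothetical illustration (the class, mode names, and severity values are all invented for this sketch, not from any real system): instead of a binary on/off, an incident can drop an ordering system into a degraded mode that keeps taking orders but holds them for review.

```python
from enum import Enum

class Mode(Enum):
    NORMAL = "normal"
    DEGRADED = "degraded"   # keep taking orders, but hold them for review
    CLOSED = "closed"       # fail closed: stop taking orders entirely

class OrderSystem:
    """Hypothetical order-taking service illustrating managed degradation."""

    def __init__(self):
        self.mode = Mode.NORMAL
        self.review_queue = []
        self.accepted = []

    def report_incident(self, severity: str) -> None:
        # A security incident doesn't have to mean "turn everything off":
        # anything short of critical drops us to a degraded mode instead,
        # which is the sort of option a business owner can weigh.
        self.mode = Mode.CLOSED if severity == "critical" else Mode.DEGRADED

    def take_order(self, order: str) -> str:
        if self.mode is Mode.CLOSED:
            return "rejected: system closed"
        if self.mode is Mode.DEGRADED:
            self.review_queue.append(order)  # held for manual review
            return "accepted: pending review"
        self.accepted.append(order)
        return "accepted"
```

The point is not the code itself but the shape of the decision: the degraded mode is a third option between "everything works" and "everything stops," which is exactly what the risk conversation should put on the table.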
It's not just this type of risk, though. Coming to a project meeting two weeks before launch and announcing that the project can't be deployed "because the calls against this API aren't being authenticated" is no good at all. To anybody. As a developer, though, I have a different vocabulary—and different concerns—to those of the business owner. How about instead of saying, "you need to use authentication on this API or you can't proceed," the security person asks, "what would happen if data that was provided on this API was incorrect, or provided by someone who wanted to disrupt system operation?" In my experience, most developers are interested—are invested—in the correct operation of the system they're running and the data it processes. Asking questions that show the possible impact of lack of security is much more likely to garner positive reactions than an initial "discussion" that basically amounts to a "no."
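To make the developer conversation above concrete, here is one minimal sketch of what "authenticating calls against this API" could look like, assuming a shared-secret HMAC scheme; the secret, function names, and messages are all hypothetical, and a real deployment would draw keys from proper key management rather than a constant.

```python
import hmac
import hashlib

# Hypothetical shared secret for illustration only; real systems would
# fetch this from a key-management service, never hard-code it.
SECRET = b"example-shared-secret"

def sign(payload: bytes) -> str:
    """Compute the HMAC-SHA256 signature a trusted caller would attach."""
    return hmac.new(SECRET, payload, hashlib.sha256).hexdigest()

def handle_request(payload: bytes, signature: str) -> str:
    """Reject payloads that weren't signed by a holder of the secret."""
    expected = sign(payload)
    # compare_digest avoids leaking information through timing differences
    if not hmac.compare_digest(expected, signature):
        return "rejected: unauthenticated caller"
    return "ok: processing order data"
```

Framed the developer's way: without the signature check, anyone who can reach the endpoint can feed the system incorrect data, which is the impact worth asking about, rather than opening with "no."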
Don't get me wrong; there are times when we, as security people, need to be firm and stick to our guns.9 But in the end, it's the owners—of systems, or organisations, or business units, or resources—who get to make the final decision. It's our job to talk to them in words they can understand and ensure that they are as well informed as we can possibly make them. Without just saying "no."
5. While we've all met our fair share of stupid normal people, I'm betting you've met your fair share of stupid security people, too, so it balances out.6
This article originally appeared on Alice, Eve, and Bob – a security blog and is republished with permission.