3 security tips for software developers
Don't make these common security mistakes that leave you vulnerable to attack.
Every developer knows the importance of following security best practices. But too often we cut corners, and it usually takes something drastic before those practices sink in: seeing a security failure so bad it gets marked in indelible ink in our brains.
I've seen a lot of instances of poor security practices during my career as a sysadmin, but the three I'm going to describe here are basic things that every software developer should avoid. It's important to note that I've seen every single one of these errors committed by large companies and experienced developers, so you can't chalk these mistakes up to inexperienced junior engineers.
1. Don't encrypt passwords, hash them.
Earlier in my career, I worked for a company that used a management system that held some pretty important information. One day I was asked to perform a security review of the network and the software that stored our critical information. I spent a few minutes poking around before deciding to fire up Wireshark to see what traffic was running around the network.
I used my local workstation, logged into the information system, and noticed something weird. Even though this was before SSL was all the rage, I did not expect to see plain-text data containing strings such as "username" and "password." Upon closer inspection, it appeared that the system was sending my username and a random string—which was not my password—across the wire. I couldn't let it rest. I tried logging in again, except this time I purposely entered my password wrong. I didn't change all of it, just a single character.
What I expected to see was a completely different random string representing the password. Instead, only the first two bytes changed. This was interesting. Even though I was relatively inexperienced, I knew that if the representation of my password were hashed, as it should have been, it would be entirely different, not just two characters different. Heck, even a GOOD encryption scheme would do that. This, however, was not doing that at all. I tried two more wrong passwords.
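This avalanche behavior is easy to demonstrate. Here's a small sketch (using Python's standard hashlib, with made-up passwords) showing that changing a single input character produces a hash with no resemblance to the original—exactly what I expected to see on the wire, and didn't:

```python
import hashlib

# Two candidate passwords that differ by a single character.
h1 = hashlib.sha256(b"hunter2").hexdigest()
h2 = hashlib.sha256(b"hunter3").hexdigest()

print(h1)
print(h2)

# A proper hash flips roughly half the output bits for a one-character
# change (the avalanche effect), so the two digests share almost nothing.
```

If the system had been hashing properly, a one-character change to my password would have changed the entire string on the wire, not just the first two bytes.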
Armed with some sheets of paper and a pencil, I spent the next two hours figuring out the decryption scheme. At the end of those two hours, I had a Python script that could take any of those "encrypted" passwords and decrypt them to reveal the original password—something that no one should ever be able to do. I'm sure the person who dreamed up this encryption scheme never thought that someone with a couple of hours on their hands would ever sit down and work it out, but I did.
Why? Because I could.
If you have to store passwords for comparison, never encrypt them, as there is always the possibility that someone can find a decryption algorithm or key. A hash has no direct reverse, meaning no one can reverse it unless they already have a table with the mapping from plain text to hash (or they simply guess it). Knowing the hash mechanism doesn't betray the integrity of the data, whereas knowing the encryption scheme and keys will.
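In practice, that means storing a salted, slowly derived hash and comparing against it at login time. Here's a minimal sketch using only Python's standard library (the function names and iteration count are my own choices, not from any particular system):

```python
import hashlib
import hmac
import os

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Return a random salt and the derived hash; store both, never the password."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return salt, digest

def verify_password(password: str, salt: bytes, stored: bytes) -> bool:
    """Re-derive the hash from the candidate password and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return hmac.compare_digest(candidate, stored)
```

In a real system you'd likely reach for a dedicated password-hashing library (bcrypt, scrypt, or Argon2), but the principle is the same: the server keeps only salt and hash, and there is nothing for an attacker—or a bored sysadmin with a pencil—to decrypt.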
2. Don't put secret backdoors in software.
As part of a third-party software rollout, I was supporting some users who told me that their logins didn't work. This was a paid-for service provided by a vendor, but before wading into what is usually one of the most annoying support calls ("My login doesn't work"), I thought I would try it myself. It was true: the logins didn't work.
The system was a web-based learning management platform; we had paid for a small portion of its greater capabilities. As I poked around on the login page a little more, something caught my eye. One character in one of the words looked different—perhaps it was a different font, or a slightly differently shaped "o." Me being me, I viewed the page source, and noticed that there was a link associated with this particular letter. The link was purposefully hidden: the mouse cursor didn't even change when hovering over it.
I gingerly loaded that mystery link into a new browser window. All of a sudden, I was met with a screen detailing an entire suite of computers, giving me full control over what they could do and the ability to shut them down, reboot them, take screenshots, you name it. I telephoned the software vendor and asked to speak to the IT guy. After jumping through a few hoops, I finally got to someone who knew what I was talking about.
"Oh yeah!" he said. "We put that there for easy access, and no one ever found it until you. We'll remove it right away." Before we ended the call, he asked me one final question: "Why did you start digging around in the HTML?"
My answer was simple: "Because I could."
It's just not worth the risk of putting some fancy backdoor access into any system, because you can bet your bottom dollar someone will find it. No matter how obscure, code analysis—and just general prodding and poking—often yields the most surprising and interesting results.
3. Authenticate users on every page—not only on the login page.
At one point in my career, I was involved with a software development project that was being implemented by a seasoned developer. Feeling a little out of my league with this particular application, I told my manager that we would need an in-depth security review of the code. I was asked to look anyway to see what I could find. I started playing with the app, logged in, and viewed some of the data. Then I found something really interesting.
If I bookmarked one of the URLs that I hit further into the system, I could just copy and paste it into another browser, and boom! I'd be there, without having to log in. I asked the developer, "Why don't you check the login on every page? If I just enter the URL of a page further into the system, I can get there without logging in." He asked, "Why would you do that?"
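The fix is to make the authentication check impossible to forget: enforce it in one place that wraps every protected page, rather than relying on each page to remember. Here's a minimal sketch of that idea as a decorator (the session store, request shape, and handler names are all hypothetical, just to illustrate the pattern):

```python
from functools import wraps

# Hypothetical in-memory session store mapping session tokens to usernames.
SESSIONS: dict[str, str] = {}

def require_login(handler):
    """Reject any request that lacks a valid session token.

    Because this wraps *every* page handler, not just the login page,
    pasting a bookmarked deep URL into a fresh browser gets a redirect
    to the login page instead of the data.
    """
    @wraps(handler)
    def wrapper(request: dict):
        token = request.get("session_token")
        if token not in SESSIONS:
            return {"status": 302, "location": "/login"}
        return handler(request)
    return wrapper

@require_login
def reports_page(request: dict):
    user = SESSIONS[request["session_token"]]
    return {"status": 200, "body": f"reports for {user}"}
```

Most web frameworks offer the same idea as middleware or a before-request hook; the point is that no page deep in the system should trust that the login page was ever visited.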
"Because I can," I replied.
Don't leave anything up to chance
Even seasoned developers can make these mistakes. They assume no one will ever try to delve deeper into a system they have no real access to. The problem is, people will prod and they will poke. The overriding advice I want to impart here, as someone who only dabbles in security, is this: Don't leave anything up to chance. There are people out there like me, who like to dig into things and see why and how they work. But there are also a great many people who will dig in order to exploit your flaws and vulnerabilities.
Why? Because they can!