Choosing Security-Critical Programs

The world of Internet servers is evolving rapidly, and you may find that you want to use a server that has not been mentioned here in a security-critical position. How do you figure out whether or not it is secure?

My Product Is Secure Because . . .

The first step is to discount any advertising statements you may have heard about it. You may hear people claim that their server is secure for any of the reasons discussed below. None of these things guarantees security or reliability; horrible security bugs have been found in programs with all of these characteristics.

It contains no publicly available code, so it's secret

People don't need to be able to see the source code of a program in order to find problems with it. In fact, most attacks are found by trying attack methods that worked on similar programs, watching what the program does, or looking for vulnerabilities in the protocol, none of which requires access to the source code. It is also possible to reverse-engineer an application to find out exactly how it was written. This can take a considerable amount of time, but the fact that you are not willing to spend that time doesn't mean that attackers feel the same way. Attackers are also unlikely to obey any software license agreements that prohibit reverse engineering.

In addition, some vendors who make this claim apply extremely narrow definitions of "publicly available code". For instance, they may in fact use licensed code that is distributed in source form and is free for noncommercial use. Check copyright acknowledgments -- a program that includes copyright acknowledgments for the Regents of the University of California, for instance, almost certainly includes code from some version of the Berkeley Unix operating system, which is widely available. There's nothing wrong with that, but if you're paying for a product because it's based on secret source code, you deserve to get what you're paying for.
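
If you want to check this sort of claim yourself, it's often enough to look at the copyright notices embedded in the program's binaries. The following is a minimal sketch in Python that does roughly what the Unix strings command does; the file path comes from the command line, and the phrases searched for are only examples.

    import re
    import sys

    def printable_strings(path, min_len=8):
        """Yield runs of printable ASCII at least min_len bytes long."""
        with open(path, "rb") as f:
            data = f.read()
        for match in re.finditer(rb"[\x20-\x7e]{%d,}" % min_len, data):
            yield match.group().decode("ascii")

    if __name__ == "__main__":
        # Example phrases; "Regents of the University of California" marks BSD-derived code.
        phrases = ("Regents of the University of California", "Copyright")
        for s in printable_strings(sys.argv[1]):
            if any(p in s for p in phrases):
                print(s)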

It contains publicly available code, so it's been well reviewed

Publicly available code could be well reviewed, but there's no guarantee. Thousands of people can read publicly available code, but most of them don't. In any case, reviewing code after it's written isn't a terribly effective way of ensuring its security; good design and testing are far more efficient.

People also point out that publicly available code gets more bug fixes and more rapid bug fixes than most privately held code; this is true, but this increased rate of change also adds new bugs.

It is built entirely from scratch, so it didn't inherit any bugs from any other products

No code is bug free. Starting from scratch replaces the old bugs with new bugs. They might be less harmful or more harmful. They might also be identical; people tend to think along the same lines, so it's not uncommon for different programmers to produce the same bug. (See Knight, Leveson, and St. Jean, "A Large-Scale Experiment in N-Version Programming," Fault-Tolerant Computing Systems Conference 15, for an actual experience with common bugs.)

It is built on an old, well-tested code base

New problems show up in old code all the time. Worse yet, old problems that hadn't been exploited yet suddenly become exploitable. Something that's been around for a long time probably isn't vulnerable to attacks that used to be popular, but that doesn't predict much about its resistance to future attacks.

It doesn't run as root/Administrator/LocalSystem

A program that doesn't run as one of the well-known privileged accounts may be safer than one that does. At the very least, if it runs amok, it won't have complete control of your entire computer. However, that's a very long distance from actually being safe. For instance, no matter what user is involved, a mail delivery system has to be able to write mail into users' mailboxes. If the mail delivery system can be subverted, it can be used to fill up disks or forge email, no matter what account it runs as. Many mail systems have more power than that.

There are two separate problems with services that are run as "unprivileged" users. The first is that the privileges the service needs in order to function carry risks with them; a mail system must be able to deliver mail, and that's inherently risky. The second is that few operating systems let you control privileges precisely enough to give a service exactly the privileges it needs; the ability to deliver mail often comes with the ability to write files to all sorts of other places, for instance. Many programs introduce a third problem by creating special accounts to run the service and then failing to turn off default privileges those accounts don't need. Most commonly, they fail to disable login for the special account; programs rarely need to log in, but attackers often do.
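
When you evaluate how a program sets up its special account, it's worth checking whether that account can log in at all. The following Python sketch flags local accounts that look like system accounts but still have an interactive shell; it assumes a Unix-style password database, and the UID cutoff and the list of non-login shells are assumptions that vary from system to system.

    import pwd

    # Shells that prevent interactive login; exact paths vary by system (assumption).
    NOLOGIN_SHELLS = {"/sbin/nologin", "/usr/sbin/nologin", "/bin/false"}
    SYSTEM_UID_MAX = 1000   # assumption: UIDs below this belong to system/service accounts

    for entry in pwd.getpwall():
        if 0 < entry.pw_uid < SYSTEM_UID_MAX and entry.pw_shell not in NOLOGIN_SHELLS:
            # A service account with a real shell is a candidate for having login disabled.
            print(f"{entry.pw_name}: uid={entry.pw_uid} shell={entry.pw_shell}")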

It doesn't run under Unix, or it doesn't run on a Microsoft operating system

People produce dozens of reasons why other operating systems are less secure than their favorite one. (Unix source code is widely available to attackers! Microsoft source code is too big! The Unix root concept is inherently insecure! Windows's layered model isn't any better!) The fact is, almost all of these arguments have a grain of truth. Both Unix and Windows have serious design flaws as secure operating systems; so does every other popular operating system.

Nonetheless, it's possible to write secure software on almost any operating system, with enough effort, and it's easy to write insecure software on any operating system. In some circumstances, one operating system may be better matched to the service you want to provide than another, but most of the time, the security of a service depends on the effort that goes into securing it, both at design and at deployment.

There are no known attacks against it

Something can have no known attacks without being at all safe. It might not have an installed base large enough to attract attackers; it might be vulnerable but usually installed in conjunction with something easier to attack; it might just not have been around long enough for anybody to get around to it; it might have known flaws that are difficult enough to exploit that nobody has yet implemented attacks for them. All of these conditions are temporary.

It uses public key cryptography (or some other secure-sounding technology)

As of this writing, public key cryptography is a popular victim for this kind of argument because most people don't understand much about how it works, but they know it's supposed to be exciting and secure. You therefore see firewall products that say they're secure because they use public key cryptography, but that don't say what specific form of public key cryptography and what they use it for. This is like toasters that claim that they make perfect toast every time because of "digital processing technology". They can be digitally processing anything from the time delay to the temperature to the degree of color-change in the bread, and a digital timer will burn your toast just as often as an analog one.

Similarly, there's good public key cryptography, bad public key cryptography, and irrelevant public key cryptography. Merely adding public key cryptography to some random part of a product won't make it secure. The same is true of any other technology, no matter how exciting it is. A supplier who makes this sort of claim should be prepared to back it up by providing details of what the technology does, where it's used, and how it matters.

Their Product Is Insecure Because . . .

You'll also get people who claim that other people's software is insecure (and therefore unusable or worse than their competing product) because:

It's been mentioned in a CERT-CC advisory or on a web site listing vulnerabilities

CERT-CC issues advisories for programs that are supposed to be secure, but that have known problems for which fixes are available from the supplier. While it's always unfortunate to have a problem show up, if there's a CERT-CC advisory for it, at least you know that the problem was unintentional and the vendor has taken steps to fix it. A program with no CERT-CC advisories might have no problems; but it might also be completely insecure by design, be distributed by a vendor who never fixes security problems, or have problems that were never reported to CERT-CC. Since CERT-CC is relatively inactive outside of the Unix world, problems on non-Unix platforms are less likely to show up there, but they still exist.

Other lists of vulnerabilities are often a better reflection of actual risks, since they will list problems that the vendor has chosen to ignore and problems that are there by design. On the other hand, they're still very much a popularity contest. The "exploit lists" kept by attackers, and people trying to keep up with them, focus heavily on attacks that provide the most compromises for the least effort. That means that popular programs are mentioned often, and unpopular programs don't get much publicity, even if the popular programs are much more secure than the unpopular ones.

In addition, people who use this argument often provide big scary numbers without putting them in context; what does it mean if you say that a given web site lists 27 vulnerabilities in a program? If the web site is carefully run by a single administrator, that might be 27 separate vulnerabilities; if it's not, it may be the same 9 vulnerabilities reported three times each. In either case, it's not very interesting if competing programs have 270!

It's publicly available

We've already argued that code doesn't magically become secure by being made available for inspection. The other side of that argument is that it doesn't magically become insecure, either. A well-written program doesn't have the kind of bugs that make it vulnerable to attack just because people have read the code. (And most attackers don't actually read code any more frequently than defenders do -- in both cases, the conscientious and careful read the code, and the vast majority of people just compile it and hope.)

In general, publicly available code is modified faster than private code, which means that security problems are fixed more rapidly when they are found. This higher rate of change has downsides, which we discussed earlier, but it also means that you are less likely to be vulnerable to old bugs.

It's been successfully attacked

Obviously, you don't want to install software that people already know how to attack. However, what you should pay the most attention to is not attacks but the response to them. A successful attack (even a very high-profile and public successful attack) may not be important if the problem was novel and rapidly fixed. A pattern where variations on the same problem show up repeatedly or where the supplier is slow to fix problems is genuinely worrisome, but a single successful attack usually isn't, even if it makes a national newspaper.

Real Indicators of Security

Any of the following things should increase your comfort:

Security was one of the design criteria

The first step towards making a secure program is trying to make one. It's not something you can achieve by accident. The supplier should have convincing evidence that security was kept in mind at the design stage, and that the kind of security they had in mind is the same kind that you have in mind. It's not enough for "security" to be a checkbox item on a list somewhere. Ask what they were trying to secure, and how this affected the final product.

For instance, a mail system may list "security" as a goal because it incorporates anti-spamming features or facilitates encryption of mail messages as they pass across the Internet. Those are both nice security goals, but they don't address the security of the server itself if an attacker starts sending it evil commands.

The supplier can discuss how major security problems were avoided

Even if you're trying to be secure, you can't get there if you don't know how. Somebody associated with your supplier and responsible for the program should be able to intelligently discuss the risks involved, and what was done about them. For instance, if the program takes user-supplied input, somebody should be able to explain to you what's been done to avoid buffer overflow problems.
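
The answer you're looking for is evidence of discipline about input, not any particular trick. As an illustration of the kind of discipline to ask about, the following Python sketch reads one line of user-supplied input from a network connection but refuses to buffer more than a fixed amount; the limit and the function name are purely illustrative.

    import socket

    MAX_LINE = 1024   # assumption: no legitimate protocol line should be longer than this

    def read_line(conn):
        """Read one CRLF-terminated line, refusing oversized input instead of buffering it."""
        buf = b""
        while b"\r\n" not in buf:
            chunk = conn.recv(256)
            if not chunk:
                raise ConnectionError("peer closed the connection mid-line")
            buf += chunk
            if len(buf) > MAX_LINE:
                # Reject rather than keep growing the buffer on attacker-supplied input.
                raise ValueError("input line too long")
        line, _, _rest = buf.partition(b"\r\n")
        # For simplicity this sketch discards anything read past the first line.
        return line.decode("ascii", errors="replace")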

It is possible for you to review the code

Security through obscurity is often better than no security at all, but it's not a viable long-term strategy. If there is no way for anybody to see the code, ever, even a bona-fide expert who has signed a nondisclosure agreement and is acting on behalf of a customer, you should be suspicious. It's perfectly reasonable for people to protect their trade secrets, and it's also reasonable for people to object to having sensitive code examined by people who aren't able to evaluate it anyway (for instance, it's unlikely that most people can do an adequate job of evaluating the strength of encryption algorithms). However, if you're willing to provide somebody who's competent to do the evaluation, and to provide strong protection for trade secrets, you should be allowed to review the code. Code that can't stand up to this sort of evaluation will not stand the test of time, either.

You may not be able or willing to review the code yourself under appropriate conditions. That's usually OK, but you should at least verify that there is some procedure for code review.

Somebody you know and trust actually has reviewed the code

It doesn't matter how many people could look at a piece of software if nobody ever does. While anybody could review open source, very few people do. If it's practical, it's wise to make the investment to have somebody reasonably knowledgeable and trustworthy actually look at the code; it's relatively cheap and easy, and any competent programmer can at least tell you whether it's well-written code. Don't assume that somebody else has done this.

There is a security notification and update procedure

All programs eventually have security problems. A well-defined process should be in place for notifying the supplier of security problems and for getting notifications and updates from them. If the supplier has been around for any significant amount of time, there should be a positive track record, showing that they react to reported problems promptly and reasonably.

The server implements a recent (but accepted) version of the protocol

You can have problems with protocols, not just with the programs that implement them. In order to have some confidence in the security of the protocol, it's helpful to have an implementation of an accepted, standard protocol in a relatively recent version. You want an accepted and/or standard protocol so that you know that the protocol design has been reviewed; you want a relatively recent version so that you know that old problems have been fixed. You don't want custom protocols, or experimental or novel versions of standard protocols, if you can avoid them. Protocol design is tricky, few suppliers do a competent job in-house, and almost nobody gets a protocol right on the first try.
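
It's often easy to check what protocol version a server actually announces, because many services (SSH, SMTP, FTP) send a banner as soon as you connect. The following Python sketch simply reads that banner; the host name and port are examples, and banner formats vary by protocol.

    import socket

    def read_banner(host, port, timeout=5):
        """Connect and return whatever greeting the server sends first."""
        with socket.create_connection((host, port), timeout=timeout) as conn:
            return conn.recv(256).decode("ascii", errors="replace").strip()

    # Example (host name is hypothetical): an SSH server usually answers with
    # something like "SSH-2.0-OpenSSH_9.6"; a "1.x" protocol version would be obsolete.
    print(read_banner("server.example.com", 22))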

The program uses standard error-logging mechanisms

In order to secure something, you need to manage it. Using standard logging mechanisms makes programs much easier to manage; you can simply integrate them into your existing log management and alerting tools. Nonstandard logging not only interferes with your ability to find messages, it also runs the risk of introducing new security holes (what if an attacker uses the logging to fill your disk?).
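
On Unix systems, "standard" usually means syslog. As a minimal sketch, the following Python fragment sends its messages through syslog rather than to a private log file, so that existing log management and alerting tools can see them; the service name is illustrative, and the location of the syslog socket (/dev/log here) varies by platform.

    import logging
    import logging.handlers

    logger = logging.getLogger("example-service")   # service name is illustrative
    logger.setLevel(logging.INFO)

    # /dev/log is the local syslog socket on most Linux systems; other platforms differ.
    handler = logging.handlers.SysLogHandler(address="/dev/log")
    handler.setFormatter(logging.Formatter("example-service: %(levelname)s %(message)s"))
    logger.addHandler(handler)

    logger.warning("refused connection from %s", "192.0.2.10")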

There is a secure software distribution mechanism

You should have some confidence that the version of the software you have is the correct version. In the case of software that you download across the Internet, this means that it should have a verifiable digital signature (even if it is commercial software!).
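
As a minimal sketch of what verification might look like in practice, the following Python fragment checks a downloaded package against a detached GnuPG signature before installation. It assumes that gpg is installed and that the vendor's signing key has already been obtained and verified through some independent channel; the file names are hypothetical examples.

    import subprocess

    def verify_signature(signature_path, package_path):
        """Return True only if gpg reports a good signature for the package."""
        result = subprocess.run(
            ["gpg", "--verify", signature_path, package_path],
            capture_output=True,
            text=True,
        )
        return result.returncode == 0

    if verify_signature("server-4.2.tar.gz.asc", "server-4.2.tar.gz"):
        print("signature OK")
    else:
        print("signature check FAILED; do not install")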

More subtly, if you're getting a complex commercial package, you should be able to trust the distribution and release mechanism, and know that you have a complete and correct version with a retrievable version number. If your commercial vendor ships you a writable CD burned just for you and then advises you to FTP some patches, you need to know that some testing, integration, and versioning is going on. If they don't digitally sign everything and provide signatures to compare to, they should at least be able to provide an inventory list showing all the files in the distribution with sizes, dates, and version numbers.