Tuesday, December 8, 2009

Why auditing makes poor security

Compliance and auditing have been the main driver for "securing" computer systems for about a decade now. There are basically rules in place, be they legal regulations or conditions which must be met before a contract is signed, and these rules need to be followed or else there are consequences. The consequences typically amount to a fine, maybe a lost contract, and a little bad press.

The basic problem with compliance and audits is that they get people into the mindset that if they follow all of the steps laid out, they are compliant and need do nothing more. This doesn't make them much more secure, since the attackers are just as familiar with the regulations as the auditors and system administrators who implement the rules. However, it does give them two important things. First, it gives them a false sense of security. More importantly, though, they don't really care if they're compromised, because compliance gives them legal and political protection. The legal protection is that they were compliant, and therefore the compromise clearly wasn't their fault, so the liability is severely limited. The political protection is the same argument, used in the event that the story makes it to the media. That doesn't save the company any money, but it helps them look good.

So if we don't give companies guidelines on what they need to be secure, how will they know what to do to make sure the information is secure? Well, I'd say that's up to them. We don't have laws dictating how they do their accounting, but they seem to manage to make that work and still inter-operate with the IRS, the state government, and other businesses. Another argument for having these compliance standards is that without them companies wouldn't take any initiative to secure their information. That depends purely on the economics of the situation: it's a matter of the cost to secure their systems versus the expected cost if they don't. If they spend hundreds of thousands of dollars buying equipment and hiring quality people, that may make them a very difficult target and tremendously lower the risk of data theft. On the other hand, if they spend little to no money on security, they might not get compromised anyway.
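To make that trade-off concrete, here's a rough back-of-the-envelope sketch in Python. Every number in it is invented purely for illustration; the point is only that the rational spend follows from the expected cost of a breach, not from any checklist.

    # Toy expected-cost comparison; all figures are made up for illustration.
    breach_cost = 2000000          # guessed total cost of a breach (fines, cleanup, lost business)
    p_breach_unsecured = 0.30      # guessed yearly probability of a breach with minimal security
    p_breach_secured = 0.05        # guessed yearly probability after investing in security
    security_spend = 300000        # yearly cost of equipment and quality people

    expected_cost_unsecured = p_breach_unsecured * breach_cost
    expected_cost_secured = security_spend + p_breach_secured * breach_cost

    print("Do nothing: expected yearly cost $%d" % expected_cost_unsecured)
    print("Invest:     expected yearly cost $%d" % expected_cost_secured)

With those guesses the investment pays for itself (600,000 versus 400,000 a year); shrink the breach cost or the baseline risk enough and it doesn't, which is exactly why some companies can rationally spend next to nothing.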

If information security is important to the population at large, then the punishment needs to be stricter. If a company can't secure the information it has been entrusted with, be it due to negligence or incompetence, it should be held accountable. To say "Well, we did everything on the checklist. We spent a lot of money on security and tried really hard" is a fine explanation, but it does not excuse them from what they allowed to happen. Whatever they did obviously wasn't enough.

There are some outstanding disclosure laws which do a good job of accomplishing this. They make it less common for companies to just sweep things under the rug when something bad happens. Instead, companies must report the incident to the government, which will make sure it becomes publicly known. The limitation is that a slick public relations person can mitigate the damage very well. In addition to making the breach known, companies should be required to pay damages to the people affected. As long as there is so little cost to allowing your company to have a data breach, we can expect to see more and more of these problems.

Now, some are quick to point out papers like Romanosky's, which indicate that we can't find any "statistically significant effect that laws reduce identity theft." Of course, I could refute that with other papers which indicate there is some correlation, but I'd rather go one step further and look at the other benefits of these disclosure laws. My point is just that they're a step in the right direction; I certainly wouldn't claim they're enough to motivate industries to take seriously their obligation to secure the information they possess.

Of course, when companies don't even know they've been compromised, it's a difficult problem to solve. There are some interesting products that model "normal" traffic over one specific protocol and detect anomalies which would indicate a problem (an attack occurring, something compromised with data flowing outbound, etc.). The problem is that this is an incredibly difficult thing to do, just from a technical standpoint. With research now showing that shellcode can be crafted to read like plain English text, it's really difficult to tell the good data from the bad. Filters which look for "things that look like Social Security numbers" are inaccurate on both ends (they miss real ones and flag things which are not actually SSNs), plus they're often limited to a specific protocol (typically HTTP).
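To see why those SSN-style filters leak on both ends, here's a minimal sketch of the kind of pattern match they rely on. The regular expression and sample strings are my own invention for illustration; real products are more elaborate, but they hit the same two failure modes.

    import re

    # Naive detector: three digits, two digits, four digits, separated by dashes.
    ssn_pattern = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

    samples = [
        "SSN on file: 219-09-9999",      # formatted like an SSN: caught
        "SSN on file: 219099999",        # same digits without dashes: missed
        "Invoice 123-45-6789 attached",  # order number that isn't an SSN: flagged anyway
    ]

    for text in samples:
        print(text, "->", "FLAGGED" if ssn_pattern.search(text) else "passed")

Loosen the pattern to catch the un-dashed form and the false positives explode (any nine-digit number qualifies); tighten it and more real numbers slip past. And none of this helps if the data leaves over a protocol the filter doesn't watch, or goes out compressed or encrypted.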

The moral of the story is that, just like any complex problem, there's no magic bullet. There are things which will help with different aspects of the problem, but it really takes a person who is knowledgeable and spends time thinking about the technical limitations of each approach.

Thursday, December 3, 2009

Interesting patterns

I just looked at the clock and saw it was 12:36, which seemed like an interesting sequence. I came to determine that it was inherently interesting because 12 * 3 = 36, and three is not only the third digit in the sequence but also the smallest odd factor of 12 (excluding 1, of course). Beyond that, three is a common factor of 12 and 36 as well as the smallest odd prime. The square root of 36 is six, which also happens to be the last digit. Looking at the digits by themselves, I noticed that 1 + 2 = 3 and 2 * 3 = 6. If there were another number in the sequence, it'd probably be 729 (3^6).
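Just for fun, here's a small sketch that scans a 24-hour clock for other times with the same digit relationships I noticed in 12:36. The property it encodes (first two digits sum to the third, second and third multiply to the fourth) is simply my own framing of the observation above.

    # Find HH:MM times whose digits d1 d2 d3 d4 satisfy d1 + d2 == d3 and d2 * d3 == d4,
    # the relationships I noticed in 12:36 (1 + 2 = 3 and 2 * 3 = 6).
    for hour in range(24):
        for minute in range(60):
            d1, d2 = divmod(hour, 10)
            d3, d4 = divmod(minute, 10)
            if d1 + d2 == d3 and d2 * d3 == d4:
                print("%02d:%02d" % (hour, minute))

It turns up a handful of other times (01:11, 11:22, 21:33, and so on), which makes me wonder whether any of those would have jumped out at me the same way.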

That makes me wonder about a lot of things. Like do other people see numbers and start picking out patterns? Do certain series of numbers look interesting to others, even if they can't explain why? Would I (and perhaps others) pick up on these interesting numbers if looking at an analog clock?