A Friendly Reminder: Compliance Does Not Equal Security
Security standards require you to build IT security best practices into your operations. Article by Ari Takanen, Founder and CTO of Codenomicon
The practices defined in security standards unfortunately lag somewhat behind the best practices available in the industry. New security vulnerabilities emerge constantly, so the protection measures and vulnerability detection techniques you use must be continually adjusted, both in line with and beyond what the standards define.
Reactive and proactive security
One example of a recent security paradigm shift is the movement from reactive security tools, such as firewalls, IDS systems and security scans, to proactive tools that find and protect against zero-day threats: vulnerabilities of which the manufacturers, developers and vendors are unaware. There are no protection measures available for zero-day weaknesses. The only way to protect against them is to find the weaknesses before someone else does. That is what makes you proactive.
Finding security vulnerabilities
There are two ways of finding new, previously unknown vulnerabilities in software. The first method is code auditing, but it requires access to source code and is encumbered with a high rate of false positives: reports of weaknesses that have no bearing on the security of the product. The easier method is robustness testing, or fuzzing. It is a test automation technique that generates abnormal inputs to the software under test in order to trigger crash-level failures. Fuzzing has no false positives: a crash is a crash, and always serious.
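The loop described above, feed abnormal inputs, watch for crash-level failures, can be sketched in a few lines. This is a minimal illustration only: the mutation strategy and the `toy` target function are assumptions for the example, not any particular tool's method.

```python
import random

def random_fuzz_inputs(seed_input: bytes, count: int = 100, max_mutations: int = 8):
    """Yield mutated variants of a valid sample input by overwriting random bytes."""
    rng = random.Random(0)  # fixed seed so a failing case can be reproduced
    for _ in range(count):
        data = bytearray(seed_input)
        for _ in range(rng.randint(1, max_mutations)):
            pos = rng.randrange(len(data))
            data[pos] = rng.randrange(256)  # replace with an arbitrary byte value
        yield bytes(data)

def fuzz(target, seed_input: bytes):
    """Feed each mutated input to the target; collect the inputs that crash it."""
    crashes = []
    for test_case in random_fuzz_inputs(seed_input):
        try:
            target(test_case)
        except Exception:
            crashes.append(test_case)  # no false positives: a crash is always worth triaging
    return crashes
```

In practice the "crash" signal comes from process monitoring rather than a caught exception, but the principle is the same: every reported finding corresponds to a real failure of the software under test.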
Two methods of fuzzing
But be warned: not all fuzzing techniques are effective. The original meaning of fuzzing was sending random or semi-random inputs to the piece of software you want to break. Random fuzzing can break things, and is sometimes surprisingly more effective than many recent academic developments in intelligent fuzzing. But the most effective fuzzing comes from model-based testing techniques, where the test tools are taught the operation of the communication interfaces: the protocols, including the syntax of the messages and the state machines followed in the message exchange. A rule of thumb is that random fuzzing finds around 20-30% of the flaws hiding in an implementation, whereas model-based fuzzing finds more than 80% of them.
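The model-based idea can be sketched as well: instead of mutating bytes blindly, the tool holds a model of the message syntax and injects anomalies into one field at a time while the rest of the message stays valid, so test cases reach deep into the parser. The request-line model and anomaly list below are toy assumptions for illustration; real tools model full protocol grammars and state machines.

```python
# Anomalies commonly injected into a single field (illustrative selection)
FIELD_ANOMALIES = [
    b"",            # empty field
    b"A" * 10000,   # overlong field, probing buffer handling
    b"%s%s%n",      # format-string tokens
    b"\x00",        # embedded NUL byte
    b"\xff" * 64,   # high-bit bytes
]

# Toy syntax model of a line-oriented request: (field name, valid value)
MESSAGE_MODEL = [
    ("method", b"GET"),
    ("path", b"/index.html"),
    ("version", b"HTTP/1.0"),
]

def model_based_cases():
    """Yield (field name, message) pairs, valid everywhere except one field."""
    for i, (name, _valid) in enumerate(MESSAGE_MODEL):
        for anomaly in FIELD_ANOMALIES:
            fields = [valid for _, valid in MESSAGE_MODEL]
            fields[i] = anomaly  # anomalize exactly one field per test case
            yield name, b" ".join(fields) + b"\r\n"
```

Because every other field remains well-formed, each test case passes the early parsing stages and exercises the code that handles the anomalized field, which is why model-based fuzzing reaches flaws that random byte-flipping rarely touches.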
The tools are out there to grab
Before 1999, fuzzing was an academic technique used only by a few researchers and a handful of hackers. But with freely available fuzz-test suites such as PROTOS (University of Oulu, 1999-2001) it quickly became a tool for all software developers. Although it was first used by people developing operating systems and network protocols, web developers quickly adopted the technique as well. In 2001, the first commercial tools emerged from companies such as Codenomicon and Cenzic. The tools are plentiful today. But note that whereas some tools are point-and-shoot, requiring no knowledge of the protocols and no specialist security expertise, many others are merely fuzzer development frameworks. These require that you build and maintain the tools yourself. Don't be put off by a first experience with the latter; just look elsewhere for easier-to-use tools.
Who is using fuzzing?
Still, not all developers are aware of fuzzing tools, or they have chosen not to use them due to time or budget restrictions. That is why the second wave of fuzzing users came from the system integration and service provider domain. Anyone who builds critical systems or networks is naturally interested in how reliable the devices they are using are. Some even started using fuzzing as a procurement criterion, requiring their vendors to fuzz-test products before they would even consider evaluating them. With that, fuzzing turned from an R&D tool into a security assessment tool. The majority of fuzzing users today come from enterprise environments: security-aware IT staff and newly established product security and risk assessment teams. Almost all self-respecting penetration testing service providers use at least some type of fuzzing in their assessments.
Still, even today fuzzing is integrated into very few compliance assessments. Only a handful of product certification processes so much as mention fuzzing or any similar testing technique. Fuzzing is not part of all penetration tests, and it is not used in all system integration tests. The compliance requirements that do mandate fuzzing are almost all proprietary requirements specifications written by a range of Fortune 1000 enterprises and telecommunication service providers. And without such requirements, we cannot expect every network to be secure. Zero-day threats will keep emerging, and your people will be kept busy with the patch-and-penetrate race: patch before it is too late. For most of you, security is still a reactive process. But I urge you not to close your eyes to the future. Security should be proactive, not reactive. The time of reactive security has come to an end.
Ari Takanen, founder and CTO of Codenomicon, has been researching information security issues in critical environments since 1998. His work at Codenomicon aims to ensure that new technologies are accepted by the general public by providing means of measuring and ensuring quality in networked software. Ari Takanen is one of the people behind the PROTOS research that studied information security and reliability errors in numerous protocol implementations. His company, Codenomicon Ltd., provides automated tools with a systematic approach to testing a multitude of interfaces on mission-critical software. He is the author of two books, on VoIP security and on security testing.