To all the coders, testers, security aficionados and fuzzers-in-the-making out there—welcome!
This post kicks off our article series about fuzzing, a powerful software testing method for finding bugs. We want to share knowledge and best practices to promote a better fuzzing experience. We believe the world will become a better place if more security-conscious people—developers and security enthusiasts alike—construct effective fuzzers. Let’s make fuzzing enjoyable and successful.
Fuzzing is an automated software testing technique that detects flaws by injecting invalid, malformed, or unexpected inputs at runtime.
In short, fuzzing uses automated input mutation to trigger bugs: the fuzzer feeds the target massive amounts of mutated or random data until the software crashes or otherwise misbehaves. When a crash is found, the triggering input is saved so the root cause can be analyzed, typically with a debugger.
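To make that loop concrete, here is a minimal sketch of a fuzz harness using Go's built-in fuzzing support (Go 1.18+). Everything in it is hypothetical: ParseKey stands in for whatever function you want to test, and it contains a deliberate missing-separator bug that the fuzzer surfaces as a panic.

```go
// parsekey_fuzz_test.go — a minimal, self-contained sketch (hypothetical target).
package parser

import (
	"strings"
	"testing"
)

// ParseKey is a stand-in target with a deliberate bug: it assumes every
// input contains a ':' separator. If the separator is missing, strings.Index
// returns -1 and the slice expression panics ("slice bounds out of range").
func ParseKey(s string) string {
	i := strings.Index(s, ":")
	return s[:i]
}

// FuzzParseKey is the harness. The engine mutates the seed inputs and calls
// the target with each generated string; a panic is reported as a failing
// input and written to testdata/fuzz/ for replay.
func FuzzParseKey(f *testing.F) {
	// Seed corpus: well-formed inputs the mutator starts from.
	f.Add("user:alice")
	f.Add("key:value")

	f.Fuzz(func(t *testing.T, s string) {
		_ = ParseKey(s)
	})
}
```

Running `go test -fuzz=FuzzParseKey` quickly generates an input without a ':' and reports the crash together with the input that triggered it.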
With Microsoft and Google as early adopters, fuzzing has gained widespread popularity. More and more companies use fuzzing in software development. For example, 65% of security decision-makers adopted fuzz testing in 2023, with a further 16% planning to do so, according to a Forrester survey (Forrester 2023).
Fuzzing is popular for its effectiveness. For instance, Google’s OSS-Fuzz has helped find and fix over 8,800 vulnerabilities and 28,000 bugs across 850 open-source projects over the course of eight years (OSS-Fuzz). Google’s Project Zero team found 16 Windows kernel vulnerabilities in font handling through fuzzing (Project Zero).
Fuzzing offers a high benefit-to-cost ratio. Exhaustively testing all possible user input in a complex application is infeasible by hand. Automated fuzzing addresses this pain point by generating and testing a huge number and variety of inputs, providing good coverage.
Fuzzing explores paths through the program and its input space that the developers never considered, areas that traditional testing methods usually leave uncovered. In this way, fuzzing helps identify defects that are overlooked during development, testing, and debugging.
On the downside, effective fuzzing has a steep learning curve—it’s an art and a science at the same time. Some software is hard to fuzz, and creating an effective fuzzing setup can be complex: writing harnesses that call the right APIs and feed them meaningful data takes knowledge and effort, as many pieces need to work together seamlessly. This is particularly true for large-scale fuzzing.
And, as with all good things in life, fuzzing has limitations. Although it finds serious faults, fuzzing alone cannot draw a complete picture of an application’s overall security posture. Combined with other proven methods of security analysis, such as static code analysis, dynamic testing, and hands-on penetration testing, fuzzing contributes to a complete picture. In development, fuzzing should be part of the Secure Development Lifecycle, alongside security requirements, threat analysis, and attack surface reduction.
Most importantly, a poorly designed harness can significantly limit your bug detection capability. Fuzzing excels at finding memory safety issues such as buffer overflows; it is less effective at detecting logic vulnerabilities or permission flaws that don’t cause crashes, unless the harness makes such violations observable (see the sketch below).
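One way to widen what a harness can detect is to encode expected properties as explicit checks, so violations fail the run even when nothing crashes. The sketch below again uses Go's built-in fuzzing; Encode, Decode, and the round-trip property are hypothetical stand-ins chosen purely for illustration.

```go
// roundtrip_fuzz_test.go — a minimal sketch of a property-checking harness.
package codec

import (
	"bytes"
	"testing"
)

// Encode and Decode are hypothetical stand-ins for the codec under test;
// here they simply reverse the bytes so the example is self-contained.
func Encode(data []byte) []byte          { return reverse(data) }
func Decode(data []byte) ([]byte, error) { return reverse(data), nil }

func reverse(b []byte) []byte {
	out := make([]byte, len(b))
	for i, c := range b {
		out[len(b)-1-i] = c
	}
	return out
}

// FuzzRoundTrip asserts a property: decoding freshly encoded data must
// return the original bytes. A logic bug that silently corrupts data would
// never crash on its own, but this check turns it into a reportable failure.
func FuzzRoundTrip(f *testing.F) {
	f.Add([]byte("hello"))

	f.Fuzz(func(t *testing.T, data []byte) {
		decoded, err := Decode(Encode(data))
		if err != nil {
			t.Fatalf("decode of freshly encoded data failed: %v", err)
		}
		if !bytes.Equal(decoded, data) {
			t.Fatalf("round-trip mismatch: got %q, want %q", decoded, data)
		}
	})
}
```

The design choice here is to assert properties the code must always satisfy (round-trip fidelity, idempotence, invariants) rather than only waiting for memory corruption, which makes the harness sensitive to a broader class of bugs.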
Here’s an outlook on our upcoming fuzzing article series:
#2: Common fuzzing mistakes and best practices
#3: How to write harnesses for Go and fuzz Go applications
#4: How to write harnesses for Rust and Python and fuzz them
#5: How to scope a software target for APIs to fuzz
#6: The different types of fuzzing harnesses
#7: Effective seeding
#8: How to perform coverage analysis
#9: How to run fuzzing campaigns
#10: Continuous fuzzing campaigns
We hope you’re eager to read the next part as soon as it’s published. Let’s break some code (safely)!
Special thanks to our reviewer, Stephan Zeisberg.
Editing by Maria A. Sivenkova