Fuzz testing

Summary:

1. Feed a stream of random bits into a program as input.

provide invalid, unexpected, or random data as inputs to a computer program

 

Fuzzing - Wikipedia https://en.wikipedia.org/wiki/Fuzzing

Fuzzing or fuzz testing is an automated software testing technique that involves providing invalid, unexpected, or random data as inputs to a computer program. The program is then monitored for exceptions such as crashes, failing built-in code assertions, or potential memory leaks. Typically, fuzzers are used to test programs that take structured inputs. This structure is specified, e.g., in a file format or protocol and distinguishes valid from invalid input. An effective fuzzer generates semi-valid inputs that are "valid enough" in that they are not directly rejected by the parser, but do create unexpected behaviors deeper in the program and are "invalid enough" to expose corner cases that have not been properly dealt with.

For the purpose of security, input that crosses a trust boundary is often the most useful.[1] For example, it is more important to fuzz code that handles the upload of a file by any user than it is to fuzz the code that parses a configuration file that is accessible only to a privileged user.
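To make the idea concrete, here is a minimal sketch (in Python, not any particular tool) of the black-box approach described above: it feeds random bytes to a hypothetical target program `./target` on standard input and treats termination by a signal as a crash. The target path, input size limit and iteration count are all placeholder choices.

```python
import random
import subprocess

def random_input(max_len=4096):
    """Generate a random byte string of random length."""
    n = random.randrange(1, max_len)
    return bytes(random.randrange(256) for _ in range(n))

def fuzz_once(target="./target"):
    """Run the target once on random stdin and report whether it crashed."""
    data = random_input()
    proc = subprocess.run([target], input=data,
                          stdout=subprocess.DEVNULL,
                          stderr=subprocess.DEVNULL)
    # On POSIX, a negative return code means the process was killed by a
    # signal (e.g. SIGSEGV), which we count as a crash and save for triage.
    if proc.returncode < 0:
        with open("crash-%d.bin" % random.randrange(10**9), "wb") as f:
            f.write(data)
        return True
    return False

if __name__ == "__main__":
    crashes = sum(fuzz_once() for _ in range(1000))
    print("crashes:", crashes)
```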

 

History

The term "fuzz" originates from a fall 1988 class project[2] in the graduate Advanced Operating Systems class (CS736), taught by Prof. Barton Miller at the University of Wisconsin, whose results were subsequently published in 1990.[3] To fuzz test a UNIX utility meant to automatically generate random input and command-line parameters for the utility. The project was designed to test the reliability of UNIX command line programs by executing a large number of random inputs in quick succession until they crashed. Miller's team was able to crash 25 to 33 percent of the utilities that they tested. They then debugged each of the crashes to determine the cause and categorized each detected failure. To allow other researchers to conduct similar experiments with other software, the source code of the tools, the test procedures, and the raw result data were made publicly available.[4] This early fuzzing would now be called black box, generational, unstructured (dumb) fuzzing.

This 1990 fuzz paper also noted the relationship of reliability to security: "Second, one of the bugs that we found was caused by the same programming practice that provided one of the security holes to the Internet worm (the 'gets finger' bug). We have found additional bugs that might indicate future security holes." (Referring to the Morris worm of November 1988.)

The original fuzz project went on to make contributions in 1995, 2000, 2006 and most recently in 2020:

  • 1995:[5] The "fuzz revisited" paper consisted of four parts. (1) Reproduced the original command line study, including a wider variety of UNIX systems and more utilities. The study showed that, if anything, reliability had gotten worse. This was the first study that included open source GNU and Linux utilities that, interestingly, were significantly more reliable than those from the commercial UNIX systems. (2) Introduced the fuzz testing of GUI (window based) applications under X-Windows. This study used both unstructured and structured (valid sequences of mouse and keyboard events) input data. They were able to crash 25% of the X-Windows applications. In addition, they tested the X-Windows server and showed that it was resilient to crashes. (3) Introduced fuzz testing of network services, again based on structured test input. None of these services was crashed. (4) Introduced random testing of system library call return values, specifically randomly returning zero from the malloc family of functions. Almost half the standard UNIX programs failed to properly check such return values.
  • 2000:[6] Applied fuzz testing to the recently released Windows NT operating system, testing applications that ran under the Win32 window system. They were able to crash 21% of the applications and hang an additional 24% of those tested. Again, applications were tested with both unstructured and structured (valid keyboard and mouse events) input, crashing almost half of those applications that they tested. They identified the causes of the failures and found them to be similar to previous studies.
  • 2006:[7] Applied fuzz testing to Mac OS X, for both command line and window based applications. They tested 135 command line utility programs, crashing 7% of those tested. In addition, they tested 30 applications that ran under the MacOS Aqua window system, crashing 73% of those tested.
  • 2020:[8] Most recently, they applied the classic generational, black box, unstructured testing to current UNIX systems, specifically Linux, FreeBSD and MacOS, to see if the original techniques were still relevant and if current utility programs were resistant to this type of testing. They tested approximately 75 utilities on each platform, with failure rates of 12% on Linux, 16% on MacOS and 19% on FreeBSD. (Note that these failure rates were worse than the results from earlier tests of the same systems.) When they analyzed each failure and categorized them, they found that the classic categories of failures, such as pointer and array errors and not checking return codes, were still broadly present in the new results. In addition, new failure causes cropped up from complex program state and algorithms that did not scale with the input size or complexity. They also tested UNIX utilities written more recently in Rust and found that they were of similar reliability to the ones written in C, though (as expected) less likely to have memory errors.

In April 2012, Google announced ClusterFuzz, a cloud-based fuzzing infrastructure for security-critical components of the Chromium web browser.[9] Security researchers can upload their own fuzzers and collect bug bounties if ClusterFuzz finds a crash with the uploaded fuzzer.

In September 2014, Shellshock[10] was disclosed as a family of security bugs in the widely used UNIX Bash shell; most vulnerabilities of Shellshock were found using the fuzzer AFL.[11] (Many Internet-facing services, such as some web server deployments, use Bash to process certain requests, allowing an attacker to cause vulnerable versions of Bash to execute arbitrary commands. This can allow an attacker to gain unauthorized access to a computer system.[12])

In April 2015, Hanno Böck showed how the fuzzer AFL could have found the 2014 Heartbleed vulnerability.[13][14] (The Heartbleed vulnerability was disclosed in April 2014. It is a serious vulnerability that allows adversaries to decipher otherwise encrypted communication. The vulnerability was accidentally introduced into OpenSSL which implements TLS and is used by the majority of the servers on the internet. Shodan reported 238,000 machines still vulnerable in April 2016;[15] 200,000 in January 2017.[16])

In August 2016, the Defense Advanced Research Projects Agency (DARPA) held the finals of the first Cyber Grand Challenge, a fully automated capture-the-flag competition that lasted 11 hours.[17] The objective was to develop automatic defense systems that can discover, exploit, and correct software flaws in real-time. Fuzzing was used as an effective offense strategy to discover flaws in the software of the opponents. It showed tremendous potential in the automation of vulnerability detection. The winner was a system called "Mayhem"[18] developed by the team ForAllSecure led by David Brumley.

In September 2016, Microsoft announced Project Springfield, a cloud-based fuzz testing service for finding security critical bugs in software.[19]

In December 2016, Google announced OSS-Fuzz which allows for continuous fuzzing of several security-critical open-source projects.[20]

At Black Hat 2018, Christopher Domas demonstrated the use of fuzzing to expose the existence of a hidden RISC core in a processor.[21] This core was able to bypass existing security checks to execute Ring 0 commands from Ring 3.

In September 2020, Microsoft released OneFuzz, a self-hosted fuzzing-as-a-service platform that automates the detection of software bugs.[22] It supports Windows and Linux.[23]

Early random testing

Testing programs with random inputs dates back to the 1950s when data was still stored on punched cards.[24] Programmers would use punched cards that were pulled from the trash or card decks of random numbers as input to computer programs. If an execution revealed undesired behavior, a bug had been detected.

The execution of random inputs is also called random testing or monkey testing.

In 1981, Duran and Ntafos formally investigated the effectiveness of testing a program with random inputs.[25][26] While random testing had been widely perceived to be the worst means of testing a program, the authors could show that it is a cost-effective alternative to more systematic testing techniques.

In 1983, Steve Capps at Apple developed "The Monkey",[27] a tool that would generate random inputs for classic Mac OS applications, such as MacPaint.[28] The figurative "monkey" refers to the infinite monkey theorem which states that a monkey hitting keys at random on a typewriter keyboard for an infinite amount of time will eventually type out the entire works of Shakespeare. In the case of testing, the monkey would write the particular sequence of inputs that would trigger a crash.

In 1991, the crashme tool was released, which was intended to test the robustness of Unix and Unix-like operating systems by randomly executing system calls with randomly chosen parameters.[29]

Types

A fuzzer can be categorized in several ways:[30][1]

  1. A fuzzer can be generation-based or mutation-based depending on whether inputs are generated from scratch or by modifying existing inputs.
  2. A fuzzer can be dumb (unstructured) or smart (structured) depending on whether it is aware of input structure.
  3. A fuzzer can be white-, grey-, or black-box, depending on whether it is aware of program structure.

Reuse of existing input seeds

A mutation-based fuzzer leverages an existing corpus of seed inputs during fuzzing. It generates inputs by modifying (or rather mutating) the provided seeds.[31] For example, when fuzzing the image library libpng, the user would provide a set of valid PNG image files as seeds while a mutation-based fuzzer would modify these seeds to produce semi-valid variants of each seed. The corpus of seed files may contain thousands of potentially similar inputs. Automated seed selection (or test suite reduction) allows users to pick the best seeds in order to maximize the total number of bugs found during a fuzz campaign.[32]
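As a sketch of the mutation step only (not PNG-aware, and not how any specific fuzzer such as AFL schedules its mutations), the following Python function derives a semi-valid variant of a seed through a few random bit flips and byte substitutions:

```python
import random

INTERESTING_BYTES = [0x00, 0xFF, 0x7F, 0x80]   # common boundary values

def mutate(seed: bytes, n_mutations: int = 8) -> bytes:
    """Produce a semi-valid variant of a seed by a few small random edits."""
    data = bytearray(seed)
    for _ in range(n_mutations):
        pos = random.randrange(len(data))
        if random.random() < 0.5:
            data[pos] ^= 1 << random.randrange(8)          # flip one bit
        else:
            data[pos] = random.choice(INTERESTING_BYTES)   # substitute a byte
    return bytes(data)

# In a real campaign the corpus would hold valid files (e.g. PNG images);
# here a byte literal stands in for one seed.
corpus = [b"\x89PNG\r\n\x1a\n...payload..."]
candidate = mutate(random.choice(corpus))
```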

A generation-based fuzzer generates inputs from scratch. For instance, a smart generation-based fuzzer[33] takes the input model that was provided by the user to generate new inputs. Unlike mutation-based fuzzers, a generation-based fuzzer does not depend on the existence or quality of a corpus of seed inputs.

Some fuzzers have the capability to do both, to generate inputs from scratch and to generate inputs by mutation of existing seeds.[34]

Aware of input structure

Typically, fuzzers are used to generate inputs for programs that take structured inputs, such as a file, a sequence of keyboard or mouse events, or a sequence of messages. This structure distinguishes valid input that is accepted and processed by the program from invalid input that is quickly rejected by the program. What constitutes a valid input may be explicitly specified in an input model. Examples of input models are formal grammars, file formats, GUI models, and network protocols. Even items not normally considered as input can be fuzzed, such as the contents of databases, shared memory, environment variables or the precise interleaving of threads. An effective fuzzer generates semi-valid inputs that are "valid enough" so that they are not directly rejected by the parser and "invalid enough" so that they might stress corner cases and exercise interesting program behaviours.

A smart (model-based,[34] grammar-based,[33][35] or protocol-based[36]) fuzzer leverages the input model to generate a greater proportion of valid inputs. For instance, if the input can be modelled as an abstract syntax tree, then a smart mutation-based fuzzer[35] would employ random transformations to move complete subtrees from one node to another. If the input can be modelled by a formal grammar, a smart generation-based fuzzer[33] would instantiate the production rules to generate inputs that are valid with respect to the grammar. However, generally the input model must be explicitly provided, which is difficult to do when the model is proprietary, unknown, or very complex. If a large corpus of valid and invalid inputs is available, a grammar induction technique, such as Angluin's L* algorithm, would be able to generate an input model.[37][38]
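As an illustration of generation from a grammar (a toy arithmetic grammar invented for this sketch, not a real protocol model), the following Python snippet instantiates production rules at random so that every generated input is valid with respect to the grammar:

```python
import random

# A toy grammar for simple arithmetic expressions; every generated
# string is syntactically valid with respect to these rules.
GRAMMAR = {
    "<expr>": [["<term>", "+", "<expr>"], ["<term>"]],
    "<term>": [["<num>", "*", "<term>"], ["<num>"]],
    "<num>":  [["0"], ["1"], ["42"]],
}

def generate(symbol="<expr>", depth=0, max_depth=8):
    """Expand a nonterminal by choosing one of its productions at random."""
    if symbol not in GRAMMAR:
        return symbol                   # terminal symbol: emit as-is
    rules = GRAMMAR[symbol]
    if depth >= max_depth:
        rules = [rules[-1]]             # force the shortest rule to terminate
    rule = random.choice(rules)
    return "".join(generate(s, depth + 1, max_depth) for s in rule)

print(generate())   # e.g. "42*1+0"
```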

A dumb fuzzer[39][40] does not require the input model and can thus be employed to fuzz a wider variety of programs. For instance, AFL is a dumb mutation-based fuzzer that modifies a seed file by flipping random bits, by substituting random bytes with "interesting" values, and by moving or deleting blocks of data. However, a dumb fuzzer might generate a lower proportion of valid inputs and stress the parser code rather than the main components of a program. The disadvantage of dumb fuzzers can be illustrated by means of the construction of a valid checksum for a cyclic redundancy check (CRC). A CRC is an error-detecting code that ensures that the integrity of the data contained in the input file is preserved during transmission. A checksum is computed over the input data and recorded in the file. When the program processes the received file and the recorded checksum does not match the re-computed checksum, then the file is rejected as invalid. Now, a fuzzer that is unaware of the CRC is unlikely to generate the correct checksum. However, there are attempts to identify and re-compute a potential checksum in the mutated input, once a dumb mutation-based fuzzer has modified the protected data.[41]
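The checksum issue can be illustrated with a small sketch. It assumes a made-up file layout in which the last four bytes hold a CRC-32 of the preceding payload; after a dumb mutation, re-computing and patching that checksum keeps the file from being rejected immediately:

```python
import struct
import zlib

def fix_crc(data: bytes) -> bytes:
    """Re-compute the trailing CRC-32 over the payload after mutation.

    Assumes a made-up layout: payload followed by a 4-byte big-endian
    CRC-32.  Real formats (e.g. PNG chunks) place checksums differently.
    """
    payload = data[:-4]
    crc = zlib.crc32(payload) & 0xFFFFFFFF
    return payload + struct.pack(">I", crc)

# After a dumb mutation has corrupted both payload and checksum,
# patching the CRC keeps the input from being rejected up front.
print(fix_crc(b"hello, mutated payload\x00\x00\x00\x00").hex())
```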

Aware of program structure

Typically, a fuzzer is considered more effective if it achieves a higher degree of code coverage. The rationale is, if a fuzzer does not exercise certain structural elements in the program, then it is also not able to reveal bugs that are hiding in these elements. Some program elements are considered more critical than others. For instance, a division operator might cause a division by zero error, or a system call may crash the program.

A black-box fuzzer[39][35] treats the program as a black box and is unaware of internal program structure. For instance, a random testing tool that generates inputs at random is considered a blackbox fuzzer. Hence, a blackbox fuzzer can execute several hundred inputs per second, can be easily parallelized, and can scale to programs of arbitrary size. However, blackbox fuzzers may only scratch the surface and expose "shallow" bugs. Hence, there are attempts to develop blackbox fuzzers that can incrementally learn about the internal structure (and behavior) of a program during fuzzing by observing the program's output given an input. For instance, LearnLib employs active learning to generate an automaton that represents the behavior of a web application.

A white-box fuzzer[40][34] leverages program analysis to systematically increase code coverage or to reach certain critical program locations. For instance, SAGE[42] leverages symbolic execution to systematically explore different paths in the program. If the program's specification is available, a whitebox fuzzer might leverage techniques from model-based testing to generate inputs and check the program outputs against the program specification. A whitebox fuzzer can be very effective at exposing bugs that hide deep in the program. However, the time used for analysis (of the program or its specification) can become prohibitive. If the whitebox fuzzer takes relatively too long to generate an input, a blackbox fuzzer will be more efficient.[43] Hence, there are attempts to combine the efficiency of blackbox fuzzers and the effectiveness of whitebox fuzzers.[44]

A gray-box fuzzer leverages instrumentation rather than program analysis to glean information about the program. For instance, AFL and libFuzzer utilize lightweight instrumentation to trace basic block transitions exercised by an input. This leads to a reasonable performance overhead but informs the fuzzer about the increase in code coverage during fuzzing, which makes gray-box fuzzers extremely efficient vulnerability detection tools.[45]
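The coverage-feedback loop can be sketched as follows. This is a simplified model in Python: the "instrumentation" is just the set of source lines observed via sys.settrace on a toy target function, standing in for the basic-block transitions that AFL or libFuzzer record in compiled code. Mutants that reach new coverage are kept as additional seeds:

```python
import random
import sys

def target(data: bytes):
    """Toy program under test with a 'deep' bug behind nested checks."""
    if len(data) > 3 and data[0] == ord("F"):
        if data[1] == ord("U"):
            if data[2] == ord("Z"):
                raise RuntimeError("bug reached")

def run_with_coverage(data: bytes):
    """Execute the target and record which source lines were hit."""
    lines = set()
    def tracer(frame, event, arg):
        if event == "line":
            lines.add(frame.f_lineno)
        return tracer
    sys.settrace(tracer)
    try:
        target(data)
        crashed = False
    except Exception:
        crashed = True
    finally:
        sys.settrace(None)
    return lines, crashed

def mutate(seed: bytes) -> bytes:
    data = bytearray(seed or b"\x00")
    data[random.randrange(len(data))] = random.randrange(256)
    return bytes(data)

seeds, seen = [b"AAAA"], set()
for _ in range(20000):
    candidate = mutate(random.choice(seeds))
    cov, crashed = run_with_coverage(candidate)
    if crashed:
        print("crash:", candidate)
        break
    if not cov <= seen:          # new coverage: keep candidate as a seed
        seen |= cov
        seeds.append(candidate)
```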

Uses

Fuzzing is used mostly as an automated technique to expose vulnerabilities in security-critical programs that might be exploited with malicious intent.[9][19][20] More generally, fuzzing is used to demonstrate the presence of bugs rather than their absence. Running a fuzzing campaign for several weeks without finding a bug does not prove the program correct.[46] After all, the program may still fail for an input that has not yet been executed; executing a program on all inputs is prohibitively expensive. If the objective is to prove a program correct for all inputs, a formal specification must exist and techniques from formal methods must be used.

Exposing bugs

In order to expose bugs, a fuzzer must be able to distinguish expected (normal) from unexpected (buggy) program behavior. However, a machine cannot always distinguish a bug from a feature. In automated software testing, this is also called the test oracle problem.[47][48]

In the absence of a specification, a fuzzer typically distinguishes between crashing and non-crashing inputs, since this is a simple and objective measure. Crashes can be easily identified and might indicate potential vulnerabilities (e.g., denial of service or arbitrary code execution). However, the absence of a crash does not indicate the absence of a vulnerability. For instance, a program written in C may or may not crash when an input causes a buffer overflow; rather, the program's behavior is undefined.

To make a fuzzer more sensitive to failures other than crashes, sanitizers can be used to inject assertions that crash the program when a failure is detected.[49][50] There are different sanitizers for different kinds of bugs: for example, AddressSanitizer detects memory-safety errors such as buffer overflows and use-after-free, ThreadSanitizer detects data races, UndefinedBehaviorSanitizer detects undefined behavior such as signed integer overflow, MemorySanitizer detects uses of uninitialized memory, and LeakSanitizer detects memory leaks.
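A fuzzing harness can then treat a sanitizer report as a failure even when the target would otherwise exit cleanly. The sketch below assumes a hypothetical target binary `target_asan` that was built beforehand with AddressSanitizer; the marker string checked for is the one AddressSanitizer prints in its reports:

```python
import subprocess

# Hypothetical target built beforehand with a sanitizer, e.g.:
#   clang -g -fsanitize=address -o target_asan target.c
TARGET = "./target_asan"

def is_failure(data: bytes) -> bool:
    """Run the sanitizer-instrumented target and flag sanitizer reports.

    AddressSanitizer aborts the process and prints a report containing
    'ERROR: AddressSanitizer' when it detects a memory error, so both a
    non-zero exit status and that marker are treated as failures.
    """
    proc = subprocess.run([TARGET], input=data, capture_output=True)
    report = proc.stderr.decode(errors="replace")
    return proc.returncode != 0 or "ERROR: AddressSanitizer" in report
```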

Fuzzing can also be used to detect "differential" bugs if a reference implementation is available. For automated regression testing,[51] the generated inputs are executed on two versions of the same program. For automated differential testing,[52] the generated inputs are executed on two implementations of the same program (e.g., lighttpd and httpd are both implementations of a web server). If the two variants produce different output for the same input, then one may be buggy and should be examined more closely.
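A minimal differential harness might look like the following sketch; the two command lines `./impl_a` and `./impl_b` are placeholders for two implementations of the same functionality, not lighttpd or httpd themselves:

```python
import subprocess

def run(cmd, data: bytes) -> bytes:
    """Run one implementation on the generated input and capture its output."""
    proc = subprocess.run(cmd, input=data, capture_output=True)
    return proc.stdout

def differs(data: bytes) -> bool:
    """Return True if the two implementations disagree on the same input."""
    out_a = run(["./impl_a"], data)   # placeholder: first implementation
    out_b = run(["./impl_b"], data)   # placeholder: second implementation
    return out_a != out_b

# Any generated input for which differs() returns True is a candidate
# differential bug and should be examined more closely.
```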

Validating static analysis reports

Static program analysis analyzes a program without actually executing it. This might lead to false positives where the tool reports problems with the program that do not actually exist. Fuzzing in combination with dynamic program analysis can be used to try to generate an input that actually witnesses the reported problem.[53]

Browser security

Modern web browsers undergo extensive fuzzing. The Chromium code of Google Chrome is continuously fuzzed by the Chrome Security Team with 15,000 cores.[54] For Microsoft Edge and Internet Explorer, Microsoft performed fuzz testing with 670 machine-years during product development, generating more than 400 billion DOM manipulations from 1 billion HTML files.[55][54]

Toolchain

A fuzzer produces a large number of inputs in a relatively short time. For instance, in 2016 the Google OSS-Fuzz project produced around 4 trillion inputs a week.[20] Hence, many fuzzers provide a toolchain that automates otherwise manual and tedious tasks which follow the automated generation of failure-inducing inputs.

Automated bug triage

Automated bug triage is used to group a large number of failure-inducing inputs by root cause and to prioritize each individual bug by severity. A fuzzer produces a large number of inputs, and many of the failure-inducing ones may effectively expose the same software bug. Only some of these bugs are security-critical and should be patched with higher priority. For instance, the CERT Coordination Center provides Linux triage tools which group crashing inputs by the produced stack trace and rank each group according to its probability of being exploitable.[56] The Microsoft Security Research Centre (MSEC) developed the !exploitable tool, which first creates a hash for a crashing input to determine its uniqueness and then assigns an exploitability rating:[57]

  • Exploitable
  • Probably Exploitable
  • Probably Not Exploitable, or
  • Unknown.
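The grouping step can be sketched generically (this is an illustration, not the CERT triage tools or !exploitable themselves): crashes are bucketed by a hash over the top frames of their stack traces, so that many failure-inducing inputs collapse into a few distinct bugs:

```python
import hashlib
from collections import defaultdict

def bucket_id(stack_trace: str, top_frames: int = 3) -> str:
    """Derive a bucket key from the top frames of a crash stack trace.

    Using only the top few frames is a common heuristic: crashes that
    unwind through the same functions are assumed to share a root cause.
    """
    frames = [line.strip() for line in stack_trace.splitlines() if line.strip()]
    key = "\n".join(frames[:top_frames])
    return hashlib.sha1(key.encode()).hexdigest()[:12]

# Hypothetical crashes gathered by a fuzzing campaign: (input, stack trace).
collected_crashes = [
    (b"\xff\x00A", "main\nparse_header\nmemcpy"),
    (b"\xff\x00B", "main\nparse_header\nmemcpy"),
    (b"GET /",     "main\nhandle_request\nstrlen"),
]

buckets = defaultdict(list)
for crash_input, trace in collected_crashes:
    buckets[bucket_id(trace)].append(crash_input)

print(len(buckets), "distinct crash buckets")   # 2 with this toy data
```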

Previously unreported, triaged bugs might be automatically reported to a bug tracking system. For instance, OSS-Fuzz runs large-scale, long-running fuzzing campaigns for several security-critical software projects where each previously unreported, distinct bug is reported directly to a bug tracker.[20] The OSS-Fuzz bug tracker automatically informs the maintainer of the vulnerable software and checks in regular intervals whether the bug has been fixed in the most recent revision using the uploaded minimized failure-inducing input.

Automated input minimization

Automated input minimization (or test case reduction) is an automated debugging technique to isolate that part of the failure-inducing input that is actually inducing the failure.[58][59] If the failure-inducing input is large and mostly malformed, it might be difficult for a developer to understand what exactly is causing the bug. Given the failure-inducing input, an automated minimization tool would remove as many input bytes as possible while still reproducing the original bug. For instance, Delta Debugging is an automated input minimization technique that employs an extended binary search algorithm to find such a minimal input.[60]
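A simplified Delta-Debugging-style reducer can be sketched in a few lines. This shows the general chunk-removal idea rather than the exact ddmin algorithm from the cited work; `fails` is a caller-supplied predicate that re-runs the target and reports whether the failure still occurs:

```python
def minimize(data: bytes, fails) -> bytes:
    """Greedy chunk-removal reducer.

    `fails(candidate)` must return True iff the candidate input still
    triggers the original failure (e.g. by re-running the target on it).
    """
    chunk = len(data) // 2
    while chunk >= 1:
        i = 0
        while i < len(data):
            candidate = data[:i] + data[i + chunk:]   # try removing one chunk
            if candidate and fails(candidate):
                data = candidate                      # keep the smaller input
            else:
                i += chunk                            # chunk is needed; move on
        chunk //= 2
    return data

# Toy usage: the "failure" is triggered whenever the bytes b"BUG" appear.
reduced = minimize(b"xxxxBUGyyyyzzzz", lambda d: b"BUG" in d)
print(reduced)   # shrinks toward b"BUG"
```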

 

 

https://zh.wikipedia.org/zh-hans/模糊测试

Fuzz testing


Fuzz testing (fuzzing) is a software testing technique. Its core idea is to feed automatically or semi-automatically generated random data into a program and monitor the program for exceptions such as crashes or assertion failures, in order to discover possible program errors such as memory leaks. Fuzz testing is often used to find security vulnerabilities in software or computer systems.

Fuzz testing was first proposed by Barton Miller at the University of Wisconsin in 1988.[1][2] Their work not only used random, unstructured test data, but also systematically employed a range of tools to analyze a variety of software on different platforms and systematically analyzed the errors the tests uncovered. In addition, they made the source code, the test procedures and the raw result data publicly available.

Fuzzing tools fall mainly into two categories: mutation-based and generation-based. Fuzz testing can be performed as white-box, grey-box or black-box testing.[3] File formats and network protocols are the most common targets, but any program input can be fuzzed. Common inputs include environment variables, mouse and keyboard events, and sequences of API calls. Even items not usually considered inputs can be tested, such as data in a database or shared memory.

For security-related testing, data that crosses a trust boundary is the most interesting. For example, it is more important to fuzz code that handles files uploaded by arbitrary users than to fuzz the code that parses a server configuration file, because the configuration file can usually only be modified by users with a certain level of privilege.

History

In 1988, Professor Barton Miller at the University of Wisconsin first proposed fuzz testing in a class project.[1][2] The project was to develop a basic command-line fuzzer for testing Unix programs; the fuzzer would "bombard" the programs under test with random data until they crashed. A similar experiment was repeated in 1995 and extended to GUI applications (such as the X Window System), network protocols and system API libraries.[3] Later follow-up work tested command-line and GUI programs on Mac and Windows systems.

Earlier ideas related to fuzz testing can be traced back to 1983, when Steve Capps developed a Macintosh tool called The Monkey to test Mac applications; it was used to find bugs in MacPaint.[4]

Another early fuzzing tool was crashme, released in 1991, whose main purpose was to make Unix and Unix-like systems execute random machine instructions in order to test their robustness.[5]

Uses

Fuzz testing is usually applied as a black-box technique and typically offers a high return on investment.[6] Of course, fuzzing only takes a random sample of a system's behavior, so in many cases passing a fuzz test merely shows that the software handles exceptional input without crashing, not that its behavior is fully correct. Fuzz testing is therefore better viewed as an overall quality-assurance measure than as a substitute for exhaustive testing or formal methods. As a rough reliability metric, fuzzing can indicate which parts of a program deserve special attention, for example in the form of code audits, static analysis or rewriting.

Techniques

Fuzzing tools can usually be divided into two categories. Mutation-based fuzzers generate test data by modifying existing data samples, whereas generation-based fuzzers generate new test data by modelling the program's input.[3]

The simplest form of fuzzing feeds a stream of random bits into a program via the command line, network packets or events. This technique remains an effective way of finding bugs. Another common and easy-to-implement technique is to mutate existing inputs by randomly flipping bits or moving whole blocks of data. However, the most effective fuzzing requires an understanding of the format or protocol of the target, which can be obtained by reading its design specification. A specification-based fuzzer embodies the full specification and walks it using model-based test generation, introducing anomalies into the data contents, structures, messages and sequences. This kind of "smart" fuzzing is also known as robustness testing, syntax testing, grammar testing or fault injection.[7][8][9][10] Such protocol awareness can also be learned heuristically from examples; a related tool is Sequitur.

Fuzzing can also be combined with other techniques. White-box fuzzing combines symbolic execution with constraint solving. Evolutionary fuzzing uses heuristic feedback to perform automated exploratory testing effectively.