greatly improve capabilities of the fuzzer #23416


Open
wants to merge 3 commits into master
Conversation

gooncreeper
Contributor

This PR significantly improves the capabilities of the fuzzer. For comparison, here is a ten-minute head-to-head between the old and new fuzzer implementations (with the newly included fuzz tests):

-- Old --
```
Total Runs: 49020931
Unique Runs: 1044131 (2.1%)
Speed (Runs/Second): 81696
Coverage: 2069 / 15866 (13.0%)
```

(Note: Unique Runs is highly inflated due to the inefficiency of the old implementation.)

-- New --
```
Total Runs: 537039526
Unique Runs: 1511 (0.0%)
Speed (Runs/Second): 894950
Coverage: 3000 / 15719 (19.1%)
Examples: `while(C)i(){}else|`
          `{y:n()align(b)addrspace`
          `switch(P){else=>`
          `[:l]align(_:r:l)R`
          `(if(b){defer{nosuspend`
          `union(enum(I))`
```

NOTE: You have to rebuild the compiler because new fuzzing instrumentation is now enabled for memory loads.

The changes made to the fuzzer to accomplish this mostly include tracking memory reads from .rodata to determine new runs, new mutations (especially the ones that insert constant values gathered from .rodata reads and __sanitizer_cov_trace_const_cmp), and minimizing found inputs. Additionally, runs per second have been greatly increased by generating smaller inputs and by avoiding clearing the 8-bit pc counters.
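To illustrate the const-comparison harvesting idea: SanitizerCoverage's trace-cmp instrumentation calls `__sanitizer_cov_trace_const_cmp8(Arg1, Arg2)` for 8-byte comparisons against a compile-time constant, with the constant in the first argument. A fuzzer can collect those constants into a dictionary that an insert-constant mutation draws from. This is a minimal C sketch, not the PR's actual (Zig) code; `dict` and `dict_add` are made-up names:

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical dictionary of constants harvested from comparisons; a real
 * fuzzer would keep this in its per-process state. */
#define DICT_CAP 4096
static uint64_t dict[DICT_CAP];
static size_t dict_len;

static void dict_add(uint64_t v) {
    for (size_t i = 0; i < dict_len; i++)
        if (dict[i] == v) return;      /* already recorded */
    if (dict_len < DICT_CAP)
        dict[dict_len++] = v;
}

/* SanitizerCoverage (-fsanitize-coverage=trace-cmp) emits calls like this
 * for comparisons where one operand is a compile-time constant (Arg1).
 * Recording it lets a mutation later splice the exact value the target
 * compares against into the input. */
void __sanitizer_cov_trace_const_cmp8(uint64_t Arg1, uint64_t Arg2) {
    (void)Arg2;
    dict_add(Arg1);
}
```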

An additional feature is that the length of the input file is now stored and the old input is rerun upon start. This does not close #20803, though, since the input itself is not output (it can, however, be easily retrieved from the cache directory).
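The replay-on-start idea can be sketched as: persist the input length alongside the data so a restarted fuzzer knows how many bytes of the input file belonged to the previous run. The C sketch below assumes a simple length-prefixed layout; the actual on-disk format in this PR may differ, and the function names are illustrative:

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Write the input with a fixed-size length header so a later process can
 * recover exactly the bytes of the previous run. Layout is an assumption
 * for illustration only. */
static int save_input(const char *path, const uint8_t *data, uint64_t len) {
    FILE *f = fopen(path, "wb");
    if (!f) return -1;
    int ok = fwrite(&len, sizeof len, 1, f) == 1 &&
             fwrite(data, 1, len, f) == len;
    fclose(f);
    return ok ? 0 : -1;
}

/* Read the header, then exactly that many payload bytes; returns the
 * payload length, or -1 on error. A fuzzer would rerun this input once
 * on startup before resuming mutation. */
static int64_t load_input(const char *path, uint8_t *buf, uint64_t cap) {
    FILE *f = fopen(path, "rb");
    if (!f) return -1;
    uint64_t len = 0;
    if (fread(&len, sizeof len, 1, f) != 1 || len > cap ||
        fread(buf, 1, len, f) != len) {
        fclose(f);
        return -1;
    }
    fclose(f);
    return (int64_t)len;
}
```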

Other changes made to the fuzzer include more logical initialization, using one shared file `in` for inputs, creating corpus files with proper sizes, and using hexadecimal-numbered corpus files for simplicity. Additionally, volatile was removed from MemoryMappedList since all that is needed is a guarantee that the compiler has done the writes, which is already accomplished with atomic ordering.

Furthermore, I added several new fuzz tests to gauge the fuzzer's efficiency. I also tried adding a test for zstandard decompression, which the fuzzer crashed within 60,000 runs (less than a second).

Bug fixes include:

  • Fixed a race condition when multiple fuzzer processes needed to use the same coverage file.
  • Web interface stats now update even when unique runs is not changing.
  • Fixed tokenizer.testPropertiesUpheld to allow stray carriage returns since they are valid whitespace.
  • Closes #23180 (fuzzer: don't remove or modify byte of empty input).

Possible Improvements:

  • Remove the 8-bit pc counting code in favor of a call to a sanitizer function that sets a flag when a new pc is hit (similar to how the __sanitizer_cov_load functions already operate).
  • A less basic input minimization function. It could also try splitting inputs in two between each byte to see if the two halves together hit the same pcs. This is useful since smaller inputs are usually much more efficient.
  • Deterministic mutations when a new input is found.
  • Culling out corpus inputs that are redundant due to smaller inputs already hitting their pcs and memory addresses.
  • Applying multiple mutations during dry spells.
  • Prioritizing some corpus inputs.
  • Creating a list of the most successful input splices (which would likely contain grammar keywords) and creating a custom mutation for adding them.
  • Removing some less-efficient mutations.
  • Store effective mutations to disk for the benefit of future runs.
  • Counting __sanitizer_cov @returnAddresses in determining unique runs.
  • Optimize __sanitizer_cov_trace_const_cmp methods (the use of an ArrayHashMap is not too fast).
  • Processor affinity
  • Exclude fuzzer's .rodata
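As a concrete baseline for the minimization point above, a greedy pass that tries dropping one byte at a time and keeps any shorter input the target still finds interesting looks roughly like the C sketch below. Everything here is illustrative: `still_interesting_fn` stands in for re-running the target and comparing the pcs/memory addresses hit, and `contains_abc` is a toy predicate for demonstration.

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Callback standing in for "re-run the target and check that the same
 * coverage is still produced". */
typedef int (*still_interesting_fn)(const uint8_t *data, size_t len);

/* Greedy byte-removal minimization: try deleting each byte in turn, keep
 * the shorter input whenever it is still interesting, and loop until a
 * full pass makes no progress. Shrinks `data` in place; returns the new
 * length. */
size_t minimize(uint8_t *data, size_t len, still_interesting_fn pred) {
    int progress = 1;
    while (progress) {
        progress = 0;
        for (size_t i = 0; i < len; i++) {
            uint8_t removed = data[i];
            memmove(data + i, data + i + 1, len - i - 1);
            if (pred(data, len - 1)) {
                len--;                    /* keep the shorter input */
                progress = 1;
            } else {                      /* undo the deletion */
                memmove(data + i + 1, data + i, len - i - 1);
                data[i] = removed;
            }
        }
    }
    return len;
}

/* Toy predicate: the input stays "interesting" while it still contains
 * all of 'a', 'b' and 'c'. */
int contains_abc(const uint8_t *data, size_t len) {
    int a = 0, b = 0, c = 0;
    for (size_t i = 0; i < len; i++) {
        a |= data[i] == 'a';
        b |= data[i] == 'b';
        c |= data[i] == 'c';
    }
    return a && b && c;
}
```

The bullet's splitting idea would sit on top of this: before byte-level removal, try cutting the input in two at each offset and recurse on any half (or pair of halves) that remains interesting.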

Nevertheless, I feel the fuzzer is in a viable place to start being useful (as demonstrated by the find in #23413).

@andrewrk
Member

> Additionally, volatile was removed from MemoryMappedList since all that is needed is a guarantee that the compiler has done the writes, which is already accomplished with atomic ordering.

It's the other way around. You don't need to guarantee atomicity or ordering of the writes with respect to other memory loads and stores. Also, depending on the ordering, atomics don't cause the writes to have side effects.

volatile on the other hand is exactly the right tool for the job. It only causes the volatile writes to be considered to have side effects, and to be ordered with respect to each other, which is exactly the required semantics.
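The distinction can be shown with a small C sketch (mine, not code from this PR): through a plain pointer the compiler may coalesce the two stores and drop the first, since only the final value is observable in the abstract machine; through a `volatile` pointer both stores are side effects that must be performed and stay ordered with respect to each other, which is the property a memory-mapped region read by another process needs. The function names are made up for illustration.

```c
#include <stdint.h>

/* Plain pointer: the compiler may legally delete the first store, because
 * only the final value 2 is observable to this program. */
void publish_plain(uint32_t *slot) {
    *slot = 1;
    *slot = 2;
}

/* Volatile pointer: both stores must be emitted, in this order, so an
 * external observer (e.g. another process reading the same memory-mapped
 * file) can see 1 before 2. */
void publish_volatile(volatile uint32_t *slot) {
    *slot = 1;
    *slot = 2;
}
```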

@gooncreeper
Contributor Author

> It's the other way around. You don't need to guarantee atomicity or ordering of the writes with respect to other memory loads and stores. Also, depending on the ordering, atomics don't cause the writes to have side effects.
>
> volatile on the other hand is exactly the right tool for the job. It only causes the volatile writes to be considered to have side effects, and to be ordered with respect to each other, which is exactly the required semantics.

Thanks for clarifying, it makes a lot more sense when thinking about the side effects. I will push a commit reverting this once I figure out the CI failures.


Successfully merging this pull request may close these issues.

write fuzz inputs to a shared memory region before running a task