* Retry sequentially if multiprocessing do_bad_build_check detects failures
https://github.com/google/oss-fuzz/issues/5441
The error seen in the build log is:
"Whoops, the target binary crashed suddenly, before receiving any input
from the fuzzer!"
suggesting that the fuzzer crashed before it got to do anything.
Debugging locally, what I tend to see is:
a) in src/afl-forkserver.c's afl_fsrv_start, the read_s32_timed call
returns 0, and that triggers kill(fsrv->fsrv_pid, fsrv->kill_signal)
(SIGKILL)
b) read_s32_timed returns 0 because *stop_soon_p is non-zero at
restart_read
c) *stop_soon_p becomes non-zero in handle_stop_sig of
src/afl-fuzz-init.c after it receives SIGINT
d) that SIGINT is sent by the timeout command used in bad_build_check, so
it is that "outer" timeout process whose SIGINT then triggers
afl-forkserver's internal SIGKILL of the target process
I get improved results if I retry the killed-off fuzzers sequentially, as
sketched below.
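A minimal sketch of that retry strategy (not the actual OSS-Fuzz
implementation; do_bad_build_check and the target list are stand-ins for
whatever the real check uses):

```python
# Sketch only: do_bad_build_check is assumed to take a fuzz target and
# return True when the target passes the bad-build check.
import multiprocessing


def check_all(fuzz_targets, do_bad_build_check):
    with multiprocessing.Pool() as pool:
        results = pool.map(do_bad_build_check, fuzz_targets)

    # Targets that failed in the parallel run may just have been killed by
    # the outer timeout (see above); retry them one at a time so they are
    # not competing for CPU with other forkservers.
    failed = [target for target, ok in zip(fuzz_targets, results) if not ok]
    return all(do_bad_build_check(target) for target in failed)
```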
* Remove unneeded semicolons to fix presubmit
Co-authored-by: Abhishek Arya <inferno@chromium.org>
Delete unneeded LLVM tools, clang libraries and testing tools.
This reduces the image size from 1.71 GB to 901 MB.
It may be possible to improve on this further by deleting some LLVM
libraries, though I don't know which ones we could safely delete
(AFL++ might use some of them).
Related: https://github.com/google/oss-fuzz/issues/5170
* Fixes go coverage with modules
* Golang coverage HTML report: turn off modules.
Otherwise, we get the error:
"working directory is not part of a module"
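A minimal sketch of the fix, assuming the HTML report is produced with
`go tool cover` (the exact invocation in the coverage script may differ):
disable module mode just for that command.

```python
# Assumption for illustration: the report comes from `go tool cover`;
# GO111MODULE=off stops the go command from requiring an enclosing module.
import os
import subprocess

env = dict(os.environ, GO111MODULE='off')
subprocess.check_call(
    ['go', 'tool', 'cover', '-html=fuzz.cov', '-o', 'report.html'], env=env)
```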
Reduce image size by:
1. Not installing the Go toolchain in the final image; build the Go tools
in a separate image that doesn't become base-runner.
2. Downloading and removing the JVM zip in the same step, so the zip does
not persist in an intermediate image layer.
Precompile AFL like we already do for honggfuzz.
This saves about a minute of compile time when building AFL targets, by
doing the AFL build once in base-builder.
It only adds about 30 MB to the image size.
Reduce build time by doing the following:
1. Building the second-stage clang with a clang binary we download
from Chromium.
2. Changing NPROC to be half of the available cores instead of assuming 16
cores (see the sketch below). This still addresses the OOM when building on
GCB but speeds up local building.
3. Not installing recommended packages and using --depth 1 when possible
(very minor improvements compared to the above).
In all, this reduces the local build time of base-clang from 32 minutes
to 11 minutes.
Because build times are reduced, it will be easier to
iteratively develop the changes needed for #5170
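A rough sketch of the NPROC change in point 2 (the real change lives in
the base-clang build script; this snippet is illustrative only):

```python
# Use half of the available cores rather than assuming a 16-core machine.
# os.cpu_count() can return None in unusual environments, hence the fallback.
import os

nproc = max(1, (os.cpu_count() or 2) // 2)
print(f'building with -j{nproc}')
```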
Instead of recompiling libFuzzer each time we do a libFuzzer
build of a project, always use Clang's builtin version of libFuzzer.
Do this by copying the builtin libFuzzer to /usr/local/lib/FuzzingEngine.a.
This means that projects that aren't using -fsanitize=fuzzer now also
use the builtin libFuzzer, and we no longer need to compile a sanitized
libFuzzer for them.
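A hedged sketch of the copy step (the actual base-builder scripts may
locate the archive differently; the glob pattern below is an assumption
for illustration):

```python
# Assumption: clang ships libclang_rt.fuzzer-<arch>.a under its resource
# directory; copy it to the path OSS-Fuzz projects link against.
import glob
import shutil
import subprocess

resource_dir = subprocess.check_output(
    ['clang', '-print-resource-dir'], text=True).strip()
archive = glob.glob(f'{resource_dir}/lib/*/libclang_rt.fuzzer-*.a')[0]
shutil.copy(archive, '/usr/local/lib/FuzzingEngine.a')
```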
This change improves fuzzing performance and developer experience.
1. It improves developer experience by saving time spent compiling libFuzzer
when recompiling fuzzers.
The time saved is about 25 seconds on my machine.
This will make iterating on fuzzer integration much easier.
2. It improves fuzzer performance. The builtin libFuzzer isn't sanitized, so it is faster.
In some cases (see [here](https://bugs.chromium.org/p/chromium/issues/detail?id=934639))
a sanitized libFuzzer can waste 37% of its time running non-performant implementations
of code that the builtin libFuzzer handles almost instantaneously (assembly vs C code).
The consequences of improving developer experience and
fuzzer performance aren't so easy to measure (though
we will look for perf consequences on ClusterFuzz).
But some of the consequences of saving time compiling libFuzzer
are easy to figure out and quite important. They are:
1. Saving about $14,646 a year on build costs. Based on the following:
build time saved (on GCB): ~38 seconds
libFuzzer builds per day: 990
days per year: 365
price per build-minute (32-core instance, https://cloud.google.com/build/pricing): $0.064
38/60 * 0.064 * 990 * 365 = $14,646
2. Speeding up infra-tests.
Many of the integration tests build fuzzers and so building libFuzzer
was a considerable bottleneck.
On my many-core machine the savings were good and noticeable
(and are probably larger on the less performant CI machines).
| Test run         | With compiling libFuzzer | Without compiling libFuzzer |
| ---------------- | ------------------------ | --------------------------- |
| Parallel tests   | 45                       | 34                          |
| Sequential tests | 276                      | 190                         |
3. Speeding up CIFuzz.
CIFuzz needs to be fast, but it spends about 40 seconds compiling libFuzzer.
In a run where no bugs are discovered, which is intended to take about 20 minutes,
compiling libFuzzer accounts for about 3% of the time (40/(20*60)*100).
Now we don't need to waste that time.
See https://github.com/google/oss-fuzz/issues/5180, which this partially fixes.
This change also fixes https://github.com/google/oss-fuzz/issues/2312 and https://github.com/google/oss-fuzz/issues/4677.
* Makes the vitess build local,
as it uses vitess.io instead of GitHub
* Completes minify project
* Completes quic-go
* Local build for nats project
* Completes ipfs
* Run go mod tidy after adding the go module
* Correct bash sequence for go mod tidy
* go: correct bash condition for changing directory
* go-json-iterator: uses git clone
so as to copy the fuzz target into the right directory
* go: uses tags when running go list
* go-redis: uses git clone and builds local fuzz target
* cascadia: uses git clone instead of go get