We only need to make sure that if the wrapped compiler (clang) prints
something to stderr, we report that to stderr in the wrapper, __even
if__ the compiler exits with 0. This is because, when starting up, Bazel
invokes the compiler with various flags to detect which features are
available and which flags to pass during regular compilation. That
detection is based on the stderr of the compiler invocation, so the
wrapper must forward stderr faithfully. Finally, Bazel uses stderr to
determine whether the compiler is clang, gcc, or something else. If we
don't forward stderr, Bazel assumes we are using a generic compiler and
then gets confused about what to generate in the toolchain.
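To make the behavior concrete, here is a minimal Python sketch of the forwarding logic described above; it is only an illustration (the actual wrapper is a separate binary, and the clang path below is an assumption):
```python
#!/usr/bin/env python3
"""Illustrative wrapper: forward clang's output and exit code unchanged."""
import subprocess
import sys

# Hypothetical path to the real compiler; the actual wrapper resolves this itself.
CLANG = "/usr/local/bin/clang-15"

result = subprocess.run([CLANG] + sys.argv[1:], capture_output=True, text=True)

# Forward stdout and stderr verbatim, even when clang exits with 0, so that
# Bazel's toolchain autoconfiguration can parse the compiler's stderr.
sys.stdout.write(result.stdout)
sys.stderr.write(result.stderr)
sys.exit(result.returncode)
```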
Currently, this is the diff from the toolchain autoconfig when Bazel
starts up:
```diff
--- w-clang/bazel-w-clang/external/bazel_tools~cc_configure_extension~local_config_cc/BUILD 2023-09-15 16:54:56.131676995 +0000
+++ w-jcc/bazel-w-jcc/external/bazel_tools~cc_configure_extension~local_config_cc/BUILD 2023-09-15 18:17:24.486499047 +0000
@@ -85,16 +85,13 @@
"/usr/include/x86_64-linux-gnu",
"/usr/include",
"/usr/local/lib/clang/15.0.0/share",
- "/usr/include/c++/9",
- "/usr/include/x86_64-linux-gnu/c++/9",
- "/usr/include/c++/9/backward",
"/usr/local/include/c++/v1"],
tool_paths = {"ar": "/usr/bin/ar",
"ld": "/usr/bin/ld",
"llvm-cov": "/usr/local/bin/llvm-cov",
"llvm-profdata": "/usr/local/bin/llvm-profdata",
"cpp": "/usr/bin/cpp",
- "gcc": "/usr/local/bin/clang-15",
+ "gcc": "/usr/local/bin/clang-jcc",
"dwp": "/usr/bin/dwp",
"gcov": "/usr/bin/gcov",
"nm": "/usr/bin/nm",
```
The 3 missing include directories could be because of
7a4eefa869/tools/cpp/unix_cc_configure.bzl (L316-L321)
but I could not find a way to force this. So far, it does not look like it
is causing problems though.
---------
Signed-off-by: Mihai Maruseac <mihaimaruseac@google.com>
Adds a few features that are very beneficial for CI fuzzing, e.g.
AFL_IGNORE_SEED_PROBLEMS.
This also includes several minor bug fixes.
---------
Co-authored-by: jonathanmetzman <31354670+jonathanmetzman@users.noreply.github.com>
Mitigates the known issue where we don't automatically change to the
`WORKDIR` defined in `Dockerfile` when running cloud experiments.
Question:
Would it be preferred if I introduced a flag for this?
(e.g., `--use_workdir` or `--workdir=/src/<project>`)
While this would give more flexibility, I feel `cd`-ing to the `WORKDIR`
should always be preferred if we want the cloud experiments to behave the
same as local ones.
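A minimal sketch of how the `WORKDIR` could be read from a project's `Dockerfile` before running the command; the default path and parsing below are assumptions, not the actual implementation:
```python
import re

def get_workdir(dockerfile_path: str) -> str:
    """Return the last WORKDIR declared in a Dockerfile, defaulting to /src."""
    workdir = "/src"  # Hypothetical default; relative WORKDIRs are not resolved here.
    with open(dockerfile_path) as f:
        for line in f:
            match = re.match(r"\s*WORKDIR\s+(\S+)", line, re.IGNORECASE)
            if match:
                workdir = match.group(1)
    return workdir

# The experiment command could then be prefixed with `cd <workdir> && ...`.
```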
In #9658, all jar files of the project were extracted and dumped into a
directory, and then duplicate classes were removed. But the removal logic
contained a bug: the class files for fuzzers in the base of the dump
directory were accidentally removed, and because they have no duplicates
coming from the jar files, they were missing from the resulting coverage
report. This PR fixes the bug by ignoring the base directory of the class
dump location when removing duplicate classes, ensuring the fuzzer classes
exist in the final coverage report.
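A hedged sketch of the intended deduplication rule; the directory layout, keying on file names, and helper are hypothetical and only illustrate skipping the base directory:
```python
import os

def remove_duplicate_classes(dump_dir):
    """Remove duplicated .class files extracted from project jars, but skip
    the base of the dump directory, where the fuzzer classes live (they have
    no jar duplicates, so removing them would drop them from the report)."""
    dump_dir = os.path.normpath(dump_dir)
    seen = set()
    for root, _, files in os.walk(dump_dir):
        if os.path.normpath(root) == dump_dir:
            continue  # Ignore the base directory so fuzzer classes are kept.
        for name in files:
            if not name.endswith(".class"):
                continue
            if name in seen:
                os.remove(os.path.join(root, name))  # duplicate from another jar
            else:
                seen.add(name)
```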
Signed-off-by: Arthur Chan <arthur.chan@adalogics.com>
This has a set of performance improvements in Fuzz Introspector; the two
changes with the most impact are:
- removal of some expensive and unnecessary loops in the code
- switching the parsing of large YAML files from pure Python to a C
backend (see the sketch below).
Locally this makes OpenSSL builds take approximately 70 minutes, whereas
in the cloud build it seems to take 20+ hours. A similar impact happens
across several large Java projects.
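For the YAML change, the usual way to get a C backend in PyYAML is to use the libyaml-based loader when it is available; this is a sketch, not the exact Fuzz Introspector code, and the file name is illustrative:
```python
import yaml

# Prefer the libyaml C extension (CSafeLoader) when PyYAML was built with it;
# fall back to the pure-Python SafeLoader otherwise.
try:
    Loader = yaml.CSafeLoader
except AttributeError:
    Loader = yaml.SafeLoader

with open("summary.yaml") as f:  # illustrative file name
    data = yaml.load(f, Loader=Loader)
```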
Signed-off-by: David Korczynski <david@adalogics.com>
Corrected a typo found when reading a function's documentation.
I read contributing.md and the code of conduct, and signed the Google
Individual Contributor License Agreement.
Two updates since last:
- Argument name extraction has improved for all functions, and it is
added to `summary.json` files.
- All function names as they appear in the binaries are saved. This has
an effect only for C++, where the underlying function name is a mangled
function name. Having the raw name makes it possible to e.g. demangle it
and get a better-looking name than what currently exists in the reports
(see the sketch below).
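As an example of what the raw names enable, a mangled C++ name can be demangled after the fact, e.g. with the third-party `cxxfilt` package; this is illustrative and not part of Fuzz Introspector:
```python
import cxxfilt  # third-party package wrapping the C++ ABI demangler

raw_name = "_ZN3Foo3barEi"         # mangled name as stored from the binary
print(cxxfilt.demangle(raw_name))  # -> "Foo::bar(int)"
```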
Signed-off-by: David Korczynski <david@adalogics.com>
This primarily adds additional output, in particular an annotated list of
nodes in the graph corresponding to the first layer of nodes called by a
given fuzzer.
Signed-off-by: David Korczynski <david@adalogics.com>
Jacoco is used for code coverage report generation for JVM projects. It
makes use of the dumped class files to generate the report, but only Java
classes covered by at least one fuzzer are included in the report; other
files in the project are ignored. This PR points the Jacoco class file
discovery folder to the original compiled project jar file in order to
force Jacoco to generate a coverage report with sources for all Java
classes of the project.
This came out of a need from Fuzz Introspector, where we noticed some
discrepancies between the static analysis and the code coverage reports.
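Conceptually, the change amounts to handing JaCoCo's report step the project's compiled jar as the class-file location instead of the dump of covered classes; all paths below are assumptions used only for illustration:
```python
import subprocess

# Illustrative paths; the real locations come from the OSS-Fuzz build.
JACOCO_CLI = "/opt/jacoco/jacococli.jar"
EXEC_FILE = "/out/jacoco.exec"
PROJECT_JAR = "/out/project.jar"  # original compiled project jar
SOURCE_DIR = "/src/project"
REPORT_DIR = "/out/report"

subprocess.run([
    "java", "-jar", JACOCO_CLI, "report", EXEC_FILE,
    # Pointing --classfiles at the full project jar makes JaCoCo report on
    # every class in the project, not only classes touched by a fuzzer.
    "--classfiles", PROJECT_JAR,
    "--sourcefiles", SOURCE_DIR,
    "--html", REPORT_DIR,
], check=True)
```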
Signed-off-by: Arthur Chan <arthur.chan@adalogics.com>
---------
Signed-off-by: Arthur Chan <arthur.chan@adalogics.com>
Signed-off-by: David Korczynski <david@adalogics.com>
Co-authored-by: jonathanmetzman <31354670+jonathanmetzman@users.noreply.github.com>
Co-authored-by: David Korczynski <david@adalogics.com>
- Print errors properly when compiling for C++ to begin with. Previously
we were masking errors from the initial C++ compilation.
- Use the return code from clang.
- Prefer printing the C++ error over the C error.
- Use Go naming conventions.
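One possible reading of these bullets, as a hedged Python sketch (the actual wrapper is not written in Python, and the compiler invocations and ordering are assumptions):
```python
import subprocess
import sys

def compile_source(args):
    """Try compiling as C++ first, then as C; prefer the C++ diagnostics."""
    cxx = subprocess.run(["clang++"] + args, capture_output=True, text=True)
    if cxx.returncode == 0:
        sys.stderr.write(cxx.stderr)  # still forward any warnings
        return cxx.returncode
    cc = subprocess.run(["clang"] + args, capture_output=True, text=True)
    if cc.returncode == 0:
        sys.stderr.write(cc.stderr)
        return cc.returncode
    # Both failed: print the C++ diagnostics rather than the C ones and
    # propagate clang's own return code instead of a synthetic one.
    sys.stderr.write(cxx.stderr)
    return cxx.returncode

if __name__ == "__main__":
    sys.exit(compile_source(sys.argv[1:]))
```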
This allows us to run arbitrary scripts inside project containers on
GCB.
All files in `/workspace/out` are uploaded to a specified GCS path at the
end.
Usage:
```bash
$ python project_experiment.py --project libxml2 --command 'python3 /workspace/hello.py' \
--experiment_name test-ochang \
--upload_output gs://BUCKET/to/upload/NAME
```
(Where /workspace is the OSS-Fuzz repository).
Currently a `project.yaml` with wrong key names will result in the
following error:
```
...
return all([check(changed_files) for check in checks])
File "infra/presubmit.py", line 229, in check_project_yaml
return all([_check_one_project_yaml(path) for path in paths])
File "infra/presubmit.py", line 229, in <listcomp>
return all([_check_one_project_yaml(path) for path in paths])
File "infra/presubmit.py", line 223, in _check_one_project_yaml
return checker.do_checks()
File "infra/presubmit.py", line 131, in do_checks
check_function()
File "infra/presubmit.py", line 179, in check_valid_section_names
self.error(f'{name} is not a valid section name ({valid_names})')
NameError: name 'valid_names' is not defined
```
This fixes it.
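The failure is in the error-reporting line itself: the f-string refers to a `valid_names` variable that was never defined, so the intended validation message is replaced by a NameError. A hedged sketch of the kind of fix, with hypothetical names and key list:
```python
# Hypothetical set of allowed top-level project.yaml keys (not the real list).
VALID_SECTION_NAMES = {'homepage', 'language', 'primary_contact', 'sanitizers'}

def check_valid_section_names(sections, error):
    """Report any unknown section name via `error` instead of crashing."""
    for name in sections:
        if name not in VALID_SECTION_NAMES:
            # Referencing a defined constant here avoids the NameError and
            # lets the intended validation message reach the user.
            error(f'{name} is not a valid section name '
                  f'({sorted(VALID_SECTION_NAMES)})')

# Example: check_valid_section_names({'homepage', 'typo_key'}, print)
```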
Signed-off-by: David Korczynski <david@adalogics.com>
Previously the cloud build exited before we uploaded the logs. Now we
write a file to record build success and, if the build failed, report the
failure only after the logs have been uploaded.
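A hedged sketch of the described flow; the marker file name, build step, and upload destination are all assumptions:
```python
import pathlib
import subprocess
import sys

SUCCESS_MARKER = pathlib.Path('/workspace/build_success')  # hypothetical path

def run_build():
    result = subprocess.run(['bash', 'build.sh'])  # illustrative build step
    if result.returncode == 0:
        SUCCESS_MARKER.touch()  # record success instead of exiting early

def upload_logs():
    subprocess.run(['gsutil', 'cp', '/workspace/build.log',
                    'gs://BUCKET/logs/build.log'])  # illustrative destination

if __name__ == '__main__':
    run_build()
    upload_logs()  # logs always get uploaded
    # Only now do we report failure, after the logs are safely uploaded.
    sys.exit(0 if SUCCESS_MARKER.exists() else 1)
```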