Sancov based ctx/ngram LibAFL fuzzers #1956

Closed
wants to merge 26 commits

Conversation

@tokatoka (Contributor, Author) commented Feb 21, 2024

Hello, fuzzbench team.

We implemented ctx and ngram coverage based on sanitizer coverage
(libafl_ctx_large_map, libafl_ctx_mid_map, libafl_ctx_small_map, libafl_ngram_large_map, libafl_ngram_mid_map, libafl_ngram_small_map).
The previous implementation was based on AFL's LLVM pass, which had a negative impact on performance.

Therefore, we want to compare how this new implementation performs against the baseline.
Both ctx and ngram have three variants, depending on the map size we use.
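
A minimal C sketch of the idea, for readers unfamiliar with it: SanitizerCoverage's trace-pc-guard callback can fold the IDs of the last few edges (ngram) and a hash of the current call context (ctx) into the coverage-map index. This is not LibAFL's actual implementation (that lives in libafl_targets and is written in Rust); the map size, history depth, and mixing function below are illustrative assumptions.

#include <stdint.h>
#include <string.h>

#define MAP_SIZE (1u << 16)      /* illustrative "mid" map size; large/small variants would only change this */

static uint8_t  coverage_map[MAP_SIZE];
static uint32_t prev_edges[3];   /* last N-1 edge IDs for an ngram-4-style history (assumed depth) */
static uint32_t ctx_hash;        /* calling-context hash; a ctx build would update this on call/return */

/* Standard sancov hook: assign a unique ID to every edge guard at startup. */
void __sanitizer_cov_trace_pc_guard_init(uint32_t *start, uint32_t *stop) {
    static uint32_t next_id;
    for (uint32_t *g = start; g < stop; g++)
        if (*g == 0)
            *g = ++next_id;
}

/* Standard sancov hook: called on every instrumented edge. */
void __sanitizer_cov_trace_pc_guard(uint32_t *guard) {
    uint32_t edge = *guard;
    uint32_t idx  = edge;
    for (int i = 0; i < 3; i++)            /* ngram: fold in the recent edge history */
        idx ^= prev_edges[i] >> (i + 1);
    idx = (idx ^ ctx_hash) % MAP_SIZE;     /* ctx: fold in the calling-context hash */
    coverage_map[idx]++;
    memmove(&prev_edges[1], &prev_edges[0],
            sizeof(prev_edges) - sizeof(prev_edges[0]));   /* slide the history window */
    prev_edges[0] = edge;
}

Doing the extra bookkeeping inside the compiler-provided sancov callbacks is what avoids the separate AFL-style LLVM pass that hurt performance before.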

@tokatoka (Contributor, Author) commented Feb 23, 2024

This one is ready.

The command is
/gcbrun run_experiment.py -a --experiment-config /opt/fuzzbench/service/experiment-config.yaml --experiment-name 2024-02-23-libafl --fuzzers libafl_ctx_large_map libafl_ctx_mid_map libafl_ctx_small_map libafl_ngram_large_map libafl_ngram_mid_map libafl_ngram_small_map

@DonggeLiu
Could you run this experiment? :)

@DonggeLiu (Contributor)

/gcbrun run_experiment.py -a --experiment-config /opt/fuzzbench/service/experiment-config.yaml --experiment-name 2024-02-26-libafl --fuzzers libafl_ctx_large_map libafl_ctx_mid_map libafl_ctx_small_map libafl_ngram_large_map libafl_ngram_mid_map libafl_ngram_small_map

@DonggeLiu (Contributor)

Oops, could you please add a dummy change to enable PR experiments?
Thanks :)

@tokatoka (Contributor, Author)

Hi, I made the change. I also added the more recent LibAFL as a baseline for comparison.

The command would be

/gcbrun run_experiment.py -a --experiment-config /opt/fuzzbench/service/experiment-config.yaml --experiment-name 2024-02-23-libafl --fuzzers libafl_ctx_large_map libafl_ctx_mid_map libafl_ctx_small_map libafl_ngram_large_map libafl_ngram_mid_map libafl_ngram_small_map libafl_280224

@DonggeLiu (Contributor)

/gcbrun run_experiment.py -a --experiment-config /opt/fuzzbench/service/experiment-config.yaml --experiment-name 2024-02-29-libafl --fuzzers libafl_ctx_large_map libafl_ctx_mid_map libafl_ctx_small_map libafl_ngram_large_map libafl_ngram_mid_map libafl_ngram_small_map libafl_280224

@tokatoka (Contributor, Author) commented Mar 6, 2024

Hello @DonggeLiu
Could you do another run?

In this change, we:

- based the LibAFL fuzzers on the same commit as #1840 so we can make a fair comparison (sorry, the previous run was based on the latest LibAFL; that was my fault :man_facepalming:)
- added value-profile (named libafl_fuzzbench_vp_alter) and ngram-8 fuzzers, both new sancov-based implementations (a rough sketch of the value-profile idea follows after this list)
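
For reference, "value profile" here means crediting partial progress on comparisons via SanitizerCoverage's trace-cmp hooks (the build adds trace-cmp to the -fsanitize-coverage flags). The C sketch below only illustrates the idea; LibAFL's actual hooks, map size, and indexing differ, and all names here are assumptions.

#include <stdint.h>

#define CMP_MAP_SIZE 65536u            /* illustrative size */
static uint8_t cmp_map[CMP_MAP_SIZE];

/* Standard sancov hook, called on every 8-byte integer comparison.
 * Counting how many bits of the two operands already agree rewards inputs
 * that get "closer" to satisfying a comparison, even before it flips. */
void __sanitizer_cov_trace_cmp8(uint64_t arg1, uint64_t arg2) {
    uintptr_t pc = (uintptr_t)__builtin_return_address(0);
    uint32_t matching_bits = 64u - (uint32_t)__builtin_popcountll(arg1 ^ arg2);
    uint32_t idx = (uint32_t)((pc ^ matching_bits) % CMP_MAP_SIZE);
    cmp_map[idx]++;
}

ngram-8 is the same idea as ngram-4, just with a deeper edge-history window.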

The command is

/gcbrun run_experiment.py -a --experiment-config /opt/fuzzbench/service/experiment-config.yaml --experiment-name 2024-03-06-libafl --fuzzers libafl_fuzzbench_ngram4 libafl_fuzzbench_ngram8 libafl_fuzzbench_ctx libafl_fuzzbench_vp_alter

@DonggeLiu (Contributor) commented Mar 11, 2024

Thanks @tokatoka! Unfortunately, we are making some changes to FuzzBench at the moment and may need a few days before it is ready for experiments :)

It seems to be ready now; I will run it below.

@DonggeLiu (Contributor)

/gcbrun run_experiment.py -a --experiment-config /opt/fuzzbench/service/experiment-config.yaml --experiment-name 2024-03-11-libafl --fuzzers libafl_fuzzbench_ngram4 libafl_fuzzbench_ngram8 libafl_fuzzbench_ctx libafl_fuzzbench_vp_alter

@DonggeLiu (Contributor)

Experiment 2024-03-11-libafl data and results will be available later at:
The experiment data.
The experiment report.

@tokatoka (Contributor, Author)

This is done. Thank you! 👍

@tokatoka closed this Apr 12, 2024
@tokatoka reopened this Apr 16, 2024
@tokatoka (Contributor, Author)

Hi @DonggeLiu. This is not the preparatory test for the long-run experiment that I was talking about last week, but an additional experiment for our fuzzer comparison paper.

In this experiment we want to evaluate the degree of interference between two of a fuzzer's components (i.e., whether components X and Y of the fuzzer interact better or worse when combined).

Can you run the experiment for the following 5 fuzzers? The command is

/gcbrun run_experiment.py -a --experiment-config /opt/fuzzbench/service/experiment-config.yaml --experiment-name 2024-04-17-libafl --fuzzers libafl_fuzzbench_fast libafl_fuzzbench_fast_ngram4 libafl_fuzzbench_fast_value_profile libafl_fuzzbench_ngram libafl_fuzzbench_value_profile

@DonggeLiu (Contributor)

Sure, could you please fix the presubmit failure? Thanks!

@tokatoka (Contributor, Author)

done!

@DonggeLiu (Contributor)

/gcbrun run_experiment.py -a --experiment-config /opt/fuzzbench/service/experiment-config.yaml --experiment-name 2024-04-22-libafl --fuzzers libafl_fuzzbench_fast libafl_fuzzbench_fast_ngram4 libafl_fuzzbench_fast_value_profile libafl_fuzzbench_ngram libafl_fuzzbench_value_profile

@DonggeLiu (Contributor)

The experiment failed to launch because of an invalid fuzzer name.
Maybe libafl_fuzzbench_ngram should be libafl_fuzzbench_ngram4 or libafl_fuzzbench_ngram8?

@tokatoka (Contributor, Author)

I'm sorry 😔
It is libafl_fuzzbench_ngram8.

So the command is
/gcbrun run_experiment.py -a --experiment-config /opt/fuzzbench/service/experiment-config.yaml --experiment-name 2024-04-22-libafl --fuzzers libafl_fuzzbench_fast libafl_fuzzbench_fast_ngram4 libafl_fuzzbench_fast_value_profile libafl_fuzzbench_ngram8 libafl_fuzzbench_value_profile

@DonggeLiu (Contributor)

/gcbrun run_experiment.py -a --experiment-config /opt/fuzzbench/service/experiment-config.yaml --experiment-name 2024-04-23-libafl --fuzzers libafl_fuzzbench_fast libafl_fuzzbench_fast_ngram4 libafl_fuzzbench_fast_value_profile libafl_fuzzbench_ngram8 libafl_fuzzbench_value_profile

@tokatoka (Contributor, Author)

Hi. I adjusted the map size because it was previously using a map that was too large.

Can you run it again, along with an additional fuzzer, libafl_fuzzbench_ngram4?

/gcbrun run_experiment.py -a --experiment-config /opt/fuzzbench/service/experiment-config.yaml --experiment-name 2024-04-4-libafl --fuzzers libafl_fuzzbench_fast_ngram4 libafl_fuzzbench_ngram4

@DonggeLiu (Contributor)

/gcbrun run_experiment.py -a --experiment-config /opt/fuzzbench/service/experiment-config.yaml --experiment-name 2024-04-24-libafl --fuzzers libafl_fuzzbench_fast_ngram4 libafl_fuzzbench_ngram4

@tokatoka (Contributor, Author) commented May 2, 2024

Hi @DonggeLiu, I waited a few days for the 2024-04-24-libafl experiment, but the results do not look complete.
I downloaded the CSV from https://www.fuzzbench.com/reports/experimental/2024-04-24-libafl/data.csv.gz.
However, it does not contain results for the full 24 hours, and most trials end at around 17.5 hours. I suspect we ran out of resources, since another experiment was conducted on the same day (2024-04-24-full-muttfuzz).

Could you run this again for me? Thanks.

@DonggeLiu (Contributor)

/gcbrun run_experiment.py -a --experiment-config /opt/fuzzbench/service/experiment-config.yaml --experiment-name 2024-05-03-libafl --fuzzers libafl_fuzzbench_fast_ngram4 libafl_fuzzbench_ngram4

@DonggeLiu (Contributor)

> Hi @DonggeLiu , I waited for a few days, for the 2024-04-24-libafl experiment but the results do not look complete. I downloaded the csv from here https://www.fuzzbench.com/reports/experimental/2024-04-24-libafl/data.csv.gz. However, the result does not contain the result for 24 hours. and most experiments ends in 17.5 hours. I suspect it is because of out of resource since there was another different experiment conducted on the same day (2024-04-24-full-muttfuzz).

Thanks for reporting this, @tokatoka! I've re-started the experiment in case it is flaky.
Usually, we expect the experiment to finish within four days. Please feel free to ping me if it takes longer.

While waiting for the new experiment to complete, did you have a chance to look into the run log of your experiment?
Sometimes, the fuzzer's run log may have clues, too.
I am curious to learn the reason and see if there is anything we can improve.

@tokatoka (Contributor, Author) commented May 3, 2024

> While waiting for the new experiment to complete, did you have a chance to look into the run log of your experiment?
> Sometimes, the fuzzer's run log may have clues, too.

For the LibAFL log, we do not log anything, so I can only guess from the last corpus update time.

These days I almost always see a weird restart (or, you could say, VM preemption) in the results.
That is:

  1. First, the experiment starts.
  2. Then, after 10~12 hours, most instances suddenly stop, and the result page also reverts.
  3. After that, the instances start again; this time they run for 23 hours.

I'm fine as long as they give the complete result in step 3.
However, step 3 didn't happen this time; as far as I can tell from the results, they didn't finish their 23-hour run.
When I quickly looked through these results (https://storage.googleapis.com/fuzzbench-data/index.html?prefix=2024-04-24-libafl/experiment-folders/),
the instances with bigger trial IDs (these are the restarted instances from step 3 above) all stopped fuzzing between roughly 1 PM and 2 PM on 26 April.
So I would assume something happened during that period.

@tokatoka (Contributor, Author) commented May 3, 2024

And on the other hand,
https://storage.googleapis.com/fuzzbench-data/index.html?prefix=2024-04-24-full-muttfuzz

Oddly, here the latest corpus update time is around the morning of 26 April. So, in theory, there should have been no "out of resources" situation on the afternoon of 26 April, because by that time this experiment had already finished.

@DonggeLiu (Contributor) commented May 4, 2024

> However 3. didn't happen this time, as long as i can tell from the result, they didn't finish its 23-hours run. When I quickly look through these results. https://storage.googleapis.com/fuzzbench-data/index.html?prefix=2024-04-24-libafl/experiment-folders/ then the instances with bigger trial id (these are the restarted instances in the above 3.) somehow all stopped fuzzing around on 26th April 1PM to 2PM. So I would assume something has happend during that period.

Yes, it is indeed strange that no trial instances are running:

  1. There is no trial instance of 2024-04-24-libafl, but the main dispatcher VM still exists:

     NAME                         STATUS      CREATION_TIMESTAMP   PREEMPTIBLE  ZONE
     d-2024-04-24-libafl          RUNNING     2024-04-25T00:03:46  FALSE        us-central1-c

  2. The report shows the experiment has not finished: https://storage.googleapis.com/www.fuzzbench.com/reports/experimental/2024-04-24-libafl/index.html

  3. The gcloud log shows it has not finished either (screenshot omitted).

I will kill d-2024-04-24-libafl and let's see how the new experiment goes.

@DonggeLiu (Contributor)

BTW, this code ensures trial instances will eventually complete within 2 days:

# Don't keep creating preemptible instances forever. Don't create
# them if the experiment has already taken a certain amount of time
# longer than the equivalent nonpreemptible experiment.
# *NOTE*: preemptible_window_passed is slightly broken. When
# the measurer uses this method it may produce slightly different
# results than the scheduler because the initial trials may be
# different. This is unlikely to happen in the real world. It is
# probably benign as well because the measurer may think the window
# end is slightly later than the scheduler. The effect of this will
# simply be that the measurer may measure for slightly longer than
# needed.
return False

If a trial was preempted after 1 day, then a non-preemptible instance will be used.
Not sure why they did not produce results in the last experiment, though.

@tokatoka (Contributor, Author) commented May 6, 2024

It's complete, thank you 👍

@tokatoka closed this May 6, 2024
@tokatoka deleted the libafl_exp_ctx branch November 19, 2024 16:34