Sancov based ctx/ngram LibAFL fuzzers #1956
Conversation
This one is ready. The command is below, @DonggeLiu:
/gcbrun run_experiment.py -a --experiment-config /opt/fuzzbench/service/experiment-config.yaml --experiment-name 2024-02-26-libafl --fuzzers libafl_ctx_large_map libafl_ctx_mid_map libafl_ctx_small_map libafl_ngram_large_map libafl_ngram_mid_map libafl_ngram_small_map
Oops, could you please add a dummy change to enable PR experiments?
Hi, I made the change. I also added the more recent LibAFL as the baseline for comparison. The command would be:
/gcbrun run_experiment.py -a --experiment-config /opt/fuzzbench/service/experiment-config.yaml --experiment-name 2024-02-29-libafl --fuzzers libafl_ctx_large_map libafl_ctx_mid_map libafl_ctx_small_map libafl_ngram_large_map libafl_ngram_mid_map libafl_ngram_small_map libafl_280224
Hello @DonggeLiu. In this change, we
The command is
It seems to be ready now; I will do it below.
/gcbrun run_experiment.py -a --experiment-config /opt/fuzzbench/service/experiment-config.yaml --experiment-name 2024-03-11-libafl --fuzzers libafl_fuzzbench_ngram4 libafl_fuzzbench_ngram8 libafl_fuzzbench_ctx libafl_fuzzbench_vp_alter
Experiment
This is done. Thank you! 👍
Hi @DonggeLiu. This is not the preparatory test for the long-run experiment that I was talking about last week, but an additional experiment for our fuzzer comparison paper. In this experiment we want to evaluate the degree of interference between two of a fuzzer's components (i.e., whether components X and Y of the fuzzer interact for better or worse). Can you run the experiment for the next 5 fuzzers? The command is:
Sure, could you please fix the presubmit failure? Thanks!
Done!
/gcbrun run_experiment.py -a --experiment-config /opt/fuzzbench/service/experiment-config.yaml --experiment-name 2024-04-22-libafl --fuzzers libafl_fuzzbench_fast libafl_fuzzbench_fast_ngram4 libafl_fuzzbench_fast_value_profile libafl_fuzzbench_ngram libafl_fuzzbench_value_profile
The experiment failed to launch because of an invalid fuzzer name.
I'm sorry 😔. So:
/gcbrun run_experiment.py -a --experiment-config /opt/fuzzbench/service/experiment-config.yaml --experiment-name 2024-04-23-libafl --fuzzers libafl_fuzzbench_fast libafl_fuzzbench_fast_ngram4 libafl_fuzzbench_fast_value_profile libafl_fuzzbench_ngram8 libafl_fuzzbench_value_profile
Hi. I adjusted the map size because it was previously using a map that was too large. Can you run it again, along with the additional fuzzer libafl_fuzzbench_ngram4?
/gcbrun run_experiment.py -a --experiment-config /opt/fuzzbench/service/experiment-config.yaml --experiment-name 2024-04-24-libafl --fuzzers libafl_fuzzbench_fast_ngram4 libafl_fuzzbench_ngram4
Hi @DonggeLiu, I waited for a few days for the
Could you run this again for me? Thanks.
/gcbrun run_experiment.py -a --experiment-config /opt/fuzzbench/service/experiment-config.yaml --experiment-name 2024-05-03-libafl --fuzzers libafl_fuzzbench_fast_ngram4 libafl_fuzzbench_ngram4
Thanks for reporting this, @tokatoka! I've restarted the experiment in case it is flaky. While waiting for the new experiment to complete, did you have a chance to look into the run log of your experiment?
For the LibAFL log, we do not log anything, so I can only guess from the last corpus update time. These days I almost always see a weird restart (or, you could say, VM preemption) in the results.
For me, I'm fine as long as they give the complete result in 3.
On the other hand, oddly, the last corpus update time is at the latest around the morning of 26 April. So in theory there should have been no "out of resource" on the afternoon of 26 April, because by that time this experiment had already finished.
Yes, it is indeed strange that no trial instances are running (see screenshot).
I will kill
BTW, this code ensures trial instances will eventually complete within 2 days: fuzzbench/experiment/scheduler.py, lines 301 to 312 (at 162ca0c).
If a trial was preempted after 1 day, then a non-preemptible instance will be used. |
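As a rough illustration of that policy, here is a minimal Python sketch (with assumed constant names and thresholds, not the actual fuzzbench/experiment/scheduler.py code): the decision comes down to how much of the roughly 2-day budget has elapsed when a preemption is observed.

```python
# Hypothetical sketch of the policy described above, NOT the real
# scheduler.py code: trials start on preemptible VMs, and a trial
# preempted late in its time budget is retried on a non-preemptible
# instance so it still finishes within about 2 days.
from datetime import datetime, timedelta

MAX_TOTAL = timedelta(days=2)           # overall per-trial budget (assumed)
PREEMPTIBLE_WINDOW = timedelta(days=1)  # fall back after this point (assumed)


def pick_instance_type(trial_start: datetime, now: datetime) -> str:
    """Choose the VM type when restarting a preempted trial."""
    elapsed = now - trial_start
    if elapsed >= MAX_TOTAL:
        return "give-up"            # budget exhausted, do not restart
    if elapsed >= PREEMPTIBLE_WINDOW:
        return "non-preemptible"    # guarantees completion within the budget
    return "preemptible"            # keep using cheap instances early on


if __name__ == "__main__":
    start = datetime(2024, 4, 25, 8, 0)
    # Preempted 1 day 6 hours in -> restart on a non-preemptible instance.
    print(pick_instance_type(start, datetime(2024, 4, 26, 14, 0)))
```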
It's complete. Thank you 👍
Hello, fuzzbench team.
We implemented ctx and ngram coverage based on SanitizerCoverage (sancov):
libafl_ctx_large_map, libafl_ctx_mid_map, libafl_ctx_small_map, libafl_ngram_large_map, libafl_ngram_mid_map, libafl_ngram_small_map.
The previous implementation was based on AFL's LLVM pass, which has a negative impact on performance.
Therefore we want to compare how this new implementation performs against that baseline.
Both ctx and ngram have 3 variants, depending on the map size we use.
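For readers unfamiliar with the idea, here is a minimal conceptual sketch in Python (purely illustrative; it is not the LibAFL or SanitizerCoverage implementation, and all names and constants are assumptions) of how ctx and ngram coverage extend plain edge coverage: the current edge ID is combined either with a calling-context hash or with the last N edge IDs before being reduced to an index into a fixed-size map, which is why the map size matters.

```python
# Conceptual sketch of ctx/ngram coverage indexing (hypothetical, not the
# LibAFL/sancov code). MAP_SIZE stands in for the small/mid/large variants.
from collections import deque

MAP_SIZE = 1 << 16  # e.g. a "mid" map; the small/large variants change this


class CoverageTracker:
    def __init__(self, ngram_n: int = 4):
        self.map = bytearray(MAP_SIZE)
        self.ctx_hash = 0                     # updated on call/return
        self.history = deque(maxlen=ngram_n)  # last N edge IDs

    def on_call(self, callsite_id: int):
        self.ctx_hash ^= callsite_id          # enter a calling context

    def on_return(self, callsite_id: int):
        self.ctx_hash ^= callsite_id          # leave that calling context

    def on_edge_ctx(self, edge_id: int):
        # ctx coverage: same edge in a different calling context hits a
        # different map slot.
        idx = (edge_id ^ self.ctx_hash) % MAP_SIZE
        self.map[idx] = min(self.map[idx] + 1, 255)

    def on_edge_ngram(self, edge_id: int):
        # ngram coverage: fold the last N edge IDs into the index so the
        # recent path, not just the current edge, determines the slot.
        acc = edge_id
        for i, prev in enumerate(self.history):
            acc ^= prev << (i + 1)
        idx = acc % MAP_SIZE
        self.map[idx] = min(self.map[idx] + 1, 255)
        self.history.appendleft(edge_id)
```

A larger map reduces collisions between distinct (edge, context) or (edge, history) combinations at the cost of a bigger working set, which is the trade-off the three map-size variants are meant to measure.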