Performance & runtime improvements to info-theoretic acquisition functions (1/N) #2748
Conversation
@sdaulton has imported this pull request. If you are a Meta employee, you can view this diff on Phabricator.
Thanks! It seems like …
Codecov Report: All modified and coverable lines are covered by tests ✅

@@           Coverage Diff           @@
##             main    #2748   +/-   ##
=======================================
  Coverage   99.99%   99.99%
=======================================
  Files         203      203
  Lines       18685    18691      +6
=======================================
+ Hits        18684    18690      +6
  Misses          1        1
@sdaulton for sure! I currently observe similar things for JES, but I'm not sure whether the points found are actually higher in acquisition function value or not (for either LogEI or JES)
That would be interesting to see |
Hi Carl! This seems like a decent improvement. Just a few comments in-line
…tions (0/N) - Restructuring of sampling methods (#2753)

Summary: Reshuffling of sampling methods that are not directly related to acquisition function optimization (i.e., that don't take it as an argument), based on [this discussion](#2748 (comment)). To remove code duplication specifically related to the optimization of info-theoretic acquisition functions, this seemed like a sensible move!

Pull Request resolved: #2753

Test Plan: Moved unit tests and added a new one for `boltzmann_sample`, which was used throughout and is used again in subsequent PRs.

## Related PRs
First of a series, like [this one](#2748).

Reviewed By: esantorella
Differential Revision: D70131981
Pulled By: saitcakmak
fbshipit-source-id: 48dd86e7e06006054294d7cd8b9a3d318b0b0ad1
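The `boltzmann_sample` helper mentioned in the test plan above picks candidates with probability proportional to a softmax of their acquisition values. Its exact signature isn't shown in this excerpt, so the following is only a minimal sketch of the idea, not the library API:

```python
# Minimal sketch of Boltzmann sampling over candidates; boltzmann_sample_sketch
# is an illustrative stand-in, not BoTorch's actual `boltzmann_sample` API.
import torch


def boltzmann_sample_sketch(
    acq_values: torch.Tensor,  # (n,) acquisition values of the candidates
    candidates: torch.Tensor,  # (n, d) candidate points
    num_samples: int,
    eta: float = 2.0,  # temperature: larger eta concentrates mass on top points
) -> torch.Tensor:
    # Standardize values so that eta has a consistent scale across problems.
    z = (acq_values - acq_values.mean()) / acq_values.std().clamp_min(1e-9)
    weights = torch.softmax(eta * z, dim=-1)
    idx = torch.multinomial(weights, num_samples, replacement=True)
    return candidates[idx]


# Example: favor candidates with high acquisition value while keeping diversity.
cands = torch.rand(100, 3)
vals = torch.randn(100)
picked = boltzmann_sample_sketch(vals, cands, num_samples=8)
```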
A series of improvements to the performance of PES & JES, as well as their multi-objective counterparts.
Motivation
As pointed out by @SebastianAment in this paper, the BoTorch variant of JES, and to an extent PES, is brutally slow and suspiciously ill-performing. To bring them up to their potential, I've added a series of performance improvements:
1. Improvements to `get_optimal_samples` and `optimize_posterior_samples`: As this is an integral part of their efficiency, I've added suggestions (similar to `sample_around_best`) to `optimize_posterior_samples`; see the first sketch after this list. Marginal runtime improvement in acquisition optimization, with sampling time practically unchanged [benchmark plot]. Substantial performance improvement [benchmark plot].
2. Added an initializer for acquisition function optimization: Similar to KG, ES methods have sensible suggestions for acquisition function optimization in the form of the sampled optima; see the first sketch after this list. This drastically reduces the time of acquisition function optimization, which could on occasion take 30+ seconds when `num_restarts` was large (>4). Benchmarking INC.
2b. Multi-objective support for the initializer: By renaming arguments of the multi-objective variants, we get consistency and initializer support for the MO variants as well.
3. Enabled gradient-based optimization for PES: The current implementation contains a while-loop that forces the gradients to be computed recursively. This commonly causes NaN gradients, which is why the tutorial recommends `"with_grad": False`. A single `detach()` alleviates this issue, enabling gradient-based optimization; see the second sketch after this list. NOTE: this has NOT been ablated, since the non-gradient optimization is extremely computationally demanding.
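A minimal sketch of how items 1 and 2 fit together, assuming current BoTorch APIs (`get_optimal_samples`, `qJointEntropySearch`, and `optimize_acqf` with `batch_initial_conditions`); this illustrates the idea and is not the exact code in this PR:

```python
# Sketch: reuse the sampled optima that define JES as warm-start initial
# conditions for its optimization. Assumes current BoTorch APIs.
import torch
from botorch.acquisition.joint_entropy_search import qJointEntropySearch
from botorch.acquisition.utils import get_optimal_samples
from botorch.fit import fit_gpytorch_mll
from botorch.models import SingleTaskGP
from botorch.optim import optimize_acqf
from gpytorch.mlls import ExactMarginalLogLikelihood

# Toy model on d=2 inputs in the unit cube.
train_X = torch.rand(20, 2, dtype=torch.double)
train_Y = -(train_X - 0.5).pow(2).sum(dim=-1, keepdim=True)
model = SingleTaskGP(train_X, train_Y)
fit_gpytorch_mll(ExactMarginalLogLikelihood(model.likelihood, model))

bounds = torch.tensor([[0.0, 0.0], [1.0, 1.0]], dtype=torch.double)
num_restarts = 8

# Thompson-sample optima of the posterior; this calls
# optimize_posterior_samples under the hood (sped up in item 1).
optimal_inputs, optimal_outputs = get_optimal_samples(
    model=model, bounds=bounds, num_optima=num_restarts
)
acqf = qJointEntropySearch(
    model=model, optimal_inputs=optimal_inputs, optimal_outputs=optimal_outputs
)

# Item 2: the sampled optima double as initial conditions
# (shape num_restarts x q x d) instead of purely random restarts.
candidate, value = optimize_acqf(
    acq_function=acqf,
    bounds=bounds,
    q=1,
    num_restarts=num_restarts,
    batch_initial_conditions=optimal_inputs.unsqueeze(-2),
)
```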
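The `detach()` fix from item 3, illustrated generically (the PR's actual PES internals are not shown in this excerpt): detaching the quantity that is recomputed inside a loop keeps the autograd graph from growing with every iteration, which is the kind of recursive gradient computation that produced NaNs.

```python
# Generic illustration of the detach() pattern from item 3; not PES code.
import torch

x = torch.randn(4, requires_grad=True)
state = torch.zeros(4)
for _ in range(50):
    # Without .detach(), `state` drags the whole loop history into the
    # autograd graph, so gradients are computed recursively through all
    # 50 iterations. Detaching keeps only the current step differentiable.
    state = 0.9 * state.detach() + torch.sin(x)
state.sum().backward()  # well-behaved gradient: cos(x) from the last step only
print(x.grad)
```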
Test Plan
Unit tests and benchmarking.
Related PRs
First of a couple!
Bonus: while benchmarking, I initially had issues reproducing the LogEI performance. I found that `sample_around_best` made LogEI worse on Mich5 [benchmark plot]. All experiments are otherwise a repro of the settings used in the LogEI paper.
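For reference, the `sample_around_best` heuristic from the bonus note is toggled through the `options` dict of BoTorch's `optimize_acqf`; a minimal sketch, reusing `acqf` and `bounds` from the earlier sketch:

```python
# Toggling the sample_around_best initialization heuristic; `acqf` and
# `bounds` are assumed to be defined as in the sketch above.
from botorch.optim import optimize_acqf

candidate, value = optimize_acqf(
    acq_function=acqf,
    bounds=bounds,
    q=1,
    num_restarts=8,
    raw_samples=512,
    options={"sample_around_best": True},  # set False to check the Mich5 ablation
)
```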