
[WIP]Update constants to use values from CODATA2022 #5661

Open · wants to merge 14 commits into base: development

Conversation

@dpgrote (Member) commented Feb 12, 2025

The values of the physical constants were from CODATA 2018. These should be updated to the current accepted values as specified in CODATA 2022.

This breaks many CI benchmarks, since the checks use tolerances tighter than the changes induced by the updated constants.

Note that scipy recently updated the constants to use CODATA 2022 (in version 1.15.0, released on January 3). This may cause problems in the CI tests. However, we use Ubuntu 20.04 to run the tests, and there the version of scipy is 1.3.3, which uses constants from CODATA 2014!

The following CI tests needed to be updated:

  • test_rz_galilean_psatd_current_correction_psb needed the charge conservation tolerance increased. This seems to be by chance: running this test locally (on a Mac), it passes the tolerance check.
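As a side note, a quick way to check which CODATA set a given scipy installation provides is to compare its electron mass against the tabulated values. This is a minimal, standalone sketch (not part of the CI); the CODATA 2022 value quoted below is an assumption that should be double-checked against the official tables.

import scipy
import scipy.constants as const

# Electron mass by CODATA release (the 2022 entry is assumed, please verify).
m_e_by_codata = {
    "2014": 9.10938356e-31,
    "2018": 9.1093837015e-31,
    "2022": 9.1093837139e-31,
}
print("scipy version:", scipy.__version__)
for year, value in m_e_by_codata.items():
    if const.m_e == value:
        print("scipy.constants.m_e matches CODATA", year)
# The 2018 -> 2022 change in m_e is of order 1e-9 in relative terms, which sets
# the scale of the expected benchmark changes.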

@EZoni (Member) commented Feb 12, 2025

Thanks, Dave.

If there is a large number of benchmarks that need to be reset, this could be a good opportunity to test our tool Tools/DevUtils/update_benchmarks_from_azure_output.py and its instructions in our documentation again.

I updated and tested the tool manually in #5372, but it is not yet tested automatically.

@dpgrote (Member, Author) commented Feb 12, 2025

> Thanks, Dave.
>
> If there is a large number of benchmarks that need to be reset, this could be a good opportunity to test our tool Tools/DevUtils/update_benchmarks_from_azure_output.py and its instructions in our documentation again.
>
> I updated and tested the tool manually in #5372, but it is not yet tested automatically.

Thanks @EZoni! It worked and was easy to do. BTW, to download the raw log file, I copied the URL from the location bar and pasted it into a curl command, "curl https://dev.azure.com/ECP-WarpX/... > raw_log", making it easy to download.

Note that almost all of the changes in the benchmarks are small as expected, ~1.e-9 or smaller. One exception is the test_3d_beam_beam_collision test, where errors of order 10% are seen, presumably because it runs for a long time, allowing the differences to grow.
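For completeness, the same download can be scripted in Python instead of curl; a minimal sketch (the URL is a placeholder to be replaced with the address copied from the browser location bar):

import urllib.request

# Placeholder URL: paste the raw-log address copied from the browser here.
url = "https://dev.azure.com/ECP-WarpX/..."
urllib.request.urlretrieve(url, "raw_log")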

@EZoni (Member) commented Feb 13, 2025

> Note that almost all of the changes in the benchmarks are small as expected, ~1.e-9 or smaller. One exception is the test_3d_beam_beam_collision test, where errors of order 10% are seen, presumably because it runs for a long time, allowing the differences to grow.

I agree. I think that test has relatively large tolerances anyway, if I remember correctly. @aeriforme, what do you think?

@EZoni (Member) commented Feb 13, 2025

> Thanks @EZoni! It worked and was easy to do. BTW, to download the raw log file, I copied the URL from the location bar and pasted it into a curl command, "curl https://dev.azure.com/ECP-WarpX/... > raw_log", making it easy to download.

Thanks for pointing this out! I added this hint to our documentation in #5663.

@ax3l added the component: core (Core WarpX functionality) label on Feb 19, 2025.
@ax3l mentioned this pull request on Feb 19, 2025.
@lucafedeli88 (Member) left a comment

Thanks @dpgrote ! I think that updating constants to CODATA 2022 is a good idea.

I've reviewed the logs of the tests that have failed.

I agree that the issues with test_3d_beam_beam_collision.checksum (and also with test_2d_collision_xz_picmi.checksum) are likely due to the fact that these tests are relatively long.

We need to investigate a bit more why the cases test_1d_ohm_solver_ion_beam_picmi and test_3d_qed_schwinger_2 show non-negligible discrepancies (I will have a look at the QED-related one).

We need to increase the tolerance of several analysis scripts, since they seem to be a bit too strict:
test_2d_theta_implicit_jfnk_vandb, test_2d_theta_implicit_jfnk_vandb_filtered, test_2d_theta_implicit_jfnk_vandb_picmi, test_2d_theta_implicit_strang_psatd, test_2d_pec_field_insulator_implicit_restart, test_3d_particle_boundaries, test_3d_load_external_field_grid_picmi, test_3d_particle_fields_diags, test_3d_reduced_diags.
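For context, the size of the adjustment involved is small; here is a standalone, purely illustrative sketch (the arrays and tolerance values are hypothetical, not taken from the actual analysis scripts):

import numpy as np

# The relative change from the constants update is of order 1e-9, so any check
# with a tighter tolerance will fail spuriously after this PR.
reference = np.array([1.0, 2.0, 3.0])
simulated = reference * (1.0 + 1.5e-9)  # mimics the effect of the constants update
relative_error = np.max(np.abs(simulated - reference) / np.abs(reference))
assert relative_error < 1e-8  # a tolerance of, e.g., 1e-12 would now fail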

@lucafedeli88 (Member) commented

> Thanks @dpgrote ! I think that updating constants to CODATA 2022 is a good idea.
>
> I've reviewed the logs of the tests that have failed.
>
> I agree that the issues with test_3d_beam_beam_collision.checksum (and also with test_2d_collision_xz_picmi.checksum) are likely due to the fact that these tests are relatively long.
>
> We need to investigate a bit more why the cases test_1d_ohm_solver_ion_beam_picmi and test_3d_qed_schwinger_2 show non-negligible discrepancies (I will have a look at the QED-related one).
>
> We need to increase the tolerance of several analysis scripts, since they seem to be a bit too strict: test_2d_theta_implicit_jfnk_vandb, test_2d_theta_implicit_jfnk_vandb_filtered, test_2d_theta_implicit_jfnk_vandb_picmi, test_2d_theta_implicit_strang_psatd, test_2d_pec_field_insulator_implicit_restart, test_3d_particle_boundaries, test_3d_load_external_field_grid_picmi, test_3d_particle_fields_diags, test_3d_reduced_diags.

I can provide a possible explanation for the QED-related test. Looking at the analysis script, I found this comment:

elif test_number == "2":
    # Second Schwinger test with stronger EM field. Many pairs are created and a Gaussian
    # distribution is used to get the weights of the particles. This is the most sensitive test
    # because the relative std is extremely low.
    Ex_test = 1.0e18
    Bx_test = 1679288857.0516706
    By_test = 525665014.1557486
    Bz_test = 1836353079.9561853

Note that the analysis script uses these constants (CODATA 2014, I think):

c = 299792458.
m_e = 9.10938356e-31
e = 1.6021766208e-19
hbar = 1.054571800e-34

while PICSAR-QED uses (CODATA 2018):

template<typename RealType = double>
constexpr auto light_speed = RealType(299792458.);
    
template<typename RealType = double>
constexpr auto electron_mass = RealType(9.1093837015e-31);

template<typename RealType = double>
constexpr auto elementary_charge = RealType(1.602176634e-19);
    
template<typename RealType = double>
constexpr auto reduced_plank = RealType(1.054571817e-34);
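To get a feeling for the size of this inconsistency, here is a minimal, standalone sketch (not taken from the WarpX analysis script) that evaluates the Schwinger critical field E_S = m_e^2 c^3 / (e hbar) with both sets of constants:

def schwinger_field(c, m_e, e, hbar):
    return m_e**2 * c**3 / (e * hbar)

# Constants as hard-coded in the analysis script (CODATA 2014) ...
E_2014 = schwinger_field(299792458., 9.10938356e-31, 1.6021766208e-19, 1.054571800e-34)
# ... and as used by PICSAR-QED (CODATA 2018)
E_2018 = schwinger_field(299792458., 9.1093837015e-31, 1.602176634e-19, 1.054571817e-34)
print(E_2014, E_2018, abs(E_2018 - E_2014) / E_2014)

Since this test compares the pair statistics against a reference with a very low relative std, even a relative difference at this level can show up.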

@dpgrote (Member, Author) commented Feb 20, 2025

@lucafedeli88 Thanks for looking over this PR. I think a number of the issues are related to the use of scipy.constants in the analysis scripts, since those constants are inconsistent, i.e. still the CODATA 2018 values. Until this issue is resolved, I don't think this PR should be merged. Unfortunately, there is no easy solution. The simplest is probably to use the most recent version of Ubuntu for the CI tests, which uses the most recent scipy with updated constants.

A more robust, longer-term solution would be to create a new lightweight Python module that has the constants with values consistent with the ones in C++, and then use this everywhere instead of relying on scipy.constants. This would guarantee that the values are always consistent.
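As a rough illustration of that idea, a minimal sketch of such a module (the module name is hypothetical, and the CODATA 2022 values below are assumptions that should be verified against the official tables and the C++ side before any adoption):

# warpx_constants.py (hypothetical module name)
# Values intended to mirror the C++ constants; CODATA 2022 entries are assumed.
c = 299792458.0              # speed of light in vacuum [m/s] (exact)
q_e = 1.602176634e-19        # elementary charge [C] (exact)
hbar = 1.054571817e-34       # reduced Planck constant [J s]
m_e = 9.1093837139e-31       # electron mass [kg] (assumed CODATA 2022 value)
m_p = 1.67262192595e-27      # proton mass [kg] (assumed CODATA 2022 value)
ep0 = 8.8541878188e-12       # vacuum permittivity [F/m] (assumed CODATA 2022 value)
mu0 = 1.25663706127e-6       # vacuum permeability [H/m] (assumed CODATA 2022 value)

Analysis scripts could then do "from warpx_constants import c, q_e, m_e" instead of importing scipy.constants, keeping the Python and C++ values in sync by construction.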

@dpgrote changed the title from "Update constants to use values from CODATA2022" to "[WIP]Update constants to use values from CODATA2022" on Feb 25, 2025.
@lucafedeli88 (Member) commented

> @lucafedeli88 Thanks for looking over this PR. I think a number of the issues are related to the use of scipy.constants in the analysis scripts, since those constants are inconsistent, i.e. still the CODATA 2018 values. Until this issue is resolved, I don't think this PR should be merged. Unfortunately, there is no easy solution. The simplest is probably to use the most recent version of Ubuntu for the CI tests, which uses the most recent scipy with updated constants.
>
> A more robust, longer-term solution would be to create a new lightweight Python module that has the constants with values consistent with the ones in C++, and then use this everywhere instead of relying on scipy.constants. This would guarantee that the values are always consistent.

We could maybe discuss this in the upcoming developers' meeting. Using Ubuntu 24.04 in CI tests should be rather straightforward.

@EZoni (Member) commented Mar 3, 2025

> We could maybe discuss this in the upcoming developers' meeting. Using Ubuntu 24.04 in CI tests should be rather straightforward.

Yes, we can do that soon. There had been discussions about the "need" for less strict checksum tolerances to be compatible with version upgrades like this one. The work done in #5456 has set things up so that we can do that easily (see the point "Add logic to reset tolerances based on environment variables" in the follow-up list of that PR description). That said, Ubuntu LTS version upgrades come only once every two years, so I personally think that the tolerance fine-tuning is not a real roadblock for this particular update; we could simply upgrade the Ubuntu version and reset the checksums that need to be reset. I will get to this, one way or another, as soon as possible.

@EZoni posted a comment that was marked as outdated.

@EZoni (Member) commented Mar 6, 2025

Ideally, I thought it would be good to have the Ubuntu upgrade run through first, without prior checksum changes, and record how the tests fail, like I did in #5731 (comment), to make it easier to discuss if/when a tolerance upgrade could be appropriate. But let's see if we can still assess this despite the pre-existing checksum changes.

@EZoni (Member) commented Mar 6, 2025

As mentioned in one of the last comments in #5731, we can try to merge development to incorporate #5736 in order to address some of the Python errors that we see here. If that does not work, we need to add the workaround I had added in #5731, before the installation of Regression/requirements.txt in .azure-pipelines.yml:

      # (remove system copy of Matplotlib to avoid conflict
      # with version set in the requirements file - see, e.g.,
      # https://github.com/matplotlib/matplotlib/issues/28768)
      sudo apt remove python3-matplotlib

@EZoni (Member) commented Mar 6, 2025

If I'm not mistaken, I still see errors like

Traceback (most recent call last):
  File "/home/vsts/work/1/s/Examples/Tests/langmuir/analysis_2d.py", line 116, in <module>
    cax1 = make_axes_locatable(ax1).append_axes("right", size="5%", pad="5%")
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/vsts/.local/lib/python3.12/site-packages/matplotlib/_api/deprecation.py", line 382, in wrapper
    return func(*inner_args, **inner_kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/mpl_toolkits/axes_grid1/axes_divider.py", line 525, in append_axes
    ax = create_axes(
         ^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/mpl_toolkits/axes_grid1/axes_divider.py", line 459, in new_horizontal
    ax = self._get_new_axes(**kwargs)
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3/dist-packages/mpl_toolkits/axes_grid1/axes_divider.py", line 425, in _get_new_axes
    axes_class = axes._axes_class
                 ^^^^^^^^^^^^^^^^
AttributeError: 'Axes' object has no attribute '_axes_class'

which occur only with older versions of Matplotlib.

There is a conflict between the system version (old) coming from python3-matplotlib and the version installed via pip through the requirements file (newer).

#5736 does not seem to have resolved the issue, so I think we need the workaround in #5661 (comment), unless you have other solutions in mind.
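A quick way to confirm which copy wins on the runner is to check where Matplotlib is imported from; a minimal sketch (not part of the CI scripts):

import matplotlib

print(matplotlib.__version__)
# A path under /usr/lib/python3/dist-packages indicates the system python3-matplotlib,
# while a path under ~/.local/lib/.../site-packages indicates the pip-installed version.
print(matplotlib.__file__)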

@EZoni self-requested a review on March 6, 2025 at 22:52.
@@ -80,7 +80,7 @@
 ## Checks whether this is the 2D or the 3D test
 with open("./warpx_used_inputs") as warpx_used_inputs:
-    is_2D = re.search("geometry.dims\s*=\s*2", warpx_used_inputs.read())
+    is_2D = re.search(r"geometry.dims\s*=\s*2", warpx_used_inputs.read())
A Member left a review comment on the diff above:

Here I think it would also be safe to remove the \s*, since warpx_used_inputs is generated automatically and uses only one whitespace, as in key = value, but up to you. It might make it more readable overall:

Suggested change:
-    is_2D = re.search(r"geometry.dims\s*=\s*2", warpx_used_inputs.read())
+    is_2D = re.search("geometry.dims = 2", warpx_used_inputs.read())
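For what it's worth, a small standalone check showing that both patterns match a line in the key = value format (assuming that is how WarpX writes warpx_used_inputs, as noted above):

import re

line = "geometry.dims = 2"
print(bool(re.search(r"geometry.dims\s*=\s*2", line)))  # True
print(bool(re.search("geometry.dims = 2", line)))       # True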
