
Adding Boost::multiprecision as a DGtal::BigInteger model #1749

Open · wants to merge 9 commits into master

Conversation

@dcoeurjo (Member) commented Dec 5, 2024

PR Description

Adding boost::multiprecision as a backend for DGtal::BigInteger.

Checklist

  • Unit-test of your feature with Catch.
  • Doxygen documentation of the code completed (classes, methods, types, members...)
  • Documentation module page added or updated.
  • New entry in the ChangeLog.md added.
  • No warning raised in Debug mode.
  • All continuous integration tests pass (GitHub Actions).

@dcoeurjo (Member Author) commented Dec 5, 2024

ping @JacquesOlivierLachaud @troussil

@dcoeurjo (Member Author) commented Dec 6, 2024

Hi @rolanddenis, I may need your help with this PR (cc @JacquesOlivierLachaud).

This PR aims to include boost::multiprecision as a (header-only) alternative to GMP for multiprecision integer computations. DGtal::BigInteger is now enabled by default, and the backend can be switched to GMP if the user prefers.

I'm fixing a few issues related to WITH_GMP usage, and I'm facing a problem in PointVector with the conversion you did for explicit casts.

On this PR, if you have a look at

    auto p = p1 - p2;

the operator-() is not defined for DGtal::BigInteger. It seems to be related to the ArithmeticConversionTraits you wrote a while ago.
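For reference, a minimal sketch reproducing the failure might look like this (the exact headers, dimension, and point type are assumptions for illustration, not the actual failing test):

```cpp
// Hypothetical minimal reproducer; headers and dimension are assumed.
#include <DGtal/base/Common.h>
#include <DGtal/kernel/PointVector.h>

int main()
{
  using BigPoint = DGtal::PointVector<2, DGtal::BigInteger>;
  BigPoint p1(1, 2);
  BigPoint p2(3, 4);
  auto p = p1 - p2; // fails to compile with the boost::multiprecision
                    // backend: ArithmeticConversionTraits does not
                    // resolve operator-() for DGtal::BigInteger
  (void)p;
  return 0;
}
```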

Could you please help us here?

thx

@BDoignies (Contributor) commented Mar 11, 2025

Hey @dcoeurjo

I spent some time trying to fix things here, but things get very complicated with Boost.

Fixing code for GMP

  • First, add a specialisation of ArithmeticConversionTraits for floating-point types.
  • Add a way to convert GMP integers to doubles.
  • Wrap the crossProduct operation inside the converting function (see the Boost section, paragraph 2, to understand why only these two are needed). A sketch follows after this list.

This is the easy part.
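For what it's worth, a rough sketch of the first two items (the trait's exact template parameters and member name are assumptions based on this thread; only the GMP calls are real API):

```cpp
// Hypothetical sketch of the GMP-side fixes; the trait shape is assumed.
#include <gmpxx.h>

namespace DGtal
{
  // 1. Specialise the conversion trait so that mixed
  //    BigInteger/double operations promote to double.
  template <>
  struct ArithmeticConversionTraits<mpz_class, double>
  {
    using type = double;
  };
}

// 2. Explicit GMP-integer-to-double conversion (mpz_class::get_d()).
inline double toDouble(mpz_class const& x)
{
  return x.get_d();
}

// 3. crossProduct then calls toDouble() on its operands whenever a
//    floating-point result is requested (not shown here).
```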

Fixing code for Boost

  • Insert explicit conversions into every mixed integer/float operation. This includes every operation in PointVector, but also some more (e.g. BigInt = double - BigInt).

Boost BigIntegers are not meant to be combined with anything other than integers, and the operation listed above barely makes sense, to be fair. I am quite surprised it works with GMP, to be honest.
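To illustrate, here is the kind of mixed expression that boost::multiprecision rejects out of the box (cpp_int is assumed as the backend type):

```cpp
#include <boost/multiprecision/cpp_int.hpp>
using BigInt = boost::multiprecision::cpp_int;

int main()
{
  BigInt b = 42;
  double d = 3.5;
  // BigInt r = d - b;   // does not compile: cpp_int does not mix
  //                     // with floating-point types
  double r1 = d - b.convert_to<double>(); // compute in double instead, or
  BigInt r2 = BigInt(d) - b;              // truncate the double explicitly
  (void)r1; (void)r2;
  return 0;
}
```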

The hardest part is that the tests do not cover everything. For example, only mixed BigInt/float crossProducts are actually checked in PointVector (due to the test you introduced in this PR). So missing any case could result in the error popping up at any time during further development.

Fixing code for everyone at once

The only stable solution I have is to code our own BigInteger, a wrapper around either GMP or Boost depending on the compile options. This lets us make sure everything works properly and overload the necessary operators and conversions as needed (and hopefully we won't miss any of them :) ). A sketch is below.
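A sketch of what that wrapper could look like, assuming DGTAL_WITH_GMP as the compile switch (the operator set and names are illustrative, not a final design):

```cpp
// Hypothetical wrapper; only the backend APIs (mpz_class,
// boost::multiprecision::cpp_int) are real, the rest is illustrative.
#include <utility>

#ifdef DGTAL_WITH_GMP
  #include <gmpxx.h>
  using BigIntegerBackend = mpz_class;
#else
  #include <boost/multiprecision/cpp_int.hpp>
  using BigIntegerBackend = boost::multiprecision::cpp_int;
#endif

struct BigInteger
{
  BigIntegerBackend value;

  BigInteger() = default;
  BigInteger(long v) : value(v) {}
  BigInteger(BigIntegerBackend v) : value(std::move(v)) {}

  // Overload exactly the operators DGtal needs, one by one.
  friend BigInteger operator-(BigInteger const& a, BigInteger const& b)
  {
    return BigInteger(a.value - b.value);
  }

  // Centralise the backend-specific conversions in one place.
  explicit operator double() const
  {
#ifdef DGTAL_WITH_GMP
    return value.get_d();
#else
    return value.convert_to<double>();
#endif
  }
};
```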

Your thoughts?

PS: To check this PR, do not forget to disable -WITH_GMP (or DGTAL_WITH_GMP if merged with main2.0) in the GitHub Actions, otherwise it won't test against Boost.
