activate_custom_mpi.sh: do not link against mlir #2905


Open · wants to merge 10 commits into main

Conversation

@mitchdz (Collaborator) commented May 13, 2025

See #2892

It might be worth further trimming what the ELF file links against, but the remaining dependencies look benign for now.

cudaq@2570547-lcedt:/opt/nvidia/cudaq/distributed_interfaces$ readelf -d libcudaq_distributed_interface_mpi.so  | grep cudaq
 0x000000000000001d (RUNPATH)            Library runpath: [/usr/local/llvm/lib:/opt/nvidia/cudaq/lib:/opt/nvidia/cudaq/lib/plugins:/opt/nvidia/cudaq/distributed_interfaces:/usr/local/openmpi/lib64:/usr/local/openmpi/lib]
 0x0000000000000001 (NEEDED)             Shared library: [libcudaq.so]
 0x0000000000000001 (NEEDED)             Shared library: [libcudaq-common.so]
 0x0000000000000001 (NEEDED)             Shared library: [libcudaq-ensmallen.so]
 0x0000000000000001 (NEEDED)             Shared library: [libcudaq-nlopt.so]
 0x0000000000000001 (NEEDED)             Shared library: [libcudaq-spin.so]
 0x0000000000000001 (NEEDED)             Shared library: [libcudaq-operator.so]
 0x0000000000000001 (NEEDED)             Shared library: [libcudaq-comm-plugin.so]
 0x0000000000000001 (NEEDED)             Shared library: [libcudaq-pyscf.so]
 0x0000000000000001 (NEEDED)             Shared library: [libcudaq-em-default.so]
 0x0000000000000001 (NEEDED)             Shared library: [libcudaq-platform-default.so]
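As a possible follow-up to the trimming suggestion above, the (NEEDED) entries can be audited with a small script. This is a minimal sketch, not part of the PR: it parses a captured `readelf -d` sample so it is self-contained, and the `libMLIRIR.so` line is a hypothetical example of a dependency one would want to flag.

```shell
# Minimal sketch: list the (NEEDED) shared libraries from `readelf -d` output
# and count any MLIR dependencies. Parses a captured sample rather than a real
# .so so the snippet runs anywhere; the libMLIRIR.so entry is hypothetical.
sample='0x0000000000000001 (NEEDED)             Shared library: [libcudaq.so]
0x000000000000001d (RUNPATH)            Library runpath: [/usr/local/llvm/lib]
0x0000000000000001 (NEEDED)             Shared library: [libMLIRIR.so]'

# Extract the bracketed library name from each (NEEDED) line.
needed=$(printf '%s\n' "$sample" | awk '/\(NEEDED\)/ { gsub(/[][]/, "", $NF); print $NF }')
printf '%s\n' "$needed"

# Count MLIR entries that should have been dropped.
mlir_hits=$(printf '%s\n' "$needed" | grep -c MLIR)
echo "MLIR dependencies found: $mlir_hits"
```

On a real build one would pipe `readelf -d libcudaq_distributed_interface_mpi.so` into the same awk filter instead of the captured sample.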

copy-pr-bot bot commented May 13, 2025

This pull request requires additional validation before any workflows can run on NVIDIA's runners.

Pull request vetters can view their responsibilities here.

Contributors can view more details about this message here.

@mitchdz (Collaborator, Author) commented May 13, 2025

/ok to test 63ca191

Command Bot: Processing...

@bmhowe23 bmhowe23 linked an issue May 13, 2025 that may be closed by this pull request
@mitchdz (Collaborator, Author) commented May 13, 2025

/ok to test 8c3a3c9

Command Bot: Processing...

github-actions bot pushed a commit that referenced this pull request May 13, 2025

CUDA Quantum Docs Bot: A preview of the documentation can be found here.


@mitchdz (Collaborator, Author) commented May 14, 2025

/ok to test 591c725

Command Bot: Processing...

@mitchdz (Collaborator, Author) commented May 20, 2025

/ok to test c5fb8df

Command Bot: Processing...

@mitchdz (Collaborator, Author) commented May 20, 2025

I will modify this PR shortly so that, instead of just dropping the mlir links, it adds a new flag to nvq++.in, something like --disable-cudaq-links. This is ultimately what I'd like to do anyway, so let's do it right the first time rather than keeping --disable-mlir-links as a bandaid.
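A flag like that could be handled with ordinary argument filtering in the driver script. The sketch below is a hypothetical illustration, not the PR's actual nvq++.in change; the variable names and the simulated command line are assumptions.

```shell
# Hypothetical sketch of handling --disable-cudaq-links in a shell driver;
# not the actual nvq++.in implementation. Variable names are assumptions.
set -- --disable-cudaq-links kernel.cpp   # simulated command line

DISABLE_CUDAQ_LINKS=false
LINK_LIBS="-lcudaq -lcudaq-common"
COMPILE_ARGS=""

for arg in "$@"; do
  case "$arg" in
    --disable-cudaq-links) DISABLE_CUDAQ_LINKS=true ;;   # consume the flag
    *) COMPILE_ARGS="$COMPILE_ARGS $arg" ;;              # pass everything else through
  esac
done

if [ "$DISABLE_CUDAQ_LINKS" = true ]; then
  LINK_LIBS=""   # drop all cudaq link libraries
fi

echo "args:$COMPILE_ARGS"
echo "link libs: '$LINK_LIBS'"
```

The point of consuming the flag inside the driver, rather than forwarding it, is that the underlying compiler never sees an option it doesn't understand.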

mitchdz added 5 commits May 23, 2025 16:01
This flag adds an option to disable linking the cudaq libraries.

This is useful, for example, for the MPI plugin.

Signed-off-by: Mitchell <[email protected]>
@mitchdz (Collaborator, Author) commented May 27, 2025

/ok to test 69ae33c

Command Bot: Processing...

Successfully merging this pull request may close these issues.

MPI error when scaling out beyond 1 server