
P2P is supported between GPUs that belong to different NUMA nodes #284

Open

themoonstone opened this issue Jul 16, 2024 · 0 comments

I've executed the simpleP2P sample on my server (NVIDIA L4 GPUs, with the following topology), where GPU0 and GPU1 belong to different NUMA nodes.

	GPU0	GPU1	CPU Affinity	NUMA Affinity	GPU NUMA ID
GPU0	 X 	SYS	0-7,32-39	0		N/A
GPU1	SYS	 X 	16-23,48-55	2		N/A

I obtained the results below. Why is that? How can P2P (peer-to-peer) access be supported between GPUs that sit on different PCIe root complexes?

./simpleP2P 
[./simpleP2P] - Starting...
Checking for multiple GPUs...
CUDA-capable device count: 2

Checking GPU(s) for support of peer to peer memory access...
> Peer access from NVIDIA L4 (GPU0) -> NVIDIA L4 (GPU1) : Yes
> Peer access from NVIDIA L4 (GPU1) -> NVIDIA L4 (GPU0) : Yes
Enabling peer access between GPU0 and GPU1...
Allocating buffers (64MB on GPU0, GPU1 and CPU Host)...
Creating event handles...
cudaMemcpyPeer / cudaMemcpy between GPU0 and GPU1: 19.87GB/s
Preparing host buffer and memcpy to GPU0...
Run kernel on GPU1, taking source data from GPU0 and writing to GPU1...
Run kernel on GPU0, taking source data from GPU1 and writing to GPU0...
Copy data back to host from GPU0 and verify results...
Disabling peer access...
Shutting down...
Test passed
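For reference, the capability the sample reports can also be probed directly with the CUDA runtime API. The sketch below (untested here; it assumes device indices 0 and 1 match the topology above) queries peer access in both directions and one link attribute. Note that a SYS link in `nvidia-smi topo -m` means P2P traffic crosses the CPU interconnect between root complexes rather than a shared PCIe switch, so the runtime can report "Yes" while bandwidth stays well below what a direct switch or NVLink path would give.

```cuda
// Minimal sketch: query P2P capability with the CUDA runtime API.
// Assumes two visible devices (0 and 1), matching the topology above.
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int canAccess01 = 0, canAccess10 = 0;
    // Can device 0 directly read/write device 1's memory, and vice versa?
    cudaDeviceCanAccessPeer(&canAccess01, 0, 1);
    cudaDeviceCanAccessPeer(&canAccess10, 1, 0);
    printf("Peer access 0->1: %s, 1->0: %s\n",
           canAccess01 ? "Yes" : "No", canAccess10 ? "Yes" : "No");

    // Finer-grained link properties: e.g. whether native atomics are
    // supported over the path between the two devices.
    int nativeAtomics = 0;
    cudaDeviceGetP2PAttribute(&nativeAtomics,
                              cudaDevP2PAttrNativeAtomicSupported, 0, 1);
    printf("Native atomics over the link: %s\n",
           nativeAtomics ? "Yes" : "No");
    return 0;
}
```

If both queries return 1, `cudaDeviceEnablePeerAccess` (which simpleP2P calls internally) will succeed, and `cudaMemcpyPeer` traffic is routed over whatever path the topology provides, here the SYS path between NUMA nodes.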