Could you provide a way to run with multiple GPUs? #21

Open

FranciscoPark opened this issue Dec 30, 2024 · 1 comment

Comments


FranciscoPark commented Dec 30, 2024

export OMP_NUM_THREADS=16
export CUDA_VISIBLE_DEVICES=0,1,2,3
python run_src/do_generate.py \
    --model_ckpt meta-llama/Llama-3.1-8B \
    --dataset_name GSM8K \
    --note tensor_parallelism \
    --num_rollouts 16 \
    --api vllm \
    --model_parallel \
    --tensor_parallel_size 4

I'm trying to use vLLM with 4 GPUs. Is there anything else I should change?
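
For reference, here is a minimal sketch of how vLLM itself takes the tensor-parallel size; this is not the repo's actual code, and it assumes run_src/do_generate.py simply forwards --tensor_parallel_size to vLLM's LLM constructor:

# Minimal sketch, assuming --tensor_parallel_size is passed straight
# through to vLLM's LLM constructor; not the repo's actual code.
from vllm import LLM, SamplingParams

llm = LLM(
    model="meta-llama/Llama-3.1-8B",
    tensor_parallel_size=4,  # shard the model across the 4 visible GPUs
)

params = SamplingParams(temperature=0.8, max_tokens=256)
outputs = llm.generate(["Question: What is 12 * 7? Answer:"], params)
print(outputs[0].outputs[0].text)

If the script already wires the flag through like this, the command above should be sufficient on a 4-GPU node.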
@LordEdison

same problem😭
