Dockerfile for MILVLG/bottom-up-attention.pytorch, a PyTorch reimplementation of the Caffe-based bottom-up-attention project.
In order to use GPUs inside containers, we'll be using the NVIDIA Container Toolkit.
Make sure you have installed the NVIDIA driver and Docker 19.03 for your Linux distribution. Note that you do not need to install the CUDA toolkit on the host, but the driver does need to be installed.
For more information, please visit the NVIDIA Container Toolkit official repository.
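Once the toolkit is installed, a quick sanity check is to run nvidia-smi inside a throwaway container (the CUDA base image tag below is only an example; any recent nvidia/cuda tag will do):
sudo docker run --rm --gpus all nvidia/cuda:11.3.1-base-ubuntu20.04 nvidia-smi
If the familiar driver/GPU table prints, the container runtime can see your GPUs.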
To get the image, you can either pull it directly from Docker Hub (recommended) or build it locally.
docker pull denton35/butd-pytorch-docker
This will pull the latest image from Docker Hub. Prefix the command with sudo if needed.
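You can check that the image arrived with:
docker images denton35/butd-pytorch-docker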
To build locally instead, clone the repository containing the Dockerfile:
git clone https://github.com/HyperDenton/butd-pytorch-docker
Go to the repository directory:
cd butd-pytorch-docker
Build the image:
docker build .
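If you build locally, it may help to tag the image so it matches the name used in the run command below (the tag is just a suggestion; any name works):
docker build -t denton35/butd-pytorch-docker .
Otherwise, substitute the image ID that docker build reports wherever denton35/butd-pytorch-docker appears below.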
Start a container from the image and enter its bash shell:
sudo docker run --gpus all --rm -it -v <absolute-path-to-repo>/bottom-up-attention.pytorch:/workspace/bottom-up-attention.pytorch denton35/butd-pytorch-docker
in which:
--gpus all exposes the host GPUs through the NVIDIA Container Toolkit;
--rm removes the container (not the image) once it exits;
-it gives you an interactive bash shell inside the container;
-v <local-path>:<container-path> mounts a host directory into the container.
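For example, if the cloned bottom-up-attention.pytorch repository lives at /home/user/bottom-up-attention.pytorch (a hypothetical path; use your own absolute path), the full command would be:
# /home/user/bottom-up-attention.pytorch is a placeholder path
sudo docker run --gpus all --rm -it -v /home/user/bottom-up-attention.pytorch:/workspace/bottom-up-attention.pytorch denton35/butd-pytorch-docker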
For more on Docker usage, please visit the Docker Reference Page.
If this is the first time you are running the script, or you have made changes to the source code, run the following from the repository directory inside the container:
python setup.py build develop
python3 extract_features.py --mode caffe --config-file configs/bua-caffe/extract-bua-caffe-r101.yaml --image-dir datasets/demo/ --out-dir output/ --resume
This will extract features from every image in datasets/demo/ and write them to the output/ directory.
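To take a quick look at what was produced (this assumes the script writes NumPy .npz archives to output/; adjust the pattern if the output format differs):
python3 -c "import glob, numpy as np; f = sorted(glob.glob('output/*.npz'))[0]; print(f, np.load(f, allow_pickle=True).files)"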
Consider using a Docker registry mirror (accelerator) to speed up image pulls:
sudo mkdir -p /etc/docker
sudo tee /etc/docker/daemon.json <<-'EOF'
{
"registry-mirrors": ["https://****.****.****.com"]
}
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker
where https://****.****.****.com is the address of your accelerator.
To get an accelerator address, you may need to register with a Docker accelerator service provider, e.g. Aliyun.
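After restarting Docker, you can confirm the mirror is active; it should appear under Registry Mirrors:
sudo docker info | grep -A 1 "Registry Mirrors"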