Describe the bug
A clear and concise description of what the bug is, including the command that you have run.

Logging
Please attach any relevant logging messages. (Use ``` before and after code blocks.)

Environment (if you do not have a GPU, write No GPU):
Installation method [from github source, pypi (pip install), conda]: Conda
OS: RHEL 7.9, Bright Cluster Manager, IBM Spectrum LSF (job scheduler)
medaka version (can be found by running medaka --version): v2.0.1
GPU model / Nvidia driver version: NVIDIA V100

Additional context
I have noticed that running medaka_consensus on Flye output takes significantly longer since I bumped my version of medaka from v1.8.0 to v2.0.1. Runtimes that used to take 30-60 minutes now take well over 5 hours. I tested a sample with ~50X coverage depth on a CPU node versus a GPU node on our HPC and found that this stage took 2.5 hours on the GPU compared to 7 hours on the CPU (screenshot attached). Older stdout from this process suggests it ran much faster with the TensorFlow-based medaka than with the PyTorch version.
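For reference, the polishing step being timed here is typically invoked along these lines (a minimal sketch; the read path, assembly path, output directory, and thread count are placeholders, not the exact command used in this report):

```
# Illustrative only: paths and thread count are placeholders.
# -i  basecalled reads, -d  Flye draft assembly, -o  output directory, -t  threads
medaka_consensus -i reads.fastq.gz -d flye/assembly.fasta -o medaka_polished -t 16
```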
You are not the first person to report this discrepancy. I was not able to reproduce it until I noticed that users were installing medaka through conda. It seems likely that the PyTorch packages coming through conda are not as optimised as those installed through Python's pip package manager.
I'm currently running an Arabidopsis assembly in parallel, having installed medaka 2.0.1 with both conda and pip. The pip-installed setup is running around 1.6x faster.
I have not noticed as large a discrepancy between versions 1.21.1 and 2.0.1 when installing with pip.
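If you want to check whether the conda build is the culprit on your system, a side-by-side setup could look roughly like the sketch below (environment names are illustrative, and the torch check is a generic PyTorch one-liner rather than anything medaka-specific):

```
# Pip-based install in a fresh virtual environment
python3 -m venv medaka_pip
. medaka_pip/bin/activate
pip install --upgrade pip
pip install medaka      # should pull the standard PyTorch wheels from PyPI
medaka --version
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"

# Conda-based install for comparison (bioconda/conda-forge channels)
conda create -n medaka_conda -c conda-forge -c bioconda medaka
conda activate medaka_conda
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"
```

Timing the same medaka_consensus command from each environment should show whether the conda-packaged PyTorch build is the bottleneck.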