Running dragon as an executor from within a larger python program #20
Opened by @TomNicholas on Aug 2, 2024:
I would like to use Dragon within the context of a larger piece of Python software (Cubed <https://github.com/cubed-dev/cubed> - see cubed-dev/cubed#467). In particular I want to write an equivalent to Cubed's ProcessesExecutor <https://github.com/cubed-dev/cubed/blob/19f844e2c1ea799ae4bb0cb754f7aa92f917e598/cubed/runtime/executors/local.py#L266> (or ThreadsExecutor <https://github.com/cubed-dev/cubed/blob/19f844e2c1ea799ae4bb0cb754f7aa92f917e598/cubed/runtime/executors/local.py#L227>) but which uses Dragon as the concurrent_executor <https://github.com/cubed-dev/cubed/blob/19f844e2c1ea799ae4bb0cb754f7aa92f917e598/cubed/runtime/executors/local.py#L176>.
All this executor needs to do is execute a series of stages, each made up of a number of embarrassingly parallel tasks (each of which is a Python function). I just want Dragon to launch the tasks in parallel for me across a whole HPC allocation.
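To make that concrete, the shape of the executor loop I have in mind is roughly this (an illustrative sketch only; the stage/task names are made up, and ProcessPoolExecutor just stands in for whatever Dragon-backed executor I end up with):

```python
from concurrent.futures import ProcessPoolExecutor

def run_stages(stages, max_workers=None):
    """Run each stage's embarrassingly parallel tasks to completion before starting the next stage."""
    with ProcessPoolExecutor(max_workers=max_workers) as executor:
        for func, inputs in stages:
            # func is a plain Python function; inputs is an iterable of per-task arguments
            list(executor.map(func, inputs))
```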
I'm looking through the docs and I have two main questions:
1) Should I use the dragon.workflows.parsl_executor.DragonPoolExecutor <https://dragonhpc.github.io/dragon/doc/_build/html/ref/workflows/dragon.workflows.parsl_executor.html#dragon.workflows.parsl_executor.DragonPoolExecutor>?
That seems like a drop-in replacement, but if it's actually using Parsl <https://parsl-project.org/> (which I noticed got built when I built the dragon executable), and Parsl is all I want to use, then would I be better off not bothering with Dragon and just using parsl.executors.ThreadPoolExecutor <https://parsl.readthedocs.io/en/stable/stubs/parsl.executors.ThreadPoolExecutor.html#parsl-executors-threadpoolexecutor> instead? What's the difference?
2) How do I launch dragon from within the context of another Python program?
All the docs examples seem to say that you use dragon to launch another Python program from the command line like this
dragon my_python_script.py
and that dragon works by "replacing all standard Multiprocessing classes with Dragon equivalent classes before CPython resolves the inheritance tree" (from Inheritance and Multiple Start Methods <https://dragonhpc.github.io/dragon/doc/_build/html/pguide/dragon_multiprocessing.html#inheritance-and-multiple-start-methods>).
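My understanding of that single-script pattern, written out as a minimal sketch (untested here, and based only on the docs page linked above), is something like:

```python
import dragon               # imported before multiprocessing so the "dragon" start method is available
import multiprocessing as mp

def square(x):
    return x * x

if __name__ == "__main__":
    mp.set_start_method("dragon")
    with mp.Pool(4) as pool:
        print(pool.map(square, range(16)))
```

with the whole script then launched as dragon my_python_script.py.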
But this is inconvenient if I can't represent my workload as a single standalone Python script. Instead, I ideally want to be able to call the executor from inside a running Python process on an interactive job (e.g. from within a Jupyter notebook cell) and have it execute across a whole allocation.
Do I need to somehow auto-generate this script and make a subprocess.call to the dragon executable?
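In other words, something like this hypothetical (and rather clunky) workaround, where the workload is written out to a temporary script and handed to the dragon launcher:

```python
import subprocess
import tempfile
import textwrap

# Hypothetical workaround sketch: serialize the workload as a standalone script,
# then launch it under the dragon executable as a subprocess.
script = textwrap.dedent("""
    import dragon
    import multiprocessing as mp

    def task(x):
        return x * x

    if __name__ == "__main__":
        mp.set_start_method("dragon")
        with mp.Pool(4) as pool:
            print(pool.map(task, range(16)))
""")

with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
    f.write(script)

subprocess.call(["dragon", f.name])
```

(plus some extra plumbing to get results back out of the subprocess).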
Or, if I try omitting the dragon executable (mentioned on this page <https://dragonhpc.github.io/dragon/doc/_build/html/pguide/dragon_multiprocessing.html#multiprocessing-and-dragon-without-patching>), then I'm not sure what this implies:
"The Dragon core library can still be imported via e.g. from dragon.managed_memory import MemoryPool and used. In this case, the "dragon" start method must not be set. The infrastructure will not be started."
This part:
"Note that all other parts of the Dragon stack, in particular the Dragon Native <https://dragonhpc.github.io/dragon/doc/_build/html/ref/native/index.html#dragon-native> API, require the running Dragon infrastructure and are thus not supported without patching Multiprocessing."
seems to be saying that I can still use Dragon Core but not Dragon Native from within a Python program that I didn't launch using the dragon executable. Is the dragon.workflows.parsl_executor.DragonPoolExecutor <https://dragonhpc.github.io/dragon/doc/_build/html/ref/workflows/dragon.workflows.parsl_executor.html#dragon.workflows.parsl_executor.DragonPoolExecutor> in Dragon Core or Dragon Native?
@applio <https://github.com/applio> you said you "got dragon to run the add-asarray.py example <https://github.com/cubed-dev/cubed/blob/main/examples/add-asarray.py> single node as the executor already", so I'm curious what your approach was?
cc @tomwhite <https://github.com/tomwhite>
Comments
I am away through next week. Can we talk when I get back? If not, we can have you talk to another team member. Sounds like we should talk.
@TomNicholas and I chatted Monday (yesterday) and I promised a PR for cubed with the example I got working when traveling back from the SciPy Conference in July (where Tom and I got to meet in person). Along with my PR for cubed, I will add an example for the Dragon repo as well to showcase how to use cubed and Dragon via cubed's existing use of concurrent.futures/multiprocessing.
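For reference, the add-asarray.py example being discussed is roughly this shape (a paraphrased sketch, not the exact file; see the link in the issue above):

```python
import cubed
import cubed.array_api as xp

if __name__ == "__main__":
    # Small two-array addition used as a smoke test for a Cubed executor.
    spec = cubed.Spec(work_dir="tmp", allowed_mem=100_000)
    a = xp.asarray([[1, 2, 3], [4, 5, 6], [7, 8, 9]], chunks=(2, 2), spec=spec)
    b = xp.asarray([[1, 1, 1], [1, 1, 1], [1, 1, 1]], chunks=(2, 2), spec=spec)
    c = xp.add(a, b)
    print(c.compute())  # an executor can be supplied to compute(), e.g. compute(executor=...)
```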
No worries - @applio actually came to our meeting yesterday and explained how he got around this problem by spawning the processes using dragon manually. (See the Cubed-on-Dragon notes here: https://docs.google.com/document/d/1_FkLZ3NjXlzlc7p4mr1GtWuPGKN3-E991dQEdPv6bNc/edit?usp=drivesdk) Once he shares the PR I will link it here in case anyone else has the same question in future.
I have posted 2 PRs against Cubed just now:
Discussion of those PRs can probably continue there, but would it be helpful to continue any topics here in this thread? (We can always create a new one too.)
I think the Dragon docs could probably explain how to do this spawn trick.
This is also still not clear to me - I raised cubed-dev/cubed#557 on Cubed to discuss that.