Add streaming API #463
base: master
Conversation
Force-pushed from c32db49 to 8eb1a6a
Am I correct in thinking that all the necessary items except for tests are complete?
Generally yes, I think we're pretty close to this being merge-ready. There are some remaining TODOs that I need to finish, but most are reasonably small. I could definitely use help with writing tests - just validating that we can run various kinds of pipelines and that they work across multiple workers would be really useful.
Force-pushed from 1c4473b to 21217ab
Force-pushed from ed89a7f to 3274093
Force-pushed from 3274093 to 4013384
Force-pushed from 9f0ac3f to 96bad48
Force-pushed from 96bad48 to b3b70e1
Because it doesn't actually do anything now.
Using `myid()` with `workers()` meant that when the context was initialized with a single worker the processor list would be: `[OSProc(1), OSProc(1)]`. `procs()` will always include PID 1 and any other workers, which is what we want.
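For reference, a quick `Distributed` illustration of the difference, assuming no extra workers have been added:

```julia
using Distributed

# With only the driver process, `workers()` falls back to `[1]`, so prepending
# `myid()` (also 1) yields a duplicate PID, which is where the
# `[OSProc(1), OSProc(1)]` processor list came from:
vcat(myid(), workers())  # => [1, 1]

# `procs()` always contains PID 1 plus any added workers, with no duplicates:
procs()                  # => [1]
```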
Force-pushed from 245d06a to e772af0
Adds a `spawn_streaming` task queue to transform tasks into continuously-executing equivalents that automatically take from input streams/channels and put their results to an output stream/channel. Useful for processing tons of individual elements of some large (or infinite) collection. A usage sketch follows the todo list below.

Todo:
- `Stream` object
- `Stream` `finish_stream(xyz; return=abc)` to return custom value (else `nothing`)
- (`mmap`?)
- `waitany`-style input stream, taking inputs from multiple tasks
- `take!` from input streams concurrently, and `waitall` on them before continuing
- `put!` into output streams concurrently, and `waitall` on them before continuing
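For context, here is a rough usage sketch based on the `spawn_streaming` and `finish_stream` names in this description; treat the exact API surface as an assumption, since it may differ from what finally lands:

```julia
using Dagger

# Each `Dagger.@spawn` inside this block becomes a continuously-running task:
# it repeatedly takes from its input streams, runs its function, and puts the
# result onto its output stream for downstream tasks.
Dagger.spawn_streaming() do
    vals = Dagger.@spawn rand()        # source task, emits one value per iteration
    doubled = Dagger.@spawn 2 * vals   # consumes the output stream of `vals`
    Dagger.@spawn println(doubled)     # sink task, prints each doubled value
end
# Any task in the pipeline can stop the stream by returning `finish_stream(x)`,
# optionally with a custom final value as noted in the todo list above.
```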