int: Use polling crate as interface to poll system #123
Conversation
As we discussed in #119, this should provide Windows support through the `polling` crate.
Good idea, I've opened smol-rs/polling#68 to address this.
I've done some testing with this.
True, I'll add it later. I seem to have broken Linux, so I need to go back and figure out how to fix that.
For IO safety, with #119 this runs into issues.
I mean, in theory, we could just
Yes. I've been planning on rewriting the Windows backend.
Codecov Report
@@ Coverage Diff @@
## master #123 +/- ##
==========================================
+ Coverage 87.89% 88.17% +0.28%
==========================================
Files 14 15 +1
Lines 1553 1556 +3
==========================================
+ Hits 1365 1372 +7
+ Misses 188 184 -4
... and 3 files with indirect coverage changes
I won't enable Windows support in CI yet, since I've stumbled into an open design decision regarding the ping event sources. On Windows, there's no real equivalent of the primitive the Unix ping implementation relies on.
Please don't. Doing so can have implications for resource tracking on the DRM subsystem and might make it harder to figure out if every fd has been properly handled. That said, I applaud the effort put in to add multi-platform support. 👍
I realized I probably marked this as ready prematurely.
Ah, I merged #119 without thinking about this PR, sorry for that. I hope this is not too big of a problem?
Don't worry, it doesn't intersect much with this PR at all. Besides, this one probably won't be ready for a while.
- Fix test failures
- Polling does not guarantee this behavior
- Increase code coverage by marking certain branches
Technically this allows safe code to cause a violation of I/O safety rules, though I'm not sure if polling alone would cause soundness issues. I guess it would cause some sort of (not necessarily unsound) problem if the fd is reused and registered again as something different.
If we're confident that won't cause soundness issues and also doesn't really introduce more weird bugs with bad implementations.
It's not unsound yet; in the future, I/O safety may be checked by MIRI and other tools. But that would require an ecosystem overhaul that's years out. This isn't an issue for the time being.
True, that could produce MIRI errors if MIRI some day has a concept of fd provenance. But yeah, either way it's something that could be improved, but it shouldn't be a real issue.
That looks pretty good overall, thanks for this contribution!
Just a few nitpicks/questions. Also, this will need a changelog entry; I'm pretty sure this is a breaking change.
src/loop_logic.rs
Outdated
slotmap::new_key_type! {
    pub(crate) struct CalloopKey;
}

// The maximum number of sources that we are allowed to have.
It's not the maximum number of sources, it's the number of bits used to store the source id.
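For context, that kind of comment would describe token packing roughly like the sketch below; the constant name, bit width, and layout here are illustrative guesses, not the PR's actual code:

```rust
// Hypothetical sketch of packing a source id and a sub-id into a single
// `usize` token handed to the poller; names and bit counts are assumptions.
const SOURCE_ID_BITS: u32 = 20; // bits used to store the source id

fn pack(source_id: usize, sub_id: usize) -> usize {
    debug_assert!(source_id < (1usize << SOURCE_ID_BITS));
    (sub_id << SOURCE_ID_BITS) | source_id
}

fn unpack(token: usize) -> (usize, usize) {
    let source_id = token & ((1usize << SOURCE_ID_BITS) - 1);
    let sub_id = token >> SOURCE_ID_BITS;
    (source_id, sub_id)
}

fn main() {
    let token = pack(42, 3);
    assert_eq!(unpack(token), (42, 3));
}
```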
src/sys.rs
Outdated
/// are available. If the same FD is inserted in multiple event loops, all of
/// them are notified of readiness.
Is that last part guaranteed?
IIRC we had some woes about a given FD being monitored by two threads using epoll, and only one of the two threads being woken up.
All I see in epoll(7) is this:
If multiple threads (or processes, if child processes have inherited the epoll file descriptor across fork(2)) are blocked in epoll_wait(2) waiting on the same epoll file descriptor and a file descriptor in the interest list that is marked for edge-triggered (EPOLLET) notification becomes ready, just one of the threads (or processes) is awoken from epoll_wait(2). This provides a useful optimization for avoiding "thundering herd" wake-ups in some scenarios.
Notably this says "waiting on the same epoll file descriptor", so as long as the different event loops have their own epoll instances, this shouldn't be a problem?
According to epoll_ctl(2), there is an EPOLLEXCLUSIVE flag to opt into exclusive wakeups more generally.
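For what it's worth, a quick experiment in the same style as the test further down seems to bear this out on Linux: registering the same socket with two separate `Poller`s (hence two separate epoll instances) and then making it readable wakes both. The helper setup below is mine, and as noted in this thread the behavior isn't something `polling` formally guarantees yet:

```rust
use polling::{Event, Poller};
use std::io::prelude::*;
use std::net::{TcpListener, TcpStream};

fn main() {
    // Two independent pollers, i.e. two separate epoll instances on Linux.
    let poller_a = Poller::new().unwrap();
    let poller_b = Poller::new().unwrap();

    // A connected TCP pair; `reader` is the side we monitor for readability.
    let listener = TcpListener::bind("127.0.0.1:0").unwrap();
    let addr = listener.local_addr().unwrap();
    let reader = TcpStream::connect(addr).unwrap();
    let mut writer = listener.accept().unwrap().0;

    // Register the same socket with both pollers, then make it readable.
    poller_a.add(&reader, Event::readable(1)).unwrap();
    poller_b.add(&reader, Event::readable(2)).unwrap();
    writer.write_all(b"ping").unwrap();

    let mut events = Vec::new();
    poller_a.wait(&mut events, None).unwrap();
    println!("poller_a: {:?}", events);

    events.clear();
    poller_b.wait(&mut events, None).unwrap();
    println!("poller_b: {:?}", events);
}
```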
While it isn't guaranteed yet (see smol-rs/polling#81), the current plan moving forward is to make it specified behavior that a source polled from more than one reactor wakes all of them up (at least in oneshot mode).
Overall, my point is that we should not make promises in the documentation that we cannot keep, so I'd like this documentation to reflect what is actually guaranteed and what is not.
For example, it's perfectly acceptable to say that the behavior is platform-dependent, but in that case it'd be good to link to the appropriate platform-specific documentation.
/// Thread-safe handle which can be used to wake up the `Poll`.
#[derive(Clone)]
pub(crate) struct Notifier(Arc<Poller>);
It's probably possible to rework the Ping event source to use that, by integrating it more tightly like the timers.
Though that can probably be done in a later PR.
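As a rough illustration of that idea (all names here are hypothetical, not the PR's actual design): a ping handle could hold the `Arc<Poller>` plus a pending flag, call `Poller::notify` to wake the loop, and let the dispatch step check the flag after `wait` returns, much like the timers are handled without a real fd:

```rust
use polling::Poller;
use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::Arc;
use std::time::Duration;

/// Hypothetical ping handle built on the poller's notifier.
#[derive(Clone)]
struct Ping {
    pending: Arc<AtomicBool>,
    poller: Arc<Poller>,
}

impl Ping {
    /// Record that the ping fired and wake up a concurrent or subsequent `wait`.
    fn ping(&self) {
        self.pending.store(true, Ordering::SeqCst);
        let _ = self.poller.notify();
    }
}

fn main() -> std::io::Result<()> {
    let poller = Arc::new(Poller::new()?);
    let ping = Ping {
        pending: Arc::new(AtomicBool::new(false)),
        poller: poller.clone(),
    };

    // Another thread (or the same one) fires the ping.
    ping.ping();

    // The event loop side: `wait` returns promptly because of the notification,
    // and the dispatch step checks and clears the pending flag.
    let mut events = Vec::new();
    poller.wait(&mut events, Some(Duration::from_secs(1)))?;
    assert!(ping.pending.swap(false, Ordering::SeqCst));
    Ok(())
}
```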
src/sys.rs
Outdated
/// The sources registered as level triggered.
///
/// Some platforms that `polling` supports do not support level-triggered events. As of the time
/// of writing, this only includes Solaris and illumos. To work around this, we emulate level
/// triggered events by keeping this map of file descriptors.
level_triggered: Option<RefCell<HashMap<usize, (Raw, polling::Event)>>>,
So, to make sure I understand correctly, this is emulating level-triggered using oneshot, right?
In that case, does oneshot raise an event if the source is already ready when registered? I assume that yes, as this is necessary for this emulation to work, but I'd like to be sure.
Also, could you expand this comment by explaining how we emulate level-triggers?
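For context, here is a minimal sketch of the re-arming idea under discussion (the helper name and map layout are my assumptions, not the PR's actual code): after each `wait`, every source tracked as level-triggered has its interest re-submitted with `Poller::modify`, so a source that is still ready will be reported again on the next poll.

```rust
use polling::{Event, Poller};
use std::collections::HashMap;
use std::os::unix::io::RawFd;

/// Hypothetical helper: re-arm interest for every emulated level-triggered source.
/// Because registrations are effectively oneshot, re-submitting the interest after
/// every poll means a source that is still ready keeps generating events.
fn rearm_level_triggered(
    poller: &Poller,
    level_triggered: &HashMap<usize, (RawFd, Event)>,
) -> std::io::Result<()> {
    for (fd, interest) in level_triggered.values() {
        poller.modify(*fd, *interest)?;
    }
    Ok(())
}

fn main() -> std::io::Result<()> {
    let poller = Poller::new()?;
    let map: HashMap<usize, (RawFd, Event)> = HashMap::new();
    // In a real loop this would run right after `poller.wait(..)` returns
    // and before the collected events are dispatched.
    rearm_level_triggered(&poller, &map)?;
    Ok(())
}
```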
Ran this code:
use polling::{Poller, Event};
use std::net::{TcpListener, TcpStream};
use std::io::prelude::*;

fn main() {
    let (stream1, mut stream2) = tcp_pipe();
    let mut poller = Poller::new().unwrap();

    // Make `stream1` readable *before* it is registered with the poller.
    stream2.write(b"hello").unwrap();
    poller.add(&stream1, Event::readable(0)).unwrap();

    let mut events = vec![];
    poller.wait(&mut events, None).unwrap();
    println!("{:?}", events);
}

fn tcp_pipe() -> (TcpStream, TcpStream) {
    let listener = TcpListener::bind("127.0.0.1:0").unwrap();
    let addr = listener.local_addr().unwrap();
    let stream1 = TcpStream::connect(addr).unwrap();
    let stream2 = listener.accept().unwrap().0;
    (stream1, stream2)
}
Got this result:
[Event { key: 0, readable: true, writable: false }]
So, yes, it looks like oneshot mode triggers on pre-registration events properly.
Okay, looks good now, thanks!
Replaces the `sys` submodules with the `polling` crate.