
int: Use polling crate as interface to poll system #123

Merged
merged 9 commits on Apr 9, 2023

Conversation

notgull
Member

@notgull notgull commented Jan 1, 2023

Replaces the sys submodules with the polling crate.

@ids1024
Member

ids1024 commented Jan 1, 2023

Using polling, is there a way to subscribe to non-fd kqueue event types as suggested in #116, or would that be impossible with this implementation?

As we discussed on #119, the current EventSource API could violate IO safety with poll, but not with epoll/kqueue which hold the reference to the file in kernel. Though I guess that's mainly a technical issue with bad EventSource implementations, and using RawFd it's already trivial to violate IO safety from safe code anyway.

Anyway, this should provide Windows support, through wepoll? I'm not sure exactly what the implications of that are vs. direct use of native IOCP. If Windows support is added this should probably be documented and have a CI test.

@notgull
Member Author

notgull commented Jan 1, 2023

Using polling, is there a way to subscribe to non-fd kqueue event types as suggested in #116, or would that be impossible with this implementation?

Good idea, I've opened smol-rs/polling#68 to address this.

As we discussed on #119, the current EventSource API could violate IO safety with poll, but not with epoll/kqueue which hold the reference to the file in kernel. Though I guess that's mainly a technical issue with bad EventSource implementations, and using RawFd it's already trivial to violate IO safety from safe code anyway.

I've done some testing with this. The poll() implementation might be IO unsafe in this way, but the rest aren't. In any case, we're planning on migrating to I/O safe primitives once Debian stable supports it. See smol-rs/polling#38

Anyway, this should provide Windows support, through wepoll? I'm not sure exactly what the implications of that are vs. direct use of native IOCP. If Windows support is added this should probably be documented and have a CI test.

True, I'll add it later. I seem to have broken Linux, so I need to go back and figure out how to fix that.

The main thing to keep in mind when using wepoll is that in addition to IOCP it uses \Device\Afd, an unstable, undocumented Windows API. This is mostly just something to be aware of, as it isn't really an issue in practice (for instance, Node.js uses the same unstable API). However, it has caused issues in the past; see tokio-rs/mio#1444.

@ids1024
Member

ids1024 commented Jan 1, 2023

For IO safety, with #119 this runs into issues since register/unregister are passed a BorrowedFd, so the poller shouldn't be able to keep a file descriptor (to call poll on). I think some kind of change to calloop's EventSource API would be necessary to handle this correctly.

Ah, so polling uses wepoll, while mio uses the same technique as wepoll but implemented in Rust? So the advantages and caveats should be similar other than requiring a C compiler.

@notgull
Member Author

notgull commented Jan 1, 2023

For IO safety, with #119 this runs into issues since register/unregister are passed a BorrowedFd, so the poller shouldn't be able to keep a file descriptor (to call poll on).

I mean, in theory, we could just dup the file descriptor.

Ah, so polling uses wepoll, while mio uses the same technique as wepoll but implemented in Rust? So the advantages and caveats should be similar other than requiring a C compiler.

Yes. I've been planning on rewriting the Windows backend for polling from scratch, actually; the wepoll dependency is mildly annoying.

@codecov

codecov bot commented Jan 6, 2023

Codecov Report

Patch coverage: 98.59%; project coverage change: +0.28% 🎉

Comparison is base (cba4d3a) 87.89% compared to head (439a547) 88.17%.

Additional details and impacted files
@@            Coverage Diff             @@
##           master     #123      +/-   ##
==========================================
+ Coverage   87.89%   88.17%   +0.28%     
==========================================
  Files          14       15       +1     
  Lines        1553     1556       +3     
==========================================
+ Hits         1365     1372       +7     
+ Misses        188      184       -4     
Flag Coverage Δ
macos-latest 87.77% <98.59%> (-0.12%) ⬇️
ubuntu-latest 87.70% <98.59%> (?)


Impacted Files Coverage Δ
src/error.rs 0.00% <ø> (ø)
src/sources/timer.rs 86.53% <ø> (ø)
src/sys.rs 98.41% <98.41%> (ø)
src/io.rs 82.75% <100.00%> (+0.09%) ⬆️
src/loop_logic.rs 90.45% <100.00%> (-1.73%) ⬇️

... and 3 files with indirect coverage changes


@notgull
Member Author

notgull commented Jan 6, 2023

I won't enable Windows support in CI yet, since I've stumbled into an open design decision regarding the ping event sources. On Windows, there's no real equivalent to an eventfd or a Unix pipe pair. You can use a TCP self-pipe, but that has a tendency to break.

polling gets around this by posting a null-notification to the IOCP and then interpreting that as a wakeup. I'm not sure if this strategy is possible with the ping source's multiple token values.

@notgull notgull marked this pull request as ready for review January 6, 2023 17:12
@Drakulix
Member

Drakulix commented Jan 6, 2023

I mean, in theory, we could just dup the file descriptor.

Please don't. Doing so can have implications for resource tracking in the DRM subsystem and might make it harder to figure out whether every fd has been properly set to CLOEXEC, which is super important for smithay, where we use this library.

That said I applaud the effort put in to add multi-platform support. 👍

@notgull
Member Author

notgull commented Jan 6, 2023

I realized I probably marked this as ready prematurely, since polling hasn't released smol-rs/polling#59 yet.

@notgull notgull marked this pull request as draft January 6, 2023 23:20
@elinorbgr
Member

Ah, I merged #119 without thinking about this PR, sorry for that. I hope this is not too big of a problem?

@notgull
Member Author

notgull commented Jan 9, 2023

Don't worry, it doesn't intersect much with this PR at all. Besides, this one probably won't be ready for a while.

Fix test failures

Polling does not guarantee this behavior

Increase code coverage by marking certain branches
@notgull
Member Author

notgull commented Mar 8, 2023

polling version 2.6 has been released! The new version not only contains new polling modes, but also uses a new Windows backend with no C dependencies and provides support for handling signals on kqueue. I think this is ready for review now.

@notgull notgull marked this pull request as ready for review March 8, 2023 18:58
@ids1024
Member

ids1024 commented Mar 14, 2023

At least with a bad EventSource implementation, an fd could be registered, closed and possibly re-used, and later used in calls to poll, right? As was mentioned in #119 this technically wasn't an issue with just epoll/kqueue backends.

Technically this allows safe code to cause a violation of IO safety rules, though I'm not sure if polling alone would cause soundness issues. I guess it would cause some sort of (not necessarily unsound) problem if the fd is reused and registered again as a different EventSource?

@ids1024
Member

ids1024 commented Mar 14, 2023

If we're confident that won't cause soundness issues and also doesn't really introduce more weird bugs with bad implementations of EventSource beyond what's already possible, that doesn't need to block merging this PR. But it's worth considering if we can change the calloop API to ensure that's impossible with safe code.

@notgull
Member Author

notgull commented Mar 14, 2023

It's not unsound yet; in the future, I/O safety may be checked by MIRI and other tools. But that would require an ecosystem overhaul that's years out. This isn't an issue for the time being.

@ids1024
Member

ids1024 commented Mar 14, 2023

True, that could produce Miri errors if Miri some day gains a concept of fd provenance and handles poll in some way. (If it doesn't already; I haven't tried running calloop in Miri.) But in actual use, a call to poll through libc is itself opaque to the compiler, and as long as it doesn't affect the behavior of other calls operating on the same fd, it shouldn't actually cause soundness issues in the future.

But yeah, either way it's something that could be improved but shouldn't be a real issue.

Member

@elinorbgr elinorbgr left a comment


That looks pretty good overall, thanks for this contribution!

Just a few nitpicks/questions, and this will need a changelog entry; I'm pretty sure this is a breaking change.

slotmap::new_key_type! {
pub(crate) struct CalloopKey;
}
// The maximum number of sources that we are allowed to have.
Member

It's not the maximum number of sources, it's the number of bits used to store the source id.

src/sys.rs Outdated
Comment on lines 28 to 29
/// are available. If the same FD is inserted in multiple event loops, all of
/// them are notified of readiness.
Member

Is that last part guaranteed?

IIRC we had some woes about a given FD being monitored by two threads using epoll, and only one of the two threads being woken up.

Member

All I see in epoll(7) is this:

If multiple threads (or processes, if child processes have inherited the epoll file descriptor across fork(2)) are blocked in epoll_wait(2) waiting on the same epoll file descriptor and a file descriptor in the interest list that is marked for edge-triggered (EPOLLET) notification becomes ready, just one of the threads (or processes) is awoken from epoll_wait(2). This provides a useful optimization for avoiding "thundering herd" wake-ups in some scenarios.

Notably this says "waiting on the same epoll file descriptor", so as long as the different event loops have their own epoll instances, this shouldn't be a problem?

According to epoll_ctl(2) there is an EPOLLEXCLUSIVE flag to opt into exclusive wakeups more generally.

Member Author

While it isn't guaranteed yet (see smol-rs/polling#81), the current plan moving forward is to make it specified behavior that all reactors polling the same FD are woken up (at least in oneshot mode).

Member

Overall, my point is that we should not make promises in the documentation that we cannot keep, so I'd like this documentation to reflect what is actually guaranteed and what is not.

For example, it's perfectly acceptable to say that the behavior is platform-dependent, but in that case it'd be good to link to the appropriate platform-specific documentation.

src/sys.rs Outdated

/// Thread-safe handle which can be used to wake up the `Poll`.
#[derive(Clone)]
pub(crate) struct Notifier(Arc<Poller>);
Member

It's probably possible to rework the Ping event source to use that, by integrating it more tightly like the timers.

Though that can probably be done in a later PR.

src/loop_logic.rs Outdated
src/sys.rs
src/sys.rs Outdated
Comment on lines 179 to 184
/// The sources registered as level triggered.
///
/// Some platforms that `polling` supports do not support level-triggered events. As of the time
/// of writing, this only includes Solaris and illumos. To work around this, we emulate level
/// triggered events by keeping this map of file descriptors.
level_triggered: Option<RefCell<HashMap<usize, (Raw, polling::Event)>>>,
Member

So, to make sure I understand correctly, this is emulating level-triggered using oneshot, right?

In that case, does oneshot raise an event if the source is already ready when registered? I assume that yes, as this is necessary for this emulation to work, but I'd like to be sure.

Also, could you expand this comment by explaining how we emulate level-triggers?

Member Author

Ran this code:

use polling::{Event, Poller};
use std::io::prelude::*;
use std::net::{TcpListener, TcpStream};

fn main() {
    let (stream1, mut stream2) = tcp_pipe();
    let poller = Poller::new().unwrap();

    // Make stream1 readable *before* registering it.
    stream2.write_all(b"hello").unwrap();
    poller.add(&stream1, Event::readable(0)).unwrap();

    let mut events = vec![];
    poller.wait(&mut events, None).unwrap();
    println!("{:?}", events);
}

fn tcp_pipe() -> (TcpStream, TcpStream) {
    let listener = TcpListener::bind("127.0.0.1:0").unwrap();
    let addr = listener.local_addr().unwrap();
    let stream1 = TcpStream::connect(addr).unwrap();
    let stream2 = listener.accept().unwrap().0;
    (stream1, stream2)
}

Got this result:

[Event { key: 0, readable: true, writable: false }]

So, yes, it looks like oneshot mode triggers on pre-registration events properly.

@elinorbgr
Member

Okay, looks good now, thanks!

@elinorbgr elinorbgr merged commit 725e16a into Smithay:master Apr 9, 2023
@notgull notgull deleted the polling branch April 9, 2023 18:34