
more dbus #3

Open
asdf8dfafjk opened this issue Oct 22, 2022 · 4 comments
Labels
enhancement New feature or request

Comments

@asdf8dfafjk

asdf8dfafjk commented Oct 22, 2022

Hi, to continue the conversation from Reddit where you were interested in feedback: how about this framework having two modular components, one that publishes events on D-Bus and one that handles the actions part? Here is a rough workflow:

M1:

  1. Collect all events from udev, etc. (also from D-Bus itself?)

  2. Republish them on your own bus

M2:

  1. Provide a Python API to subscribe directly to your events (or to raw D-Bus events)

  2. Affect tools like polybar, notifications, etc.
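The M1/M2 split above can be modeled in-process before committing to an actual D-Bus interface. Here is a minimal Python sketch; all names are hypothetical stand-ins for the proposed bus, not part of the project:

```python
from collections import defaultdict

class EventBus:
    """Toy stand-in for the proposed D-Bus interface between M1 and M2."""

    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        self._subscribers[topic].append(callback)

    def publish(self, topic, payload):
        for callback in self._subscribers[topic]:
            callback(payload)

# M1: collect events from sources like udev and republish them on the bus.
def m1_republish(bus, source_event):
    bus.publish(f"events.{source_event['source']}", source_event)

# M2: subscribe and react (e.g. refresh polybar, send a notification).
bus = EventBus()
seen = []
bus.subscribe("events.udev", seen.append)

m1_republish(bus, {"source": "udev", "action": "add", "device": "sda1"})
print(seen[0]["device"])  # -> sda1
```

Over real D-Bus the `publish` call would become a signal emission and `subscribe` a match rule, but the shape of the contract is the same.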

You're already doing everything except M1's step 2. I think adding it would strongly appeal to the existing target audience: people who want to minimize what runs on their computers.

E.g., I use a heavily modified fork of qmpanel to which I have added some D-Bus listening, and I would write my own C++ to do stuff by listening on your bus.

D-Bus would allow me to do things in-process, and in general it is a more standard approach, IMHO.

@kriansa
Owner

kriansa commented Oct 22, 2022

As I understand it, this design would require us to split the runtime into two separate pieces: one would be a "dbus proxy", responsible for listening for system events and producing D-Bus events; the other would be a "producer", listening for these D-Bus events and dispatching them to registered callbacks. While the second piece could be optional and "pluggable", the software's general value is not evident without it.

In order to make this architecture reliable, we would need to ensure that these two separate pieces run in parallel. Whether or not they run in the same process, managing failures across them is still complex (e.g., would a crash in the proxy bring down the producer, and vice versa?), and keeping them coordinated would definitely require a third piece, some sort of manager.
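For illustration, that "manager" piece could be as small as a restart supervisor. A rough sketch, with hypothetical short-lived commands standing in for the two real pieces (nothing here is part of the project):

```python
import subprocess
import time

def supervise(pieces, max_restarts=2):
    """Keep each piece running; restart any piece that exits, up to a limit."""
    procs = {name: subprocess.Popen(cmd) for name, cmd in pieces.items()}
    restarts = dict.fromkeys(pieces, 0)
    while procs:
        time.sleep(0.05)
        for name in list(procs):
            if procs[name].poll() is None:
                continue  # still running
            if restarts[name] < max_restarts:
                restarts[name] += 1
                procs[name] = subprocess.Popen(pieces[name])
            else:
                del procs[name]  # restart budget exhausted, give up
    return restarts

# Short-lived stand-ins for the "proxy" and "producer" pieces.
restarts = supervise({"proxy": ["sleep", "0.1"], "producer": ["sleep", "0.1"]})
print(restarts)  # each piece gets restarted twice before the supervisor gives up
```

In practice one would lean on an existing supervisor (e.g. systemd user units) rather than hand-rolling this, which is part of the coordination cost being weighed here.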

I get that this would increase the potential for the integration of new tools in this ecosystem. I imagine, for example, someone interfacing it with different languages and even making it easier to configure by using a config language such as YAML.

I must say, though, that the increased complexity probably won't pay off in the short or even medium term. Especially at this initial stage, where I'm still trying to figure out the exact target audience for the tool so we can start engaging a community, I prefer not to engage in a complex architectural change such as this one, and instead focus on delivering an end-to-end solution that solves this problem very well.

My immediate goals would be to first make it more stable. I've been honing this tool since I had a working version of it about 3 months ago, and at least for my use cases it feels very stable now, but I don't know how well it will behave on other systems. That alone is not an easy task; for starters, we need a test suite, something that hasn't been started yet. In parallel, I'd love to increase the coverage of system events we can interact with, allowing even more powerful usages. I can't help comparing it to Hammerspoon in terms of integrations. 😝

I appreciate the thorough suggestion, and please, keep the feedback coming. I'd love to know how it's working out for you once you have time to play with it!

@asdf8dfafjk
Author

Yes, everything you're saying makes sense. Architecture and overarching goals should probably take a back seat to immediate functionality needs, especially considering the return on investment.

Thank you for taking the time to think about my comment and write this.

That said, I guess if M2 were to subscribe directly to M1's D-Bus interface, then I suspect there would be less worry about coordination, so to speak.

The way I personally see your framework: so much happens on a system that I have potential reactions to, but I'm not going to watch every individual event source (udev, this process's D-Bus, another process's D-Bus, and so on). Now there is a single place letting me subscribe to all of them in a somewhat standard way. The reactions themselves are (speaking personally) not important to me. But again, every niche has a majority who just want stuff done: 90% of tiling WM users are still going to need a way to just get things done, and this could bring more people over from DEs.

Again, just bouncing ideas really.

@kriansa
Owner

kriansa commented Oct 26, 2022

I still think that having M2 subscribe to M1 while offering a fully baked solution that "just works", with the existing interface, will require some coordination and thus increase the complexity. Debugging failed events, for instance, would be harder and noisier, since we would need to add logs to both the emitter and the receiver.

On the other hand, having two separate programs would increase maintainability over time. One would simply be responsible for proxying system events to a single, unified (and hopefully well-documented) D-Bus interface; the other would do whatever it likes on top of that. It would follow the UNIX philosophy more closely, creating even greater potential for scalability among desktop tools.

I'll leave this issue open in case someone finds this idea interesting and wants to take a stab at it.

@kriansa kriansa added the enhancement New feature or request label Oct 26, 2022
@asdf8dfafjk
Author

I agree 👍
