[WIP] feat(/sources): add bulk DELETE endpoint #7228
base: develop
Conversation
Strict addition to the v1 Journalist API for deleting sources in bulk with O(1) instead of O(n) latency from the perspective of a remote (over Tor) client. No breaking changes; no side effects. Deliberately overdocumented for now. You can test with `make dev` with (e.g.):

```shell
$ curl -X DELETE \
    -d "$(curl http://localhost:8081/api/v1/sources | jq '[.sources[] | .uuid]')" \
    -H "Content-Type: application/json" \
    http://localhost:8081/api/v1/sources
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 16231  100 16231    0     0   489k      0 --:--:-- --:--:-- --:--:--  495k
{
  "failed": [],
  "succeeded": [
    "cf7bd746-80ce-4e1f-8c94-668fde45347b",
    "57d67752-4a63-4d72-8177-4e883248abe7",
    "bd900dbd-7bfa-4a50-b485-b969a2f0351e"
  ]
}
```
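The succeeded/failed partition in that response could be modeled roughly as follows. This is a hedged sketch in plain Python, not SecureDrop's actual implementation; `delete_collection` here is a stand-in for the real server-side deletion helper, and the lookup is a plain dict rather than a database query.

```python
from typing import Callable, Dict, List, Tuple


def bulk_delete(
    uuids: List[str],
    sources: Dict[str, object],
    delete_collection: Callable[[object], None],
) -> Tuple[dict, int]:
    """Delete each source named by UUID, partitioning the results into
    the "succeeded" and "failed" lists of the endpoint's JSON response."""
    succeeded: List[str] = []
    failed: List[str] = []
    for uuid in uuids:
        source = sources.get(uuid)
        if source is None:
            # Deletion is idempotent, so a nonexistent source counts
            # as a success (mirroring the PR's NoResultFound handling).
            succeeded.append(uuid)
            continue
        try:
            delete_collection(source)
            succeeded.append(uuid)
        except Exception:
            failed.append(uuid)
    body = {"succeeded": succeeded, "failed": failed}
    status = 200 if not failed else 207  # Success or Multi-Status
    return body, status
```

For example, `bulk_delete(["a"], {}, lambda s: None)` returns `({"succeeded": ["a"], "failed": []}, 200)` because the missing source still counts as deleted.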
We all know Tor's latency penalty firsthand, but here it is quantified:

```shell
# Just a GET to the Source Interface, but the difference is negligible versus
# a POST/DELETE to the Journalist Interface:
[workstation user ~]% time ./fetch_source_interface.sh 1
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  3098  100  3098    0     0   1111      0  0:00:02  0:00:02 --:--:--  1111
./fetch_source_interface.sh 1  0.01s user 0.02s system 1% cpu 2.823 total

# So, to delete 10 sources in 1 bulk operation, that would be ~3 seconds from
# button-press to feedback in the Client...
[workstation user ~]% time ./fetch_source_interface.sh 10
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  3098  100  3098    0     0   1063      0  0:00:02  0:00:02 --:--:--  1063
100  3098  100  3098    0     0   1227      0  0:00:02  0:00:02 --:--:--  1227
100  3098  100  3098    0     0   1099      0  0:00:02  0:00:02 --:--:--  1098
100  3098  100  3098    0     0    743      0  0:00:04  0:00:04 --:--:--   743
100  3098  100  3098    0     0    680      0  0:00:04  0:00:04 --:--:--   679
100  3098  100  3098    0     0   1234      0  0:00:02  0:00:02 --:--:--  1235
100  3098  100  3098    0     0    828      0  0:00:03  0:00:03 --:--:--   828
100  3098  100  3098    0     0    692      0  0:00:04  0:00:04 --:--:--   692
100  3098  100  3098    0     0    783      0  0:00:03  0:00:03 --:--:--   782
100  3098  100  3098    0     0   1227      0  0:00:02  0:00:02 --:--:--  1228
./fetch_source_interface.sh 10  0.17s user 0.25s system 1% cpu 34.609 total

# ...versus ~35 seconds to delete 10 sources in 10 individual jobs.
```
Overall I think this is a good idea, especially given the known latency wins. IMO the complexity primarily comes in how we do error handling (fail fast? keep trying?). I would like to see some limit imposed (maybe 50?) just so we're not running close to timeouts in worst-case scenarios of large sources and/or legacy GPG keys. I'm not sure whether that means the client should also impose a limit or whether it needs to implement batching, which is why I'm bringing it up now.
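If a server-side cap were adopted, the client could batch transparently; a sketch, where the limit of 50 is purely illustrative (the reviewer's suggested value, not anything the PR implements):

```python
from typing import Iterator, List


def chunked(uuids: List[str], limit: int = 50) -> Iterator[List[str]]:
    """Yield successive batches no larger than the server's cap, so a
    client can issue one bulk DELETE per batch."""
    for start in range(0, len(uuids), limit):
        yield uuids[start : start + limit]
```

Deleting 120 sources would then issue 3 DELETE requests (50 + 50 + 20) rather than 120 individual ones, keeping most of the latency win while bounding each request's server-side work.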
I'm moving this out of draft status just so discussion will appear on Slack. It's still WIP. We discussed this today and concluded:
@zenmonkeykstop, to clarify from backlog-pruning, I have no further implementation work planned here. There's a naïve test plan here already, but I need to add tests. Let me know if you'd like me to expand the test plan as well. Otherwise the implementation here is ready for review and could be released at any time with no breaking changes on the Server or downstream changes required in the Client.

I've marked this for v2.11 for discussion when we plan our next sprint.
```python
            # Deletion is idempotent, so count nonexistent `Source`s as
            # successes.
            except NoResultFound:
```
The try/except should only cover exactly the `Source.query.filter().one()` line, so that any error in `delete_collection`/`append` is not caught by it. Or just use `.one_or_none()` and avoid catching `NoResultFound` entirely.
```python
        transit.
        """
        data = request.json
        if not isinstance(data, list):
```
Do you also want to check that all the contents of the list are strings?
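A stricter validation along the lines the reviewer suggests could look like this (`validate_payload` is a hypothetical helper name, not part of the PR):

```python
def validate_payload(data) -> bool:
    """Accept only a JSON array whose elements are all strings."""
    return isinstance(data, list) and all(isinstance(item, str) for item in data)
```

Note that `all()` over an empty list is `True`, so `validate_payload([])` passes; whether an empty list should be a 400 or a harmless no-op is a separate decision.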
"succeeded": succeeded, | ||
} | ||
), | ||
200 if len(failed) == 0 else 207, # Success or Multi-Status |
If they all fail, should that still be 207? I think that's an acceptable compromise to keep the complexity of the client part down.
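One way to frame the compromise being discussed (a sketch of the design choice, not what the PR implements):

```python
def choose_status(succeeded, failed) -> int:
    """Pick the HTTP status for a bulk-delete response."""
    if not failed:
        return 200  # every requested source was deleted
    # 207 Multi-Status for partial -- or, per the comment above, even
    # total -- failure keeps the client's handling uniform: it always
    # inspects the "failed" list rather than branching on status codes.
    return 207
```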
Status
Work in progress:
Description of Changes
In support of freedomofpress/securedrop-client#2160, this pull request proposes a Journalist API endpoint for deleting sources in bulk with $$O(1)$$ instead of $$O(n)$$ latency from the perspective of a remote (over Tor) client. This is a strict addition to the v1 Journalist API: no breaking changes; no side effects.
Testing
Needs tests. For now you can test with `make dev` with (e.g.) the `curl` invocation above.

Deployment
Checklist

If you made changes to the server application code:
- Linting (`make lint`) and tests (`make test`) pass in the development container

If you made non-trivial code changes:

Choose one of the following: