Send an email when a file transfer fails #1353
Comments
#1350 should also already help in a similar way.
We don't need reports or any special entry_points ... there are multiple worklists for exactly this purpose...
when a send fails, the corresponding message should be in worklist.failed.
but that message will go into the retry queue, and be retried five minutes later... so if it fails again, there will be an email every five minutes ... for about 3 days (based on default settings.)
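A minimal sketch of where such failures surface, assuming the usual sr3 flowcb plugin layout (FlowCB, after_work and worklist.failed are the standard names; the class name and log message here are purely illustrative):

```python
# Illustrative sketch only: a flowcb plugin that notices failed transfers.
import logging

from sarracenia.flowcb import FlowCB

logger = logging.getLogger(__name__)


class NotifyOnFailure(FlowCB):
    def after_work(self, worklist):
        # messages whose transfer failed end up in worklist.failed and,
        # as noted above, will be queued for retry.
        for msg in worklist.failed:
            logger.warning("transfer failed for %s" % msg.get('relPath'))
```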
I didn't even think about the fact that ...
To avoid multiple emails being sent, I think we could probably leverage ... If we also add ...
when I said the message and "all its fields" ... that includes the "report" field you mentioned... so that could be leveraged in writing the mail message.
I was able to get an email to send when a transfer failed in my test plugin. However, I had to work around the ... This is what I did to work around the problem (in the ...)
A workaround in the email plugin could be to have a new option that uses ...
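The code from that test plugin is not included in this thread. For reference, one self-contained way a plugin could send such a notification using only the Python standard library; the server, sender and recipient parameters are placeholders, not anything from the comment above:

```python
# Hypothetical helper a plugin could call once per failed transfer.
import smtplib
from email.message import EmailMessage


def send_failure_email(smtp_server, sender, recipients, relpath, error_text=''):
    m = EmailMessage()
    m['Subject'] = 'sr3 transfer failed: %s' % relpath
    m['From'] = sender
    m['To'] = ', '.join(recipients)
    m.set_content('The transfer of %s failed.\n\n%s' % (relpath, error_text))
    with smtplib.SMTP(smtp_server) as s:
        s.send_message(m)
```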
I've been trying to integrate the diskqueue in the plugin (to avoid emails being sent multiple times) and have gotten unsatisfactory results. During housekeeping, before files get retried, I'm not able to find the diskqueue file within the config's cache directory. This is what is seen from the running process.
I checked back the diskqueue logic (see sarracenia/sarracenia/diskqueue.py, lines 271 to 279 at 5c78ad8), and see that when a ...
A band-aid workaround for this is to append the retried messages to a list, and check if they exist within the list every time the plugin is called. This is not a good long-term workaround though: there's no way to clear the list if the retried file eventually gets sent, and if the process gets restarted, the emails will be sent again.
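That band-aid might look roughly like the sketch below; the key used to identify a message is an assumption, and the limitations are exactly the ones noted above (the set is never cleared and is lost on restart):

```python
# Band-aid sketch: remember in memory which messages already triggered an email.
from sarracenia.flowcb import FlowCB


class NotifyOnce(FlowCB):
    def __init__(self, options):
        super().__init__(options)
        self.already_mailed = set()

    def after_work(self, worklist):
        for msg in worklist.failed:
            # assumed key: baseUrl + relPath identifies "the same file"
            key = msg.get('baseUrl', '') + msg.get('relPath', '')
            if key in self.already_mailed:
                continue
            self.already_mailed.add(key)
            # send the notification email here (e.g. the smtplib helper above)
```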
I might not understand what you are trying to do. If you want to prevent retries after you have sent the email... all you need to do is remove the messages from worklist.failed, so the loop should be something like:

to_mail = worklist.failed
worklist.failed = []
for m in to_mail:
    # whatever the mail logic is
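Spelled out as a complete callback, that drain pattern might look like the sketch below; note that emptying worklist.failed also cancels the retry, which is exactly the trade-off discussed in the following comments:

```python
# Sketch of the drain pattern: mail each failed message once,
# and drop it from worklist.failed so it is NOT retried.
from sarracenia.flowcb import FlowCB


class MailAndDrop(FlowCB):
    def after_work(self, worklist):
        to_mail = worklist.failed
        worklist.failed = []
        for m in to_mail:
            # whatever the mail logic is (e.g. the smtplib helper above)
            pass
```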
Do you want to suppress retrying of sending the file... or just prevent multiple emails (but keep retrying so it gets sent eventually)? I guess the problem here is that you are trying to use the unmodified email sender... it sounded like a great idea at first... but it probably doesn't quite match (you need to do different things with the worklists vs. the built-in email send thing.) ... I think you might need a custom callback that re-implements the mail logic.
That won't work because we want the file to keep retrying. The email would just be to notify the client that a transfer failed; that's why we would only want to send it once. We don't want to spam the client, but we want to try to resend the file normally.
OK, then look at the fields in the message... I think there is a field set when a message is a retry... something like msg['retry'] or msg['isRetry'], and you don't send the mail if that field is set.
There is no retry field available in the message when it retries, even with ...
However, we can still add the field manually in the message. Adding the below works in my plugin.
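The snippet referred to above is not included in this thread. A hedged guess at the general shape of such a workaround, using a made-up field name, might be something like this; the assumption (suggested by the comment above) is that extra fields set on the message survive the trip through the retry queue:

```python
# Hypothetical sketch only: 'failure_notified' is a made-up field name,
# not an official sarracenia message field.
from sarracenia.flowcb import FlowCB


class NotifyOnceViaFlag(FlowCB):
    def after_work(self, worklist):
        for msg in worklist.failed:
            if msg.get('failure_notified'):
                continue  # already mailed on an earlier attempt
            # send the notification email here, then flag the message
            msg['failure_notified'] = True
```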
We have a client that would like to have an email be sent when a transfer fails.
I've been thinking this might be possible with message reports.
@petersilva said
I'm thinking that if we can have the reported message include the transfer error, and feed that message to an email sender, this might work. I haven't checked the code to confirm this.
I'm not sure of another way to do this. Possibly another option could be to introduce a new flowCB entry point?
It would also be good for our team to have this implemented, in case critical data feeds start having transfer problems; we could send an email to NetOps.