[Bug]: Cronicle service not starting after it has stopped #877
After looking through previous issues, I found one similar to mine (#570), so apologies for the duplicate. I could still use some assistance on this matter, though. As you can see, Cronicle is trying to write to a file that even nano cannot open without an error. This is the output (last 2 entries) of debug.sh:
Could we maybe convert this issue from a bug to a feature request? The feature request would be to add a configurable maximum number of log files that Cronicle can hold at any time, plus an option for each job to skip writing logs (or perhaps to stop writing once the log file size exceeds X KB).
It sounds like you're running a LOT of jobs, or you have a VERY small number of inodes on your disk. Either way, I highly suggest you decrease this configuration parameter: https://github.com/jhuckaby/Cronicle/blob/master/docs/Configuration.md#job_data_expire_days

Set it very low, like 30 days or even lower, if you don't need long data retention for your jobs. However, note that this parameter does not retroactively affect existing jobs, only new ones, so you may need to do an export, wipe (delete), then import: https://github.com/jhuckaby/Cronicle/blob/master/docs/CommandLine.md#data-import-and-export

The data export / import does not include historical job data. I'd highly recommend you do this anyway, because running out of disk space (or inodes) leaves Cronicle's "database" in an indeterminate / corrupted state.

Also, for a high job volume setup, please consider using something other than the local filesystem. Cronicle can use S3 or any S3-compatible service (MinIO, etc.).

Good luck, and I'm very sorry you ran into this.
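For reference, a sketch of what lowering the retention setting might look like in Cronicle's `config.json`. The key name comes from the Configuration.md page linked above; the value of 30 is the example retention from this comment, not a recommendation for every setup:

```json
{
	"job_data_expire_days": 30
}
```

As noted above, this only affects jobs completed after the change; existing job data keeps its original expiration.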
I ended up removing files from /data and I messed up the instance badly, haha, but I reinstalled it fresh. Indeed, we are running a lot of jobs: at least 5 every 5 minutes, with more to come, even at 2-minute intervals. I have set it up with Couchbase now; do you think it's a reliable solution for storing the logs? From what I have researched, Couchbase would not eat up inodes as badly as the filesystem does, and the only constraint would be space on the partition.
Couchbase is very reliable in my experience. Just note that they have a 20 MB limit on object size (or at least they did the last time I checked). So make sure your job logs are smaller than 20 MB each 😊
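For anyone following along, a rough sketch of what a Couchbase storage section in Cronicle's `config.json` might look like. The key names are from my reading of the Cronicle / pixl-server-storage setup docs and the values are placeholders, so double-check against the current documentation for your version:

```json
{
	"Storage": {
		"engine": "Couchbase",
		"Couchbase": {
			"connectString": "couchbase://127.0.0.1",
			"bucket": "cronicle",
			"password": ""
		}
	}
}
```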
Hi Joseph,
Are there any documented steps for a wipe (delete)? We've also had a crash due to inodes and would like to reset to reduce inode usage, but we're hesitant to touch the data folder. There also seem to be a lot of files in _cleanup; are these safe to remove? Thanks for your continued efforts on this project.
I thought I would loop back, as I ended up testing and running the process myself.
Very glad you were able to figure it out, and complete a successful migration!
Not really, it's just
Those files are part of a database table that tracks expirations of the job logs, so it knows when to delete them all. I would not remove those directly, unless you are wiping everything.
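Before deciding whether to wipe, it can help to see how many files (and therefore inodes) the data store is actually consuming. A minimal sketch, assuming the default install path of `/opt/cronicle/data` (adjust `DATA_DIR` for your setup; the `CRONICLE_DATA_DIR` variable here is just a hypothetical override, not something Cronicle defines):

```shell
# Rough count of files (inodes) consumed by Cronicle's data store.
# /opt/cronicle/data is the default install location; override via
# the (hypothetical) CRONICLE_DATA_DIR variable if yours differs.
DATA_DIR="${CRONICLE_DATA_DIR:-/opt/cronicle/data}"

# Each file costs one inode, so this count approximates inode usage.
find "$DATA_DIR" -type f 2>/dev/null | wc -l
```

A very large count here, relative to `df -i` totals, is a sign the retention settings discussed earlier need lowering.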
Is there an existing issue for this?
What happened?
The Cronicle service failed 2 days ago:
I cannot start it at all:
This was the error I got using:
The file it tried to open was logs/Storage.log. It was really small (77 KB), but even nano could not open it; it gave me the same error.
Space is not a problem; we have enough free space on disk. What may be the issue? The service stopped on Saturday and we lost a lot of important logs and actions.
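A note for anyone hitting the same symptom: free disk space and free inodes are tracked separately, so a disk can report plenty of space while the filesystem is completely out of inodes, which produces exactly this kind of failure to open or write small files. The standard check is `df -i`:

```shell
# Disk space (df -h) can look fine while the filesystem has no inodes left.
# An IUse% at or near 100% means no new files can be created, which matches
# errors opening/writing small files like logs/Storage.log.
df -i /
```

If `IUse%` is at or near 100%, the inode-exhaustion diagnosis in the replies below applies.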
Operating System
Ubuntu 22.04
Node.js Version
v20.18.0
Cronicle Version
0.9.71
Server Setup
Single Server
Storage Setup
Local Filesystem
Relevant log output
Code of Conduct