performance enhancements #238
Hi,

I have been playing with this for about two weeks now on a 12-core Xeon server with 64 GB of RAM.

Is there any way you could improve the resource usage? When running commands like minicron server start, or just watching it under normal usage, this thing works like a pig: I have 2.5 GB of RAM used to manage some 30 cron jobs.

I really like the interface and everything, but could you optimize it a bit? I don't know if this is a Ruby thing (I hate Ruby), but it would be nice if it didn't require a spaceship to run some cron jobs.

Thanks.

Comments
Is the 2.5 GB of RAM usage coming from the process that is running the server, or from the jobs themselves?
It's coming from the process running the server. I just looked now and it seems to be up to 12 GB. I have 64 GB of RAM on this server, so it's not really causing issues right now, but 12 GB of RAM for a cron job manager is kind of scary. If I restart it, usage goes down and then starts climbing again, like Chrome. I've just left it like this so we can debug it or maybe improve whatever is going on. Here is the top output:

12852 root 20 0 11.780g 0.011t 5240 S 4.3 18.3 1009:54 ruby
Does the memory usage steadily increase after a restart? It might be a memory leak as opposed to just general inefficiency.
Dang, I missed your reply. Yes, the memory usage increases slowly after a restart. I restarted it at 12 GB 12 days ago, and now it's back up to 10 GB and climbing. I am running CentOS 7.2 64-bit.
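For reference, one way to capture that growth curve is to log the server process's resident memory at a fixed interval and watch the trend. A minimal standalone sketch, not anything built into minicron; the PID is passed as an argument and the one-minute interval is an arbitrary choice:

```ruby
# rss_watch.rb -- log a process's resident set size once a minute so the
# growth curve can be plotted later. Linux-only (reads /proc).
# Usage: ruby rss_watch.rb <pid>
require "time"

pid = Integer(ARGV.fetch(0))

loop do
  # VmRSS in /proc/<pid>/status is the resident memory in kB
  status = File.read("/proc/#{pid}/status")
  rss_kb = status[/^VmRSS:\s+(\d+)/, 1].to_i
  puts "#{Time.now.utc.iso8601} rss_mb=#{rss_kb / 1024}"
  sleep 60
end
```

A steady, roughly linear climb in that log would point at a leak (objects retained indefinitely), whereas growth that plateaus is more typical of caching or heap fragmentation.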
Rocking a good 13 GB of RAM used now:

5160 root 20 0 12.761g 0.012t 5256 S 1.3 19.9 1193:34 ruby
Up to 23 GB now:

5160 root 20 0 23.112g 0.022t 4844 S 7.0 36.4 2175:48 ruby

Do you want to test anything?
I was checking the database and noticed that the interface was getting considerably sluggish as the job execution count went up. I had over 600k rows in job_executions and it was working like a pig, so I emptied that table along with job_execution_outputs, which had hit 1.5 GB across 11 million rows, and now the thing is speedy.
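For anyone hitting the same slowdown, pruning old rows on a schedule would keep the history bounded without wiping it entirely. A rough sketch, assuming a MySQL backend via the mysql2 gem; the two table names are the ones mentioned above, but the column names (id, job_execution_id, created_at), the connection details, and the 30-day retention are all assumptions to adjust:

```ruby
# prune_history.rb -- trim job execution history older than a cutoff so
# job_executions / job_execution_outputs stay small and the UI stays fast.
require "date"
require "mysql2"

# Placeholder connection details -- point these at the real database.
client = Mysql2::Client.new(
  host: "localhost", username: "minicron",
  password: "secret", database: "minicron"
)

cutoff = (Date.today - 30).to_s # assumed 30-day retention window

# Delete child rows first so no outputs are left pointing at missing executions.
client.prepare(<<~SQL).execute(cutoff)
  DELETE FROM job_execution_outputs
  WHERE job_execution_id IN (
    SELECT id FROM job_executions WHERE created_at < ?
  )
SQL
client.prepare("DELETE FROM job_executions WHERE created_at < ?").execute(cutoff)
```

Run from cron itself (daily, say), something like this would keep the tables from ever reaching the 11-million-row mark again.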