The log files (in `/root/.forever`) created by `forever` have reached a large size and are almost filling up the hard disk.

If the log file is deleted while the `forever` process is still running, `forever logs 0` returns `undefined`. The only way to resume logging for the current `forever` process is to `stop` it and `start` the node script again.

Is there a way to just trim the log file without disrupting logging or the `forever` process?
Forever.js keeps writing to the same file handle, so ideally it would support a signal that tells it to rotate to a different file. Without that, which would require a code change to the Forever.js package, your options look like:
`cp forever-guid.log backup && :> forever-guid.log;`

This carries a slight risk: if you're writing to the log file at a fast pace, a log line may land between the copy and the truncation and be lost.
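As a sketch, the copy-then-truncate one-liner can be wrapped in a small function that keeps a timestamped backup (the paths here are assumptions; substitute your actual forever GUID log):

```shell
#!/bin/sh
# rotate_log LOGFILE BACKUP_DIR
# Copies the log aside with a timestamp, then truncates it in place with
# `: >` so the file handle forever holds open stays valid (same inode).
rotate_log() {
  log=$1
  backup_dir=$2
  mkdir -p "$backup_dir"
  cp "$log" "$backup_dir/$(basename "$log").$(date +%Y%m%d%H%M%S)"
  : > "$log"   # truncate without replacing the inode
}
```

You would call it with something like `rotate_log /root/.forever/forever-guid.log /root/.forever/backups`. It has the same small race window as the one-liner above.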
You can set up logrotate to watch the forever log directory and copy-truncate the logs automatically based on file size or time.
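For example, a logrotate rule along these lines (the path, size threshold, and retention count are assumptions; `copytruncate` is the key directive, since it copies the log and truncates it in place rather than renaming it out from under forever):

```conf
/root/.forever/*.log {
    size 50M
    rotate 5
    compress
    missingok
    notifempty
    copytruncate
}
```

Note that `copytruncate` is documented to have the same inherent race as the manual copy-and-truncate: lines written between the copy and the truncate can be lost.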
You can have your logging code check how many lines the log file contains and then perform the copy-truncate itself; this would allow you to avoid the potential data loss.
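One way to sketch that check as a standalone script you could run from cron (the threshold and the `.bak` naming are hypothetical choices, not forever defaults):

```shell
#!/bin/sh
# rotate_if_large LOGFILE MAX_LINES
# Copies and truncates LOGFILE only when it has grown past MAX_LINES lines;
# otherwise leaves it untouched.
rotate_if_large() {
  log=$1
  max_lines=$2
  lines=$(wc -l < "$log")
  if [ "$lines" -gt "$max_lines" ]; then
    cp "$log" "$log.$(date +%s).bak"   # keep a timestamped backup alongside
    : > "$log"                         # truncate in place; inode unchanged
  fi
}
```

Run periodically (e.g. from cron) this keeps rotations infrequent, which shrinks the window in which a log line could be lost.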
EDIT: I originally thought that `split` and `truncate` could do the job. They probably can, but an implementation would look really awkward. `split` doesn't have a good way of splitting the file into a short piece (the current log) and a long piece (the backup). `truncate` (which, besides not always being installed) doesn't reset the write pointer, so forever just keeps writing at the same byte offset it would have, resulting in strange data.