Manage your logs using Logrotate

You have finally set up your own server and your application using this guide, and you feel proud that your app is now running on the vast web. A few days later, when you access your application’s URL, you are greeted with a Server Not Responding error! In a panic, you SSH into your server to see what is going on, and soon find out that you have run out of disk space.

What happened?

The most likely reason is that your log files grew larger and larger as your application ran, until they ate through the entire disk. Some possible causes are:

  • A background process that runs database queries at set intervals, with each query being logged
  • A background queue such as DelayedJob that continuously logs debug messages into the log file
  • Malicious scrapers and bots that repeatedly attempt to gain admin access to your application (by probing common login paths such as those used in WordPress, etc.)
  • Unnecessary logging of events with large contents (e.g. logging entire objects or entire request payloads)

To prevent this scenario, we need a way to clean up our logs automatically; manually removing old logs is a chore that tends to be forgotten. One popular tool for this is a program called logrotate.

Logrotate works by making sure that your log files don’t grow in size unchecked. It compresses the log file and labels it with a timestamp so you can go back to older log files, while keeping the current log file contents only within the specified time range. For example, it can compress and archive logs at the end of the day every day so you begin with a fresh log file at the start of each day.


Let’s start by installing it. Here we assume that you are using Ubuntu or another Debian-based system.

sudo apt-get update
sudo apt-get install logrotate

System Configuration

Logrotate’s configuration file can be found in /etc/logrotate.conf. Modify it using your preferred text editor.

sudo nano /etc/logrotate.conf

By default you will see something like this:

# see "man logrotate" for details
# rotate log files weekly
weekly

# keep 4 weeks worth of backlogs
rotate 4

# create new (empty) log files after rotating old ones
create

# uncomment this if you want your log files compressed
# compress

This is the system-wide configuration of logrotate. Modifying this file directly is not recommended (although you can if you wish); the best practice is to create a specific configuration file for each log we want to rotate and archive.

Custom Configuration

The custom configuration files are placed in the /etc/logrotate.d directory. We create a separate file for each log type so it is easier to modify and manage. For example, here is a configuration for rotating Nginx logs:

Start by creating a custom configuration file called “nginx”:

sudo nano /etc/logrotate.d/nginx

The contents of the file look like this. Here, rotate 30 keeps 30 rotated copies of each log; daily, compress, missingok, and notifempty are typical companion directives that rotate once a day, gzip the archives, and skip missing or empty logs:

/opt/nginx/logs/access.log {
  daily
  rotate 30
  compress
  missingok
  notifempty
}

/opt/nginx/logs/error.log {
  daily
  rotate 30
  compress
  missingok
  notifempty
}

Reignite Your Finance

When was the last time you looked at your expenses this month? How much are you saving per year? Do you pay all your bills and debts on time? Do you have investments from years ago that you have forgotten about? The new year can be a good time to reflect on how you are doing with your finances and to see whether your enthusiasm has gone cold. If it has, then maybe it is time to reignite your passion for handling your finances well. This is my story of how I regained my passion for personal finance after almost a decade.


2016 Year In Review

2016 will be over in less than an hour, and as we all welcome the year 2017, here are some of the things I have learned in the past year as well as adjustments that I need to make in the coming new year.


In the middle of the year my father had a serious illness which required hospitalization for several weeks. My mother did her best to take care of him physically while my siblings and I took care of the finances. In the end it was well worth it, as my father recovered and is doing well.

My daughter is also growing up very fast. Last year my body ached from carrying her multiple times a day, but now she prefers walking by herself and no longer wants to be carried as often. I knew it was all cliché back then, but they really do grow up so fast.


Should you get VUL Insurance?

Variable Universal Life (VUL) insurance is one of the most popular types of life insurance sold today. One of its main selling points against traditional life insurance is that it has an investment component, meaning that in addition to the guaranteed death benefit you receive, it also accumulates a cash value that is invested as part of your premiums.

This option makes sense for people who do not have the time or the interest to invest their own money. They just want to let others handle their money and at least earn better returns than letting their cash sit at home or in the bank. However, as conscious investors we need to be aware that VUL insurance may at times not be the best option in terms of value for money. This article explores the different aspects of VUL insurance and how we can analyze whether a particular product suits us best.


Running Long Tasks and Scripts

As your web application grows, there will be times when you need to run scripts or code snippets that could take quite a while to finish. Examples of these are generating large and complex reports, or updating your database with new values. When there are large numbers of records in your application, these scripts may take hours or even days to finish.

We usually access our application using the SSH protocol to log in and perform tasks on the server remotely. This article describes ways to run long scripts or tasks in your application, in order from least effective to most effective.

Using an open SSH session

This is the simplest and most direct way to run your script. From the SSH session, just invoke the command directly in the terminal:

bundle exec rake my_long_task:execute
bundle exec rails runner "MyClass.new(arg1, arg2).process"

While the easiest, this is also the most brittle of the methods, as it requires a constant and reliable SSH connection to the remote server. If the SSH client does not receive a response from the server within a set amount of time, it will terminate the connection and kill your script prematurely. One workaround is to make your script output messages (via print or puts) so that the SSH session does not get terminated. Alternatively, you can periodically send input to the terminal (by pressing any key or the Enter key) for the same purpose.

It is possible to keep the SSH connection alive without timing out using the ServerAliveInterval option when you open the SSH connection:

ssh -o ServerAliveInterval=5 -o ServerAliveCountMax=1 $HOST
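If you always want these keepalive settings for a given server, they can also be made permanent in your ~/.ssh/config (assuming an OpenSSH client). The host alias and host name below are placeholders:

```
# ~/.ssh/config
Host myserver
    HostName example.com
    ServerAliveInterval 5
    ServerAliveCountMax 3
```

With this in place, a plain ssh myserver picks up the keepalive options automatically.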

While these methods keep the SSH connection alive so it does not terminate prematurely, they only solve half of the problem. If the computer you use to connect to the remote server suddenly loses its network connection, or loses power due to a drained battery or a power interruption, your long-running script will be terminated as well. Because of these scenarios, the following methods work better than running scripts directly.

Using a background job library

Background jobs are used by web applications to run code while not blocking the web request-response cycle. A common example for this is sending emails; for instance when you request to reset your password, the application does not need to actually send the password reset email before prompting the user to check his/her email inbox. Usually the code that sends the password reset email is sent to a background job so that the user prompt will be served to the user immediately.

In Ruby, some of the most popular background job libraries are Sidekiq, Resque, and DelayedJob.

Background jobs are also very helpful in managing error conditions. They usually have a mechanism for rescuing and handling exceptions, and they automatically retry a job if it fails for any reason. Using the same example above, if the mail server suddenly stops working, the background job processor will simply retry sending the email once the mail server comes back up. Thus the worst-case user experience is a delayed password reset email; compare this to the user seeing a 500 Server Error message or an exception trace when a background job is not used.

For running long scripts or tasks, background jobs can also be used to handle the processing. This is better than running the script directly, as it no longer requires the local computer to keep a continuous connection to the remote server. You just send the long script into the job queue, then you can log out or turn off your machine and the script will keep running in the background queue. With DelayedJob, for example, this is as simple as inserting delay into the call:

MyClass.new(arg1, arg2).delay.process

While this method will work well for scripts that take only a few minutes to a few hours to process, this is problematic for scripts that take tens of hours or even days to complete. The reason for this is that background job processors have their own timeout mechanism that will automatically terminate the job after a certain time has passed. This is implemented in the libraries to protect them from running looping tasks or hogging server resources. The server’s operating system could also prematurely terminate the background job processes if they exceed a certain threshold of resource usage.

Because of these caveats when using background jobs, the next and last method is the best option so far.

Using a background process

Instead of using a background job to process the long-running script, we can combine the two earlier approaches: run the script directly, but put it in the background as its own process.

The nohup command is used when you want a process to keep running after you log out of the shell. The trailing & runs the command in the background, and combining it with nohup ensures that the command is not terminated even after the shell session ends. The most basic structure is:

nohup script &

Where script is the command that you want to run. This can be further improved by placing the output of the script in a log file that you can view later to analyze what has happened. The command now looks like:

nohup script > script.out 2>&1 &

Where script.out is the log file, and the 2>&1 redirection sends stderr to the same place as stdout so errors also end up in the log file.
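To see the effect of 2>&1 concretely, here is a small self-contained demonstration (out.log is just an illustrative filename):

```shell
# Write one line to stdout and one to stderr; with "> out.log 2>&1"
# both lines end up in the same file.
sh -c 'echo "normal output"; echo "error output" 1>&2' > out.log 2>&1
cat out.log
```

Without the 2>&1 part, the error line would go to the terminal instead of out.log.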

An example command that runs a rake task to start Resque workers in the background looks like:

nohup bundle exec rake resque:work QUEUE="*" --trace > rake.out 2>&1 &

If you have a Ruby script that you want to run:

nohup ruby ./myscript.rb > myscript.out 2>&1 &

If you are using Ruby on Rails and you want to run a specific class in your application, you can use the rails runner command to invoke the method, and then run it in the background:

nohup bundle exec rails runner "MyClass.new(arg1, arg2).process" > longscript.out 2>&1 &
 [1] 12345
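The [1] 12345 line is the shell reporting the job number and process ID. A minimal sketch of checking on and stopping a backgrounded script, using sleep as a stand-in for the real command:

```shell
# Launch a stand-in long-running command in the background
nohup sleep 300 > myscript.out 2>&1 &
pid=$!                    # $! holds the PID of the last background process

ps -p "$pid"              # confirm the process is still running
tail -n 20 myscript.out   # inspect the log output so far

kill "$pid"               # stop the script when needed
```

The same pattern works for any of the nohup commands above: note the PID when you launch the script, then use it later to monitor or terminate the process.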
