
Creating a JSON logger for Flask

By default Flask writes logs to the console in plain-text format. This can be limiting if you intend to store your logs in a text file and periodically ship them to a central monitoring service. Kibana, for example, only accepts JSON logs by default.

You might also want to enrich your logs with additional metadata, e.g. timestamps, module names, and log levels (WARNING, DEBUG, etc.). In this post we will use the Python logging library to change Flask’s log format and write the logs to a text file. At the end we will see how to periodically ship these logs to an external service using Flume.

In our app we would like to set up two types of loggers: one for writing logs to the console, and another for writing them to a file. CLI logs can be brief and contain only the essentials, while file logs can be more extensive, with full exception details.

Creating a file log handler

We can use the logging.handlers.TimedRotatingFileHandler class to create the file handler. Over time the log file can grow large, so we will instruct the handler to roll over to a new file every 12 hours.

For a full list of file log handlers, see here: https://docs.python.org/3/library/logging.handlers.html#filehandler

import logging
import logging.handlers

# LOG_DIR is defined elsewhere in the project and points at the log directory
json_handler = logging.handlers.TimedRotatingFileHandler(
    filename=f"{LOG_DIR}/project-abc.log",
    when="H",       # rotate on an hourly schedule...
    interval=12,    # ...every 12 hours
    backupCount=1,  # keep one rotated file, delete anything older
)

The when parameter sets the unit of the rotation interval ("H" for hours) and interval sets how many of those units pass between rollovers, so a new file gets created every 12 hours. Setting backupCount to 1 keeps a single rotated file; anything older is deleted. Rotated files receive a timestamp suffix, e.g. project-abc.log.2022-08-08_20.

In the next step we create a custom JSON formatter to tailor the log format to our needs.

import json
import logging
from datetime import datetime

class JsonFormatter(logging.Formatter):
    def formatException(self, exc_info):
        # Render the traceback with the default formatter, then wrap it
        # in a JSON object so exception logs stay machine-readable.
        result = super(JsonFormatter, self).formatException(exc_info)
        json_result = {
            "timestamp": f"{datetime.now()}",
            "level": "ERROR",
            "logger": "app",
            "message": f"{result}",
        }
        return json.dumps(json_result)

Next we instantiate this class and pass it our desired JSON format.

json_formatter = JsonFormatter(
    '{"timestamp":"%(asctime)s", "level":"%(levelname)s", "logger":"%(module)s", "message":"%(message)s"}'
)

All logs will be written in the format given above, including exception logs, since we override the formatException method in our class.

Overriding formatException is the reason a custom JsonFormatter is needed in the first place: by default exceptions are rendered as multi-line plain-text tracebacks, which is incompatible with our Kibana setup, where every log line is expected to be a JSON object.
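To see the override in action, here is a sketch of a route that logs an exception; the /fail endpoint and the deliberate division error are purely illustrative and not part of the project:

from flask import Flask

app = Flask(__name__)

@app.route("/fail")
def fail():
    try:
        1 / 0  # deliberately raise ZeroDivisionError
    except ZeroDivisionError:
        # logger.exception attaches exc_info, so formatException runs
        # and the full traceback lands in a single JSON object
        app.logger.exception("division failed")
    return "error logged", 500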

Next we register the handler with the root logger, which Flask’s own loggers propagate their records to. This is usually done in the project’s app.py file.

root = logging.getLogger()  # Flask's loggers propagate to the root logger
json_handler.setFormatter(json_formatter)
root.addHandler(json_handler)

Configuring the CLI logs

So far we have only dealt with the file logs. To customize the format of the CLI logs we use the logging.StreamHandler class. After instantiating it we can set our desired log format.

console_handler = logging.StreamHandler()
console_formatter = logging.Formatter(
    "[%(asctime)s] - %(name)s - %(levelname)s - %(message)s"
)

As we did for the file log handler, we register this handler in our app.py file.

console_handler.setFormatter(console_formatter)
root.addHandler(console_handler)
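
Earlier we said CLI logs should stay brief while file logs capture everything. One way to achieve this is to give each handler its own level; the thresholds below are an assumption, not part of the original setup:

root.setLevel(logging.DEBUG)            # let every record reach the handlers
console_handler.setLevel(logging.INFO)  # console: essentials only
json_handler.setLevel(logging.DEBUG)    # file: everything, including debug detail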

Result

Once everything is configured, your Flask project should produce JSON logs like the ones below.

{"timestamp":"2022-08-08 20:12:40,326", "level":"INFO", "logger":"_internal", "message":"127.0.0.1 - - [08/Aug/2022 20:12:40] "ESC[32mGET / HTTP/1.1ESC[0m" 302 -"}
{"timestamp":"2022-08-08 20:12:40,368", "level":"INFO", "logger":"_internal", "message":"127.0.0.1 - - [08/Aug/2022 20:12:40] "ESC[37mGET /swagger-ui HTTP/1.1ESC[0m" 200 -"}
{"timestamp":"2022-08-08 20:12:40,431", "level":"INFO", "logger":"_internal", "message":"127.0.0.1 - - [08/Aug/2022 20:12:40] "ESC[37mGET /swaggerui/droid-sans.css HTTP/1.1ESC[0m" 200 -"}
{"timestamp":"2022-08-08 20:12:40,433", "level":"INFO", "logger":"_internal", "message":"127.0.0.1 - - [08/Aug/2022 20:12:40] "ESC[37mGET /swaggerui/swagger-ui.css HTTP/1.1ESC[0m" 200 -"}
{"timestamp":"2022-08-08 20:12:40,433", "level":"INFO", "logger":"_internal", "message":"127.0.0.1 - - [08/Aug/2022 20:12:40] "ESC[37mGET /swaggerui/swagger-ui-bundle.js HTTP/1.1ESC[0m" 200 -"}
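
One caveat is visible in this output: the %-style template splices the message between hard-coded quotation marks, so any message that itself contains quotes (like werkzeug’s request lines above) is not strictly valid JSON. If your pipeline rejects such lines, one option is to serialize the whole record in format() instead; a minimal sketch, where StrictJsonFormatter is a name invented for illustration:

class StrictJsonFormatter(logging.Formatter):
    def format(self, record):
        # Build the payload as a dict and let json.dumps handle all escaping.
        payload = {
            "timestamp": self.formatTime(record),
            "level": record.levelname,
            "logger": record.module,
            "message": record.getMessage(),
        }
        if record.exc_info:
            # Replace the message with the formatted traceback, as before.
            payload["message"] = self.formatException(record.exc_info)
        return json.dumps(payload)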

Sending logs with Flume

Setting up Flume is outside the scope of this post, but below is the configuration needed to ship the log file we created above to your monitoring service.

# ##################### #
# Configure the sources #
# ##################### #
logger.sources.r0.filegroups.f = /var/log/.*
logger.sources.r0.headers.f.file = /var/log/project-abc/
logger.sources.r0.headers.f.filename = project-abc.log

# ################### #
# Configure the sinks #
# ################### #
logger.sinks.k0.hostname = service-xyz.ch
logger.sinks.k0.port = 10001
logger.sinks.k0.batch-size = 200

The source is where the logs are created and the sink is their destination. In our setup, Flume is installed alongside the main project in OpenShift and periodically ships the logs to our monitoring service.
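
Note that the snippets above only show source and sink properties; a runnable Flume agent also needs component types declared and a channel connecting the two. For reference, a minimal complete agent definition might look like the sketch below; the component names and the TAILDIR/Avro choices are assumptions rather than our exact setup:

# declare the agent's components
logger.sources = r0
logger.sinks = k0
logger.channels = c0

# tail the rotating log file written by our handler
logger.sources.r0.type = TAILDIR
logger.sources.r0.filegroups = f
logger.sources.r0.filegroups.f = /var/log/project-abc/project-abc.log

# buffer events in memory between source and sink
logger.channels.c0.type = memory

# forward batches to the monitoring endpoint over Avro
logger.sinks.k0.type = avro
logger.sinks.k0.hostname = service-xyz.ch
logger.sinks.k0.port = 10001
logger.sinks.k0.batch-size = 200

# wire the source and sink to the channel
logger.sources.r0.channels = c0
logger.sinks.k0.channel = c0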