Foreword

As a Django application developer, you often run into situations like this: you want to perform periodic, asynchronous background tasks. This comes in handy when you need to run background checks, send notifications or build a cache.

My first choice was to install django-celery-beat, and I was very satisfied with the result. I could dynamically configure my periodic tasks based on user configuration, because Celery reads and executes the schedule from the database.

On the other hand, my application now depends on Celery (which must run as a separate service) and a Redis server (which Celery uses as a message broker).

Containerizing the application has become awkward. I am not sure whether I should run the Celery service inside the same container or as a separate service in docker-compose. It also feels excessive to run a Redis instance just to execute periodic tasks, and I would like to keep the supervisord configuration small.

I want to maintain flexibility, but reduce the number of dependencies and configuration templates.

Solution

I removed the now unwanted Celery from my application (I did not use any of its other great features). I just want to load the periodic task configuration from django.conf.settings or from the database.

I use Alpine Linux as the base image for my application. The base operating system can already handle periodic tasks with the good old crond. Its configuration is described in the Alpine Linux docs.

I created a simple Django management command called setup in my application. Let's also assume that I have another Django management command, popularity, which should run every five minutes. In this example I read the configuration from a Django settings variable, CRON_JOBS, which might look like this:

CRON_JOBS = {
    'popularity': '*/5 * * * *'
}
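
The popularity command itself is just an ordinary Django management command; its body is whatever work needs to run periodically. A minimal sketch (the printed message is only a placeholder, not part of the original post):

# management/commands/popularity.py
from django.core.management import BaseCommand

class Command(BaseCommand):
    help = 'Periodic task run by crond every five minutes'

    def handle(self, *args, **options):
        # Place the actual periodic work here, e.g. recomputing
        # popularity scores or refreshing a cache with the ORM.
        self.stdout.write('popularity task executed')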

The variable maps a Django management command name to its schedule. The setup command uses the python-crontab library to generate the crond configuration from it. It should be run once before the first start and again every time the configuration changes (and because the cron job rules are created from a management command, you can also use the ORM to read the configuration from the database).

# management/commands/setup.py
from crontab import CronTab
from django.conf import settings
from django.core.management import BaseCommand

class Command(BaseCommand):
    help = 'Generate the crond configuration from settings.CRON_JOBS'

    def handle(self, *args, **options):
        # Load the root crontab and drop any previously generated jobs
        cron = CronTab(tabfile='/etc/crontabs/root', user=True)
        cron.remove_all()

        for command, schedule in settings.CRON_JOBS.items():
            # Each job runs the corresponding management command from the app directory
            job = cron.new(command='cd /usr/src/app && python3 manage.py {}'.format(command), comment=command)
            job.setall(schedule)
            job.enable()

        cron.write()
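
If the schedule should come from the database instead of settings, only the source of the (command, schedule) pairs changes. A sketch with a hypothetical CronJob model (the model and its fields are an assumption, not part of the original post):

# models.py: a hypothetical model holding the schedule
from django.db import models

class CronJob(models.Model):
    command = models.CharField(max_length=100)   # management command name
    schedule = models.CharField(max_length=100)  # cron expression, e.g. '*/5 * * * *'
    enabled = models.BooleanField(default=True)

# In setup.py the settings loop would then become:
#     for entry in CronJob.objects.filter(enabled=True):
#         job = cron.new(command='cd /usr/src/app && python3 manage.py {}'.format(entry.command),
#                        comment=entry.command)
#         job.setall(entry.schedule)
#         job.enable()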

Remember that if you load the configuration from the database, you must run the setup management command again after every change. You can use the call_command function for this purpose.

from django.core import management

def save_config():
    # do whatever you want, then regenerate the crontab
    management.call_command('setup')

Create a container

As I said before, my Django application runs in an Alpine Linux container and is started from entrypoint.sh, which does the following, in order (a sketch of the script follows the list):

  1. Apply the database migrations,
  2. Run the setup management command to create the initial CRON job configuration,
  3. Start supervisord (which manages the gunicorn and crond services).
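
A minimal sketch of such an entrypoint.sh, assuming the /usr/src/app layout and the supervisord configuration path used below:

#!/bin/sh
# conf/entrypoint.sh: a sketch of the three steps above
set -e

cd /usr/src/app

# 1. Apply database migrations
python3 manage.py migrate --noinput

# 2. Generate the initial crond configuration from CRON_JOBS
python3 manage.py setup

# 3. Start supervisord in the foreground; it manages gunicorn and crond
exec supervisord -c /etc/supervisord.conf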

supervisord configuration

supervisord manages the execution of the gunicorn application server and the crond service.

If the application is located in the /usr/src/app directory and gunicorn is installed at /root/.local/bin/gunicorn, supervisor.conf may look like this:

[supervisord]
nodaemon=true

[program:gunicorn]
directory=/usr/src/app
command=/root/.local/bin/gunicorn -b 0.0.0.0:8000 -w 4 my_app.wsgi --log-level=debug --log-file=/var/log/gunicorn.log
autostart=true
autorestart=true
priority=900

[program:cron]
directory=/usr/src/app
command=crond -f
autostart=true
autorestart=true
priority=500
stdout_logfile=/var/log/cron.std.log
stderr_logfile=/var/log/cron.err.log

Dockerfile

The minimal Dockerfile must at least:

  • Copy the application source code,
  • Install the dependencies,
  • Copy the configuration,
  • Execute the entry point.

FROM alpine:3.15

WORKDIR /usr/src/app

# Copy source
COPY . .

# Dependencies
# py3-pip provides the pip3 command on Alpine
RUN apk add --no-cache python3 py3-pip supervisor
RUN pip3 install --user gunicorn
RUN pip3 install --user -r requirements.txt

# Configuration
COPY conf/supervisor.conf /etc/supervisord.conf

# Execution
RUN chmod +x conf/entrypoint.sh
CMD ["conf/entrypoint.sh"]