
I saw that Laravel 8, 9 and 10 handle the SIGTERM signal in the queue worker. But according to the comment below from the Worker class (Worker.php, line 185), Laravel handles it only for the current worker execution. If Supervisor (or whatever monitoring tool is used) is not configured to take that SIGTERM into account and NOT start the worker again, the signal has little effect, because Laravel stops the worker only after it finishes the current job. (Once Supervisor stops restarting the queue, php artisan queue:restart can also be issued so that all jobs are stopped gracefully.)
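For illustration, a supervisord program section along these lines (the program name and command path are assumptions about the setup) controls how the worker is stopped: stopsignal and stopwaitsecs give the worker time to finish its current job, but supervisord will still restart the process afterwards unless it is explicitly told to stop the program (e.g. via supervisorctl stop) rather than just forwarding the signal:

```ini
[program:laravel-worker]
; Assumed path; adjust to your project layout.
command=php /var/www/artisan queue:work --no-interaction
autorestart=true
; Forward SIGTERM on stop and give the worker up to 60s to finish the current job.
stopsignal=TERM
stopwaitsecs=60
```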

        // Finally, we will check to see if we have exceeded our memory limits or if
        // the queue should restart based on other indications. If so, we'll stop
        // this worker and let whatever is "monitoring" it restart the process.

How about the scheduler? How can I prevent (from within Laravel) the App\Console\Kernel::schedule method from starting the commands once a SIGTERM signal has been received?

Basically

php artisan schedule:run

should do nothing after SIGTERM signal.

When the Docker container is shut down, it receives this signal. I want to stop any new scheduled commands on that container from starting. This happens on a deploy or in a scale-down scenario. I don’t want the container to be killed in the middle of a process.

Acc. to https://stackoverflow.com/a/53733389/7309871 (if nothing has changed since 2018), running the commands in a queue would solve this problem, but that restricts the scheduler to jobs only.
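For reference, scheduling work as queued jobs instead of commands looks roughly like this (SendHeartbeat is a hypothetical job class); the dispatched jobs are then processed by queue:work and so benefit from the worker's graceful SIGTERM handling:

```php
<?php
// In App\Console\Kernel; SendHeartbeat is a hypothetical queued job class.
use Illuminate\Console\Scheduling\Schedule;

protected function schedule(Schedule $schedule): void
{
    // Instead of running a console command directly...
    // $schedule->command('app:send-heartbeat')->everyMinute();

    // ...dispatch a queued job, which a worker can finish gracefully on SIGTERM:
    $schedule->job(new \App\Jobs\SendHeartbeat)->everyMinute();
}
```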

Since Laravel's console is built on Symfony, https://symfony.com/blog/new-in-symfony-5-2-console-signals applies:

If you prefer to handle some signals for all application commands
(e.g. to log or profile commands), define an event listener or
subscriber and listen to the new ConsoleEvents::SIGNAL event.

But this can’t handle it anyway, because:

This could be a possible solution for the kernel, but its window of execution is very narrow: schedule:run executes only briefly each minute, so a SIGTERM sent at, say, second 30 would be missed. A better way would be to stop php artisan schedule:run from executing on the server side after the SIGTERM signal is received.

App\Console\Kernel.php

/**
 * @inheritdoc
 */
public function __construct(Application $app, Dispatcher $events)
{
    parent::__construct($app, $events);
    $this->app->singleton(SignalSingleton::class);

    if (!extension_loaded('pcntl')) {
        return;
    }

    pcntl_async_signals(true);
    pcntl_signal(SIGTERM, function (int $signo, mixed $siginfo): void {
        Log::info('SIGTERM received');
        resolve(SignalSingleton::class)->schedulerIsEnabled = false;
    });
}

/**
 * Define the application's command schedule.
 */
protected function schedule(Schedule $schedule): void
{
    if (!resolve(SignalSingleton::class)->schedulerIsEnabled) {
        return;
    }
...

The SignalSingleton class only needs to contain a single public bool $schedulerIsEnabled = true; property.
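Spelled out, the class described above would look like this (the namespace is an assumption):

```php
<?php
// Sketch of the singleton described above; the namespace is an assumption.
namespace App\Support;

class SignalSingleton
{
    // Set to false by the SIGTERM handler registered in the Kernel constructor.
    public bool $schedulerIsEnabled = true;
}
```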

2 Answers


  1. Chosen as BEST ANSWER

    After some days of research, it turns out this is not a PHP/Laravel job but rather a DevOps one. So when a container is to be shut down with a SIGTERM signal, the signal must be caught (not in PHP) and, immediately after that:

    1. Stop all new HTTP requests from the load balancer to the target container.
    2. Stop the supervisor (or any other tool used) from restarting the workers.
    3. Run php artisan queue:restart, which will gracefully stop all workers.
    4. Stop the scheduler by no longer calling php artisan schedule:run every 60 seconds.
    5. When all processes started by the load balancer, the scheduler and the queue workers are finished, kill the container.

    This will prevent any processes from being killed.
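    A rough sketch of steps 2–5 as a shell function (the supervisorctl group name and process patterns are assumptions about the setup; step 1 happens at the load balancer, outside the container):

```shell
#!/usr/bin/env bash
# Sketch of the shutdown sequence above; names are assumptions about the setup.

graceful_shutdown() {
    # 2. Stop supervisord from restarting the workers (assumed program group name).
    supervisorctl stop 'laravel-worker:*' || true
    # 3. Ask running workers to finish their current job and exit.
    php artisan queue:restart || true
    # 4. Stop the scheduler: remove the cron entry that invokes schedule:run.
    crontab -r 2>/dev/null || true
    # 5. Wait until no artisan processes remain, then let the container exit.
    while pgrep -f 'artisan (queue:work|schedule:run)' >/dev/null; do
        sleep 1
    done
}

# Run the sequence when the container receives SIGTERM.
trap graceful_shutdown SIGTERM
```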


  2. Most people run their queue workers under supervisord, which has some downsides: it doesn’t support delayed startups (unless you add a wrapper script with a sleep), and it can’t handle FATAL crashes, which means the queue worker will crash and supervisord won’t restart it (ever seen your queues fill up? 🙂).

    There’s a solution I just started using: running the script below as the CMD of a php-fpm Docker container.

    benefits:

    • no need to add supervisord to the php-fpm container (saves lots of resources)
    • queues will for sure be restarted on failure
    • can handle graceful shutdown during auto-scaling events by trapping signals
    • it’s simple and efficient
    #!/usr/bin/env bash
    
    function stopQueues() {
        echo "Stopping queues..."
        php artisan queue:restart
        exit 0
    }
    
    # handle graceful shutdown
    trap "" SIGPIPE
    trap stopQueues SIGTERM SIGINT SIGHUP
    
    echo "Starting PHP queue workers..."
    # SQS_QUEUE (bg)
    (while :; do
    php artisan queue:work --queue="$(grep -oP 'SQS_QUEUE=\K.*' .env)" --no-interaction
        sleep 1
    done &)
    
    # SQS_FALLBACK_QUEUE (bg)
    (while :; do
    php artisan queue:work --queue="$(grep -oP 'SQS_FALLBACK_QUEUE=\K.*' .env)" --no-interaction
        sleep 1
    done &)
    
    # SQS_NOTIFICATION_QUEUE (fg) blocking
    while :; do
        php artisan queue:work --queue="$(grep -oP 'SQS_NOTIFICATION_QUEUE=\K.*' .env)" --no-interaction
        sleep 1
    done
    