#!/bin/sh

SQS_QUEUE_URL="url"
AWS_REGION="eu-central-1"
LOCK_KEY="supervisor-lock"  # Unique identifier for the lock

# Attempt to acquire the lock by sending a message to the queue
lock_acquired=$(aws sqs send-message --queue-url "$SQS_QUEUE_URL" --region "$AWS_REGION" --message-body "$LOCK_KEY")
echo "Lock Acquired: $lock_acquired"

sleep 5

message=$(aws sqs receive-message --queue-url "$SQS_QUEUE_URL" --region "$AWS_REGION" --max-number-of-messages 1 --wait-time-seconds 0 --query "Messages[0].ReceiptHandle")

echo "$message"

if [ -n "$message" ]; then

    if [ "$ENVIRONMENT_NAME" = "development" ]; then
        echo "Starting Supervisor setup..."
        # handle the case where it fails here
        sudo mkdir -p /var/log/supervisor
        sudo mkdir -p /var/log/celery
        cp -r /var/app/current/supervisor /etc
        export $(cat /opt/elasticbeanstalk/deployment/env | xargs)
        source /var/app/venv/*/bin/activate
        #/var/app/venv/staging-LQM1lest/bin/supervisorctl -c /etc/supervisor/project-supervisord.conf shutdown
        #sleep 5
        /var/app/venv/staging-LQM1lest/bin/supervisord -c /etc/supervisor/project-supervisord.conf
        echo "Starting Supervisor..."

        # Release the lock by deleting the message from the queue
        aws sqs delete-message --queue-url "$SQS_QUEUE_URL" --receipt-handle "$message" --region "$AWS_REGION"

        echo "message deleted"
    elif [ "$ENVIRONMENT_NAME" = "production" ]; then
        # Supervisor is started only in the development environment; skip it in production
        echo "Skipping Supervisor setup in production environment."
    fi

else
    echo "Failed to acquire the lock. Another instance is already setting up Supervisor."
fi

I have an EB environment that always runs 2 instances in parallel. When I deploy, I want these commands to run on only one instance.
The idea is to have a message-based lock so that the 2 instances have a way to coordinate.
I also want this to be resilient to EC2 replacement when autoscaling kicks in.
So if the instance running Supervisor gets replaced, Supervisor should be started again, always and only on 1 instance at a time.

The result I get from this script at the moment is that the commands run on both instances and I end up with 2 messages left in the queue.

What I want is 0 messages in the queue and Supervisor running on 1 instance.

I ran those commands manually and they all work, the delete as well, but the script doesn't seem to actually delete the message, even though I do see the "message deleted" print.
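
For reference, here is a minimal sketch of the receive/delete part as I think it should look (the receipt variable name is just illustrative). My suspicion is that with the CLI's default json output, the receipt handle captured via --query comes back wrapped in double quotes, so the delete is handed an invalid handle, and my script prints "message deleted" regardless because it never checks the exit code:

receipt=$(aws sqs receive-message \
    --queue-url "$SQS_QUEUE_URL" \
    --region "$AWS_REGION" \
    --max-number-of-messages 1 \
    --query "Messages[0].ReceiptHandle" \
    --output text)   # --output text strips the surrounding JSON quotes

# With --output text an empty queue comes back as the literal string "None"
if [ -n "$receipt" ] && [ "$receipt" != "None" ]; then
    aws sqs delete-message \
        --queue-url "$SQS_QUEUE_URL" \
        --region "$AWS_REGION" \
        --receipt-handle "$receipt" && echo "message deleted"
fi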

Can anyone help, please?

2 Answers


  1. Chosen as BEST ANSWER

    I'm now running this:

    container_commands:
      00_deploy_hook_permissions:
        command: |
          sudo find .platform/ -type f -iname "*.sh" -exec chmod -R 755 {} \;
          sudo find /var/app/staging/.platform/ -type f -iname "*.sh" -exec chmod -R 755 {} \;
      01_run_supervisor_script:
        command: chmod +x script/celery.sh
        leader_only: true
    
    

    So I moved the script to script/celery.sh. I can see from the logs that the command was executed, but it looks like it didn't actually do anything.

    My modified script for testing:

    #!/bin/sh
    
    
    # Create the log directories Supervisor and Celery write to
    sudo mkdir -p /var/log/supervisor
    sudo mkdir -p /var/log/celery
    
    # Copy the supervisor config from the app bundle, load the EB environment variables, and activate the venv
    cp -r /var/app/current/supervisor /etc
    export $(cat /opt/elasticbeanstalk/deployment/env | xargs)
    source /var/app/venv/*/bin/activate
    
    # Start supervisord with the project config
    /var/app/venv/staging-LQM1lest/bin/supervisord -c /etc/supervisor/project-supervisord.conf
    
    
    Am I missing anything here?
    

  2. You have to run your script using container_commands with the leader_only flag set to true. That way it will execute on only one instance.
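
    A minimal sketch of what that could look like, reusing the script/celery.sh path from the first answer as an example (adjust the path to wherever the script lives in your app source):

    container_commands:
      01_run_supervisor_script:
        command: |
          chmod +x script/celery.sh
          bash script/celery.sh
        leader_only: true

    One caveat: as far as I'm aware, leader_only only takes effect during deployments, so if the leader instance is later replaced by autoscaling, the command won't run again on another instance until the next deployment.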
