
I have a script that runs for a few minutes as a job on a cluster in the production environment. Between 0 and 100 such jobs (one script per job) can be running on the cluster at the same time. Usually there are no jobs running, or a burst of about 4–8 of them.

I want to prevent such jobs from running when I deploy a new version of the code into production.

How do I do that to optimize maintainability?

My initial idea was this:

  1. Use a semaphore file or a lock file that is created at the beginning of deployment and then removed once the code has been deployed. A deploy runs for 0.5–10 min, depending on the complexity of the current deploy tasks.
  2. This lock file is also deleted automatically by a separate cron job if it is older than, say, 30 minutes, in case the deploy fails to remove it. For example, if the deploy is rudely killed, this file should not hang around forever blocking the jobs.
  3. The production code checks for this lock file and waits until it is gone, so the jobs wait no more than 30 min (a sketch of this job-side check follows below).
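For illustration, a minimal sketch of the job-side check in step 3; the lock path, polling interval, and helper name are my assumptions, not existing code:

    # Hedged sketch: jobs poll a shared lock file before starting their work.
    DEPLOY_LOCK_FILE = "/var/run/myapp/deploy.lock"  # hypothetical path
    MAX_WAIT_SECONDS = 30 * 60                       # matches the 30 min cron cleanup

    def wait_for_deploy_to_finish
      waited = 0
      while File.exist?(DEPLOY_LOCK_FILE) && waited < MAX_WAIT_SECONDS
        sleep 10        # poll every 10 seconds
        waited += 10
      end
    end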

I am concerned about possible race conditions, and am considering a database-based solution instead. In the case of my application, that would be PostgreSQL. Such a database-based solution may be more complex to implement and maintain, but may be less prone to race conditions.

Perhaps there is a standard mechanism to achieve this in Capistrano, which is used to deploy this code?
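For reference, Capistrano 3 does support before/after hooks around the deploy flow, so the lock file could be created and removed by the deploy itself. A hedged sketch (the task names and lock path are assumptions, not an existing recipe):

    # config/deploy.rb
    namespace :deploy do
      desc "Create the lock file that pauses background jobs"
      task :block_jobs do
        on roles(:app) do
          execute :touch, "/var/run/myapp/deploy.lock"   # hypothetical path
        end
      end

      desc "Remove the lock file so jobs may run again"
      task :unblock_jobs do
        on roles(:app) do
          execute :rm, "-f", "/var/run/myapp/deploy.lock"
        end
      end
    end

    before "deploy:starting", "deploy:block_jobs"
    after  "deploy:finished", "deploy:unblock_jobs"
    after  "deploy:failed",   "deploy:unblock_jobs"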

Notes:

  • When you answer the question, please compare the maintainability of your suggested solution with that of the simple solution I propose above (using lock files).

  • I am not sure if I need to take the race conditions into account. That is, is this system (with lock files) really race condition-prone? Or is it an unlikely possibility?

FAQs:

Is there a particular reason these jobs shouldn’t run during deployment?

I have had cases where multiple jobs ran mid-deployment and failed because of it. Finding and rerunning such failed jobs is time-consuming. Delaying them during deployment carries only a small and rare performance hit, and is by far the most acceptable solution. For our system, maintainability is priority number one.

2 Answers


  1. As I described in the comments, this can be implemented as a feature flag. A very popular solution for Rails is the flipper gem.

    Pseudocode for your job could look like this (I don't know your job's actual code):

    class ProcessingJob < ApplicationJob
      queue_as :default
    
      def perform
        return unless Flipper.enabled?(:jobs_processing)
    
        # ... the job's actual work goes here
      end
    end
    

    Flipper has an admin UI for enabling/disabling feature flags. So, for example, you can create a jobs_processing feature, enable it, and then turn it off at some point before a deploy.

    While the feature flag is disabled during the deploy, you can be sure that no jobs are performed. After the deploy you can enable it again.
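    For example, the flag could be toggled from the deploy process via small rake tasks wrapping Flipper's enable/disable calls. A minimal sketch (the task names are assumptions):

    # lib/tasks/jobs_processing.rake
    namespace :jobs_processing do
      desc "Allow background jobs to run"
      task enable: :environment do
        Flipper.enable(:jobs_processing)
      end

      desc "Pause background jobs, e.g. before a deploy"
      task disable: :environment do
        Flipper.disable(:jobs_processing)
      end
    end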

    If Flipper feels like an overly complicated solution for this feature, you can do something simpler without a gem: just create a table in the database and update your admin page to enable/disable features.

    class CreateFeatureFlags < ActiveRecord::Migration[7.1]
      def change
        create_table :feature_flags do |t|
          t.string :name, null: false, index: { unique: true }
          t.timestamps
        end
      end
    end
    
    class FeatureFlag < ApplicationRecord
      def self.enabled?(name)
        where(name: name).exists?
      end
    end
    
    FeatureFlag.enabled?('jobs_processing')
    
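    With this minimal table, "enabled" simply means a row with that name exists, so toggling would presumably look like the following (a sketch under that assumption):

    # Enable: create the row (idempotent thanks to the unique index)
    FeatureFlag.find_or_create_by!(name: 'jobs_processing')

    # Disable: remove the row
    FeatureFlag.find_by(name: 'jobs_processing')&.destroy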
  2. Working with advisory locks at the simplest level, using psql.

    Session 1

    select pg_advisory_lock(3752667);
    

    Contents of advisory_lock_test.sql file:

    select pg_advisory_lock(3752667);
    select "VendorID" from nyc_taxi_pl limit 10;
    

    Then session 2:

    psql -d test -U postgres -p 5452 -f advisory_lock_test.sql
    Null display is "NULL".
    

    Then in session 1:

    select pg_advisory_unlock(3752667);
    

    Back to session 2:

    Null display is "NULL".
     pg_advisory_lock 
    ------------------
     
    (1 row)
    
     VendorID 
    ----------
            1
            2
            2
            2
            2
            2
            1
            1
            2
            2
    (10 rows)
    

    Note:

    This example uses session-level locks. Transaction-level locks are also available using pg_advisory_xact_lock.

    Basically, you create a lock in a session with pg_advisory_lock(3752667), where the key can be one 64-bit integer or two 32-bit integers. The key could come from a value you fetch from a table, so that a number is scoped to a particular action, e.g. select pg_advisory_lock((select lock_number from a_lock where action = 'deploy'));. Then, in the second or any other session, you try to acquire a lock on the same number. If the number is in use (not unlocked, or the original session has not exited), the other sessions will wait until the original session releases the lock. At that point the rest of their commands will run.

    In your case, create a number, possibly stored in a table, that is associated with deploying. When you run a deployment, lock on that number before you apply the changes, then unlock at the end of the deployment. If the deployment fails and its session ends, the lock is released as well. The scripts would also need to start by attempting to lock on that same number; if it is in use, they will wait until it is released, then run the rest of their commands and unlock. How manageable this is depends on the number of scripts you are dealing with and on getting people to stick to the process.
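    Translated to the Rails jobs from the question, each job could take the same advisory lock through ActiveRecord before doing its work. A hedged sketch, reusing the arbitrary key 3752667 from the example above:

    class ProcessingJob < ApplicationJob
      queue_as :default

      DEPLOY_LOCK_KEY = 3752667  # same key the deploy process locks on

      def perform
        conn = ActiveRecord::Base.connection
        # Blocks until the deploy session releases the lock.
        conn.execute("SELECT pg_advisory_lock(#{DEPLOY_LOCK_KEY})")
        begin
          # ... the job's actual work ...
        ensure
          conn.execute("SELECT pg_advisory_unlock(#{DEPLOY_LOCK_KEY})")
        end
      end
    end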
