I'm migrating a Python Flask app with Redis and Postgres from Heroku to Railway.app. I'm using Redis as an asynchronous job queue, via the RQ (Redis Queue) Python library.
Procfile, which works in dev, looks like this:
web: gunicorn app:app
worker: rq worker --with-scheduler
The end of the Deploy log looks as if the worker is loading:
[2022-10-07 22:33:46 +0000] [1] [INFO] Starting gunicorn 20.0.4
[2022-10-07 22:33:46 +0000] [1] [INFO] Listening at: http://0.0.0.0:6040/ (1)
[2022-10-07 22:33:46 +0000] [1] [INFO] Using worker: sync
[2022-10-07 22:33:46 +0000] [11] [INFO] Booting worker with pid: 11
However, none of my Redis-enqueued jobs are starting. It’s as if the worker process does not exist. Railway’s documentation says little except that Procfiles are supported.
Because there is no SSH, I cannot look at the live processes to see whether the worker is running. Other than in the deploy log, I don't see any evidence of a worker process. The Redis queue works successfully in the dev environment, and the staging/production environments are pointing at the correct Redis URLs.
How can I check whether the Procfile-started worker process on a Railway service is actually live? Has anyone else had trouble starting workers from the Procfile on Railway.app? What might I be missing?
2 Answers
You can use a Docker deployment.
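For example, a minimal Dockerfile sketch (assuming a requirements.txt, and a hypothetical start.sh that runs the same rq worker and gunicorn commands as the Procfile):
FROM python:3.10-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
# start.sh would run `rq worker --with-scheduler &` followed by `exec gunicorn app:app`
CMD ["bash", "start.sh"]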
According to the Railway docs, the Procfile will only load a single executable, as detailed here. If you have both a web and a worker process specified, only the first (web) process will load.
One unofficial way to get around this is to put both the web and worker commands into a bash script (called railway.sh in this example), like so:
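#!/bin/bash
# railway.sh - start the RQ worker in the background, then run gunicorn in the foreground
# (same commands as the original Procfile)
rq worker --with-scheduler &
gunicorn app:app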
Then, call the shell script from the Procfile:
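web: bash railway.sh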
Another, cleaner way to get around this is to delete your Procfile and create two services on Railway stemming from the same repo, then define the worker and web commands as the start command of each service.
For example, the start command for the worker service would be:
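rq worker --with-scheduler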
And for the web service:
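gunicorn app:app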