I have a Laravel Spark project that uses Horizon to manage a job queue with Redis.
Locally (on my Homestead box, macOS), everything works as expected, but on our new DigitalOcean (Forge-provisioned) droplet, a memory-optimized VPS (256GB RAM, 32 vCPUs, 10TB transfer, 1x 800GB disk), I keep getting the error:
PDOException: Packets out of order. Expected 0 received 1. Packet size=23
Or some variation of that error, where the packet size info may be different.
After many hours/days of debugging and research, I have come across many posts on Stack Overflow and elsewhere that seem to indicate this can be fixed by doing a number of things, listed below:
- Set `PDO::ATTR_EMULATE_PREPARES` to true in my `database.php` config. This has absolutely no effect on the problem, and actually introduces another issue whereby integers are cast as strings.
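For reference, that flag goes into the `options` array of the `mysql` connection in `config/database.php`; a minimal sketch of the change I tried (only the relevant key shown):

```php
// config/database.php — 'mysql' connection (sketch)
'mysql' => [
    // ... existing driver/host/port settings ...
    'options' => [
        // Suggested fix from various posts; had no effect here, and caused
        // integer columns to be returned as strings.
        \PDO::ATTR_EMULATE_PREPARES => true,
    ],
],
```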
- Set `DB_HOST` to `127.0.0.1` instead of `localhost`, so that it uses TCP instead of a UNIX socket. Again, this has no effect.
- Set `DB_SOCKET` to the socket path listed by MySQL, found by logging into MySQL (MariaDB) and running `show variables like '%socket%';`, which lists the socket path as `/run/mysqld/mysqld.sock`. I also leave `DB_HOST` set to `localhost`. This has no effect either. One thing I did note was that the `pdo_mysql.default_socket` variable is set to `/var/run/mysqld/mysqld.sock`; I'm not sure if this is part of the problem.
- I have massively increased the MySQL configuration settings found in `/etc/mysql/mariadb.conf.d/50-server.cnf` to the following:

  key_buffer_size = 2048M
  max_allowed_packet = 2048M
  max_connections = 1000
  thread_concurrency = 100
  query_cache_size = 256M
I must admit that changing these settings was a last-resort, clutching-at-straws type scenario. However, this did alleviate the issue to some degree, but it did not fix it completely, as MySQL still fails 99% of the time, albeit at a later stage.
In terms of the queue, I have a total of 1,136 workers split between 6 supervisors/queues, all handled via Laravel Horizon, which is run as a daemon. I am also using the Laravel WebSockets PHP package for broadcasting, which is likewise run as a daemon.
My current environment configuration is as follows (sensitive info omitted).
APP_NAME="App Name"
APP_ENV=production
APP_DEBUG=false
APP_KEY=thekey
APP_URL=https://appurl.com
LOG_CHANNEL=single
DB_CONNECTION=mysql
DB_HOST=127.0.0.1
DB_PORT=3306
DB_DATABASE=database
DB_USERNAME=username
DB_PASSWORD=password
BROADCAST_DRIVER=pusher
CACHE_DRIVER=file
QUEUE_CONNECTION=redis
SESSION_DRIVER=file
SESSION_LIFETIME=120
REDIS_HOST=127.0.0.1
REDIS_PASSWORD=null
REDIS_PORT=6379
MAIL_MAILER=smtp
MAIL_HOST=smtp.gmail.com
MAIL_PORT=587
[email protected]
MAIL_PASSWORD=password
MAIL_ENCRYPTION=tls
[email protected]
MAIL_FROM_NAME="${APP_NAME}"
AWS_ACCESS_KEY_ID=
AWS_SECRET_ACCESS_KEY=
AWS_DEFAULT_REGION="us-east-1"
AWS_BUCKET=
PUSHER_APP_ID=appid
PUSHER_APP_KEY=appkey
PUSHER_APP_SECRET=appsecret
PUSHER_APP_CLUSTER=mt1
MIX_PUSHER_APP_KEY="${PUSHER_APP_KEY}"
MIX_PUSHER_APP_CLUSTER="${PUSHER_APP_CLUSTER}"
AUTHY_SECRET=
CASHIER_CURRENCY=usd
CASHIER_CURRENCY_LOCALE=en
CASHIER_MODEL=App\Models\User
STRIPE_KEY=stripekey
STRIPE_SECRET=stripesecret
# ECHO SERVER
LARAVEL_WEBSOCKETS_PORT=port
The server setup is as follows:
- Max File Upload Size: 1024
- Max Execution Time: 300
- PHP Version: 7.4
- MariaDB Version: 10.3.22
I have checked all logs (see below) at the time the MySQL server crashes/goes away, and there is nothing in the MySQL logs at all. No error whatsoever. I also don’t see anything in:
/var/log/nginx/error.log
/var/log/nginx/access.log
/var/log/php7.4-fpm.log
I’m currently still digging through and debugging, but right now, I’m stumped. This is the first time I’ve ever come across this error.
Could this be down to hitting the database (read/write) too fast?
A little information on how the queues work.
- I have an initial controller that dispatches a job to the queue.
- Once this job completes, it fires an event which then starts the process of running several other listeners/events in sequence, all of which depend on the previous jobs completing before new events are fired and new listeners/jobs take up the work.
- In total, there are 30 events that are broadcast.
- In total, there are 30 listeners.
- In total there are 5 jobs.
These all work sequentially based on the listener/job that was run and the event that it fires.
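Sketched in Laravel terms (all class, event, and job names here are illustrative, not the real ones from the app):

```php
// Illustrative only — the real class names are not in the question.
class StartController extends Controller
{
    public function store()
    {
        // Step 1: the initial job goes onto the Redis queue.
        FirstJob::dispatch();
    }
}

class FirstJob implements ShouldQueue
{
    public function handle()
    {
        // ... do the work ...

        // Step 2: on completion, fire an event; its listener dispatches
        // the next job, and so on through ~30 events/listeners and 5 jobs.
        event(new FirstStageCompleted());
    }
}
```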
I have also monitored `laravel.log` live, and when the crash occurs, nothing is logged at all. Although I do occasionally get `production.ERROR: Failed to connect to Pusher.`, that happens whether MySQL crashes or not, so I don't think it has any bearing on this problem.
I even noticed that the Laravel API rate limit was being hit, so I made sure to drastically increase that from 60 to 500. Still no joy.
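For context, that rate-limit change was along these lines (a sketch; this assumes the default `throttle` middleware in `app/Http/Kernel.php`, which is where the 60-per-minute API limit lives in Laravel versions of this era):

```php
// app/Http/Kernel.php (sketch; only the relevant group shown)
protected $middlewareGroups = [
    'api' => [
        'throttle:500,1', // raised from the default throttle:60,1
        \Illuminate\Routing\Middleware\SubstituteBindings::class,
    ],
];
```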
Lastly, it doesn’t seem to matter which Event, Job, or Listener is running as the error occurs on random ones. So, not sure it’s code-specific, although, it may well be.
Hopefully, I’ve provided enough background and detailed information to get some help with this, but if I’ve missed anything, please do let me know and I’ll add it to the question. Thanks.
5 Answers
We were getting a similar PHP warning about packets out of order.
What solved it for us is increasing max_connections in the MySQL my.cnf.
Your current max_connections is probably 1024. We increased ours to 4096 and the warning went away.
In MySQL you can see your current max_connections with this command:
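In standard MySQL/MariaDB syntax, that is:

```sql
SHOW VARIABLES LIKE 'max_connections';
```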
MySQL 8 – in `mysql.cnf`, disabling all of these settings looks like it helped me.
For me, what fixed it was increasing the max packet size. In `my.cnf`, I added a larger `max_allowed_packet` value, then ran `service mysql stop` and `service mysql start`, and it worked 🙂

I hit a similar issue that was reproducible; it was a programming error:
I was using an unbuffered database cursor and did not close the cursor before firing off other DB operations. The exact error thrown was `Packets out of order. Expected 1 received 2.`
The first thing to check is the `wait_timeout` of the MySQL server, in relation to the time that your application takes between queries. I'm able to recreate this error consistently by sleeping longer than `wait_timeout` seconds between SQL queries.

If your application performs a query, then does something else for a while that takes longer than that period, the MySQL server terminates the connection, but your PHP code may not be aware that the server has disconnected. If the PHP application then tries to issue another query using the closed connection, it will generate this error (in my tests, consistently with `Expected 0 received 1`).

You could fix this by:
- Increasing `wait_timeout`, either globally on the server, or on a per-session basis using the command `SET SESSION wait_timeout=<new_value>;`
- Reconnecting to the MySQL server if more than `wait_timeout` seconds have elapsed between queries.

This error could probably occur because of other problems as well.
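In a Laravel app like the one in the question, the reconnect approach can be sketched as follows (a sketch; `DB::reconnect()` and `DB::disconnect()` are Laravel's built-in ways to re-establish the connection, while the placement inside a long-running job and the table name are assumptions):

```php
use Illuminate\Support\Facades\DB;

// Inside a long-running queued job, after a step that may have taken
// longer than the server's wait_timeout:
DB::disconnect();
DB::reconnect();

// Subsequent queries run on a fresh connection instead of the one
// the server may have already closed.
$pending = DB::table('some_table')->count(); // table name is illustrative
```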
I would check that you are using a persistent connection and not connecting to the server over and over again. Sometimes the connection process, especially with many simultaneous workers, causes a lot of network overhead that could cause a problem such as this.
Also, sometimes, on a production, high-transaction-volume server, weird network stuff happens, and this may just occur occasionally; even, it seems, over the loopback interface in your case.
In any case, it is best to write your code so that it can gracefully handle errors and retry. Often, you could wrap your SQL query in a `try..catch` to catch this error when it happens and try again.
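A minimal sketch of that retry pattern in Laravel/PHP (the helper name, query, and attempt count are illustrative, not from the answer; Laravel's `QueryException` extends `PDOException`, so this catch covers both):

```php
use Illuminate\Support\Facades\DB;

function runWithRetry(callable $query, int $attempts = 3)
{
    for ($try = 1; $try <= $attempts; $try++) {
        try {
            return $query();
        } catch (\PDOException $e) {
            // "Packets out of order" / "server has gone away" usually means
            // the connection died; reconnect and try again.
            if ($try === $attempts) {
                throw $e; // out of retries
            }
            DB::reconnect();
        }
    }
}

// Usage (illustrative query):
$total = runWithRetry(fn () => DB::table('users')->count());
```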