I’m having a hard time understanding the behavior of my queries.
Given the following tables:
CREATE TABLE messages (
    id BINARY(16) NOT NULL PRIMARY KEY,
    chat_id VARCHAR(128) NOT NULL,
    author_id VARCHAR(128) NOT NULL,
    created_at BIGINT SIGNED NOT NULL,
    content VARCHAR(500) CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci NOT NULL,
    UNIQUE INDEX chat_id_created_at (chat_id, created_at DESC)
);
CREATE TABLE chat_users (
    user_id VARCHAR(128) NOT NULL,
    chat_id VARCHAR(128) NOT NULL,
    active TINYINT(1) NOT NULL,
    created_at BIGINT SIGNED NOT NULL,
    PRIMARY KEY (user_id, chat_id),
    INDEX chat_users_chat_id (chat_id)
);
and a user_id, I want to get the timestamps of the most recent messages in all the chats the user belongs to. With the chat_id_created_at index on messages, the following query runs instantaneously:
explain format=json (
SELECT MAX(m.created_at) FROM messages m WHERE m.chat_id = 'xyz'
)
EXPLAIN OUTPUT:
{
  "query_block": {
    "select_id": 1,
    "message": "Select tables optimized away"
  }
}
but when I use the same query as a correlated subquery, it seems that all rows for each chat_id are scanned to compute the max. Some chats have hundreds of thousands of messages in them, so the query is becoming slow for those.
explain format=json (
    SELECT cu.chat_id,
           (SELECT MAX(m.created_at)
            FROM messages m
            WHERE m.chat_id = cu.chat_id),
           cu.active
    FROM chat_users cu
    WHERE cu.user_id = '???'
)
EXPLAIN OUTPUT:
{
  "query_block": {
    "select_id": 1,
    "cost_info": {
      "query_cost": "0.85"
    },
    "table": {
      "table_name": "cu",
      "access_type": "ref",
      "possible_keys": [
        "PRIMARY"
      ],
      "key": "PRIMARY",
      "used_key_parts": [
        "user_id"
      ],
      "key_length": "386",
      "ref": [
        "const"
      ],
      "rows_examined_per_scan": 6,
      "rows_produced_per_join": 6,
      "filtered": "100.00",
      "cost_info": {
        "read_cost": "0.25",
        "eval_cost": "0.60",
        "prefix_cost": "0.85",
        "data_read_per_join": "11K"
      },
      "used_columns": [
        "user_id",
        "chat_id",
        "active"
      ]
    },
    "select_list_subqueries": [
      {
        "dependent": true,
        "cacheable": false,
        "query_block": {
          "select_id": 2,
          "cost_info": {
            "query_cost": "4.64"
          },
          "table": {
            "table_name": "m",
            "access_type": "ref",
            "possible_keys": [
              "chat_id_created_at"
            ],
            "key": "chat_id_created_at",
            "used_key_parts": [
              "chat_id"
            ],
            "key_length": "386",
            "ref": [
              "cu.chat_id"
            ],
            "rows_examined_per_scan": 39,
            "rows_produced_per_join": 39,
            "filtered": "100.00",
            "using_index": true,
            "cost_info": {
              "read_cost": "0.73",
              "eval_cost": "3.91",
              "prefix_cost": "4.64",
              "data_read_per_join": "122K"
            },
            "used_columns": [
              "chat_id",
              "created_at"
            ]
          }
        }
      }
    ]
  }
}
I also tried ORDER BY m.created_at DESC LIMIT 1 instead of MAX(m.created_at), but the same thing happens. I feel like I'm missing something obvious here.
2 Answers
Join with a grouped subquery instead of using a correlated subquery.
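A sketch of what that could look like; the derived-table alias lm and the column alias last_created_at are illustrative, not from the original answer:

-- One pass over messages, grouped by chat, instead of one
-- correlated lookup per chat_users row.
SELECT cu.chat_id, lm.last_created_at, cu.active
FROM chat_users cu
LEFT JOIN (
    SELECT chat_id, MAX(created_at) AS last_created_at
    FROM messages
    GROUP BY chat_id
) lm ON lm.chat_id = cu.chat_id
WHERE cu.user_id = '???';

Because the derived table is no longer correlated with the outer row, it is evaluated once, and with GROUP BY chat_id the optimizer can often resolve it with a loose index scan over chat_id_created_at (shown as "Using index for group-by" in EXPLAIN), i.e. one index dive per chat rather than a read of every row. The LEFT JOIN keeps the NULL result for chats with no messages, matching the correlated version.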
While your question is about MAX() and subqueries, since you are also concerned about speed, may I suggest a different approach: a separate table that stores only the latest message information per chat, kept as a computed value.
This may seem old school, but when you work with huge data and start having problems, the simplest answers are often the best.
I suggest adding a new table for faster access and linking.
New table (add an index to both columns per your choice; a clustered index, i.e. the InnoDB primary key, is preferred):
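The answer's table definition isn't included above; a minimal sketch of what it could look like (the name chat_latest_message and its columns are assumptions, not from the original answer):

CREATE TABLE chat_latest_message (
    -- In InnoDB the primary key is the clustered index, so lookups
    -- by chat_id are a single point read.
    chat_id VARCHAR(128) NOT NULL PRIMARY KEY,
    last_created_at BIGINT SIGNED NOT NULL,
    INDEX last_created_at_idx (last_created_at)
);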
Now, whenever a user posts a new message, update this table as well, and you will have the latest info at all times.
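A sketch of that upkeep as a single upsert, plus the resulting read query, assuming the chat_latest_message table above (note that VALUES() inside ON DUPLICATE KEY UPDATE still works but is deprecated in MySQL 8.0.20+ in favor of row aliases):

-- Run alongside each INSERT INTO messages; GREATEST() guards
-- against out-of-order timestamps.
INSERT INTO chat_latest_message (chat_id, last_created_at)
VALUES ('xyz', 1700000000)
ON DUPLICATE KEY UPDATE
    last_created_at = GREATEST(last_created_at, VALUES(last_created_at));

-- Reading the latest timestamps is then a plain join, no aggregation.
SELECT cu.chat_id, clm.last_created_at, cu.active
FROM chat_users cu
JOIN chat_latest_message clm ON clm.chat_id = cu.chat_id
WHERE cu.user_id = '???';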