SELECT
obo.id, obo.ready_notified, obo.view_notified, obo.order_name,
CONVERT_TZ(obo.submitted_at, 'GMT', 'America/New_York') AS submitted_at,
obo.order_counter, obo.is_pos, obo.occasion, obo.dine_in_table_number,
obo.cashier_id, obo.terminal_name, obo.terminal_id, obo.payment_type,
obo.total, obo.is_paid_by_split, obo.payment_type_raw, obo.is_submitted,
obo.is_cancelled, obo.is_split_cancelled, obo.has_refund, obo.has_adjustment,
obo.adjusted_total, obo.tip,
obo.refund_total, obo.refund_pending, obo.order_hash,
GROUP_CONCAT(oth.transaction_id) AS transaction_id
FROM
order_botorder obo
LEFT JOIN order_ordertransactionhistory AS oth ON obo.id = oth.bot_order_id
WHERE
obo.store_id = 407
AND obo.is_open = 0
AND (obo.is_submitted = 1 OR obo.is_closed = 1)
AND (
CAST(obo.id AS CHAR) LIKE '%tr_7AcXLYxUSK6K0WBVshqg1g%'
OR obo.phone_number LIKE '%tr_7AcXLYxUSK6K0WBVshqg1g%'
OR obo.order_counter LIKE '%tr_7AcXLYxUSK6K0WBVshqg1g%'
OR obo.dine_in_table_number LIKE '%tr_7AcXLYxUSK6K0WBVshqg1g%'
OR obo.delivery_address LIKE '%tr_7AcXLYxUSK6K0WBVshqg1g%'
OR obo.occasion_raw LIKE '%tr_7AcXLYxUSK6K0WBVshqg1g%'
OR obo.terminal_id LIKE '%tr_7AcXLYxUSK6K0WBVshqg1g%'
OR obo.cashier_id LIKE '%tr_7AcXLYxUSK6K0WBVshqg1g%'
OR obo.payment_type_raw LIKE '%tr_7AcXLYxUSK6K0WBVshqg1g%'
OR obo.total LIKE '%tr_7AcXLYxUSK6K0WBVshqg1g%'
OR oth.transaction_id LIKE '%tr_7AcXLYxUSK6K0WBVshqg1g%'
)
GROUP BY
obo.id
ORDER BY
obo.submitted_at DESC
LIMIT
10 OFFSET 0;
Here is the EXPLAIN output for the query. It is searching through almost 285384 rows, which should not happen:
id | select_type | table | partitions | type | possible_keys | key | key_len | ref | rows | filtered | Extra |
---|---|---|---|---|---|---|---|---|---|---|---|
1 | SIMPLE | obo | NULL | ref | PRIMARY, order_botorder_order_counter_store_id_d14a38a1_uniq, order_botor_id_43a8b8_idx, order_botorder_order_hash_6c7414b4, idx_order_botorder_occasion, idx_order_botorder_phone_number, idx_order_botorder_order_counter, idx_order_botorder_dine_in_table_number, order_botorder_server_id_id_a2196482_fk_auth_user_id, idx_submitted_at, order_botor_is_subm_7011be_idx, order_botor_has_sub_df4837_idx, order_botor_is_clos_dfa4b1_idx, order_botor_is_tab__25acbe_idx, order_botor_is_pos_7954a9_idx, order_botor_is_onli_8f9a5f_idx, order_botor_submitt_736eb5_idx, idx_orderbotorder_id, idx_is_cancelled, idx_occasion, idx_optimized_query, idx_obo_store_id, idx_obo_submitted_closed_open, idx_obo_id, idx_obo_phone_number, idx_obo_order_counter, idx_obo_dine_in_table_number, idx_obo_occasion_raw, idx_obo_terminal_id, idx_obo_cashier_id, idx_obo_payment_type_raw, idx_obo_total, idx_fulltext_order_details | idx_optimized_query | 6 | const, const | 285384 | 100.00 | Using index condition; Using temporary; Using filesort |
1 | SIMPLE | oth | NULL | ref | idx_bot_order_id, order_order_bot_ord_baf3b4_idx, idx_oth_bot_order_id | order_order_bot_ord_baf3b4_idx | 4 | order_api_migration.obo.id | 1 | 100.00 | Using where; Using index |
3 Answers
The explain says Using filesort, and it does do that on 285384 rows. This is needed because it has to be known which 10 rows are the first, because you have this:

ORDER BY obo.submitted_at DESC LIMIT 10 OFFSET 0;

The index (probably idx_submitted_at) is not used. Instead an index with the name idx_optimized_query is used, because MySQL thinks that is the smarter approach to find this data.
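As a quick experiment you can force MySQL onto idx_submitted_at and compare the two plans. A minimal diagnostic sketch (the select list is trimmed and the LIKE search conditions are left out only to keep it short; whether the forced plan actually wins depends on your data):

EXPLAIN
SELECT obo.id, obo.submitted_at
FROM order_botorder obo FORCE INDEX (idx_submitted_at)
LEFT JOIN order_ordertransactionhistory AS oth ON obo.id = oth.bot_order_id
WHERE obo.store_id = 407
  AND obo.is_open = 0
  AND (obo.is_submitted = 1 OR obo.is_closed = 1)
GROUP BY obo.id
ORDER BY obo.submitted_at DESC
LIMIT 10 OFFSET 0;
-- Reading through idx_submitted_at delivers rows already in ORDER BY order;
-- compare this EXPLAIN with the one above to see whether the filesort goes away.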
EDIT: Because this is all about selecting the correct info, you can try (not tested, because I do not have your data, and exact index definitions…):
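One reading of that suggestion, as a minimal sketch: bound the result set by date before the expensive LIKE filtering. The cutoff '2024-01-01' is a placeholder, and the trimmed select list and LIKE block stand in for the full ones from the question:

SELECT
    obo.id,  -- plus the rest of the original select list
    CONVERT_TZ(obo.submitted_at, 'GMT', 'America/New_York') AS submitted_at,
    GROUP_CONCAT(oth.transaction_id) AS transaction_id
FROM order_botorder obo
LEFT JOIN order_ordertransactionhistory AS oth ON obo.id = oth.bot_order_id
WHERE obo.store_id = 407
  AND obo.is_open = 0
  AND (obo.is_submitted = 1 OR obo.is_closed = 1)
  AND obo.submitted_at >= '2024-01-01'  -- placeholder cutoff so an index on submitted_at can prune rows
  AND (
        CAST(obo.id AS CHAR) LIKE '%tr_7AcXLYxUSK6K0WBVshqg1g%'
        OR oth.transaction_id LIKE '%tr_7AcXLYxUSK6K0WBVshqg1g%'
        -- the other LIKE conditions from the question stay as they are
      )
GROUP BY obo.id
ORDER BY obo.submitted_at DESC
LIMIT 10 OFFSET 0;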
In your SQL you do:

ORDER BY obo.submitted_at DESC LIMIT 10 OFFSET 0;

This means that you want to select the last 10 rows ordered by submitted_at. When you add this to your WHERE clause:

AND obo.submitted_at >= '<some cutoff date>'

it could be that MySQL decides to use an index on submitted_at, when that exists, and your final result should select fewer records than the current 285384, which means the filesort is quicker.

Try indexing obo.is_open, obo.store_id, obo.is_submitted and obo.is_closed. You can check this: https://www.w3schools.com/mysql/mysql_create_index.asp. Indexing should improve performance in join queries.
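Following the syntax from that link, the suggested indexes would look like this (names are illustrative, and the possible_keys list in the EXPLAIN suggests some equivalents already exist; the next answer adds a caveat about indexing low-cardinality boolean columns):

CREATE INDEX idx_is_open      ON order_botorder (is_open);
CREATE INDEX idx_store_id     ON order_botorder (store_id);
CREATE INDEX idx_is_submitted ON order_botorder (is_submitted);
CREATE INDEX idx_is_closed    ON order_botorder (is_closed);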
You're using LIKE '%something%', many times actually, which means that MySQL has to do a full table scan because it can't use indexes (yes, I will nuance this later in this post).

Imagine I hand you a phonebook for your country and ask you to give me all contacts matching 'smi%'. Then you browse to S -> M -> I and start reading all names (Smith, Smither, Smith-Jones, …). Now imagine I hand you the same phonebook and ask you to give me all names matching '%mit%'. Now you have no option but to go through all individual names in the phonebook, from the very first to the very last, to check whether each one matches '%mit%'. What you're doing is even worse: you're not only looking for names matching '%something%' but also addresses, phone numbers, everything, and the kitchen sink.

Now, to nuance this a bit: if the phonebooks were divided into books per city (in your case a store), that would make your job easier; you wouldn't have to go through cities you're not interested in. So you could start by adding an index on 'city' (in your case 'store').
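To make the phonebook concrete, here is a small sketch on a hypothetical table (not the question's schema) showing how EXPLAIN reports the two kinds of pattern:

CREATE TABLE phonebook (
    id   INT AUTO_INCREMENT PRIMARY KEY,
    name VARCHAR(100),
    INDEX idx_name (name)
);

-- 'smi%' has a fixed prefix, so MySQL can seek into idx_name and read
-- only the matching range (browse to S -> M -> I):
EXPLAIN SELECT * FROM phonebook WHERE name LIKE 'smi%';
-- expected: type = range, key = idx_name

-- '%mit%' starts with a wildcard, so the B-tree cannot narrow anything
-- and every row has to be checked:
EXPLAIN SELECT * FROM phonebook WHERE name LIKE '%mit%';
-- expected: type = ALL (a full table scan)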
I'm quite sure indexing on the is_<something> fields won't do very much (assuming they're all boolean 0 or 1 values) because of very low cardinality, but it may help. What you want to do is 'get rid of' as many rows as possible before actually scanning. Adding an index on obo.store_id (if not already present) will help because that, essentially, splits the phonebooks into cities (stores); then all you need to do is grab the cities (stores) you're interested in and scan those. As per Luuk's suggestion, you can also 'get rid of' a lot of rows by first telling MySQL you're not interested in rows from/before/after some date, which can then be indexed, or any other expression. Indexes on those fields will usually help, but don't go overboard adding indexes on every field; it will help when searching for stuff, but it will also make inserts and deletes slower, because indexes need to be updated on every mutation of your data.
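Putting the two ideas together (split by store first, then prune and sort by date), a single composite index is one possible sketch; the name and column order are assumptions, so verify against your own data with EXPLAIN:

CREATE INDEX idx_obo_store_open_submitted
    ON order_botorder (store_id, is_open, submitted_at);

-- The equality conditions (store_id = 407, is_open = 0) pick the "city",
-- and the trailing submitted_at column lets MySQL walk the remaining rows
-- in ORDER BY submitted_at DESC order, which can make the filesort unnecessary.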