I have a MySQL database with a table ‘product’ (about 500 000 rows) that has a column ‘oem’ (varchar 255, utf8_unicode_ci). I have another table ‘oem’ (about 150 000 rows) which also has a column ‘oem’ (varchar 255, utf8_unicode_ci).
I need to find all products in table ‘product’ whose product.oem contains an oem from table ‘oem’.
Example:
product 1 oem: ABBA is selected because it contains ‘BB’, which exactly matches an oem in table oem.
And I don’t want to select products whose oem exactly matches an oem in table oem.
Example:
product 2 oem: BAAB is not selected because ‘BAAB’ itself exactly matches an oem in table oem.
So I wrote this simple SQL query using LIKE:
SELECT p.*
FROM product p
WHERE EXISTS (
    SELECT 1
    FROM oem
    WHERE p.oem LIKE CONCAT('%', oem.oem, '%')
      AND p.oem != oem.oem
) AND p.id > 0
ORDER BY p.id ASC
LIMIT 5000;
This works in my local environment, but it is so slow I had to reduce the limit to 5000. I have indexes on p.id (PRIMARY), p.oem and oem.oem. In production (same data) it doesn’t work at all: 503 Service Unavailable after a very long processing time.
I also tried a JOIN query:
SELECT p.*
FROM product p
JOIN oem
ON p.oem LIKE CONCAT('%', oem.oem, '%')
WHERE p.oem != oem.oem
AND p.id > 0
ORDER BY p.id ASC
LIMIT 5000;
It is 10 times worse.
So my question is: what is the best way to achieve what I am trying to do here? Thanks
2 Answers
It’s an unusual requirement to search for a substring of OEMs.
I would create a new table with the primary keys of the product and the OEM, then populate it by running your search for each product (I assume there will be multiple results for some products).
As it’s such a slow operation, you can populate this table slowly and in batches so it won’t break the transaction log, e.g. you run the search for product 1, then product 2, writing the results to the table separately each time. It will take ages (as will any query you come up with that does substring matches), but you’ll then have a lookup table to refer to repeatedly afterwards.
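A rough sketch of what that could look like, assuming both tables have an integer id primary key (the table name product_oem_match and the id ranges are just placeholders for illustration):

-- one row per (product, oem) substring match
CREATE TABLE product_oem_match (
  product_id INT UNSIGNED NOT NULL,
  oem_id     INT UNSIGNED NOT NULL,
  PRIMARY KEY (product_id, oem_id)
);

-- run this repeatedly, moving the id window forward each time
INSERT IGNORE INTO product_oem_match (product_id, oem_id)
SELECT p.id, o.id
FROM product p
JOIN oem o
  ON p.oem LIKE CONCAT('%', o.oem, '%')
 AND p.oem != o.oem
WHERE p.id BETWEEN 1 AND 1000;        -- next batch: 1001 to 2000, and so on

Once it is filled, selecting the matching products becomes a cheap indexed join:

SELECT DISTINCT p.*
FROM product p
JOIN product_oem_match m ON m.product_id = p.id;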
May I suggest this as a better way to "chunk" the processing:
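Something along these lines, walking product in fixed primary-key windows (the :last_id placeholder and the 5000-id window are illustrative; start :last_id at 0 and add the window size after every pass):

SELECT p.*
FROM product p
WHERE p.id > :last_id
  AND p.id <= :last_id + 5000          -- bounded slice of the table
  AND EXISTS (
        SELECT 1
        FROM oem
        WHERE p.oem LIKE CONCAT('%', oem.oem, '%')
          AND p.oem != oem.oem
      )
ORDER BY p.id ASC;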
This keeps the number of rows looked at per chunk to a minimum (possibly fewer than 5000) rather than occasionally running to the end of the table.
I would suggest 1000 instead of 5000, to further limit the impact on other processing without significantly slowing down the overall job.
After those changes, time both the EXISTS and JOIN approaches. Also, let’s avoid doing the EXISTS over and over…
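One way to do that (only a sketch; the flag column oem_partial_match is made up for illustration) is to record the verdict for each product once, inside the same chunked loop, so the expensive substring check never has to be repeated:

ALTER TABLE product ADD COLUMN oem_partial_match TINYINT NULL;   -- NULL = not checked yet

UPDATE product
SET oem_partial_match = EXISTS (
      SELECT 1
      FROM oem
      WHERE product.oem LIKE CONCAT('%', oem.oem, '%')
        AND product.oem != oem.oem
    )
WHERE product.id > :last_id
  AND product.id <= :last_id + 1000
  AND oem_partial_match IS NULL;

After the flag is filled in, the final query is a plain SELECT ... WHERE oem_partial_match = 1, which an index on that column can serve directly.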