I have a table with close to a billion records, and I need to query it with a `HAVING` clause. It’s very slow (about 15 minutes on decent hardware). How can I speed it up?
```sql
SELECT ((mean - 3.0E-4)/(stddev/sqrt(N))) AS t, ttest.strategyid, mean, stddev, N,
       kurtosis, strategies.strategyId
FROM ttest, strategies
WHERE ttest.strategyid = strategies.id AND dataset = 3 AND patternclassid = "1"
  AND exitclassid = "1" AND N >= 300
HAVING t >= 1.8
```
I think the problem is that `t` cannot be indexed because it needs to be computed. I cannot add it as a column because the `3.0E-4` will vary per query.
Tables:

```sql
create table ttest (
  strategyid bigint,
  patternclassid integer not null,
  exitclassid integer not null,
  dataset integer not null,
  N integer,
  mean double,
  stddev double,
  skewness double,
  kurtosis double,
  primary key (strategyid, dataset)
);
create index ti3 on ttest (mean);
create index ti4 on ttest (dataset, patternclassid, exitclassid, N);

create table strategies (
  id bigint,
  strategyId varchar(500),
  primary key (id),
  unique key (strategyId)
);
```
The `EXPLAIN SELECT ...` output:

| id | select_type | table | partitions | type | possible_keys | key | key_len | ref | rows | filtered | Extra |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 1 | SIMPLE | ttest | NULL | range | PRIMARY,ti4 | ti4 | 17 | NULL | 1910344 | 100.00 | Using index condition; Using MRR |
| 1 | SIMPLE | strategies | NULL | eq_ref | PRIMARY | PRIMARY | 8 | Jellyfish_test.ttest.strategyid | 1 | 100.00 | Using where |
Answers
The query needs to be reformulated and an index needs to be added.
Plan A: keep the query as it is, and add a ‘composite’ and ‘covering’ index on `ttest`. Replace your `ti4` with this (to make it ‘covering’):
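A sketch of such an index; the exact column order is my assumption (the equality-tested columns first, then the range column `N`, then the remaining columns the `SELECT` needs, so the index alone can satisfy the query):

```sql
-- Drop the old ti4 and recreate it as a covering index:
-- every column the query filters on or reads is in the index,
-- so the table rows themselves never need to be touched.
ALTER TABLE ttest
    DROP INDEX ti4,
    ADD INDEX ti4 (dataset, patternclassid, exitclassid, N,
                   strategyid, mean, stddev, kurtosis);
```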
Plan B: with the same index, move the test on `t` from the `HAVING` into the `WHERE`.
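A sketch of that rewrite, assuming the original `SELECT` list is kept:

```sql
SELECT ((mean - 3.0E-4) / (stddev / SQRT(N))) AS t,
       ttest.strategyid, mean, stddev, N, kurtosis,
       strategies.strategyId
FROM ttest
JOIN strategies ON strategies.id = ttest.strategyid
WHERE dataset = 3
  AND patternclassid = 1
  AND exitclassid = 1
  AND N >= 300
  -- the same expression as in the SELECT, repeated as a filter
  AND ((mean - 3.0E-4) / (stddev / SQRT(N))) >= 1.8;
```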
Unfortunately, the expression for `t` then needs to be repeated in both the `SELECT` and the `WHERE`. Moving it from `HAVING` to `WHERE` avoids gathering unwanted rows only to end up throwing them away. Maybe the optimizer will do that automatically; please provide the `EXPLAIN SELECT ...` for the rewrite so we can see. Also, it is unclear whether one of the two formulations will run faster than the other.
To be honest, I’ve never seen `HAVING` being used like this; for 20+ years I’ve assumed it can only be used in `GROUP BY` situations! Anyway, IMHO you don’t need it here; as Rick James points out, you can put it all in the `WHERE`. Rewriting it a bit, I end up with:
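Something along these lines (a sketch: I’ve switched to explicit `JOIN` syntax and aliased `ttest` as `t`, matching the expressions further down):

```sql
SELECT ((t.mean - 3.0E-4) / (t.stddev / SQRT(t.N))) AS t,
       t.strategyid, t.mean, t.stddev, t.N, t.kurtosis,
       s.strategyId
FROM ttest t
JOIN strategies s ON s.id = t.strategyid
WHERE t.dataset = 3
  AND t.patternclassid = 1
  AND t.exitclassid = 1
  AND t.N >= 300
  AND ((t.mean - 3.0E-4) / (t.stddev / SQRT(t.N))) >= 1.8;
```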
For most of that we can indeed foresee a reasonable index. The problem remains with the last calculation:
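That is, this predicate from the sketch above:

```sql
AND ((t.mean - 3.0E-4) / (t.stddev / SQRT(t.N))) >= 1.8
```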
However, before we go there: how many rows are there if you ignore this ‘formula’? 100? 200? If so, indexing as foreseen in Rick James’ answer should be sufficient IMHO.
If it’s 1000s or many more, then the question becomes: how many of those are thrown out by the formula? 1%? 50%? 99%? If it’s on the low side then, again, indexing as proposed by Rick James will do. If however you only need to keep a few, you may want to optimize this further and index accordingly.
From your explanation I understand that the `3.0E-4` is variable, so we can’t include it in the index; we’ll need to extract the parts we can. If my algebra isn’t failing me, you can play with the formula like this:
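A sketch of the rearrangement (this assumes `stddev > 0` and `N > 0`, so multiplying both sides by `stddev / sqrt(N)` keeps the direction of the inequality):

```
    (mean - 3.0E-4) / (stddev / sqrt(N)) >= 1.8
<=> mean - 3.0E-4  >=  1.8 * (stddev / sqrt(N))
<=> -3.0E-4        >=  1.8 * (stddev / sqrt(N)) - mean
<=> 1.8 * (stddev / sqrt(N)) - mean  <=  -3.0E-4
```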
So the query becomes:
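Again only a sketch, with the per-query constant isolated on the right-hand side:

```sql
SELECT ((t.mean - 3.0E-4) / (t.stddev / SQRT(t.N))) AS t,
       t.strategyid, t.mean, t.stddev, t.N, t.kurtosis,
       s.strategyId
FROM ttest t
JOIN strategies s ON s.id = t.strategyid
WHERE t.dataset = 3
  AND t.patternclassid = 1
  AND t.exitclassid = 1
  AND t.N >= 300
  -- everything that varies per query now sits on the right
  AND (1.8 * (t.stddev / SQRT(t.N)) - t.mean) <= -3.0E-4;
```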
I’m not familiar with MySQL, but glancing at the documentation it should be possible to include ‘generated columns’ in an index. So, we’ll do exactly that with `(1.8 * (t.stddev / sqrt(t.N)) - t.mean)`. Your indexed fields thus become:
`dataset, patternclassid, exitclassid, N, (1.8 * (t.stddev / sqrt(t.N)) - t.mean)`
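A sketch of the DDL, assuming MySQL 5.7+ generated columns; the names `t_threshold` and `ti5` are placeholders of my own:

```sql
-- STORED so the value is materialized and indexable;
-- MySQL recomputes it automatically on INSERT/UPDATE.
ALTER TABLE ttest
    ADD COLUMN t_threshold DOUBLE
        GENERATED ALWAYS AS (1.8 * (stddev / SQRT(N)) - mean) STORED,
    ADD INDEX ti5 (dataset, patternclassid, exitclassid, N, t_threshold);
```

The rewritten query can then filter on `t_threshold <= -3.0E-4`.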
Note that the system will have to calculate this value for each and every insert (and possibly update) you do on the table. However, once it is there (and indexed), it should make the query quite a bit faster.