Using: MySQL 5.6 on Debian 9, total DB size is around 450 GB.
Updated to 5.7 and ran mysql_upgrade, then noticed that around 150 GB of disk had been consumed. Two tables are really large, and they stayed in ‘copying to tmp table’ for a couple of hours. Noticed that innodb_file_per_table was on and had created large .ibd files that weren’t there previously.
Restored from a snapshot, disabled file_per_table, and ran mysql_upgrade again: 100 GB gone, which is almost a quarter of my total DB.
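For reference, here is roughly how the setting and each table’s tablespace can be checked (the second query assumes MySQL 5.7’s INNODB_SYS_TABLES view; SPACE = 0 means the table lives inside ibdata1):

    SHOW VARIABLES LIKE 'innodb_file_per_table';

    -- Tables with SPACE = 0 are stored in the system tablespace (ibdata1);
    -- any other value means the table has its own .ibd file
    SELECT name, space
    FROM information_schema.innodb_sys_tables
    ORDER BY space;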
In the first case, it pulled the data out of ibdata1 and put it into separate files, but ibdata1 never shrinks, so the space taken almost doubled.
What happens in the second case? Does the temp table get created within the ibdata1 file that never shrinks, so that even when the table is not used anymore the space is still gone?
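From what I understand, space freed inside ibdata1 stays reusable even though the file itself never shrinks. With file_per_table off, InnoDB reports the system tablespace’s free space as Data_free for every table stored in it, so a rough check is (illustrative query):

    -- Free space inside the system tablespace, in MB; InnoDB reports the
    -- same tablespace-level value for every table stored in ibdata1
    SELECT data_free / 1024 / 1024 AS free_mb
    FROM information_schema.tables
    WHERE engine = 'InnoDB'
    LIMIT 1;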
Another thing I noticed is that space consumption doesn’t start until the query has been in ‘copying to tmp table’ status for an hour or so.
1) Is there any way to avoid or minimize the space increase? Would running the upgrade with file_per_table on, then disabling it and running ALTER TABLE ... ENGINE=InnoDB, free up the space?
2) Is there any way to predict how much space will be occupied, at least per table?
3) How does max_tmp_table_size play into this?
2 Answers
It sounds like you painted yourself into a corner by not running innodb_file_per_table from the start, so now you have a huge, unshrinkable ibdata1 file.
1) There isn’t.
1.1) It might reduce overall space usage: rebuilding the tables with file_per_table on moves them outside ibdata1, and rebuilding them again with it off moves them back in, reusing some of the unused space inside ibdata1.
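A minimal sketch of that double rebuild, assuming a large table named big_table (hypothetical name):

    -- Rebuild into its own .ibd file, pulling the data out of ibdata1
    SET GLOBAL innodb_file_per_table = ON;
    ALTER TABLE big_table ENGINE=InnoDB;

    -- Rebuild back into the system tablespace, reusing free pages in ibdata1
    SET GLOBAL innodb_file_per_table = OFF;
    ALTER TABLE big_table ENGINE=InnoDB;

Keep in mind that each ALTER needs scratch space roughly the size of the table while it runs.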
2) Yes, roughly: a rebuilt table needs on the order of its current data plus index size, which information_schema can report per table.
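For example, a sketch using information_schema (your_database is a placeholder):

    -- Per-table data + index footprint, largest first
    SELECT table_name,
           ROUND((data_length + index_length) / 1024 / 1024) AS size_mb
    FROM information_schema.tables
    WHERE table_schema = 'your_database'
    ORDER BY size_mb DESC;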
3) It doesn’t. The tables you are seeing are probably tables being rebuilt for some reason (not sure why; I have to admit I haven’t seen that happen from mysql_upgrade before). max_tmp_table_size applies only to implicit temporary tables (when a query plan says ‘using temporary’) and explicit ones (CREATE TEMPORARY TABLE ...), not to table rebuilds.
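To illustrate the distinction, these are the two kinds of temporary tables that setting governs (orders is a hypothetical table, not from this thread):

    -- Implicit: EXPLAIN may show "Using temporary" in the Extra column
    EXPLAIN SELECT customer_id, COUNT(*) AS cnt
    FROM orders
    GROUP BY customer_id
    ORDER BY cnt DESC;

    -- Explicit: created directly by the client
    CREATE TEMPORARY TABLE tmp_totals AS
    SELECT customer_id, COUNT(*) AS cnt
    FROM orders
    GROUP BY customer_id;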
The only(?) way to switch to file_per_table without the disk bloat is to dump everything, stop the server, delete ibdata1 (and the ib_logfile* files), enable innodb_file_per_table, and reload the dump; ibdata1 is then recreated at its minimum size.