
I am using Azure SQL Database with 10 GB of storage (Standard S2: 50 DTUs). I am running a process where, every 6 hours, I delete all rows in a table and then recreate and reload the table from our business system. What I am noticing is that although the data in our source isn't getting significantly larger, the database seems to be growing at a faster rate. I am wondering whether Azure SQL, when dropping and recreating a table, still keeps the deleted data somewhere and counts it against your total storage?

Thanks for any help on this.
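For reference, allocated vs. actually used space can be compared with a query like the following (a minimal sketch using the sys.dm_db_partition_stats DMV; pages are 8 KB each):

```sql
-- Allocated vs. used space in the current database (pages are 8 KB)
SELECT
    SUM(reserved_page_count) * 8 / 1024.0 AS reserved_mb,
    SUM(used_page_count)     * 8 / 1024.0 AS used_mb
FROM sys.dm_db_partition_stats;
```

A large gap between reserved_mb and used_mb would suggest space that is allocated but not being reclaimed.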


2 Answers


  1. Please defrag all indexes. As mentioned in this article, fragmentation can claim a lot of space.

    Run sp_helpfile to verify that the log file is not consuming space as well. If the log is big, run the following statement to recover space.

    DBCC SHRINKFILE (log, 0)
    

    Consider also shrinking the database.

    DBCC SHRINKDATABASE (N'db1')
    

    In this thread I provided some queries that may be useful for checking database size.
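    A sketch for finding fragmented indexes and rebuilding one (the table and index names below are placeholders; replace them with your own):

```sql
-- List indexes with more than 30% fragmentation in the current database
SELECT OBJECT_NAME(ips.object_id)      AS table_name,
       i.name                          AS index_name,
       ips.avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
  ON i.object_id = ips.object_id
 AND i.index_id  = ips.index_id
WHERE ips.avg_fragmentation_in_percent > 30;

-- Rebuild a heavily fragmented index (placeholder names)
ALTER INDEX IX_YourIndex ON dbo.YourTable REBUILD;
```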

  2. While Alberto Morillo's advice is good, I suspect something changed on 13/12/2022:

    [screenshot: database size chart showing growth beginning on 13/12/2022]

    There are two possibilities:

    1. The stored procedure that is supposed to delete the data is not deleting it, or is deleting less data than before.
    2. More data has been ingested since the 13th of December.
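    To see which table is responsible for the growth, per-table reserved space can be listed (a sketch using sys.dm_db_partition_stats):

```sql
-- Reserved space per table, largest first (pages are 8 KB)
SELECT t.name AS table_name,
       SUM(ps.reserved_page_count) * 8 / 1024.0 AS reserved_mb
FROM sys.dm_db_partition_stats AS ps
JOIN sys.tables AS t
  ON t.object_id = ps.object_id
GROUP BY t.name
ORDER BY reserved_mb DESC;
```

    Running this now and again in a few days would show which table, if any, keeps growing.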