
I am currently trying to optimize an Azure SQL Managed Instance under the General Purpose Service Tier, equipped with Premium-series memory-optimized hardware, running on 16 vCores with 512 GB of storage.

My goal is to improve the instance’s overall IOPS and throughput. Referring to Microsoft’s Premium SSD size chart https://learn.microsoft.com/en-us/azure/virtual-machines/disks-types#premium-ssd-size, it seems that larger disk sizes correspond to higher provisioned performance limits.

My questions are:

  1. If I increase the storage allocated to my Azure SQL Managed Instance from 512 GB to 1024 GB, will I automatically get the performance enhancements listed for the next tier (P30) in the Premium SSD sizes chart, which lists 5,000 base provisioned IOPS and 200 MB/s throughput?

  2. Are these performance increments applied automatically with the increase in disk size, or is there additional configuration or provisioning required on my part?

  3. Importantly, will these performance benefits apply even if I do not fully utilize the disk space? (I am currently using about 265 of 512 GB.)

It is unclear to me whether simply opting for a larger storage size guarantees the faster SSD tier’s performance benefits.

I’ve already tried increasing the vCores to get more processing power. The main issue I run into is that when many users try to pull down data from this database at once, it seems to time out for some users and our CPU utilization reaches 100%. If we increase the throughput of the disk reads, will that lead to faster sync times, potentially keeping the CPU from maxing out or preventing timeouts on the Managed Instance?
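
In case it helps, below is a sketch of the kind of check I can run to see whether the reads themselves are slow before resizing anything; it only uses sys.dm_io_virtual_file_stats and sys.master_files, so nothing in it is specific to my instance:

    -- Per-file read volume and average read latency since the last restart.
    -- Consistently high avg_read_latency_ms would point at the storage tier
    -- rather than CPU as the bottleneck.
    SELECT DB_NAME(vfs.database_id)        AS database_name,
           mf.name                         AS logical_file_name,
           mf.size * 8 / 1024              AS allocated_mb,   -- size is in 8-KB pages
           vfs.num_of_reads,
           vfs.num_of_bytes_read / 1048576 AS mb_read,
           CASE WHEN vfs.num_of_reads = 0 THEN 0
                ELSE vfs.io_stall_read_ms / vfs.num_of_reads
           END                             AS avg_read_latency_ms
    FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
    JOIN sys.master_files AS mf
      ON mf.database_id = vfs.database_id
     AND mf.file_id = vfs.file_id
    ORDER BY avg_read_latency_ms DESC;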

Thank you for your guidance and expertise!

2 Answers


  1. It is always very hard to say whether an increase in compute resources will improve performance; it depends entirely on where the bottleneck in the workload is. The best way is simply to scale up and measure the performance. You can always scale down if it doesn’t bring the improvements you need.
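
    On a Managed Instance you can take those measurements from the instance itself. A minimal sketch using sys.server_resource_stats, which the platform samples roughly every five minutes and keeps for about 14 days; compare the figures before and after a scale-up:

        -- CPU and IO consumption over the last 24 hours.
        SELECT AVG(avg_cpu_percent)         AS avg_cpu_percent,
               MAX(avg_cpu_percent)         AS peak_cpu_percent,
               SUM(io_requests)             AS total_io_requests,
               SUM(io_bytes_read) / 1048576 AS total_mb_read
        FROM sys.server_resource_stats
        WHERE start_time >= DATEADD(hour, -24, SYSUTCDATETIME());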

    In general, SQL Server can benefit from more IOPS, but that goes hand in hand with memory: the more of the working set that fits in the buffer pool, the fewer physical reads the instance has to make.
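
    On the General Purpose tier specifically, IO limits are applied per database file and follow the allocated (not used) file size, so if file IO does turn out to be the bottleneck, one option is to pre-grow the data file past the 512 GB boundary. A sketch; [YourDb] and YourDb_data are placeholder names, and the instance’s reserved storage must be large enough to cover the new allocation:

        -- Growing a data file past 512 GB moves it into the next premium
        -- blob performance band (P30-class: 5,000 IOPS, 200 MB/s).
        ALTER DATABASE [YourDb]
        MODIFY FILE (NAME = N'YourDb_data', SIZE = 600GB);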

    CPU utilization of 100% is not, by itself, something to worry about with SQL Server. If the CPU drops back down after a period of time, say five minutes or so, it’s totally fine; that’s just how SQL Server works: it takes all the resources it can get when it needs them.


    Side note

    It also sounds like you could benefit from an API layer (or more API capacity) in front of the database, instead of users pulling data from it directly.


    Instead of a single instance, you could also consider an Azure SQL elastic pool. With an elastic pool you add your databases, set minimum and maximum resources for each, and when many users are pulling from a database it scales automatically within those limits.

  2. " main issue I run into is when many users are trying to pull down data at once from this database, it seems to timeout for some users, and our CPU utilization reaches 100%, if we increase the throughput of the disk reads, will that lead to faster sync times, potentially offsetting the CPU from maxing out"

    If your bottleneck is CPU, adding IOPS won’t help. Use Query Store to find which queries are using excessive CPU and work to optimize those.
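
    For example, a rough starting point, assuming Query Store is enabled on the database (the 24-hour window is arbitrary):

        -- Top 10 statements by total CPU over the last 24 hours.
        -- avg_cpu_time is in microseconds, so the sum is divided by
        -- 1000 to report milliseconds.
        SELECT TOP (10)
               qt.query_sql_text,
               SUM(rs.count_executions) AS executions,
               SUM(rs.avg_cpu_time * rs.count_executions) / 1000 AS total_cpu_ms
        FROM sys.query_store_query_text AS qt
        JOIN sys.query_store_query AS q
          ON q.query_text_id = qt.query_text_id
        JOIN sys.query_store_plan AS p
          ON p.query_id = q.query_id
        JOIN sys.query_store_runtime_stats AS rs
          ON rs.plan_id = p.plan_id
        JOIN sys.query_store_runtime_stats_interval AS i
          ON i.runtime_stats_interval_id = rs.runtime_stats_interval_id
        WHERE i.start_time >= DATEADD(hour, -24, SYSUTCDATETIME())
        GROUP BY qt.query_sql_text
        ORDER BY total_cpu_ms DESC;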
