For a SQL server DBA – How to start Azure Learning – Part 3 (SQL Azure DB HADR)

In continuation of my previous post, I will talk about HADR for SQL Azure databases. As we know, we can't take traditional backups of Azure DBs, so all the solutions revolving around those backups will also change. As an organization, we still want to be always available and resilient to outages and accidental issues.

If you are not aware of what HADR means, please check this post.

Currently, if we talk about HADR for on-premise SQL servers, the solutions we think about are:

1. AlwaysOn Availability Groups
2. SQL Server Failover Clustering
3. Mirroring with high safety and a witness
4. Replication
5. Log Shipping

All of the above depend in some way on SQL DB and T-log backups, so none of these configurations can be set up manually with SQL Azure databases. However, logically similar solutions are available.

Put another way, we are paying for Database as a Service, and Microsoft takes on the whole responsibility of keeping it up and running per their committed SLA. But we still need to manage HADR at some level, to cover our customized requirements above and beyond that SLA.

Business Continuity and Disaster Recovery (BCDR, instead of HADR) is the term more in use for SQL Azure databases. Let's discuss the various features available for SQL Azure DBs and how to manage them.

For SQL Azure DBs, there is an uptime SLA committed by Microsoft. Based on the latest documentation, the uptime SLA for:

1. Web and Business tier databases: 99.9%
2. Basic, Standard and Premium tier databases: 99.99%

Now you may be wondering how many hours/minutes of downtime per month 99.99% or 99.9% translates to. The calculation that goes into it is:

Monthly Uptime % = (Maximum Available Minutes - Downtime) / (Maximum Available Minutes) x 100

For more information, please check this link
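To put those percentages in perspective, we can rearrange the formula above to get the allowed downtime per month. A minimal sketch in Python (assuming a 30-day month for simplicity):

```python
# Allowed downtime per month implied by an uptime SLA.
# Rearranging: Downtime = Maximum Available Minutes * (1 - SLA/100)
def allowed_downtime_minutes(sla_percent, days_in_month=30):
    max_available_minutes = days_in_month * 24 * 60  # 43,200 for a 30-day month
    return max_available_minutes * (1 - sla_percent / 100)

print(round(allowed_downtime_minutes(99.9), 1))   # Web/Business: 43.2 minutes
print(round(allowed_downtime_minutes(99.99), 2))  # Basic/Standard/Premium: 4.32 minutes
```

So 99.9% allows roughly 43 minutes of downtime per month, while 99.99% allows only around 4.3 minutes.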

Moreover, by default two additional replicas are created for any SQL database hosted on Azure, which helps maintain the uptime SLA. If the primary replica fails, a secondary replica takes charge and all new connections are routed to the new primary replica by the gateway. Changes are replicated on a quorum basis, i.e. once changes are committed on the primary and one of the replicas, the commit is complete; the commit on the second replica follows.
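The quorum-commit behaviour described above can be sketched as a simple predicate. This is a toy illustration of the rule, not Azure's actual replication protocol:

```python
# Quorum-based commit across the primary and its two local replicas:
# a commit completes once the primary and at least one replica have
# hardened the change; the remaining replica catches up afterwards.
def commit_complete(primary_ack, replica_acks):
    # replica_acks: acknowledgements from the two secondary replicas
    return primary_ack and sum(replica_acks) >= 1

print(commit_complete(True, [True, False]))   # True  - quorum reached
print(commit_complete(True, [False, False]))  # False - no replica has it yet
```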

Even 99.99% may not be enough for a 24x7 application, where a single minute of database downtime can cause huge financial loss.

There are three major types of failures we need resilience against to support such applications:

1. Recovery from Accidental deletion of data (Point in time restore – Oops recovery)
2. Machine outage where our DB is hosted in Azure
3. Datacenter/regional outages

  • For the first, please refer to my blog post.
  • For the second, a secondary copy from the local replicas (as discussed above) is brought online.
  • For the third, we need to manage with the SQL Azure DB features below.

1. Geo restore – It’s discussed in the post
2. Geo Replication (Active and Standard)

When it comes to HADR, we often talk about RTO and RPO. However, ERT is a newly introduced term for Azure, and is a subset of RTO:

Estimated Recovery Time (ERT) – The estimated duration for the database to be fully functional after a restore/failover request. It may happen that the database has come up but application components are still being recovered; therefore ERT applies to the database and RTO to the application.

Recovery Time Objective (RTO) – The time interval that defines the maximum acceptable duration of unavailability before the application fully recovers from a disruptive event.

Recovery Point Objective (RPO) – The amount of the most recent data changes (time interval) the application could lose after recovery.
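To make the three terms concrete, here is a hypothetical incident timeline (all timestamps and durations are made up for the example):

```python
from datetime import datetime, timedelta

# Hypothetical incident timeline (illustrative values only).
failure_time   = datetime(2015, 6, 1, 10, 0, 0)
db_online_time = datetime(2015, 6, 1, 10, 0, 25)   # database restored -> ERT
app_ready_time = datetime(2015, 6, 1, 10, 5, 0)    # application recovered -> RTO
last_replicated_change = failure_time - timedelta(seconds=4)

ert = db_online_time - failure_time                 # applies to the database
rto = app_ready_time - failure_time                 # applies to the application
data_loss = failure_time - last_replicated_change   # compare against the RPO

print(ert.total_seconds())        # 25.0  seconds
print(rto.total_seconds())        # 300.0 seconds
print(data_loss.total_seconds())  # 4.0   seconds of changes lost
```

Note how ERT (25s) is a subset of RTO (300s): the database was back well before the application as a whole recovered.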

BCDR option              | Basic tier                       | Standard tier                    | Premium tier
Point In Time Restore    | Any restore point within 7 days  | Any restore point within 14 days | Any restore point within 35 days
Geo-Restore              | ERT* < 12h, RPO† < 1h            | ERT* < 12h, RPO† < 1h            | ERT* < 12h, RPO† < 1h
Standard Geo-Replication | Not included                     | ERT* < 30s, RPO† < 5s            | ERT* < 30s, RPO† < 5s
Active Geo-Replication   | Not included                     | Not included                     | ERT* < 30s, RPO† < 5s

Let's now discuss Standard and Active Geo-Replication specifically.

Standard Geo-Replication – This feature is only available for the Standard and Premium tiers. It creates a standby copy of the database in the DR paired region, but that copy is not readable. Once a failover is done, the secondary becomes the primary, and the application can then connect to it for read/write purposes. This is the best fit for applications that need an aggressive RPO and just a standby copy of the primary database.


To configure it, go to the database page and click on the Geo-Replication tab. If you click on Add Secondary, you will see something like this:

Just choose a server and your secondary database will be hosted there with the same tier, which in our case is Standard. As the secondary copy is just a standby, i.e. we can't read data from it, only 75% of the actual tier cost is charged.

For detailed information on Standard Geo-Replication, please check the post.

Active Geo-Replication – This replication is available only for Premium tier databases. With it, you can create up to four replicas in any datacenter, and they are readable. This can be used for load balancing the application's read workload.


For more information, please check this link

Here is the comparison of Standard and Active Geo-replication:

Scenario                      | Standard Geo-Replication | Active Geo-Replication
Regional disaster             | Yes                      | Yes
DR drill                      | Yes                      | Yes
Online application upgrade    | No                       | Yes
Online application relocation | No                       | Yes
Read load balancing           | No                       | Yes


I will write about Database Sharding in the next post! Stay tuned !

The views expressed on this website/blog are mine alone and do not reflect the views of my company. All postings on this blog are provided “AS IS” with no warranties, and confer no rights.

For a SQL server DBA – How to start Azure Learning – Part 2 (SQL Azure DB Backups)

In reference to my previous post, I am going to talk about how backups are managed in Azure. This is one of the activities every DBA does almost every day or week. Just to let you know, there is a change with backups for Azure databases: you can no longer take DB/log backups for a SQL Azure database (PaaS).

Before you read forward, be sure you know the difference between SQL on Azure VMs (IaaS) and SQL Azure databases (PaaS). Please check this link: –

Generally, if you want to move a SQL database from a test/development to a production environment, you take a full database backup and restore it. But this changes with SQL Azure databases. For SQL Azure DBs, we have these options:

1. Bacpac – It consists of schema and data. If you want to move your database from an on-premise SQL server to a SQL Azure database, you will create a Bacpac file of your database, move this Bacpac file to Azure storage and restore it as an Azure DB. Currently, there is no option of traditional backups for Azure DB.

2. Dacpac – It consists of schema only. This option is used if you want to create a database on Azure without data.

For information on Bacpac and Dacpac, please check: –

Sometimes, DB professionals use the database migration wizard to move databases from on-premise to SQL Azure.

3. Copy database – This option is used if you want to copy your existing Azure database to some other location within Azure itself. It's an asynchronous operation, and a connection to Azure is not required after it has started. For more information on Copy Database, please check –

As the above three topics are already discussed in detail on the web, I have referred to those links for deeper understanding.

Q: Now you know how backups in Azure work, but how will you manage if there is any accidental deletion of data? In other words, how will you do a point-in-time recovery?

A: You will be glad to know that SQL Azure automatically takes full, differential and t-log backups in the background. By default, one full backup is taken every week, a differential backup every day and a t-log backup every 5 minutes. The retention time is set depending on the tier of your database, e.g. if your database is deployed in the Premium tier, the retention time is 35 days. You can even restore a deleted database within the retention time.
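Given that default schedule (weekly full, daily differential, t-log every 5 minutes), a point-in-time restore conceptually assembles a chain of backups: the latest full, the latest differential after it, and then the log backups up to the requested time. The schedule values below come from the post; the chain-building helper is my own illustration, not Azure's internal mechanism:

```python
from datetime import datetime, timedelta

LOG_INTERVAL = timedelta(minutes=5)  # t-log backup every 5 minutes

def restore_chain(target, last_full, last_diff):
    """Pick the backups needed to reach `target`: the latest full,
    the latest diff (if newer than the full), then every 5-minute
    log backup after that, up to the target time."""
    chain = [("full", last_full)]
    if last_diff > last_full:
        chain.append(("diff", last_diff))
    t = max(last_full, last_diff) + LOG_INTERVAL
    while t <= target:
        chain.append(("log", t))
        t += LOG_INTERVAL
    return chain

chain = restore_chain(
    target=datetime(2015, 6, 1, 12, 17),
    last_full=datetime(2015, 5, 31, 0, 0),
    last_diff=datetime(2015, 6, 1, 12, 0),
)
print(len(chain))  # 5: full + diff + logs at 12:05, 12:10, 12:15
```

The 5-minute log interval also explains why any restore point lands within a few minutes of the requested time.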

Moreover, by default you get backup storage equal to double the max database size of your performance tier, e.g. for Premium the max size is 500 GB, which means you get 1000 GB of backup storage. If you want to increase the size of the backup storage, you will need to pay extra for it (contact Azure support).
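The "double the max database size" rule is a one-liner; the max-size values per tier below are the ones discussed in this series:

```python
# Included backup storage is double the max DB size of the performance tier.
TIER_MAX_DB_SIZE_GB = {"Basic": 2, "Standard": 250, "Premium": 500}

def included_backup_storage_gb(tier):
    return 2 * TIER_MAX_DB_SIZE_GB[tier]

print(included_backup_storage_gb("Premium"))   # 1000 GB
print(included_backup_storage_gb("Standard"))  # 500 GB
```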

For more information , please check –
Backup and Restore for Azure database –

Please note – For Web and Business edition databases, you still need to manage backups (Bacpac and database copy) yourself for point-in-time recovery.

Let's see how point-in-time recovery works for an Azure database:

Point in time recovery using Azure dashboard

Once you select the time and click on Confirm, you will see:

It creates a database, with the copy-only option, as of the time selected in the restore wizard

Q: How will HADR function for SQL Azure databases?
A: As there are no DB/t-log backups, features like AlwaysOn, mirroring and log shipping don't exist in the Azure DB world for now.

I will write my next post on HADR for SQL Azure databases and how it’s different from on-premise SQL server HADR.

The intent of this series is to pick some of the features which have changed and will change your way of working with databases.



For a SQL server DBA – How to start Azure Learning – Part 1 (DTU)

As we all know, it's a cloud era! There are lots of changes coming our way which will directly influence our day-to-day work. People are still contemplating how this change will impact them and where to invest their time for technical growth. I have been talking about this with many DBAs, and I feel they are somewhat confused about how to get into Azure. To help people decide which direction to move in, I thought of writing something on this in the form of a series of posts.

As database professionals, we have two major directions to focus on:

1. Platform as a Service (PaaS)
2. Infrastructure as a Service (IaaS)

For PaaS, we may learn migration, HADR or monitoring very easily, as it's all based on similar concepts; the main point to understand is when to move to PaaS. Similarly, we may learn how to deploy and set up SQL on VMs in Azure, but we should also understand when to go for IaaS, and when to choose PaaS over IaaS or vice versa.

In this Azure learning series, I will share some of my thoughts on how to choose between these two. But before getting into those details, let's discuss the changes which are really going to make us think differently. It will help build a base and an understanding of how to leverage the cloud.

For both PaaS and IaaS – I will talk about what are the things which have changed from our current on-premise infrastructure. In this post, we will start with PaaS.

Before you read forward, be sure you know the difference between SQL on Azure VMs (IaaS) and SQL Azure databases (PaaS). Please check this link: –

Resource Consumption (PaaS):

As we know, the concepts around resource utilization are changing. Earlier, we used to talk about CPU in GHz and memory in GB, but now it will all be DTUs (Database Throughput Units). With SQL Azure databases, we need to be cognizant of the various performance tiers available, e.g.:

Service Tier/Performance Level | DTU  | Max DB Size | Max Concurrent Requests | Max Concurrent Logins | Max Sessions | Benchmark Transaction Rate    | Predictability
Basic                          | 5    | 2 GB        | 30                      | 30                    | 300          | 16,600 transactions per hour  | Good
Standard/S0                    | 10   | 250 GB      | 60                      | 60                    | 600          | 521 transactions per minute   | Better
Standard/S1                    | 20   | 250 GB      | 90                      | 90                    | 900          | 934 transactions per minute   | Better
Standard/S2                    | 50   | 250 GB      | 120                     | 120                   | 1,200        | 2,570 transactions per minute | Better
Standard/S3                    | 100  | 250 GB      | 200                     | 200                   | 2,400        | 5,100 transactions per minute | Better
Premium/P1                     | 125  | 500 GB      | 200                     | 200                   | 2,400        | 105 transactions per second   | Best
Premium/P2                     | 250  | 500 GB      | 400                     | 400                   | 4,800        | 228 transactions per second   | Best
Premium/P3                     | 1000 | 500 GB      | 1,600                   | 1,600                 | 19,200       | 735 transactions per second   | Best
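Once you have benchmarked your workload, picking the cheapest tier that satisfies it becomes a simple lookup against limits like those in the table. The limits below are copied from the table above; the selection logic itself is my own illustration:

```python
# (name, dtu, max_concurrent_requests, max_sessions) per the table above,
# ordered cheapest-first.
TIERS = [
    ("Basic",       5,    30,   300),
    ("Standard/S0", 10,   60,   600),
    ("Standard/S1", 20,   90,   900),
    ("Standard/S2", 50,   120,  1200),
    ("Standard/S3", 100,  200,  2400),
    ("Premium/P1",  125,  200,  2400),
    ("Premium/P2",  250,  400,  4800),
    ("Premium/P3",  1000, 1600, 19200),
]

def pick_tier(needed_dtu, peak_requests, peak_sessions):
    """Return the first (cheapest) tier meeting all three requirements,
    or None if even P3 is too small (time to look at sharding)."""
    for name, dtu, reqs, sessions in TIERS:
        if dtu >= needed_dtu and reqs >= peak_requests and sessions >= peak_sessions:
            return name
    return None

print(pick_tier(needed_dtu=40, peak_requests=100, peak_sessions=800))  # Standard/S2
```

The same lookup also shows when no tier fits and sharding/elastic scale (discussed below) becomes necessary.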

It can be a very open-ended question to decide which performance tier to choose, or how many DTUs will suffice for an application's needs. The cost differs for each tier, so choosing the tier is critical to project budgets. You will always want to fit in the lower slab to save money, which means tuning the application as much as possible.

I have seen that customers have not been interested in investing time to benchmark their applications. But now, it's going to be really important.

To move into Azure, we should learn various patterns of the application:

1. How many transactions/second the application performs during peak loads
2. The maximum number of concurrent connections it creates
3. Performance during peak loads as well as lean periods – for switching between performance tiers
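The first pattern can be estimated directly from an application's transaction log. A minimal sketch, assuming we already have a list of commit timestamps in seconds (the log format and helper are my own illustration):

```python
from collections import Counter

def peak_tps(commit_times):
    """Peak transactions/second, given transaction commit timestamps
    (seconds since some epoch). Buckets commits into whole seconds
    and returns the busiest bucket."""
    per_second = Counter(int(t) for t in commit_times)
    return max(per_second.values())

# Six commits: three land in second 10, two in second 11, one in second 12.
print(peak_tps([10.1, 10.4, 10.9, 11.2, 11.3, 12.0]))  # 3
```

That peak number is what you compare against the benchmark transaction rates in the tier table.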

Currently, we are used to planning disks, tempdb, max server memory, trace flags and so on, but with Azure PaaS it's all taken care of for us automatically. We just need to understand how many DTUs we need and which databases to move to Azure. If you think about it, this is going to change our conversations and the way we plan new applications and migrations. Isn't it a big change? But surely a change for good.

We need to focus more on how to benchmark an application and understand which performance tier to choose. If none of these fit your workload requirements, there are approaches like sharding and Elastic DB which you can explore to scale even further.

Let me explain with an example. Today we don't think much about a data type and its length before we create a table, e.g. if a date has to be stored, we don't hesitate to use the datetime (8 bytes) data type without considering the number of bytes it will consume. Had we chosen the date (3 bytes) data type, we would have saved 5 bytes per row. The same holds true for other data types. When the number of rows grows large, the space saving is big, and if the database size is cut down, there may be a chance to move to a lower performance tier, which means saving the company money.
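A quick back-of-the-envelope calculation shows how those per-row bytes add up (in SQL Server, datetime is 8 bytes and date is 3 bytes, a 5-byte difference per row):

```python
# Storage saved by choosing date (3 bytes) over datetime (8 bytes)
# for a single column, across many rows.
DATETIME_BYTES = 8
DATE_BYTES = 3

def saving_mb(rows):
    return rows * (DATETIME_BYTES - DATE_BYTES) / (1024 * 1024)

# e.g. 100 million rows on one column:
print(round(saving_mb(100_000_000)))  # ~477 MB saved
```

Multiply that by several date columns across several large tables and the effect on database size (and hence on the tier you need) is real.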

Given the limitations we have in the cloud, there is huge scope for optimizing resource consumption so that our applications work efficiently and "economically". I will come up with a detailed post on benchmarking and DTUs, but for now this is just to build some basics.

For those wondering how these DTUs were calculated, please refer to:

Next, I will cover database backup options in Azure and how they change our day-to-day work. I will write posts on all the changes I have observed in SQL Azure databases (PaaS) and then move on to the changes in SQL on Azure VMs (IaaS).

