Azure Stack: Hybrid, Compliant and Consistent by design.

Azure Stack is a hybrid cloud platform, the only one of its kind, that empowers organizations to deliver Azure services from their own datacenters. Azure Stack's approach is intended to give organizations the flexibility and capacity they need to plan their journey to the cloud at their own pace. Azure Stack is deployed in an organization's datacenter as an integrated system of 4 to 12 identical, configurable nodes (the exact range varies by OEM), built to run Azure Stack by Microsoft's network of OEM partners such as HPE, Dell EMC, Lenovo and Huawei. Azure Stack also comes in a single-node deployment flavor called the Azure Stack Development Kit. This deployment doesn't demonstrate Azure Stack's full capabilities, but it is a handy, less resource-intensive option for evaluating and learning about the Azure Stack experience.

The full version of Azure Stack can be deployed in multiple ways (covered later), but it tends to deliver the most value in a hybrid deployment scenario that combines the flexibility of a hyperscale public cloud like Azure with the low-latency performance and control of your own datacenter. Azure Stack can also be a perfect fit for sites with sporadic internet connectivity: process data locally on the Azure Stack integrated system and, once connectivity is restored, seamlessly run analytics or other PaaS offerings in the cloud on that processed data. Azure Stack integrated systems have completely locked-down infrastructure from a permissions and networking perspective and can be deployed disconnected from the public cloud. This kind of deployment is ideal for organizations with strict data regulation policies and where data sovereignty is of utmost concern. And in a connected deployment, organizations can also harness Microsoft's broad range of security offerings on one of the industry's most compliant public cloud platforms.

As mentioned earlier, Azure Stack integrated systems can be deployed connected to or disconnected from the public cloud. These deployment modes exist to clearly define the pricing models, identity stores and, in turn, the usage scenarios of the integrated systems. Say an organization deploys an Azure Stack system that can connect to the Azure public cloud as and when required. This gives the organization the option to choose between Azure Active Directory (AAD) and Active Directory Federation Services (ADFS) as its identity store. A connected deployment also offers a choice between a pay-as-you-go billing model and a capacity-based billing model. The pay-as-you-go model, as the name suggests, works like an Azure subscription: the organization is charged only for the resources it uses. In the capacity-based model, the organization purchases an Azure Stack capacity plan SKU whose price depends on the configuration of the integrated system it intends to deploy. Disconnected deployments of Azure Stack fit organizations that intend to use the integrated system as a private cloud solution. In these deployments ADFS is the only possible identity store, but this doesn't mean the organization forfeits the option to connect to Azure in the future.

As of 2018, the latest version of Azure Stack offers the core Azure infrastructure services: Virtual Machines, Virtual Machine Scale Sets, Azure Storage, Azure Networking and Key Vault. Current PaaS offerings include Azure App Service, Azure Container Service (including Docker Swarm, Mesosphere DC/OS and Kubernetes management templates), Azure Functions and the SQL Server resource provider. Beyond this out-of-the-box functionality, Azure Stack provides a wide range of services and IaaS/PaaS solution templates ready to deploy from Azure Marketplace, and it lets users integrate their existing DevOps tooling (Jenkins, PowerShell, Visual Studio, etc.) with Azure Stack.
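As an illustration of that tooling consistency, the same Azure PowerShell cmdlets can target an Azure Stack endpoint simply by registering it as an environment. The sketch below is a minimal example, not a complete walkthrough; the environment name and ARM endpoint URL are the defaults used by an ASDK-style deployment and should be replaced with the values for your own system, and the AzureRM module (current at the time of writing) is assumed.

# Register the Azure Stack user ARM endpoint as a PowerShell environment (URL is a placeholder for your deployment)
Add-AzureRmEnvironment -Name "AzureStackUser" -ArmEndpoint "https://management.local.azurestack.external"

# Sign in against that environment instead of public Azure
Login-AzureRmAccount -EnvironmentName "AzureStackUser"

# From here on, the familiar cmdlets work unchanged, e.g.:
Get-AzureRmResourceGroup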

To conclude, moving to the cloud is inevitable, and Azure Stack streamlines that journey for organizations currently running on-premises by giving them the flexibility of a hybrid operating model. Azure Stack keeps things simple and delivers results through a truly consistent user experience, which is the backbone of any successful hybrid cloud model. Whether it is the management and monitoring portals, the IaaS and PaaS offerings, or the PowerShell and DevOps tools, Azure Stack on-premises looks and works just like Azure, which is why it is rightly described as an extension of Azure.

Get started with Azure Stack

Embrace DevOps as a Database Administrator – Build container images with the latest code release

In the previous posts of this series, we discussed automating code releases and putting T-SQL code under source control. In this post, we will discuss how the latest code release can be deployed directly to a container and how the same image can then be used to build dev/test/prod environments.

The flow of this post is:

1. Spin up a SQL Server container on Linux

2. Restore a database in the container

3. Run SQLPackage.exe to automate the code release to the container

4. Build an image with all the changes

5. Spin up containers with the latest changes for dev/test/prod

Note: I assume you have a basic understanding of containers. If not, please check this post before reading further.

In the last post, we discussed using SQLPackage.exe to automate the code release.

image

The input parameters for this exe were as follows:

Script:

"C:\Program Files (x86)\Microsoft SQL Server\140\DAC\bin\sqlpackage.exe"

/action:Publish

/sourceFile:"C:\Users\harshch\source\repos\DBcouncil\DBcounil\bin\Debug\DBcouncil.dacpac"

/targetconnectionstring:"Data Source=localhost; Initial Catalog=WideWorldImporters; Integrated Security=true;"  << target connection string will point to the container

1) Let's look at the container environment we have here. The container was spun up using the following command, and we have a container instance running with a database:

docker run -e "ACCEPT_EULA=Y" -e "MSSQL_SA_PASSWORD=test123@" -e "MSSQL_PID=Developer" --cap-add SYS_PTRACE -p 1401:1433 microsoft/mssql-server-linux

image

2) Let’s copy the backup file to the container:

image
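For reference, here is a rough sketch of how the copy (and the restore in the next step) can be done from the host with docker cp and sqlcmd. The container ID, host path, backup file name and logical file names are assumptions based on the standard WideWorldImporters sample rather than the exact commands behind the screenshots; verify the logical names for your own backup with RESTORE FILELISTONLY.

# Copy the backup from the host into the running container (container ID and paths are placeholders)
docker cp C:\Backups\WideWorldImporters-Full.bak <container_id>:/var/opt/mssql/data/

# Restore the database by running sqlcmd inside the container
docker exec -it <container_id> /opt/mssql-tools/bin/sqlcmd -S localhost -U SA -P "test123@" -Q "RESTORE DATABASE WideWorldImporters FROM DISK = '/var/opt/mssql/data/WideWorldImporters-Full.bak' WITH MOVE 'WWI_Primary' TO '/var/opt/mssql/data/WideWorldImporters.mdf', MOVE 'WWI_UserData' TO '/var/opt/mssql/data/WideWorldImporters_UserData.ndf', MOVE 'WWI_Log' TO '/var/opt/mssql/data/WideWorldImporters.ldf', MOVE 'WWI_InMemory_Data_1' TO '/var/opt/mssql/data/WideWorldImporters_InMemory_Data_1'"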

3) After restoring the database to the container instance, let's run the schema comparison tool to identify the delta of changes:

image

4) The same set of changes is identified:

image

5) Let's use SQLPackage.exe to deploy the changes to the container:

"C:\Program Files (x86)\Microsoft SQL Server\140\DAC\bin\sqlpackage.exe"

/action:Publish

/sourceFile:"C:\Users\harshch\source\repos\DBcouncil\DBcounil\bin\Debug\DBcouncil.dacpac"

/targetconnectionstring:"Data Source=localhost,1404; Initial Catalog=WideWorldImporters; Integrated Security=false; User ID=sa; Password=test123@"   << target connection string pointing to the container >>

image

6) Let's connect to the SQL Server instance running in the container and see whether the changes have been deployed:

image

7) Now let's commit these changes to the container image so that the next container we spin up has all these changes built in. Again, this entire lifecycle can be automated through PowerShell or Windows scripts. Moreover, the script can be made part of the build process in TFS to automate the application release.

The syntax to commit the changes to an image is: docker commit <containerID> <newCustomImageName> (shown in step 2 in the screenshot below).

image

8) Notice that the image named sql_2017_release1 is larger than the other images because the database has been restored into it.
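As mentioned in step 7, this deploy-and-commit cycle can be scripted end to end. Below is a minimal PowerShell sketch that reuses the paths, password and names from this post; it is illustrative rather than the exact script behind the screenshots, and the ancestor filter assumes only one container was started from the base image.

# Deploy the latest DACPAC to the SQL Server instance inside the container
$sqlpackage = "C:\Program Files (x86)\Microsoft SQL Server\140\DAC\bin\sqlpackage.exe"
$dacpac = "C:\Users\harshch\source\repos\DBcouncil\DBcounil\bin\Debug\DBcouncil.dacpac"
& $sqlpackage /action:Publish /sourceFile:$dacpac /targetconnectionstring:"Data Source=localhost,1404; Initial Catalog=WideWorldImporters; Integrated Security=false; User ID=sa; Password=test123@"

# Commit the updated container as a new image that carries the release
$containerId = docker ps -q --filter "ancestor=microsoft/mssql-server-linux"
docker commit $containerId sql_2017_release1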

9) Now, let’s spin up a container from this image and see if we have the database already created:

c:\Program Files (x86)\Microsoft SQL Server\140\DAC\bin>docker run -e "ACCEPT_EULA=Y" -e "MSSQL_SA_PASSWORD=test123@" -e "MSSQL_PID=Developer" --cap-add SYS_PTRACE -p 1405:1433 -d sql_2017_release1

image

10) Let's connect to the new container's SQL Server instance and see whether the database has been created with all the changes:

image
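As a rough illustration of the last step in the flow at the top of this post, the same committed image could be used to stand up separate dev, test and prod instances simply by varying the published port and container name. The ports and names below are hypothetical examples, not the values used in the screenshots.

# Spin up one container per environment from the committed image (ports and names are examples)
docker run -e "ACCEPT_EULA=Y" -e "MSSQL_SA_PASSWORD=test123@" -e "MSSQL_PID=Developer" -p 1406:1433 --name sql_dev -d sql_2017_release1
docker run -e "ACCEPT_EULA=Y" -e "MSSQL_SA_PASSWORD=test123@" -e "MSSQL_PID=Developer" -p 1407:1433 --name sql_test -d sql_2017_release1
docker run -e "ACCEPT_EULA=Y" -e "MSSQL_SA_PASSWORD=test123@" -e "MSSQL_PID=Developer" -p 1408:1433 --name sql_prod -d sql_2017_release1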

We can see that all the deployed changes were successfully ported to the new container, and this new image could be used to build dev/test/prod environments, as sketched above. In the next post, we will discuss how to orchestrate containers using Kubernetes on Azure.

HTH!

Embrace DevOps as a Database Administrator – Automate the T-SQL code releases

Continuing the series of posts on DevOps for a DBA, let's learn how SQL code releases can be automated. In the previous post, we discussed importing the database schema into SQL Server Data Tools and putting T-SQL code under source control.

Historically, releases have been executed manually on the production server. Let's see how this can be optimized using SSDT. There are two ways to do that:

1. Leverage the SQL Server Schema Comparison tool -> Manual

2. Leverage SQLPackage.exe -> Automated

1. Leverage the SQL Server Schema Comparison tool – Let's see how this tool can be used:

1) Click on New Schema Comparison tool:

image

2) Select the source of the comparison; in this case, it's the project where the changes have been made:

image

3) The target is the database where the code needs to be deployed:

image

4) Let's click on Compare and see the results:

image

The output above clearly shows the actions to be taken: remove the old column Sales.Invoices.BillToCustomerID, add Sales.Invoices.OrderID1 and, subsequently, change the procedure where that table is used.

5) Moreover, you can also control the behavior of the deployment as follows:

image

So far, we have identified the delta of changes to be applied; now let's deploy the changes to the actual database. One way to apply the changes is to click Update and let it make the changes:

image

6) When you click Yes, the changes are applied to the target database:

image

7) Let's open the database in SSMS and see whether the change is reflected there:

image

DevOps is all about automation, and the above method involves a lot of manual intervention. Let's discuss SQLPackage.exe and how it can help automate the entire code release process.

1) Let’s locate SQLPackage.exe:

image

2) Let’s check the parameters of this exe:

image

3) In our case, we will use the following command to publish all the changes automatically:

"C:\Program Files (x86)\Microsoft SQL Server\140\DAC\bin\sqlpackage.exe"
/action:Publish
/sourceFile: <location of the DACPAC file, which contains all the changes made to the DB along with the full database script>
/targetconnectionstring: <target instance where the changes need to be applied, or where a fresh DB with all the changes should be created>

The DACPAC file can be found under:

image

4) In our case, let's apply the changes to the target database through a script, which can be automated using Windows Task Scheduler or the TFS build process:

Script:

"C:\Program Files (x86)\Microsoft SQL Server\140\DAC\bin\sqlpackage.exe"

/action:Publish

/sourceFile:"C:\Users\harshch\source\repos\DBcouncil\DBcounil\bin\Debug\DBcouncil.dacpac"

/targetconnectionstring:"Data Source=localhost; Initial Catalog=WideWorldImporters; Integrated Security=true;"

image

5) Let's check the changes in SSMS:

image

This script can be put into a TFS build or automated with Windows Task Scheduler to pull the changes periodically and apply them to the test/dev/prod servers. Moreover, to create master tables in the build process, script items can be added as follows:

image
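As an aside, the Windows Task Scheduler route mentioned above can be as simple as the following PowerShell sketch. The task name, schedule and argument string are illustrative assumptions, not part of the original setup.

# Wrap the sqlpackage publish in a scheduled task that runs nightly (name and time are examples)
$exe = "C:\Program Files (x86)\Microsoft SQL Server\140\DAC\bin\sqlpackage.exe"
$arguments = '/action:Publish /sourceFile:"C:\Users\harshch\source\repos\DBcouncil\DBcounil\bin\Debug\DBcouncil.dacpac" /targetconnectionstring:"Data Source=localhost; Initial Catalog=WideWorldImporters; Integrated Security=true;"'

$action = New-ScheduledTaskAction -Execute $exe -Argument $arguments
$trigger = New-ScheduledTaskTrigger -Daily -At "2:00 AM"
Register-ScheduledTask -TaskName "Publish-DBcouncil-Dacpac" -Action $action -Trigger $trigger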

In the next posts, we will discuss:

1. How to deploy these changes directly to a container

2. How to update the image with the latest changes

3. How to deploy the containers with the latest changes

4. How to manage containers with Azure Container Service

 

HTH!

Embrace DevOps as a Database Administrator

I have been writing posts about embracing new trends such as cloud, NoSQL and business analytics. Lately, I have been speaking about DevOps and containers at various events, and this is a great learning to share.

I think this is a new evolution in the software development industry, and everyone is embracing it with open arms. The agility and speed it brings to managing code and deployments is really impressive. Being a data guy, I was looking for a way to join this bandwagon, and I finally found something interesting on how a database professional can be part of it. DevOps is a big change overall, but I will focus on how a database person can contribute.

DBAs typically have to do lots of code releases on a daily basis, moving changes from dev to test, staging or pre-prod and then to the production server. They often struggle to keep track of all the scripts, and this can take up 20-30% of their working hours. In this series of posts, I will explain how this can be optimized, how practices like source control can help a DBA, and how adding containers makes the entire process simple and easy to manage. While writing this post, I have presumed that the reader has an understanding of the DevOps concept.

 

In this post, let's talk about a free tool, SQL Server Data Tools (SSDT), which can be downloaded from here. Let's see how we can work with this tool and how it can help with code release automation.

 

Open Visual Studio, create a new SQL Server Database Project and click OK:

image

 

Right-click the solution name and then click Import to add the database:

image

 

Click on Select Connection and then browse to the right instance and database:

image

 

When the DB is connected, click Import to import all the DB schemas:

image

 

Now, if you look at Solution Explorer, you will find all the schemas and objects related to the DB:

image

 

Let's add this solution to source control; in this case, I connect to a Visual Studio Team Services (VSTS) online account:

image

 

In this case, I will connect to the Database_Migration project. You first need an account here, or you can connect to a local TFS server instead (check with your development team on this):

image

 

After connecting to the TFS server, let's put the solution under source control by clicking "Add Solution to Source Control" under the Source Control option:

image

 

Enter the project name, in this case DBcouncil, and click OK:

 

image

 

When the project is created, go back to the Visual Studio project and check in the code:

image

 

Now let's see how the code looks on TFS. In this case we have used Visual Studio Team Services, i.e. the online version of TFS; let's open the link to access the Team Services online account. On-premises TFS, Git and other source control tools are also supported:

 

image

 

When you log in to this account, you will be routed to this dashboard:

image

 

Click on Database_Migration, where our project was created, and then click Code to open the DBCouncil project:

 

image

 

Here you can see all the schemas and their objects:

 

image

 

Now let's make some code changes and see how they can be tracked. In this case, we have renamed the Location column of the Cities table, and the tool also shows where this change will have an impact, e.g. any references in stored procedures or other objects. Here it shows a reference to this column in the stored procedure GetCityUpdates; if we click Apply, the references will also be updated.

 

image

 

Now, let's check in the changes and view them on the TFS online portal:

 

image

 

If you open the changeset for the DBCouncil project, you will see:

 

image

 

This saves you the hassle of tracking the history of changes manually. I have seen DBAs keep all their change scripts in emails or a shared folder. Instead, if the code is put under source control, you can track changes like this:

image

 

All these changes are still in the SSDT project; they are yet to be deployed to the database.

In the next blog posts, I will cover:

1. How to automate the code release process

2. Integration of the DBs with containers on Windows and Linux

3. How containers can help to setup the dev/test/prod environments within seconds

4. How to manage these containers with Azure Container Service (Kubernetes)

 

HTH!

SQL Server 2017 on Windows Containers – Part 3 (Linux Platform)

Continuing from my previous post about SQL Server on Windows containers, I am going to write about running SQL Server Linux containers on the Windows platform. We can run both Linux and Windows containers on Windows. Here are the steps to get SQL Server running in a Linux container.

Step 1 – Enable the Containers and Hyper-V features on Windows:

image
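If you prefer PowerShell over the GUI for this step, a minimal sketch is shown below; run it from an elevated prompt, and note that a restart is typically required afterwards.

# Enable the Containers and Hyper-V optional features on Windows 10 (requires elevation and a restart)
Enable-WindowsOptionalFeature -Online -FeatureName Containers -All
Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V -All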

Step 2 – Download Docker for Windows from https://docs.docker.com/docker-for-windows/install/

image

Step 3 – Switch the mode to Linux containers: right-click the Docker icon in the Windows system tray:

image

After switching to Linux containers, a Linux VM is created automatically. If you open Hyper-V Manager, you can see it:

image

Step 4 – Pull the Docker image from Docker Hub: https://hub.docker.com/r/microsoft/mssql-server-linux/

Command to run: docker pull microsoft/mssql-server-linux

image

Note – In this case, the image has already been downloaded. Otherwise, it will be downloaded from Docker Hub.

Step 5 – Once the image is downloaded, run this command to verify that it is available locally:

docker images

image

Step 6 – To see the image layers and which components have been patched into the image, run this command:

docker history microsoft/mssql-server-linux

image

Step 7 – Spin up the container using this command:

docker run -e "ACCEPT_EULA=Y" -e "MSSQL_SA_PASSWORD=test123@" -e "MSSQL_PID=Developer" --cap-add SYS_PTRACE -p 1401:1433 -d microsoft/mssql-server-linux

To read about the switches used in the above command check this link –  https://docs.microsoft.com/en-us/sql/linux/quickstart-install-connect-docker

Step 8 – Check the status of the container by running this command:

docker ps -a

image

The picture above shows the container ID and its status, which has been up for 31 hours. To confirm that SQL Server inside the container is up and running, or to check the SQL Server errorlog, run this command:

docker logs <container_id>

image

Step 9 – Let's connect to the SQL Server instance. This command connects from within the container:

docker exec -it <container_id> /opt/mssql-tools/bin/sqlcmd -S localhost -U SA

image

Step 10 – Let's connect from SQL Server Management Studio outside the container:

Connect using either the IP address of the machine hosting the containers or localhost, plus the port number. Specifying the port number is a must: if you have multiple containers running, the port number uniquely identifies each container:

image
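If you would rather test the connection from the command line instead of SSMS, the same host,port convention works with sqlcmd run on the host (assuming the SQL Server command-line tools are installed there); the password is the one set when the container was started.

# Connect from the host to the container published on port 1401 and run a quick check
sqlcmd -S localhost,1401 -U SA -P "test123@" -Q "SELECT @@VERSION"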

 

Step 11 – Let’s connect inside the container and check the files and folders:

image
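For reference, the shell in the screenshot above is typically opened with docker exec; the container ID is a placeholder.

# Open an interactive bash shell inside the container
docker exec -it <container_id> bash
# Then, inside the container, list the SQL Server data files
ls -l /var/opt/mssql/data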

Just run the top command and you will see the SQL Server process running:

image

HTH!