Embrace DevOps as a Database Administrator – Build container images with latest code release

In the previous posts of this series, we discussed putting T-SQL code under source control and automating code releases. In this post, we will look at how the latest code release can be deployed directly to containers, and how the same image can then be used to build Dev/Test/Prod environments.

The flow of this post will be:

1. Spin up a SQL Server container on Linux

2. Restore a database in the container

3. Run SQLPackage.exe to automate the code release to the container

4. Build an image with all the changes

5. Spin up containers with the latest changes for Dev/Test/Prod

Note – I presume you have an understanding of containers. If not, please check this post before reading further.

In the last post, we discussed using SQLPackage.exe to automate the code release.

image

and the input parameters for this exe were as follows:

Script:

"C:\Program Files (x86)\Microsoft SQL Server\140\DAC\bin\sqlpackage.exe"

/action:Publish

/sourceFile:"C:\Users\harshch\source\repos\DBcouncil\DBcounil\bin\Debug\DBcouncil.dacpac"

/targetconnectionstring:"Data Source=localhost; Initial Catalog=WideWorldImporters; Integrated Security=true;"  << the target connection string will point to the container

1) Let's look at the container environment we have here. The container was spun up using the following command, and we have a container instance running with a database:

docker run -e "ACCEPT_EULA=Y" -e "MSSQL_SA_PASSWORD=test123@" -e "MSSQL_PID=Developer" --cap-add SYS_PTRACE -p 1401:1433 microsoft/mssql-server-linux

image

2) Let’s copy the backup file to the container:

image
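The copy and the subsequent restore can also be scripted. A minimal sketch, assuming the backup file is named WideWorldImporters-Full.bak and sits in C:\backups on the host; the logical file names in the MOVE clauses are assumptions based on the standard WideWorldImporters backup, so confirm them with RESTORE FILELISTONLY first:

# copy the backup from the host into the container's data directory
docker cp C:\backups\WideWorldImporters-Full.bak <container_id>:/var/opt/mssql/data/

# list the logical file names contained in the backup
docker exec -it <container_id> /opt/mssql-tools/bin/sqlcmd -S localhost -U SA -P test123@ -Q "RESTORE FILELISTONLY FROM DISK = '/var/opt/mssql/data/WideWorldImporters-Full.bak'"

# restore the database, moving each file to the container's data directory
docker exec -it <container_id> /opt/mssql-tools/bin/sqlcmd -S localhost -U SA -P test123@ -Q "RESTORE DATABASE WideWorldImporters FROM DISK = '/var/opt/mssql/data/WideWorldImporters-Full.bak' WITH MOVE 'WWI_Primary' TO '/var/opt/mssql/data/WideWorldImporters.mdf', MOVE 'WWI_UserData' TO '/var/opt/mssql/data/WideWorldImporters_UserData.ndf', MOVE 'WWI_Log' TO '/var/opt/mssql/data/WideWorldImporters.ldf', MOVE 'WWI_InMemory_Data_1' TO '/var/opt/mssql/data/WideWorldImporters_InMemory_Data_1'"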

3) After restoring the database to the container instance, let's run the schema comparison tool to identify the delta of changes:

image

4) The same set of changes is identified:

image

5) Let's use SQLPackage.exe to deploy the changes to the container:

"C:\Program Files (x86)\Microsoft SQL Server\140\DAC\bin\sqlpackage.exe"

/action:Publish

/sourceFile:"C:\Users\harshch\source\repos\DBcouncil\DBcounil\bin\Debug\DBcouncil.dacpac"

/targetconnectionstring:"Data Source=localhost,1404; Initial Catalog=WideWorldImporters; Integrated Security=false; user id=sa; password=test123@"  << the target connection string points to the container

image

6) Let's connect to the SQL Server instance running in the container and see if the changes have been deployed:

image
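The same check can be done from the command line instead of SSMS. A small sketch, using the port and sa password from the connection string above and assuming the release added the Sales.Invoices.OrderID1 column described in the previous post:

sqlcmd -S localhost,1404 -U sa -P test123@ -d WideWorldImporters -Q "SELECT name FROM sys.columns WHERE object_id = OBJECT_ID('Sales.Invoices') AND name = 'OrderID1'"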

7) Now let's commit these changes to a container image so that the next container we spin up has all these changes built in. This entire lifecycle can be automated through PowerShell or Windows scripts (a sketch follows below), and that script can be made part of the TFS build process to automate the application release.

The syntax to commit the changes to an image is: docker commit <containerID> <newcustomimagename> (shown in step 2 in the screenshot below)

image

8) Note that the size of the image named sql_2017_release1 is larger than the other images because the database has been restored into it.
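Here is a minimal PowerShell sketch of that lifecycle, pulled together from the commands used in this post; the paths, the backup/restore step and the wait for SQL Server startup are assumptions you would adapt to your environment:

# 1. Spin up a SQL Server Linux container and capture its ID
$containerId = docker run -e "ACCEPT_EULA=Y" -e "MSSQL_SA_PASSWORD=test123@" -e "MSSQL_PID=Developer" --cap-add SYS_PTRACE -p 1401:1433 -d microsoft/mssql-server-linux
Start-Sleep -Seconds 30   # give SQL Server time to start

# 2. Copy the backup into the container (then restore it as shown earlier in this post)
docker cp C:\backups\WideWorldImporters-Full.bak "${containerId}:/var/opt/mssql/data/"

# 3. Deploy the latest DACPAC with SQLPackage.exe
& "C:\Program Files (x86)\Microsoft SQL Server\140\DAC\bin\sqlpackage.exe" /action:Publish /sourceFile:"C:\Users\harshch\source\repos\DBcouncil\DBcounil\bin\Debug\DBcouncil.dacpac" /targetconnectionstring:"Data Source=localhost,1401; Initial Catalog=WideWorldImporters; Integrated Security=false; user id=sa; password=test123@"

# 4. Commit the container as a new image that carries the restored database and the release
docker commit $containerId sql_2017_release1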

9) Now, let’s spin up a container from this image and see if we have the database already created:

docker run -e "ACCEPT_EULA=Y" -e "MSSQL_SA_PASSWORD=test123@" -e "MSSQL_PID=Developer" --cap-add SYS_PTRACE -p 1405:1433 -d sql_2017_release1

image

10) Let's connect to the new container's SQL Server instance and see if the database is there with all the changes:

image

We can see that all the deployed changes were successfully carried over to the new container. This image can now be used to build Dev/Test/Prod environments. In the next post, we will discuss how to orchestrate containers using Kubernetes on Azure.

HTH!


Embrace DevOps as a Database Administrator – automate the T-SQL code releases

Continuing the series of posts on DevOps for a DBA, let's learn how the SQL code release can be automated. In the previous post, we discussed importing the schema of the database into SQL Server Data Tools and putting T-SQL code under source control.

Historically, all releases have been executed manually on the production server. Let's see how this can be optimized using SSDT. There are two ways to do that:

1. Leverage the SQL Server Schema Comparison tool –> Manual

2. Leverage SQLPackage.exe –> Automated

1. Leverage the SQL Server Schema Comparison tool – Let's see how this tool can be used:

1) Click on New Schema Comparison tool:

image

2) Select the source of the comparison, in this case it’s the project where the changes have been made:

image

3) The target is going to be the database where the code needs to be deployed:

image

4) Let's click on Compare and see the results:

image

The output above clearly shows the planned actions: remove the old column Sales.Invoices.BillToCustomerID, add Sales.Invoices.OrderID1, and subsequently change the procedure where that table is used.

5) Moreover, you can also control the behavior of the deployment as follows:

image

So far, we have identified the delta of changes to be applied; now let's deploy them to the actual database. One way to apply the changes is to click on Update:

image

6) When you click on Yes, the changes will be applied to the target database:

image

7) Let's open the database in SSMS and see if the change is reflected there:

image

DevOps is all about automation, and the above method involves a lot of manual intervention. Let's discuss SQLPackage.exe and how it can help automate the entire code release process.

1) Let’s locate SQLPackage.exe:

image

2) Let’s check the parameters of this exe:

image

3) In our case we will use the following command to publish all the changes automatically:

"C:\Program Files (x86)\Microsoft SQL Server\140\DAC\bin\sqlpackage.exe"
/action:Publish
/sourceFile: location of the DACPAC file, which contains all the changes made to the database along with the full script of the database
/targetconnectionstring: target instance where the changes need to be applied, or where a fresh DB with all the changes is needed

The location of the DACPAC file can be found under:

image

4) In our case, let's apply the changes to the target database through a script, which can be automated using Windows scripting or the TFS build process:

Script:

"C:\Program Files (x86)\Microsoft SQL Server\140\DAC\bin\sqlpackage.exe"

/action:Publish

/sourceFile:"C:\Users\harshch\source\repos\DBcouncil\DBcounil\bin\Debug\DBcouncil.dacpac"

/targetconnectionstring:"Data Source=localhost; Initial Catalog=WideWorldImporters; Integrated Security=true;"

image

5) Let's check the changes in SSMS:

image

This script can be put in a TFS build or automated with the Windows Task Scheduler to pull all the changes periodically and apply them to the test/dev/prod servers (a minimal scheduling sketch follows below). Moreover, to create master tables in the build process, script items can be added as follows:

image
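As one possible way to schedule this outside of TFS, the SqlPackage call can be wrapped in a small PowerShell script and registered with the Windows Task Scheduler. A minimal sketch; the script path and the nightly schedule are assumptions:

# deploy-dacpac.ps1 - publish the latest DACPAC to the target instance
& "C:\Program Files (x86)\Microsoft SQL Server\140\DAC\bin\sqlpackage.exe" /action:Publish /sourceFile:"C:\Users\harshch\source\repos\DBcouncil\DBcounil\bin\Debug\DBcouncil.dacpac" /targetconnectionstring:"Data Source=localhost; Initial Catalog=WideWorldImporters; Integrated Security=true;"

# register the script to run nightly at 2 AM (run once from an elevated prompt)
schtasks /Create /SC DAILY /ST 02:00 /TN "Deploy-DBcouncil" /TR "powershell.exe -File C:\scripts\deploy-dacpac.ps1"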

In the next posts, we will discuss:

1. How to deploy these changes directly to a container

2. How to update the image with the latest changes

3. How to deploy containers with the latest changes

4. How to manage containers with Azure Container Service

 

HTH!

Embrace DevOps as a Database Administrator

I have been writing posts about embracing new trends like Cloud, NoSQL, and Business Analytics. Currently, I have been speaking about DevOps and containers at various events, and this series is a way to share that learning.

I think this is a new evolution in the software development industry, one that everyone is embracing with open arms. The agility and speed it brings to managing code and deployments are really impressive. Being a data guy, I was looking for a way to join this bandwagon, and I finally found something interesting on how a database professional can be part of it. DevOps is a big change overall, but I will focus on how a database person can contribute.

Generally, DBAs have to do lots of code releases on a daily basis and move many changes from dev to test, staging or pre-prod, and then to the production server. They often struggle to keep track of all the scripts, and this can take up 20-30% of their working hours. In this series of posts, I will explain how this can be optimized, how practices like source control can help a DBA, and how adding containers can make the entire process simple and easy to manage. While writing this post, I have presumed that the reader has an understanding of DevOps concepts.

 

In this post, let's talk about a free tool, SQL Server Data Tools (SSDT), which can be downloaded from here. Let's see how we can play around with this tool and how it can help with code release automation.

 

Open Visual Studio, create a new SQL Server Database Project, and click OK:

clip_image001

 

Right-click on the solution name and then click Import to add the database:

image

 

Click on Select Connection and then browse to the right instance and database:

image

 

Once the DB is connected, click on Import to import all the DB schemas:

image

 

Now, if you look at Solution Explorer, you will find all the schemas and objects related to the DB:

image

 

Let's add this solution to source control; in this case, I connect to a Visual Studio Team Services online account:

image

 

In this case, I will connect to the Database_Migration project. You first need to have an account here, or you may want to connect to a local TFS server (check with your development team on this):

image

 

After connecting to the TFS server, let's add the source code by clicking on "Add Solution to Source Control" under the Source Control option:

image

 

Enter the project name, in this case DBcouncil, and click OK:

 

image

 

When the project is created, go back to the Visual Studio project and check in the code:

image

 

Now let's see how the code looks on TFS. In this case we have used Visual Studio Team Services, i.e. the online TFS account, so let's open the link to access the Team Services online account. On-premises TFS, Git, and other source control tools are also supported:

 

image

 

When you log in to this account, you will be routed to this dashboard:

image

 

Click on Database_Migration, where our project was created, and then click on Code for the "DBCouncil" project:

 

image

 

Here you can see all the schemas and their objects:

 

image

 

Now let's make some code changes and see how they can be tracked. In this case, we have renamed the Location column of the Cities table, and SSDT also shows where this change will have an impact, e.g. any references in stored procedures or other objects. Here it shows the reference to this column in the stored procedure GetCityUpdates – if we click Apply, the references will also be updated.

 

image

 

Now let's check in the changes and view them on the TFS online portal:

 

image

 

If you open the changeset for the DBCouncil project, you will be able to see the change:

 

image

 

It saves you the hassle of tracking the history of changes manually. I have seen DBAs keeping all the change scripts in emails or a shared folder. Instead, if the code is put under source control, you can look up changes like this:

image

 

All these changes are still only in the SSDT project; they are yet to be deployed to the database.

In the next blog posts, I will share:

1. How to automate the code release process

2. How to integrate the DBs with containers on Windows and Linux

3. How containers can help to set up dev/test/prod environments within seconds

4. How to manage these containers with Azure Container Service – Kubernetes

 

HTH!

SQL Server 2017 on Windows Containers – Part 3 (Linux Platform)

In continuation of my previous post about SQL Server on Windows containers, I am going to write about running SQL Server in Linux containers on the Windows platform. We can have both Linux and Windows containers on Windows. I am going to share the steps to get SQL Server running in Linux containers.

Step 1 – Enable the Containers and Hyper-V roles on Windows:

image

Step 2  – Download Docker Engine for Windows 10 from https://docs.docker.com/docker-for-windows/install/

clip_image001[7]

Step 3 – Change the mode to Linux containers – right-click on the Docker icon in the Windows system tray:

image

After switching to Linux containers, a Linux VM is created automatically. If you open Hyper-V Manager, you can see it:

image

Step 4 – Pull the Docker image from Docker Hub – https://hub.docker.com/r/microsoft/mssql-server-linux/

Command to be run – docker pull microsoft/mssql-server-linux

image

Note – In this case, the image has already been downloaded. Otherwise, this command will download the image from Docker Hub.

Step 5 – Once the image is downloaded, run this command to verify it is present:

docker images

image

Step 6 – To check how many components have been patched in the image, run this command:

docker history microsoft/mssql-server-linux

image

Step 7 – Spin up the container using this command:

docker run -e "ACCEPT_EULA=Y" -e "MSSQL_SA_PASSWORD=test123@" -e "MSSQL_PID=Developer" --cap-add SYS_PTRACE -p 1401:1433 -d microsoft/mssql-server-linux

To read about the switches used in the above command check this link –  https://docs.microsoft.com/en-us/sql/linux/quickstart-install-connect-docker

Step 8 – Check the status of the container – run this command:

docker ps -a

image

As you can see in the picture above, it shows the ID of the container and its status, which has been up for 31 hours. To confirm that SQL Server inside the container is up and running, or to check the SQL Server errorlog, run this command:

docker logs <container_id>

image

Step 9 – Let's connect to the SQL Server instance. This command connects from within the container:

docker exec -it <container_id> /opt/mssql-tools/bin/sqlcmd -S localhost -U SA

image

Step 10 – Let's connect from SQL Server Management Studio outside the container:

Connect using either the IP of the machine hosting the containers or localhost, plus the port number. Entering the port number is a must – if you have multiple containers spun up, the port number uniquely identifies the container:

image
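If you prefer the command line over SSMS, the same connection can be made from the host with sqlcmd (assuming sqlcmd is installed on the host; the port and sa password are the ones used in step 7):

sqlcmd -S localhost,1401 -U sa -P test123@ -Q "SELECT @@VERSION"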

 

Step 11 – Let’s connect inside the container and check the files and folders:

image
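The command behind this step is not visible in the screenshot; a typical way to do it is to open a bash shell in the container and list the SQL Server data directory:

# open an interactive shell inside the running container
docker exec -it <container_id> bash

# inside the container, list the SQL Server data files
ls /var/opt/mssql/data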

Just run the top command and you will see the SQL Server process running:

image

HTH!

SQL Server 2017 on Windows Containers – Part 2 (Installation)

I presented this session at our local SQL community event in Delhi, and it was fun to see how quickly we could spin up multiple SQL Server instances, within seconds. I am going to publish my demo scripts here so that readers can leverage them:

I did my demo on a Windows 10 machine:

Step 1 – Enable the Containers role on Windows:

image

 

Step 2  – Download Docker Engine for Windows 10 from https://docs.docker.com/docker-for-windows/install/

clip_image001[7]

 

Step 3 – Change the mode to Windows containers – right-click on the Docker icon in the Windows system tray:

clip_image001[9]

 

Step 4 – Pull the Docker image from Docker Hub – https://hub.docker.com/r/microsoft/mssql-server-windows/

Command to be run – docker pull microsoft/mssql-server-windows-developer

Step 5 – Once the image is downloaded, run this command to verify it is present:

docker images

clip_image001[11]

 

Step 6 – To check how many components have been patched in the image, run this command:

docker history microsoft/mssql-server-windows

image

Step 7 – Spin up the container using this command:

docker run -d -p 1500:1433 -e sa_password=test123@ -e ACCEPT_EULA=Y microsoft/mssql-server-windows

To read about the switches used in the above command check this link –  https://docs.microsoft.com/en-us/sql/linux/quickstart-install-connect-docker

Step 8 – Check the status of the container – run this command:

docker ps -a

image

As you can see in the picture above, it shows the ID of the container and its status, which has been up for 31 hours. To confirm that SQL Server inside the container is up and running, or to check the SQL Server errorlog, run this command:

docker logs <container_id>

image

 

Step 9 – Let's connect to the SQL Server instance. This command connects from within the container:

docker exec -it <container_id> sqlcmd -S . -U sa

image

Step 10 – Let's connect from SQL Server Management Studio:

1) Run this command to get the IP address of the container:

docker inspect --format='{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' <container_id>

image

 

Step 11 – Connect to the SQL Server instance using this IP and the sa password used to spin up the container:

image

 

Step 12 – Let's connect inside the container and check the files and folders:

docker exec -it <container_id> powershell

After the above command is executed, you will see a PowerShell prompt open. Just run the command tasklist to see the processes running inside the container. You can run any command in this PowerShell session to operate inside the container.

image

Step 13 – The storage and files inside the container will remain until you remove the container. However, if you want your files or new databases to live on a disk attached to the local host, you will need to create volumes and map them to folders on the host OS.

Persisting the data

docker volume ls

image

docker volume create data2

image

docker volume ls

image

docker run -d -p 1500:1433 -e sa_password=test123@ -e ACCEPT_EULA=Y -v c:/data:c:/data2 microsoft/mssql-server-windows

image

Note – In the docker run command above (the fourth command), we have mapped the local storage c:\data to c:\data2 inside the container. Let's check it out:

1) Let's get inside the container with PowerShell:

image

2) Run the command to check if the data2 directory has been created:

image

3) Create a database with its files in the data2 folder, and then let's check how it looks in the host OS:

Inside the container, we can see the data2 folder has already been created:

image

Let's create the database and place its files in the data2 folder.
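One way to do that without SSMS is through sqlcmd inside the container (a sketch; the database name and file names are just examples):

docker exec -it <container_id> sqlcmd -S . -U sa -P test123@ -Q "CREATE DATABASE DemoDB ON (NAME = DemoDB_data, FILENAME = 'c:\data2\DemoDB.mdf') LOG ON (NAME = DemoDB_log, FILENAME = 'c:\data2\DemoDB_log.ldf')"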

Now let's see how c:\data looks in the host OS. The database files created inside the container were persisted on the host OS drive:

image

HTH!

SQL Server 2017 on Windows containers – Part 1

I have had various opportunities to work with datacenter folks who maintain SQL Server environments along with other applications. One of the major challenges they face is patching and maintaining these servers. With the advent of virtualization and cloud technology, a virtual machine is one of the preferred places to install an enterprise application. These teams regularly maintain servers, apply patches, and install antivirus and other relevant software. System Center makes these tasks much easier, but it still takes 20-30% of their working hours to do these activities.

There are multiple challenges faced by the datacenter staff:

1. Install the entire OS from scratch in the VMs and then install SQL Server on top of it

2. Install antivirus and other software on each VM

3. Patch each of the VMs and then monitor and manage them

The concept of containers caters to these pain areas. *It's a soft layer of virtualization in which you don't need to install a separate virtual machine for each environment; you install a container image on the host OS, and the container shares the kernel with the host machine. This saves a lot of effort and resources in large environments. However, the pros and cons of the two approaches are debatable in terms of security and isolation.

*Note – You can still run containers in Hyper-V isolation mode on top of the host OS. That fits well for multi-tenant systems where the OS kernel needs to be separate.

Before we get into the SQL Server installation, it's really important to understand how containers work. As the picture below shows, for a virtual machine:

docker-containers-vms

1. The operating system, device drivers, antivirus, etc. need to be installed separately, and only then is the actual application like SQL Server installed

2. To start SQL Server, we first need to boot the VM and then start the application – a long process that consumes a lot of time and resources

3. Lots of memory and CPU cycles are spent booting the OS and keeping it up and running

However, with containers it's much simpler:

1. Just run the container on top of the host OS – there is no need to install a VM and OS separately

2. It's very quick to start the image, as there is no need to load a whole bunch of OS binaries; the base images are Windows Nano Server or Server Core editions

3. It saves lots of CPU cycles and memory, and it's easier to maintain

Let's try to understand what containers are and how SQL Server 2017 can be installed on top of them.

Common jargon used when talking about containers:

  • Docker Engine
  • Containers
  • Image
  • Docker Hub

Docker Engine – Docker, Inc. is the company that started the Docker project in 2013. Docker Engine is to a container what a hypervisor is to a virtual machine. It's an open source technology, open to community contribution, and it's the core technology for managing container images as well as starting and stopping containers. Everything else, like clustering, security, or orchestration, is built around it. Docker Engine needs to be installed to manage containers on Windows and Linux.

Containers – Containers are logically like virtual machines: a soft virtualization layer running on top of the host OS, sharing the host OS kernel. There is also an option to run them with Hyper-V isolation, which is preferred for applications that have to be isolated.

Images – An image is technically the template you use to spin up containers. An image has its base OS, and on top of that the application you want to use, plus a sandbox that captures changes. Once a container is stopped, it can be converted to an image by applying the changes in the sandbox.

image

Docker Hub – It's the repository where you can publish your own images or download publicly available ones.
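To tie these terms together, here is the basic command flow that the next posts walk through (the image, password and names shown are the ones used later in this series):

# pull an image from Docker Hub
docker pull microsoft/mssql-server-linux

# ask Docker Engine to start a container from that image
docker run -e "ACCEPT_EULA=Y" -e "MSSQL_SA_PASSWORD=test123@" -d microsoft/mssql-server-linux

# once the container has been customized, capture it as a new image
docker commit <container_id> <new_image_name>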

In the next posts, let's learn how to configure Windows and Linux containers on a Windows 10 host.

HTH!

 

Summary of the learning in the year 2016-2017

It's been really long since I wrote a post. I wanted to write this one to talk about how the last year has been and how I have been trying to evolve with data platform technologies. Last year, I got an opportunity to peep outside the RDBMS world, and I found a whole new dimension of data. I remember attending a session in 2015 which amazed me with the power of data. The session was about data platform technologies, where I saw this:

image

This is the list of different data platform companies working to create products in various categories like Relational, NoSQL, Caching, etc. It's a long list!

I had heard a lot about NoSQL and Big Data, but I had never had a chance to learn or get into all this. Last year, I decided to learn what it's all about. I started learning NoSQL and even wrote a series of posts on NoSQL concepts from a DBA perspective. I went in depth into document stores like MongoDB and DocumentDB, and organized a NoSQL event for the community. My intent has always been to share what's new in the market and help the community evolve with the latest trends.

For almost the last eleven years of my career, I have immersed myself in technical learning, and I wanted to learn a bit about architecture and planning. I wanted to learn:

1. What exactly a senior leadership team does
2. How business planning is done
3. How business and technical solutions are put together to solve a business problem
4. How architectures are created

In summary, how the entire business functions and the various types of profiles involved in solving a business problem. I got an opportunity to do the TOGAF certification and grabbed it wholeheartedly. It was a tough challenge to align myself with the business processes to clear this exam, but it was quite interesting to learn about the various types of architectures that are created to solve a business problem. It was a fabulous experience to learn TOGAF and clear the certification on the first attempt. I wrote a small blog post on this as well.

Later, I moved to Big Data, where I thoroughly learnt the entire Big Data stack on the Microsoft Azure platform, i.e. the Cortana Analytics Suite. This completely changed my point of view on data, and I was blown away by the power of Big Data.

 

One great advantage of this era of cloud computing is that every technology or product is getting integrated in one single place. For the past couple of years, I have been working on SQL Server on Azure and on-premises, and it was the perfect time to extend my learning to Big Data on Azure. I learnt about the Cortana Analytics Suite and also did some customer projects. Moreover, we organized two events for the SQL Server community in the North India region and wrote some posts to share the understanding.

Another interesting subject is Machine Learning. Oh boy! It's great to understand the power of machine learning and the kind of impact it can have on various business problems. It's very difficult to master these technologies, but understanding where they fit in a solution is really interesting. Fortunately, I got opportunities with customers for POCs and education deliveries, which helped me delve deeper into these technologies. To further enhance my knowledge of big data solutions, I started preparing for 70-475 Designing and Implementing Big Data Analytics Solutions. It covers:

1. Stream Analytics
2. Azure Data Factory
3. Machine Learning
4. HDInsight
5. Azure Data Lake Store
6. Storm/Spark
7. R

The exam was really tough, and I had to do a lot of hands-on work during preparation. However, I cleared it on the first attempt and picked up lots of learning points along the way. I wanted to write this post to share my non-RDBMS learning in the recent past. It may help data professionals find a new S-curve and learn something new.

I am going to move to a new position within Microsoft; however, the passion for writing and community work will continue. Soon, there is a plan to have a community meetup on SQL Server on Linux and the new features of SQL Server 2017.

Stay Tuned for more updates!