Build Smart Solutions using Big Data Stack on Microsoft Azure Platform – Build HDInsight Cluster

We have discussed the following components so far:

  • Azure Data Factory –> It’s the transformation service for the Big Data stack on Azure
  • Azure Data Lake Store –> It’s the storage service that can hold files of any format on Azure
  • Azure Data Lake Analytics –> It’s the compute service that processes the data on Azure

Now, let’s talk about the next managed service, known as Big Data Cluster as a Service. With ADLA, we just write the query, select the parallelism and execute it, without worrying about what’s happening underneath. With an HDInsight cluster, you get virtual machines you can RDP into, where you write your Hive or Pig queries and manage the cluster and its resources yourself.

Let’s see how to create this cluster:

image

 

Just type HDInsight in the search textbox and the option to create an HDInsight cluster will appear.

image

Select Hadoop from the drop-down and choose the appropriate options. One of the main screens is:

image

selecting the number of worker nodes. Based on your compute requirements, you can choose how many worker nodes you need. Once the rest of the inputs are completed, click Create; it will take roughly 20 minutes to set up your cluster. You can further check the progress using the options below in the portal:

image

 

HTH!


Build Smart Solutions using Big Data Stack on Microsoft Azure Platform – Azure Data Factory (Part 1)

In the previous posts, I wrote about Azure Data Lake Store (ADLS) and Azure Data Lake Analytics (ADLA). To make it easier to understand, let’s say ADLS works like a SAN and ADLA works like compute (a server with RAM and CPUs); together they make a great server machine. Now, what is Azure Data Factory?

Azure Data Factory (ADF) is a framework used for data transformation on Azure. Just as we have SSIS for on-premises SQL Server, ADF is primarily the transformation service for Azure data platform services. Let’s take the same perfmon analysis example: we need to process the perfmon logs of 500,000 machines on a daily basis.

1. The data has to be ingested into the system – Azure Data Lake Store
2. The data has to be cleaned – Azure Data Lake Analytics / Hive queries / Pig queries (see the sketch after this list)
3. The data has to be transformed for reporting/aggregation – Azure Data Lake Analytics / Hive queries / Pig queries / Machine Learning model
4. The data has to be loaded into a destination for reporting – Azure SQL DW or Azure SQL DB
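
To make step 2 a little more concrete, here is a minimal U-SQL sketch of the kind of cleaning job an ADF pipeline could invoke on ADLA. The file paths, column names and the empty-value filter are purely hypothetical and only for illustration.

// Hypothetical cleaning step: read raw perfmon CSVs and keep only well-formed rows.
@raw =
    EXTRACT ServerName   string,
            CounterName  string,
            CounterValue string,   // read as string so a malformed value does not fail the extract
            FileName     string    // virtual column taken from the file name in the path pattern
    FROM "/raw/perfmon/{FileName}.csv"
    USING Extractors.Csv();

// A real job would also validate that CounterValue is numeric before converting it.
@clean =
    SELECT ServerName,
           CounterName,
           Convert.ToDouble(CounterValue) AS CounterValue
    FROM @raw
    WHERE !string.IsNullOrEmpty(ServerName) && !string.IsNullOrEmpty(CounterValue);

OUTPUT @clean
    TO "/clean/perfmon_clean.csv"
    USING Outputters.Csv();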

How do we run all these steps in sequence and at regular intervals? ADF is the solution for all of that. To use ADF, we first need to create an ADF account:

image

 

Once you click Create, you will see this screen:

image

 

 

This is the dashboard we will use to build transformations with ADF. In the next post, I will write about how to create a pipeline for transformation.

HTH!

Build Smart Solutions using Big Data Stack on Microsoft Azure Platform – Azure Data Lake Analytics (Part 2)

After learning how to create an Azure Data Lake Analytics (ADLA) account, it’s time to write some queries to leverage it. As we know, U-SQL is the query language for this platform. The best thing about the Microsoft Big Data stack on Azure is that the query languages are SQL-like and really easy to understand.

Let’s see how to leverage ADLA and write U-SQL queries. There are a few options:

1. Submit the job directly from the portal –

image

The major parameters are as follows:

1. Job Name –> The name of the job
2. Parallelism –> The maximum number of compute processes that can run at the same time
3. Priority –> The lower the value, the higher the priority; the job with the higher priority runs first
4. Query Editor –> Where you write your U-SQL queries (see the sketch after this list)
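
As an example of what you could paste into the query editor, here is a hedged U-SQL sketch that flags servers under memory pressure. The input path, column layout, counter name and the 500 MB threshold are all assumptions for illustration.

// Assumed row layout: ServerName, CounterName, CounterValue; the threshold below is illustrative.
@perfmon =
    EXTRACT ServerName   string,
            CounterName  string,
            CounterValue double,
            FileName     string    // virtual column from the file-set pattern
    FROM "/perfmon/{FileName}.csv"
    USING Extractors.Csv();

// Treat a server as under memory pressure if its Available MBytes ever dropped below 500.
@memoryPressure =
    SELECT ServerName,
           MIN(CounterValue) AS MinAvailableMBytes
    FROM @perfmon
    WHERE CounterName == "Available MBytes"
    GROUP BY ServerName
    HAVING MIN(CounterValue) < 500;

OUTPUT @memoryPressure
    TO "/output/memory_pressure_servers.csv"
    USING Outputters.Csv(outputHeader: true);

You would then give this a job name, pick the parallelism and priority, and hit Submit.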

2. Submit the job from other tools, such as:
image

Download the add-ins as per your preference. The other example I will show is from Visual Studio. After installing the tools for Visual Studio, the options will look like this:

image

 

Just select the first option and the interface will look like this:

image

On the left-hand side, you can see the Data Lake Analytics account and, underneath it, the ADLA databases (master by default). It has the feel of SQL Server databases, under which we can have procedures, tables, views, etc.

In the middle, there are options to select the database (a new one can be created), the schema for the objects, and the ADLA account (Local by default), and then Submit. Moreover, if you click the drop-down underneath Submit, you can also set the parallelism and priority.

On the right side, you can register/create assemblies for programming purposes and later use them in your U-SQL queries.
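
To give a flavour of these catalog objects, here is a minimal U-SQL sketch; the database, view and assembly names are hypothetical.

// Hypothetical catalog objects; names and paths are assumptions for illustration.
CREATE DATABASE IF NOT EXISTS PerfmonDb;
USE DATABASE PerfmonDb;

// A view over raw CSV files, so queries don't have to repeat the EXTRACT schema every time.
DROP VIEW IF EXISTS dbo.RawPerfmon;
CREATE VIEW dbo.RawPerfmon AS
    EXTRACT ServerName   string,
            CounterName  string,
            CounterValue double
    FROM "/perfmon/sample.csv"
    USING Extractors.Csv();

// Once an assembly (say, a custom extractor or UDF library) has been registered in the database,
// a script can reference it like this (the assembly name is made up):
// REFERENCE ASSEMBLY PerfmonDb.PerfmonHelpers;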

After you submit the job and it completes, the interface looks like this:

image
As shown in the picture, you can see the status of the job and how long the entire job took to run.

Moreover, to learn U-SQL, please follow – https://msdn.microsoft.com/en-US/library/azure/mt591959(Azure.100).aspx

Let’s finish covering all the components of the Cortana Analytics Suite. After that, we will pick up a real-life scenario and explain how all these components fit together.

HTH!

Build Smart Solutions using Big Data Stack on Microsoft Azure Platform – Azure Data Lake Analytics (Part 1)

Now that we have set the context through the previous posts, it’s time to understand how Big Data queries are written and executed. As we know, we can store data in Azure Data Lake Store, and there will be a use case for it. Let’s take a very simple example with perfmon data: I have written some queries to process perfmon data on a daily basis. Say we want to find out how many servers, out of 500,000, faced memory pressure. We have automated perfmon data collectors scheduled on all the systems, and the logs need to be analyzed daily.

Scenario:

1. Perfmon data collector files in CSV format are saved in Azure Data Lake Store
2. We need to process all the files to find the servers that faced memory pressure

In this scenario, we have options such as putting the data inside SQL Server and doing the analysis on top of it. However, analyzing perfmon data for 500,000 servers is going to need a lot of compute on SQL Server, and the hardware could cost a great deal. Moreover, the query has to run just once per day. Do you think it’s wise to purchase a 128-core machine with TBs of SAN to do this job? In such cases, we have the option to process the data using Big Data solutions.

Note – I have used this very simple example to help you understand the concepts. We will talk about real life case studies as we move forward. 

In this particular scenario, I have choices like:

1. Use Azure Data Lake Analytics
2. Use an Azure HDInsight Hive cluster

For this post, I will pick Azure Data Lake Analytics (ADLA). This particular Azure service is also known as Big Data Query as a Service. Let’s first see how to create an ADLA account:

Step 1

image
Step 2 – Enter the Data Lake Store details for the storage, along with the other details

image

In the above steps, we created a compute account, i.e. an Azure Data Lake Analytics account, which will process the files for us. (By analogy, ADLA is one machine with a set of processors and RAM, and for storage we added the ADL Store to the account.) In Azure, storage and compute are separate entities, which lets us scale either compute or storage independently of the other.

Step 3 – After clicking create, the dashboard will look like this:

image

Now both the compute (to process the files) and the storage (where the perfmon files are stored) are created. As this service is Big Data Query as a Service, we can just write big data queries, which the Azure platform executes internally and automatically. It’s a PaaS service, like Azure SQL DB, where you just write your queries without worrying about which machine is underneath or where the files are stored internally.

Analogically, it’s a broker: you hand over the files, give it the instructions, tell it how many workers should be put on the task (the compute), and it shares the results with you. This broker understands U-SQL as its language, just as SQL Server understands T-SQL. If you want to get your task done, you need to write U-SQL queries and submit them to ADLA. Based on the instructions and the compute you define, it will return the results.
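
Just to give a taste of what handing over such instructions looks like, here is a minimal, hypothetical U-SQL sketch that reads one perfmon CSV from the store and writes it back out; the path and columns are assumptions.

// Minimal sketch: read one (assumed) perfmon CSV from ADL Store and copy it back out.
@input =
    EXTRACT ServerName   string,
            CounterName  string,
            CounterValue double
    FROM "/perfmon/2016-01-01.csv"
    USING Extractors.Csv();

OUTPUT @input
    TO "/output/perfmon_copy.csv"
    USING Outputters.Csv();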
Let’s talk about the framework for writing U-SQL queries in the upcoming posts.

 

HTH!

Build Smart Solutions using Big Data Stack on Microsoft Azure Platform – Azure Data Lake Store

Let’s start with the advanced storage we have on Microsoft Azure. We now have two options for storage: 1. Blob Storage and 2. Azure Data Lake Store (ADLS). ADLS is optimized for analytics workloads; therefore, when it comes to Big Data/advanced analytics, ADLS should be the first choice. Moreover, when we talk about Big Data, one must understand the concepts of HDFS (Hadoop Distributed File System) and MapReduce. For more information, please check – Video

Before we get into Azure Data Lake Store, it’s really important to understand that Azure Data Lake is a fully managed Big Data service from Microsoft. It consists of three major components:

1. HDInsight (Big Data Cluster as a Service), which further offers 5 types of clusters
image
We have the option to create any of these 5 cluster types as per our needs.

2. Azure Data Lake Store (hyper-scale storage optimized for analytics)
3. Azure Data Lake Analytics (Big Data Queries as a Service)

ADLS is the HDFS for Big Data analytics on Azure. The major benefits it offers are:
1. No limits on file size – a single file can be petabytes in size
2. Optimized for analytics workloads
3. Integration with all major Big Data players
4. Fully managed and supported by Microsoft
5. Can store data in any file format
6. Enterprise-ready, with features like access control and encryption at rest

It’s really simple to create an Azure Data Lake Store Account:

Step 1: Search for the Azure Data Lake Store service in the portal
image

Step 2: Enter the service name and resource group name, and choose the appropriate location. Currently, the service is in preview, so the choice of data center locations is limited.

image

Step 3: Use the Data Viewer to upload and download data, if the data size is small.

image

However, you also have tools like AdlCopy or an Azure Data Factory copy data pipeline to upload/download data to and from the ADL Store. As shown in the picture above, you can easily monitor the number of requests and the data ingress/egress rates right from the portal. In the next blog post, we will talk about leveraging the ADL Store with ADL Analytics and Azure Data Factory.

 

HTH!