Tips to prepare for TOGAF exam

After clearing my exam, I have been getting lots of queries about how to prepare for it, so I wanted to share my experience with the community to help other folks with their prep. I took this exam in June 2016. There are multiple options for taking this exam:

1. TOGAF Foundation
2. TOGAF Certified
3. TOGAF Foundation & Certified combined
4. TOGAF Bridge exam (TOGAF 8 to TOGAF 9 upgrade)

I took the third option, the combined TOGAF Foundation and Certified exam. There are various ways to prepare – one is to download the book from The Open Group website, another is to go for instructor-led training. I am not sure whether people have cleared the exam without any expert advice, or just based on their prior experience. However, I cleared the exam after taking training from Simplilearn. I really liked their trainers; they were experienced professionals and very knowledgeable about the subject.

Training and self-study material:
I especially liked the way Satish Byali trained us by sharing real-life experiences. The following are great resources he shared, all available for free:

1. He shared demos of tools like Abacus, which is used to create architectures based on TOGAF standards.
2. Great examples like E-Pragati, an initiative by the Andhra Pradesh government that uses TOGAF architecture standards. Some resources worth a look:

3. Some other good resources:

4. Most important one: posters for each phase – http://orbussoftware.com/resources?type=Poster&topic=&date= . You need to remember every bit of these. The posters cover the inputs and outputs of each phase and the various deliverables, which are must-knows for the exam.

Reading from the book is one thing, but knowing what really happens at the ground level helps. For example, the Abacus tool shows all the artifacts created at each phase. Seeing how they look in reality and how to create them will help you absorb these concepts and relate to them. You can download the trial version of Abacus to play around with for one month.

Taking simulation tests and tips from internet searches
There are two simulation tests each for the Foundation and Certified exams on the Simplilearn website, plus 80 bonus questions for extra practice. I took those tests 3-4 times each. I also searched the internet to gather more exam tips, and here are the ones which really helped:

1. https://setandbma.wordpress.com/2011/05/16/togaf-preparation-aid-for-part-2/
2. http://blog.goodelearning.com/togaf/passing-exam-top-ten-tips-togaf-certification/
3. http://techblog.fywservices.com/2012/11/techniques-to-prepare-for-and-pass-the-togaf-certification-exams/

Must watch Video series – https://www.youtube.com/watch?v=3M4NKwoaLk4


On the exam day:
Finally, after two months of preparation, the exam day came. The first tip I will give is: be confident and don't panic. My exam was at 12:30 PM and the test center was 60 km from my home. I had done all the research on how to reach the center a day before the exam. It helped avoid unnecessary stress, and I reached the exam center by 12:00 PM as planned. Things were really smooth and I was confident going into the exam.

At the center –
1. They asked for two ID proofs as per the process, so please carry two. I wasn't aware of that beforehand.
2. They gave me a locker for all my stuff. Nothing was allowed inside the test center except the two ID proofs.
3. They do a security check before taking you into the test center.
4. You are allowed to take breaks in between, but the clock for the test keeps running, and they will do a security check again when you return.


Foundation test:
Finally, the time came when I had to click the right options. There is no negative marking, so you should attempt all 40 questions. 22 is the passing score for this exam, and I scored 31, which was above my expectations.

1. I finished all the questions within 30 minutes and marked the questions whose answers I was not sure about.
2. I reviewed all the questions once again – you must do this, because sometimes you will find answers in later questions.
3. I reviewed all the marked questions again in a second round.

I finished the test 5 minutes before the end time and was moved to the Certified test straight away.

Certified test: As soon as I clicked finish on the Foundation exam, I was at the first question of the Certified exam. You have to go through long paragraphs to understand the scenario; then there are questions with 4 options to select from.

1. The best tip I got was to select the answer which is really TOGAFish. You will see the answers talk about deliverables and artifacts.
2. This is an open-book exam; I confirmed whether the artifacts and deliverables mentioned in the answers were correct.
3. The book is really helpful; you can search through the content and text of the chapters in the book.
4. If you practice well on the demo tests, you will get an idea of the types of questions to expect, which is really helpful in the exam.
5. I finished my exam in 1 hour and then spent the next 30 minutes reviewing all the questions until time was over.

When I clicked finish, my result flashed in front of me and it was a success. After clearing the exam, you will get an email from The Open Group. Use the instructions in the email to retrieve your certificate.

I hope these tips help you, and I will say it again – don't panic and just go for it. All the best!

Embrace NoSQL as a relational Guy – Column Family Store

It’s been really long since I wrote a post. It’s been a really busy month; otherwise, it’s really difficult to consciously stay away from writing. This is the final post in this series about NoSQL technologies.

My intent for this series of posts is to cover the breadth of technologies and help readers understand the bigger picture. It feels like a revolution, with technology growing at a massive scale. Everyone should have at least a basic understanding of technologies like Cloud Computing, NoSQL, Big Data, and Machine Learning.

Okay, let’s talk about the column family store, aka columnar store. The best way to explain this is with the example of columnstore indexes, available starting with SQL Server 2012. In traditional SQL Server tables, the data is stored in the form of rows:

[Image: column-oriented vs row-oriented database storage]

Referring to the above picture – in RDBMS systems, data is stored on the page in units of complete rows. Even if we select a single column from the table, the entire row has to be brought into memory.

For business intelligence reports, we generally rely on aggregations like sum/avg/max/count. An aggregation of a single column on a table with 1 billion rows has to scan the entire table to process the query. On the other hand, if the data is stored in the form of individual columns, aggregations need far fewer I/Os. Moreover, columnar databases offer huge compression ratios, which can easily shrink 1 TB of data into a few GBs.

This kind of database system is specifically designed for aggregations and BI reporting. For example, finding the average hits for each website from a terabytes-sized table could take days, but with a columnar database we benefit in two ways:

1. Only the columns selected in the query will be read, instead of entire rows. These aggregations mostly go for scans, and scanning TBs of data takes very long; with columnar storage, the I/O is drastically reduced.

2. Using columnar databases such as HBase, we can leverage distributed processing to fetch results faster. As we know, these NoSQL technologies scale out really well, and we can leverage the power of distributed queries across multiple machines. Data which takes days to process can be processed within minutes or seconds.
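The I/O saving in point 1 can be sketched in a few lines. This is a toy model, not how any real columnar engine is implemented: we count every stored value touched as a stand-in for page I/O, and the table contents are made up for illustration.

```python
# Toy comparison of row-store vs column-store I/O for a single-column SUM.
# "Values touched" stands in for page reads; numbers are illustrative only.

rows = [
    {"id": i, "name": f"user{i}", "country": "IN", "sales": i * 10}
    for i in range(1000)
]

def sum_sales_row_store(table):
    # Row store: the whole row comes off the page even for one column.
    values_touched = 0
    total = 0
    for row in table:
        values_touched += len(row)
        total += row["sales"]
    return total, values_touched

# Column store: each column is stored contiguously on its own.
columns = {key: [row[key] for row in rows] for key in rows[0]}

def sum_sales_column_store(cols):
    sales = cols["sales"]
    return sum(sales), len(sales)  # only the one column is read

row_total, row_io = sum_sales_row_store(rows)
col_total, col_io = sum_sales_column_store(columns)
print(row_total == col_total, row_io, col_io)  # same answer, 4x less I/O here
```

With 4 columns per row, the row store touches 4x as many values to compute the same sum; on a real billion-row table the gap, plus compression, is what makes columnar engines fast for aggregations.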

The major known players for columnar databases are: HBase, Cassandra, Amazon’s DynamoDB and Google’s Bigtable.

References:
https://www.youtube.com/watch?v=C3ilG2-tIn0
https://www.youtube.com/watch?v=mRvkikVuojU
HTH!

Embrace NoSQL as a relational Guy – Key Value Pair

There are two major types of key-value pair DBs: 1. persisted and 2. non-persisted (cache based). This is a very popular type of NoSQL database, e.g.:

Persisted –> Azure tables, Amazon Dynamo, CouchBase etc.

Non-persisted –> Redis Cache, Memcached etc. (main purpose is caching for websites)

The data in these databases is stored in the form of a key and a value:

The data is accessed by key, and the value can be JSON, XML, an image, or anything that fits in blob storage, i.e. the value is stored as a blob. Like other NoSQL DBs, they are not schema bound. For an e-commerce website, if we want to store information about a customer’s shopping, we can use the customer id as the key and all the shopping information as the value. Since all the required data is stored in a single unit as the value, it can be scaled really efficiently.
Another use case for a key-value store is storing session information, e.g. in a game where millions of users are active online, their profile information can be stored as key-value pairs. These databases handle massive scale easily and also provide redundancy to avoid data loss. Moreover, applications which just store information in the form of images can leverage a key-value store easily.
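The shopping-cart example above can be sketched as follows. This is a minimal in-memory stand-in, not a real client: production systems like Redis, Azure Tables or DynamoDB add persistence, expiry and replication, but the access pattern – one key, one opaque value – is the same. The customer id and cart contents are made up.

```python
import json

class KeyValueStore:
    """Toy key-value store: keys map to opaque JSON blobs."""

    def __init__(self):
        self._data = {}

    def put(self, key, value):
        # The value is serialized and stored as an opaque blob.
        self._data[key] = json.dumps(value)

    def get(self, key):
        raw = self._data.get(key)
        return json.loads(raw) if raw is not None else None

store = KeyValueStore()

# Key = customer id; value = the customer's entire shopping session.
store.put("customer:42", {
    "cart": [{"sku": "B00X", "qty": 2}],
    "last_seen": "2016-06-01T12:30:00Z",
})

session = store.get("customer:42")
print(session["cart"][0]["qty"])  # 2
```

Because each session lives under a single key, the data can be partitioned across machines by hashing the key, which is exactly what makes this model scale so well.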

As an RDBMS guy, it’s a little difficult to relate to these databases, but just try to understand the context for now. We will discover more about them as we go along. In summary, here is another factor which influences our decision to choose a NoSQL DB (“Tables” refers to Azure Tables, a key-value pair DB):


References – http://davidchappellopinari.blogspot.in/
https://infocus.emc.com/april_reeve/big-data-architectures-nosql-use-cases-for-key-value-databases/

HTH!

Embrace NoSQL technologies as a relational Guy – Graph DB

It’s a fact that NoSQL technologies are growing at a rapid pace. I even heard someone say NoSQL is old now and NewSQL is the new trend – NewSQL gives the performance of NoSQL while following the ACID principles of RDBMS systems. Anyway, let’s focus on Graph DBs for now.

Graph databases specialize in dealing with relationship data – finding the patterns and relationships between events, employees of an organization, or operations. They can help make an application more interactive by suggesting options based on previous browsing or shopping patterns.

Have you noticed:

1. Facebook offers suggested friends.

2. LinkedIn offers suggested connections or connections from the same organization – that’s the use of graph databases.

3. Flipkart/Amazon offer “people also viewed” (real-time recommendation) options to help you purchase more products.

4. Master data management, where based on a support case a knowledge base article can be suggested for faster resolution.

5. Dependency analysis for shutting down an IT operation, i.e. finding the users who may be impacted if a router is shut down for maintenance, so advance notification can be sent to those employees.

Graph DBs are used in all of the above scenarios. Just check the picture below to see what relationships look like:


Neo4j is one of the best-known graph DB products today. The query language used for Neo4j is Cypher. It’s used widely by major tech, healthcare and manufacturing companies.

Please check this video for more details about relationships and properties. It’s a series of videos you can go through to understand more about this subject.

HTH!

Embrace NoSQL technologies as a relational Guy – Document Store

This is one of the most famous NoSQL technologies today. In a document store we don’t store Word/PowerPoint/Excel documents but mainly JSON documents.

I am sure you have heard about XML (Extensible Markup Language) documents; XML is simply used as intermediate data when various devices/applications want to communicate. For example, if a .NET-based application wants to communicate with a Java-based application or even a C-based device, they can exchange XML documents to share data. JSON (JavaScript Object Notation) is based on a similar concept, but it’s more optimized and easier to use, and hence largely accepted as a standard for communication.

 


 

The reason JSON has become so popular with websites like Flipkart and Amazon, with gaming companies, and for cross-device communication is its ease of use and high performance when working with applications at large scale.

Let’s take the example of an application which stores data from a sensing device. The device reports thousands of times per second, and the data is transmitted as JSON. If the data is stored in an RDBMS, with every hit the JSON has to be converted to plain data and inserted into a table. Done thousands of times per second, that is definitely an overhead, and the system will have scalability issues. How about saving the data in a document store as JSON itself, and then just querying it to fetch results when required?
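The “store the JSON as-is, query later” idea can be sketched as below. The in-memory collection and the device names are invented stand-ins for a real document database such as MongoDB or DocumentDB; the point is that the payload is never shredded into table columns on the write path.

```python
import json

readings = []  # toy stand-in for a document collection

def insert(doc_json):
    # The JSON payload is stored as-is: no mapping to table columns.
    readings.append(json.loads(doc_json))

# Documents with different shapes coexist in the same collection.
insert('{"device": "well-7", "pressure": 310, "unit": "psi"}')
insert('{"device": "well-9", "pressure": 287, "unit": "psi", "alert": true}')

# Query only when needed, e.g. devices above a pressure threshold.
over_300 = [d["device"] for d in readings if d["pressure"] > 300]
print(over_300)  # ['well-7']
```

Note the second document carries an extra `alert` field with no schema change, which is exactly the schema-free flexibility the next paragraph describes.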

Another example: for shopping websites, blogging websites, or an online book library, where we don’t want to be schema bound and there is no need to maintain relationships between data, document store DBs play a vital role. The application’s release cycle becomes shorter due to the schema-free architecture and the reduced need to perform database impact analysis.

Just for information, you may have heard about polyglot-persistence-based applications, e.g. shopping websites, online libraries, or video libraries. These kinds of services use multiple database systems: for product displays they use JSON (document store), for transactions they may use an RDBMS, for relationship data they may use a graph DB, and so on. We have lots of flexibility these days to leverage various DB systems to cater to different needs.

DocumentDB, MongoDB, CouchDB and RavenDB are key players in the document store market. DocumentDB and MongoLab (MongoDB on Azure) are two managed services that can be hosted on the Azure PaaS platform, while MongoDB, CouchDB and RavenDB can be installed on bare-metal machines.


For a free demo of DocumentDB, check this URL – http://www.documentdb.com/sql/demo#


We will discuss DocumentDB in detail in future posts. First, I will try to finish the introduction to all the NoSQL database types and their uses.

HTH!

Embrace NoSQL technologies as a relational guy! Intro.

I have been writing in the form of series quite a bit. Recently, I wrote a series of posts on SQL Azure DB which really helped readers understand the subject. I had meant to write a similar series for SQL on Azure VMs but somehow couldn’t finish it; hopefully I will in the near future.

For now, let’s talk about what NoSQL is all about. There has been lots of discussion and publicity around this subject over the past few years, and it’s gaining a lot of popularity because of its efficiency. If you are in a world where SQL Server/Oracle/DB2 etc. are the only resorts for data storage, then you really need to upgrade. It doesn’t mean RDBMS systems like SQL Server/Oracle/DB2 are going out of trend; it just means that for different needs we now need to pick different database systems. People no longer rely solely on RDBMS systems.

I have been reading some really amazing blog posts written by David Campbell. I’m going to diverge a little from the subject for now: using the technique David mentioned, I will explain the bigger picture. The best way to understand where the data world is going is to understand the following data categories:

1. Operational Data
2. Analytical Data
3. Streaming Data


Operational data – Operational data is the data used by applications to maintain their state, e.g. payment data or customer information, as we have in OLTP or non-transactional systems. Over time, people realized the real value of historical data, which could be used to understand trends for higher customer satisfaction or for building business strategies. That’s how the technologies for data warehousing started gaining traction.

Analytical data – This is read-only data, analyzed using data warehouse or Big Data systems to understand business trends and history. Because of the huge data volumes involved, Big Data is gaining ground, as analyzing PBs of data needs extreme hardware capacity. However, for smaller systems, traditional OLAP systems work perfectly fine.

Streaming data – In the modern world, people want analytics in real time, e.g. from fitness trackers on people’s wrists, toll payment devices in cars, or sensors on an oil well. One way is to store the entire data set and then do the analysis, but sometimes a delay in processing is not affordable, e.g. if an oil company wants to raise an alert when the pressure in a well is increasing, or if you want to know how many cars passed through a specific city in the last 30 minutes. There has to be a provision to read the live stream and make sense of it. This is the world of IoT.

Understanding the above terms is really important for everyone working on the data platform. There is a plethora of companies working towards making the data platform a really happening world. Have you heard about the 3 Vs of data – Volume, Variety, Velocity? In today’s world, it’s difficult to manage these 3 Vs in an RDBMS. When it comes to RDBMS, we talk about structured data in the form of tables and columns; anything and everything you want to store in an RDBMS has to be stored that way. What about data like flat files, JSON, or telemetry data, where there is no structure? As I shared in the beginning, for different types of data we need different database systems.
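The structured-vs-unstructured contrast above can be made concrete with a small sketch. The records here are invented for illustration: a rigid table forces every row into the same columns (padding with NULLs where a field doesn’t apply), while a schemaless collection lets each record carry exactly its own fields.

```python
# Rigid relational shape: every row must fit the same columns.
table_columns = ("id", "name", "phone")
rows = [
    (1, "alice", "555-0100"),
    (2, "bob", None),  # no phone -> NULL padding to fit the schema
]

# Schema-free documents: each record has only the fields it actually has.
documents = [
    {"id": 1, "name": "alice", "phone": "555-0100"},
    {"id": 2, "name": "bob", "tags": ["admin", "beta"]},  # different shape
]

for doc in documents:
    print(sorted(doc))  # the field sets differ per document
```

Adding a new attribute to the relational side means an ALTER TABLE and impact analysis across the application; on the document side it is just another field in the next record, which is the flexibility the NoSQL categories below trade ACID guarantees for in varying degrees.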

In a nutshell, there are five major categories of database technologies:

1. Relational databases (RDBMS)
2. Document Store
3. Column family Store
4. Key Value Store
5. Graph Databases

We will discuss each of the NoSQL categories in detail in upcoming posts.

HTH!

Disclaimer: The views expressed on this website/blog are mine alone and do not reflect the views of my company. All postings on this blog are provided “AS IS” with no warranties and confer no rights.

SQL Server 2016 Community Launch – SQL Server Delhi NCR community

It was a pleasure to host the SQL Server 2016 community launch event for a full day. We had great support from both speakers and attendees. It was an occasion of learning for everyone: attendees learnt about SQL Server 2016 features, and speakers learnt about various SQL Server solutions from the attendees. The whole intent of the event was to make attendees aware of the new and enhanced capabilities of SQL Server 2016.

 

The agenda of the event was:

  • Enhanced Security – Always Encrypted | Row Level Security | Dynamic Data Masking – Udhay Bhanu Pathania
  • Performance and Scalability – Live Query Statistics | In-memory OLTP – Anil Sharma
  • HADR – Hybrid DR on Azure VMs using SQL Server 2016 AlwaysOn and Enhancements – Harsh Chawla | Twitter, Blog
  • HyperScale Cloud – Stretch DB | Managed Backups – Gaurav Srivastava
  • Advanced Analytics – Leverage R and PolyBase Queries in SQL Server 2016 – Sourabh Agarwal

 

Let me share the learning references for these topics:

AlwaysOn SQL 2016 AG – http://searchsqlserver.techtarget.com/feature/Whats-new-in-2016s-SQL-Server-AlwaysOn-Availability-Groups
https://www.youtube.com/watch?v=vG8H7hTNfdY&index=7&list=PL8nfc9haGeb6T3HaGhQWvBz1AqS9d6Zv_

Always Encrypted – https://www.youtube.com/watch?v=EPIq70NzQ4k
http://blogs.microsoft.com/next/2015/05/27/always-encrypted-sql-server-2016-includes-new-advances-that-keeps-data-safer/#sm.0018nh41016q6epnxzk2rrogl856l

Dynamic Data Masking – https://www.youtube.com/watch?v=7ch8tbstkyM

Live Query Statistics – https://www.mssqltips.com/sqlservertip/3685/live-query-statistics-in-sql-server-2016/
Query Store – https://www.youtube.com/watch?v=HxBRjZXi3L0

In-memory OLTP – https://www.youtube.com/watch?v=l5l5eophmK4
https://msdn.microsoft.com/en-us/library/dn133186.aspx

R-Integration with SQL Server 2016 – https://www.youtube.com/watch?v=8Sly49zDZEw
Polybase in SQL Server 2016 – https://www.youtube.com/watch?v=lBxSB0UY4wA

 

HTH!