Created by Toyin Akin | Video: h264, 1280×720 | Audio: AAC 48 kHz 2ch | Duration: 1h 54m | Lectures: 24 | 511 MB | Language: English
Once you have a running Cloudera Manager installation, we walk through the placement logic and installation of the Hadoop daemons.
What you’ll learn
See Cloudera Manager at work, easily installing a distributed Hadoop cluster
Learn the concepts behind splitting the various Hadoop services across cluster nodes
Get a picture of how to operate a Hadoop cluster in production
The software needed for this course is freely available
This course is not recommended if you have no desire to work with/in distributed computing
If you already have a running Cloudera Manager installation this course follows on with the logic behind the placement of the Hadoop master/slave daemons across your cluster. We actually go ahead and discuss the placement and perform the installation of Hadoop.
If you do not have a Cloudera Manager installation and you want to follow along hands on, you can complete the course : "Real World Vagrant – Automate a Cloudera Manager Build – Toyin Akin" beforehand.
"Big Data" technology is a hot and highly valuable skill to have – and this course will teach you how to quickly deploy a Hadoop Cluster using the Cloudera stack.
Cloudera allows you to download a QuickStart Virtual machine which is great for developers, but this is of no use for the Operations team to start the planning and the building out of DEV / UAT and PROD environments within their organizations. What assumptions were made when the QuickStart VM was put together?
In addition, hosting all of Cloudera's processes as well as Hadoop's processes on one VM is not a model that any large organization can or should follow. The Hadoop services need to be split out across multiple VMs/servers. In fact, that's the whole point of Hadoop!
Distributed Data and Distributed Compute.
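To make the idea of splitting services across nodes concrete, here is an illustrative sketch (not taken from the course) of one common layout: masters on dedicated nodes, a three-node ZooKeeper quorum, and worker daemons everywhere else. The host names and role choices are assumptions for demonstration only.

```python
def place_roles(hosts):
    """Assign Hadoop roles across cluster hosts.

    Masters (NameNode, ResourceManager) get dedicated nodes,
    ZooKeeper runs as an odd-sized quorum (typically 3), and
    every remaining node runs the worker daemons
    (DataNode, NodeManager).
    """
    if len(hosts) < 4:
        raise ValueError("need at least 4 hosts to separate masters from workers")
    placement = {h: [] for h in hosts}
    placement[hosts[0]].append("NameNode")
    placement[hosts[1]].append("ResourceManager")
    for h in hosts[:3]:                      # 3-node ZooKeeper quorum
        placement[h].append("ZooKeeper")
    for h in hosts[2:]:                      # workers on the remaining nodes
        placement[h] += ["DataNode", "NodeManager"]
    return placement

layout = place_roles(["node1", "node2", "node3", "node4", "node5"])
```

Killing any single worker node in a layout like this leaves the cluster serving requests, which is exactly the failure testing a single QuickStart VM cannot give you.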
After all, if you are developing against or operating a distributed environment, it needs to be tested. Tested by forcing various failure modes within the cluster and ensuring that the cluster can still respond to user requests. Killing the QuickStart VM destroys the entire cluster!
You’ll learn the same techniques these large enterprise guys use to move to the next step in building out an enterprise grade Hadoop cluster.
If you are a developer, the operations team can build out a centralized cluster so that you are truly testing against a distributed cluster. Testing code against the QuickStart VM may work, but as any experienced distributed developer knows, verifying code against a pseudo-cluster on a single machine is different from verifying it against a truly distributed cluster.
As an example, bottlenecks in network or CPU capacity will come to light. In addition, this will assist in capacity planning of the UAT/PROD cluster, as initial metrics can be acquired.
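A capacity-planning estimate of the kind such metrics feed into can be sketched as back-of-the-envelope arithmetic. The 3x replication factor is the HDFS default; the 25% headroom is a common rule of thumb, not a figure from the course.

```python
import math

def raw_storage_needed(data_tb, replication=3, headroom=0.25):
    """Raw cluster storage (TB) needed for a given logical data size,
    accounting for HDFS replication plus operating headroom."""
    return data_tb * replication * (1 + headroom)

def nodes_needed(data_tb, disk_per_node_tb, replication=3, headroom=0.25):
    """Minimum number of worker nodes, given usable disk per node."""
    raw = raw_storage_needed(data_tb, replication, headroom)
    return math.ceil(raw / disk_per_node_tb)

# 100 TB of logical data at 3x replication with 25% headroom
# needs 375 TB raw; at 24 TB usable per node that is 16 workers.
```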
If you are in operations, this gives your team an environment in which to start learning how to jointly operate the cluster. Here the team can start to understand cluster metrics, adding/removing cluster nodes, managing the various Hadoop services (ZooKeeper, HDFS, YARN and Spark) and a lot more. We also look at managing Cloudera Hadoop parcels, as well as changing Hadoop versions once a cluster is deployed.
The operations team can start to develop procedures and change-management documentation ready for production operation of a Hadoop cluster.
Here is a curriculum reflecting the current state of my Cloudera courses.
My Hadoop courses are based on Vagrant so that you can practice and destroy your virtual environment before applying the installation onto real servers/VMs.
For those with little or no knowledge of the Hadoop ecosystem. Udemy course : Big Data Intro for IT Administrators, Devs and Consultants
I would first practice with Vagrant so that you can carve out a virtual environment on your local desktop. You don’t want to corrupt your physical servers if you do not understand the steps or make a mistake. Udemy course : Real World Vagrant For Distributed Computing
I would then, on the virtual servers, deploy Cloudera Manager plus agents. Agents are the processes that sit on all the slave nodes, ready to deploy your Hadoop services. Udemy course : Real World Vagrant – Automate a Cloudera Manager Build
Then deploy the Hadoop services across your cluster (via the installed Cloudera Manager in the previous step or your own Cloudera Manager installation). We look at the logic regarding the placement of master and slave services. Udemy course : Real World Hadoop – Deploying Hadoop with Cloudera Manager
If you want to play around with HDFS commands (hands-on distributed file manipulation). Udemy course : Real World Hadoop – Hands on Enterprise Distributed Storage.
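For a flavour of the HDFS shell work that course covers, a few typical commands look like this. These assume a running cluster and an existing HDFS home directory; the `/user/train/demo` path and `sample.txt` file are illustrative only.

```shell
# Create a directory in HDFS, copy a local file in, then list and read it back
hdfs dfs -mkdir -p /user/train/demo
hdfs dfs -put ./sample.txt /user/train/demo/
hdfs dfs -ls /user/train/demo
hdfs dfs -cat /user/train/demo/sample.txt

# Check overall cluster storage usage and DataNode health
hdfs dfsadmin -report
```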
You can also automate the deployment of the Hadoop services via Python (using the Cloudera Manager Python API). But this is an advanced step and thus I would make sure that you understand how to manually deploy the Hadoop services first. Udemy course : Real World Hadoop – Automating Hadoop install with Python!
There is also the upgrade step. Once you have a running cluster, how do you upgrade to a newer Hadoop version (both Cloudera Manager and the Hadoop services)? Udemy course : Real World Hadoop – Upgrade Cloudera and Hadoop hands on
Who is this course for?
Software engineers who want to expand their skills into the world of distributed computing
System engineers who want to expand their skill sets beyond the single Hadoop server
Developers who want to write/test their Hadoop code against a centralized, distributed Hadoop environment