# decommissioning a node cloudera
manasranjanmurlidharrana · 5 years ago
How Mr. Manasranjan Murlidhar Rana Helped the Union Bank of Switzerland as a Certified Hadoop Administrator
Mr. Manasranjan Murlidhar Rana is a certified Hadoop Administrator and an IT professional with 10 years of experience. Over his career, he has handled Hadoop administration for several organizations, including the well-known Union Bank of Switzerland (UBS).
Mr. Rana's Knowledge of Hadoop Architecture and Its Components
Mr. Manasranjan Murlidhar Rana has a deep understanding of Hadoop architecture and its components: MapReduce, YARN, HDFS, HBase, Pig, Flume, Hive, and ZooKeeper. He also has experience building and maintaining multiple Hadoop clusters, such as production and development clusters of diverse sizes and configurations.
He has also established rack topologies for managing large Hadoop clusters. This post looks in detail at Mr. Rana's contribution as a Hadoop Administrator to the operations of the Union Bank of Switzerland.
Role of Mr. Rana at the Union Bank of Switzerland
From 2016 to the present, Mr. Manasranjan Murlidhar Rana has worked as a Hadoop Administrator on a ten-member team for his client, the Union Bank of Switzerland. Over those roughly four years, he has done a great deal to improve data management for UBS.
1. Setting Up the Hadoop Cluster
Mr. Rana and his team were involved in setting up the Hadoop cluster at UBS from beginning to end, installing, configuring, and monitoring the complete cluster. Here, a Hadoop cluster means a computation cluster designed to store and analyze unstructured data in a distributed computational environment.
2. Handling Four Different Clusters with the Ambari Server
Mr. Manasranjan Murlidhar Rana is responsible for four different clusters in the software development process: DEV, QA, UAT, and Prod. He and his team used the Ambari server extensively to maintain the Hadoop clusters and their components; the Ambari server collects data from a cluster and controls each host.
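For a sense of how that host-level data can be pulled programmatically, Ambari exposes a REST API; the minimal sketch below assumes placeholder host, port, and credentials, none of which come from the original post.

```bash
# Query the Ambari REST API for the clusters it manages
# (host, port, and admin:admin credentials are placeholders).
curl -u admin:admin -H 'X-Requested-By: ambari' \
  'http://ambari.example.com:8080/api/v1/clusters'

# List every managed host along with its health state.
curl -u admin:admin -H 'X-Requested-By: ambari' \
  'http://ambari.example.com:8080/api/v1/hosts?fields=Hosts/host_state'
```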
3. Cluster Maintenance and Review of Hadoop Log Files
Mr. Manasranjan Murlidhar Rana and his team handled maintenance of the entire Hadoop cluster, including commissioning and decommissioning data nodes. He also monitored the development-related clusters, troubleshot problems, and managed the available data backups. Reviewing Hadoop log files was another important part of his administration work, used to communicate, troubleshoot, and escalate issues so they moved in the right direction.
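For context, graceful decommissioning in plain HDFS follows a standard flow; the sketch below assumes dfs.hosts.exclude is already configured in hdfs-site.xml, and the host name and file path are illustrative.

```bash
# Add the node to the HDFS exclude file (path and host are illustrative;
# dfs.hosts.exclude in hdfs-site.xml must point at this file).
echo "datanode05.example.com" >> /etc/hadoop/conf/dfs.exclude

# Tell the NameNode to re-read its include/exclude lists; the node then
# enters "Decommission in progress" while its blocks are re-replicated.
hdfs dfsadmin -refreshNodes

# Watch progress until the node reports "Decommissioned".
hdfs dfsadmin -report

# If the host also runs a NodeManager, refresh YARN's node lists too
# (requires yarn.resourcemanager.nodes.exclude-path to be configured).
yarn rmadmin -refreshNodes
```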
4. Successful Installation of Hadoop Components and the Ecosystem
The Hadoop ecosystem runs on the Hadoop daemons, background processes that keep the platform working. In classic Hadoop 1.x there are five: NameNode, Secondary NameNode, DataNode, JobTracker, and TaskTracker; under YARN, the JobTracker and TaskTracker are replaced by the ResourceManager and NodeManagers.
Beyond the core, Hadoop works with other ecosystem components such as Flume and Sqoop, each with a specific function, and installing, configuring, and maintaining each daemon and component is far from trivial.
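A quick, standard way to verify which daemons are running on a host is the JDK's jps tool; the sample output below is typical, not taken from UBS systems.

```bash
# List the Hadoop daemon JVMs running on this host.
jps
# Typical output on a Hadoop 2.x master node (illustrative):
#   12001 NameNode
#   12288 SecondaryNameNode
#   12542 ResourceManager
#   13007 Jps
```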
Drawing on his hands-on experience, Mr. Manasranjan Rana guided his team in installing the Hadoop ecosystem and components such as HBase, Flume, and Sqoop. In particular, he used Sqoop to import and export data in HDFS, and Flume to load log data directly into HDFS.
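As a hedged illustration of those two tools, the commands below use invented connection strings, table names, and paths:

```bash
# Import a relational table into HDFS with Sqoop
# (JDBC URL, credentials, table, and target directory are invented).
sqoop import \
  --connect jdbc:mysql://db.example.com/sales \
  --username etl_user -P \
  --table transactions \
  --target-dir /data/raw/transactions \
  --num-mappers 4

# Start a Flume agent that ships log data into HDFS; the agent name "a1"
# must match the agent defined in the named configuration file.
flume-ng agent --name a1 \
  --conf /etc/flume-ng/conf \
  --conf-file /etc/flume-ng/conf/log-to-hdfs.conf
```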
5. Monitoring the Hadoop Deployment and Related Procedures
Drawing on his knowledge of the Hadoop stack, Mr. Manasranjan Murlidhar Rana monitored systems and services, worked on the architectural design and proper implementation of the Hadoop deployment, and made sure supporting procedures such as disaster recovery, data backup, and configuration management were in place.
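One common pattern for the backup side of such work combines HDFS snapshots with distcp to a second cluster; the cluster addresses and paths in this sketch are assumptions for illustration, not details from the post.

```bash
# Enable and take a snapshot of a directory (paths are illustrative).
hdfs dfsadmin -allowSnapshot /data/warehouse
hdfs dfs -createSnapshot /data/warehouse nightly-2020-01-15

# Copy the read-only snapshot to a disaster-recovery cluster with distcp.
hadoop distcp \
  hdfs://prod-nn.example.com:8020/data/warehouse/.snapshot/nightly-2020-01-15 \
  hdfs://dr-nn.example.com:8020/backups/warehouse
```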
6. Using Cloudera Manager and AppDynamics
With his hands-on AppDynamics experience, Mr. Manasranjan Murlidhar Rana monitored the multiple clusters and environments running under Hadoop, and he used Cloudera Manager to check job performance, workload, and capacity planning. He also worked on a variety of system-engineering tasks to plan and deploy new Hadoop environments, and successfully expanded the existing Hadoop cluster.
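Cloudera Manager likewise exposes a REST API that this kind of monitoring can build on; the host, credentials, API version segment, and cluster name below are placeholders.

```bash
# List the clusters managed by Cloudera Manager (host, credentials, and
# the API version segment "v19" depend on the deployment).
curl -u admin:admin 'http://cm.example.com:7180/api/v19/clusters'

# Summarize the health of all services in a named cluster.
curl -u admin:admin \
  'http://cm.example.com:7180/api/v19/clusters/cluster1/services?view=summary'
```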
7. Setting Up MySQL Replication and Maintaining MySQL Databases
Beyond his expertise in big data, especially the Hadoop ecosystem and its components, Mr. Manasranjan Murlidhar Rana has a good command of several database systems, including Oracle, MS Access, and MySQL.
He maintained MySQL databases, set up users, and handled backup and recovery of the available databases. He was also responsible for establishing master-slave replication for MySQL and helped business applications keep their data consistent across multiple MySQL servers.
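Classic MySQL master-slave replication of the kind described is set up roughly as follows; the server IDs, hosts, credentials, and binlog coordinates are illustrative, and the syntax shown is the pre-MySQL-8.0 form.

```bash
# On the master (my.cnf must set server-id=1 and log_bin): create a
# replication account and note the current binlog coordinates.
mysql -u root -p -e "
  CREATE USER 'repl'@'%' IDENTIFIED BY 'repl_password';
  GRANT REPLICATION SLAVE ON *.* TO 'repl'@'%';
  SHOW MASTER STATUS;"

# On the slave (my.cnf must set a different server-id, e.g. 2): point it
# at the master using those coordinates, then start replication.
mysql -u root -p -e "
  CHANGE MASTER TO
    MASTER_HOST='master.example.com',
    MASTER_USER='repl',
    MASTER_PASSWORD='repl_password',
    MASTER_LOG_FILE='mysql-bin.000001',
    MASTER_LOG_POS=154;
  START SLAVE;"
```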
With his solid knowledge of the Ambari server, Hadoop components and daemons, and the wider Hadoop ecosystem, Mr. Manasranjan Murlidhar Rana has contributed to the smart management of data for the Union Bank of Switzerland.
Find Mr. Manasranjan Murlidhar Rana on social media:
https://giphy.com/channel/manasranjanmurlidharrana
https://myspace.com/manasranjanmurlidharrana
https://mix.com/manasranjanmurlidhar
https://www.meetup.com/members/315532262/
https://www.goodreads.com/user/show/121165799-manasranjan-murlidhar
https://disqus.com/by/manasranjanmurlidharrana/
hadooptpoint · 8 years ago
Commissioning and Decommissioning Nodes in a Hadoop Cluster
One of the big advantages of Hadoop is the ability to commission and decommission nodes in a cluster. If a node crashes, decommissioning lets you remove it cleanly; if you want to grow the cluster, commissioning lets you add nodes. Commissioning (adding) and decommissioning (removing) DataNodes is one of the most common tasks of a Hadoop administrator.
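The commissioning side is the mirror image of the decommissioning flow shown earlier; this is a minimal sketch assuming dfs.hosts points at an include file, using Hadoop 2.x daemon scripts and an invented host name.

```bash
# Add the new host to the HDFS include file (illustrative host/paths;
# dfs.hosts in hdfs-site.xml must point at this file).
echo "datanode09.example.com" >> /etc/hadoop/conf/dfs.include
hdfs dfsadmin -refreshNodes
yarn rmadmin -refreshNodes

# Start the DataNode on the new host (Hadoop 2.x daemon script), then
# rebalance so the node takes on its share of existing blocks.
ssh datanode09.example.com 'hadoop-daemon.sh start datanode'
hdfs balancer -threshold 10
```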
udemy-gift-coupon-blog · 6 years ago
CCA 131 - Cloudera Certified Hadoop and Spark Administrator

CCA 131 is a certification exam conducted by the leading big data vendor, Cloudera. This online proctored exam is scenario based, which means it is very hands-on: you are given a multi-node cluster and must complete the assigned tasks. Preparing for the certification requires hands-on exposure to building and managing clusters, which is difficult to practice on a laptop with limited infrastructure. We understand that problem and built the course on Google Cloud Platform, where you can get up to $300 in credit while the offer lasts and use it to gain hands-on experience building and managing big data clusters with CDH.

Required Skills

Install - Demonstrate an understanding of the installation process for Cloudera Manager, CDH, and the ecosystem projects:
- Set up a local CDH repository
- Perform OS-level configuration for Hadoop installation
- Install Cloudera Manager server and agents
- Install CDH using Cloudera Manager
- Add a new node to an existing cluster
- Add a service using Cloudera Manager

Configure - Perform basic and advanced configuration needed to effectively administer a Hadoop cluster:
- Configure a service using Cloudera Manager
- Create an HDFS user's home directory
- Configure NameNode HA
- Configure ResourceManager HA
- Configure proxy for Hiveserver2/Impala

Manage - Maintain and modify the cluster to support day-to-day operations in the enterprise:
- Rebalance the cluster
- Set up alerting for excessive disk fill
- Define and install a rack topology script (a sketch appears at the end of this post)
- Install a new type of I/O compression library in the cluster
- Revise YARN resource assignment based on user feedback
- Commission/decommission a node

Secure - Enable relevant services and configure the cluster to meet goals defined by security policy; demonstrate knowledge of basic security practices:
- Configure HDFS ACLs
- Install and configure Sentry
- Configure Hue user authorization and authentication
- Enable/configure log and query redaction
- Create encrypted zones in HDFS

Test - Benchmark the cluster operational metrics, test system configuration for operation and efficiency:
- Execute file system commands via HTTPFS
- Efficiently copy data within a cluster/between clusters
- Create/restore a snapshot of an HDFS directory
- Get/set ACLs for a file or directory structure
- Benchmark the cluster (I/O, CPU, network)

Troubleshoot - Demonstrate the ability to find the root cause of a problem, optimize inefficient execution, and resolve resource contention scenarios:
- Resolve errors/warnings in Cloudera Manager
- Resolve performance problems/errors in cluster operation
- Determine the reason for application failure
- Configure the Fair Scheduler to resolve application delays

Our Approach

You will start by creating a Cloudera QuickStart VM (if you have a laptop with 16 GB RAM and a quad-core CPU), which will get you comfortable with Cloudera Manager. You can then sign up for GCP and claim up to $300 in credit while the offer lasts; credits are valid for up to a year. After a brief overview of GCP, you will provision 7 to 8 virtual machines from templates, attaching external disks to configure for HDFS later. Once the servers are provisioned, you will set up Ansible for server automation, then create a local repository for Cloudera Manager and the Cloudera Distribution of Hadoop using packages.
You will then set up Cloudera Manager with a custom database, followed by the Cloudera Distribution of Hadoop using the wizard that ships with Cloudera Manager. As part of setting up CDH you will set up HDFS, learn HDFS commands, set up YARN, configure HDFS and YARN high availability, learn about schedulers, set up Spark, transition to parcels, set up Hive and Impala, and set up HBase and Kafka. Once all the services are configured, we will revise for the exam by mapping what you have built against the required skills.

Who this course is for:
- System administrators who want to understand the big data ecosystem and set up clusters
- Experienced big data administrators who want to prepare for the certification exam
- Entry-level professionals who want to learn the basics and set up big data clusters
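One of the skills listed above, defining and installing a rack topology script, can be as small as the sketch below; the subnet-to-rack mapping is invented for illustration, and the script would be referenced via the net.topology.script.file.name property in core-site.xml.

```bash
#!/bin/bash
# Minimal rack topology script: Hadoop invokes it with one or more IPs
# or hostnames and expects one rack path per argument on stdout.
# The subnet-to-rack mapping below is purely illustrative.
for node in "$@"; do
  case "$node" in
    10.1.1.*) echo "/dc1/rack1" ;;
    10.1.2.*) echo "/dc1/rack2" ;;
    *)        echo "/default-rack" ;;
  esac
done
```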