Updating Ubuntu 11.04 to 11.10
Suggested Read: How to Upgrade the Kernel in CentOS 7

Ready to update your kernel on Ubuntu, Debian, or one of their derivatives such as Linux Mint? To find the current version of the kernel installed on our system, we can run `uname -r`. To upgrade the kernel in Ubuntu, go to the Ubuntu mainline kernel builds page and choose the desired version (kernel 4.14 is the latest at the time of writing) from the list by clicking on it.
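For example, checking the running kernel looks like this; the install commands in the comments are only a sketch, since the actual `.deb` file names depend on the kernel version you download:

```shell
# Print the kernel name and release, e.g. "Linux 4.4.0-21-generic"
uname -sr

# After downloading the .deb packages for the chosen mainline kernel
# (the file names below are placeholders, not real download names):
#   sudo dpkg -i linux-headers-4.14*.deb linux-image-4.14*.deb
#   sudo reboot
```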
Periodically, new devices and technologies come out, and it is important to keep our Linux system kernel up to date if we want to get the most out of them. Moreover, updating the system kernel lets us take advantage of new kernel functions and also helps protect us from vulnerabilities that have been found in earlier versions.

From two single-node clusters to a multi-node cluster – the main goal of this tutorial is to get a more sophisticated Hadoop installation up and running, namely building a multi-node cluster using two Ubuntu boxes. In my humble opinion, the best way to do this for starters is to install, configure, and test a “local” Hadoop setup on each of the two Ubuntu boxes, and in a second step to “merge” these two single-node clusters into one multi-node cluster, in which one Ubuntu box becomes the designated master (but also acts as a slave with regard to data storage and processing) and the other box becomes only a slave. Just keep in mind when setting up the single-node clusters that we will later connect and “merge” the two machines, so pick reasonable network settings etc. Once you have two single-node clusters up and running, we will modify the Hadoop configuration to make one Ubuntu box the “master” (which will also act as a slave) and the other Ubuntu box a “slave”. This tutorial has been tested with the following software versions:
The tutorial approach outlined above means that you should now read my previous tutorial on how to set up a Hadoop single-node cluster and follow the steps described there to build a single-node Hadoop cluster on each of the two Ubuntu boxes. It’s much easier to track down any problems you might encounter thanks to the reduced complexity of doing a single-node cluster setup first on each machine.
Hadoop’s HDFS is a highly fault-tolerant distributed file system and, like Hadoop in general, designed to be deployed on low-cost hardware.
It provides high-throughput access to application data. In a previous tutorial, I described how to set up a Hadoop single-node cluster on an Ubuntu box.
If the hostnames of your machines are different (e.g. “node01”), then you must adapt the settings in this tutorial as appropriate. This should hardly come as a surprise, but for the sake of completeness I have to point out that both machines must be able to reach each other over the network.
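One common way to make the two boxes reachable by name is to add entries to `/etc/hosts` on both machines; the IP addresses below are examples and must be replaced with the actual addresses of your boxes:

```
# /etc/hosts on both machines (example addresses)
192.168.0.1    master
192.168.0.2    slave
```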
Basically, the “master” daemons are responsible for coordination and management of the “slave” daemons while the latter will do the actual data storage and data processing work.
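In classic Hadoop 1.x setups, which machines run the slave daemons is controlled by two plain-text configuration files on the master box. A sketch, assuming the hostnames `master` and `slave` used in this tutorial:

```
# conf/masters on the master box — host that runs the SecondaryNameNode
master

# conf/slaves on the master box — hosts that run the DataNode/TaskTracker
# daemons; listing "master" here makes it act as a slave as well
master
slave
```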