#OpenJDK path Ubuntu
Text
I just cleaned up my setup by removing the Snap version of Java and installing OpenJDK 21 using APT. Much smoother, more compatible, and no path issues. If you're a dev or just tired of Java acting weird—this guide is for you. 💻✨ 👉 Read the full post and fix your setup today.
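For reference, the switch itself boils down to a handful of commands. A minimal sketch, with the caveat that the Snap package name varies by machine (check with "snap list" first) and that "openjdk-21-jdk" assumes your Ubuntu release carries OpenJDK 21 in its archive:

# remove the Snap-packaged Java (confirm the real package name with: snap list)
sudo snap remove openjdk
# install OpenJDK 21 from the APT repositories
sudo apt update
sudo apt install openjdk-21-jdk
# confirm the APT-managed runtime is the one on the PATH
java -version
which java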
#Eclipse Java fix#gist#GitHub#IDE#install Java Ubuntu#Java developer guide#Java IDE compatibility#Java installation Ubuntu#Java runtime environment#linux#Linux development tools#open#open source#OpenJDK#OpenJDK 21 APT#OpenJDK path Ubuntu#remove Snap Java#Ubuntu#Ubuntu Java setup#Ubuntu JDK fix
0 notes
Text
Mastering Hadoop Installation on Ubuntu Server: A Step-by-Step Guide
Are you ready to dive into big data processing with Hadoop on Ubuntu Server? Look no further! In this comprehensive guide, we’ll walk you through the installation process step-by-step, ensuring you have everything you need to get started. Whether you’re a Linux aficionado or a Windows RDP enthusiast looking to buy RDP and install Ubuntu on RDP, this guide has you covered.
1. Understanding Ubuntu Server: Before we delve into the installation process, let’s take a moment to understand Ubuntu Server. Ubuntu is one of the most popular Linux distributions, known for its stability, security, and ease of use. Ubuntu Server is specifically designed for server environments, making it an ideal choice for hosting Hadoop clusters.

2. Setting Up Your Environment: If you’re using Ubuntu Server on a physical machine or a virtual environment like VMware or VirtualBox, ensure that it meets the minimum system requirements for running Hadoop. This includes having sufficient RAM, disk space, and processing power. Alternatively, if you’re considering using Windows RDP, you can buy RDP and install Ubuntu on it, providing a flexible and scalable environment for Hadoop deployment.
3. Installing Ubuntu Server: Begin by downloading the latest version of Ubuntu Server from the official website. Once downloaded, follow the on-screen instructions to create a bootable USB drive or DVD. Boot your system from the installation media and follow the prompts to install Ubuntu Server. Make sure to allocate disk space for the operating system and any additional storage required for Hadoop data.
4. Configuring Network Settings: After installing Ubuntu Server, configure the network settings to ensure connectivity within your environment. This includes assigning a static IP address, configuring DNS servers, and setting up network interfaces. Proper network configuration is essential for communication between Hadoop nodes in a distributed environment.
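As a hedged illustration of this step: recent Ubuntu Server releases configure networking through netplan, so a static address can be set roughly as below. The interface name, addresses and file name are placeholders you must adapt to your own network:

# write a netplan profile for a static address, then apply it
sudo tee /etc/netplan/01-static.yaml >/dev/null <<'EOF'
network:
  version: 2
  ethernets:
    eth0:
      dhcp4: no
      addresses: [192.168.1.10/24]
      gateway4: 192.168.1.1
      nameservers:
        addresses: [8.8.8.8, 8.8.4.4]
EOF
sudo netplan apply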
5. Updating System Packages: Before installing Hadoop, it’s essential to update the system packages to ensure you have the latest security patches and software updates. Use the following commands to update the package repository and upgrade installed packages:

sudo apt update
sudo apt upgrade

6. Installing Java Development Kit (JDK): Hadoop is built on Java, so you must install the Java Development Kit (JDK) to run Hadoop applications. Ubuntu repositories provide OpenJDK, an open-source implementation of the Java Platform. Install OpenJDK using the following command:

sudo apt install openjdk-11-jdk

7. Downloading and Installing Hadoop: Next, download the latest stable release of Hadoop from the official Apache Hadoop website. Once downloaded, extract the Hadoop archive to a directory of your choice. For example, you can use the following commands to download and extract Hadoop:

wget https://www.apache.org/dist/hadoop/common/hadoop-3.3.1/hadoop-3.3.1.tar.gz
tar -xvf hadoop-3.3.1.tar.gz

8. Configuring Hadoop Environment: After installing Hadoop, you’ll need to configure its environment variables to specify the Java runtime environment and other settings. Edit the hadoop-env.sh file located in the etc/hadoop directory and set the JAVA_HOME variable to the path of your JDK installation:

export JAVA_HOME=/usr/lib/jvm/java-11-openjdk-amd64
9. Setting Up Hadoop Cluster: Once Hadoop is installed and configured on your Ubuntu Server, you can proceed to set up a Hadoop cluster. This involves configuring Hadoop’s core-site.xml, hdfs-site.xml, and mapred-site.xml configuration files and starting the Hadoop daemons on each node in the cluster.
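As a small sketch of what that configuration step looks like (the hostname "master-node" is an assumption; substitute your NameNode's hostname), the minimal core-site.xml that points every node at the master's HDFS endpoint can be written like this:

# minimal core-site.xml for the cluster; adjust the hostname and port to your setup
cat > hadoop-3.3.1/etc/hadoop/core-site.xml <<'EOF'
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://master-node:9000</value>
  </property>
</configuration>
EOF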
10. Testing Hadoop Installation: To ensure that Hadoop is installed and configured correctly, you can run some basic tests. Start by formatting the Hadoop Distributed File System (HDFS) using the following command:

hdfs namenode -format

Then, start the Hadoop daemons and verify their status using the following commands:

start-dfs.sh
start-yarn.sh

Finally, run a sample MapReduce job to confirm that Hadoop is functioning correctly:

hadoop jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.3.1.jar pi 16 10000
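If you want a second smoke test beyond pi, a classic follow-up is the bundled wordcount job. The HDFS paths below are just examples:

# stage some small input files in HDFS and count the words in them
hdfs dfs -mkdir -p /user/$USER/input
hdfs dfs -put $HADOOP_HOME/etc/hadoop/*.xml /user/$USER/input
hadoop jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.3.1.jar wordcount /user/$USER/input /user/$USER/output
hdfs dfs -cat /user/$USER/output/part-r-00000 | head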
Congratulations! You’ve successfully installed Hadoop on Ubuntu Server and are ready to tackle big data processing tasks with ease.
In conclusion, setting up Hadoop on the Ubuntu Server is a straightforward process that anyone can accomplish with the right guidance. Whether you’re a Linux enthusiast or prefer the flexibility of Windows RDP, you can buy RDP and install Ubuntu on it to create a robust Hadoop environment. With Hadoop up and running, you’re well-equipped to handle the challenges of big data processing and analysis.
0 notes
Text
Phpstorm ubuntu free

For those of you who don’t know, PhpStorm is a cool, lightning-smart IDE for PHP developers. It’s a commercial and cross-platform product from Jetbrains and can run on Linux, Mac OS and Windows. This tutorial will explain how to install PhpStorm on Ubuntu 16.04/17.04.

Install Oracle Java 8 on Ubuntu 16.04/17.04

Because PhpStorm is a Java program, we first need to install Oracle Java on Ubuntu; the latest stable version is Oracle Java 8. Remove OpenJDK:

sudo apt-get remove openjdk*

Add the PPA and install Oracle Java 8 with the following 3 commands:

sudo add-apt-repository ppa:webupd8team/java
sudo apt-get update
sudo apt-get install java-common oracle-java8-installer

During the installation process you will need to accept the Oracle License agreement. Once installed, we need to set Java environment variables such as JAVA_HOME on Ubuntu 16.04/17.04:

sudo apt-get install oracle-java8-set-default

These commands will set the correct Java environment variables.

Install PhpStorm on Ubuntu 16.04/17.04

First go to the official website and download the tar archive. The latest version is 2016.1.2 at the time of this writing; you may need to change the version number if you are reading this at a later time. If you like the command line, you can use wget to download. Once downloaded, open a terminal window, change your working directory to the download directory, then extract the .tar.gz file with this command:

tar xvf PhpStorm-2016.1.2.tar.gz

A new folder called PhpStorm-145.1616.3 will be created within the current working directory. It’s better to move this PhpStorm-145.1616.3 directory to /opt:

sudo mv PhpStorm-145.1616.3/ /opt/phpstorm/

Create a symlink:

sudo ln -s /opt/phpstorm/bin/phpstorm.sh /usr/local/bin/phpstorm

Now type phpstorm in the terminal to launch the application. If you are new to PhpStorm, select the second option and hit the OK button. If you are not ready to purchase a license to use PhpStorm, you can select the 30 day free trial option; you will get 30 days of free access to the PhpStorm IDE. In the next window, select a theme, color, font and hit the OK button. Next, you need to enter your password to create a launch script and desktop entry. After that’s done, you can start a new project in PhpStorm. Next time, you can start PhpStorm from Unity Dash.
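A hedged aside: if the generated desktop entry ever goes missing, you can recreate one by hand. The paths assume the /opt layout used above, and the icon filename is a guess for this PhpStorm version:

# recreate the PhpStorm desktop entry manually
cat > ~/.local/share/applications/phpstorm.desktop <<'EOF'
[Desktop Entry]
Name=PhpStorm
Exec=/opt/phpstorm/bin/phpstorm.sh %f
Icon=/opt/phpstorm/bin/webide.png
Type=Application
Categories=Development;IDE;
EOF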
Alternatively, you can install PhpStorm with Ubuntu Make. First, make sure all your system packages are up to date by running the following apt commands in the terminal:

sudo apt update
sudo apt upgrade
sudo apt install ubuntu-make

Then install PhpStorm using the following command:

umake ide phpstorm

By default the PhpStorm IDE will be installed to Ubuntu Make’s default path.

Basics of PhpStorm: you click on Create New Project to create a new PHP project and follow the instructions depending on your project requirements. You can also open an existing project in PhpStorm. Now, you can use PhpStorm for your web development projects.

If for any reason you want to uninstall PhpStorm, simply delete the /opt/phpstorm/ directory and all subdirectories and files in it.

I hope this article helped you to install PhpStorm on Ubuntu 16.04/17.04. As always, if you found this post useful, then subscribe to our free newsletter. You can also follow us on Google+, Twitter or like our Facebook page.

0 notes
Text
Download magic launcher

I am very new to the world of minecraft and I am slowly getting to be good at it thanks to a lot of help from forums like these and of course other nooks and crannies of the internet. I play minecraft on Ubuntu 11.10 and I have a fairly decent system and the game runs without any serious issues. So I stumbled upon this launcher and it looks like it would be ideal for me to take my minecraft experience to the next level in terms of mods etc. So I tried to do that but I got stuck - this problem might be more Ubuntu related than the launcher itself but I still need some expertise.

I run the minecraft game using the standard script:

export LD_LIBRARY_PATH="/usr/lib/jvm/java-7-oracle/jre/lib/amd64"
padsp java -Xmx512M -Xms256M -cp Minecraft.jar

Now when I run the magic launcher it comes up fine, but while doing setup, in the advanced tab I don't know what to give for JAVA. If I leave the default, /usr/lib/jvm/java-7-oracle/jre/bin/java, then while hitting TEST I get the error:

minecraft/bin/natives/liblwjgl.so: wrong ELF class: ELFCLASS32 (Possible cause: architecture word width mismatch)

If I try and change it to match the LD_LIBRARY_PATH, I don't have an executable I can point to in the amd64 directory - so I am unable to set it as it does not accept a directory. I think the problem revolves around setting up java correctly, and I am wondering if I should have got a 32 bit java installed. I know that to run Minecraft the export statement is mandatory (especially so for java 7), so I am running around in circles trying to get the launcher working. I am eager to try some mods etc, so I was hoping someone out here can help me out.

A reply: On the Magic Launcher's setup page, ModLoader.zip has to be at the top of the top window, so it loads first - that is, with mods that require the ModLoader to run. The same is true of mods that require special sound files etc. Now OptiFine C Light will not work with ModLoader.zip, and OptiFine C Light will not work with 64 bit Java. I would bet there are other mods that would not work with 64 bit Java. As for me, I won't downgrade to a 32 bit Java. I'm concerned that, ok, I fixed this Minecraft mod but now I broke something else that needs the 64 bit version of Java on my Ubuntu 12.04 64 bit operating system. I would be more inclined to install Ubuntu 12.04 32bit in the first place to keep from having the headaches that would likely come from overly modifying my Ubuntu 64 bit operating system. There are a lot of other Mods they have run in the past, but this is what they're using right now.

The original poster, later: Somewhere on the internet, I read that java 7 oracle is better than openjdk, and I made sure I updated. As you said, the java path was /usr/lib/jvm/java-7-oracle/jre/bin/java and this was what was causing the architecture word width mismatch exception. But I have more news - today when I opened the magic launcher and tried the same stuff, it worked. Looks like openjdk has a better java path that takes into account the amd64 bit. Upon hitting test, it did launch my minecraft game, though I did note that even though my items were intact my achievements had croaked and I seem to have to start at the very beginning on that.
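For anyone hitting the same error, a quick hedged way to check whether the JVM and the bundled natives agree on word width (paths taken from the thread above; adjust to your own install):

# is the JVM 32- or 64-bit? look for "64-Bit Server VM" in the output
/usr/lib/jvm/java-7-oracle/jre/bin/java -version
# is the bundled LWJGL native 32- or 64-bit? look for "ELF 32-bit" vs "ELF 64-bit"
file ~/.minecraft/bin/natives/liblwjgl.so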
You can personalize your Android phone or tablet with our amazing lovely themes for Android, TODAY! If you like this Magic Launcher app then share with your friends and family member. Rate us and give your valuable comment for this awesome Magic Launcher to create more cool apps like this. > Not Install another Launchers to use this launcher. > New high-tech features will be added constantly. > Collection of HD Icons and HD Magic Wallpaper. > Make your phone unique with all sorts of cool Magic animated effects. New Magic launcher theme is now available! Apply the Loves Launcher to enjoy with FREE Wallpapers and Icon Pack! Make your phone stylish! This fantastic cute Loves launcher will transform your Android device for free! If you are a fan of colorful themes, we have another surprise, this Loves theme launcher will be perfect for you! Get it right now and have a completely new makeover of your Android Smartphone.Ĭhange completely the way your Android looks like with this Magic theme. Magic Launcher specially designed for Magic lover, provides delicate app icons, wallpapers, folder and app drawer interface. Looking for a new way to personalize your Smartphone? If yes, New Magic Launcher is the perfect Magic Launcher for you! Match your personal style to your Smartphone with some amazing wallpapers and fantastic icons and amazing features, that will make your Android device look awesome, with our new Magic Launcher theme! Tired of using the same wallpapers and the plain icons all the time? Download the Magic Launcher to get an amazing new experience!

0 notes
Text
In this tutorial, I’ll walk you through the steps to install Oracle Java 11 on Ubuntu 22.04|20.04|18.04. Java 11 is a long-term support (LTS) release made available to the general public on 25 September 2018 and is production-ready. You can install Java 11 on Ubuntu 22.04|20.04|18.04 from one of the following methods:

Java SE Development Kit 11 (JDK 11)
OpenJDK 11

This guide will cover the installation of Java 11 on Ubuntu 22.04|20.04|18.04 from both methods.

Method 1: Install Java 11 from Upstream repo / PPA - Recommended

For Ubuntu 22.04/20.04, run:

sudo apt update
sudo apt install openjdk-11-jdk

Ubuntu 18.04:

sudo add-apt-repository ppa:linuxuprising/java
sudo apt update
sudo apt install oracle-java11-set-default

If you don’t want to set Java 11 as default, then install:

sudo apt install oracle-java11-installer

Confirm the Java version:

$ java -version
openjdk version "11.0.7" 2020-04-14
OpenJDK Runtime Environment (build 11.0.7+10-post-Ubuntu-3ubuntu1)
OpenJDK 64-Bit Server VM (build 11.0.7+10-post-Ubuntu-3ubuntu1, mixed mode, sharing)

Setting the default version of Java

If you have more than one version of Java installed on your system, you can refer to our guide on how to set the default version for all applications: How to set default Java version on Ubuntu / Debian.

Method 2: Manually install OpenJDK 11 on Ubuntu 22.04|20.04|18.04

OpenJDK is a free and open-source implementation of the Java Platform, Standard Edition, licensed under the GNU General Public License version 2. Check the latest release of OpenJDK 11 before running the command below:

wget https://download.java.net/openjdk/jdk11/ri/openjdk-11+28_linux-x64_bin.tar.gz

This will download the OpenJDK 11 tar file to your working directory. After the download, extract the archive:

tar zxvf openjdk-11+28_linux-x64_bin.tar.gz

Move the resulting folder to /usr/local/:

sudo mv jdk-11* /usr/local/

Set environment variables:

sudo vim /etc/profile.d/jdk.sh

Add:

export JAVA_HOME=/usr/local/jdk-11
export PATH=$PATH:$JAVA_HOME/bin

Source your profile file and check the java command:

$ source /etc/profile
$ java -version
openjdk version "11.0.2" 2018-10-16
OpenJDK Runtime Environment 18.9 (build 11.0.2+9)
OpenJDK 64-Bit Server VM 18.9 (build 11.0.2+9, mixed mode)
$ which java
/usr/local/jdk-11.0.2/bin/java

Method 3: Manually Install Java SE Development Kit 11 (JDK 11)

Download the latest release of JDK 11:

wget --no-check-certificate -c --header "Cookie: oraclelicense=accept-securebackup-cookie" https://download.oracle.com/otn-pub/java/jdk/11.0.12%2B8/f411702ca7704a54a79ead0c2e0942a3/jdk-11.0.12_linux-x64_bin.deb

Then install the package with the dpkg command:

sudo apt install ./jdk-11.0.12_linux-x64_bin.deb

If you encounter dependency issues, then run:

$ sudo apt -f install
$ sudo dpkg -i jdk-11.0.12_linux-x64_bin.deb

Set environment variables:

sudo vim /etc/profile.d/jdk.sh

Add:

export JAVA_HOME=/usr/lib/jvm/jdk-11.0.12/
export PATH=$PATH:$JAVA_HOME/bin

Source the file and confirm the Java version installed:

$ source /etc/profile.d/jdk.sh
$ java -version
java version "11.0.12" 2021-10-16 LTS
Java(TM) SE Runtime Environment 18.9 (build 11.0.2+9-LTS)
Java HotSpot(TM) 64-Bit Server VM 18.9 (build 11.0.2+9-LTS, mixed mode)
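The linked guide essentially comes down to update-alternatives; a short hedged sketch of switching the default when several JDKs are installed:

# list the registered java binaries and pick one interactively
sudo update-alternatives --config java
sudo update-alternatives --config javac
# verify the switch took effect
java -version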
0 notes
Text
How to install java on mac youtube.com

Editor's note: This question was asked in 2014, and the answers may be outdated.

Question: I want to do some programming with the latest JavaFX, which requires Java 8. I'm using IntelliJ 13 CE and Mac OS X 9 Mavericks. I ran Oracle's Java 8 installer, and the files look like they ended up at /Library/Java/JavaVirtualMachines/jdk1.8.0_05.jdk, but previous versions are at /System/Library/Java/JavaFrameworks/jdk1.6. Not sure why the latest installer puts this in /Library instead of /System/Library (nor what the difference is). But /usr/libexec/java_home doesn't find 1.8, so all the posts I've found on how to set your current java version don't work. I've tried adding a symbolic link to make it look like 1.8 is in /System/Library; /usr/libexec/java_home -V still only lists the old Java 1.6. Ironically, the "Java" control panel under System Preferences shows only Java 1.8! Why doesn't Oracle's installer put it where it really goes? And how can I work around this problem?

Answer 1 (assumption: a Mac machine and you already have homebrew installed).

Install cask (with Homebrew 0.9.5 or higher, cask is included, so skip this step):

$ brew tap caskroom/cask

To install the latest java:

$ brew cask install java

To install java 8:

$ brew cask install adoptopenjdk/openjdk/adoptopenjdk8

If you want to install/manage multiple versions then you can use 'jenv'. Install and configure jenv:

$ brew install jenv
$ echo 'export PATH="$HOME/.jenv/bin:$PATH"' >> ~/.bash_profile
$ echo 'eval "$(jenv init -)"' >> ~/.bash_profile

Add the installed java to jenv:

$ jenv add /Library/Java/JavaVirtualMachines/jdk1.8.0_202.jdk/Contents/Home
$ jenv add /Library/Java/JavaVirtualMachines/jdk1.11.0_2.jdk/Contents/Home

To see all the installed java versions:

$ jenv versions

The above command will give the list of installed java versions:

* system (set by /Users/lyncean/.jenv/version)

Configure the java version which you want to use:

$ jenv global oracle64-1.6.0.39

To set JAVA_HOME:

$ jenv enable-plugin export

Answer 2. An option that I am starting to really like for running applications on my local computer is to use Docker. You can simply run your application within the official JDK container - meaning that you don't have to worry about getting everything set up on your local machine (or worry about running multiple different versions of the JDK for different apps etc). Although this might not help you with your current installation issues, it is a solution which means you can side-step the minefield of issues related with trying to get Java running correctly on your dev machine!

No need to set up any version of Java on your local machine (you'll just run Java within a container which you pull from Docker Hub).
Very easy to switch to different versions of Java by simply changing the tag on the container.
Project dependencies are installed within the container - so if you mess up your config you can simply nuke the container and start again.

Create a docker-compose.yml file with version: "2". Here we are specifying the Java container running version 8 of the SDK (java:8 - to use Java 7, you could just specify: java:7), and we are mapping the local directory to the directory /usr/src/myapp inside the container.
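The compose file itself is not shown in full above; a hedged sketch of what it might look like under the assumptions already stated (image java:8, a Main.java sitting in the current directory):

# sketch of the docker-compose.yml described above, then run it
cat > docker-compose.yml <<'EOF'
version: "2"
services:
  app:
    image: java:8
    volumes:
      - .:/usr/src/myapp
    working_dir: /usr/src/myapp
    command: bash -c "javac Main.java && java Main"
EOF
docker-compose up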
Note for Ubuntu users: the Ubuntu repository offers two (2) open-source Java packages, the Java Development Kit (OpenJDK) and the Java Runtime Environment (OpenJRE), and instructions based on them work on Ubuntu 18.04, Ubuntu 20.04 and any other Ubuntu-based distribution (including Linux Mint, Kubuntu, and Elementary OS); all you need is access to the command-line/terminal window. If you are looking for other Java installation guides, please refer to: Installing Java on CentOS 7 or CentOS 8.

0 notes
Text
Install Gradle on Ubuntu, CentOS and derivatives
Install Gradle on Ubuntu, CentOS and derivatives. Gradle is an automated build tool based on the principles of Apache Ant and Apache Maven, but unlike those, which offer a traditional XML-based form of representing the project configuration, Gradle provides a DSL in the Groovy and Kotlin programming languages. Other important differences worth noting: Apache Maven is based on the concept of the project life cycle; Apache Ant, on the order in which tasks run, determined by their dependency relations. On the opposite side we have Gradle, which uses a directed acyclic graph (DAG) to determine the order in which tasks should be done. Gradle was designed for extensible multi-project builds, and it also supports incremental builds by automatically determining which components of the build tree have not changed and which tasks dependent on those parts do not need to run again. In today's article we will see how to install Gradle on CentOS 7 / 8, and on Ubuntu 18.04 and later (their derivatives and Debian 10 included).
Install Gradle on Ubuntu, CentOS and derivatives
The only prerequisite for installing Gradle is to have Java JDK or JRE version 8 (or higher) installed on the system. We will install OpenJDK and a couple of necessary tools.

Install OpenJDK on Ubuntu, Linux Mint and derivatives:

sudo apt update
sudo apt install -y default-jdk-headless
sudo apt install -y wget unzip

Install OpenJDK on CentOS and derivatives:

sudo yum update
sudo yum install -y java
sudo yum install -y wget unzip

The rest of the installation is the same on CentOS and Ubuntu.
Install Gradle on Linux
We will install the latest version, in this case Gradle 6.3. You can check whether an update exists on its official page.

Download Gradle:

cd /tmp
wget https://services.gradle.org/distributions/gradle-6.3-bin.zip

Extract the package and move its contents to the tool's directory:

unzip gradle-*.zip
mkdir /opt/gradle
cp -pr gradle-*/* /opt/gradle

Verify that the files were moved correctly:

ls /opt/gradle/

Example output...

LICENSE NOTICE bin getting-started.html init.d lib media

Now you must include Gradle's /bin in the PATH environment variables:

echo "export PATH=/opt/gradle/bin:${PATH}" | tee /etc/profile.d/gradle.sh

The gradle.sh file must contain the following...

export PATH=/opt/gradle/bin:${PATH}

Grant it the necessary execution permissions:

sudo chmod +x /etc/profile.d/gradle.sh

Load the variables for the current session:

source /etc/profile.d/gradle.sh

As the final point of the article, we verify the Gradle installation:

gradle -v

Example of correct output...

Welcome to Gradle 6.3!

Here are the highlights of this release:
- Java 14 support
- Improved error messages for unexpected failures

For more details see https://docs.gradle.org/6.3/release-notes.html

------------------------------------------------------------
Gradle 6.3
------------------------------------------------------------

Build time: 2020-04-12 20:32:04 UTC
Revision: bacd40b727b0130eeac8855ae3f9fd9a0b207c60
Kotlin: 1.3.70
Groovy: 2.5.10
Ant: Apache Ant(TM) version 1.10.7 compiled on September 1 2019
JVM: 11.0.6 (Ubuntu 11.0.6+10-post-Ubuntu-1ubuntu118.04.1)
OS: Linux 5.0.0-1026-gcp amd64

Telegram channels: Canal SoloLinux – Canal SoloWordpress

I hope this article is useful to you. You can help us keep the server running with a donation (PayPal), or collaborate with the simple gesture of sharing our articles on your website, blog, forum or social networks.
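One final, optional check: scaffold and build a throwaway project to confirm the toolchain works end to end. A hedged sketch assuming Gradle 6's init flags (the init task may prompt for a project name; the defaults are fine):

mkdir /tmp/gradle-demo && cd /tmp/gradle-demo
gradle init --type java-application --dsl groovy
gradle build
gradle run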
#ApacheAnt#ApacheMaven#Centos#Gradle#Gradle6.3#grafoacíclicodirigido#groovy#InstalarGradle#instalarGradleenCentOS#InstalarGradleenLinux#instalarGradleenUbuntu#JavaJDK#OpenJDK#Variablesdeentorno
0 notes
Text
SDN yak shaving (ODL)
1. Download mininet VM
http://mininet.org/download/
· Import mininet to VMbox
· Set network to bridge mode
· Check IP in mininet VM
· Connect mininet through windows terminal
2. Install OpenDaylight
· Download ubuntu-14.04.3-server-amd64
· Set network to bridge mode
· Install dependency
apt-get update
apt-get install maven git openjdk-7-jre openjdk-7-jdk unzip
· download opendaylight
wget https://nexus.opendaylight.org/content/repositories/public/org/opendaylight/integration/distribution-karaf/0.3.1-Lithium-SR1/distribution-karaf-0.3.1-Lithium-SR1.tar.gz
· unzip file
· cd distribution-karaf-0.3.1-Lithium-SR1
· set java path
export JAVA_HOME=/usr/lib/jvm/java-1.7.0-openjdk-amd64
Set network:
sudo vi /etc/network/interfaces
# The loopback network interface
auto lo
iface lo inet loopback
# The primary network interface
auto eth0
iface eth0 inet static # set your own network address; make sure your computer and your VM are in the same subnet
address 192.168.172.105
netmask 255.255.255.0
network 192.168.172.0
broadcast 192.168.172.255
gateway 192.168.172.1
dns-nameservers 8.8.8.8
sudo shutdown -r now
cd distribution-karaf-0.3.1-Lithium-SR1/bin
sudo ./karaf #start opendaylight
feature:install odl-restconf-all odl-l2switch-switch odl-dlux-all odl-mdsal-all
3. Association
After we use mininet to create a network, we can associate the OpenDaylight GUI with it.
Then we can visit the web UI (remember to set mininet's remote controller address to this address):
http://192.168.172.105:8181/index.html
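For reference, a typical way to point Mininet at the controller from the mininet VM (the IP is the OpenDaylight VM configured above; the topology and switch flags are just an example):

sudo mn --controller=remote,ip=192.168.172.105,port=6633 --topo=tree,depth=2 --switch=ovsk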
0 notes
Text
How to set up an Apache Spark cluster with a Hadoop Cluster on AWS (Part 1)
One of the big points of interest in the last few years comes from the possibilities that Big Data entails. Organizations, from the smallest startup to the biggest, oldschool enterprise, are coming to the realization that there's tons of data easy to come by in these days of Massive, Always-on Networked Computing in the Hands of Everyone(tm), be it through Data Mining or out of their old, classic datasets, and it turns out there's tons of value in being able to do something for a change with that data. Maybe you can use your data to understand your users better to market better to them, or you can understand market trends better and ramp up your production at the right time to maximize profits... there's tons of ways to get smarter and better at business from Data.
But what's the problem? Data is finicky. Data can be a diamond in the rough: are you getting the right data? Does it need cleaning up or normalizing (protip: it usually does) or formatting to be usable? How do you transform it somehow to make it useful? Do you need to serve it to someone, and how, to maximize efficiency?
A lot of times, the scale of that data is a challenge too. So, this is what we call a Data Lake. We call it a data lake because there's a preemptive assumption: that data ingresses and egresses from your Organization in many shapes and forms, and at many different sizes. But, how can we make sense of your data lake? How do you pull this off? O, what is the shape of water, thou aske? Well, that's the crux of the matter.
Enter Big Data, which is kind of an umbrella term for what we're trying to accomplish. The very prospect of Big Data is that we have the tech and the beef (distributed computing, the cloud, and good networking) to make sense of all of your data, no matter even if it's in the order of magnitude of petabytes.
So, there's lots of different propositions in the tech market today to attack this ~even though the more you look it seems that every company is coming up with their own proprietary ways to solve this and sell you some smoke and mirrors rather than actual results~. But lately the dust has settled on a few main players. Specifically: Apache Spark to run compute, and Hadoop in the persistence layer.
Why Spark and Hadoop? Spark is a framework to run compute in parallel across many machines, that plays fantastically well with JVM languages. We are coming off almost 30 years of fantastic legacy in the Java ecosystem, which is like a programming lingua franca at this point. Particularly, it's exciting to program on Spark on languages such as Scala or Clojure, that not only have [strong concurrency models](https://the-null-log.org/post/177889728984/concurrency-parallellism-and-strong-concurrency), but also have normalized conceptions of map and reduce operations to munge and crunch data baked right into the language (it will be seen, in a bit, that Map/Reduce is a fundamentally useful construct to process Big Data).
On the other part, Hadoop can make many disk volumes be seen as just one, while handling all the nitty gritty shitty details behind the scenes. Let's face it: when you operate in the order of petabytes, your data is not gonna fit in a single machine, so that's why you need a good distributed file system.
And yes, before you say so, yes: I know there's managed services. I know Amazon has EMR and Redshift, I know I don't need to do this manually if Amazon Will Run It For Me Instead(tm). But SHUT UP.
I'm gonna set up a cluster so you don't have to!
And besides, we can use very exciting cloud technologies, that leverage really modern programming paradigms and that enable us to exploit the web better and faster, particularly with the event model of cloud computing. More on that later, because it's something that I really love about cloud services, but we can't go in depth on Events right now.
So this exercise will proceed in three phases:
1) Defining the compute architecture and bringing up infrastructure
2) Defining the data lake and your ingress/egress
3) Crunching data!
Defining the compute architecture and bringing up infrastructure
Spark
Spark works clustered, in a master-slave configuration, with many worker nodes reacting to instructions sent by a master node, which thus works as the control plane. With something as sophisticated as a container orchestrator you could run these workloads with containerization and scale up/down as needed. Cool, right?
So this is a rough estimate of what our architecture is going to look like:
The number of worker nodes is up to you and your requirements ;). But this is the main idea. Rough sketch.
We'll run all the machines on EC2. Eventually, we could run all our compute containerized like I said, but for now, we'll do servers.
I plan to run small, replicable machines. One of the tenets of cloud computing is that your compute resources should be stateless and immutable, and for the sake of practicality you should consider them ephemeral and transparently replaceable.
For the machines I'll use AML (Amazon Linux). A nice, recent version! I love CentOS-likes and AML is well suited for EC2.
Now, we will provision the machines using cloud-init. Cloud-init is a fantastic resource if you subscribe to sane cloud computing principles, particularly the premise of infrastructure as code. Cloud-init is a tool that you can find in most modern linux distros that you can run first thing after creating a machine, with what's basically yaml-serialized rules as to how the machine should be configured in terms of unix users, groups and permissions, access controls (such as ssh keys), firewall rules, installation of utilities into the machine and any and all other housekeeping needed.
Why is it that important to write your bootstrapping logic in cloud-init directives? In cloud computing, given that the resources that you have access to are theoretically endlessly elastic and scalable, you should focus more on the substance behind the compute resources that you use for your operations rather than the resource itself, since the resources can be deprovisioned or scaled in replication at any time. Thus, if you specify the configuration, tools, utilities, and rules that should dictate how your resource works in a text file, not only does your resource become easily available and easily replicable, but you also get to version it as if it was any other piece of logic in your application. Since this configuration should not change arbitrarily, that means that any and all resources that you provision will, each and every time, be configured exactly the same and work exactly the same as any other resources that you have provisioned.
Besides, tangentially: cloud-init gives you a comfy layer of abstraction that puts you one step closer to the deliciousness of the lift and shift ideal. If you notice, cloud-init has constructs to handle user creation and installing utilities and such, without having to code directly to the environment. You don't have to worry if you're using a Slackware-like or Debian-like, and which assumptions are made under the hood or not :)
(Bear in mind that I have only tested this on Ubuntu on AWS. If you're running another distro or are on another cloud, you are GOING TO HAVE TO adjust the cloud-init directives to match your environment! Debugging is key! You can look on the cloud-init log after your compute launches, usually by default in: /var/log/cloud-init-output.log)
Marvelous!
Infrastructure as code is the bees fuckin knees, y'all!
So, this is my cloud-init script, which is supported natively in AWS EC2:
#cloud-config
repo_update: true
repo_upgrade: all

# users:
#   - name: nullset
#     groups: users, admin
#     shell: /bin/bash
#     sudo: ALL=(ALL) NOPASSWD:ALL
#     ssh_authorized_keys: ssh-rsa ... nullset2@Palutena

packages:
  - git
  - ufw
  - openjdk-8-jre

runcmd:
  - [ bash, /opt/cloud-init-scripts/setup_spark_master.sh ]

write_files:
  - path: /opt/cloud-init-scripts/setup_spark_master.sh
    content: |
      #!/bin/bash
      SPARK_VERSION="2.4.0"
      HADOOP_VERSION="2.7"
      APACHE_MIRROR="apache.uib.no"
      LOCALNET="0.0.0.0/0"

      # Firewall setup
      ufw allow from $LOCALNET
      ufw allow 80/tcp
      ufw allow 443/tcp
      ufw allow 4040:4050/tcp
      ufw allow 7077/tcp
      ufw allow 8080/tcp

      # Download and unpack Spark
      curl -o /tmp/spark-$SPARK_VERSION-bin-hadoop$HADOOP_VERSION.tgz http://$APACHE_MIRROR/spark/spark-$SPARK_VERSION/spark-$SPARK_VERSION-bin-hadoop$HADOOP_VERSION.tgz
      tar xvz -C /opt -f /tmp/spark-$SPARK_VERSION-bin-hadoop$HADOOP_VERSION.tgz
      ln -sf /opt/spark-$SPARK_VERSION-bin-hadoop$HADOOP_VERSION/ /opt/spark
      chown -R root.root /opt/spark-$SPARK_VERSION-bin-hadoop$HADOOP_VERSION/*

      # Configure Spark master
      cp /opt/spark/conf/spark-env.sh.template /opt/spark/conf/spark-env.sh
      sed -i 's/# - SPARK_MASTER_OPTS.*/SPARK_MASTER_OPTS="-Dspark.deploy.defaultCores=4 -Dspark.executor.memory=2G"/' /opt/spark/conf/spark-env.sh

      # Make sure our hostname is resolvable by adding it to /etc/hosts
      echo $(ip -o addr show dev eth0 | fgrep "inet " | egrep -o '[0-9.]+/[0-9]+' | cut -f1 -d/) $HOSTNAME | sudo tee -a /etc/hosts

      # Start Spark Master with the IP address of eth0 as the address to use
      /opt/spark/sbin/start-master.sh -h $(ip -o addr show dev eth0 | fgrep "inet " | egrep -o '[0-9.]+/[0-9]+' | cut -f1 -d/)
  - path: /etc/profile.d/ec2-api-tools.sh
    content: |
      #!/bin/bash
      export JAVA_HOME=/usr/lib/jvm/java-1.8.0
      export PATH=$PATH:$JAVA_HOME/bin
Of particular attention: Notice how I setup a user for myself on the machine by adding my public SSH key. You should add your personal public key here or you can use a private key generated by ec2 to connect to the machine and delete the users block if you prefer to use a private key generated by ec2.
We will use this as our "canon" image for our spark master. So, let's create the machine and pass this cloud-init script as the User Data when configuring our compute instance:
If you run this and everything goes fine, you should end up with a complete spark installation under /opt/spark, with a bunch of helper scripts located in /opt/spark/sbin. You should be able to confirm or debug any issues by taking a look at your cloud-init log which should be by default on /var/log/cloud-init.log.
If you see something like this you made it:
starting org.apache.spark.deploy.master.Master, logging to /opt/spark/logs/spark-[user]-org.apache.spark.deploy.master.Master-1-[hostname].out
Now, we'll do something very similar for the worker nodes and launch them with cloud-init directives. Remember to replace the value for the IP of the master server that we created in the step before you run this!!!!!
#cloud-config
repo_update: true
repo_upgrade: all

# users:
#   - name: nullset
#     groups: users, admin
#     shell: /bin/bash
#     sudo: ALL=(ALL) NOPASSWD:ALL
#     ssh_authorized_keys: ssh-rsa ... nullset2@Palutena

packages:
  - git
  - ufw
  - openjdk-8-jre

runcmd:
  - [ bash, /opt/cloud-init-scripts/init_spark_worker.sh ]

write_files:
  - path: /opt/cloud-init-scripts/init_spark_worker.sh
    content: |
      #!/bin/bash
      SPARK_VERSION="2.4.0"
      HADOOP_VERSION="2.7"
      APACHE_MIRROR="apache.uib.no"
      LOCALNET="0.0.0.0/0"
      SPARK_MASTER_IP="<ip of master spun up before>"

      # Firewall setup
      ufw allow from $LOCALNET
      ufw allow 8081/tcp

      # Download and unpack Spark
      curl -o /tmp/spark-$SPARK_VERSION-bin-hadoop$HADOOP_VERSION.tgz http://$APACHE_MIRROR/spark/spark-$SPARK_VERSION/spark-$SPARK_VERSION-bin-hadoop$HADOOP_VERSION.tgz
      tar xvz -C /opt -f /tmp/spark-$SPARK_VERSION-bin-hadoop$HADOOP_VERSION.tgz
      ln -sf /opt/spark-$SPARK_VERSION-bin-hadoop$HADOOP_VERSION/ /opt/spark
      chown -R root.root /opt/spark-$SPARK_VERSION-bin-hadoop$HADOOP_VERSION/*

      # Make sure our hostname is resolvable by adding it to /etc/hosts
      echo $(ip -o addr show dev eth0 | fgrep "inet " | egrep -o '[0-9.]+/[0-9]+' | cut -f1 -d/) $HOSTNAME | sudo tee -a /etc/hosts

      # Start Spark worker with address of Spark master to join cluster
      /opt/spark/sbin/start-slave.sh spark://$SPARK_MASTER_IP:7077
  - path: /etc/profile.d/ec2-api-tools.sh
    content: |
      #!/bin/bash
      export JAVA_HOME=/usr/lib/jvm/java-1.8.0
      export PATH=$PATH:$JAVA_HOME/bin
Notice: in both scripts we have a variable that has the value for a certain IP subnet. I am currently setting it to 0.0.0.0/0 which means that the subnet that the machine will be on will allow any connections from the world. This is fine enough for development but if you're going to deploy this cluster for production you must change this value. It helps if you're familiar with setting firewall rules on ufw or iptables and/or handling security groups on AWS (which is a completely different subject, which we'll pick up on later).
Another Notice: PLEASE ensure that your TCP rules on your master/slave security groups look like this before you move onward! This goes without saying but you should ensure that both machines can talk to each other through TCP port 7077 which is the spark default for communication and 8080 for the master's Web UI and 8081 for the slave Web UI. It should look something like this
The cool thing at this point is that you could save this as an EC2 Golden Image and use it to replicate workers fast. However, I would not recommend to do that at this point because you would end up with identical configuration across nodes and that could lead to issues. Repeat as many times as needed to provision all of your workers. You could probably instead use an auto-scaling group and make it so things such as the IP of the master and whatnot are read dynamically instead of hardcoded. But this is a start :).
And finally it should be possible to confirm that the cluster is running and has associated workers connected to it if you take a look at the Spark Master UI, which should be pretty simple if you look at the content being served on the master on port 8080. So open up the ip address of your master node on port 8080 on a web browser and you should see the web UI.
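To go one step beyond the UI check, you can submit a small job against the cluster. A hedged sketch using the SparkPi example bundled with Spark 2.4.0 (replace MASTER_IP with your master's address):

/opt/spark/bin/spark-submit --master spark://MASTER_IP:7077 \
    --class org.apache.spark.examples.SparkPi \
    /opt/spark/examples/jars/spark-examples_2.11-2.4.0.jar 100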
So that's it for the time being! Next time we'll set up a Hadoop cluster and grab us a bunch of humongous datasets to crunch for fun. Sounds exciting?
0 notes
Text
Presto Docker Container with Graviton 2 Processor
I recently tried to run Presto on Arm architecture system to evaluate how it can cost-effectively achieve faster performance as part of my work. Thanks to AWS, we can make use of server machines having Arm processors such as Graviton 1/2. We have succeeded in experimenting without having much difficulty. The result of the research was described in the following articles.
Presto Experiment with Graviton Processor
High Performance SQL: AWS Graviton2 Benchmarks with Presto and Arm Treasure Data CDP
But I have found that they do not cover the full details of how to set up the Docker environment or the steps to build an Arm-supporting docker image. Graviton 2 is now publicly available, and we can even personally try the processor for running a distributed system like Presto. Therefore, I'm going to walk through that process step by step here.
The topics this article will cover are:
How to install docker engine in the Arm machine
How to build Arm-supporting docker image for Presto
How to run the Arm-supporting container in Graviton 2 instance
Setup Graviton 2
I used the Ubuntu 18.04 (LTS) built for the Arm64 platform. The AMI id is ami-0d221091ef7082bcf. As it does not contain a docker engine inside, we need to install it manually. The instance type I used is m6g.medium. Once the instance is ready, follow the below steps.
Setup the Repository
Install necessary packages first.
$ sudo apt-get update
$ sudo apt-get install \
    apt-transport-https \
    ca-certificates \
    curl \
    gnupg-agent \
    software-properties-common
Add docker’s official GPG key.
$ curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
Verify the key.
$ sudo apt-key fingerprint 0EBFCD88
pub   rsa4096 2017-02-22 [SCEA]
      9DC8 5822 9FC7 DD38 854A  E2D8 8D81 803C 0EBF CD88
uid   [ unknown] Docker Release (CE deb) <docker@docker.com>
sub   rsa4096 2017-02-22 [S]
Finally, add the repository for installing the docker engine for the Arm platform.
$ sudo add-apt-repository \
    "deb [arch=arm64] https://download.docker.com/linux/ubuntu \
    $(lsb_release -cs) \
    stable"
Install Docker Engine
$ sudo apt-get update
$ sudo apt-get install docker-ce docker-ce-cli containerd.io
See the list of available versions.
$ apt-cache madison docker-ce
docker-ce | 5:19.03.12~3-0~ubuntu-bionic | https://download.docker.com/linux/ubuntu bionic/stable arm64 Packages
docker-ce | 5:19.03.11~3-0~ubuntu-bionic | https://download.docker.com/linux/ubuntu bionic/stable arm64 Packages
docker-ce | 5:19.03.10~3-0~ubuntu-bionic | https://download.docker.com/linux/ubuntu bionic/stable arm64 Packages
docker-ce | 5:19.03.9~3-0~ubuntu-bionic | https://download.docker.com/linux/ubuntu bionic/stable arm64 Packages
docker-ce | 5:19.03.8~3-0~ubuntu-bionic | https://download.docker.com/linux/ubuntu bionic/stable arm64 Packages
docker-ce | 5:19.03.7~3-0~ubuntu-bionic | https://download.docker.com/linux/ubuntu bionic/stable arm64 Packages
docker-ce | 5:19.03.6~3-0~ubuntu-bionic | https://download.docker.com/linux/ubuntu bionic/stable arm64 Packages
docker-ce | 5:19.03.5~3-0~ubuntu-bionic | https://download.docker.com/linux/ubuntu bionic/stable arm64 Packages
docker-ce | 5:19.03.4~3-0~ubuntu-bionic | https://download.docker.com/linux/ubuntu bionic/stable arm64 Packages
...
I choose the latest one, 5:19.03.12~3-0~ubuntu-bionic.
Install the package.
$ sudo apt-get install \
    docker-ce=5:19.03.12~3-0~ubuntu-bionic \
    docker-ce-cli=5:19.03.12~3-0~ubuntu-bionic \
    containerd.io
To run the docker command without root permission, add the user into the docker group.
$ sudo usermod -aG docker ubuntu
Login into the instance again to reflect the latest user setting.
Build the Docker Image
Let’s build an Arm-supporting image from the source. Put the following Dockerfile under any directory as you like. I put it under /path/to/presto-arm64v8.
FROM arm64v8/openjdk:11

RUN \
    set -xeu && \
    apt-get -y -q update && \
    apt-get -y -q install less && \
    apt-get -q clean all && \
    rm -rf /var/cache/yum && \
    rm -rf /tmp/* /var/tmp/* && \
    groupadd presto --gid 1000 && \
    useradd presto --uid 1000 --gid 1000 && \
    mkdir -p /usr/lib/presto /data/presto && \
    chown -R "presto:presto" /usr/lib/presto /data/presto

ARG PRESTO_VERSION
COPY --chown=presto:presto presto-server-${PRESTO_VERSION} /usr/lib/presto

EXPOSE 8080
USER presto:presto
ENV LANG en_US.UTF-8
CMD ["/usr/lib/presto/bin/run-presto"]
We also need a script to launch the process as follows. The following file is put under /path/to/presto-arm64v8/bin/run-presto.
#!/bin/bash

set -xeuo pipefail

if [[ ! -d /usr/lib/presto/etc ]]; then
    if [[ -d /etc/presto ]]; then
        ln -s /etc/presto /usr/lib/presto/etc
    else
        ln -s /usr/lib/presto/default/etc /usr/lib/presto/etc
    fi
fi

set +e
grep -s -q 'node.id' /usr/lib/presto/etc/node.properties
NODE_ID_EXISTS=$?
set -e

NODE_ID=""
if [[ ${NODE_ID_EXISTS} != 0 ]]; then
    NODE_ID="-Dnode.id=${HOSTNAME}"
fi

exec /usr/lib/presto/bin/launcher run ${NODE_ID} "$@"
Afterward, we can build the latest presto.
$ cd /path/to/presto $ ./mvnw -T 1C install -DskipTests
Make sure to find the artifact under /path/to/presto/presto-server/target. Finally, the following commands will provide the docker image supporting Arm architecture.
$ export PRESTO_VERSION=340-SNAPSHOT
$ export WORK_DIR=/path/to/presto-arm64v8

# Copy presto server module
$ cp /path/to/presto/presto-server/target/presto-server-${PRESTO_VERSION}.tar.gz ${WORK_DIR}
$ tar -C ${WORK_DIR} -xzf ${WORK_DIR}/presto-server-${PRESTO_VERSION}.tar.gz
$ rm ${WORK_DIR}/presto-server-${PRESTO_VERSION}.tar.gz
$ cp -R ${WORK_DIR}/bin ${WORK_DIR}/default ${WORK_DIR}/presto-server-${PRESTO_VERSION}
$ docker buildx build ${WORK_DIR} \
    --platform linux/arm64/v8 \
    -f Dockerfile --build-arg "PRESTO_VERSION=340-SNAPSHOT" \
    -t "presto:${PRESTO_VERSION}-arm64v8" \
    --load
The image you want should be listed in the list of images.
$ docker images
REPOSITORY   TAG                    IMAGE ID       CREATED       SIZE
presto       340-SNAPSHOT-arm64v8   cf9c4124516f   3 hours ago   1.25GB
Run the Container
We can transfer the image by using save and load command of docker. The following command will serialize the image in the tar.gz format so that we can copy the image to the Graviton2 instance through the network. It will take several minutes to complete.
$ docker save presto:340-SNAPSHOT-arm64v8 | gzip > presto-arm64v8.tar.gz
Copy the image to the instance. It will also take several minutes.
$ scp -i ~/.ssh/mykey.pem \
    presto-arm64v8.tar.gz ubuntu@<graviton2-instance>:/home/ubuntu
Using the load command will bring the image into the executable format in the instance.
$ ssh -i ~/.ssh/mykey.pem ubuntu@<graviton2-instance>
$ docker load < presto-arm64v8.tar.gz
Finally, you get there.
$ docker run -p 8080:8080 \
    -it presto:340-SNAPSHOT-arm64v8
...
WARNING: Support for the ARM architecture is experimental
...
Note that Arm support is still an experimental feature, as the warning message says. Please let the community know if you find something wrong using Presto on the Arm platform.
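Before wrapping up, one quick hedged way to confirm the coordinator is serving from inside the instance (the /v1/info endpoint is part of Presto's REST API; the exact JSON returned will differ per build):

curl http://localhost:8080/v1/info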
Thanks.
Reference
Install Docker Engine on Ubuntu
Presto Docker Image
AWS Graviton Processor
Presto Experiment with Graviton Processor
High Performance SQL: AWS Graviton2 Benchmarks with Presto and Arm Treasure Data CDP
source http://www.lewuathe.com/run-presto-on-graviton2-processor.html
0 notes
Link
Hello! In this blog, I will show you how we can install Hadoop version 3.2.1 in distributed mode on Ubuntu 16.04. I have installed VirtualBox to create three virtual machines for the nodes on my machine.
Step 1: Virtual machines and configuration changes
I have assigned permanent IP addresses to my virtual machines by changing the configuration of my TP-Link router. The IPs of the machines are mentioned below:
192.168.0.10
192.168.0.11
192.168.0.12
I have done some configuration changes on each virtual machine as mentioned below.
Machine 1: I have installed the NameNode on this machine.
IP: 192.168.0.10
hostname: server1-cluster1
I have edited the /etc/hosts file
$ sudo vim /etc/hosts
and entered the following information in the hosts file:
192.168.0.10 server1.bigdata.com
192.168.0.11 server2.bigdata.com server2-cluster1
192.168.0.12 server3.bigdata.com server3-cluster1
Now this machine knows machine 192.168.0.11 as “server2.bigdata.com” or “server2-cluster1” and 192.168.0.12 as “server3.bigdata.com” or “server3-cluster1”
After editing, the /etc/hosts file looks as below.
Similarly, I have edited the /etc/hosts file on the other two machines.
Machine 2: I have installed the Secondary NameNode on this machine.
IP: 192.168.0.11
host name: server2-cluster1
Machine 3: I have installed the DataNode on this machine.
IP: 192.168.0.12
host name: server3-cluster1
Step 2: Java Installation ( On all machines ) Install the required Java version on each of the virtual machines as mentioned on the link
https://cwiki.apache.org/confluence/display/HADOOP2/HadoopJavaVersions
I have installed OpenJdk 8 on the virtual machines.
Java installation command $sudo apt-get install openjdk-8-jdk
Java command to check the version
$javac -version
Step 3: ssh installation ( On all machines ) ssh must be installed and sshd must be running to use the Hadoop scripts that manage remote Hadoop daemons if the optional start and stop scripts are to be used. Additionally, it is recommended that pdsh also be installed for better ssh resource management.
The command to install ssh $sudo apt install ssh
Command to install pdsh $sudo apt install pdsh
Step 4: Create new user and group on Ubuntu ( On all machines ) I have created a group with the name “bigdata” and a user with the name “hadoopuser”.
Command to create the group (group name is bigdata) $sudo addgroup bigdata
Command to create a new user and add to the group ( Added hadoopuser to the group bigdata ) $sudo adduser --ingroup bigdata hadoopuser
Step 5: Configuring ‘sudo’ permission for the new user - “hadoopuser” ( On all machines )
The command to open the sudoers file $ sudo visudo
Since the default text editor on Ubuntu is nano, we will need to use CTRL + O to write out the changes. Add the below permission line to sudoers:
hadoopuser ALL=(ALL) ALL
Use the CTRL + X keyboard shortcut to exit, and enter Y to save the file.
Step 6: Switch user to “hadoopuser”
Command to switch user $su hadoopuser
Step 7: Configuring Key based login on Master Node ( server1.bigdata.com ) so that it can communicate with slaves through ssh without password
Command to generating a new SSH public and private key pair on your master node $ ssh-keygen -t rsa -P ""
Below command will add the public key to the authorized_keys $ cat $HOME/.ssh/id_rsa.pub >> $HOME/.ssh/authorized_keys
Command to make sure that 'hostname' is added to the list of known hosts, so that a script execution doesn't get interrupted by a question about trusting the computer's authenticity. $ ssh hostname

ssh-copy-id is a small script which copies the ssh public key to a remote host, appending it to your remote authorized_keys.

$ ssh-copy-id -i $HOME/.ssh/id_rsa.pub hadoopuser@server2.bigdata.com
$ ssh-copy-id -i $HOME/.ssh/id_rsa.pub hadoopuser@server3.bigdata.com

ssh is a program for logging into a remote machine and for executing commands on a remote machine. Check that remote login now works without asking for a password. $ ssh server2.bigdata.com
Exit from remote login with ‘exit’ command $ exit
Step 8: Creating Hadoop installation directory and configuring permission ( On all machines )
I have created the directory name local at path /opt/ . Please create /opt directory first in case it doesn’t exist
Command to create ‘local’ directory at path ‘/opt/local/’ sudo mkdir /opt/local
Changing the ownership and permissions of the directory /opt/local
$ sudo chown -R hadoopuser:bigdata /opt/local/ $ sudo chmod -R 755 /opt/local
Step 9: Configuring hadoop tmp directory ( On all machines )
Hadoop tmp directory is used as base for the temporary directories locally
Command to create ‘tmp’ directory
$ sudo mkdir /app/hadoop
$ sudo mkdir /app/hadoop/tmp
Command to change ownership to hadoopuser and group ownership to bigdata $ sudo chown -R hadoopuser:bigdata /app/hadoop
Command to change the read and write permission to 755 $ sudo chmod -R 755 /app/hadoop
Perform Step 10 to Step 18 on the Hadoop Master Node ( server1.bigdata.com )

Step 10: Downloading Hadoop
Command to change directory to hadoop installation directory in our case it is /opt/local $cd /opt/local
I have downloaded hadoop 3.2.1 using wget command $wget https://downloads.apache.org/hadoop/common/hadoop-3.2.1/hadoop-3.2.1.tar.gz
Command to extract hadoop $tar -xvzf hadoop-3.2.1.tar.gz
Command to rename extracted directory hadoop-3.2.1 $mv hadoop-3.2.1 hadoop
Step 11: Configuring hadoop environment variables Edit $HOME/.bashrc file by adding the java and hadoop path.
sudo gedit $HOME/.bashrc
Add the following lines to set the Java and Hadoop environment variable export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64 PATH=$JAVA_HOME/bin:$PATH
export HADOOP_HOME=/opt/local/hadoop export HADOOP_MAPRED_HOME=$HADOOP_HOME export HADOOP_COMMON_HOME=$HADOOP_HOME export HADOOP_HDFS_HOME=$HADOOP_HOME export YARN_HOME=$HADOOP_HOME export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native PATH=$PATH:$HADOOP_HOME/sbin:$HADOOP_HOME/bin export HADOOP_INSTALL=$HADOOP_HOME export HADOOP_OPTS="$HADOOP_OPTS -Djava.library.path=/opt/local/hadoop/lib/native" export PDSH_RCMD_TYPE=ssh
Reload your changed $HOME/.bashrc settings
$ source $HOME/.bashrc
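As a quick sanity check that the new variables resolve, the hadoop binary should now be found on the PATH (output abridged; build details will differ):

$ hadoop version
Hadoop 3.2.1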
Step 12: Setting JAVA_HOME in the hadoop-env.sh file
Change the directory to /opt/local/hadoop/etc/hadoop $cd $HADOOP_HOME/etc/hadoop
Edit hadoop-env.sh file. $sudo vim hadoop-env.sh
Add the below lines to hadoop-env.sh file. Save and Close. export JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk-amd64
Step 13. Edit the core-site.xml file as below:
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://server1.bigdata.com:9000</value>
  </property>
  <property>
    <name>dfs.permissions</name>
    <value>false</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/app/hadoop/tmp</value>
    <description>A base for other temporary directories.</description>
  </property>
</configuration>
Step 14: Edit the hdfs-site.xml file as below

<configuration>
  <property>
    <name>dfs.name.dir</name>
    <value>/app/hadoop/tmp/namenode</value>
  </property>
  <property>
    <name>dfs.data.dir</name>
    <value>/app/hadoop/tmp/datanode</value>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>2</value>
  </property>
  <property>
    <name>dfs.permissions</name>
    <value>false</value>
  </property>
  <property>
    <name>dfs.datanode.use.datanode.hostname</name>
    <value>false</value>
  </property>
  <property>
    <name>dfs.namenode.datanode.registration.ip-hostname-check</name>
    <value>false</value>
  </property>
  <property>
    <name>dfs.namenode.http-address</name>
    <value>server1.bigdata.com:50070</value>
    <description>Your NameNode hostname for http access.</description>
  </property>
  <property>
    <name>dfs.namenode.secondary.http-address</name>
    <value>server2.bigdata.com:50090</value>
    <description>Your Secondary NameNode hostname for http access.</description>
  </property>
</configuration>
Step 15: Edit the yarn-site.xml as below
<configuration>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
    <description>Long running service which executes on Node Manager(s) and provides MapReduce Sort and Shuffle functionality.</description>
  </property>
  <property>
    <name>yarn.log-aggregation-enable</name>
    <value>true</value>
    <description>Enable log aggregation so application logs are moved onto hdfs and are viewable via web ui after the application completed. The default location on hdfs is '/log' and can be changed via yarn.nodemanager.remote-app-log-dir property</description>
  </property>
  <property>
    <name>yarn.nodemanager.resource.memory-mb</name>
    <value>2560</value>
    <description>Amount of physical memory, in MB, that can be allocated for containers.</description>
  </property>
  <property>
    <name>yarn.scheduler.minimum-allocation-mb</name>
    <value>256</value>
  </property>
  <property>
    <description>Minimum increment setting - set to same as min-allocation</description>
    <name>yarn.scheduler.increment-allocation-mb</name>
    <value>256</value>
  </property>
  <property>
    <description>Max available cores data node.</description>
    <name>yarn.nodemanager.resource.cpu-vcores</name>
    <value>2</value>
  </property>
  <property>
    <name>yarn.scheduler.maximum-allocation-mb</name>
    <value>2048</value>
    <final>false</final>
  </property>
  <property>
    <name>yarn.resourcemanager.hostname</name>
    <value>server1.bigdata.com</value>
  </property>
  <property>
    <name>yarn.resourcemanager.scheduler.address</name>
    <value>server1.bigdata.com:8030</value>
  </property>
  <property>
    <name>yarn.resourcemanager.resource-tracker.address</name>
    <value>server1.bigdata.com:8031</value>
  </property>
  <property>
    <name>yarn.resourcemanager.address</name>
    <value>server1.bigdata.com:8032</value>
  </property>
  <property>
    <name>yarn.resourcemanager.admin.address</name>
    <value>server1.bigdata.com:8033</value>
  </property>
  <property>
    <name>yarn.resourcemanager.webapp.address</name>
    <value>server1.bigdata.com:8088</value>
  </property>
</configuration>
Step 16: Edit the mapred-site.xml as below:

<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>server1.bigdata.com:9001</value>
  </property>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
  <property>
    <name>mapreduce.map.memory.mb</name>
    <value>256</value>
  </property>
  <!-- Default 1024. Recommend setting to 4096. Should not be higher than YARN max allocation -->
  <property>
    <name>mapreduce.reduce.memory.mb</name>
    <value>256</value>
  </property>
  <property>
    <description>Application master allocation</description>
    <name>yarn.app.mapreduce.am.resource.mb</name>
    <value>256</value>
  </property>
  <!-- Recommend heapsizes to be 75% of mapreduce.map/reduce.memory.mb -->
  <property>
    <name>mapreduce.map.java.opts</name>
    <value>-Xmx204m</value>
  </property>
  <property>
    <name>mapreduce.reduce.java.opts</name>
    <value>-Xmx204m</value>
  </property>
  <property>
    <description>Application Master JVM opts</description>
    <name>yarn.app.mapreduce.am.command-opts</name>
    <value>-Xmx204m</value>
  </property>
  <property>
    <name>yarn.app.mapreduce.am.env</name>
    <value>HADOOP_MAPRED_HOME=/opt/local/hadoop</value>
  </property>
  <property>
    <name>mapreduce.map.env</name>
    <value>HADOOP_MAPRED_HOME=/opt/local/hadoop</value>
  </property>
  <property>
    <name>mapreduce.reduce.env</name>
    <value>HADOOP_MAPRED_HOME=/opt/local/hadoop</value>
  </property>
</configuration>
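Note how the heap options line up with the container sizes: the -Xmx204m values are roughly 80% of the 256 MB containers (the inline comment suggests 75%; either way, the point is that the JVM heap must sit comfortably below the container limit so non-heap memory fits). A one-line check:

echo $((256 * 80 / 100))   # 204 -> matches the -Xmx204m used above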
Step 17: Edit the workers file and add the following hosts (every host listed here runs a DataNode and a NodeManager); see the SSH check after this list:

server2.bigdata.com
server3.bigdata.com
server1.bigdata.com
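start-dfs.sh and start-yarn.sh reach every host in the workers file over SSH, so it is worth confirming passwordless login from the master first. A minimal sketch, assuming the SSH keys were exchanged in an earlier step:

for host in server1.bigdata.com server2.bigdata.com server3.bigdata.com; do
  ssh "$host" hostname
done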
Step 18: Secure copy, or SCP, is a means of securely transferring files between a local host and a remote host, or between two remote hosts.

Here, we transfer the configured Hadoop files from the master to the slave nodes:
scp -r /opt/local/* hadoopuser@server2.bigdata.com:/opt/local
scp -r /opt/local/* hadoopuser@server3.bigdata.com:/opt/local

Transferring the environment configuration to the slave machines:

scp $HOME/.bashrc hadoopuser@server2.bigdata.com:$HOME/.bashrc
scp $HOME/.bashrc hadoopuser@server3.bigdata.com:$HOME/.bashrc
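The same transfer can be expressed as a loop, which scales better if more slaves are added later; a sketch, again assuming the hadoopuser account:

for host in server2.bigdata.com server3.bigdata.com; do
  scp -r /opt/local/* "hadoopuser@${host}:/opt/local"
  scp "$HOME/.bashrc" "hadoopuser@${host}:$HOME/.bashrc"
done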
Step 19: Format the namenode on the Hadoop master (server1.bigdata.com).

Change the directory to /opt/local/hadoop/sbin:

$ cd /opt/local/hadoop/sbin

Format the namenode:

$ hadoop namenode -format
hadoop namenode -format WARNING: Use of this script to execute namenode is deprecated. WARNING: Attempting to execute replacement "hdfs namenode" instead.
2020-07-02 18:09:49,711 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = server1-cluster1/127.0.1.1
STARTUP_MSG:   args = [-format]
STARTUP_MSG:   version = 3.2.1
STARTUP_MSG:   classpath = **********************************<Print all the classpath>**************
STARTUP_MSG:   build = https://gitbox.apache.org/repos/asf/hadoop.git -r b3cbbb467e22ea829b3808f4b7b01d07e0bf3842; compiled by 'rohithsharmaks' on 2019-09-10T15:56Z
STARTUP_MSG:   java = 1.8.0_252
************************************************************/
2020-07-02 18:09:49,729 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
2020-07-02 18:09:49,816 INFO namenode.NameNode: createNameNode [-format]
2020-07-02 18:09:50,236 INFO common.Util: Assuming 'file' scheme for path /app/hadoop/tmp/namenode in configuration.
2020-07-02 18:09:50,236 INFO common.Util: Assuming 'file' scheme for path /app/hadoop/tmp/namenode in configuration.
Formatting using clusterid: CID-1c02b838-66e7-4532-b57a-c0deddeaee1c
2020-07-02 18:09:50,268 INFO namenode.FSEditLog: Edit logging is async:true
2020-07-02 18:09:50,280 INFO namenode.FSNamesystem: KeyProvider: null
2020-07-02 18:09:50,281 INFO namenode.FSNamesystem: fsLock is fair: true
2020-07-02 18:09:50,282 INFO namenode.FSNamesystem: Detailed lock hold time metrics enabled: false
2020-07-02 18:09:50,285 INFO namenode.FSNamesystem: fsOwner = hadoopuser (auth:SIMPLE)
2020-07-02 18:09:50,285 INFO namenode.FSNamesystem: supergroup = supergroup
2020-07-02 18:09:50,285 INFO namenode.FSNamesystem: isPermissionEnabled = false
2020-07-02 18:09:50,286 INFO namenode.FSNamesystem: HA Enabled: false
2020-07-02 18:09:50,313 INFO common.Util: dfs.datanode.fileio.profiling.sampling.percentage set to 0. Disabling file IO profiling
2020-07-02 18:09:50,329 INFO blockmanagement.DatanodeManager: dfs.block.invalidate.limit: configured=1000, counted=60, effected=1000
2020-07-02 18:09:50,329 INFO blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=false
2020-07-02 18:09:50,332 INFO blockmanagement.BlockManager: dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
2020-07-02 18:09:50,332 INFO blockmanagement.BlockManager: The block deletion will start around 2020 Jul 02 18:09:50
2020-07-02 18:09:50,333 INFO util.GSet: Computing capacity for map BlocksMap
2020-07-02 18:09:50,333 INFO util.GSet: VM type = 64-bit
2020-07-02 18:09:50,334 INFO util.GSet: 2.0% max memory 955.1 MB = 19.1 MB
2020-07-02 18:09:50,335 INFO util.GSet: capacity = 2^21 = 2097152 entries
2020-07-02 18:09:50,343 INFO blockmanagement.BlockManager: Storage policy satisfier is disabled
2020-07-02 18:09:50,344 INFO blockmanagement.BlockManager: dfs.block.access.token.enable = false
2020-07-02 18:09:50,348 INFO Configuration.deprecation: No unit for dfs.namenode.safemode.extension(30000) assuming MILLISECONDS
2020-07-02 18:09:50,348 INFO blockmanagement.BlockManagerSafeMode: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
2020-07-02 18:09:50,348 INFO blockmanagement.BlockManagerSafeMode: dfs.namenode.safemode.min.datanodes = 0
2020-07-02 18:09:50,349 INFO blockmanagement.BlockManagerSafeMode: dfs.namenode.safemode.extension = 30000
2020-07-02 18:09:50,349 INFO blockmanagement.BlockManager: defaultReplication          = 2
2020-07-02 18:09:50,349 INFO blockmanagement.BlockManager: maxReplication              = 512
2020-07-02 18:09:50,349 INFO blockmanagement.BlockManager: minReplication              = 1
2020-07-02 18:09:50,349 INFO blockmanagement.BlockManager: maxReplicationStreams       = 2
2020-07-02 18:09:50,350 INFO blockmanagement.BlockManager: redundancyRecheckInterval   = 3000ms
2020-07-02 18:09:50,350 INFO blockmanagement.BlockManager: encryptDataTransfer         = false
2020-07-02 18:09:50,350 INFO blockmanagement.BlockManager: maxNumBlocksToLog           = 1000
2020-07-02 18:09:50,382 INFO namenode.FSDirectory: GLOBAL serial map: bits=29 maxEntries=536870911
2020-07-02 18:09:50,382 INFO namenode.FSDirectory: USER serial map: bits=24 maxEntries=16777215
2020-07-02 18:09:50,383 INFO namenode.FSDirectory: GROUP serial map: bits=24 maxEntries=16777215
2020-07-02 18:09:50,383 INFO namenode.FSDirectory: XATTR serial map: bits=24 maxEntries=16777215
2020-07-02 18:09:50,401 INFO util.GSet: Computing capacity for map INodeMap
2020-07-02 18:09:50,401 INFO util.GSet: VM type = 64-bit
2020-07-02 18:09:50,401 INFO util.GSet: 1.0% max memory 955.1 MB = 9.6 MB
2020-07-02 18:09:50,401 INFO util.GSet: capacity = 2^20 = 1048576 entries
2020-07-02 18:09:50,402 INFO namenode.FSDirectory: ACLs enabled? false
2020-07-02 18:09:50,402 INFO namenode.FSDirectory: POSIX ACL inheritance enabled? true
2020-07-02 18:09:50,402 INFO namenode.FSDirectory: XAttrs enabled? true
2020-07-02 18:09:50,402 INFO namenode.NameNode: Caching file names occurring more than 10 times
2020-07-02 18:09:50,405 INFO snapshot.SnapshotManager: Loaded config captureOpenFiles: false, skipCaptureAccessTimeOnlyChange: false, snapshotDiffAllowSnapRootDescendant: true, maxSnapshotLimit: 65536
2020-07-02 18:09:50,408 INFO snapshot.SnapshotManager: SkipList is disabled
2020-07-02 18:09:50,410 INFO util.GSet: Computing capacity for map cachedBlocks
2020-07-02 18:09:50,411 INFO util.GSet: VM type = 64-bit
2020-07-02 18:09:50,411 INFO util.GSet: 0.25% max memory 955.1 MB = 2.4 MB
2020-07-02 18:09:50,411 INFO util.GSet: capacity = 2^18 = 262144 entries
2020-07-02 18:09:50,416 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.window.num.buckets = 10
2020-07-02 18:09:50,416 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.num.users = 10
2020-07-02 18:09:50,417 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.windows.minutes = 1,5,25
2020-07-02 18:09:50,421 INFO namenode.FSNamesystem: Retry cache on namenode is enabled
2020-07-02 18:09:50,421 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
2020-07-02 18:09:50,423 INFO util.GSet: Computing capacity for map NameNodeRetryCache
2020-07-02 18:09:50,423 INFO util.GSet: VM type = 64-bit
2020-07-02 18:09:50,423 INFO util.GSet: 0.029999999329447746% max memory 955.1 MB = 293.4 KB
2020-07-02 18:09:50,423 INFO util.GSet: capacity = 2^15 = 32768 entries
2020-07-02 18:09:50,447 INFO namenode.FSImage: Allocated new BlockPoolId: BP-1756667812-127.0.1.1-1593693590440
2020-07-02 18:09:50,462 INFO common.Storage: Storage directory /app/hadoop/tmp/namenode has been successfully formatted.
2020-07-02 18:09:50,498 INFO namenode.FSImageFormatProtobuf: Saving image file /app/hadoop/tmp/namenode/current/fsimage.ckpt_0000000000000000000 using no compression
2020-07-02 18:09:50,560 INFO namenode.FSImageFormatProtobuf: Image file /app/hadoop/tmp/namenode/current/fsimage.ckpt_0000000000000000000 of size 405 bytes saved in 0 seconds .
2020-07-02 18:09:50,568 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
2020-07-02 18:09:50,573 INFO namenode.FSImage: FSImageSaver clean checkpoint: txid=0 when meet shutdown.
2020-07-02 18:09:50,573 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
Step 20: Start the NameNode, SecondaryNameNode, and DataNode daemons across the cluster by running the script on the master (server1.bigdata.com):
$ start-dfs.sh
Step 21: Start the YARN daemons (ResourceManager and NodeManagers) from the master (server1.bigdata.com):
$ start-yarn.sh
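Before moving on, the web interfaces configured earlier should now respond. A quick, hedged check from any machine that can reach the master (expect 200, or a 302 redirect, as the status code):

curl -s -o /dev/null -w "%{http_code}\n" http://server1.bigdata.com:50070   # NameNode UI
curl -s -o /dev/null -w "%{http_code}\n" http://server1.bigdata.com:8088    # ResourceManager UI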
Step 22: Verify the daemons with jps.

The jps (Java Virtual Machine Process Status) tool reports on the JVMs for which it has access permissions, which makes it a quick way to confirm the Hadoop daemons are running.

$ jps
On the master (server1.bigdata.com):

10624 NodeManager
10128 DataNode
9986 NameNode
10986 Jps
10477 ResourceManager
Only on the slave machines (server2.bigdata.com and server3.bigdata.com). On server2.bigdata.com, which also hosts the SecondaryNameNode per hdfs-site.xml:

$ jps
4849 DataNode
4979 SecondaryNameNode
5084 NodeManager
5310 Jps

On server3.bigdata.com:

$ jps
4277 NodeManager
4499 Jps
4145 DataNode
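As an optional end-to-end smoke test, the example jar that ships with Hadoop can run a small MapReduce job through YARN. A sketch, assuming the stock examples jar under the install path used throughout this guide:

hdfs dfs -mkdir -p /user/hadoopuser
yarn jar /opt/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.2.1.jar pi 2 10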
Congratulations! We have successfully completed the three-node installation of Hadoop 3.2.1. Please feel free to reach out in the comments with any issues or queries.
We are a 360-degree SaaS app development company specializing in end-to-end DevOps services that strengthen enterprise IT infrastructure. Our team of DevOps engineers has extensive experience with the latest tools and technologies, providing technical infrastructure support with a focus on continuous delivery and continuous integration. We use agile development methodologies to streamline enterprise software architecture through our DevOps services, and we also provide cloud application development services, formulating effective DevOps strategies to maximize enterprise benefits with end-to-end cloud integration.
Welcome to this guide on how to install Java 17 (OpenJDK 17) on Ubuntu 22.04|20.04|18.04. Java is a high-level object-oriented programming language and computing platform intended to let application developers write once and run everywhere. This means that compiled Java code runs on all platforms that support Java without the need for recompilation. The JDK is a collection of programming tools such as the JRE (Java Runtime Environment), java, javac, jar, and many others.

Java 17 LTS is the latest long-term support release for the Java SE platform, released on 14 September 2021. Java 17 comes with the following notable features:

Enhanced pseudo-random number generators
A new rendering pipeline for macOS, using the Apple Metal API as an alternative to the existing pipeline that uses the deprecated OpenGL API
Removal of the experimental AOT and JIT compiler
Sealed classes and interfaces, which restrict which other classes or interfaces may extend or implement them
Removal of the Remote Method Invocation (RMI) Activation mechanism
Deprecation of the Applet API for removal
Porting of the JDK to macOS/AArch64 in response to Apple's plan to transition its Macintosh computers from x64 to AArch64
Strong encapsulation of JDK internals
Context-specific deserialization filters
The foreign function and memory API, which allows Java programs to interoperate with code and data outside of the Java runtime

With the above information, we are now set to dive into the Java 17 LTS installation. In this guide, we will cover four methods to get Java 17 installed on Ubuntu 22.04|20.04|18.04:

OpenJDK from APT repos
OpenJDK manually
Oracle JDK/JRE from PPA
Oracle JDK manual install

Option 1 – Install OpenJDK 17 on Ubuntu 22.04|20.04|18.04 from APT

The easiest and quickest way is installing OpenJDK 17 on Ubuntu 22.04|20.04|18.04 from the OS upstream repositories.

sudo apt update
sudo apt install openjdk-17-jdk

For the JRE, run the following command:

sudo apt install openjdk-17-jre

Check the Java version to validate a successful installation:

$ java --version
openjdk 17.0.3 2022-04-19
OpenJDK Runtime Environment (build 17.0.3+7-Ubuntu-0ubuntu0.22.04.1)
OpenJDK 64-Bit Server VM (build 17.0.3+7-Ubuntu-0ubuntu0.22.04.1, mixed mode, sharing)

Option 2 – Install OpenJDK 17 on Ubuntu 22.04|20.04|18.04 Manually

Java OpenJDK 17 is an open-source implementation of the Java SE platform. Since the OpenJDK versions available in the default repositories are not always up to date, we can download the open-source JDK 17 using the wget command.

### Linux 64-bit ###
wget https://download.java.net/java/GA/jdk17.0.2/dfd4a8d0985749f896bed50d7138ee7f/8/GPL/openjdk-17.0.2_linux-x64_bin.tar.gz

### Linux ARM64 ###
wget https://download.java.net/java/GA/jdk17.0.2/dfd4a8d0985749f896bed50d7138ee7f/8/GPL/openjdk-17.0.2_linux-aarch64_bin.tar.gz

Extract the downloaded tarball.

### Linux 64-bit ###
tar xvf openjdk-17.0.2_linux-x64_bin.tar.gz

### Linux ARM64 ###
tar xvf openjdk-17.0.2_linux-aarch64_bin.tar.gz

Then move the extracted files to the /opt/ directory as shown.

sudo mv jdk-17.0.2/ /opt/jdk-17/

Set the environment variables.

echo 'export JAVA_HOME=/opt/jdk-17' | sudo tee /etc/profile.d/java17.sh
echo 'export PATH=$JAVA_HOME/bin:$PATH' | sudo tee -a /etc/profile.d/java17.sh
source /etc/profile.d/java17.sh

Verify the Java installation as below.

$ echo $JAVA_HOME
/opt/jdk-17

Check the installed version of Java.
$ java --version
openjdk 17.0.2 2022-01-18
OpenJDK Runtime Environment (build 17.0.2+8-86)
OpenJDK 64-Bit Server VM (build 17.0.2+8-86, mixed mode, sharing)

Option 3 – Install Oracle JDK 17 on Ubuntu 22.04|20.04|18.04

For this option, we download production-ready Java from the Java SE Downloads page. Alternatively, one can install Oracle JDK 17 on Ubuntu using the "Linux Uprising" team PPA installer script, which automatically downloads and sets the default Java version on Ubuntu 64-bit based systems.
Clean up the previous OpenJDK installation:

sudo rm /etc/profile.d/java17.sh
exit

Add the PPA repository to your Ubuntu system.

sudo add-apt-repository ppa:linuxuprising/java

Then install Oracle JDK 17 as shown.

sudo apt update
sudo apt install oracle-java17-installer

Accept the installation prompts:

Reading package lists... Done
Building dependency tree
Reading state information... Done
The following additional packages will be installed:
  binutils binutils-common binutils-x86-64-linux-gnu gsfonts gsfonts-x11 java-common libbinutils libctf-nobfd0 libctf0 libfontenc1 oracle-java17-set-default x11-common xfonts-encodings xfonts-utils
Suggested packages:
  binutils-doc binfmt-support visualvm ttf-baekmuk | ttf-unfonts | ttf-unfonts-core ttf-kochi-gothic | ttf-sazanami-gothic ttf-kochi-mincho | ttf-sazanami-mincho ttf-arphic-uming firefox | firefox-2 | iceweasel | mozilla-firefox | iceape-browser | mozilla-browser | epiphany-gecko | epiphany-webkit | epiphany-browser | galeon | midbrowser | moblin-web-browser | xulrunner | xulrunner-1.9 | konqueror | chromium-browser | midori | google-chrome
The following NEW packages will be installed:
  binutils binutils-common binutils-x86-64-linux-gnu gsfonts gsfonts-x11 java-common libbinutils libctf-nobfd0 libctf0 libfontenc1 oracle-java17-installer oracle-java17-set-default x11-common xfonts-encodings xfonts-utils
0 upgraded, 15 newly installed, 0 to remove and 68 not upgraded.
Need to get 6260 kB of archives.
After this operation, 20.0 MB of additional disk space will be used.
Do you want to continue? [Y/n] y

You will see an installer window where you are asked to agree to the license terms. Agree to the license terms by selecting Yes.

Verify the installed version of Java:

$ java -version
java version "17.0.1" 2021-10-19 LTS
Java(TM) SE Runtime Environment (build 17.0.1+12-LTS-39)
Java HotSpot(TM) 64-Bit Server VM (build 17.0.1+12-LTS-39, mixed mode, sharing)

$ javac -version
javac 17

Option 4 – Manually install Oracle JDK 17 on Ubuntu 22.04|20.04|18.04

The official Oracle JDK is a development environment for building applications and components using the Java programming language. The toolkit includes tools for developing and testing programs written in the Java programming language that run on the Java platform.

We'll download the Oracle JDK 17 Debian installer:

wget https://download.oracle.com/java/17/archive/jdk-17.0.3_linux-x64_bin.deb

Run the installer after it's downloaded:

sudo apt install ./jdk-17.0.3_linux-x64_bin.deb

Additional dependencies should be installed automatically:

Reading package lists... Done
Building dependency tree
Reading state information... Done
Note, selecting 'jdk-17' instead of './jdk-17.0.3_linux-x64_bin.deb'
The following additional packages will be installed:
  libc6-i386 libc6-x32 libxtst6
The following NEW packages will be installed:
  jdk-17 libc6-i386 libc6-x32 libxtst6
0 upgraded, 4 newly installed, 0 to remove and 63 not upgraded.
Need to get 5517 kB/161 MB of archives.
After this operation, 346 MB of additional disk space will be used.
Do you want to continue?
[Y/n] y

Set the JAVA environment:

echo 'export JAVA_HOME=/usr/lib/jvm/jdk-17/' | sudo tee -a /etc/profile
echo 'export PATH=$PATH:$JAVA_HOME/bin' | sudo tee -a /etc/profile

Source the file and confirm the Java version:

$ source /etc/profile
$ echo $JAVA_HOME
/usr/lib/jvm/jdk-17/
$ java -version
java version "17.0.2" 2022-01-18 LTS
Java(TM) SE Runtime Environment (build 17.0.2+8-LTS-86)
Java HotSpot(TM) 64-Bit Server VM (build 17.0.2+8-LTS-86, mixed mode, sharing)

Set the Default Java Version on Ubuntu 22.04|20.04|18.04

Setting the default Java version applies where you have multiple Java versions installed on your system. First, list all the available versions:

sudo update-alternatives --config java

Output:

There are 2 choices for the alternative java (providing /usr/bin/java).
  Selection    Path                                           Priority   Status
------------------------------------------------------------
* 0            /usr/lib/jvm/java-11-openjdk-amd64/bin/java     1111      auto mode
  1            /usr/lib/jvm/java-11-openjdk-amd64/bin/java     1111      manual mode
  2            /usr/lib/jvm/java-17-oracle/bin/java            1091      manual mode

Press <enter> to keep the current choice[*], or type selection number: 2

From the output, select the Java version you want as the default by entering its number as shown above. Verify the selected version:

$ java -version
java version "17.0.1" 2021-10-19 LTS
Java(TM) SE Runtime Environment (build 17.0.1+12-LTS-39)
Java HotSpot(TM) 64-Bit Server VM (build 17.0.1+12-LTS-39, mixed mode, sharing)

Setting the JAVA_HOME Environment Variable

Setting the JAVA_HOME environment variable is important, as Java applications use it to determine the install location of Java and the exact version to use when running. We will set a persistent path, so we edit the file /etc/profile as below.

sudo vi /etc/profile

In the file, add the Java path as shown.

JAVA_HOME="/path/to/java/install"

In this case, my Java path will be:

JAVA_HOME="/usr/lib/jvm/java-17-oracle"

For these changes to apply, log out and back in, or use the source command:

source /etc/profile

Verify the set variable:

$ echo $JAVA_HOME
/usr/lib/jvm/java-17-oracle

Test the Java Installation

We will now test the Java installation using a simple Java source file. In this guide, we create a file with the name Hello_World.java:

cat > Hello_World.java
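The original post truncates at the cat command above. A plausible completion, assuming a minimal hello-world class (Java 11 and later can run a single source file directly, so no separate javac step is needed):

cat > Hello_World.java <<'EOF'
public class Hello_World {
    public static void main(String[] args) {
        System.out.println("Hello, World!");
    }
}
EOF
java Hello_World.java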
how to remove openjdk
I have a container running Ubuntu 16.04, and it has OpenJDK installed in a custom path:
root@8dd3cb6eda0b:/opt/couchbase/lib/cbas/runtime/bin# ./java --version
openjdk 11.0.4 2019-07-16
OpenJDK Runtime Environment AdoptOpenJDK (build 11.0.4+11)
OpenJDK 64-Bit Server VM AdoptOpenJDK (build 11.0.4+11, mixed mode)
I want to remove this, but I don't see an openjdk package installed:
root@8dd3cb6eda0b:/…
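Since the runtime lives under a custom path rather than coming from a package, apt has nothing to remove. A hedged way to confirm this before deleting the directory manually:

dpkg -S /opt/couchbase/lib/cbas/runtime/bin/java || echo "not owned by any package"
dpkg -l | grep -i -E 'openjdk|oracle-java' || echo "no JDK packages installed"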
In this guide, I'll show you how to set the default Java version on an Ubuntu / Debian Linux system. It is common to run more than one version of Java on your Ubuntu or Debian system, whether for development reasons or for varying application requirements. Earlier on, we did articles on installing Java on Ubuntu / Debian:

How to Install Java 11 on Ubuntu
How to Install Java 8 on Ubuntu
Install Java 17 (OpenJDK 17) on Debian

Suppose you install Java 11 and you had another version of Java installed earlier; you can select the default Java version to use with the update-alternatives --config java command.

Step 1: Check the Java versions installed on Ubuntu / Debian

To get a list of installed Java versions, run the command:

$ sudo update-java-alternatives --list
java-1.11.0-openjdk-amd64      1111       /usr/lib/jvm/java-1.11.0-openjdk-amd64
java-1.8.0-openjdk-amd64       1081       /usr/lib/jvm/java-1.8.0-openjdk-amd64

You'll get a list of all Java editions installed on your Debian / Ubuntu system. Identify the version you want to change to, then proceed to the next step.

Step 2: Set the default Java version on Ubuntu / Debian

Once you have the list of Java versions, set the default one by running the command. I'll change mine from Java 11 to Java 8:

$ sudo update-alternatives --config java
There are 2 choices for the alternative java (providing /usr/bin/java).

  Selection    Path                                             Priority   Status
------------------------------------------------------------
* 0            /usr/lib/jvm/java-11-openjdk-amd64/bin/java       1111      auto mode
  1            /usr/lib/jvm/java-11-openjdk-amd64/bin/java       1111      manual mode
  2            /usr/lib/jvm/java-8-openjdk-amd64/jre/bin/java    1081      manual mode

Press <enter> to keep the current choice[*], or type selection number: 2
update-alternatives: using /usr/lib/jvm/java-8-openjdk-amd64/jre/bin/java to provide /usr/bin/java (java) in manual mode

Check the Java version:

$ java -version
openjdk version "1.8.0_302"
OpenJDK Runtime Environment (build 1.8.0_302-8u302-b08-0ubuntu2-b08)
OpenJDK 64-Bit Server VM (build 25.302-b08, mixed mode)

The same can be done for javac:

$ sudo update-alternatives --config javac
There are 2 choices for the alternative javac (providing /usr/bin/javac).

  Selection    Path                                          Priority   Status
------------------------------------------------------------
* 0            /usr/lib/jvm/java-11-openjdk-amd64/bin/javac   1111      auto mode
  1            /usr/lib/jvm/java-11-openjdk-amd64/bin/javac   1111      manual mode
  2            /usr/lib/jvm/java-8-openjdk-amd64/bin/javac    1081      manual mode

Press <enter> to keep the current choice[*], or type selection number: 2
update-alternatives: using /usr/lib/jvm/java-8-openjdk-amd64/bin/javac to provide /usr/bin/javac (javac) in manual mode

If JAVA_HOME is not set correctly, run the command below to set it from the currently configured default Java:

export JAVA_HOME=$(readlink -f /usr/bin/java | sed "s:bin/java::")

For the JRE, use:

export JAVA_HOME=$(readlink -f /usr/bin/java | sed "s:jre/bin/java::")

Persistence can be achieved by placing the export command in your .bashrc or /etc/profile file.

$ vim ~/.bashrc
export JAVA_HOME=$(readlink -f /usr/bin/java | sed "s:bin/java::")

We hope this article helped you set the default Java version on your Ubuntu / Debian Linux system. Stay connected for more articles on Java and development tools.
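For scripted setups, the same switch can be made non-interactively with update-alternatives --set, using the exact paths shown in the menu above:

sudo update-alternatives --set java /usr/lib/jvm/java-8-openjdk-amd64/jre/bin/java
sudo update-alternatives --set javac /usr/lib/jvm/java-8-openjdk-amd64/bin/javac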
Notes on running a Jenkins master and slave in Docker containers and executing Python unittest under pyenv https://ift.tt/2Jrpja5
Prerequisites
- Environment for running Jenkins
- Python unittest

Starting the Jenkins container and initial setup
- Startup
- Initial setup

Project configuration

Preparing the slave environment
- Slave environment prerequisites
- Adding the slave
- Configuring the project to use the slave

Build

Wrap-up
Prerequisites

Environment for running Jenkins

Both the Jenkins master and the slave run as Docker containers.

The OS of the host running Docker is Ubuntu 16.04.

Python unittest

Python 3.6.4 is used via pyenv.

The following code and test code were prepared:
# sample.py
def foo():
    return True

def bar():
    return True

# tests/test_sample.py
import unittest
import sample

class SampleTest(unittest.TestCase):
    def test_foo(self):
        self.assertTrue(sample.foo())

    def test_bar(self):
        self.assertTrue(sample.bar())
Running it as follows executes the tests:

$ python -m unittest tests.test_sample
..
----------------------------------------------------------------------
Ran 2 tests in 0.000s

Note that this code is hosted on Backlog Git.
Starting the Jenkins container and initial setup

Startup

Run the following to start the Jenkins container:

mkdir -p ~/sandbox/jenkins/jenkins_home
cd ~/sandbox/jenkins/jenkins_home
docker run --name=jenkins -d -p 8080:8080 -p 50000:50000 -v $(pwd):/var/jenkins_home jenkins/jenkins:lts

Once it is up, proceed with the initial setup.

Initial setup

The following settings were configured:
Credentials

Plugin installation (this time, the Backlog plugin and the pyenv plugin were installed)

Project configuration

The project was configured with the following items:
General
Project name

Restrict where this project can be run (described later)
Source Code Management

Repository URL (Backlog Git was used this time)

Credentials (set the Backlog Git credentials)

Branch specifier (set to ** to target all branches)
Build Environment

Check "pyenv build wrapper"

Enter 3.6.4 in "The Python version"
Build

Enter python -m unittest tests.test_sample as the shell script
Preparing the slave environment

Slave environment prerequisites

Make SSH login possible.

Create a jenkins user and make sure the master can log in to it via SSH (password authentication this time, with the password written in plain text).

Set up a Java environment so that the agent program slave.jar can run.

Install any other packages the build needs (this time, the packages required to build Python).

In the end, the following Dockerfile was prepared:
FROM ubuntu:16.04
#
RUN apt-get update
RUN apt-get -y install sudo openssh-server openjdk-8-jdk git gcc make openssl libssl-dev libbz2-dev libreadline-dev libsqlite3-dev
#
RUN mkdir -p /var/run/sshd
RUN useradd -d /home/jenkins -m -s /bin/bash jenkins
RUN echo jenkins:your_password | chpasswd
RUN echo 'jenkins ALL=(ALL) NOPASSWD:ALL' >> /etc/sudoers
#
EXPOSE 22
CMD ["/usr/sbin/sshd","-D"]
Build and run it as follows:

docker build -t jenkins-slave .
docker run --name=jenkins-slave -t -d -p 22:22 jenkins-slave
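Before registering the node in Jenkins, it helps to grab the container's IP address and confirm that SSH and Java respond; a hedged sketch, where <container-ip> is a placeholder and the password is the your_password placeholder from the Dockerfile:

docker inspect -f '{{.NetworkSettings.IPAddress}}' jenkins-slave   # note this IP for the node config
ssh jenkins@<container-ip> 'java -version'                         # should report 1.8.0_xxx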
Adding the slave

Under [Manage Jenkins] > [Manage Nodes], click [New Node] and start with the following settings:

Node name

Check Permanent Agent

Then set the following parameters:

Remote root directory (specify /home/jenkins, the home directory of the jenkins user created on the slave)

Launch method (select "Launch slave agents on Unix machines via SSH")

Host (enter the container's IP address)

Credentials (it is convenient to register the slave's jenkins user password in the Jenkins credentials store beforehand)

Host Key Verification Strategy (select "Non verifying Verification Strategy")

When the slave is added successfully, a log like the following is printed:
[03/18/18 13:41:42] [SSH] Opening SSH connection to xxx.xxx.xxx.xxx:22. [03/18/18 13:41:42] [SSH] WARNING: SSH Host Keys are not being verified. Man-in-the-middle attacks may be possible against this connection. [03/18/18 13:41:43] [SSH] Authentication successful. [03/18/18 13:41:43] [SSH] The remote user's environment is: BASH=/bin/bash BASHOPTS=cmdhist:complete_fullquote:extquote:force_fignore:hostcomplete:interactive_comments:progcomp:promptvars:sourcepath BASH_ALIASES=() BASH_ARGC=() BASH_ARGV=() BASH_CMDS=() BASH_EXECUTION_STRING=set BASH_LINENO=() BASH_SOURCE=() BASH_VERSINFO=([0]="4" [1]="3" [2]="48" [3]="1" [4]="release" [5]="x86_64-pc-linux-gnu") BASH_VERSION='4.3.48(1)-release' DIRSTACK=() EUID=1000 GROUPS=() HOME=/home/jenkins HOSTNAME=030406dab1e6 HOSTTYPE=x86_64 IFS=$' \t\n' LOGNAME=jenkins MACHTYPE=x86_64-pc-linux-gnu MAIL=/var/mail/jenkins OPTERR=1 OPTIND=1 OSTYPE=linux-gnu PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games PIPESTATUS=([0]="0") PPID=36 PS4='+ ' PWD=/home/jenkins SHELL=/bin/bash SHELLOPTS=braceexpand:hashall:interactive-comments SHLVL=1 SSH_CLIENT='172.17.0.2 59868 22' SSH_CONNECTION='172.17.0.2 59868 xxx.xxx.xxx.xxx 22' TERM=dumb UID=1000 USER=jenkins _=']' [03/18/18 13:41:43] [SSH] Checking java version of java [03/18/18 13:41:43] [SSH] java -version returned 1.8.0_151. [03/18/18 13:41:43] [SSH] Starting sftp client. [03/18/18 13:41:43] [SSH] Copying latest slave.jar... [03/18/18 13:41:43] [SSH] Copied 762,466 bytes. Expanded the channel window size to 4MB [03/18/18 13:41:43] [SSH] Starting slave process: cd "/home/jenkins" && java -jar slave.jar <===[JENKINS REMOTING CAPACITY]===>channel started Remoting version: 3.17 This is a Unix agent Evacuated stdout Agent successfully connected and online
Configuring the project to use the slave

Back in the project configuration, set the following:

General

Restrict where this project can be run

Label expression (enter the name of the slave you added)

Build

Now just click the project's Build button, and the build runs as follows:
Started by user xxxxxxxxxxxxx Building remotely on jenkins-slave in workspace /home/jenkins/workspace/my-project > git rev-parse --is-inside-work-tree # timeout=10 Fetching changes from the remote Git repository > git config remote.origin.url https://xxxxxxxxxx.backlog.jp/git/FAM/my-project.git # timeout=10 Fetching upstream changes from https://xxxxxxxxxx.backlog.jp/git/FAM/my-project.git > git --version # timeout=10 using GIT_ASKPASS to set credentials > git fetch --tags --progress https://xxxxxxxxxx.backlog.jp/git/FAM/my-project.git +refs/heads/*:refs/remotes/origin/* Seen branch in repository origin/master Seen 1 remote branch > git show-ref --tags -d # timeout=10 Checking out Revision c8f3fc3b465451fbce37a1eb4789964c60ab22b5 (origin/master) > git config core.sparsecheckout # timeout=10 > git checkout -f c8f3fc3b465451fbce37a1eb4789964c60ab22b5 Commit message: "add files" > git rev-list --no-walk c8f3fc3b465451fbce37a1eb4789964c60ab22b5 # timeout=10 $ bash -c "[ -d \$HOME/.pyenv ]" $ bash -c "cd /home/jenkins/workspace/my-project && env PYENV_ROOT\=\$HOME/.pyenv PYENV_VERSION\=3.6.4 \$HOME/.pyenv/bin/pyenv local 2>/dev/null || true" $ bash -c "mkdir \$HOME/.pyenv.lock" $ bash -c "env PYENV_ROOT\=\$HOME/.pyenv PYENV_VERSION\=3.6.4 \$HOME/.pyenv/bin/pyenv versions --bare" $ bash -c "env PYENV_ROOT\=\$HOME/.pyenv PYENV_VERSION\=3.6.4 \$HOME/.pyenv/bin/pyenv rehash" $ bash -c "env PYENV_ROOT\=\$HOME/.pyenv PYENV_VERSION\=3.6.4 \$HOME/.pyenv/bin/pyenv exec pip list" $ bash -c "env PYENV_ROOT\=\$HOME/.pyenv PYENV_VERSION\=3.6.4 \$HOME/.pyenv/bin/pyenv rehash" $ bash -c "rm -rf \$HOME/.pyenv.lock" [my-project] $ /bin/sh -xe /tmp/jenkins6857290211274972615.sh + python -m unittest tests.test_sample .. ---------------------------------------------------------------------- Ran 2 tests in 0.000s OK Finished: SUCCESS
Nice, that looks good.

Whether the slave was really used for the build can be confirmed under [Manage Jenkins] > [Manage Nodes] > [slave name] > [Build History].

Wrap-up

I noticed that, for an article about Jenkins, it has almost no screenshots.

So the road to becoming a Jenkins craftsman is still a long one.

The original article is here:

"Notes on running a Jenkins master and slave in Docker containers and executing Python unittest under pyenv"
April 09, 2018 at 02:00PM
Installing Jython 2.7.0 on Ubuntu Server 16.04.
Good evening, dear reader. This time I will show how I carried out my installation of Jython 2.7.0 on Ubuntu 16.04. The reason for installing this language is that I want Python 2 to interact with a Derby database. After browsing blogs (and very few cover this topic), I realized I could perhaps only achieve it through ODBC, while I have been rather set on using what I have always used for database connections: JDBC.

And since Derby is a database developed entirely in Java, compatibility and interaction are much more comfortable from the language with the coffee logo to my Derby database. If you have followed other posts on this blog, I am working on a project with certain constraints on processing, memory, and other resources. That is why I look for the lightest components possible, to achieve good synergy between those constraints and the technologies in use. Jython fits that description.

According to one description of Jython I found, the author calls it:

"A language that takes the elegance and power of Python, to run on a JVM"

Below is the procedure I ran on my server to install Jython 2.7.0:
Prerequisites

1. Download the Jython 2.7.0 installer

* That is: Jython 2.7.0 - Installer: executable jar for installing Jython

2. Create a target directory to hold the installation.

* In my case, /opt/jython

3. Have the Java JDK or OpenJDK 7 or 8 installed. (The installation notes do not require it, but I recommend it.)

* In my case:
jython@vps158271:/opt/jython$ java -version
java version "1.8.0_151"
Java(TM) SE Runtime Environment (build 1.8.0_151-b12)
Java HotSpot(TM) 64-Bit Server VM (build 25.151-b12, mixed mode)
4. Have Python 2 installed. (The installation notes do not require it, but I recommend it.)

* In my case:

jython@vps158271:/opt/jython$ python -V
Python 2.7.12
======================================================
Installation process
henry@vps158271:~/Downloads$ sudo java -jar jython-installer-2.7.0.jar
[sudo] password for henry:
Welcome to Jython !
You are about to install Jython version 2.7.0
(at any time, answer c to cancel the installation)
For the installation process, the following languages are available: English, German
Please select your language [E/g] >>> E
Do you want to read the license agreement now ? [y/N] >>> N
Do you accept the license agreement ? [Y/n] >>> Y
The following installation types are available:
  1. All (everything, including sources)
  2. Standard (core, library modules, demos and examples, documentation)
  3. Minimum (core)
  9. Standalone (a single, executable .jar)
Please select the installation type [ 1 /2/3/9] >>> 1
Do you want to exclude parts from the installation ? [y/N] >>> N
Please enter the target directory >>> /opt/jython
Your java version to start Jython is: Oracle Corporation / 1.8.0_151
Your operating system version is: Linux / 4.4.0-104-generic
Summary:
  - mod: true
  - demo: true
  - doc: true
  - src: true
  - ensurepip: true
  - JRE: /usr/lib/jvm/java-8-oracle/jre
Please confirm copying of files to directory /opt/jython [Y/n] >>> Y
 10 %
 20 %
 30 %
 40 %
 50 %
 60 %
 70 %
 80 %
Generating start scripts ...
Installing pip and setuptools
 90 %
Ignoring indexes: https://pypi.python.org/simple/
Downloading/unpacking setuptools
Downloading/unpacking pip
Installing collected packages: setuptools, pip
Successfully installed setuptools pip
Cleaning up...
100 %
Do you want to show the contents of README ? [y/N] >>> N
Congratulations! You successfully installed Jython 2.7.0 to directory /opt/jython.
henry@vps158271:~/Downloads$
======================================================
Post-installation

As a final task, I suggest adding the installation directory to the PATH of a user dedicated to Jython development. I created one called "jython"; logged in as jython, here is how I added it to the PATH:
jython@vps158271:~$ echo 'export PATH=$PATH:/opt/jython/bin' >> ~/.bashrc
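As a taste of the end goal (talking to Derby over JDBC), Jython can import Java's JDBC classes directly once the Derby jar is on the CLASSPATH. A minimal sketch; the /opt/derby path is only an assumption for illustration:

export CLASSPATH=/opt/derby/lib/derby.jar:$CLASSPATH   # hypothetical Derby install location
jython -c "from java.sql import DriverManager; print('JDBC classes are visible')"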
======================================================
Testing Jython
jython@vps158271:/opt/jython/bin$ jython
Jython 2.7.0 (default:9987c746f838, Apr 29 2015, 02:25:11)
[Java HotSpot(TM) 64-Bit Server VM (Oracle Corporation)] on java1.8.0_151
Type "help", "copyright", "credits" or "license" for more information.
>>> print "Hello";
Hello
>>>
That's it: Jython is now installed on Ubuntu 16.04.