Hadoop 3.x Installation and Configuration

1. Installation and configuration
This guide covers a standalone (single-node) setup; the version installed here is Hadoop 3.1.3. Installation and configuration differ between Hadoop 2.x and Hadoop 3.x, so install the JDK first and have the Hadoop 3.x package ready; it can be downloaded from the official website.
Install openssh so files can be uploaded to the machines remotely (on every host)
[root@node03 ~]# yum -y install openssh-clients

Time synchronization tools (on every host)
# install the ntp and ntpdate tools
[root@node03 ~]# yum -y install ntp ntpdate
# synchronize the clock against an NTP server
[root@node03 ~]# ntpdate cn.pool.ntp.org
# write the system time to the hardware clock
[root@node03 ~]# hwclock --systohc
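Clocks drift again over time on a long-running cluster. A minimal sketch that re-syncs every hour via root's crontab (the ntpdate path is an assumption; check it with which ntpdate):
# append an hourly re-sync job to root's crontab
[root@node03 ~]# (crontab -l 2>/dev/null; echo '0 * * * * /usr/sbin/ntpdate cn.pool.ntp.org') | crontab -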

File upload (rz) and download (sz) tools
[root@node03 ~]# yum -y install lrzsz

Install a network download tool (one machine is enough)
[root@node03 ~]# yum -y install wget
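With wget in place, the Hadoop package can also be fetched directly from the Apache archive instead of being uploaded with rz (the URL below points at the 3.1.3 release):
[root@node03 ~]# wget https://archive.apache.org/dist/hadoop/common/hadoop-3.1.3/hadoop-3.1.3.tar.gz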

Disable the firewall (on every host)
# check the firewall status
[root@node03 ~]# systemctl status firewalld
# stop the firewall
[root@node03 ~]# systemctl stop firewalld
# keep the firewall from starting at boot
[root@node03 ~]# systemctl disable firewalld
# start the firewall
[root@node03 ~]# systemctl start firewalld
# start the firewall at boot
[root@node03 ~]# systemctl enable firewalld
# restart the firewall
[root@node03 ~]# systemctl restart firewalld
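If disabling the firewall outright is not an option, the relevant ports can be opened instead. A sketch assuming the ports used later in this guide (9000 for fs.defaultFS, 9870 for the NameNode UI, 8088 for the ResourceManager UI); a real cluster needs more:
[root@node03 ~]# firewall-cmd --permanent --add-port=9000/tcp --add-port=9870/tcp --add-port=8088/tcp
[root@node03 ~]# firewall-cmd --reload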

Configure passwordless SSH login
# edit the hosts file
[root@node03 ~]# vim /etc/hosts
# add the IP-to-hostname mapping; for a cluster, add an entry for every node
192.168.17.126 node03
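For a three-node cluster, every host's /etc/hosts carries all the mappings; the node01 and node02 addresses below are hypothetical placeholders:
192.168.17.124 node01
192.168.17.125 node02
192.168.17.126 node03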

Configure SSH
# ssh to the local host once so that the .ssh directory gets created
# answer no at the prompt
[root@node03 ~]# ssh node03
# enter the .ssh directory
[root@node03 ~]# cd ~/.ssh
# generate the key pair; -P '' sets an empty passphrase, so no prompts appear
[root@node03 .ssh]# ssh-keygen -t rsa -P ''
[root@node03 .ssh]# cp id_rsa.pub authorized_keys
# the steps below are only needed for a multi-node cluster
# merge every node's public key into node01's authorized_keys
[root@node03 .ssh]# cat ~/.ssh/authorized_keys | ssh root@node01 'cat >> ~/.ssh/authorized_keys'
[root@node02 .ssh]# cat ~/.ssh/authorized_keys | ssh root@node01 'cat >> ~/.ssh/authorized_keys'
# distribute the merged file back to the other nodes
[root@node01 .ssh]# scp ~/.ssh/authorized_keys root@node02:~/.ssh/
[root@node01 .ssh]# scp ~/.ssh/authorized_keys root@node03:~/.ssh/
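To confirm the exchange worked, every node should now reach every other node without a password prompt; a quick check from any one node (hostnames as mapped in /etc/hosts above):
[root@node03 ~]# for h in node01 node02 node03; do ssh $h hostname; done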

Install the JDK
Uninstall any preinstalled JDK
# check for an existing jdk
[root@node03 ~]# rpm -qa|grep jdk
# if the command lists installed jdk packages, remove them; otherwise skip this step
# remove them with
[root@node03 ~]# yum -y remove <jdk package names listed above>

Install the JDK
# create a directory to hold the jdk
[root@node03 ~]# mkdir -p /opt/module/Java/
# enter the Java directory
[root@node03 ~]# cd /opt/module/Java/
# use rz to upload the jdk package from Windows to node03
[root@node03 Java]# rz
# extract the archive
[root@node03 Java]# tar -zxvf jdk-8u212-linux-x64.tar.gz
# configure the JDK environment variables
[root@node03 Java]# vi /etc/profile
# jdk environment variables
export JAVA_HOME=/opt/module/Java/jdk1.8.0_212
export JRE_HOME=/opt/module/Java/jdk1.8.0_212/jre
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar:$JRE_HOME/lib/rt.jar
export PATH=$PATH:$JAVA_HOME/bin:$JRE_HOME/bin

# reload the profile so the variables take effect
[root@node03 Java]# source /etc/profile
# verify the installation
[root@node03 Java]# java -version
java version "1.8.0_212"
Java(TM) SE Runtime Environment (build 1.8.0_212-b10)
Java HotSpot(TM) 64-Bit Server VM (build 25.212-b10, mixed mode)

Install Hadoop
# create the directory
[root@node03 ~]# mkdir -p /opt/module/Hadoop/
# enter the Hadoop directory
[root@node03 ~]# cd /opt/module/Hadoop/
# use rz to upload the hadoop package to node03
[root@node03 Hadoop]# rz
# extract the archive
[root@node03 Hadoop]# tar -zxvf hadoop-3.1.3.tar.gz
# configure the Hadoop environment variables (append to /etc/profile, as with the JDK)
# hadoop environment variables
export HADOOP_HOME=/opt/module/Hadoop/hadoop-3.1.3
export HADOOP_INSTALL=$HADOOP_HOME
export HADOOP_MAPRED_HOME=$HADOOP_HOME
export HADOOP_HDFS_HOME=$HADOOP_HOME
export HADOOP_COMMON_HOME=$HADOOP_HOME
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
export PATH=$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$PATH
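As with the JDK, reload the profile and confirm that the hadoop binary resolves:
[root@node03 Hadoop]# source /etc/profile
[root@node03 Hadoop]# hadoop version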

Hadoop configuration files
# enter Hadoop's configuration directory
[root@node03 Hadoop]# cd /opt/module/Hadoop/hadoop-3.1.3/etc/hadoop

Modify the hadoop-env.sh file
[root@node03 hadoop]# vi hadoop-env.sh
# add the following content
export JAVA_HOME=/opt/module/Java/jdk1.8.0_212
export HADOOP_HOME=/opt/module/Hadoop/hadoop-3.1.3
export PATH=$PATH:${HADOOP_HOME}/bin
export HADOOP_OPTS="-Djava.library.path=${HADOOP_HOME}/lib/native"
export HADOOP_LOG_DIR=${HADOOP_HOME}/logs
# PID directory; by default PID files land under /tmp, which the system cleans periodically,
# after which HDFS can no longer be shut down cleanly
export HADOOP_PID_DIR=${HADOOP_HOME}/pids
# debug option; uncomment when troubleshooting to print logs to the console
#export HADOOP_ROOT_LOGGER=DEBUG,console

Modify the yarn-env.sh file
[root@node03 hadoop]# vi yarn-env.sh
# add the following content
YARN_RESOURCEMANAGER_USER=root
HADOOP_SECURE_DN_USER=yarn
YARN_NODEMANAGER_USER=root

Modify core-site.xml
[root@node03 hadoop]# vi core-site.xml
# add the following content
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://localhost:9000</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>file:/opt/module/Hadoop/hadoop-3.1.3/tmp</value>
    </property>
</configuration>
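Note that fs.defaultFS points at localhost here because this is a single-node setup; on a real cluster it would carry the master's hostname instead. Once the environment variables are loaded, the effective value can be read back, which catches XML typos early:
[root@node03 hadoop]# hdfs getconf -confKey fs.defaultFS
hdfs://localhost:9000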

Modify the hdfs-site.xml file
[root@node03 hadoop]# vi hdfs-site.xml
# add the following content
<configuration>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>/opt/module/Hadoop/hadoop-3.1.3/dfs/data</value>
    </property>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>/opt/module/Hadoop/hadoop-3.1.3/dfs/name</value>
    </property>
    <property>
        <name>dfs.permissions</name>
        <value>false</value>
    </property>
</configuration>
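The name and data directories configured above can be created up front; this is optional, since the format step and the DataNode will also create them on first use:
[root@node03 hadoop]# mkdir -p /opt/module/Hadoop/hadoop-3.1.3/dfs/{name,data}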

Modify the mapred-site.xml file
[root@node03 hadoop]# vi mapred-site.xml
# add the following content
<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
    <property>
        <name>mapreduce.application.classpath</name>
        <value>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/*:$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib/*</value>
    </property>
    <property>
        <name>mapred.job.tracker</name>
        <value>node03:9001</value>
    </property>
</configuration>

Modify the yarn-site.xml file
[root@node03 hadoop]# vi yarn-site.xml
# add the following content
<configuration>

<!-- Site specific YARN configuration properties -->
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <name>yarn.nodemanager.env-whitelist</name>
        <value>JAVA_HOME,HADOOP_COMMON_HOME,HADOOP_HDFS_HOME,HADOOP_CONF_DIR,CLASSPATH_PREPEND_DISTCACHE,HADOOP_YARN_HOME,HADOOP_MAPRED_HOME</value>
    </property>
    <property>
        <name>yarn.resourcemanager.hostname</name>
        <value>node03</value>
    </property>
    <property>
        <name>yarn.log-aggregation-enable</name>
        <value>true</value>
    </property>
    <property>
        <name>yarn.log-aggregation.retain-seconds</name>
        <value>604800</value>
    </property>
</configuration>
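Once the services are running (see the startup section below), NodeManager registration with the ResourceManager can be verified with:
[root@node03 hadoop]# yarn node -list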

Modify the workers file (the counterpart of slaves in Hadoop 2.x)
[root@node03 hadoop]# vi workers
# remove the default localhost entry; for a cluster, list the hostname of every worker node, one per line
node03

Modify start-dfs.sh and stop-dfs.sh under sbin
[root@node03 hadoop]# cd /opt/module/Hadoop/hadoop-3.1.3/sbin/
[root@node03 sbin]# vi start-dfs.sh
# add the following at the top of the file
#!/usr/bin/env bash
HDFS_DATANODE_USER=root
HDFS_DATANODE_SECURE_USER=hdfs
HDFS_NAMENODE_USER=root
HDFS_SECONDARYNAMENODE_USER=root
[root@node03 sbin]# vi stop-dfs.sh 
# add the following at the top of the file
#!/usr/bin/env bash
HDFS_DATANODE_USER=root
HDFS_DATANODE_SECURE_USER=hdfs
HDFS_NAMENODE_USER=root
HDFS_SECONDARYNAMENODE_USER=root

Modify start-yarn.sh and stop-yarn.sh under sbin
[root@node03 sbin]# vi start-yarn.sh
# add the following at the top of the file
#!/usr/bin/env bash
YARN_RESOURCEMANAGER_USER=root
HADOOP_SECURE_DN_USER=yarn
YARN_NODEMANAGER_USER=root
[root@node03 sbin]# vi stop-yarn.sh
# add the following at the top of the file
#!/usr/bin/env bash
YARN_RESOURCEMANAGER_USER=root
HADOOP_SECURE_DN_USER=yarn
YARN_NODEMANAGER_USER=root
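A quick grep confirms the user variables landed in all four scripts:
[root@node03 sbin]# grep -n "_USER" start-dfs.sh stop-dfs.sh start-yarn.sh stop-yarn.sh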

Format the namenode
[root@node03 ~]# hadoop namenode -format
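In Hadoop 3.x the hadoop namenode command still works but prints a deprecation warning; the preferred equivalent is:
[root@node03 ~]# hdfs namenode -format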

Start the services (on a multi-node cluster, running this on the master host is enough)
[root@node03 ~]# start-all.sh
Starting namenodes on [localhost]
Last login: Tue Dec 24 17:10:42 CST 2019 on pts/7
Starting datanodes
Last login: Tue Dec 24 17:10:53 CST 2019 on pts/7
Starting secondary namenodes [node03]
Last login: Tue Dec 24 17:10:55 CST 2019 on pts/7
Starting resourcemanager
Last login: Tue Dec 24 17:10:59 CST 2019 on pts/7
Starting nodemanagers
Last login: Tue Dec 24 17:11:03 CST 2019 on pts/7
# check the running processes with jps
[root@node03 ~]# jps
15459 Jps
14361 NameNode
14683 SecondaryNameNode
14493 DataNode
14957 ResourceManager
15101 NodeManager
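Beyond jps, the web UIs confirm the daemons are actually serving: in Hadoop 3.x the NameNode UI moved to port 9870 (50070 in 2.x) and the ResourceManager UI stays on 8088. A headless check with curl:
[root@node03 ~]# curl -sf http://node03:9870/ >/dev/null && echo "NameNode UI up"
[root@node03 ~]# curl -sf http://node03:8088/ >/dev/null && echo "ResourceManager UI up"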

Test MapReduce with the wordcount example that ships with Hadoop
Create a local file
# create a directory to hold the data file
[root@node03 ~]# mkdir -p /opt/module/mydata
# enter the mydata directory
[root@node03 ~]# cd /opt/module/mydata
# create the data file
[root@node03 mydata]# vi word.txt
# enter some arbitrary content, for example:
I am student
ni hao
haha ha

Upload the local file to HDFS
# create an HDFS directory
[root@node03 mydata]# hadoop fs -mkdir -p /hyk/data/input
# upload the local file to the HDFS directory just created
[root@node03 mydata]# hadoop fs -put word.txt /hyk/data/input
2019-12-24 17:22:52,240 INFO sasl.SaslDataTransferClient: SASL encryption trust check: localHostTrusted = false, remoteHostTrusted = false
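Confirm the upload landed:
[root@node03 mydata]# hadoop fs -ls /hyk/data/input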

Test MapReduce
[root@node03 mydata]# hadoop jar /opt/module/Hadoop/hadoop-3.1.3/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.1.3.jar wordcount /hyk/data/input /hyk/data/output
2019-12-24 17:27:48,152 INFO client.RMProxy: Connecting to ResourceManager at node03/192.168.17.128:8032
2019-12-24 17:27:48,645 INFO mapreduce.JobResourceUploader: Disabling Erasure Coding for path: /tmp/hadoop-yarn/staging/root/.staging/job_1577178667536_0001
2019-12-24 17:27:48,785 INFO sasl.SaslDataTransferClient: SASL encryption trust check: localHostTrusted = false, remoteHostTrusted = false
2019-12-24 17:27:48,918 INFO input.FileInputFormat: Total input files to process : 1
2019-12-24 17:27:48,967 INFO sasl.SaslDataTransferClient: SASL encryption trust check: localHostTrusted = false, remoteHostTrusted = false
2019-12-24 17:27:49,402 INFO sasl.SaslDataTransferClient: SASL encryption trust check: localHostTrusted = false, remoteHostTrusted = false
2019-12-24 17:27:49,818 INFO mapreduce.JobSubmitter: number of splits:1
2019-12-24 17:27:49,972 INFO sasl.SaslDataTransferClient: SASL encryption trust check: localHostTrusted = false, remoteHostTrusted = false
2019-12-24 17:27:50,392 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1577178667536_0001
2019-12-24 17:27:50,393 INFO mapreduce.JobSubmitter: Executing with tokens: []
2019-12-24 17:27:50,581 INFO conf.Configuration: resource-types.xml not found
2019-12-24 17:27:50,581 INFO resource.ResourceUtils: Unable to find 'resource-types.xml'.
2019-12-24 17:27:50,968 INFO impl.YarnClientImpl: Submitted application application_1577178667536_0001
2019-12-24 17:27:51,015 INFO mapreduce.Job: The url to track the job: http://node03:8088/proxy/application_1577178667536_0001/
2019-12-24 17:27:51,015 INFO mapreduce.Job: Running job: job_1577178667536_0001
2019-12-24 17:27:59,251 INFO mapreduce.Job: Job job_1577178667536_0001 running in uber mode : false
2019-12-24 17:27:59,260 INFO mapreduce.Job:  map 0% reduce 0%
2019-12-24 17:28:04,336 INFO mapreduce.Job:  map 100% reduce 0%
2019-12-24 17:28:10,385 INFO mapreduce.Job:  map 100% reduce 100%
2019-12-24 17:28:11,399 INFO mapreduce.Job: Job job_1577178667536_0001 completed successfully
2019-12-24 17:28:11,493 INFO mapreduce.Job: Counters: 53
	File System Counters
		FILE: Number of bytes read=95
		FILE: Number of bytes written=435315
		FILE: Number of read operations=0
		FILE: Number of large read operations=0
		FILE: Number of write operations=0
		HDFS: Number of bytes read=152
		HDFS: Number of bytes written=57
		HDFS: Number of read operations=8
		HDFS: Number of large read operations=0
		HDFS: Number of write operations=2
	Job Counters 
		Launched map tasks=1
		Launched reduce tasks=1
		Data-local map tasks=1
		Total time spent by all maps in occupied slots (ms)=3208
		Total time spent by all reduces in occupied slots (ms)=2989
		Total time spent by all map tasks (ms)=3208
		Total time spent by all reduce tasks (ms)=2989
		Total vcore-milliseconds taken by all map tasks=3208
		Total vcore-milliseconds taken by all reduce tasks=2989
		Total megabyte-milliseconds taken by all map tasks=3284992
		Total megabyte-milliseconds taken by all reduce tasks=3060736
	Map-Reduce Framework
		Map input records=4
		Map output records=8
		Map output bytes=73
		Map output materialized bytes=95
		Input split bytes=110
		Combine input records=8
		Combine output records=8
		Reduce input groups=8
		Reduce shuffle bytes=95
		Reduce input records=8
		Reduce output records=8
		Spilled Records=16
		Shuffled Maps =1
		Failed Shuffles=0
		Merged Map outputs=1
		GC time elapsed (ms)=146
		CPU time spent (ms)=1090
		Physical memory (bytes) snapshot=286482432
		Virtual memory (bytes) snapshot=5045534720
		Total committed heap usage (bytes)=138194944
		Peak Map Physical memory (bytes)=186015744
		Peak Map Virtual memory (bytes)=2519392256
		Peak Reduce Physical memory (bytes)=100466688
		Peak Reduce Virtual memory (bytes)=2526142464
	Shuffle Errors
		BAD_ID=0
		CONNECTION=0
		IO_ERROR=0
		WRONG_LENGTH=0
		WRONG_MAP=0
		WRONG_REDUCE=0
	File Input Format Counters 
		Bytes Read=42
	File Output Format Counters 
		Bytes Written=57

View the word-count results
[root@node03 mydata]# hadoop fs -cat /hyk/data/output/*
2019-12-24 17:32:29,752 INFO sasl.SaslDataTransferClient: SASL encryption trust check: localHostTrusted = false, remoteHostTrusted = false
I	1
am	1
ha	1
haha	1
hao	1
ni	1
student	1
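MapReduce refuses to overwrite an existing output directory, so to rerun the job the output path must be removed first:
[root@node03 mydata]# hadoop fs -rm -r /hyk/data/output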

This completes the Hadoop 3.x deployment. If you run into problems, feel free to discuss them in the comments.
