Spark 2.3.1 Standalone Cluster

1. Download Spark 2.3.1
Download address: http://spark.apache.org/downloads.html
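The archive can also be fetched directly from the shell; a minimal sketch, assuming the Apache release archive (which keeps older versions) as the source:

wget https://archive.apache.org/dist/spark/spark-2.3.1/spark-2.3.1-bin-hadoop2.7.tgz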
2. Install Spark 2.3.1
   Upload the archive to the /usr/spark directory.
   Extract it:
  
tar -zxvf spark-2.3.1-bin-hadoop2.7.tgz

 3. Edit the /etc/hosts file so it contains the following:
vim /etc/hosts

127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.2.185 sky1
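On the 4-machine cluster used in step 4, every node needs an entry here, and the same /etc/hosts should be distributed to all machines. A sketch for the remaining nodes (the IPs below are hypothetical; substitute the real addresses):

192.168.2.186 sky2
192.168.2.187 sky3
192.168.2.188 sky4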

 Edit the /etc/sysconfig/network file as follows:
vim /etc/sysconfig/network

NETWORKING=yes
HOSTNAME=sky1
GATEWAY=192.168.2.1
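The HOSTNAME entry in /etc/sysconfig/network is honored by older init-based releases such as CentOS 6; on systemd-based systems (CentOS 7 and later), set the hostname instead with:

hostnamectl set-hostname sky1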

 
4. Edit the Spark configuration files (taking a 4-machine cluster as the example)
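A freshly unpacked distribution contains only template versions of these files, so create editable copies first (paths assume the directory extracted in step 2):

cd /usr/spark/spark-2.3.1-bin-hadoop2.7
cp conf/slaves.template conf/slaves
cp conf/spark-env.sh.template conf/spark-env.sh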
   conf/slaves
 vim conf/slaves
sky1
sky2
sky3
sky4



   conf/spark-env.sh
vim conf/spark-env.sh
export JAVA_HOME=/usr/java/jdk
export SPARK_MASTER_HOST=sky1
export SPARK_MASTER_PORT=7077
export SPARK_WORKER_CORES=1
export SPARK_WORKER_MEMORY=1g
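A few more variables from conf/spark-env.sh.template are often set alongside these (optional; the values shown are the documented defaults, kept here only for illustration):

export SPARK_MASTER_WEBUI_PORT=8080   # master web console port, used in step 7
export SPARK_WORKER_WEBUI_PORT=8081   # per-worker web console port
export SPARK_WORKER_INSTANCES=1       # worker processes per node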

 
5. After the edits are complete, copy the Spark directory to the other machines with scp
scp -r /usr/spark/spark-2.3.1-bin-hadoop2.7 root@sky2:/usr/spark
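With four machines, the same copy also has to reach sky3 and sky4; a small loop avoids repeating the command (assuming root SSH access to every node, as in the scp line above):

for host in sky2 sky3 sky4; do
  scp -r /usr/spark/spark-2.3.1-bin-hadoop2.7 root@$host:/usr/spark   # same path on every node
done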
 
6. Start Spark
Before starting, remember to stop the firewall (service iptables stop).
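On systemd-based systems (CentOS 7 and later), the firewall is managed by firewalld instead of the iptables service:

systemctl stop firewalld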
./sbin/start-all.sh
Other start/stop scripts (http://spark.apache.org/docs/latest/spark-standalone.html):
sbin/start-master.sh - Starts a master instance on the machine the script is executed on.
sbin/start-slaves.sh - Starts a slave instance on each machine specified in the conf/slaves file.
sbin/start-slave.sh - Starts a slave instance on the machine the script is executed on.
sbin/start-all.sh - Starts both a master and a number of slaves as described above.
sbin/stop-master.sh - Stops the master that was started via the sbin/start-master.sh script.
sbin/stop-slaves.sh - Stops all slave instances on the machines specified in the conf/slaves file.
sbin/stop-all.sh - Stops both the master and the slaves as described above.

 
  7. Check that everything started:
    Open http://IP:8080/ in a browser to view the Spark web console.
    Run netstat -antlp to check which ports the Spark processes are listening on.
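Two concrete checks, using the ports configured in step 4 (jps ships with the JDK at JAVA_HOME):

jps                          # on sky1, should list a Master JVM (and a Worker, since sky1 is also in conf/slaves)
netstat -antlp | grep 7077   # master RPC port set via SPARK_MASTER_PORT
netstat -antlp | grep 8080   # web console port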
   
  8. Test (http://spark.apache.org/docs/latest/submitting-applications.html)
     ./bin/spark-submit --class org.apache.spark.examples.SparkPi --master spark://sky1:7077 examples/jars/spark-examples_2.11-2.3.1.jar  10000
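On success, the driver output should end with an approximation line of roughly this form (the digits vary from run to run):

Pi is roughly 3.14159...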
   More submission examples from the documentation:
# Run application locally on 8 cores
./bin/spark-submit \
  --class org.apache.spark.examples.SparkPi \
  --master local[8] \
  /path/to/examples.jar \
  100

# Run on a Spark standalone cluster in client deploy mode
./bin/spark-submit \
  --class org.apache.spark.examples.SparkPi \
  --master spark://207.184.161.138:7077 \
  --executor-memory 20G \
  --total-executor-cores 100 \
  /path/to/examples.jar \
  1000

# Run on a Spark standalone cluster in cluster deploy mode with supervise
./bin/spark-submit \
  --class org.apache.spark.examples.SparkPi \
  --master spark://207.184.161.138:7077 \
  --deploy-mode cluster \
  --supervise \
  --executor-memory 20G \
  --total-executor-cores 100 \
  /path/to/examples.jar \
  1000

# Run on a YARN cluster (--deploy-mode can be "client" for client mode)
export HADOOP_CONF_DIR=XXX
./bin/spark-submit \
  --class org.apache.spark.examples.SparkPi \
  --master yarn \
  --deploy-mode cluster \
  --executor-memory 20G \
  --num-executors 50 \
  /path/to/examples.jar \
  1000

# Run a Python application on a Spark standalone cluster
./bin/spark-submit \
  --master spark://207.184.161.138:7077 \
  examples/src/main/python/pi.py \
  1000

# Run on a Mesos cluster in cluster deploy mode with supervise
./bin/spark-submit \
  --class org.apache.spark.examples.SparkPi \
  --master mesos://207.184.161.138:7077 \
  --deploy-mode cluster \
  --supervise \
  --executor-memory 20G \
  --total-executor-cores 100 \
  http://path/to/examples.jar \
  1000

# Run on a Kubernetes cluster in cluster deploy mode
./bin/spark-submit \
  --class org.apache.spark.examples.SparkPi \
  --master k8s://xx.yy.zz.ww:443 \
  --deploy-mode cluster \
  --executor-memory 20G \
  --num-executors 50 \
  http://path/to/examples.jar \
  1000
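Note: in Spark 2.3, the Kubernetes backend also requires a container image, passed as an extra option, e.g. --conf spark.kubernetes.container.image=<spark-image> (the image name here is a placeholder).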
