Big Data Development, Hive Part 18 - The Hive Trash (Recycle Bin)
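Hive's "trash" is really the HDFS trash: when it is enabled, files deleted from HDFS (including the data of a dropped managed table) are moved under /user/<user>/.Trash/Current instead of being removed right away, and are kept there for fs.trash.interval minutes. Whether trash is enabled on your cluster can be checked from the shell; a minimal sketch (the printed value is whatever your core-site.xml sets, and 0 means the trash is disabled):

[root@hp1 ~]# hdfs getconf -confKey fs.trash.interval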
Simulating an accidental table drop
Here we drop the table by mistake:
hive>
> drop table ods_fact_sale_orc;
OK
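This recovery only works because the drop went through the trash. Hive can also skip the trash entirely: DROP TABLE ... PURGE, or a table created with TBLPROPERTIES ('auto.purge'='true'), deletes the files immediately and leaves nothing to restore. A sketch of the variant to avoid whenever you might still need the data:

hive> drop table ods_fact_sale_orc purge;

With PURGE the data never appears under .Trash, and none of the steps below would help.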
Recovering the table from the trash
View the table in the trash:
[root@hp1 ~]# hadoop fs -ls /user/root/.Trash/Current/user/hive/warehouse/test.db
Found 2 items
drwxrwxrwt - root hive 0 2020-12-02 19:18 /user/root/.Trash/Current/user/hive/warehouse/test.db/dm_sale_orc
drwxrwxrwt - root hive 0 2020-12-02 18:11 /user/root/.Trash/Current/user/hive/warehouse/test.db/ods_fact_sale_orc
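The path under .Trash/Current mirrors the original location, which is how the dropped table's directory can be found again: /user/hive/warehouse/test.db/ods_fact_sale_orc becomes /user/root/.Trash/Current/user/hive/warehouse/test.db/ods_fact_sale_orc. Before copying it back, it can be worth checking how much data is there; a sketch using hadoop fs -du:

[root@hp1 ~]# hadoop fs -du -h /user/root/.Trash/Current/user/hive/warehouse/test.db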
Copy the data out of the trash:
[root@hp1 ~]# hadoop fs -cp /user/root/.Trash/Current/user/hive/warehouse/test.db/ods_fact_sale_orc /user/hive/warehouse/ods_fact_sale_orc
[root@hp1 ~]#
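hadoop fs -cp rewrites every block, so the copy can be slow for a large table, but it keeps the original in the trash as a safety net. If you want to avoid the duplicate I/O, the same step can be done as a move instead; a sketch:

[root@hp1 ~]# hadoop fs -mv /user/root/.Trash/Current/user/hive/warehouse/test.db/ods_fact_sale_orc /user/hive/warehouse/ods_fact_sale_orc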
Restore and verify the data
[root@hp1 ~]# hive
WARNING: Use "yarn jar" to launch YARN applications.
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/cloudera/parcels/CDH-6.3.1-1.cdh6.3.1.p0.1470567/jars/log4j-slf4j-impl-2.8.2.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/cloudera/parcels/CDH-6.3.1-1.cdh6.3.1.p0.1470567/jars/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
Logging initialized using configuration in jar:file:/opt/cloudera/parcels/CDH-6.3.1-1.cdh6.3.1.p0.1470567/jars/hive-common-2.1.1-cdh6.3.1.jar!/hive-log4j2.properties Async: false
WARNING: Hive CLI is deprecated and migration to Beeline is recommended.
hive> use test;
OK
Time taken: 1.184 seconds
hive> load data inpath '/user/hive/warehouse/ods_fact_sale_orc' into table ods_fact_sale_orc;
FAILED: SemanticException [Error 10001]: Line 1:69 Table not found 'ods_fact_sale_orc'
The load fails because DROP TABLE removed the table's metadata along with its data; only the files were recovered from the trash. The table has to be recreated before the data can be loaded back in.
hive>
> CREATE TABLE ods_fact_sale_orc(
> id bigint,
> sale_date string,
> prod_name string,
> sale_nums int)
> STORED AS ORC;
OK
Time taken: 0.396 seconds
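The CREATE TABLE above has to be reconstructed by hand, because the DDL was dropped together with the metadata. If a table with the same layout still exists somewhere (here a hypothetical backup table ods_fact_sale_orc_bak), SHOW CREATE TABLE can print DDL to adapt instead of writing it from memory:

hive> show create table ods_fact_sale_orc_bak;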
hive>
> load data inpath '/user/hive/warehouse/ods_fact_sale_orc' into table ods_fact_sale_orc;
Loading data to table test.ods_fact_sale_orc
OK
Time taken: 1.852 seconds
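LOAD DATA INPATH moves the files rather than copying them, so after the load the staging directory /user/hive/warehouse/ods_fact_sale_orc is empty and the ORC files sit under the table's own location. Assuming the default warehouse layout for the test database, this can be confirmed from the shell:

[root@hp1 ~]# hadoop fs -ls /user/hive/warehouse/test.db/ods_fact_sale_orc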
hive> select count(*) from ods_fact_sale_orc;
Query ID = root_20201202193108_77c436b0-2b3e-47a8-a6e3-23f416f3bfd6
Total jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=
In order to set a constant number of reducers:
set mapreduce.job.reduces=
Starting Job = job_1606698967173_0016, Tracking URL = http://hp1:8088/proxy/application_1606698967173_0016/
Kill Command = /opt/cloudera/parcels/CDH-6.3.1-1.cdh6.3.1.p0.1470567/lib/hadoop/bin/hadoop job -kill job_1606698967173_0016
Hadoop job information for Stage-1: number of mappers: 9; number of reducers: 1
2020-12-02 19:31:18,259 Stage-1 map = 0%, reduce = 0%
2020-12-02 19:31:26,557 Stage-1 map = 22%, reduce = 0%, Cumulative CPU 10.36 sec
2020-12-02 19:31:30,690 Stage-1 map = 33%, reduce = 0%, Cumulative CPU 14.55 sec
2020-12-02 19:31:31,720 Stage-1 map = 44%, reduce = 0%, Cumulative CPU 19.62 sec
2020-12-02 19:31:36,873 Stage-1 map = 56%, reduce = 0%, Cumulative CPU 24.96 sec
2020-12-02 19:31:37,901 Stage-1 map = 67%, reduce = 0%, Cumulative CPU 30.11 sec
2020-12-02 19:31:43,045 Stage-1 map = 78%, reduce = 0%, Cumulative CPU 35.28 sec
2020-12-02 19:31:44,073 Stage-1 map = 89%, reduce = 0%, Cumulative CPU 39.97 sec
2020-12-02 19:31:49,213 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 45.53 sec
2020-12-02 19:31:51,273 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 48.22 sec
MapReduce Total cumulative CPU time: 48 seconds 220 msec
Ended Job = job_1606698967173_0016
MapReduce Jobs Launched:
Stage-Stage-1: Map: 9 Reduce: 1 Cumulative CPU: 48.22 sec HDFS Read: 1991747 HDFS Write: 109 HDFS EC Read: 0 SUCCESS
Total MapReduce CPU Time Spent: 48 seconds 220 msec
OK
767830000
Time taken: 43.846 seconds, Fetched: 1 row(s)
hive>
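The trash is not permanent: HDFS periodically checkpoints .Trash/Current and permanently deletes checkpoints older than fs.trash.interval minutes, so a dropped table can only be recovered within that window. Once the recovery is verified, the space still held by the trash copy can be reclaimed early; a sketch (hadoop fs -expunge rolls the current trash contents into a checkpoint and removes checkpoints past the retention threshold):

[root@hp1 ~]# hadoop fs -expunge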