Writing to multiple HBase tables in a single Job
Original article:
HBase MultiTableOutputFormat writing to multiple tables in one Map Reduce Job
Recently, I've been having a lot of fun learning about HBase and Hadoop. One esoteric thing I just learned about is the way that HBase tables are populated.
By default, HBase / Map Reduce jobs can only write to a single table, because you set the output handler at the job level with job.setOutputFormatClass(). However, if you are creating an HBase table, chances are you will also want to build an index related to that table so that you can run fast queries against the master table. The optimal way to do this is to write the data to both tables at the same time, while you are importing it. The alternative is to run another M/R job after the fact, but that means reading all of the data twice, which is a lot of extra load on the system for no real benefit. So, to write to both tables at the same time, in the same M/R job, you need to take advantage of the MultiTableOutputFormat class. The key here is that when you write to the context, the output key (an ImmutableBytesWritable wrapping the table name) tells the output format which table the Put is destined for. Here is some basic example code (with a lot of the meat removed) which demonstrates this.
static class TsvImporter extends Mapper<LongWritable, Text, ImmutableBytesWritable, Put> {
    @Override
    public void map(LongWritable offset, Text value, Context context)
            throws IOException, InterruptedException {
        // contains the line of tab separated data we are working on (needs to be parsed out).
        byte[] lineBytes = value.getBytes();

        // rowKey is the hbase rowKey generated from lineBytes
        Put put = new Put(rowKey);
        // Create your KeyValue object
        put.add(kv);
        // the output key names the destination table
        context.write(new ImmutableBytesWritable(Bytes.toBytes("actions")), put); // write to the actions table

        // rowKey2 is the hbase rowKey for the index entry
        Put indexPut = new Put(rowKey2);
        // Create your KeyValue object
        indexPut.add(kv2);
        context.write(new ImmutableBytesWritable(Bytes.toBytes("actions_index")), indexPut); // write to the actions_index table
    }
}
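The row keys and KeyValue objects above are left as placeholders. As a rough sketch of what that elided part could look like, assuming a hypothetical column family "f" and qualifiers "col" and "ref" (none of these names come from the original post):

    // hypothetical parsing: first field is the master row key, second is the indexed value
    String[] fields = value.toString().split("\t");
    byte[] rowKey = Bytes.toBytes(fields[0]);
    // hypothetical index key: indexed value first, master row key appended for uniqueness
    byte[] rowKey2 = Bytes.toBytes(fields[1] + "_" + fields[0]);

    KeyValue kv = new KeyValue(rowKey, Bytes.toBytes("f"), Bytes.toBytes("col"), Bytes.toBytes(fields[1]));
    KeyValue kv2 = new KeyValue(rowKey2, Bytes.toBytes("f"), Bytes.toBytes("ref"), rowKey);

This matches the older HBase client API used throughout the post, where Put.add() accepts a KeyValue directly.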
public static Job createSubmittableJob(Configuration conf, String[] args) throws IOException {
    String pathStr = args[0];
    Path inputDir = new Path(pathStr);
    Job job = new Job(conf, "my_custom_job");
    job.setJarByClass(TsvImporter.class);
    FileInputFormat.setInputPaths(job, inputDir);
    job.setInputFormatClass(TextInputFormat.class);

    // this is the key to writing to multiple tables in hbase
    job.setOutputFormatClass(MultiTableOutputFormat.class);

    job.setMapperClass(TsvImporter.class);
    // map-only job: the Puts go straight from the mapper to HBase
    job.setNumReduceTasks(0);
    TableMapReduceUtil.addDependencyJars(job);
    TableMapReduceUtil.addDependencyJars(job.getConfiguration());
    return job;
}
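The post stops at building the Job. For completeness, a minimal sketch of the driver that would submit it (assuming the standard Hadoop entry point, which is not shown in the original):

    public static void main(String[] args) throws Exception {
        // pick up hbase-site.xml settings from the classpath
        Configuration conf = HBaseConfiguration.create();
        Job job = createSubmittableJob(conf, args);
        // block until the map-only import finishes
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }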