4.1 General information about Hadoop

The MapReduce paradigm was proposed by Google in 2004 in the article MapReduce: Simplified Data Processing on Large Clusters. Since the article described the paradigm but did not include an implementation, several programmers at Yahoo proposed one as part of their work on the Nutch web crawler. You can read more about the history of Hadoop in the article The history of Hadoop: From 4 nodes to the future of data.

Initially, Hadoop was primarily a tool for storing data and running MapReduce jobs, but today it is a large stack of related technologies for processing big data (not only with MapReduce).

The main (core) components of Hadoop are:

  • Hadoop Distributed File System (HDFS) is a distributed file system that lets you store information of practically unlimited size.
  • Hadoop YARN is a framework for cluster resource management and job management, including the MapReduce framework.
  • Hadoop Common is a set of shared utilities and libraries used by the other Hadoop modules.

There are also many projects directly related to Hadoop but not included in the Hadoop core:

  • Hive - a tool for SQL-like queries over big data (it turns SQL queries into a series of MapReduce jobs);
  • Pig - a programming language for high-level data analysis. A single line of code in this language can turn into a sequence of MapReduce jobs;
  • HBase - a column-oriented database implementing the BigTable paradigm;
  • Cassandra - a high-performance distributed key-value database;
  • ZooKeeper - a service for distributed configuration storage and synchronization of configuration changes;
  • Mahout - a library and engine for machine learning on big data.

Separately, I would like to mention the Apache Spark project, an engine for distributed data processing. Apache Spark typically uses Hadoop components such as HDFS and YARN for its work, while itself having lately become more popular than Hadoop.

Some of these components will be covered in separate articles in this series of materials, but for now, let's look at how you can start working with Hadoop and put it into practice.

4.2 Running MapReduce programs on Hadoop

Now let's look at how to run a MapReduce job on Hadoop. As the job, we will use the classic WordCount example, which was discussed in the previous lesson.

Let me remind you of the problem statement: there is a set of documents. For each word that occurs in the set of documents, you need to count how many times it occurs in the set.

Solution:

Map splits the document into words and emits (word, 1) pairs.

Reduce sums up the number of occurrences of each word:

def map(doc):
	for word in doc.split():
		yield word, 1

def reduce(word, values):
	yield word, sum(values)
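
This logic is easy to check locally before Hadoop gets involved at all. Below is a minimal sketch in plain Python (the document list is made up, and the dictionary-based grouping only emulates the shuffle step that Hadoop performs between map and reduce):

from collections import defaultdict

def map_doc(doc):
	# emit a (word, 1) pair for every word in the document
	for word in doc.split():
		yield word, 1

def reduce_word(word, values):
	# sum all the counts emitted for one word
	return word, sum(values)

docs = ["to be or not to be", "to do or not to do"]

# emulate the shuffle phase: group the mapped values by key
grouped = defaultdict(list)
for doc in docs:
	for word, count in map_doc(doc):
		grouped[word].append(count)

for word in sorted(grouped):
	print(*reduce_word(word, grouped[word]))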

Now the task is to program this solution in the form of code that can be executed on Hadoop and to run it.

4.3 Method number 1: Hadoop Streaming

The easiest way to run a MapReduce program on Hadoop is to use the Hadoop streaming interface. The streaming interface assumes that map and reduce are implemented as programs that take data from stdin and write output to stdout.

The program that executes the map function is called the mapper. The program that executes reduce is called, respectively, the reducer.

By default, the streaming interface assumes that one incoming line of the mapper or reducer corresponds to one incoming record for map.

The output of the mapper reaches the input of the reducer in the form of (key, value) pairs, and all pairs with the same key:

  • are guaranteed to be processed by a single launch of the reducer;
  • are delivered to its input consecutively (that is, if one reducer processes several different keys, its input is grouped by key) - see the example right after this list.
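
For example, a reducer in the WordCount job might see a stream like this on its stdin (a hypothetical fragment, already sorted and grouped by key):

and	1
and	1
and	1
from	1
this	1
this	1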

So let's implement the mapper and reducer in Python:

#mapper.py
import sys

def do_map(doc):
	# split the line into words and emit a (word, 1) pair for each of them
	for word in doc.split():
		yield word.lower(), 1

# every line on stdin is one input record for map
for line in sys.stdin:
	for key, value in do_map(line):
		print(key + "\t" + str(value))
 
#reducer.py
import sys

def do_reduce(word, values):
	return word, sum(values)

prev_key = None
values = []

# the input is sorted by key, so all values for one key arrive consecutively
for line in sys.stdin:
	key, value = line.split("\t")
	if key != prev_key and prev_key is not None:
		# the key has changed, so every value for the previous key has been seen
		result_key, result_value = do_reduce(prev_key, values)
		print(result_key + "\t" + str(result_value))
		values = []
	prev_key = key
	values.append(int(value))

# flush the very last key
if prev_key is not None:
	result_key, result_value = do_reduce(prev_key, values)
	print(result_key + "\t" + str(result_value))
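
Since both scripts simply read stdin and write stdout, they can be sanity-checked locally without a cluster by emulating the shuffle phase with sort (this assumes the downloaded articles are plain-text files in the lenta_articles folder and that python points to the interpreter you want to use):

cat lenta_articles/* | python mapper.py | sort | python reducer.py | sort -n -k2,2 | tail -n5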

The data that Hadoop will process must be stored in HDFS. Let's upload our articles and put them into HDFS. To do this, use the hadoop fs command:

wget https://www.dropbox.com/s/opp5psid1x3jt41/lenta_articles.tar.gz  
tar xzvf lenta_articles.tar.gz  
hadoop fs -put lenta_articles 

The hadoop fs utility supports many methods of manipulating the file system, many of which are similar to the standard linux utilities.
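
A few commonly used subcommands, for illustration (the directory names here are only examples):

hadoop fs -ls lenta_articles       # list the contents of an HDFS directory
hadoop fs -du -h lenta_articles    # show how much space the data occupies
hadoop fs -mkdir backup            # create a directory
hadoop fs -get lenta_articles .    # copy data from HDFS to the local disk
hadoop fs -rm -r backup            # recursively remove a directory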

Now let's start the streaming job:

yarn jar /usr/lib/hadoop-mapreduce/hadoop-streaming.jar\  
 -input lenta_articles\  
 -output lenta_wordcount\  
 -file mapper.py\  
 -file reducer.py\  
 -mapper "python mapper.py"\  
 -reducer "python reducer.py" 

The yarn utility is used to launch and manage various applications (including map-reduce based ones) on the cluster. hadoop-streaming.jar is just one example of such a yarn application.
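
For instance, the applications known to the cluster can be listed, and a particular one killed, like this (the application id below is purely illustrative):

yarn application -list
yarn application -kill application_1234567890123_0001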

Next come the launch options:

  • input - the folder with the source data in hdfs;
  • output - the folder in hdfs where the result should be put;
  • file - files that are needed while the map-reduce job is running;
  • mapper - the console command that will be used for the map stage;
  • reducer - the console command that will be used for the reduce stage.

After launching, you can see the job's progress in the console, along with a URL for viewing more detailed information about the job.

In the interface available at this URL, you can find out a more detailed job execution status and view the logs of each mapper and reducer (which is very useful when a job fails).

After successful execution, the result of the work is placed in HDFS in the folder specified in the output field. You can view its contents using the command "hadoop fs -ls lenta_wordcount".
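
A typical listing contains an empty _SUCCESS marker file and one part-NNNNN file per reducer, so the paths look something like this (the exact names depend on the number of reducers):

lenta_wordcount/_SUCCESS
lenta_wordcount/part-00000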

The result itself can be obtained as follows:

hadoop fs -text lenta_wordcount/* | sort -n -k2,2 | tail -n5  
from	41
this	43
on	82
and	111
into	194

Printah "hadoop fs -text" nampilake isi folder ing wangun teks. Aku ngurutake asil miturut jumlah kedadeyan tembung kasebut. Kaya sing dikarepake, tembung sing paling umum ing basa kasebut yaiku preposisi.

4.4 Method number 2: using Java

Hadoop itself is written in Java, and Hadoop's native interface is also Java-based. Let's show what a native Java application for WordCount looks like:

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

	public static class TokenizerMapper
        	extends Mapper<Object, Text, Text, IntWritable>{

    	private final static IntWritable one = new IntWritable(1);
    	private Text word = new Text();

    	public void map(Object key, Text value, Context context
    	) throws IOException, InterruptedException {
        	StringTokenizer itr = new StringTokenizer(value.toString());
        	while (itr.hasMoreTokens()) {
            	word.set(itr.nextToken());
            	context.write(word, one);
        	}
    	}
	}

	public static class IntSumReducer
        	extends Reducer<Text,IntWritable,Text,IntWritable> {
    	private IntWritable result = new IntWritable();

    	public void reduce(Text key, Iterable<IntWritable> values,
                       	Context context
    	) throws IOException, InterruptedException {
        	int sum = 0;
        	for (IntWritable val : values) {
            	sum += val.get();
        	}
        	result.set(sum);
        	context.write(key, result);
    	}
	}

	public static void main(String[] args) throws Exception {
    	Configuration conf = new Configuration();
    	Job job = Job.getInstance(conf, "word count");
    	job.setJarByClass(WordCount.class);
    	job.setMapperClass(TokenizerMapper.class);
    	job.setReducerClass(IntSumReducer.class);
    	job.setOutputKeyClass(Text.class);
    	job.setOutputValueClass(IntWritable.class);
    	FileInputFormat.addInputPath(job, new Path("hdfs://localhost/user/cloudera/lenta_articles"));
    	FileOutputFormat.setOutputPath(job, new Path("hdfs://localhost/user/cloudera/lenta_wordcount"));
    	System.exit(job.waitForCompletion(true) ? 0 : 1);
	}
}

This class does exactly the same thing as our Python example. We create the TokenizerMapper and IntSumReducer classes by deriving them from the Mapper and Reducer classes. The classes passed as generic parameters specify the types of the input and output values. The native API assumes that the map function receives a key-value pair as input. Since in our case the key is empty, we simply define Object as the key type.

In the main method, we start the mapreduce job and specify its parameters - the name, the mapper and reducer, the paths in HDFS where the input data is located and where to put the result. To compile, we need the hadoop libraries. I use Maven for the build, for which Cloudera has its own repository. Instructions for setting it up can be found here. As a result, I ended up with the following pom.xml file (which Maven uses to describe the project build):

<?xml version="1.0" encoding="UTF-8"?>  
<project xmlns="http://maven.apache.org/POM/4.0.0"  
     	xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"  
     	xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">  
	<modelVersion>4.0.0</modelVersion>  
  
	<repositories>  
    	<repository>  
        	<id>cloudera</id>  
        	<url>https://repository.cloudera.com/artifactory/cloudera-repos/</url>  
    	</repository>  
	</repositories>  
  
	<dependencies>  
    	<dependency>  
        	<groupId>org.apache.hadoop</groupId>  
        	<artifactId>hadoop-common</artifactId>  
        	<version>2.6.0-cdh5.4.2</version>  
    	</dependency>  
  
    	<dependency>  
        	<groupId>org.apache.hadoop</groupId>  
        	<artifactId>hadoop-auth</artifactId>  
        	<version>2.6.0-cdh5.4.2</version>  
    	</dependency>  
  
    	<dependency>  
        	<groupId>org.apache.hadoop</groupId>  
        	<artifactId>hadoop-hdfs</artifactId>  
        	<version>2.6.0-cdh5.4.2</version>  
    	</dependency>  
  
    	<dependency>  
        	<groupId>org.apache.hadoop</groupId>  
        	<artifactId>hadoop-mapreduce-client-app</artifactId>  
        	<version>2.6.0-cdh5.4.2</version>  
    	</dependency>  
  
	</dependencies>  
  
	<groupId>org.dca.examples</groupId>  
	<artifactId>wordcount</artifactId>  
	<version>1.0-SNAPSHOT</version>  
 
</project>

Let's compile the project into a jar package:

mvn clean package
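
One caveat before launching: FileOutputFormat refuses to write into an output directory that already exists, so if lenta_wordcount is still in HDFS after the streaming run, it has to be removed first:

hadoop fs -rm -r lenta_wordcount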

Once the project is built into a jar file, it is launched in much the same way as with the streaming interface:

yarn jar wordcount-1.0-SNAPSHOT.jar  WordCount 

We wait for the job to finish and check the result:

hadoop fs -text lenta_wordcount/* | sort -n -k2,2 | tail -n5  
from	41
this	43
on	82
and	111
into	194

As you might guess, the result of running our native application is the same as the result of the streaming application we launched earlier.