Hadoop MapReduce: Implementing WordCount Step by Step

1. Create a Maven project

1、 Create New Project… -> Maven -> Next

2、 Fill in the GroupId and ArtifactId, then click Next -> Finish

2. Write the wordcount project

1、 Set up the project structure: right-click java -> New -> Package and enter the package path (here, com.hadoop.wdcount) to create the package. In the same way, create three classes under the new package: WordcountMain, WordcountMapper, and WordcountReducer.

2、 Write the pom.xml configuration (pulling in the Hadoop jars we need):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <groupId>sa.hadoop</groupId>
    <artifactId>wordcount</artifactId>
    <version>1.0-SNAPSHOT</version>

    <dependencies>
        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-common</artifactId>
            <!-- We are using Hadoop version 2.7.7 -->
            <version>2.7.7</version>
        </dependency>

        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-hdfs</artifactId>
            <version>2.7.7</version>
        </dependency>

        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-mapreduce-client-common</artifactId>
            <version>2.7.7</version>
        </dependency>

        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-mapreduce-client-core</artifactId>
            <version>2.7.7</version>
        </dependency>
    </dependencies>

</project>
```

3、 Write the project code
Fill in the logic of the three classes we just created.
(1) WordcountMapper.java

```java
package com.hadoop.wdcount;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

import java.io.IOException;

public class WordcountMapper extends Mapper<LongWritable, Text, Text, IntWritable> {

    @Override
    protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        // Split each input line on spaces and emit a (word, 1) pair per token
        String line = value.toString();
        String[] words = line.split(" ");
        for (String word : words) {
            context.write(new Text(word), new IntWritable(1));
        }
    }
}
```
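The mapper's core behavior — split each line on single spaces and emit a (word, 1) pair per token — can be sanity-checked without a cluster. Below is a minimal plain-Java sketch of that tokenization; the class name `TokenizeSketch` and the helper `tokenize` are ours, for illustration only, not part of the project:

```java
import java.util.ArrayList;
import java.util.List;

public class TokenizeSketch {
    // Mirrors WordcountMapper.map(): split on single spaces, one token per (word, 1) pair
    static List<String> tokenize(String line) {
        List<String> words = new ArrayList<>();
        for (String word : line.split(" ")) {
            words.add(word);  // the real mapper writes (new Text(word), new IntWritable(1))
        }
        return words;
    }

    public static void main(String[] args) {
        // The sample line used later in this tutorial yields 6 tokens
        System.out.println(tokenize("I believe that I will succeed!"));
    }
}
```

Note that `split(" ")` does not trim punctuation, so "succeed!" stays one token with the exclamation mark attached.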

(2) WordcountReducer.java

```java
package com.hadoop.wdcount;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

import java.io.IOException;

public class WordcountReducer extends Reducer<Text, IntWritable, Text, IntWritable> {

    @Override
    protected void reduce(Text key, Iterable<IntWritable> values, Context context)
            throws IOException, InterruptedException {
        // Sum the 1s emitted by the mapper for this word
        int counts = 0;
        for (IntWritable value : values) {
            counts += value.get();
        }
        context.write(key, new IntWritable(counts));
    }
}
```
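What the reducer does after the shuffle can likewise be sketched in plain Java: Hadoop groups all (word, 1) pairs by key, and reduce() sums each group. In the sketch below a HashMap stands in for the shuffle's grouping; `countWords` is our illustrative name, not part of the project:

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class ReduceSketch {
    // Mirrors WordcountReducer.reduce(): for each key, sum the grouped values.
    // The HashMap simulates Hadoop's shuffle grouping of (word, 1) pairs.
    static Map<String, Integer> countWords(List<String> words) {
        Map<String, Integer> counts = new HashMap<>();
        for (String word : words) {
            counts.merge(word, 1, Integer::sum);  // equivalent of counts += value.get()
        }
        return counts;
    }

    public static void main(String[] args) {
        Map<String, Integer> counts =
                countWords(Arrays.asList("I", "believe", "that", "I", "will", "succeed!"));
        System.out.println(counts);
    }
}
```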

(3)WordcountMain.java

```java
package com.hadoop.wdcount;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordcountMain {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf, "wordcount");
        job.setJarByClass(WordcountMain.class);
        job.setMapperClass(WordcountMapper.class);
        job.setReducerClass(WordcountReducer.class);
        job.setMapOutputKeyClass(Text.class);
        job.setMapOutputValueClass(IntWritable.class);
        // The reducer's output types are the same as the mapper's here
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.setInputPaths(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        boolean flag = job.waitForCompletion(true);
        if (!flag) {
            System.out.println("wordcount failed!");
        }
    }
}
```

Package the project into a jar

1、 Right-click the project name -> Open Module Settings

2、 Artifacts -> + -> JAR -> From modules with dependencies…

3、 Fill in the Main Class (click … and choose WordcountMain), select "extract to the target JAR", and click OK.

4、 Check "Include in project build". Output directory is where the final jar is written, and the output layout below it lists the jars that go into the artifact. Click OK.

5、 Click the menu Build -> Build Artifacts…, then choose Build. The result can be found in the output directory from step 4, or in the project's out directory.

Run and verify (using Hadoop 2.7.6 on Windows as the example; not yet verified under WSL)

First, in the directory where the jar was generated (C:\Users\USTC\Documents\maxyi\Java\wordcount\out\artifacts\wordcount_jar), create a file input1.txt containing "I believe that I will succeed!" and save it. We will upload this file to Hadoop in a moment.


1、 Start Hadoop and bring up all of its daemons:

```shell
cd hadoop-2.7.6/sbin
start-all.cmd
```


2、 Once Hadoop is up, go back to the jar output directory and upload the prepared txt file to HDFS:

```shell
cd C:\Users\USTC\Documents\maxyi\Java\wordcount\out\artifacts\wordcount_jar
hadoop fs -put ./input1.txt /input1
```

3、 You can check that the upload succeeded with:

```shell
hadoop fs -ls /
```
4、 Delete META-INF/LICENSE from wordcount.jar; otherwise Hadoop fails at run time because it cannot create the license directory, and the job errors out.
5、 Run wordcount:

```shell
hadoop jar wordcount.jar com.hadoop.wdcount.WordcountMain /input1 /output2
```

The jar command is followed by four arguments:
- the first, wordcount.jar, is the jar we just built;
- the second, com.hadoop.wdcount.WordcountMain, is the main class of the Java project, given with its full package path;
- the third, /input1, is the input we just uploaded;
- the fourth, /output2, is the wordcount output directory (it must be new; a previously created directory cannot be reused).
6、 Download the output file and check that the result is correct:

```shell
hadoop fs -get /output2
```
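For the sample input "I believe that I will succeed!", the content of /output2/part-r-00000 can be worked out by hand. The sketch below assumes the default TextOutputFormat (key and value separated by a tab) and mimics Hadoop's sorted output keys with a TreeMap; `ExpectedOutputSketch` is our illustrative name, not part of the project:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

public class ExpectedOutputSketch {
    // Computes the lines we expect in /output2/part-r-00000 for the sample input.
    // TreeMap keeps keys in sorted order, as Hadoop does for reducer output;
    // TextOutputFormat joins key and value with a tab character.
    static List<String> expectedLines(String input) {
        TreeMap<String, Integer> counts = new TreeMap<>();
        for (String word : input.split(" ")) {
            counts.merge(word, 1, Integer::sum);
        }
        List<String> lines = new ArrayList<>();
        for (Map.Entry<String, Integer> e : counts.entrySet()) {
            lines.add(e.getKey() + "\t" + e.getValue());
        }
        return lines;
    }

    public static void main(String[] args) {
        for (String line : expectedLines("I believe that I will succeed!")) {
            System.out.println(line);
        }
    }
}
```

Note that "I" sorts before the lowercase words because uppercase letters come first in byte order, so the first output line should be "I" with a count of 2.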
