[Flink] Flink Basics: Implementing a WordCount Program (Java and Scala Versions)
Overview
WordCount has long been the classic introductory example in big data. Below, Flink's WordCount is implemented in both Java and Scala.
The environment is IDEA + Maven + Flink; the POM file and a summary of the relevant technical points are attached at the end.
Java: Flink Batch Version
import org.apache.flink.api.common.functions.FlatMapFunction;
import org.apache.flink.api.java.DataSet;
import org.apache.flink.api.java.ExecutionEnvironment;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.util.Collector;

public class WordCountBatchByJava {

    public static void main(String[] args) throws Exception {
        // Create the execution environment
        ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
        // Load or create the source data
        DataSet<String> text = env.fromElements("this a book", "i love china", "i am chinese");
        // Transform the data
        DataSet<Tuple2<String, Integer>> ds = text.flatMap(new LineSplitter()).groupBy(0).sum(1);
        // Emit the data to the sink
        ds.print();
        // Execute the job
        // Since this is a batch job, DataSet#print() already calls execute() internally,
        // so it must not be called again here; calling it raises an error.
        //env.execute("Flink Batch Word Count By Java");
    }

    static class LineSplitter implements FlatMapFunction<String, Tuple2<String, Integer>> {
        @Override
        public void flatMap(String line, Collector<Tuple2<String, Integer>> collector) throws Exception {
            for (String word : line.split(" ")) {
                collector.collect(new Tuple2<>(word, 1));
            }
        }
    }
}
The output is as follows:
(a,1)
(am,1)
(love,1)
(china,1)
(this,1)
(i,2)
(book,1)
(chinese,1)
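A side note that is not from the original post: in Java 8 the LineSplitter class can be replaced with a lambda, but because of type erasure Flink then needs an explicit type hint. A minimal sketch, assuming the Types helper from flink-core (import org.apache.flink.api.common.typeinfo.Types):

// Sketch: drop-in replacement for the flatMap/groupBy/sum line above.
DataSet<Tuple2<String, Integer>> ds = text
        .flatMap((String line, Collector<Tuple2<String, Integer>> out) -> {
            for (String word : line.split(" ")) {
                out.collect(new Tuple2<>(word, 1));
            }
        })
        .returns(Types.TUPLE(Types.STRING, Types.INT)) // type hint: lambdas lose generic type info
        .groupBy(0)
        .sum(1);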
Java: Flink Streaming Version
import org.apache.flink.api.common.functions.FlatMapFunction;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.datastream.DataStreamSource;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.windowing.time.Time;
import org.apache.flink.util.Collector;

public class WordCountStreamingByJava {

    public static void main(String[] args) throws Exception {
        // Create the execution environment
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        // Set up a socket data source
        DataStreamSource<String> source = env.socketTextStream("192.168.1.111", 9999, "\n");
        // Transform the data
        DataStream<WordWithCount> dataStream = source.flatMap(new FlatMapFunction<String, WordWithCount>() {
            @Override
            public void flatMap(String line, Collector<WordWithCount> collector) throws Exception {
                for (String word : line.split(" ")) {
                    collector.collect(new WordWithCount(word, 1));
                }
            }
        }).keyBy("word")                                // group by key
          .timeWindow(Time.seconds(2), Time.seconds(2)) // window over the stream; size equals slide, so effectively a tumbling window
          .sum("count");                                // count the words within each time window
        // Emit the data to the sink
        dataStream.print();
        // Execute the job
        env.execute("Flink Streaming Word Count By Java");
    }

    public static class WordWithCount {
        public String word;
        public int count;

        public WordWithCount() {
        }

        public WordWithCount(String word, int count) {
            this.word = word;
            this.count = count;
        }

        @Override
        public String toString() {
            return "WordWithCount{" +
                    "word='" + word + '\'' +
                    ", count=" + count +
                    '}';
        }
    }
}
Start a shell window, open port 9999 with nc, and enter some data:
[root@spark111 flink-1.6.2]# nc -l 9999
山东 天津 北京 河北 河南 山东 上海 北京
山东 海南 青海 西藏 四川 海南
The output in IDEA is as follows (the N> prefix is the index of the parallel subtask that emitted the record):
4> WordWithCount{word='北京', count=2}
1> WordWithCount{word='上海', count=1}
5> WordWithCount{word='天津', count=1}
4> WordWithCount{word='河南', count=1}
7> WordWithCount{word='山东', count=2}
3> WordWithCount{word='河北', count=1}
------------------------ line added manually to separate the results of the two time windows --------------------------
8> WordWithCount{word='海南', count=2}
8> WordWithCount{word='四川', count=1}
7> WordWithCount{word='山东', count=1}
1> WordWithCount{word='西藏', count=1}
5> WordWithCount{word='青海', count=1}
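A note that is not in the original post: keyBy("word") selects the key from the public POJO field by name, via reflection. A type-safe alternative is a KeySelector; a sketch of the same pipeline, additionally assuming import org.apache.flink.api.java.functions.KeySelector:

// Sketch: the same transformation, keyed with a KeySelector
// instead of the reflective field name "word".
DataStream<WordWithCount> dataStream = source
        .flatMap(new FlatMapFunction<String, WordWithCount>() {
            @Override
            public void flatMap(String line, Collector<WordWithCount> out) {
                for (String word : line.split(" ")) {
                    out.collect(new WordWithCount(word, 1));
                }
            }
        })
        .keyBy(new KeySelector<WordWithCount, String>() {
            @Override
            public String getKey(WordWithCount wc) {
                return wc.word;
            }
        })
        .timeWindow(Time.seconds(2), Time.seconds(2))
        .sum("count");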
Scala: Flink Batch Version
import org.apache.flink.api.scala._
import org.apache.flink.api.scala.ExecutionEnvironment

object WordCountBatchByScala {
  def main(args: Array[String]): Unit = {
    // Get the execution environment
    val env = ExecutionEnvironment.getExecutionEnvironment
    // Load the data source
    val source = env.fromElements("china is the best country", "beijing is the capital of china")
    // Transform the data
    val ds = source.flatMap(_.split(" ")).map((_, 1)).groupBy(0).sum(1)
    // Emit to the sink
    ds.print()
    // Execute the job
    // Since this is a batch job, DataSet#print() already calls execute() internally,
    // so it must not be called again here; calling it raises an error.
    //env.execute("Flink Batch Word Count By Scala")
  }
}
The output is as follows:
(is,2)
(beijing,1)
(the,2)
(china,2)
(country,1)
(of,1)
(best,1)
(capital,1)
Scala: Flink Streaming Version
import org.apache.flink.streaming.api.scala._
import org.apache.flink.streaming.api.scala.StreamExecutionEnvironment
import org.apache.flink.streaming.api.windowing.time.Time

object WordCountStreamingByScala {
  def main(args: Array[String]): Unit = {
    // Get the execution environment
    val env = StreamExecutionEnvironment.getExecutionEnvironment
    // Load or create the data source
    val source = env.socketTextStream("192.168.1.111", 9999, '\n')
    // Transform the data
    val dataStream = source.flatMap(_.split(" "))
      .map((_, 1))
      .keyBy(0)
      .timeWindow(Time.seconds(2), Time.seconds(2))
      .sum(1)
    // Emit to the sink
    dataStream.print()
    // Execute the job
    env.execute("Flink Streaming Word Count By Scala")
  }
}
Start a shell window, open port 9999, and type some words:
[root@spark111 flink-1.6.2]# nc -l 9999
time is passed what is the time?
time is nine time passed again
The output is as follows:
4> (what,1)
5> (time,1)
8> (is,2)
5> (time?,1)
8> (passed,1)
5> (the,1)
------------------------ line added manually to separate the results of the two time windows --------------------------
8> (is,1)
5> (time,2)
8> (passed,1)
7> (nine,1)
6> (again,1)
POM File
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <groupId>com.ssrs</groupId>
    <artifactId>flinkdemo</artifactId>
    <version>1.0</version>

    <properties>
        <maven.compiler.source>1.8</maven.compiler.source>
        <maven.compiler.target>1.8</maven.compiler.target>
        <encoding>UTF-8</encoding>
        <scala.version>2.11.12</scala.version>
        <scala.binary.version>2.11</scala.binary.version>
        <hadoop.version>2.8.4</hadoop.version>
        <flink.version>1.6.1</flink.version>
    </properties>

    <dependencies>
        <dependency>
            <groupId>org.scala-lang</groupId>
            <artifactId>scala-library</artifactId>
            <version>${scala.version}</version>
        </dependency>
        <dependency>
            <groupId>org.apache.flink</groupId>
            <artifactId>flink-java</artifactId>
            <version>${flink.version}</version>
        </dependency>
        <dependency>
            <groupId>org.apache.flink</groupId>
            <artifactId>flink-streaming-java_${scala.binary.version}</artifactId>
            <version>${flink.version}</version>
        </dependency>
        <dependency>
            <groupId>org.apache.flink</groupId>
            <artifactId>flink-scala_${scala.binary.version}</artifactId>
            <version>${flink.version}</version>
        </dependency>
        <dependency>
            <groupId>org.apache.flink</groupId>
            <artifactId>flink-streaming-scala_${scala.binary.version}</artifactId>
            <version>${flink.version}</version>
        </dependency>
        <dependency>
            <groupId>org.apache.flink</groupId>
            <artifactId>flink-table_${scala.binary.version}</artifactId>
            <version>${flink.version}</version>
        </dependency>
        <dependency>
            <groupId>org.apache.flink</groupId>
            <artifactId>flink-clients_${scala.binary.version}</artifactId>
            <version>${flink.version}</version>
        </dependency>
        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-client</artifactId>
            <version>${hadoop.version}</version>
        </dependency>
    </dependencies>
</project>
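One caveat that the original post does not cover: this POM declares no build plugins, so a plain Maven build will not compile the Scala sources. A sketch of the commonly used scala-maven-plugin, added inside the project element (the version number is an assumption; adjust to your setup):

    <build>
        <plugins>
            <!-- Assumption: scala-maven-plugin is needed to compile the Scala examples -->
            <plugin>
                <groupId>net.alchim31.maven</groupId>
                <artifactId>scala-maven-plugin</artifactId>
                <version>3.2.2</version>
                <executions>
                    <execution>
                        <goals>
                            <goal>compile</goal>
                            <goal>testCompile</goal>
                        </goals>
                    </execution>
                </executions>
            </plugin>
        </plugins>
    </build>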
Summary
- A Flink job goes through the following steps (a minimal skeleton combining all five is sketched after this list):
① Get the execution environment (environment)
② Load or create the data source (source)
③ Transform the data (transformation)
④ Emit the results to a destination (sink)
⑤ Execute the job (execute)
- In batch jobs, if the sink is print() (the same holds for count() and collect()), execute() must not be called afterwards, because those methods already trigger execution internally. If you call it anyway, the results are still correct, but an error is printed as well:
Exception in thread "main" java.lang.RuntimeException: No new data sinks have been defined since the last execution. The last execution refers to the latest call to 'execute()', 'count()', 'collect()', or 'print()'.
    at org.apache.flink.api.java.ExecutionEnvironment.createProgramPlan(ExecutionEnvironment.java:940)
    at org.apache.flink.api.java.ExecutionEnvironment.createProgramPlan(ExecutionEnvironment.java:922)
    at org.apache.flink.api.java.LocalEnvironment.execute(LocalEnvironment.java:85)
    at com.ssrs.WordCountBatchByJava.main(WordCountBatchByJava.java:27)
- If a batch job instead writes its output with writeAsCsv, writeAsText, or similar sink methods, execute() must be called afterwards (see the second sketch after this list).
- Batch jobs obtain their environment from ExecutionEnvironment; streaming jobs use StreamExecutionEnvironment.
- Batch transformations produce a DataSet; streaming transformations produce a DataStream.
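To make the five steps concrete, here is a minimal sketch in Java mirroring the batch example above (LineSplitter is the class defined there):

// ① environment
ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
// ② source
DataSet<String> text = env.fromElements("this a book");
// ③ transformation
DataSet<Tuple2<String, Integer>> counts = text.flatMap(new LineSplitter()).groupBy(0).sum(1);
// ④ sink: for batch jobs, print() also triggers execution
counts.print();
// ⑤ execute: only needed for sinks other than print()/count()/collect()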
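And a sketch of the file-sink case from the third point, where execute() is required (the output path is a hypothetical example):

// writeAsText is lazy: nothing runs until execute() is called
counts.writeAsText("/tmp/wordcount-output");
env.execute("Flink Batch Word Count");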
Author: ShadowFiend
Source: http://www.cnblogs.com/ShadowFiend/
This article is jointly copyrighted by the author and cnblogs. Reposting is welcome, but without the author's consent this notice must be retained and a clear link to the original article must be given on the page; otherwise the author reserves the right to pursue legal liability. Questions and suggestions are very welcome.