Processing jsonFile with Spark
According to the Spark documentation, the file passed to jsonFile is a special kind of file:
Note that the file that is offered as jsonFile is not a typical JSON file. Each line must contain a separate, self-contained valid JSON object. As a consequence, a regular multi-line JSON file will most often fail.
In other words, it holds multiple JSON objects separated by newlines; a regular multi-line JSON file will fail.
Here are the contents of one such jsonFile:
scala> import scala.io.Source
scala> val path = "examples/src/main/resources/people.json"
path: String = examples/src/main/resources/people.json
scala> Source.fromFile(path).foreach(print)
{"name":"Michael"}
{"name":"Andy", "age":30}
{"name":"Justin", "age":19}
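By contrast, a pretty-printed JSON document like the following (a hypothetical counter-example) would not load correctly through jsonFile, because a single object spans several lines:

{
  "name": "Michael",
  "age": 29
}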
From it we can obtain a SchemaRDD:
scala> val sqlContext = new org.apache.spark.sql.SQLContext(sc)
scala> val jsonFile = sqlContext.jsonFile(path)
scala> jsonFile.printSchema()
root
|-- age: integer (nullable = true)
|-- name: string (nullable = true)
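Besides reading from a file, a SchemaRDD can also be built from an existing RDD of JSON strings via sqlContext.jsonRDD. A sketch in the same spark-shell session (the variable names are made up for illustration):

scala> val jsonStrings = sc.parallelize("""{"name":"Tom","age":25}""" :: Nil)
scala> val anotherSchemaRDD = sqlContext.jsonRDD(jsonStrings)
scala> anotherSchemaRDD.printSchema()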
We can operate on this SchemaRDD directly, for example filtering by age (note that row(0) is the age column, since the inferred schema above lists age first):

jsonFile.filter(row => { val age = row(0).asInstanceOf[Int]; age >= 13 && age <= 19 }).collect
Since this is a SchemaRDD, we can also query it with SQL:
scala> jsonFile.registerTempTable("people")
scala> val teenagers = sqlContext.sql("SELECT name FROM people WHERE age >= 13 AND age <= 19")
scala> teenagers.foreach(println)
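The result rows can also be mapped to plain strings before printing; a sketch in the same session, accessing the selected name field by index as above:

scala> teenagers.map(t => "Name: " + t(0)).collect().foreach(println)
Name: Justin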
+++++++++++++++++++++++++++++++++++++++++++
If any part of this article infringes on someone's rights, please let me know and I will remove it immediately!
When reposting original articles from this blog, please include a link back to the original on @cnblogs!
QQ: 5854165. My open-source projects. Everyone is welcome to discuss programming architecture and big data technology together!
+++++++++++++++++++++++++++++++++++++++++++