Spark wholeTextFiles() for small files
Scenario: the data files pushed to us are very numerous, and each one is only 10-30 MB in size.
Spark normally reads from HDFS with textFile(), but in this situation textFile() by default creates as many partitions as there are files, which produces a huge number of tasks.
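To see the difference, here is a minimal sketch (the directory path is hypothetical) that prints the partition counts produced by textFile() and wholeTextFiles() over the same directory:

import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.sql.SparkSession;

public class ComparePartitions {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
                .appName("ComparePartitions")
                .master("local")
                .getOrCreate();
        JavaSparkContext sc = JavaSparkContext.fromSparkContext(spark.sparkContext());

        // Hypothetical directory full of 10-30 MB files.
        String dir = "hdfs://namenode:8020/data/small_files";

        // textFile(): roughly one partition (and one task) per small file.
        JavaRDD<String> lines = sc.textFile(dir);
        System.out.println("textFile partitions: " + lines.getNumPartitions());

        // wholeTextFiles(): files are packed into partitions by data locality, usually far fewer.
        JavaPairRDD<String, String> files = sc.wholeTextFiles(dir);
        System.out.println("wholeTextFiles partitions: " + files.getNumPartitions());

        spark.stop();
    }
}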
For this kind of small file, Spark provides a dedicated API, wholeTextFiles(), designed for processing large numbers of small files. Its source code is as follows:
/**
 * Read a directory of text files from HDFS, a local file system (available on all nodes), or any
 * Hadoop-supported file system URI. Each file is read as a single record and returned in a
 * key-value pair, where the key is the path of each file, the value is the content of each file.
 *
 * <p> For example, if you have the following files:
 * {{{
 *   hdfs://a-hdfs-path/part-00000
 *   hdfs://a-hdfs-path/part-00001
 *   ...
 *   hdfs://a-hdfs-path/part-nnnnn
 * }}}
 *
 * Do `val rdd = sparkContext.wholeTextFile("hdfs://a-hdfs-path")`,
 *
 * <p> then `rdd` contains
 * {{{
 *   (a-hdfs-path/part-00000, its content)
 *   (a-hdfs-path/part-00001, its content)
 *   ...
 *   (a-hdfs-path/part-nnnnn, its content)
 * }}}
 *
 * @note Small files are preferred, large file is also allowable, but may cause bad performance.
 * @note On some filesystems, `.../path/*` can be a more efficient way to read all files
 *       in a directory rather than `.../path/` or `.../path`
 * @note Partitioning is determined by data locality. This may result in too few partitions
 *       by default.
 *
 * @param path Directory to the input data files, the path can be comma separated paths as the
 *             list of inputs.
 * @param minPartitions A suggestion value of the minimal splitting number for input data.
 * @return RDD representing tuples of file path and the corresponding file content
 */
def wholeTextFiles(
    path: String,
    minPartitions: Int = defaultMinPartitions): RDD[(String, String)] = withScope {
  assertNotStopped()
  val job = NewHadoopJob.getInstance(hadoopConfiguration)
  // Use setInputPaths so that wholeTextFiles aligns with hadoopFile/textFile in taking
  // comma separated files as input. (see SPARK-7155)
  NewFileInputFormat.setInputPaths(job, path)
  val updateConf = job.getConfiguration
  new WholeTextFileRDD(
    this,
    classOf[WholeTextFileInputFormat],
    classOf[Text],
    classOf[Text],
    updateConf,
    minPartitions).map(record => (record._1.toString, record._2.toString)).setName(path)
}
wholeTextFiles() takes a path as its input parameter, and multiple paths can be given, separated by commas. Each file it reads becomes a Tuple2: the first element is the file's full path, and the second element is the file's text content. For example, given a file with two lines of data:
jack,1011,shanghai
kevin,2022,beijing
the returned content is a single string in which the source file's lines are separated by the newline character \n, i.e. jack,1011,shanghai\nkevin,2022,beijing.
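As a minimal sketch (the HDFS path is hypothetical, and each file is assumed to contain lines like the two above), this is how the (path, content) records can be inspected and split back into lines:

import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.sql.SparkSession;
import scala.Tuple2;

public class InspectWholeTextFiles {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
                .appName("InspectWholeTextFiles")
                .master("local")
                .getOrCreate();
        JavaSparkContext sc = JavaSparkContext.fromSparkContext(spark.sparkContext());

        // Hypothetical directory of small text files.
        JavaPairRDD<String, String> files =
                sc.wholeTextFiles("hdfs://namenode:8020/data/users");

        for (Tuple2<String, String> record : files.collect()) {
            System.out.println("path: " + record._1);
            // record._2 would be "jack,1011,shanghai\nkevin,2022,beijing" for the sample file
            for (String line : record._2.split("\n")) {
                System.out.println("  line: " + line);
            }
        }
        spark.stop();
    }
}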
The number of partitions can be set explicitly; if it is not specified, the default is defined as follows:
def defaultMinPartitions: Int = math.min(defaultParallelism, 2)
In other words, without specifying the partition count, the data is in most cases processed with 2 partitions.
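If 2 partitions are too few for the cluster, a larger minimum can be suggested through the second parameter. A minimal sketch, assuming a hypothetical input directory:

import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.sql.SparkSession;

public class WholeTextFilesMinPartitions {
    public static void main(String[] args) {
        JavaSparkContext sc = JavaSparkContext.fromSparkContext(
                SparkSession.builder().appName("WholeTextFilesMinPartitions")
                        .master("local").getOrCreate().sparkContext());

        // minPartitions is only a suggestion; the actual split count also depends on
        // total input size and data locality (see the @note in the scaladoc above).
        JavaPairRDD<String, String> files =
                sc.wholeTextFiles("hdfs://namenode:8020/data/small_files", 16);
        System.out.println("partitions: " + files.getNumPartitions());
    }
}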
Sample code:
The processing logic can be understood as follows: each small file holds all the road-related data for one district of a city (the real data is different, of course; no city has tens or hundreds of thousands of districts). The file name is the district name, and the file content is road names plus related data; the job appends the district name to every road line.
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.api.java.function.Function;
import org.apache.spark.sql.SparkSession;
import scala.Tuple2;

public class TestWholeTextFiles {
    public static void main(String[] args) {
        SparkConf conf = new SparkConf();
        SparkSession spark = SparkSession
                .builder()
                .appName("TestWholeTextFiles")
                .master("local")
                .config(conf)
                .enableHiveSupport()
                .getOrCreate();
        JavaSparkContext sc = JavaSparkContext.fromSparkContext(spark.sparkContext());

        // Each input file becomes one record: (full file path, whole file content).
        JavaPairRDD<String, String> javaPairRDD =
                sc.wholeTextFiles("hdfs://master01.xx.xx.cn:8020/kong/capacityLusunData_bak");
        System.out.println("javaPairRDD partitions: " + javaPairRDD.getNumPartitions()); // 2

        JavaRDD<String> map = javaPairRDD.map((Function<Tuple2<String, String>, String>) v1 -> {
            // Extract the district name from the file name (strip directory and extension).
            int index = v1._1.lastIndexOf("/");
            String road_id = v1._1.substring(index + 1).split("\\.")[0];
            // Append "\|<district>" to every line of the file content.
            return v1._2.replace("\n", "\\|" + road_id + "\n");
        });
        System.out.println("mapRDD partitions: " + map.getNumPartitions()); // 2

        map.saveAsTextFile("hdfs://master01.xx.xx.cn:8020/kong/data/testwholetextfiles/out");
    }
}
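With this logic, each output line should take the form <road data>\|<district name>, since the replace() call inserts a literal backslash, a pipe, and the district name in front of every newline (note that a final line without a trailing \n would not get the suffix). saveAsTextFile() then writes one part file per partition, here 2, instead of one tiny output file per input file.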