Abstract:
userDF = sc.sparkContext.newAPIHadoopRDD(conf, classOf[TableInputFormat], classOf[org.apache.hadoop… Read more
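This abstract reads an HBase table into Spark through newAPIHadoopRDD with TableInputFormat. Below is a minimal sketch of that pattern, assuming a table named "user" with a column family "info" and qualifier "name"; those names are placeholders, not taken from the original post.

    import org.apache.hadoop.hbase.HBaseConfiguration
    import org.apache.hadoop.hbase.client.Result
    import org.apache.hadoop.hbase.io.ImmutableBytesWritable
    import org.apache.hadoop.hbase.mapreduce.TableInputFormat
    import org.apache.hadoop.hbase.util.Bytes
    import org.apache.spark.sql.SparkSession

    object HBaseReadSketch {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder().appName("HBaseReadSketch").master("local[*]").getOrCreate()
        val sc = spark.sparkContext

        // Point TableInputFormat at the table to scan ("user" is a placeholder name).
        val conf = HBaseConfiguration.create()
        conf.set(TableInputFormat.INPUT_TABLE, "user")

        // Each record is a (row key, Result) pair produced by TableInputFormat.
        val hbaseRDD = sc.newAPIHadoopRDD(
          conf,
          classOf[TableInputFormat],
          classOf[ImmutableBytesWritable],
          classOf[Result])

        // Extract one cell per row; column family "info" and qualifier "name" are assumptions.
        val names = hbaseRDD.map { case (_, result) =>
          Bytes.toString(result.getValue(Bytes.toBytes("info"), Bytes.toBytes("name")))
        }
        names.take(10).foreach(println)

        spark.stop()
      }
    }

The original post appears to go on to build a DataFrame (userDF) from the scanned rows; the sketch stops at the RDD of (row key, Result) pairs.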
Abstract:
import java.io.{File, PrintWriter} import java.util import java.util.regex.Pattern import org.apache.spark.graphx.GraphLoader import org.apache.spark.… Read more
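The imports in this abstract point to a Spark GraphX example built around GraphLoader. The sketch below only shows loading an edge list with GraphLoader and querying the resulting graph; the input path "data/followers.txt" is a placeholder, and the original post's PrintWriter/regex handling is not reproduced here.

    import org.apache.spark.graphx.GraphLoader
    import org.apache.spark.sql.SparkSession

    object GraphLoaderSketch {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder().appName("GraphLoaderSketch").master("local[*]").getOrCreate()
        val sc = spark.sparkContext

        // edgeListFile expects one "srcId dstId" pair per line; the path is a placeholder.
        val graph = GraphLoader.edgeListFile(sc, "data/followers.txt")

        // Basic structural queries on the loaded graph.
        println(s"vertices: ${graph.vertices.count()}, edges: ${graph.edges.count()}")

        // Out-degree per vertex, printed for a small sample.
        graph.outDegrees.take(10).foreach { case (vertexId, degree) =>
          println(s"$vertexId -> $degree")
        }

        spark.stop()
      }
    }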
Abstract:
import breeze.linalg.Vector val arr1 = Array(1,2,3,4,5) val arr2 = Array(2,3,4,5,6) val vec1 = breeze.linalg.Vector.apply(arr1) // linear algebra library used by Spark val vec2 = … Read more
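This abstract wraps plain Scala arrays in Breeze vectors. A small self-contained sketch of that idea follows; it constructs DenseVector directly (which is what Vector.apply on an array produces) and uses element-wise addition and a dot product as assumed illustrative operations, since the preview cuts off before any arithmetic.

    import breeze.linalg.DenseVector

    object BreezeVectorSketch {
      def main(args: Array[String]): Unit = {
        val arr1 = Array(1, 2, 3, 4, 5)
        val arr2 = Array(2, 3, 4, 5, 6)

        // Wrap plain Scala arrays as Breeze dense vectors.
        val vec1 = DenseVector(arr1)
        val vec2 = DenseVector(arr2)

        // Element-wise addition: DenseVector(3, 5, 7, 9, 11)
        println(vec1 + vec2)

        // Dot product: 1*2 + 2*3 + 3*4 + 4*5 + 5*6 = 70
        println(vec1 dot vec2)
      }
    }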