Spark/Scala Implementation of Similarity Algorithms for Recommender Systems (Euclidean Distance, Pearson Correlation, Cosine Similarity: with Code)
In recommender systems, collaborative filtering is one of the most widely used families of algorithms, usually divided into user-based and item-based variants. The core idea is to start from "a user" or "an item" and, based on its attributes (for a user: gender, age, occupation, income, preferences, and so on), find the users or items most similar to it. In real-world systems the factors taken into account are, of course, considerably more complex.
This article does not cover the underlying mathematical concepts; it focuses on code implementations of the commonly used similarity measures, giving several implementations of each.
Euclidean Distance
import org.apache.spark.mllib.linalg.{Vector, Vectors}

// Euclidean distance between two Spark vectors, delegating to the array-based version.
def euclidean2(v1: Vector, v2: Vector): Double = {
  require(v1.size == v2.size,
    s"SimilarityAlgorithms: Vector dimensions do not match: Dim(v1)=${v1.size} and Dim(v2)=${v2.size}.")
  val x = v1.toArray
  val y = v2.toArray
  euclidean(x, y)
}

// Euclidean distance between two arrays: the square root of the sum of squared element-wise differences.
def euclidean(x: Array[Double], y: Array[Double]): Double = {
  require(x.length == y.length,
    s"SimilarityAlgorithms: Array lengths do not match: Len(x)=${x.length} and Len(y)=${y.length}.")
  math.sqrt(x.zip(y).map(p => p._1 - p._2).map(d => d * d).sum)
}

// Euclidean distance computed with Spark's built-in squared-distance helper.
def euclidean(v1: Vector, v2: Vector): Double = {
  val sqdist = Vectors.sqdist(v1, v2)
  math.sqrt(sqdist)
}
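A quick sanity check for the three methods above (a minimal sketch; the sample vectors are hypothetical, and all three calls should return the same value):

val a = Vectors.dense(1.0, 2.0, 3.0)
val b = Vectors.dense(4.0, 6.0, 8.0)

println(euclidean2(a, b))                 // sqrt(3^2 + 4^2 + 5^2) ≈ 7.0711
println(euclidean(a.toArray, b.toArray))  // same result via the array overload
println(euclidean(a, b))                  // same result via Vectors.sqdist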
Pearson Correlation Coefficient
// Pearson correlation coefficient between two arrays:
// covariance divided by the product of the standard deviations, in "computational" form.
def pearsonCorrelationSimilarity(arr1: Array[Double], arr2: Array[Double]): Double = {
  require(arr1.length == arr2.length,
    s"SimilarityAlgorithms: Array lengths do not match: Len(x)=${arr1.length} and Len(y)=${arr2.length}.")
  val sum_vec1 = arr1.sum
  val sum_vec2 = arr2.sum
  val square_sum_vec1 = arr1.map(x => x * x).sum
  val square_sum_vec2 = arr2.map(x => x * x).sum
  val zipVec = arr1.zip(arr2)
  val product = zipVec.map(x => x._1 * x._2).sum
  val numerator = product - (sum_vec1 * sum_vec2 / arr1.length)
  val denominator = math.pow((square_sum_vec1 - math.pow(sum_vec1, 2) / arr1.length) *
    (square_sum_vec2 - math.pow(sum_vec2, 2) / arr2.length), 0.5)
  if (denominator == 0) Double.NaN else numerator / (denominator * 1.0)
}
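As a rough illustration (hypothetical arrays, not data from the original post): two perfectly positively related arrays should give a coefficient of about 1.0, and a perfectly inverted pair about -1.0.

val r1 = Array(1.0, 2.0, 3.0, 4.0)
val r2 = Array(2.0, 4.0, 6.0, 8.0)   // r2 = 2 * r1, perfectly positively correlated
val r3 = Array(8.0, 6.0, 4.0, 2.0)   // r3 = 10 - 2 * r1, perfectly negatively correlated

println(pearsonCorrelationSimilarity(r1, r2))   // ≈ 1.0
println(pearsonCorrelationSimilarity(r1, r3))   // ≈ -1.0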
Cosine Similarity
import org.apache.spark.mllib.linalg.Vector
import org.jblas.DoubleMatrix

// Cosine similarity implemented with jblas: dot product divided by the product of the L2 norms.
def cosineSimilarity(v1: DoubleMatrix, v2: DoubleMatrix): Double = {
  require(v1.length == v2.length,
    s"SimilarityAlgorithms: DoubleMatrix lengths do not match: Len(v1)=${v1.length} and Len(v2)=${v2.length}.")
  v1.dot(v2) / (v1.norm2() * v2.norm2())
}

// Cosine similarity between two Spark vectors, delegating to the array-based version.
def cosineSimilarity(v1: Vector, v2: Vector): Double = {
  require(v1.size == v2.size,
    s"SimilarityAlgorithms: Vector dimensions do not match: Dim(v1)=${v1.size} and Dim(v2)=${v2.size}.")
  val x = v1.toArray
  val y = v2.toArray
  cosineSimilarity(x, y)
}

// Cosine similarity between two arrays.
def cosineSimilarity(x: Array[Double], y: Array[Double]): Double = {
  require(x.length == y.length,
    s"SimilarityAlgorithms: Array lengths do not match: Len(x)=${x.length} and Len(y)=${y.length}.")
  val member = x.zip(y).map(d => d._1 * d._2).sum
  val temp1 = math.sqrt(x.map(math.pow(_, 2)).sum)
  val temp2 = math.sqrt(y.map(math.pow(_, 2)).sum)
  val denominator = temp1 * temp2
  if (denominator == 0) Double.NaN else member / (denominator * 1.0)
}
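A minimal usage sketch (hypothetical values; the DoubleMatrix overload requires jblas on the classpath): two vectors pointing in the same direction have cosine similarity 1.0 regardless of their magnitudes.

val x = Array(1.0, 2.0, 3.0)
val y = Array(2.0, 4.0, 6.0)   // same direction as x

println(cosineSimilarity(x, y))                                       // ≈ 1.0
println(cosineSimilarity(new DoubleMatrix(x), new DoubleMatrix(y)))   // same result via jblas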
Adjusted Cosine Similarity
// Uses the DoubleMatrix and Vector imports from the cosine similarity section above.

// Adjusted cosine similarity (jblas): subtract the overall mean of both vectors, then take the cosine.
def adjustedCosineSimJblas(x: DoubleMatrix, y: DoubleMatrix): Double = {
  require(x.length == y.length,
    s"SimilarityAlgorithms: DoubleMatrix lengths do not match: Len(x)=${x.length} and Len(y)=${y.length}.")
  val avg = (x.sum() + y.sum()) / (x.length + y.length)
  val v1 = x.sub(avg)
  val v2 = y.sub(avg)
  v1.dot(v2) / (v1.norm2() * v2.norm2())
}

// Array-based entry point for the jblas implementation.
def adjustedCosineSimJblas(x: Array[Double], y: Array[Double]): Double = {
  require(x.length == y.length,
    s"SimilarityAlgorithms: Array lengths do not match: Len(x)=${x.length} and Len(y)=${y.length}.")
  val v1 = new DoubleMatrix(x)
  val v2 = new DoubleMatrix(y)
  adjustedCosineSimJblas(v1, v2)
}

// Adjusted cosine similarity between two Spark vectors, delegating to the array-based version.
def adjustedCosineSimilarity(v1: Vector, v2: Vector): Double = {
  require(v1.size == v2.size,
    s"SimilarityAlgorithms: Vector dimensions do not match: Dim(v1)=${v1.size} and Dim(v2)=${v2.size}.")
  val x = v1.toArray
  val y = v2.toArray
  adjustedCosineSimilarity(x, y)
}

// Adjusted cosine similarity between two arrays.
def adjustedCosineSimilarity(x: Array[Double], y: Array[Double]): Double = {
  require(x.length == y.length,
    s"SimilarityAlgorithms: Array lengths do not match: Len(x)=${x.length} and Len(y)=${y.length}.")
  val avg = (x.sum + y.sum) / (x.length + y.length)
  val member = x.map(_ - avg).zip(y.map(_ - avg)).map(d => d._1 * d._2).sum
  val temp1 = math.sqrt(x.map(num => math.pow(num - avg, 2)).sum)
  val temp2 = math.sqrt(y.map(num => math.pow(num - avg, 2)).sum)
  val denominator = temp1 * temp2
  if (denominator == 0) Double.NaN else member / (denominator * 1.0)
}
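Note that this variant centers both vectors on a single shared mean; the textbook adjusted cosine used in item-based collaborative filtering subtracts each user's own average rating instead. A quick check with hypothetical item rating arrays (not from the original post):

val i1 = Array(5.0, 3.0, 4.0)   // ratings of item 1 by three users
val i2 = Array(4.0, 3.0, 5.0)   // ratings of item 2 by the same users

println(adjustedCosineSimilarity(i1, i2))   // ≈ 0.5 with these values
println(adjustedCosineSimJblas(i1, i2))     // jblas version, same result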
If you have similar needs in your own business, you can optimize or adapt the code above to fit your actual scenario. Many algorithm frameworks also ship routines that are essentially wrappers around these similarity computations; for example, Spark MLlib's KMeans implementation computes Euclidean distance internally. Understanding the implementations here therefore also helps in understanding those frameworks.
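To make the connection to item-based collaborative filtering concrete, below is a minimal sketch that computes pairwise item similarities with the cosineSimilarity(Vector, Vector) method above. The item vectors, the SparkContext sc, and the use of cartesian are illustrative assumptions rather than code from this post, and the enclosing object would need to be serializable for the closure to ship to executors.

import org.apache.spark.mllib.linalg.Vectors

// Hypothetical (item id, feature/rating vector) pairs
val itemVectors = sc.parallelize(Seq(
  ("item1", Vectors.dense(5.0, 3.0, 4.0)),
  ("item2", Vectors.dense(4.0, 3.0, 5.0)),
  ("item3", Vectors.dense(1.0, 1.0, 2.0))
))

val itemSims = itemVectors.cartesian(itemVectors)
  .filter { case ((id1, _), (id2, _)) => id1 < id2 }   // keep each unordered pair once
  .map { case ((id1, v1), (id2, v2)) => (id1, id2, cosineSimilarity(v1, v2)) }

itemSims.collect().foreach(println)

For anything beyond toy data a full cartesian product is too expensive; in practice you would restrict the candidate pairs or use a built-in routine such as RowMatrix.columnSimilarities.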
Recommended articles:
Important | How Spark Determines Partition Parallelism
Two Ways to Integrate Spark Streaming with Kafka