Examples of commonly used PySpark RDD functions

## mapPartitions
# mapPartitions calls model_pred once per partition; each row becomes
# [id, name, prediction], where row[0], row[1] are identifiers and row[2:] holds the features.
def model_pred(partitionData):
    updatedData = []
    for row in partitionData:
        # model is a broadcast variable; model.value is the trained estimator
        pred_value = model.value.predict([row[2:]])[0]
        pred_value = float(round(pred_value, 4))
        updatedData.append([row[0], row[1], pred_value])
    return iter(updatedData)

pred = df.rdd.mapPartitions(model_pred).toDF(['p_number', 'name', 'score'])

The `model` object must be broadcast to the executors beforehand (via `SparkContext.broadcast`) so that `model.value` is available inside `model_pred`; a sketch of this setup follows below.
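
A minimal sketch of that broadcast step, assuming a scikit-learn regressor and a `SparkSession` named `spark` (the toy training data, the `feat1` column, and the app name are illustrative placeholders, not from the original post); it reuses the `model_pred` function defined above:

```python
# Sketch only: the names spark, trained_model, feat1 and the toy data are assumptions.
from pyspark.sql import SparkSession
from sklearn.linear_model import LinearRegression
import numpy as np

spark = SparkSession.builder.appName("mapPartitions-pred").getOrCreate()
sc = spark.sparkContext

# Fit (or load) the model once on the driver ...
trained_model = LinearRegression().fit(np.arange(10).reshape(-1, 1), np.arange(10))

# ... then broadcast it; executors read it through model.value inside model_pred.
model = sc.broadcast(trained_model)

# Toy input: the first two columns are identifiers, the rest are features,
# matching the row[0], row[1], row[2:] layout used by model_pred.
df = spark.createDataFrame(
    [(1, "a", 0.5), (2, "b", 1.5)],
    ["p_number", "name", "feat1"],
)

pred = df.rdd.mapPartitions(model_pred).toDF(["p_number", "name", "score"])
pred.show()
```

Broadcasting ships the fitted model to each executor once and caches it there, instead of serializing it into every task closure, which matters when the model object is large.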

 
