Spark resources

http://spark.apache.org/docs/latest/programming-guide.html#rdd-operations

http://m.blog.csdn.net/article/details?id=51176969

http://www.cnblogs.com/yrqiang/p/5321368.html

http://www.youmeek.com/category/software-system/my-intellij-idea/

http://www.cnblogs.com/eczhou/p/5216918.html


http://blog.csdn.net/qf0129/article/details/48265987

There is an easier way to do this: if you already have pip installed, installing scipy will pull in numpy as a dependency. On Unix-like environments: sudo pip install scipy. On Windows-like environments: pip install scipy.
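A quick way to confirm that numpy came along with the scipy install is to probe the module search path, a minimal sketch (the two package names are the only assumption):

```python
# Check whether numpy and scipy are importable, without importing them.
import importlib.util

for pkg in ("numpy", "scipy"):
    found = importlib.util.find_spec(pkg) is not None
    print(pkg, "is installed" if found else "is missing")
```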


from sys import path

path.append(r"d:\pythonfiles")  # raw string, so "\p" is not treated as an escape

import XXX

The argument passed to path.append should be the (absolute) path of the directory containing the .py file you want to import; appending it guarantees the module is within Python's search scope.
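The steps above can be sketched end-to-end with a runnable example; the directory is a throwaway temp folder and "mymodule" is a hypothetical stand-in for the module you want to import:

```python
import sys
import tempfile
from pathlib import Path

# Create a throwaway directory holding a hypothetical module,
# standing in for a folder like d:\pythonfiles.
module_dir = Path(tempfile.mkdtemp())
(module_dir / "mymodule.py").write_text("GREETING = 'hello'\n")

# Append the *directory* (not the .py file itself) to sys.path,
# which makes every module inside it importable.
sys.path.append(str(module_dir))

import mymodule
print(mymodule.GREETING)  # → hello
```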

http://www.infoq.com/cn/articles/spark-core-rdd/


# Turn each row returned by a DB-API cursor into a dict keyed by column name.
import pprint

cursor.execute(query)
columns = cursor.description  # DB-API column metadata; entry [0] is the column name
result = []
for row in cursor.fetchall():
    tmp = {}
    for index, value in enumerate(row):
        tmp[columns[index][0]] = value
    result.append(tmp)
pprint.pprint(result)
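The same rows-to-dicts pattern can be tried in a self-contained way with sqlite3, so it runs without an external database; the table and data here are made up for illustration:

```python
import pprint
import sqlite3

# In-memory database standing in for a real connection.
conn = sqlite3.connect(":memory:")
cursor = conn.cursor()
cursor.execute("CREATE TABLE users (id INTEGER, name TEXT)")
cursor.execute("INSERT INTO users VALUES (1, 'alice'), (2, 'bob')")

# Same pattern: map column names from cursor.description onto each row.
cursor.execute("SELECT id, name FROM users")
columns = cursor.description
result = []
for row in cursor.fetchall():
    tmp = {}
    for index, value in enumerate(row):
        tmp[columns[index][0]] = value
    result.append(tmp)
pprint.pprint(result)  # → [{'id': 1, 'name': 'alice'}, {'id': 2, 'name': 'bob'}]
conn.close()
```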


posted @ 2016-05-23 17:48  小毛驴