|NO.Z.00041|——————————|BigDataEnd|——|Hadoop&Spark.V02|——|Spark.v02|spark sql|sparksession|
1. Spark SQL Programming
### --- SparkSession official documentation
~~~ Official docs: http://spark.apache.org/docs/latest/sql-getting-started.html

### --- SparkSession
~~~ Before Spark 2.0:
~~~ SQLContext was the entry point for creating DataFrames and executing SQL;
~~~ HiveContext, a subclass of SQLContext, operated on Hive data via Hive SQL statements and was compatible with Hive operations.
~~~ Since Spark 2.0:
~~~ These entry points have been unified in SparkSession, which encapsulates SQLContext and HiveContext;
~~~ it implements all the functionality of SQLContext and HiveContext;
~~~ and the underlying SparkContext can be obtained from a SparkSession (see the sketch below).
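### --- Unified entry point in a standalone application
~~~ A minimal sketch, assuming a standalone Scala application with spark-sql (and, for enableHiveSupport, spark-hive) on the classpath; the app name and master value are placeholders:
import org.apache.spark.sql.SparkSession

object SessionDemo {
  def main(args: Array[String]): Unit = {
    // Build (or reuse) the unified entry point; enableHiveSupport()
    // provides the old HiveContext capabilities (requires spark-hive)
    val spark = SparkSession.builder()
      .appName("SessionDemo")   // hypothetical app name
      .master("local[*]")       // local run for illustration only
      .enableHiveSupport()
      .getOrCreate()

    // The SparkContext is exposed as a field of the session
    val sc = spark.sparkContext
    println(s"appName = ${sc.appName}")

    spark.stop()
  }
}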


### --- SparkSession example in spark-shell
scala> import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.SparkSession

scala> val spark = SparkSession.builder().appName("Spark SQL basic example").config("spark.some.config.option", "some-value").getOrCreate()
21/10/20 14:11:13 WARN SparkSession$Builder: Using an existing SparkSession; some configuration may not take effect.
spark: org.apache.spark.sql.SparkSession = org.apache.spark.sql.SparkSession@2a292566

~~~ # For implicit conversions like converting RDDs to DataFrames
scala> import spark.implicits._
import spark.implicits._

~~~ Note: the builder chain must be entered as a single expression (or via :paste mode). Typing `val spark = SparkSession` on its own line binds `spark` to the companion object, and a later `import spark.implicits._` then fails with "error: value implicits is not a member of object org.apache.spark.sql.SparkSession".
~~~ The WARN above appears because spark-shell already creates a SparkSession named `spark` at startup, so getOrCreate() returns the existing session and the new config options may not take effect.
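~~~ With spark.implicits._ in scope, a local collection can be converted to a DataFrame directly in the shell (toDF is supplied by the import, which is why it must succeed). A minimal sketch; the column names and sample rows are illustrative:
scala> val df = Seq(("Alice", 30), ("Bob", 25)).toDF("name", "age")

scala> df.show()

scala> df.createOrReplaceTempView("people")

scala> spark.sql("SELECT name FROM people WHERE age > 26").show()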