Lesson 4: Scala Pattern Matching, Mastering the Type System, and Reading Spark Source Code


Pattern matching with match / case
scala> def bigData(data: String){
| data match {
| case "Spark" => println("Spark")
| case "Hadoop" => println("Hadoop")
| case _ => println("Other")
| }
| }
bigData: (data: String)Unit

scala> bigData("Hadoop")
Hadoop

scala> bigData("Hadoop1")
Other

Adding a guard condition
def bigData(data: String) {
  data match {
    case "Spark" => println("Spark")
    case "Hadoop" => println("Hadoop")
    case _ if data == "Flink" => println("Cool")
    case _ => println("Other")
  }
}
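Calling it with "Flink" now takes the guarded branch (expected output, given the definition above):

bigData("Flink")   // prints: Cool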

Binding the matched value to a variable
def bigData(data: String) {
  data match {
    case "Spark" => println("Spark")
    case "Hadoop" => println("Hadoop")
    case data_ if data_ == "Flink" => println("Cool : " + data_)
    case _ => println("Other")
  }
}
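Here the wildcard is replaced by a variable pattern, so the matched value is usable on the right-hand side (expected output, given the definition above):

bigData("Flink")   // prints: Cool : Flink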

Matching on type
import java.io._

def exception(e: Exception) {
  e match {
    case fileException: FileNotFoundException => println("File not found : " + fileException)
    case _: Exception => println("Exception")
  }
}
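A quick usage check (expected output, assuming the definition above):

exception(new FileNotFoundException("log.txt"))
// prints: File not found : java.io.FileNotFoundException: log.txt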

Matching arrays

scala> def data(array: Array[String]){
| array match {
| case Array("Scala") => println("Scala") // match exactly this one-element array
| case Array(spark, hadoop, flink) => println(spark) // match any three-element array, binding its elements
| case Array("Spark",_*) => println("Spark ...") // match any array whose first element is "Spark"
| case _ => println("Unknown")
| }
| }
data: (array: Array[String])Unit


scala> data(Array("Spark"))
Spark ...

scala> data(Array("Spark","Flink","Scala"))
Spark

scala> data(Array("Kafka","Flink","Scala","Hadoop"))
Unknown

case classes suit message passing in concurrent programming; defining one automatically generates a companion object (with apply and unapply), and a parameterless message is usually declared as a case object instead.
A case class produces a new instance every time it is used, so many objects get created.
A case object is itself the instance: exactly one, globally unique.

scala> case class Person(name: String)
defined class Person
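A minimal sketch contrasting the two, using illustrative message types (hypothetical, not from any library):

case class Register(name: String)   // a new instance is created for every message
case object Shutdown                // a single, globally unique instance

def handle(msg: Any) {
  msg match {
    case Register(n) => println("register : " + n)
    case Shutdown => println("shutdown")
  }
}

handle(Register("Spark"))   // register : Spark
handle(Shutdown)            // shutdown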

Matching on case classes
scala> class Person
defined class Person

scala> case class Worker(name: String, salary: Double) extends Person
defined class Worker

scala> case class Student(name: String, score: Double) extends Person
defined class Student

scala> def sayHi(person: Person) {
| person match {
| case Student(name,score) => println("I am a Student :" + name + " , score : " + score)
| case Worker(name,salary) => println("I am a Worker :" + name + " , salary : " + salary)
| case _ => println("Unknown")
| }
| }
sayHi: (person: Person)Unit

scala> sayHi(Worker("Spark",6.5))
I am a Worker :Spark , salary : 6.5

Generic classes
scala> class Person[T](val content: T) {
| def getContent(id: T) = id + " _ " + content
| }
defined class Person

scala> val p = new Person[String]("Spark")
p: Person[String] = Person@6024797f

scala> p.getContent("Scala")
res1: String = Scala _ Spark

Type bounds (constraints on the type parameter itself)

Upper bound: the type parameter must be a subclass of the given type, or that type itself:
[T <: Upper]

Lower bound: the type parameter must be a superclass of the given type, or that type itself:
[T >: Lower]

scala> class Pair[T <: Comparable[T]](val first: T, val second: T){
| def bigger = if(first.compareTo(second) > 0) first else second
| }
defined class Pair

scala> new Pair("Spark", "Hadoop").bigger
res4: String = Spark
This places a bound on T itself (a constraint on the type of the variable): [T <: Comparable[T]] means T must be a subtype of Comparable[T], so methods such as compareTo are guaranteed to exist.
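The lower bound goes the other way. A minimal sketch, assuming a small illustrative hierarchy:

class Fruit
class Apple extends Fruit

// push accepts any supertype U of T, so the element type can widen
class Stack[+T](val elems: List[T]) {
  def push[U >: T](u: U): Stack[U] = new Stack(u :: elems)
}

val apples = new Stack[Apple](Nil)
val fruits: Stack[Fruit] = apples.push(new Fruit)   // T widens from Apple to Fruit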


View bounds: implicit conversion on the type (for types with no direct subtype relationship to the bound)
<%
[T <% U] requires an implicit conversion from T to U: T itself need not extend U, as long as it can be implicitly converted to it, so the conversion target effectively acts as an upper bound. (View bounds are deprecated in later Scala versions in favor of context bounds and implicit parameters.)
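A minimal sketch (Scala 2 syntax): Int is not a subtype of Comparable[Int], so the earlier [T <: Comparable[T]] version rejects it, but with <% the standard implicit conversion to RichInt (which implements Comparable) makes it work:

class PairView[T <% Comparable[T]](val first: T, val second: T) {
  def bigger = if (first.compareTo(second) > 0) first else second
}

new PairView(8, 3).bigger   // 8: Int is implicitly viewed as Comparable[Int]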

implicit marks an implicit value or conversion. Note that the example below actually uses a context bound, [T : Ordering]: it requires an implicit Ordering[T] in scope, which the compiler supplies automatically.
scala> class Compare[T : Ordering](val n1: T, val n2: T){
| def bigger(implicit ordered: Ordering[T]) = if(ordered.compare(n1, n2) > 0) n1 else n2
| }
defined class Compare

scala> new Compare[Int](8,3).bigger
res2: Int = 8

scala> new Compare[String]("Spark","Hadoop").bigger
res3: String = Spark


Covariance and contravariance

C[+T] (covariant): if A is a subclass of B, then C[A] is a subclass of C[B].

C[-T] (contravariant): if A is a subclass of B, then C[B] is a subclass of C[A].

C[T] (invariant): whatever the relationship between A and B, C[A] and C[B] have no subtype relationship.

class Person[+T] // declares T as covariant
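A minimal sketch of both annotations, using an illustrative hierarchy:

class Animal
class Dog extends Animal

class Box[+T]       // covariant
class Printer[-T]   // contravariant

val box: Box[Animal] = new Box[Dog]               // ok: Box[Dog] <: Box[Animal]
val printer: Printer[Dog] = new Printer[Animal]   // ok: Printer[Animal] <: Printer[Dog]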


Dependency[_] is an existential type, shorthand for Dependency[T] forSome { type T }: a Dependency of some unknown element type (this form appears throughout the Spark source).
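A minimal sketch with a stand-in class (not Spark's actual Dependency):

class Dependency[T]

// accepts a Dependency of any element type
def describe(dep: Dependency[_]) = dep.getClass.getName

describe(new Dependency[String])
describe(new Dependency[Int])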

Manifest: used via a context bound, [T : Manifest], which puts an implicit Manifest[T] value in scope; it stores the concrete type so the runtime type information survives JVM type erasure. Manifest later evolved into ClassTag.
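A minimal sketch of the ClassTag successor: the context bound [T : ClassTag] keeps the concrete element type available at runtime, which is what Array construction needs despite erasure:

import scala.reflect.ClassTag

def makeArray[T : ClassTag](elems: T*) = Array[T](elems: _*)

makeArray(1, 2, 3)      // Array(1, 2, 3)
makeArray("a", "b")     // Array(a, b)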

