Spark Log Flow

For any job, I logically divide its execution into the following steps:

Create a job.

Submit stage 0, stage 1, stage 2, ..., stage n.

 

Questions

1. How many stages does a job have, and how many RDDs does each stage cover?

2. Which stages failed, which tasks within those stages failed, and what caused the failures?

3. Why did a task run for a long time, and why did a task fail?

4. How do we find out how long a stage waited for resources?
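
The driver log below answers all four questions for this run, but the same information is also exposed programmatically through the listener API. Here is a minimal sketch assuming the Spark 1.5 listener classes (JavaSparkListener, StageInfo, TaskInfo are real classes of that version; the name LoggingListener is hypothetical). For question 4, the gap between StageInfo.submissionTime and the launchTime of the stage's first task approximates how long the stage waited for resources.

import org.apache.spark.JavaSparkListener;
import org.apache.spark.scheduler.SparkListenerStageCompleted;
import org.apache.spark.scheduler.SparkListenerTaskEnd;
import org.apache.spark.scheduler.StageInfo;

// Reports each stage's RDD count and failure reason (questions 1 and 2)
// and each task's duration and end reason (questions 2 and 3).
public class LoggingListener extends JavaSparkListener {
  @Override
  public void onStageCompleted(SparkListenerStageCompleted stageCompleted) {
    StageInfo info = stageCompleted.stageInfo();
    System.out.println("stage " + info.stageId()
        + " covers " + info.rddInfos().size() + " RDDs, "
        + info.numTasks() + " tasks"
        + (info.failureReason().isDefined()
            ? ", FAILED: " + info.failureReason().get() : ""));
  }

  @Override
  public void onTaskEnd(SparkListenerTaskEnd taskEnd) {
    System.out.println("task " + taskEnd.taskInfo().taskId()
        + " of stage " + taskEnd.stageId()
        + " took " + taskEnd.taskInfo().duration() + " ms"
        + ", end reason: " + taskEnd.reason());
  }
}

Register it before running the job with sc.sc().addSparkListener(new LoggingListener()).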

 

 

 

Within a task, the execution time breaks down into:

Time to deserialize the Task

Time to deserialize <rdd, shuffleDependency> or <rdd, func>

Time spent in task.run()

Time to serialize the result of task.run()
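
Each of these phases is measured on the executor and reported back to the driver in TaskMetrics. A minimal sketch of reading them, assuming the Spark 1.5 accessors executorDeserializeTime, executorRunTime, and resultSerializationTime (TaskTimeListener is a hypothetical name; how the two deserialization phases map onto these counters varies by version, so treat the split as approximate):

import org.apache.spark.JavaSparkListener;
import org.apache.spark.executor.TaskMetrics;
import org.apache.spark.scheduler.SparkListenerTaskEnd;

// Splits each finished task's wall-clock time into the phases listed above.
public class TaskTimeListener extends JavaSparkListener {
  @Override
  public void onTaskEnd(SparkListenerTaskEnd taskEnd) {
    TaskMetrics m = taskEnd.taskMetrics();
    if (m == null) return; // metrics may be absent for failed tasks
    System.out.println("task " + taskEnd.taskInfo().taskId()
        + ": deserialize=" + m.executorDeserializeTime() + " ms"
        + ", run=" + m.executorRunTime() + " ms"
        + ", serialize result=" + m.resultSerializationTime() + " ms");
  }
}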

 

 

The word-count program used in this walkthrough:
import java.util.Arrays;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.api.java.function.FlatMapFunction;
import org.apache.spark.api.java.function.Function2;
import org.apache.spark.api.java.function.PairFunction;
import org.apache.spark.api.java.function.VoidFunction;

import scala.Tuple2;

public class HelloWord {
  public static void main(String[] args) throws Exception {
      SparkConf conf = new SparkConf().setMaster("local").setAppName("wordcount");
      JavaSparkContext sc = new JavaSparkContext(conf);

      // Read the input as 2 partitions, so each stage below runs 2 tasks.
      JavaRDD<String> lines = sc.textFile("D:\\javaproject1\\spark-1.5.1-bin-hadoop2.4\\README.md", 2);

      // Split every line into words.
      JavaRDD<String> words = lines.flatMap(new FlatMapFunction<String, String>() {
        private static final long serialVersionUID = 1L;

        @Override
        public Iterable<String> call(String line) throws Exception {
            return Arrays.asList(line.split(" "));
        }
      });

      // Map each word to a (word, 1) pair.
      JavaPairRDD<String, Integer> pairs = words.mapToPair(new PairFunction<String, String, Integer>() {
        private static final long serialVersionUID = 1L;

        @Override
        public Tuple2<String, Integer> call(String word) throws Exception {
            return new Tuple2<String, Integer>(word, 1);
        }
      });

      // reduceByKey adds a shuffle dependency, which splits the job into
      // ShuffleMapStage 0 and ResultStage 1.
      JavaPairRDD<String, Integer> result = pairs.reduceByKey(new Function2<Integer, Integer, Integer>() {
        private static final long serialVersionUID = 1L;

        @Override
        public Integer call(Integer a, Integer b) throws Exception {
            return a + b;
        }
      });

      // foreach is the action that triggers the job.
      result.foreach(new VoidFunction<Tuple2<String, Integer>>() {
        private static final long serialVersionUID = 1L;

        @Override
        public void call(Tuple2<String, Integer> word) throws Exception {
            System.out.println(word._1 + "appeared:" + word._2 + "times");
        }
      });
  }
}

 

 

Running the program produces the following driver log:

Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
16/03/07 12:17:09 INFO SparkContext: Running Spark version 1.5.1
16/03/07 12:17:09 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
16/03/07 12:17:10 ERROR Shell: Failed to locate the winutils binary in the hadoop binary path
java.io.IOException: Could not locate executable null\bin\winutils.exe in the Hadoop binaries.
at org.apache.hadoop.util.Shell.getQualifiedBinPath(Shell.java:318)
at org.apache.hadoop.util.Shell.getWinUtilsPath(Shell.java:333)
at org.apache.hadoop.util.Shell.<clinit>(Shell.java:326)
at org.apache.hadoop.util.StringUtils.<clinit>(StringUtils.java:76)
at org.apache.hadoop.security.Groups.parseStaticMapping(Groups.java:93)
at org.apache.hadoop.security.Groups.<init>(Groups.java:77)
at org.apache.hadoop.security.Groups.getUserToGroupsMappingService(Groups.java:240)
at org.apache.hadoop.security.UserGroupInformation.initialize(UserGroupInformation.java:255)
at org.apache.hadoop.security.UserGroupInformation.ensureInitialized(UserGroupInformation.java:232)
at org.apache.hadoop.security.UserGroupInformation.loginUserFromSubject(UserGroupInformation.java:718)
at org.apache.hadoop.security.UserGroupInformation.getLoginUser(UserGroupInformation.java:703)
at org.apache.hadoop.security.UserGroupInformation.getCurrentUser(UserGroupInformation.java:605)
at org.apache.spark.util.Utils$$anonfun$getCurrentUserName$1.apply(Utils.scala:2084)
at org.apache.spark.util.Utils$$anonfun$getCurrentUserName$1.apply(Utils.scala:2084)
at scala.Option.getOrElse(Option.scala:120)
at org.apache.spark.util.Utils$.getCurrentUserName(Utils.scala:2084)
at org.apache.spark.SparkContext.<init>(SparkContext.scala:311)
at org.apache.spark.api.java.JavaSparkContext.<init>(JavaSparkContext.scala:61)
at HelloWord.main(HelloWord.java:19)
16/03/07 12:17:10 INFO SecurityManager: Changing view acls to: francis
16/03/07 12:17:10 INFO SecurityManager: Changing modify acls to: francis
16/03/07 12:17:10 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(francis); users with modify permissions: Set(francis)
16/03/07 12:17:11 INFO Slf4jLogger: Slf4jLogger started
16/03/07 12:17:11 INFO Remoting: Starting remoting
16/03/07 12:17:11 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkDriver@192.168.56.1:49185]
16/03/07 12:17:11 INFO Utils: Successfully started service 'sparkDriver' on port 49185.
16/03/07 12:17:11 INFO SparkEnv: Registering MapOutputTracker
16/03/07 12:17:11 INFO SparkEnv: Registering BlockManagerMaster
16/03/07 12:17:11 INFO DiskBlockManager: Created local directory at C:\Users\francis\AppData\Local\Temp\blockmgr-dce244db-5182-4d78-a5ce-07a36e0da8a9
16/03/07 12:17:11 INFO MemoryStore: MemoryStore started with capacity 480.1 MB
16/03/07 12:17:11 INFO HttpFileServer: HTTP File server directory is C:\Users\francis\AppData\Local\Temp\spark-19b52e42-53aa-45ae-9b41-b30ef039d3da\httpd-ff99ebba-2b34-4738-a3bc-a8bf515dee3d
16/03/07 12:17:11 INFO HttpServer: Starting HTTP Server
16/03/07 12:17:11 INFO Utils: Successfully started service 'HTTP file server' on port 49186.
16/03/07 12:17:11 INFO SparkEnv: Registering OutputCommitCoordinator
16/03/07 12:17:11 INFO Utils: Successfully started service 'SparkUI' on port 4040.
16/03/07 12:17:11 INFO SparkUI: Started SparkUI at http://192.168.56.1:4040
16/03/07 12:17:12 WARN MetricsSystem: Using default name DAGScheduler for source because spark.app.id is not set.
16/03/07 12:17:12 INFO Executor: Starting executor ID driver on host localhost
16/03/07 12:17:12 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 49206.
16/03/07 12:17:12 INFO NettyBlockTransferService: Server created on 49206
16/03/07 12:17:12 INFO BlockManagerMaster: Trying to register BlockManager
16/03/07 12:17:12 INFO BlockManagerMasterEndpoint: Registering block manager localhost:49206 with 480.1 MB RAM, BlockManagerId(driver, localhost, 49206)
16/03/07 12:17:12 INFO BlockManagerMaster: Registered BlockManager
16/03/07 12:17:13 INFO MemoryStore: ensureFreeSpace(120040) called with curMem=0, maxMem=503379394
16/03/07 12:17:13 INFO MemoryStore: Block broadcast_0 stored as values in memory (estimated size 117.2 KB, free 479.9 MB)
16/03/07 12:17:13 INFO MemoryStore: ensureFreeSpace(12673) called with curMem=120040, maxMem=503379394
16/03/07 12:17:13 INFO MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 12.4 KB, free 479.9 MB)
16/03/07 12:17:13 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on localhost:49206 (size: 12.4 KB, free: 480.0 MB)
16/03/07 12:17:13 INFO SparkContext: Created broadcast 0 from textFile at HelloWord.java:20
16/03/07 12:17:14 WARN : Your hostname, franciswang resolves to a loopback/non-reachable address: fe80:0:0:0:8d9d:9a04:7f3c:2558%wlan11, but we couldn't find any external IP address!
16/03/07 12:17:16 INFO FileInputFormat: Total input paths to process : 1

 

1. Create the job: create the finalStage and its parent stages.
16/03/07 12:17:16 INFO SparkContext: Starting job: foreach at HelloWord.java:67
16/03/07 12:17:16 INFO DAGScheduler: Registering RDD 3 (mapToPair at HelloWord.java:37)
16/03/07 12:17:16 INFO DAGScheduler: Got job 0 (foreach at HelloWord.java:67) with 2 output partitions
16/03/07 12:17:16 INFO DAGScheduler: Final stage: ResultStage 1(foreach at HelloWord.java:67)
16/03/07 12:17:16 INFO DAGScheduler: Parents of final stage: List(ShuffleMapStage 0)
16/03/07 12:17:16 INFO DAGScheduler: Missing parents: List(ShuffleMapStage 0)
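
The stage structure can also be inspected before the job runs by printing the final RDD's lineage; every shuffle dependency in the chain becomes a stage boundary. A one-line addition to the program above (the exact output format depends on the Spark version):

// Prints the lineage, e.g. ShuffledRDD[4] <- MapPartitionsRDD[3] <- ...;
// indentation shifts in the output mark the shuffle (stage) boundaries.
System.out.println(result.toDebugString());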

 

2. Package stage 0 into a TaskSet and submit it to TaskSchedulerImpl. TaskSchedulerImpl creates a TaskSetManager for the TaskSet and submits it to the scheduling pool. CoarseGrainedSchedulerBackend then obtains the list of available cluster resources (workerOffers), takes the TaskSetManager out of the pool, schedules resources for its tasks, and finally launches the tasks on the workerOffers.
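
To make the flow concrete, here is a self-contained toy model of it. This is NOT Spark source code: the class names mirror the real (Scala) internals described above, but the bodies are schematic stand-ins.

import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.List;
import java.util.Queue;

public class SchedulingSketch {
  static class Task { final int id; Task(int id) { this.id = id; } }
  static class TaskSet { final List<Task> tasks; TaskSet(List<Task> t) { tasks = t; } }

  // TaskSchedulerImpl wraps each TaskSet in a TaskSetManager and pools it.
  static class TaskSetManager {
    final Queue<Task> pending;
    TaskSetManager(TaskSet ts) { pending = new ArrayDeque<Task>(ts.tasks); }
    Task resourceOffer() { return pending.poll(); } // next runnable task, or null
  }

  static class WorkerOffer { final String host; WorkerOffer(String h) { host = h; } }

  public static void main(String[] args) {
    // 1. DAGScheduler packages the stage's missing tasks into a TaskSet.
    List<Task> tasks = new ArrayList<Task>();
    tasks.add(new Task(0));
    tasks.add(new Task(1));

    // 2. TaskSchedulerImpl creates a TaskSetManager and adds it to the pool.
    Queue<TaskSetManager> pool = new ArrayDeque<TaskSetManager>();
    pool.add(new TaskSetManager(new TaskSet(tasks)));

    // 3. The backend collects the available resources (workerOffers) ...
    List<WorkerOffer> offers = new ArrayList<WorkerOffer>();
    offers.add(new WorkerOffer("localhost"));

    // 4. ... then drains the pool, launching tasks on the offers.
    int i = 0;
    for (TaskSetManager m : pool) {
      Task t;
      while ((t = m.resourceOffer()) != null) {
        WorkerOffer o = offers.get(i++ % offers.size()); // round-robin
        System.out.println("launching task " + t.id + " on " + o.host);
      }
    }
  }
}

The log below traces this flow for stage 0.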


16/03/07 12:17:16 INFO DAGScheduler: Submitting ShuffleMapStage 0 (MapPartitionsRDD[3] at mapToPair at HelloWord.java:37), which has no missing parents // this stage has no parent stages left to submit
16/03/07 12:17:16 INFO MemoryStore: ensureFreeSpace(4736) called with curMem=132713, maxMem=503379394
16/03/07 12:17:16 INFO MemoryStore: Block broadcast_1 stored as values in memory (estimated size 4.6 KB, free 479.9 MB)
16/03/07 12:17:16 INFO MemoryStore: ensureFreeSpace(2662) called with curMem=137449, maxMem=503379394
16/03/07 12:17:16 INFO MemoryStore: Block broadcast_1_piece0 stored as bytes in memory (estimated size 2.6 KB, free 479.9 MB)
16/03/07 12:17:16 INFO BlockManagerInfo: Added broadcast_1_piece0 in memory on localhost:49206 (size: 2.6 KB, free: 480.0 MB)
16/03/07 12:17:16 INFO SparkContext: Created broadcast 1 from broadcast at DAGScheduler.scala:861

// the stage is put into the scheduling pool
16/03/07 12:17:16 INFO DAGScheduler: Submitting 2 missing tasks from ShuffleMapStage 0 (MapPartitionsRDD[3] at mapToPair at HelloWord.java:37)
16/03/07 12:17:16 INFO TaskSchedulerImpl: Adding task set 0.0 with 2 tasks

// task 0 gets resources, and the Executor starts running task 0 of stage 0

16/03/07 12:17:16 INFO TaskSetManager: Starting task 0.0 in stage 0.0 (TID 0, localhost, PROCESS_LOCAL, 2155 bytes)
16/03/07 12:17:16 INFO Executor: Running task 0.0 in stage 0.0 (TID 0)
16/03/07 12:17:16 INFO HadoopRDD: Input split: file:/D:/javaproject1/spark-1.5.1-bin-hadoop2.4/README.md:0+1796
16/03/07 12:17:16 INFO deprecation: mapred.tip.id is deprecated. Instead, use mapreduce.task.id
16/03/07 12:17:16 INFO deprecation: mapred.task.id is deprecated. Instead, use mapreduce.task.attempt.id
16/03/07 12:17:16 INFO deprecation: mapred.task.is.map is deprecated. Instead, use mapreduce.task.ismap
16/03/07 12:17:16 INFO deprecation: mapred.task.partition is deprecated. Instead, use mapreduce.task.partition
16/03/07 12:17:16 INFO deprecation: mapred.job.id is deprecated. Instead, use mapreduce.job.id
16/03/07 12:17:16 INFO Executor: Finished task 0.0 in stage 0.0 (TID 0). 2254 bytes result sent to driver

 

// task 1 gets resources, and the Executor starts running task 1 of stage 0
16/03/07 12:17:16 INFO TaskSetManager: Starting task 1.0 in stage 0.0 (TID 1, localhost, PROCESS_LOCAL, 2155 bytes)

16/03/07 12:17:16 INFO Executor: Running task 1.0 in stage 0.0 (TID 1)
16/03/07 12:17:16 INFO HadoopRDD: Input split: file:/D:/javaproject1/spark-1.5.1-bin-hadoop2.4/README.md:1796+1797
16/03/07 12:17:16 INFO TaskSetManager: Finished task 0.0 in stage 0.0 (TID 0) in 299 ms on localhost (1/2)
16/03/07 12:17:16 INFO Executor: Finished task 1.0 in stage 0.0 (TID 1). 2254 bytes result sent to driver

All tasks in stage 0 have finished running:

16/03/07 12:17:16 INFO DAGScheduler: ShuffleMapStage 0 (mapToPair at HelloWord.java:37) finished in 0.485 s

 


16/03/07 12:17:16 INFO DAGScheduler: looking for newly runnable stages
16/03/07 12:17:16 INFO DAGScheduler: running: Set()
16/03/07 12:17:16 INFO DAGScheduler: waiting: Set(ResultStage 1)
16/03/07 12:17:16 INFO DAGScheduler: failed: Set()
16/03/07 12:17:16 INFO TaskSetManager: Finished task 1.0 in stage 0.0 (TID 1) in 184 ms on localhost (2/2)

// since all tasks of the stage have completed, the TaskSetManager is removed from the pool

16/03/07 12:17:16 INFO TaskSchedulerImpl: Removed TaskSet 0.0, whose tasks have all completed, from pool

 

 

3. Submit stage 1.
16/03/07 12:17:16 INFO DAGScheduler: Missing parents for ResultStage 1: List()
16/03/07 12:17:16 INFO DAGScheduler: Submitting ResultStage 1 (ShuffledRDD[4] at reduceByKey at HelloWord.java:52), which is now runnable
16/03/07 12:17:16 INFO MemoryStore: ensureFreeSpace(2432) called with curMem=140111, maxMem=503379394
16/03/07 12:17:16 INFO MemoryStore: Block broadcast_2 stored as values in memory (estimated size 2.4 KB, free 479.9 MB)
16/03/07 12:17:16 INFO MemoryStore: ensureFreeSpace(1484) called with curMem=142543, maxMem=503379394
16/03/07 12:17:16 INFO MemoryStore: Block broadcast_2_piece0 stored as bytes in memory (estimated size 1484.0 B, free 479.9 MB)
16/03/07 12:17:16 INFO BlockManagerInfo: Added broadcast_2_piece0 in memory on localhost:49206 (size: 1484.0 B, free: 480.0 MB)
16/03/07 12:17:16 INFO SparkContext: Created broadcast 2 from broadcast at DAGScheduler.scala:861


16/03/07 12:17:16 INFO DAGScheduler: Submitting 2 missing tasks from ResultStage 1 (ShuffledRDD[4] at reduceByKey at HelloWord.java:52)
16/03/07 12:17:16 INFO TaskSchedulerImpl: Adding task set 1.0 with 2 tasks
16/03/07 12:17:16 INFO TaskSetManager: Starting task 0.0 in stage 1.0 (TID 2, localhost, PROCESS_LOCAL, 1901 bytes)
16/03/07 12:17:16 INFO Executor: Running task 0.0 in stage 1.0 (TID 2)
16/03/07 12:17:16 INFO ShuffleBlockFetcherIterator: Getting 2 non-empty blocks out of 2 blocks
16/03/07 12:17:16 INFO ShuffleBlockFetcherIterator: Started 0 remote fetches in 5 ms
packageappeared:1times
thisappeared:1times
Version"](http://spark.apache.org/docs/latest/building-spark.html#specifying-the-hadoop-version)appeared:1times
Becauseappeared:1times
Pythonappeared:2times
cluster.appeared:1times
itsappeared:1times
[runappeared:1times
generalappeared:2times
haveappeared:1times
pre-builtappeared:1times
locally.appeared:1times
locallyappeared:2times
changedappeared:1times
sc.parallelize(1appeared:1times
onlyappeared:1times
severalappeared:1times
Thisappeared:2times
basicappeared:1times
Configurationappeared:1times
learning,appeared:1times
documentationappeared:3times
YARN,appeared:1times
graphappeared:1times
Hiveappeared:2times
firstappeared:1times
["Specifyingappeared:1times
"yarn-client"appeared:1times
page](http://spark.apache.org/documentation.html)appeared:1times
[params]`.appeared:1times
applicationappeared:1times
[projectappeared:2times
preferappeared:1times
SparkPiappeared:2times
<http://spark.apache.org/>appeared:1times
engineappeared:1times
versionappeared:1times
fileappeared:1times
documentation,appeared:1times
MASTERappeared:1times
exampleappeared:3times
distribution.appeared:1times
areappeared:1times
paramsappeared:1times
scala>appeared:1times
DataFrames,appeared:1times
providesappeared:1times
referappeared:2times
configureappeared:1times
Interactiveappeared:2times
R,appeared:1times
canappeared:6times
buildappeared:3times
whenappeared:1times
easiestappeared:1times
Apacheappeared:1times
systems.appeared:1times
Distributions"](http://spark.apache.org/docs/latest/hadoop-third-party-distributions.html)appeared:1times
worksappeared:1times
howappeared:2times
package.appeared:1times
1000).count()appeared:1times
Noteappeared:1times
Data.appeared:1times
>>>appeared:1times
Scalaappeared:2times
Alternatively,appeared:1times
variableappeared:1times
submitappeared:1times
Testingappeared:1times
Streamingappeared:1times
module,appeared:1times
thread,appeared:1times
richappeared:1times
them,appeared:1times
detailedappeared:2times
streamappeared:1times
GraphXappeared:1times
distributionappeared:1times
["Thirdappeared:1times
Pleaseappeared:3times
returnappeared:2times
isappeared:6times
Thriftserverappeared:1times
sameappeared:1times
startappeared:1times
builtappeared:1times
oneappeared:2times
withappeared:4times
Partyappeared:1times
Spark](#building-spark).appeared:1times
Spark"](http://spark.apache.org/docs/latest/building-spark.html).appeared:1times
dataappeared:1times
wiki](https://cwiki.apache.org/confluence/display/SPARK).appeared:1times
usingappeared:2times
talkappeared:1times
Shellappeared:2times
classappeared:2times
READMEappeared:1times
computingappeared:1times
Python,appeared:2times
example:appeared:1times
##appeared:8times
fromappeared:1times
setappeared:2times
buildingappeared:3times
Nappeared:1times
Hadoop-supportedappeared:1times
otherappeared:1times
Exampleappeared:1times
analysis.appeared:1times
runs.appeared:1times
Buildingappeared:1times
higher-levelappeared:1times
needappeared:1times
Bigappeared:1times
fastappeared:1times
guide,appeared:1times
Java,appeared:1times
<class>appeared:1times
usesappeared:1times
SQLappeared:2times
willappeared:1times
guidanceappeared:3times
requiresappeared:1times
appeared:67times
Documentationappeared:1times
webappeared:1times
clusterappeared:2times
using:appeared:1times
MLlibappeared:1times
shell:appeared:2times
Scala,appeared:1times
supportsappeared:2times
built,appeared:1times
./dev/run-testsappeared:1times
build/mvnappeared:1times
sampleappeared:1times
16/03/07 12:17:16 INFO Executor: Finished task 0.0 in stage 1.0 (TID 2). 1165 bytes result sent to driver
16/03/07 12:17:16 INFO TaskSetManager: Starting task 1.0 in stage 1.0 (TID 3, localhost, PROCESS_LOCAL, 1901 bytes)
16/03/07 12:17:16 INFO Executor: Running task 1.0 in stage 1.0 (TID 3)
16/03/07 12:17:16 INFO ShuffleBlockFetcherIterator: Getting 2 non-empty blocks out of 2 blocks
16/03/07 12:17:16 INFO ShuffleBlockFetcherIterator: Started 0 remote fetches in 1 ms
Forappeared:2times
16/03/07 12:17:16 INFO TaskSetManager: Finished task 0.0 in stage 1.0 (TID 2) in 134 ms on localhost (1/2)
Programsappeared:1times
Sparkappeared:14times
particularappeared:3times
Theappeared:1times
processing.appeared:1times
APIsappeared:1times
computationappeared:1times
Tryappeared:1times
[Configurationappeared:1times
./bin/pysparkappeared:1times
Aappeared:1times
throughappeared:1times
#appeared:1times
libraryappeared:1times
followingappeared:2times
"yarn-cluster"appeared:1times
Moreappeared:1times
whichappeared:2times
Seeappeared:1times
alsoappeared:5times
storageappeared:1times
shouldappeared:2times
Toappeared:2times
forappeared:12times
Onceappeared:1times
setupappeared:1times
mesos://appeared:1times
Maven](http://maven.apache.org/).appeared:1times
latestappeared:1times
processing,appeared:1times
theappeared:21times
yourappeared:1times
notappeared:1times
differentappeared:1times
distributions.appeared:1times
given.appeared:1times
Aboutappeared:1times
ifappeared:4times
instructions.appeared:1times
beappeared:2times
doappeared:2times
Testsappeared:1times
noappeared:1times
./bin/run-exampleappeared:2times
programs,appeared:1times
includingappeared:3times
`./bin/run-exampleappeared:1times
Spark.appeared:1times
Versionsappeared:1times
HDFSappeared:1times
individualappeared:1times
spark://appeared:1times
Itappeared:2times
anappeared:3times
programmingappeared:1times
machineappeared:1times
run:appeared:1times
environmentappeared:1times
cleanappeared:1times
1000:appeared:2times
Andappeared:1times
runappeared:7times
./bin/spark-shellappeared:1times
URL,appeared:1times
"local"appeared:1times
MASTER=spark://host:7077appeared:1times
onappeared:6times
Youappeared:3times
threads.appeared:1times
againstappeared:1times
[Apacheappeared:1times
helpappeared:1times
printappeared:1times
testsappeared:2times
examplesappeared:2times
atappeared:2times
inappeared:5times
-DskipTestsappeared:1times
optimizedappeared:1times
downloadedappeared:1times
versionsappeared:1times
graphsappeared:1times
Guide](http://spark.apache.org/docs/latest/configuration.html)appeared:1times
onlineappeared:1times
usageappeared:1times
abbreviatedappeared:1times
comesappeared:1times
directory.appeared:1times
overviewappeared:1times
[buildingappeared:1times
`examples`appeared:2times
Manyappeared:1times
Runningappeared:1times
wayappeared:1times
useappeared:3times
Onlineappeared:1times
site,appeared:1times
tests](https://cwiki.apache.org/confluence/display/SPARK/Useful+Developer+Tools).appeared:1times
runningappeared:1times
findappeared:1times
sc.parallelize(range(1000)).count()appeared:1times
containsappeared:1times
projectappeared:1times
youappeared:4times
Piappeared:1times
thatappeared:3times
protocolsappeared:1times
aappeared:10times
orappeared:4times
high-levelappeared:1times
nameappeared:1times
Hadoop,appeared:2times
toappeared:14times
availableappeared:1times
(Youappeared:1times
coreappeared:1times
instance:appeared:1times
seeappeared:1times
ofappeared:5times
toolsappeared:1times
"local[N]"appeared:1times
programsappeared:2times
package.)appeared:1times
["Buildingappeared:1times
mustappeared:1times
andappeared:10times
command,appeared:2times
systemappeared:1times
Hadoopappeared:4times
16/03/07 12:17:16 INFO Executor: Finished task 1.0 in stage 1.0 (TID 3). 1165 bytes result sent to driver
16/03/07 12:17:16 INFO DAGScheduler: ResultStage 1 (foreach at HelloWord.java:67) finished in 0.168 s
16/03/07 12:17:16 INFO TaskSetManager: Finished task 1.0 in stage 1.0 (TID 3) in 54 ms on localhost (2/2)
16/03/07 12:17:16 INFO TaskSchedulerImpl: Removed TaskSet 1.0, whose tasks have all completed, from pool

 

Job 0 has finished running:
16/03/07 12:17:16 INFO DAGScheduler: Job 0 finished: foreach at HelloWord.java:67, took 0.870726 s
16/03/07 12:17:16 INFO SparkContext: Invoking stop() from shutdown hook
16/03/07 12:17:17 INFO SparkUI: Stopped Spark web UI at http://192.168.56.1:4040
16/03/07 12:17:17 INFO DAGScheduler: Stopping DAGScheduler
16/03/07 12:17:17 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
16/03/07 12:17:17 INFO MemoryStore: MemoryStore cleared
16/03/07 12:17:17 INFO BlockManager: BlockManager stopped
16/03/07 12:17:17 INFO BlockManagerMaster: BlockManagerMaster stopped
16/03/07 12:17:17 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
16/03/07 12:17:17 INFO SparkContext: Successfully stopped SparkContext
16/03/07 12:17:17 INFO ShutdownHookManager: Shutdown hook called
16/03/07 12:17:17 INFO ShutdownHookManager: Deleting directory C:\Users\francis\AppData\Local\Temp\spark-19b52e42-53aa-45ae-9b41-b30ef039d3da
