1. Spark Runtime Architecture


The Spark runtime architecture is shown in the figure below.
RDDs are linked to one another by dependencies, and these dependencies form a directed acyclic graph (DAG). DAGScheduler splits this DAG into stages with a simple rule: traverse the lineage backwards from the final RDD; a narrow dependency joins the current stage, while a wide (shuffle) dependency ends the current stage and starts a new one. Once the stages are determined, DAGScheduler builds a TaskSet for each stage and submits it to TaskScheduler, which performs the concrete task scheduling and launches tasks on the Worker nodes.
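The splitting rule above can be sketched as a toy backwards traversal in plain Scala (no Spark dependency; all names here are invented for illustration):

```scala
// Toy model of the stage-splitting rule: walk the lineage backwards,
// keep narrow dependencies in the current stage, cut at wide dependencies.
sealed trait Dep
case object Narrow extends Dep
case object Wide extends Dep

// One RDD in the lineage plus the dependency linking it to its parent (if any).
case class Lineage(name: String, parent: Option[(Dep, Lineage)])

// Returns the stages in execution order, each as the list of RDD names it contains.
def splitStages(last: Lineage): List[List[String]] = {
  def go(node: Lineage, current: List[String], done: List[List[String]]): List[List[String]] =
    node.parent match {
      case None                   => (node.name :: current) :: done
      case Some((Narrow, parent)) => go(parent, node.name :: current, done)               // same stage
      case Some((Wide, parent))   => go(parent, Nil, (node.name :: current) :: done)      // cut: new stage
    }
  go(last, Nil, Nil)
}

// A word-count-like lineage: textFile -> flatMap -> map -> reduceByKey (shuffle).
val base   = Lineage("textFile", None)
val words  = Lineage("flatMap", Some((Narrow, base)))
val pairs  = Lineage("map", Some((Narrow, words)))
val counts = Lineage("reduceByKey", Some((Wide, pairs)))

val stages = splitStages(counts)
// The three narrow transformations share one stage; the shuffle starts a new one.
```

This mirrors why a chain of map-like operations runs as a single pipelined stage, while each shuffle boundary adds a stage.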




2. Source Code Analysis: DAG Splitting in DAGScheduler
    When an action (e.g. collect) is triggered on an RDD, SparkContext.runJob is executed. SparkContext.runJob calls DAGScheduler.runJob, which ultimately calls DAGScheduler.submitJob:
def submitJob[T, U](
    rdd: RDD[T],
    func: (TaskContext, Iterator[T]) => U,
    partitions: Seq[Int],
    callSite: CallSite,
    resultHandler: (Int, U) => Unit,
    properties: Properties): JobWaiter[U] = {
  // Check to make sure we are not launching a task on a partition that does not exist.
  val maxPartitions = rdd.partitions.length
  partitions.find(p => p >= maxPartitions || p < 0).foreach { p =>
    throw new IllegalArgumentException(
      "Attempting to access a non-existent partition: " + p + ". " +
      "Total number of partitions: " + maxPartitions)
  }
  val jobId = nextJobId.getAndIncrement()
  if (partitions.size == 0) {
    // Return immediately if the job is running 0 tasks
    return new JobWaiter[U](this, jobId, 0, resultHandler)
  }
  assert(partitions.size > 0)
  val func2 = func.asInstanceOf[(TaskContext, Iterator[_]) => _]
  val waiter = new JobWaiter(this, jobId, partitions.size, resultHandler)
  // Post a JobSubmitted message to eventProcessLoop
  eventProcessLoop.post(JobSubmitted(
    jobId, rdd, func2, partitions.toArray, callSite, waiter,
    SerializationUtils.clone(properties)))
  waiter
}
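submitJob returns a JobWaiter, which lets the caller wait until every partition has reported a result. A toy version of that pattern in plain Scala (ToyJobWaiter and its members are invented names, not Spark's JobWaiter API):

```scala
import java.util.concurrent.atomic.AtomicInteger
import scala.concurrent.{Future, Promise}

// Toy JobWaiter: forwards each partition's result to the caller's handler
// and completes a future once all partitions have finished.
class ToyJobWaiter[U](totalTasks: Int, resultHandler: (Int, U) => Unit) {
  private val finished = new AtomicInteger(0)
  private val promise = Promise[Unit]()

  def taskSucceeded(index: Int, result: U): Unit = {
    resultHandler(index, result)
    if (finished.incrementAndGet() == totalTasks) promise.success(())
  }

  def completionFuture: Future[Unit] = promise.future
}

// Usage: a "job" over two partitions whose results land in an array.
val results = new Array[Int](2)
val waiter = new ToyJobWaiter[Int](2, (i, r) => results(i) = r)
waiter.taskSucceeded(0, 10)
waiter.taskSucceeded(1, 20)
// completionFuture is now completed: both partitions reported a result.
```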

In submitJob, DAGScheduler posts a JobSubmitted message to the eventProcessLoop object. eventProcessLoop is an instance of DAGSchedulerEventProcessLoop:

private[scheduler] val eventProcessLoop = new DAGSchedulerEventProcessLoop(this)

DAGSchedulerEventProcessLoop receives the various scheduler events and dispatches them; the dispatch logic lives in its doOnReceive method:

private def doOnReceive(event: DAGSchedulerEvent): Unit = event match {
  // Job submission
  case JobSubmitted(jobId, rdd, func, partitions, callSite, listener, properties) =>
    dagScheduler.handleJobSubmitted(jobId, rdd, func, partitions, callSite, listener, properties)
  case MapStageSubmitted(jobId, dependency, callSite, listener, properties) =>
    dagScheduler.handleMapStageSubmitted(jobId, dependency, callSite, listener, properties)
  case StageCancelled(stageId) =>
    dagScheduler.handleStageCancellation(stageId)
  case JobCancelled(jobId) =>
    dagScheduler.handleJobCancellation(jobId)
  case JobGroupCancelled(groupId) =>
    dagScheduler.handleJobGroupCancelled(groupId)
  case AllJobsCancelled =>
    dagScheduler.doCancelAllJobs()
  case ExecutorAdded(execId, host) =>
    dagScheduler.handleExecutorAdded(execId, host)
  case ExecutorLost(execId) =>
    dagScheduler.handleExecutorLost(execId, fetchFailed = false)
  case BeginEvent(task, taskInfo) =>
    dagScheduler.handleBeginEvent(task, taskInfo)
  case GettingResultEvent(taskInfo) =>
    dagScheduler.handleGetTaskResult(taskInfo)
  case completion: CompletionEvent =>
    dagScheduler.handleTaskCompletion(completion)
  case TaskSetFailed(taskSet, reason, exception) =>
    dagScheduler.handleTaskSetFailed(taskSet, reason, exception)
  case ResubmitFailedStages =>
    dagScheduler.resubmitFailedStages()
}
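This dispatch runs on a dedicated thread that drains a queue of posted events, so all scheduler state is mutated from a single thread. A minimal sketch of that pattern in plain Scala (ToyEventLoop and the event names are invented, not Spark's EventLoop API):

```scala
import java.util.concurrent.{CopyOnWriteArrayList, LinkedBlockingQueue}

sealed trait SchedulerEvent
case class JobSubmittedEvt(jobId: Int) extends SchedulerEvent
case object StopEvt extends SchedulerEvent

// Events may be posted from any thread but are handled one at a time on the
// loop's own thread, so the handler needs no locks around scheduler state.
class ToyEventLoop(onReceive: SchedulerEvent => Unit) {
  private val queue = new LinkedBlockingQueue[SchedulerEvent]()
  private val thread = new Thread(() => {
    var running = true
    while (running) queue.take() match {
      case StopEvt => running = false
      case e       => onReceive(e)  // plays the role of doOnReceive
    }
  })
  thread.setDaemon(true)

  def start(): Unit = thread.start()
  def post(event: SchedulerEvent): Unit = queue.put(event)
  def stop(): Unit = { queue.put(StopEvt); thread.join() }
}

// Usage: handle two job submissions, then shut the loop down.
val handled = new CopyOnWriteArrayList[Int]()
val loop = new ToyEventLoop({
  case JobSubmittedEvt(id) => handled.add(id)
  case _                   => ()
})
loop.start()
loop.post(JobSubmittedEvt(1))
loop.post(JobSubmittedEvt(2))
loop.stop()  // joins the thread: both events are handled before this returns
```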

When a shuffle map stage is re-created for a shuffle that has already produced output, DAGScheduler recovers the map output locations that are already available instead of recomputing them; otherwise it registers the shuffle with mapOutputTracker (fragment from DAGScheduler.newOrUsedShuffleStage):

    (0 until locs.length).foreach { i =>
      if (locs(i) ne null) {
        // locs(i) will be null if missing
        stage.addOutputLoc(i, locs(i))
      }
    }
  } else {
    // Kind of ugly: need to register RDDs with the cache and map output tracker here
    // since we can't do it in the RDD constructor because # of partitions is unknown
    logInfo("Registering RDD " + rdd.id + " (" + rdd.getCreationSite + ")")
    mapOutputTracker.registerShuffle(shuffleDep.shuffleId, rdd.partitions.length)
  }
  stage
}
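The two branches above can be sketched with a toy tracker in plain Scala (ToyOutputTracker and all names are invented, not Spark's MapOutputTracker API):

```scala
import scala.collection.mutable

// Toy stand-in for the map output tracker: shuffleId -> per-partition output
// location, where null marks a partition whose map output is missing.
class ToyOutputTracker {
  private val shuffles = mutable.Map[Int, Array[String]]()

  def containsShuffle(id: Int): Boolean = shuffles.contains(id)
  def registerShuffle(id: Int, numPartitions: Int): Unit =
    shuffles(id) = Array.fill[String](numPartitions)(null)
  def recordOutput(id: Int, partition: Int, loc: String): Unit =
    shuffles(id)(partition) = loc
  def locations(id: Int): Array[String] = shuffles(id)
}

val tracker = new ToyOutputTracker
tracker.registerShuffle(0, 3)          // first run: register the shuffle
tracker.recordOutput(0, 0, "host-a")   // partitions 0 and 2 finish
tracker.recordOutput(0, 2, "host-b")

// Re-creating the stage later: copy only the non-null locations, exactly as
// the `locs(i) ne null` check does above, so only partition 1 must be rerun.
val recovered = tracker.locations(0).zipWithIndex.collect {
  case (loc, i) if loc ne null => i -> loc
}.toMap
```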