Apache Tez Design
http://tez.incubator.apache.org/
http://dongxicheng.org/mapreduce-nextgen/apache-tez/
http://dongxicheng.org/mapreduce-nextgen/apache-tez-newest-progress/
Tez aims to be a general purpose execution runtime that enhances various scenarios that are not well served by classic Map-Reduce.
In the short term the major focus is to support Hive and Pig, specifically to enable performance improvements to batch and ad-hoc interactive queries.
What services will Tez provide
Tez is compatible with traditional map-reduce jobs, but its main focus is DAG-based jobs together with the corresponding APIs and primitives. Tez provides runtime components:
- An execution environment that can handle traditional map-reduce jobs
- An execution environment that handles DAG-based jobs comprising various built-in and extendable primitives
- Cluster-side determination of input pieces
- Runtime planning such as task cardinality determination and dynamic modification to the DAG structure
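As a hedged illustration of the runtime-planning idea above, the sketch below shows how an AppMaster might pick a vertex's task cardinality from total input size. The class name and the bytes-per-task heuristic are illustrative assumptions, not Tez's actual planner logic.

```java
// Sketch: choosing a Vertex's task cardinality at runtime from the total
// input size, as the AppMaster might during dynamic planning. The heuristic
// and names below are illustrative assumptions, not real Tez code.
public class CardinalityPlanner {
    // One materialized task per 'bytesPerTask' of input, at least one task.
    public static int cardinality(long totalInputBytes, long bytesPerTask) {
        if (totalInputBytes <= 0) return 1;
        return (int) ((totalInputBytes + bytesPerTask - 1) / bytesPerTask);
    }

    public static void main(String[] args) {
        long total = 10_000_000_000L;       // 10 GB of input
        long perTask = 256L * 1024 * 1024;  // 256 MiB per task
        System.out.println(cardinality(total, perTask)); // prints 38
    }
}
```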
Tez provides APIs to access these services:
- Traditional map-reduce functionality is accessed via Java classes written to the Job interface (org.apache.hadoop.mapred.Job and/or org.apache.hadoop.mapreduce.v2.app.job.Job) and by specifying in yarn-site that the map-reduce framework should be Tez.
- DAG-based execution is accessed via the new Tez DAG API: org.apache.tez.dag.api.*, org.apache.tez.engine.api.*.
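For reference, released Tez versions select the map-reduce compatibility shim with the mapreduce.framework.name property; this early design refers to yarn-site, so the exact property and file here are a hedged sketch rather than a confirmed setting.

```xml
<!-- Hedged sketch: released Tez versions set this to route classic MR jobs
     through Tez; the property/file used by this early design may differ. -->
<property>
  <name>mapreduce.framework.name</name>
  <value>yarn-tez</value>
</property>
```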
Tez provides pre-made primitives for use with the DAG API (org.apache.tez.engine.common.*)
- Vertex Input
- Vertex Output
- Sorting
- Shuffling
- Merging
- Data transfer
Tez-YARN architecture
In the above figure Tez is represented by the red components: client-side API, an AppMaster, and multiple containers that execute child processes under the control of the AppMaster.
Three separate software stacks are involved in the execution of a Tez job, each using components from the client application, Tez, and YARN.
DAG topologies and scenarios
The following terminology is used:
Job Vertex: A “stage” in the job plan; a logical vertex.
Job Edge: The logical connection between Job Vertices; a logical edge.
Vertex: A materialized stage at runtime, comprising a certain number of parallel materialized tasks; a physical vertex.
Edge: Represents actual data movement between tasks; a physical edge.
Task: A process performing computation within a YARN container.
Task cardinality: The number of materialized tasks in a Vertex, i.e. its parallelism.
Static plan: Planning decisions fixed before job submission.
Dynamic plan: Planning decisions made at runtime in the AppMaster process.
Tez API
The Tez API comprises several services that allow applications to run DAG-style jobs. An application that uses Tez needs to:
1. Create a job plan (the DAG) comprising vertices, edges, and data source references
2. Create task implementations that perform computations and interact with the DAG AppMaster
3. Configure Yarn and Tez appropriately
DAG definition API
The abstract DAG definition API:
public class DAG {
  DAG();
  void addVertex(Vertex);
  void addEdge(Edge);
  void addConfiguration(String, String);
  void setName(String);
  void verify();
  DAGPlan createDaG();
}

public class Vertex {
  Vertex(String vertexName, String processorName, int parallelism);
  void setTaskResource();
  void setTaskLocationsHint(TaskLocationHint[]);
  void setJavaOpts(String);
  String getVertexName();
  String getProcessorName();
  int getParallelism();
  Resource getTaskResource();
  TaskLocationHint[] getTaskLocationsHint();
  String getJavaOpts();
}

public class Edge {
  Edge(Vertex inputVertex, Vertex outputVertex, EdgeProperty edgeProperty);
  String getInputVertex();
  String getOutputVertex();
  EdgeProperty getEdgeProperty();
  String getId();
}
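To show how the DAG definition API is meant to be used, here is a self-contained sketch that mirrors the shape of DAG, Vertex, and Edge (the real classes live in org.apache.tez.dag.api; EdgeProperty, Resource, and the hint types are dropped for brevity) and builds the classic map-to-reduce topology as a two-vertex DAG:

```java
import java.util.ArrayList;
import java.util.List;

// Self-contained sketch mirroring the DAG definition API: these stubs only
// model the topology so the usage pattern can be shown end to end; they are
// not the real org.apache.tez.dag.api classes.
public class DagSketch {
    static class Vertex {
        final String name, processorName;
        final int parallelism;
        Vertex(String name, String processorName, int parallelism) {
            this.name = name; this.processorName = processorName; this.parallelism = parallelism;
        }
    }
    static class Edge {
        final Vertex input, output;
        Edge(Vertex input, Vertex output) { this.input = input; this.output = output; }
    }
    static class DAG {
        final List<Vertex> vertices = new ArrayList<>();
        final List<Edge> edges = new ArrayList<>();
        void addVertex(Vertex v) { vertices.add(v); }
        void addEdge(Edge e) { edges.add(e); }
        // verify(): every edge endpoint must be a registered vertex.
        void verify() {
            for (Edge e : edges)
                if (!vertices.contains(e.input) || !vertices.contains(e.output))
                    throw new IllegalStateException("edge references unknown vertex");
        }
    }

    public static void main(String[] args) {
        // Classic map -> reduce expressed as a two-vertex DAG.
        Vertex map = new Vertex("map", "MapProcessor", 4);
        Vertex reduce = new Vertex("reduce", "ReduceProcessor", 2);
        DAG dag = new DAG();
        dag.addVertex(map);
        dag.addVertex(reduce);
        dag.addEdge(new Edge(map, reduce));
        dag.verify();
        System.out.println(dag.vertices.size() + " vertices, " + dag.edges.size() + " edge");
    }
}
```

The point of the sketch is that a DAG is assembled declaratively (vertices, then edges, then verify()) before submission; more complex topologies such as map-reduce-reduce simply add vertices and edges.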
Execution APIs
A Task is Tez's unit of execution and follows the input/processor/output pattern:
public interface Master {
  // A context object for task execution. Currently only a stub.
}

public interface Input {
  void initialize(Configuration conf, Master master);
  boolean hasNext();
  Object getNextKey();
  Iterable<Object> getNextValues();
  float getProgress();
  void close();
}

public interface Output {
  void initialize(Configuration conf, Master master);
  void write(Object key, Object value);
  OutputContext getOutputContext();
  void close();
}

public interface Partitioner {
  int getPartition(Object key, Object value, int numPartitions);
}

public interface Processor {
  void initialize(Configuration conf, Master master);
  void process(Input[] in, Output[] out);
  void close();
}

public interface Task {
  void initialize(Configuration conf, Master master);
  Input[] getInputs();
  Processor getProcessor();
  Output[] getOutputs();
  void run();
  void close();
}
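A hedged sketch of the input/processor/output pattern follows. The interfaces are simplified mirrors of the listing (Configuration and Master are dropped for brevity), and SumProcessor is an illustrative processor, not a real Tez component: it reads grouped key/value input and writes one summed record per key.

```java
import java.util.*;

// Self-contained sketch of the Input/Output/Processor pattern. These
// simplified interfaces mirror the shapes in the listing above; they are
// assumptions for illustration, not real Tez engine classes.
public class ProcessorSketch {
    interface Input {
        boolean hasNext();
        Object getNextKey();
        Iterable<Object> getNextValues();
    }
    interface Output {
        void write(Object key, Object value);
    }
    interface Processor {
        void process(Input[] in, Output[] out);
    }

    // Sums the integer values grouped under each key; one output record per key.
    static class SumProcessor implements Processor {
        public void process(Input[] in, Output[] out) {
            for (Input input : in) {
                while (input.hasNext()) {
                    Object key = input.getNextKey();
                    int sum = 0;
                    for (Object v : input.getNextValues()) sum += (Integer) v;
                    out[0].write(key, sum);
                }
            }
        }
    }

    // A tiny in-memory Input over sorted (key -> values) groups.
    static class ListInput implements Input {
        private final Iterator<Map.Entry<String, List<Object>>> it;
        private Map.Entry<String, List<Object>> current;
        ListInput(SortedMap<String, List<Object>> groups) { it = groups.entrySet().iterator(); }
        public boolean hasNext() { return it.hasNext(); }
        public Object getNextKey() { current = it.next(); return current.getKey(); }
        public Iterable<Object> getNextValues() { return current.getValue(); }
    }

    public static void main(String[] args) {
        SortedMap<String, List<Object>> groups = new TreeMap<>();
        groups.put("a", Arrays.asList((Object) 1, 2, 3));
        groups.put("b", Arrays.asList((Object) 10, 20));
        Map<Object, Object> result = new LinkedHashMap<>();
        Output collector = result::put;  // capture output records in a map
        new SumProcessor().process(new Input[] { new ListInput(groups) },
                                   new Output[] { collector });
        System.out.println(result); // prints {a=6, b=30}
    }
}
```

Note how the processor is wired purely through the Input[] and Output[] arrays, which is what lets the framework swap in shuffled, sorted, or merged inputs without changing processor code.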