Export

Grammar file

export_stmt ::=
    KW_EXPORT KW_TABLE base_table_ref:tblRef
    where_clause:whereExpr
    KW_TO STRING_LITERAL:path
    opt_properties:properties
    opt_broker:broker
    {:
        RESULT = new ExportStmt(tblRef, whereExpr, path, properties, broker);
    :}
    ;
 

EXPORT has a syntax similar to LOAD; the EXPORT grammar is divided into five parts:

● KW_EXPORT KW_TABLE base_table_ref:tblRef

● where_clause

● KW_TO STRING_LITERAL:path

● opt_properties - PROPERTIES (xxx)

● opt_broker - WITH xxx, where xxx can be BROKER, HDFS, or S3; all of them are eventually converted into a BrokerDesc

EXPORT TABLE test_export 
TO "hdfs://ctyunns/tmp/doris/"
WITH BROKER "hdfs_broker"(
    "hadoop.security.authentication"="kerberos",
    "kerberos_principal"="hdfs@BIGDATA.CHINATELECOM.CN",
    "kerberos_keytab"="/etc/security/keytabs/hdfs_export.keytab",
    'dfs.nameservices'='ctyunns',
    'dfs.ha.namenodes.ctyunns'='nn1,nn2',
    'dfs.namenode.rpc-address.ctyunns.nn1'='nm-bigdata-030017237.ctc.local:54310',
    'dfs.namenode.rpc-address.ctyunns.nn2'='nm-bigdata-030017238.ctc.local:54310',
    'dfs.client.failover.proxy.provider.ctyunns'='org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider'
    );
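The point above — that the `WITH BROKER`, `WITH HDFS`, and `WITH S3` variants all end up as a BrokerDesc — can be sketched roughly as follows. This is a simplified, hypothetical model (the class and method names here are illustrative, not the actual Doris code): the variants differ only in storage type and in whether a broker name is present.

```java
import java.util.Map;

// Hypothetical sketch (not the actual Doris classes) of how the three
// `WITH ...` variants of EXPORT normalize into one broker descriptor.
class BrokerDescSketch {
    enum StorageType { BROKER, HDFS, S3 }

    final String brokerName;   // empty when there is no broker process
    final StorageType type;
    final Map<String, String> properties;

    BrokerDescSketch(String brokerName, StorageType type, Map<String, String> properties) {
        this.brokerName = brokerName;
        this.type = type;
        this.properties = properties;
    }

    // `WITH BROKER "name" (...)` keeps the broker name; `WITH HDFS (...)` and
    // `WITH S3 (...)` access storage directly, so the name stays empty.
    static BrokerDescSketch fromClause(String keyword, String name, Map<String, String> props) {
        switch (keyword) {
            case "BROKER": return new BrokerDescSketch(name, StorageType.BROKER, props);
            case "HDFS":   return new BrokerDescSketch("", StorageType.HDFS, props);
            case "S3":     return new BrokerDescSketch("", StorageType.S3, props);
            default: throw new IllegalArgumentException("unknown storage keyword: " + keyword);
        }
    }

    public static void main(String[] args) {
        BrokerDescSketch d = fromClause("BROKER", "hdfs_broker",
                Map.of("hadoop.security.authentication", "kerberos"));
        System.out.println(d.type + " " + d.brokerName); // prints "BROKER hdfs_broker"
    }
}
```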
 

Repository

Grammar file

Repository has a similar-looking syntax, but on closer inspection it is actually different:

CREATE REPOSITORY `default_repo`
WITH BROKER `hdfs_broker`
ON LOCATION "hdfs://ctyunns/dorisRepo"
PROPERTIES
(
    'dfs.nameservices'='ctyunns',
    'dfs.ha.namenodes.ctyunns'='nn1,nn2',
    'dfs.namenode.rpc-address.ctyunns.nn1'='nm-bigdata-030017237.ctc.local:54310',
    'dfs.namenode.rpc-address.ctyunns.nn2'='nm-bigdata-030017238.ctc.local:54310',
    'dfs.client.failover.proxy.provider.ctyunns'='org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider',
    "hadoop.security.authentication"="kerberos",
    "kerberos_principal"="hdfs@BIGDATA.CHINATELECOM.CN",
    "kerberos_keytab"="/etc/security/keytabs/hdfs_export.keytab"   
);
 

The `XXX ON LOCATION xxx PROPERTIES` part forms a single unit, a StorageBackend:

| KW_CREATE opt_read_only:isReadOnly KW_REPOSITORY ident:repoName KW_WITH storage_backend:storage
    {:
        RESULT = new CreateRepositoryStmt(isReadOnly, repoName, storage);
    :}

storage_backend ::=
    | KW_BROKER ident:brokerName KW_ON KW_LOCATION STRING_LITERAL:location opt_properties:properties
    {:
        RESULT = new StorageBackend(brokerName, location, StorageBackend.StorageType.BROKER, properties);
    :}
    | KW_S3 KW_ON KW_LOCATION STRING_LITERAL:location opt_properties:properties
    {:
        RESULT = new StorageBackend("", location, StorageBackend.StorageType.S3, properties);
    :}
    | KW_HDFS KW_ON KW_LOCATION STRING_LITERAL:location opt_properties:properties
    {:
        RESULT = new StorageBackend("", location, StorageBackend.StorageType.HDFS, properties);
    :}
    | KW_LOCAL KW_ON KW_LOCATION STRING_LITERAL:location opt_properties:properties
    {:
        RESULT = new StorageBackend("", location, StorageBackend.StorageType.LOCAL, properties);
    :}
    ;
 

 

 

Both descriptors are ultimately converted into a BlobStorage:

StorageBackend -> BlobStorage

BrokerDesc -> BlobStorage
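The two arrows above can be sketched as one selection step. This is a hypothetical, simplified model (the names below are illustrative, not the actual Doris classes): both the repository-side StorageBackend and the export-side BrokerDesc carry a storage type plus properties, and that type picks the BlobStorage implementation.

```java
import java.util.Map;

// Hypothetical sketch: StorageBackend (CREATE REPOSITORY) and BrokerDesc
// (EXPORT ... WITH ...) both converge on a BlobStorage chosen by storage type.
class BlobStorageSketch {
    interface BlobStorage { String kind(); }

    static BlobStorage select(String storageType, Map<String, String> props) {
        switch (storageType) {
            case "BROKER": return () -> "broker";  // goes through a broker process
            case "HDFS":   return () -> "hdfs";    // direct HDFS access
            case "S3":     return () -> "s3";      // S3-compatible object store
            case "LOCAL":  return () -> "local";   // local filesystem
            default: throw new IllegalArgumentException("unknown storage type: " + storageType);
        }
    }

    public static void main(String[] args) {
        System.out.println(select("HDFS", Map.of()).kind()); // prints "hdfs"
    }
}
```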

 

Resource

Grammar file

A Resource can be understood as a collection of configuration properties:

CREATE EXTERNAL RESOURCE "spark0"
PROPERTIES
(
  "type" = "spark",
  "spark.master" = "yarn",
  "spark.submit.deployMode" = "cluster",
  "spark.jars" = "xxx.jar,yyy.jar",
  "spark.files" = "/tmp/aaa,/tmp/bbb",
  "spark.executor.memory" = "1g",
  "spark.yarn.queue" = "queue0",
  "spark.hadoop.yarn.resourcemanager.address" = "127.0.0.1:9999",
  "spark.hadoop.fs.defaultFS" = "hdfs://127.0.0.1:10000",
  "working_dir" = "hdfs://127.0.0.1:10000/tmp/doris",
  "broker" = "broker0",
  "broker.username" = "user0",
  "broker.password" = "password0"
);
 

The grammar rule is as follows:

KW_CREATE opt_external:isExternal KW_RESOURCE opt_if_not_exists:ifNotExists ident_or_text:resourceName opt_properties:properties
    {:
        RESULT = new CreateResourceStmt(isExternal, ifNotExists, resourceName, properties);
    :}
 

Creation process

The creation process is as follows:

● Resource.fromStmt

○ Creates the specific Resource subclass for the given type, e.g. SparkResource or JdbcResource

○ Calls Resource.setProperties to set the properties from the statement into the corresponding Resource subclass

● createResource - puts the Resource into in-memory storage

The supported Resource types are listed in Resource.getResourceInstance.
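The dispatch in Resource.fromStmt described above can be sketched as follows. This is a simplified, hypothetical model (the classes and the fromProperties method are illustrative, not the actual Doris code): the "type" property decides which Resource subclass to instantiate, and the remaining properties are then applied to it.

```java
import java.util.Map;

// Hypothetical sketch of the Resource.fromStmt dispatch: pick a subclass by
// the "type" property, then apply the statement's properties to it.
class ResourceFactorySketch {
    static abstract class Resource {
        Map<String, String> properties;
        void setProperties(Map<String, String> props) { this.properties = props; }
    }
    static class SparkResource extends Resource {}
    static class JdbcResource extends Resource {}

    static Resource fromProperties(Map<String, String> props) {
        String type = props.get("type");
        Resource resource;
        switch (type) {
            case "spark": resource = new SparkResource(); break;
            case "jdbc":  resource = new JdbcResource(); break;
            default: throw new IllegalArgumentException("unsupported resource type: " + type);
        }
        resource.setProperties(props); // mirrors Resource.setProperties in the text
        return resource;
    }

    public static void main(String[] args) {
        Resource r = fromProperties(Map.of("type", "spark", "spark.master", "yarn"));
        System.out.println(r.getClass().getSimpleName()); // prints "SparkResource"
    }
}
```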

posted @ 2023-06-25 19:11 xutao_ustc