Apache Kylin Installation and Deployment

0x01 Kylin Installation Environment

Kylin depends on a Hadoop big data platform. Before installing and deploying it, confirm that Hadoop, HBase, and Hive are already installed on the platform.

1.1 Know the two kinds of Kylin binary packages

Note: the special binary package is a Kylin snapshot binary compiled against HBase 1.1+. Installing it requires HBase 1.1.3 or later; earlier versions have a known defect in the fuzzy key filter that causes Kylin query results to miss records (HBASE-14269). Also note that this is not an official release (it is rebased onto the latest changes of the KYLIN 1.3.x branch every few weeks) and has not been fully tested.

0x02 Installation and Deployment

2.1 Download

Choose the version you need; here we download apache-kylin-1.6.0-bin.tar.gz.
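
For example, downloading directly from the Apache archive (the URL below is an assumption; substitute the mirror and version you actually use):

$ wget https://archive.apache.org/dist/kylin/apache-kylin-1.6.0/apache-kylin-1.6.0-bin.tar.gz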

2.2 Install

$ tar -zxvf apache-kylin-1.6.0-bin.tar.gz
$ mv apache-kylin-1.6.0-bin /home/hadoop/cloud/
$ ln -s /home/hadoop/cloud/apache-kylin-1.6.0-bin /home/hadoop/cloud/kylin

2.3 Configure environment variables

In /etc/profile, configure the KYLIN environment variables and a variable named hive_dependency.

vim /etc/profile

# append the following
export KYLIN_HOME=/home/hadoop/kylin
export PATH=$PATH:$KYLIN_HOME/bin
export hive_dependency=/home/hadoop/hive/conf:/home/hadoop/hive/lib/*:/home/hadoop/hive/hcatalog/share/hcatalog/hive-hcatalog-core-2.0.0.jar

Make the profile take effect:

# source /etc/profile
# su hadoop
$ source /etc/profile
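
A quick sanity check that the variables are visible in the current shell:

$ echo $KYLIN_HOME
$ echo $hive_dependency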

This configuration must also be applied on the nodes master2, slave1, and slave2. When Kylin submits a job to MapReduce and the Hadoop cluster distributes tasks to those nodes, they need the Hive dependency information; without it, the MR tasks fail with errors such as "hcatalogXXX not found".
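
A minimal sketch for pushing the profile to those nodes, assuming passwordless SSH access as root and the hostnames above:

for node in master2 slave1 slave2; do
  # copy the updated profile; each new login session will pick it up
  scp /etc/profile root@${node}:/etc/profile
done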

2.4 Configure kylin.sh

$ vim ~/cloud/kylin/bin/kylin.sh

# explicitly declare KYLIN_HOME
export KYLIN_HOME=/home/hadoop/kylin
# explicitly prepend the $hive_dependency entries to HBASE_CLASSPATH_PREFIX
export HBASE_CLASSPATH_PREFIX=${tomcat_root}/bin/bootstrap.jar:${tomcat_root}/bin/tomcat-juli.jar:${tomcat_root}/lib/*:$hive_dependency:$HBASE_CLASSPATH_PREFIX
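
An optional check that the edit landed:

$ grep -n 'hive_dependency' ~/cloud/kylin/bin/kylin.sh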

2.5 Check whether the environment is set up correctly

$ check-env.sh
KYLIN_HOME is set to /home/hadoop/kylin

2.6 Configure kylin.properties

Go into the conf directory and modify kylin.properties as follows:

$ vim ~/cloud/kylin/conf/kylin.properties

kylin.rest.servers=master:7070
# define the job jar Kylin uses for MR jobs and the HBase coprocessor jar, both used to improve performance
kylin.job.jar=/home/hadoop/kylin/lib/kylin-job-1.6.0-SNAPSHOT.jar
kylin.coprocessor.local.jar=/home/hadoop/kylin/lib/kylin-coprocessor-1.6.0-SNAPSHOT.jar
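
Verify that the two jars referenced above actually exist; the exact file names depend on the package you installed:

$ ls $KYLIN_HOME/lib/kylin-job-*.jar $KYLIN_HOME/lib/kylin-coprocessor-*.jar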

2.7 Configure kylin_hive_conf.xml and kylin_job_conf.xml

Set the HDFS block replication factor to 2 in both kylin_hive_conf.xml and kylin_job_conf.xml:

<property>
  <name>dfs.replication</name>
  <value>2</value>
  <description>Block replication</description>
</property>
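
The same property block goes into both files; a quick check that both carry it:

$ grep -A1 'dfs.replication' ~/cloud/kylin/conf/kylin_hive_conf.xml ~/cloud/kylin/conf/kylin_job_conf.xml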

2.8 Start the services

Note: before starting Kylin, confirm that the following services are running (a combined startup sketch follows this list).

  • Hadoop HDFS/YARN/JobHistory services

start-all.sh
mr-jobhistory-daemon.sh start historyserver

  • Hive metastore

hive --service metastore &

  • ZooKeeper

zkServer.sh start

This must be executed on every node, starting the ZooKeeper service on each of them.

  • HBase

start-hbase.sh

  • Check the Hive and HBase dependencies

$ find-hive-dependency.sh
$ find-hbase-dependency.sh

  • Commands to start and stop Kylin

$ kylin.sh start
$ kylin.sh stop
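
Putting the prerequisites together, a minimal startup sketch in dependency order (assumes all of the scripts above are on PATH and that ZooKeeper has already been started on every node):

#!/bin/bash
# HDFS/YARN plus the MapReduce job history server
start-all.sh
mr-jobhistory-daemon.sh start historyserver
# Hive metastore in the background
nohup hive --service metastore > /tmp/metastore.log 2>&1 &
# HBase (requires ZooKeeper to be up)
start-hbase.sh
# verify dependencies, then start Kylin
find-hive-dependency.sh && find-hbase-dependency.sh && kylin.sh start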

Web UI address: http://192.168.1.10:7070/kylin/login

The default login username/password is ADMIN/KYLIN.
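
A simple reachability check from the shell; any HTTP response (200, or a redirect to the login page) indicates the server is up:

$ curl -s -o /dev/null -w "%{http_code}\n" http://192.168.1.10:7070/kylin/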

0x03 Testing

3.1 Test the sample that ships with Kylin

Kylin provides an automated script that creates a test cube; the script also creates the corresponding Hive tables. Steps to run the sample:

S1: Run the ${KYLIN_HOME}/bin/sample.sh script

$ sample.sh

Key messages in the output:

KYLIN_HOME is set to /home/hadoop/kylin
Going to create sample tables in hive...
Sample hive tables are created successfully; Going to create sample cube...
Sample cube is created successfully in project 'learn_kylin'; Restart Kylin server or reload the metadata from web UI to see the change.

S2: Check in MySQL (the Hive metastore database) which tables the sample created

select DB_ID,OWNER,SD_ID,TBL_NAME from TBLS;
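
If your metastore database is MySQL, the query can be issued from the shell like this (database name and credentials are assumptions; use your metastore settings):

$ mysql -u hive -p -D hive -e 'select DB_ID,OWNER,SD_ID,TBL_NAME from TBLS;'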

S3: In the Hive CLI, check the created tables and the row count (10,000 rows)

hive> show tables;
OK
kylin_cal_dt
kylin_category_groupings
kylin_sales
Time taken: 1.835 seconds, Fetched: 3 row(s)
hive> select count(*) from kylin_sales;
OK
Time taken: 65.351 seconds, Fetched: 1 row(s)

S4: Restart the Kylin server to reload the metadata cache

$ kylin.sh stop
$ kylin.sh start

S5: Visit 192.168.200.165:7070/kylin and log in with the default username/password ADMIN/KYLIN

After entering the console, select the project named learn_kylin.

S6: Select the test cube "kylin_sales_cube", click "Action" - "Build", and choose an end date later than 2014-01-01 so that all 10,000 test records are included.

After picking the build date, click Submit; a message confirms that the build job was submitted successfully.

S7: Watch the job's progress in the Monitor console until it reaches 100%.

Once the job completes, switch to the Model console: the cube's status has become READY, which means it can now serve SQL queries.
During the build, a temporary table is generated in Hive; it is deleted automatically once the job reaches 100%.
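
With the cube READY, a query can be run from the Insight tab. A sketch adapted from the Kylin sample (column names assume the default kylin_sales schema):

select part_dt, sum(price) as total_sold, count(distinct seller_id) as sellers
from kylin_sales
group by part_dt
order by part_dt;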

0x04 Common Errors

4.1 Running check-env.sh reports

please make sure user has the privilege to run hbase shell

Check whether the HBase environment variables are configured correctly; after reconfiguring them, the problem was resolved.
Reference: http://www.jianshu.com/p/632b61f73fe8

4.2 hadoop-env.sh script problem

Kylin installation problem: /home/hadoop-2.5.1/contrib/capacity-scheduler/*.jar (No such file or directory)

WARNING: Failed to process JAR [jar:file:/home/hadoop-2.5.1/contrib/capacity-scheduler/*.jar!/] for TLD files
java.io.FileNotFoundException: /home/hadoop-2.5.1/contrib/capacity-scheduler/*.jar (No such file or directory)
 at java.util.zip.ZipFile.open(Native Method)
 at java.util.zip.ZipFile.<init>(ZipFile.java:215)
 at java.util.zip.ZipFile.<init>(ZipFile.java:145)
 at java.util.jar.JarFile.<init>(JarFile.java:153)
 at java.util.jar.JarFile.<init>(JarFile.java:90)
 at sun.net.www.protocol.jar.URLJarFile.<init>(URLJarFile.java:93)
 at sun.net.www.protocol.jar.URLJarFile.getJarFile(URLJarFile.java:69)
 at sun.net.www.protocol.jar.JarFileFactory.get(JarFileFactory.java:99)
 at sun.net.www.protocol.jar.JarURLConnection.connect(JarURLConnection.java:122)
 at sun.net.www.protocol.jar.JarURLConnection.getJarFile(JarURLConnection.java:89)
 at org.apache.tomcat.util.scan.FileUrlJar.<init>(FileUrlJar.java:41)
 at org.apache.tomcat.util.scan.JarFactory.newInstance(JarFactory.java:34)
 at org.apache.catalina.startup.TldConfig.tldScanJar(TldConfig.java:485)
 at org.apache.catalina.startup.TldConfig.access$100(TldConfig.java:61)
 at org.apache.catalina.startup.TldConfig$TldJarScannerCallback.scan(TldConfig.java:296)
 at org.apache.tomcat.util.scan.StandardJarScanner.process(StandardJarScanner.java:258)
 at org.apache.tomcat.util.scan.StandardJarScanner.scan(StandardJarScanner.java:220)
 at org.apache.catalina.startup.TldConfig.execute(TldConfig.java:269)
 at org.apache.catalina.startup.TldConfig.lifecycleEvent(TldConfig.java:565)
 at org.apache.catalina.util.LifecycleSupport.fireLifecycleEvent(LifecycleSupport.java:117)
 at org.apache.catalina.util.LifecycleBase.fireLifecycleEvent(LifecycleBase.java:90)
 at org.apache.catalina.core.StandardContext.startInternal(StandardContext.java:5412)
 at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:150)
 at org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:901)
 at org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:877)
 at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:649)
 at org.apache.catalina.startup.HostConfig.deployWAR(HostConfig.java:1081)
 at org.apache.catalina.startup.HostConfig$DeployWar.run(HostConfig.java:1877)
 at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
 at java.util.concurrent.FutureTask.run(FutureTask.java:262)
 at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
 at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)

This is actually just a minor bug. Edit ${HADOOP_HOME}/etc/hadoop/hadoop-env.sh and comment out the following loop:

#for f in $HADOOP_HOME/contrib/capacity-scheduler/*.jar; do
#  if [ "$HADOOP_CLASSPATH" ]; then
#    export HADOOP_CLASSPATH=$HADOOP_CLASSPATH:$f
#  else
#    export HADOOP_CLASSPATH=$f
#  fi
#done

4.3 Clean up Kylin storage

kylin.sh org.apache.kylin.storage.hbase.util.StorageCleanupJob --delete true
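
To preview what would be removed before actually deleting, run the same job with --delete false; it only lists the candidates:

kylin.sh org.apache.kylin.storage.hbase.util.StorageCleanupJob --delete false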

4.4 Permission denied

When building a test cube, Kylin reports: org.apache.hadoop.security.AccessControlException: Permission denied: user=root, access=WRITE, inode="/user":hdfs:supergroup:drwxr-xr-x

Solutions:

1 Configure hdfs-site.xml to disable HDFS permission checking

<property>
    <name>dfs.permissions</name>
    <value>false</value>
</property>

2 Grant 777 permissions on the /user directory in HDFS

$ hadoop fs -chmod -R 777 /user
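
A narrower alternative to opening /user entirely is to create a home directory for the submitting user and hand ownership over (a sketch, assuming an `hdfs` superuser account and that jobs run as root):

$ sudo -u hdfs hadoop fs -mkdir -p /user/root
$ sudo -u hdfs hadoop fs -chown root:root /user/root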

0x05 Reference Links

2017-02-17 19:51:39 Friday

update1: 2017-05-04 20:10:05 Thursday
