How to connect to Kerberos-authenticated Hadoop through the API
Operating HDFS directly through the API is not that common, but since I ran into this problem while learning the HDFS API, I am recording it here.
1. Generate a keytab file on the cluster
I took the hdfs.keytab file directly from the /var/run/cloudera-scm-agent/process/xxxx-DATANODE... directory. If you are an ordinary HDFS user, you need to enter the Kerberos client command line and generate the user's keytab file yourself:
kadmin.local
Once inside the command line:
xst -norandkey -k username.keytab username/hostname@XXX.COM
Download username.keytab and krb5.conf to your local machine.
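To confirm the keytab is usable before wiring it into code, you can inspect it and obtain a ticket with the standard MIT Kerberos client tools (klist and kinit); the principal below is the same placeholder as above:
klist -kt username.keytab
kinit -kt username.keytab username/hostname@XXX.COM
klist
The first command lists the principals stored in the keytab, the second obtains a TGT non-interactively, and the final klist should show the fresh ticket in the cache.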
2. Add the cluster configuration files to the project
Add the HDFS cluster configuration files (core-site.xml and hdfs-site.xml) and the two Kerberos files (the keytab and krb5.conf) under the resources directory.
After adding them, delete the configuration items in krb5.conf that refer to file directories on the server, leaving something like the following:
# Configuration snippets may be placed in this directory as well
[libdefaults]
 dns_lookup_realm = false
 ticket_lifetime = 24h
 renew_lifetime = 7d
 forwardable = true
 rdns = false
 default_realm = LEMO.COM
 # default_ccache_name = KEYRING:persistent:%{uid}

[realms]
 LEMO.COM = {
  kdc = yourip
  admin_server = yourip
 }

[domain_realm]
 .hadoop01 = LEMO.COM
 hadoop01 = LEMO.COM
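Besides krb5.conf, the client also needs the cluster's security settings from core-site.xml and hdfs-site.xml. As a rough sketch (the hostname, port, and principal below are placeholders; the actual files copied from your cluster take precedence), the relevant excerpts look like this:
<!-- core-site.xml (excerpt) -->
<property>
  <name>fs.defaultFS</name>
  <value>hdfs://hadoop01:8020</value>
</property>
<property>
  <name>hadoop.security.authentication</name>
  <value>kerberos</value>
</property>
<property>
  <name>hadoop.security.authorization</name>
  <value>true</value>
</property>

<!-- hdfs-site.xml (excerpt) -->
<property>
  <name>dfs.namenode.kerberos.principal</name>
  <value>hdfs/_HOST@LEMO.COM</value>
</property>
Without dfs.namenode.kerberos.principal, the client cannot negotiate SASL with a Kerberized NameNode.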
3. Authenticate when initializing the FileSystem
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.security.UserGroupInformation;

/**
 * Utility class that performs Kerberos authentication.
 */
public class HdfsIdentificationUtil {
    // Principal and keytab file location
    private static final String KEY_TAB_PATH = "D:\\workspace\\bigdata\\hadoop\\hdfs\\src\\main\\resources\\hdfs.keytab";
    private static final String USER_KEY = "hdfs@LEMO.COM";

    public static Configuration identification(Configuration configuration) {
        // Kerberos configuration: point the JVM at the downloaded krb5.conf
        System.setProperty("java.security.krb5.conf", "D:\\workspace\\bigdata\\hadoop\\hdfs\\src\\main\\resources\\krb5.conf");
        configuration.set("hadoop.security.authentication", "kerberos");
        configuration.set("hadoop.security.authorization", "true");
        // Perform the login from the keytab
        UserGroupInformation.setConfiguration(configuration);
        try {
            UserGroupInformation.loginUserFromKeytab(USER_KEY, KEY_TAB_PATH);
            System.out.println(UserGroupInformation.isLoginKeytabBased());
            System.out.println(UserGroupInformation.isSecurityEnabled());
        } catch (IOException e) {
            e.printStackTrace();
        }
        return configuration;
    }
}
Authenticate with the utility class:
private FileSystem fileSystem;

/**
 * Initialize the HDFS file system and perform Kerberos authentication.
 * @throws IOException
 */
@Before
public void init() throws IOException {
    // 1. Get a configuration that has gone through Kerberos authentication
    Configuration configuration = HdfsIdentificationUtil.identification(new Configuration());
    // Point the client at the cluster; unnecessary if core-site.xml is on the classpath
    //configuration.set("fs.defaultFS", "hdfs://hadoop01:8020");
    // Note: HADOOP_USER_NAME is ignored under Kerberos; the user identity
    // comes from the logged-in principal (hdfs@LEMO.COM)
    fileSystem = FileSystem.get(configuration);
}
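A quick smoke test of the authenticated connection: the sketch below assumes the fileSystem field initialized in init() above (plus org.apache.hadoop.fs.FileStatus, org.apache.hadoop.fs.Path, and org.junit.Test among the imports). If the Kerberos login failed, this call throws an exception instead of printing the root listing:
@Test
public void testListRoot() throws IOException {
    // Listing the HDFS root only succeeds after a valid Kerberos login
    for (FileStatus status : fileSystem.listStatus(new Path("/"))) {
        System.out.println(status.getPath());
    }
}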
Everyone else is going all out and working twice as hard; just coasting along will only make the gap between you and the others grow wider and wider...