A quick look at Dremio's PrivilegeCatalog interface
PrivilegeCatalog is essentially a privilege-checking capability. Dremio's StoragePlugin also exposes a security check of its own.
StoragePlugin security check
boolean hasAccessPermission(String user, NamespaceKey key, DatasetConfig datasetConfig);
As the signature shows, the check takes the user, a NamespaceKey, and the dataset configuration (dataset.proto in the namespace service module documents DatasetConfig in full).
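Before looking at the built-in implementations, here is a hedged sketch of what a plugin-side override can look like; the allow-list idea and getConfig().getAllowedUsers() below are purely hypothetical, only the method signature comes from StoragePlugin:
@Override
public boolean hasAccessPermission(String user, NamespaceKey key, DatasetConfig datasetConfig) {
  // Hypothetical policy: only users on a configured allow-list may read datasets from this source.
  // Real implementations usually delegate the decision to the underlying system instead
  // (see the file-system and Hive implementations below).
  final Set<String> allowedUsers = getConfig().getAllowedUsers(); // assumed plugin config field
  return allowedUsers.isEmpty() || allowedUsers.contains(user);
}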
The contract defined by PrivilegeCatalog
/**
 * Interface to perform privilege checking on sources
 */
public interface PrivilegeCatalog {
  /**
   * Validate user's privilege.
   * @param key
   * @param privilege
   */
  void validatePrivilege(NamespaceKey key, Privilege privilege);

  /**
   * Validate user's ownership.
   * @param key
   */
  void validateOwnership(NamespaceKey key);
}
Privilege is an enum nested in SqlGrant and basically covers the privileges Dremio can currently express:
public enum Privilege {
  VIEW_JOB_HISTORY,
  ALTER_REFLECTION,
  SELECT,
  ALTER,
  VIEW_REFLECTION,
  MODIFY,
  MANAGE_GRANTS,
  CREATE_TABLE,
  DROP,
  EXTERNAL_QUERY,
  INSERT,
  TRUNCATE,
  DELETE,
  UPDATE,
  CREATE_USER,
  CREATE_ROLE,
  EXECUTE,
  CREATE_SOURCE,
  UPLOAD_FILE,
  ALL
}

public enum GranteeType {
  USER,
  ROLE
}

public enum GrantType {
  PROJECT,
  PDS,
  VDS,
  FOLDER,
  SOURCE,
  SPACE,
  FUNCTION
}
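These three enums are what the GRANT/REVOKE machinery works with: a privilege, the kind of object it applies to, and the kind of grantee. A purely illustrative pairing of the three, granting SELECT on a physical dataset (PDS) to a role; the GrantRequest holder is invented for illustration, only the enums and NamespaceKey are Dremio types (assumed imported from where they are defined):
// Invented holder just to show how the pieces line up:
// grant SELECT on a physical dataset (PDS) to the role "analysts".
final class GrantRequest {
  final Privilege privilege = Privilege.SELECT;        // what is granted
  final GrantType grantType = GrantType.PDS;           // the kind of object granted on
  final GranteeType granteeType = GranteeType.ROLE;    // granted to a user or a role
  final String grantee = "analysts";                   // the role name
  final NamespaceKey entity = new NamespaceKey(ImmutableList.of("mysource", "sales", "orders"));
}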
- Implementation subclasses
Some notes on the implementations
Fine-grained privileges are an enterprise feature, so the community edition does not provide this capability out of the box. Judging from the community-edition implementations, the StoragePlugin check mainly covers user impersonation scenarios and defers the permission decision to the underlying system (Hive, for example).
The file-system storage plugin implementation, for instance:
public boolean hasAccessPermission(String user, NamespaceKey key, DatasetConfig datasetConfig) {
  if (config.isImpersonationEnabled()) {
    if (datasetConfig.getReadDefinition() != null) { // allow accessing partial datasets
      FileSystem userFs;
      try {
        // create a filesystem handle that impersonates the querying user
        userFs = createFS(user);
      } catch (IOException ioe) {
        throw new RuntimeException("Failed to check access permission", ioe);
      }
      final List<TimedRunnable<Boolean>> permissionCheckTasks = Lists.newArrayList();
      permissionCheckTasks.addAll(getUpdateKeyPermissionTasks(datasetConfig, userFs));
      permissionCheckTasks.addAll(getSplitPermissionTasks(datasetConfig, userFs, user));
      try {
        Stopwatch stopwatch = Stopwatch.createStarted();
        // run the checks in parallel (parallelism 16); a single failed check denies access
        final List<Boolean> accessPermissions = TimedRunnable.run("check access permission for " + key, logger, permissionCheckTasks, 16);
        stopwatch.stop();
        logger.debug("Checking access permission for {} took {} ms", key, stopwatch.elapsed(TimeUnit.MILLISECONDS));
        for (Boolean permission : accessPermissions) {
          if (!permission) {
            return false;
          }
        }
      } catch (FileNotFoundException fnfe) {
        throw UserException.invalidMetadataError(fnfe)
          .addContext(fnfe.getMessage())
          .setAdditionalExceptionContext(new InvalidMetadataErrorContext(ImmutableList.of(key.getPathComponents())))
          .buildSilently();
      } catch (IOException ioe) {
        throw new RuntimeException("Failed to check access permission", ioe);
      }
    }
  }
  return true;
}
The Hive implementation:
public boolean hasAccessPermission(String user, NamespaceKey key, DatasetConfig datasetConfig) {
  if (!isOpen.get()) {
    throw buildAlreadyClosedException();
  }
  if (!metastoreImpersonationEnabled) {
    return true;
  }
  try {
    final HiveMetadataUtils.SchemaComponents schemaComponents = HiveMetadataUtils.resolveSchemaComponents(key.getPathComponents(), true);
    final Table table = clientsByUser
      .get(user).getTable(schemaComponents.getDbName(), schemaComponents.getTableName(), true);
    if (table == null) {
      return false;
    }
    if (storageImpersonationEnabled) {
      if (datasetConfig.getReadDefinition() != null && datasetConfig.getReadDefinition().getReadSignature() != null) {
        final HiveReadSignature readSignature = HiveReadSignature.parseFrom(datasetConfig.getReadDefinition().getReadSignature().toByteArray());
        // for now we only support fs based read signatures
        if (readSignature.getType() == HiveReadSignatureType.FILESYSTEM) {
          // get list of partition properties from read definition
          HiveTableXattr tableXattr = HiveTableXattr.parseFrom(datasetConfig.getReadDefinition().getExtendedProperty().asReadOnlyByteBuffer());
          return hasFSPermission(getUsername(user), key, readSignature.getFsPartitionUpdateKeysList(), tableXattr);
        }
      }
    }
    return true;
  } catch (TException e) {
    throw UserException.connectionError(e)
      .message("Unable to connect to Hive metastore: %s", e.getMessage())
      .build(logger);
  } catch (ExecutionException | InvalidProtocolBufferException e) {
    throw new RuntimeException("Unable to connect to Hive metastore.", e);
  } catch (UncheckedExecutionException e) {
    Throwable rootCause = ExceptionUtils.getRootCause(e);
    if (rootCause instanceof TException) {
      throw UserException.connectionError(e)
        .message("Unable to connect to Hive metastore: %s", rootCause.getMessage())
        .build(logger);
    }
    Throwable cause = e.getCause();
    if (cause instanceof AuthorizerServiceException || cause instanceof RuntimeException) {
      throw e;
    }
    logger.error("User: {} is trying to access Hive dataset with path: {}.", this.getName(), key, e);
  }
  return false;
}
The default implementation in CatalogImpl (both methods are empty; if you want your own implementation, throwing a generic runtime exception is enough, typically a UserException):
@Override
public void validatePrivilege(NamespaceKey key, SqlGrant.Privilege privilege) {
  // For the default implementation, don't validate privilege.
}

@Override
public void validateOwnership(NamespaceKey key) {
  // For the default implementation, don't validate privilege.
}
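The comment above suggests the pattern for a custom implementation: throw a UserException (permission error) when the check fails. A minimal sketch, assuming a hypothetical accessControl service and username field of our own catalog class; only UserException, NamespaceKey and SqlGrant.Privilege are Dremio types:
@Override
public void validatePrivilege(NamespaceKey key, SqlGrant.Privilege privilege) {
  // accessControl and username are hypothetical fields of our own catalog implementation
  if (!accessControl.isAllowed(username, key.getPathComponents(), privilege)) {
    throw UserException.permissionError()
      .message("User [%s] does not have [%s] on [%s].", username, privilege, key)
      .buildSilently();
  }
}

@Override
public void validateOwnership(NamespaceKey key) {
  if (!accessControl.isOwner(username, key.getPathComponents())) {
    throw UserException.permissionError()
      .message("User [%s] is not the owner of [%s].", username, key)
      .buildSilently();
  }
}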
Where validatePrivilege is currently called (in the open-source code this is mostly the SQL handlers).
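For example, a handler for DROP TABLE-style statements would validate the DROP privilege on the target path before doing any work. A hedged sketch of that call pattern (the helper name is made up; PrivilegeCatalog, NamespaceKey and SqlGrant.Privilege are the Dremio types):
// Illustrative helper as it might appear in a SQL handler: with the default CatalogImpl
// this is a no-op, while a real PrivilegeCatalog implementation throws
// UserException.permissionError() if the user lacks the privilege.
private void checkCanDrop(PrivilegeCatalog catalog, NamespaceKey path) {
  catalog.validatePrivilege(path, SqlGrant.Privilege.DROP);
}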
Notes
PrivilegeCatalog is a powerful security interface: implementing it gives you a convenient hook for access control. The community edition does not ship an implementation, but we can try extending it ourselves.
Permission handling for SQL queries relies on the storage plugin's hasAccessPermission; it is wrapped inside DatasetManager, and the actual call goes through ManagedStoragePlugin:
public void checkAccess(NamespaceKey key, DatasetConfig datasetConfig, String userName, final MetadataRequestOptions options) {
  try (AutoCloseableLock l = readLock()) {
    checkState();
    // PermissionCheckCache is used here; internally it ends up calling the storage plugin's permission check
    if (!getPermissionsCache().hasAccess(userName, key, datasetConfig, options.getStatsCollector(), sourceConfig)) {
      throw UserException.permissionError()
        .message("Access denied reading dataset %s.", key)
        .build(logger);
    }
  }
}
The underlying storage plugin check:
protected boolean checkPlugin(final String username, final NamespaceKey namespaceKey, final DatasetConfig config, final SourceConfig sourceConfig) {
  return plugin.get().hasAccessPermission(username, namespaceKey, config);
}
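PermissionCheckCache itself is basically a per-source cache of these boolean results, so repeated lookups do not hit the source on every query. A rough, hypothetical sketch of that idea with a Guava cache (the TTL, key format and method shape here are made up, not the actual Dremio implementation):
// Hypothetical stand-in for what PermissionCheckCache does: cache the plugin check
// per (user, dataset) for a short time and fall back to checkPlugin on a miss.
private final Cache<String, Boolean> permissions = CacheBuilder.newBuilder()
    .expireAfterWrite(60, TimeUnit.SECONDS) // made-up TTL; Dremio's is configurable
    .maximumSize(10_000)
    .build();

public boolean hasAccess(String username, NamespaceKey key, DatasetConfig datasetConfig, SourceConfig sourceConfig) {
  try {
    return permissions.get(username + ":" + key, () -> checkPlugin(username, key, datasetConfig, sourceConfig));
  } catch (ExecutionException e) {
    throw new RuntimeException("Permission check failed for " + key, e);
  }
}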
References
https://docs.dremio.com/software/security/rbac/rbac-structure/
sabot/kernel/src/main/java/com/dremio/exec/catalog/PrivilegeCatalog.java
sabot/kernel/src/main/java/com/dremio/exec/store/StoragePlugin.java
services/namespace/src/main/proto/dataset.proto
common/src/main/java/com/dremio/common/exceptions/UserException.java
sabot/kernel/src/main/java/com/dremio/exec/catalog/DatasetManager.java
sabot/kernel/src/main/java/com/dremio/exec/catalog/ManagedStoragePlugin.java
sabot/kernel/src/main/java/com/dremio/exec/catalog/PermissionCheckCache.java