Hadoop KMS with HDFS keystore: No FileSystem for scheme "hdfs"
I have been trying to configure Hadoop KMS to use HDFS as its key provider. Following the Hadoop documentation, I added the following property to my kms-site.xml:
<property>
  <name>hadoop.kms.key.provider.uri</name>
  <value>jceks://hdfs@nn1.example.com/kms/test.jceks</value>
  <description>
    URI of the backing KeyProvider for the KMS.
  </description>
</property>
That path exists in HDFS, and I expected the KMS to create the file test.jceks there for its keystore. However, the KMS fails to start with the following error:
ERROR: Hadoop KMS could not be started
REASON: org.apache.hadoop.fs.UnsupportedFileSystemException: No FileSystem for scheme "hdfs"
Stacktrace:
---------------------------------------------------
org.apache.hadoop.fs.UnsupportedFileSystemException: No FileSystem for scheme "hdfs"
at org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:3220)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3240)
at org.apache.hadoop.fs.FileSystem.access0(FileSystem.java:121)
at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3291)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3259)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:470)
at org.apache.hadoop.fs.Path.getFileSystem(Path.java:356)
at org.apache.hadoop.crypto.key.JavaKeyStoreProvider.<init>(JavaKeyStoreProvider.java:132)
at org.apache.hadoop.crypto.key.JavaKeyStoreProvider.<init>(JavaKeyStoreProvider.java:88)
at org.apache.hadoop.crypto.key.JavaKeyStoreProvider$Factory.createProvider(JavaKeyStoreProvider.java:660)
at org.apache.hadoop.crypto.key.KeyProviderFactory.get(KeyProviderFactory.java:96)
at org.apache.hadoop.crypto.key.kms.server.KMSWebApp.contextInitialized(KMSWebApp.java:187)
at org.apache.catalina.core.StandardContext.listenerStart(StandardContext.java:4276)
at org.apache.catalina.core.StandardContext.start(StandardContext.java:4779)
at org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:803)
at org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:780)
at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:583)
at org.apache.catalina.startup.HostConfig.deployDirectory(HostConfig.java:1080)
at org.apache.catalina.startup.HostConfig.deployDirectories(HostConfig.java:1003)
at org.apache.catalina.startup.HostConfig.deployApps(HostConfig.java:507)
at org.apache.catalina.startup.HostConfig.start(HostConfig.java:1322)
at org.apache.catalina.startup.HostConfig.lifecycleEvent(HostConfig.java:325)
at org.apache.catalina.util.LifecycleSupport.fireLifecycleEvent(LifecycleSupport.java:142)
at org.apache.catalina.core.ContainerBase.start(ContainerBase.java:1069)
at org.apache.catalina.core.StandardHost.start(StandardHost.java:822)
at org.apache.catalina.core.ContainerBase.start(ContainerBase.java:1061)
at org.apache.catalina.core.StandardEngine.start(StandardEngine.java:463)
at org.apache.catalina.core.StandardService.start(StandardService.java:525)
at org.apache.catalina.core.StandardServer.start(StandardServer.java:761)
at org.apache.catalina.startup.Catalina.start(Catalina.java:595)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.catalina.startup.Bootstrap.start(Bootstrap.java:289)
at org.apache.catalina.startup.Bootstrap.main(Bootstrap.java:414)
As far as I can tell, the error means that no FileSystem implementation is registered for the "hdfs" scheme. I have looked this error up, but the results always point to a missing hdfs-client jar after an upgrade, which does not apply here (this is a fresh install). I am using Hadoop 2.7.2.
Thanks for your help!
I asked the same question in Hadoop's JIRA issue tracker here. As the user Wei-Chiu Chuang pointed out, keeping the keystore in HDFS is not a valid use case. The KMS cannot use HDFS as its backing store, because every HDFS client file access would then go through a loop: HDFS NameNode --> KMS --> HDFS NameNode --> KMS ...
Therefore, a file-based KMS can only use a keystore file on the local filesystem.
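In that case, the provider URI should use the file scheme and point at a local path instead. A minimal sketch of what the kms-site.xml entry could look like — the /var/lib/kms path below is a hypothetical example, not from the original post:

<property>
  <name>hadoop.kms.key.provider.uri</name>
  <value>jceks://file@/var/lib/kms/kms.jceks</value>
  <description>
    URI of the backing KeyProvider for the KMS, stored on the local filesystem.
  </description>
</property>

The KMS process needs read/write permission on the parent directory so it can create and update the .jceks file.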